I chat with Joshua Rogers about a blog post he wrote as well as some bugs he submitted to the curl project. Joshua explains how he went searching for some AI tools to help find security bugs, and found out they can work, if you’re a competent human. We discuss the challenges of finding effective tools, the importance of human oversight in triaging vulnerabilities, and how to submit those bugs to open source projects responsibly. It’s a very sane and realistic conversation about what AI tools can and can’t do, and how humans should be interacting with these things.

This episode is also available as a podcast, search for “Open Source Security” on your favorite podcast player.

Episode Transcript

Josh Bressers (00:00) Today, Open Source Security is talking to Joshua Rogers, executive senior principal lead head of security engineering. Awesome title, Joshua. I love it, man. Welcome to the show.

Joshua (00:10) Good to be here, thanks. The title, I’m only building it as we go along. So next time we talk, I’ll actually have at least one more adjective to add to it. Yeah.

Josh Bressers (00:19) Perfect.

I’m going to hold you to that. Okay. So you are here. I am like, I’m so excited for this conversation today. Joshua’s here because Daniel Stenberg, the creator of curl, he’s been on the show a couple of times. He put a toot up on Mastodon that immediately caught my attention. And I think literally like less than two minutes after I read this, I sent Joshua a message on LinkedIn. But I’m just, I’m just going to read what Daniel sent because I think it’s amazing. He said, Joshua Rogers sent us a massive list of potential issues in curl

that he found using his set of AI-assisted tools, code analyzer style, and it’s all over, mostly smaller bugs, but still bugs. And there could be one or two actual security flaws in there. Actually truly awesome findings. Now, for anyone who doesn’t know, Daniel has not been, we’ll say, the most receptive to AI submissions to the curl project. He has basically a wall of shame. Then he says, I’ve already landed 22 bug fixes thanks to this, and I have over twice that amount of issues left to go through, which is a fricking lot to wade through, perhaps.

Credited, reported Joshua and Serif data, if you want to look for yourself. And then he basically says, you know, this is an example of a competent human using AI tools to solve problems and do good work. So I mean, I don’t even know where to start. Like this is, this is the dream, right? There’ve been, I feel like AI is this hot topic. Everyone’s talking about it, but like 99% of it is just utter rubbish. You are going to tell us about not utter rubbish. So.

I’m just gonna stop talking for a minute and Joshua, I’m gonna let you kind of tell us what you did and let’s talk about what this means because this one feels important to me and I love it.

Joshua (01:59) Sure, so first of all, you know, what better compliment to receive in life from anyone than being called a competent human. It’s something anybody could only dream of in the world. Obviously having Daniel say it only helps slightly, but basically what I did was I just checked what was on the market for AI-assisted software source code analyzers

and I just used them. I didn’t do anything technologically interesting. Basically, I just found all of these startups that are in the security space and just ran them against some open source software. Yeah.

Josh Bressers (02:41) Okay, okay, hold on. I’m gonna stop

you there for a moment, because you have a blog post that I will put a link to in the show notes. Let me pull this up, because the title’s kind of long. It’s called Hacking with AI SASTs, and I’m just gonna leave it there, because there’s a whole bunch of other stuff too, whatever. Link in the show notes. Go read it. Seriously, it’s good. But you mentioned in the beginning of this post that just finding these tools was not simple, because Google is just basically completely SEO’d to death any time you have LLMs, right?

Joshua (03:08) Yeah, so I think it’s ridiculous that, you know, everybody’s looking at AI now, everyone wants AI to replace, you know, workers, AI to assist workers, whatever. Certainly in the security industry, everyone wants AI to, well, help solve problems. So looking for AI-assisted tools,

you’d expect you’d just Google and you’d get all of these products that you can just throw money at. And I have that problem in life, whether it be websites where the pay form doesn’t work or websites that don’t actually tell me what they do, giving away money is really difficult sometimes. And in this case, for AI-assisted software source code analyzers, just finding the products was difficult enough.

I was lucky enough to eventually find some products, not on Google. For example, in terms of how bad it is on Google, if you search something like AI slash LLM source code vulnerability scanner, or basically any variation of this,

you just get nothing. You get blog posts, you get university papers, you get products that have existed for at least 10 years where they’ve just added AI to the title, you know, because it’s AI everything these days, but they’re not real products that have been built and designed and architected with AI in mind from the start. The way I was able to find something eventually was literally on Reddit.

Josh Bressers (04:27) Yeah, yeah.

Joshua (04:41) Posts that people were talking about. I use this this one is something My friend also found some by searching crunch base, which okay, maybe that is how people find startups But I would still expect you know, at least on within the ten pages of Google that I searched for various keywords and so on you would you would find something or Since you know as we’ll discuss later on these products are so good. You would expect someone would be advertising how good they are on

open source code, or code that really matters. But apparently not. So that’s basically what I did. As I mentioned, just found the tools, used them, tried to give back to the community. I’m an open source advocate, absolutely. So it was sort of natural just to test what I use, which, you know, curl is absolutely something everyone uses in their world. So it came very, very naturally.

Josh Bressers (05:36) Yes.

Okay, so I want to understand

the process though. So you found these tools, there’s a list in the blog post, we won’t go over them. All I remember from the list is there’s one at the bottom called ZeroPath that you seem to talk about the most. So I have a suspicion that’s the good one of them all. So you have these tools, were these like free trials, did you pay money, what did that look like?

Joshua (05:58) Yeah, so they were all free trials. Basically, the whole idea was I work for a company and we were thinking about looking for some AI-assisted code reviewer, because we do code reviews ourselves and we thought this could definitely assist us in what we do, reduce some of our workload so we can do more interesting things or just be more productive in general. This…

was not really well planned or anything. We originally had one product and then that kind of worked a little bit. That was called Almanax. It worked really well for finding malicious code

in npm packages. But then I sort of went off and said, okay, there surely can’t only be one product, and then I found another one. We tried it, it was better. And then again I sort of randomly found a third one called ZeroPath, and it was way better. And that’s sort of what Daniel received, he received the output of this tool.

Josh Bressers (07:02) Okay, what does that look like? You fed it the curl source code and then what happens? Well, like there’s obviously gonna be false positives. You’ve got to like, how did you weed through the results? Cause obviously if you just sent Daniel a raw list that had false positives in it, you would have made his list of shame. So I have a suspicion that is not what you did.

Joshua (07:21) Yeah, so originally, and what I’ve been doing for other projects, projects like literally just projects that I use the code, ⁓ like some other stuff, ⁓ what I did originally was, as you said, just upload the source code, you know, an hour later, it just throws out tens or hundreds of bugs slash vulnerabilities. And then originally I had been triaging those myself and submitting pull requests. ⁓

for fixes for these issues. And these were things that looked like security vulnerabilities, but also just general bugs. The first report, which Daniel sort of publicized as being the most interesting at the time, was some issue in Kerberos authentication that was like a small vulnerability. And then while we were working through it, we realized that

like the whole code literally did not work. So if anyone had actually tried to use that functionality in the past year and a half, it would not have worked and they would have reported it. So, you know, first-step triage from my side, and then I reported it, and then we sort of worked together to work out what are we going to do about it. That became unmanageable eventually when, like, I started reporting 15 of them and Daniel just sent me an email and said, you know,

where are these findings from? Because they were in completely different functionalities, they were completely different bugs, they were things that he hadn’t seen or touched in years and years and years. So it wasn’t like, well, it was pretty clear, I guess, it wasn’t a human just reviewing the source code because it was wildly different functionality and parts of the code. Eventually, I told him that I’m testing out these source code review tools.

And he just said, well, this looks great. Can you just send me all the results and I’ll do the triage? Because obviously, you know, it is his code. He can understand it much better than I can. I’ve certainly had troubles with that while triaging issues with other code bases, where I’m reading the issue and I literally do not understand what it’s telling me. I understand every single word, but together as a sentence or as a description, I’m like…

This is another language to me. I found that using AI a second time has been really good. So basically feeding the issue description that these tools give me into ChatGPT and saying, what does this mean? And this sort of clears things up for me and I can actually say whether it’s even worth investigating further and triaging further, or just saying, okay, this is not a problem,

it’s not worth my time. For example, I was looking at the forwarding proxy, HTTP proxy, Squid, and, you know, among literally 200 valid bugs, there were some invalid bugs. One of which was something along the lines of server-side request forgery, where an attacker can access an external link through the system. And like, yeah, that’s the functionality of Squid. That’s, that’s what it

proxies. I can at least read that and say, okay, no. So, you know, it really requires a human to take a look at these. Copying and pasting them probably would have worked maybe 50% of the time. It just depends on the tool. It seems to depend on luck as well in terms of false positives. It depends on readability, like, in general, of the code as well.

So false positives surely exist, but they’re not of the low quality that Daniel has seen before, where you look at them and you actually have to wonder if it is just one person deliberately making very, very obvious false positive reports just to troll. Because, like, if you step back for a moment and you think about it, that’s a great trolling opportunity, to deliberately send these absolute bullshit

reports that like you just read and you just want to close the laptop and say, okay, enough for today, I can’t do it. That’s enough. But yeah, like quite high quality results. Just triaging is sometimes difficult, but in the end, clearly worth it.

Josh Bressers (11:52) Okay, there’s a lot to unpack in there. I want to start out by mentioning something I read in your blog post, where you said when you struggled to understand some of the descriptions and what the code was doing and things like that, you said you would look at the proposed fix. Was this the tool, the security tool, also proposing that fix, or were you asking like ChatGPT or something else to write the fix based on the description?

Joshua (12:15) Yeah, so most of these tools offer this auto-fix functionality where they will propose a fix, and that’s where I was mostly getting these patches and looking at them. Sometimes the tools wouldn’t provide it just because some system says that they can’t do it or they don’t know how to fix it or something like this,

Josh Bressers (12:21) Okay.

Joshua (12:37) in which case, you know, I just forwarded it to chat GPT and asked it, one, is it a real problem? And two, how would you fix it? ⁓ Give a minimal diff and whether that diff is actually relevant or not is irrelevant to me. It’s just like, I’m trying to read the code and understand what’s going on and this is a good way to work it out.

Josh Bressers (12:54) Yeah.

Right, right. Okay, that makes sense. And out of curiosity, how many of those patches did you forward along versus how many did you as the human, like, create the patch you sent?

Joshua (13:11) I don’t think I forwarded any patches verbatim. They, some of them were quite good. It’s just, you know, I care about quality and I care about doing the right thing by the community and so on. So sending some patch that doesn’t follow the format that is used throughout the code base, or some set of rules or something like this…

Josh Bressers (13:17) Okay.

Right.

Joshua (13:40) At least sometimes in life you have to try slightly. You can’t forward it all to AI, not yet at least. You might be able to one day ask the AI to make sure that the code format is proper. Some projects have linting or automatic formatting, something like this. But I found that some small massaging, let’s say, of the patches was required in all of the cases.

Josh Bressers (14:06) Sure, I mean that makes sense. I don’t think that’s a surprise to anyone. Okay, you mentioned false positives. Did you say you got about a 50% false positive rate out of these tools?

Joshua (14:16) So it depended on the tool. Some of them were really good and there was maybe like 10, 20% false positives. Some of them, there was 50%. It was really dependent on the code base. And my understanding is that that is normal. Like, if you think about AI and LLMs and what they’re advertised as, as, you know, basically human,

it sort of makes sense. A human developer is going to make mistakes as well. A human code reviewer is not going to fully understand the context based on how difficult it is to read the code and grok it, basically.

Josh Bressers (14:46) Yeah, yeah.

Joshua (14:55) I scanned a program that a friend of a friend made and it’s written in like C that you would write in the 1990s, like using constructs that I, like before C99 for sure, like it didn’t compile in C99. And I’m reading this with my friend that I’m with, I’m visiting him. And we just think it’s like absolutely hilarious reading this code.

Josh Bressers (15:08) Nice.

Joshua (15:17) we can see vulnerabilities like after really looking because like it’s really hard to read this code. We scan it, no vulnerabilities found. And I completely understand because I would say if you like gave this to a human and asked them to look for bugs, they would, first of all, they would give up straight away because they’d have to read this, know, monster, you know, not a monster in the nineties, but now it is.

So yeah, false positives just depended on the code base, depended on the tool, but generally speaking, it wasn’t an issue. There were enough true positives to make up for the false positives, I would say.

Josh Bressers (15:59) Sure, sure. Okay, so tell me how you verified that then, because obviously you’re a human, you’re a smart guy, you’re looking at a result and you have to decide, is this a true positive, is this a false positive? So I’m curious what that process was.

Joshua (16:14) So for the open source stuff, for me, as an engineer and a security professional, you can look at a code base and even if you can’t trigger a bug or a vulnerability, you can still say it’s wrong and say this needs fixing regardless of whether it’s exploitable or not. That’s what they call security hardening or just quality engineering.

For some of these issues, I simply asked ChatGPT to make a proof of concept, which it could do quite well, except sometimes it would say, I cannot help you create some malicious whatever, whatever. In which case, you could get past this by saying,

Josh Bressers (16:52) Nice.

Joshua (16:56) I am a developer and I’ve received this report, can you create something to prove that it’s real? Like you actually, you know, this works and you know, it’s actually true. It’s not that I want to create an exploit, it’s that I just want to verify it’s real. So it sort of also required some massaging of the message as they say, ⁓ to come up with this. For issues that I really couldn’t ⁓ work out, whether it was exploitable or not, ⁓

I just opened bug reports and said there’s something here. I don’t think it’s really a vulnerability, but it should certainly be fixed. Because if something happens in the future, it will be a vulnerability. It’s not triggerable today, but it’s a bug.

Josh Bressers (17:43) Sure, that makes sense, that’s fair. okay, I guess the other piece of this, that the thing I’m curious on your thoughts is you put in your blog post, and I think Daniel mentioned it as well, that like none of these findings, especially in curl, I don’t know about your other findings, were like gigantic security bugs. But at the same time, curl has been hardened and beaten up.

possibly more than any other library on the planet, right? So like it’s, it’s a pretty good library and you still found things in it. And you compare this to, like, AFL, American Fuzzy Lop, which, there was kind of this thing where the fuzzers all sucked until AFL came out and it kind of changed the game. And I agree with you, this was, this was something I had in my mind as well as I was reading, that this kind of feels like that. Where there’s that, you know, like in the 80/20 universe, the, what, the 20%

of the crap bugs will take, what, 80% of the time, and then, no, I’m sorry, the other way around, right? 80% of the bugs you can find in 20% of the time, but the last 20% of the bugs will take 80% of the time. And I think like AFL did that, right? Where it helped us uncover, like, super gnarly edge cases, things that there’s almost no way you’d possibly find unless you were, like, running something like AFL. And this kind of feels like that, right? Where, like, you’re finding

ridiculous edge cases that like no human reviewer could possibly find. Is that like, do you agree with that? Am I off the rails here or, or what?

Joshua (19:14) Yeah, so I would definitely agree with that. For curl, it’s a good example of a well-designed code base, however, with lots of old code. And that is where the majority of these findings were. These are clearly, like, this code that hasn’t worked for a year and a half, it’s really old code that nobody uses.

Josh Bressers (19:24) Yeah, yeah. Yes.

Joshua (19:36) it’s like the curl code base I think is a good litmus test because all of the findings were you know not critical let’s say they were off by one bugs ⁓ logical flaws that probably have like a slight security ⁓ implication but not really ⁓

They are really just edge cases, as you mentioned, similar to how AFL was able to find, or is still able to find, such problems. In other code bases, there were critical findings. Those probably… I mean, there are many reasons that bugs and vulnerabilities

are introduced to code bases, many reasons they’re not found. Sudo, for example, there were critical vulnerabilities found in that, but not in the main sudo executable, but in some logging feature that they had that I’d never heard of. So as it turns out, some people like to log when sudo is used, to an external server. And sudo, like the main package for this, actually provides this functionality.

And it ends up, like, it has a lot of vulnerabilities in it that are now patched. So I think these AI tools, they just simply give you the eyes that you don’t have. Maybe you can get a thousand people to go review this whole code base and all of the functionality, but who has money for that? Who has time for that? Why would you not just put it into this tool, wait one hour, and literally be fed a bunch of bugs and vulnerabilities that you can go triage?

Other code bases, some random things that I have collected over time, I scanned and it found legitimate remote code execution bugs, and I was like, wow, this is awesome. Those are certainly code bases that people just don’t look at because they have meaning to me, for example. Like old work at Opera, for example, there’s some open source code that I scanned and there was a remote code execution. I’m like…

How did nobody see this before? Like, we had our security team, myself included, look at this, and we’re like, and I’m like, wow, how did I miss that?

Josh Bressers (21:55) That’s like

so embarrassing, isn’t it? But it’s like, it feels like it might as well just have a comment above it that’s like, remote code execution here. And it’s like, I’m so…

Joshua (22:03) Exactly,

yeah, exactly.

Josh Bressers (22:06) Right, right. Okay, okay. So I have one last question for you and then I want to shift into maybe some thoughts about the future. So one of the things you also commented on in your blog post is that the severities reported by these tools apparently are, we’ll say, all over the board, where it sounds like you did not agree with many of the severities that came out of this.

Joshua (22:31) Yeah, I’m not really sure how these systems rank severity and priority and all of this stuff, even traditional SAST, like these static application security testing tools. I don’t really know how they work in terms of classification. I found critical vulnerabilities that were reported as informational, and bugs that were not even real bugs reported as critical.

Josh Bressers (22:40) That’s fair.

Joshua (22:59) I think this is something that in a way could be described as a good thing that it doesn’t work because it means the companies are focusing more on the actual tool than this stuff that doesn’t really matter, AKA stuff I don’t really care about. I just care if the tool actually finds bugs. I don’t care what it says about them as long as I can see them. Yeah.

Josh Bressers (23:23) It’s probably a random number generator.

Joshua (23:26) Sometimes

that’s really what it felt like. It took the title and just extracted some bits, and said, what’s the temperature outside, and decided that’s the severity. So I sort of ignored that when looking at these results. And because there weren’t so many results, maximum 200, 250

for a large, really large code base, I was able to actually just go through all the issues anyway and decide myself, like, is it worth investigating or not?

Josh Bressers (24:03) Right, I mean, that’s a competent human aspect of this, right? Versus I’m just going to dump a mess on top of you. Good luck, open source project.

Joshua (24:13) Exactly.

Josh Bressers (24:15) Okay. So kind of leading off of that now, and here’s the thing I genuinely do not know how to even begin to discuss maybe is so you have done some research, you found a way to generate quite frankly, a large number of bug reports in your case. You also put some time in. So this isn’t one of those just we’re going to, you know, turn the handle and, and feed whatever comes out to whoever’s there. So I feel like.

these tools, while they almost certainly have the ability, in the hands of a competent human, to change the quality level of source code, they also, I’m worried, run the risk of overwhelming open source developers. Because these are people that already, I mean, I’ve spoken at length about how, like, just overworked and underappreciated open source developers tend to be. And now if we have security researchers sending, I mean, you sent curl what, like 60-ish bug reports, which,

like, that’s Daniel’s job. So, and I know he, like, is excited for things like this, so he’s working on it. But like, if I’m a guy who spends an hour a week on my open source project and someone sends me 60 bugs, I’m going to be like, F this, right? Like, I’m curious on your thoughts on, like, what is, like, how do we do this in a way that doesn’t just, like, overwhelm and burn out open source projects? Like, how do we make this sustainable? Because this is really cool, but it’s also scary.

Joshua (25:41) Yep, that’s an issue that, as you mentioned, isn’t new, it isn’t solved. I have done something like this before, when I audited Squid, this HTTP proxy, about four years ago or three years ago, where I manually did some code review and some fuzzing of it, and I came up with about 50 issues to report, and I reported them all at the same time. And these guys were very appreciative

Joshua (26:09) of my work, but at the same time they said like, yeah, we are totally unable to fix like anything right now because we don’t have time, we don’t have money, we are totally understaffed. I, at the time, then offered some patches. I said, okay, let’s work through them together. There is no rush to fix these things. They’ve existed forever. These are not new vulnerabilities.

I don’t see any reason to panic or do things really quickly in this case. And it took two and a half years to get through all of these issues. The alternative was they don’t get fixed. I think thinking in the long term like that is important. In an ideal world,

these tools and these companies are made available to these open source projects, possibly for free. They’re generally quite cheap now, so I would see it as a sort of advertising opportunity for these companies, just to give access to these open source projects for free. And that way, you don’t get someone

using the tool and then just sending them all the results in one go, because, you know, they already have access to the results, or all they can send them is the ones that aren’t… So it’s an unsolved problem, an unsolved and difficult problem that probably won’t get solved. Unfortunately, it just depends on the person doing the reporting, how much they understand the

environment and the community and the infrastructure that makes up the open source community.

Josh Bressers (27:54) Yeah, exactly. I feel like what you just described you did with Squid is exactly the right way people should approach something like this, where you have to be willing to get dirty, you have to be willing to do the work, and you have to be patient, because if it’s an understaffed project, like, yeah, it’s gonna take a while. And I believe that 50 bugs in two and a half years for an understaffed open source project is probably about right. And security bugs are often difficult and weird, right? This isn’t like…

simple bug fixes, quite often. Okay. Okay. All right, Joshua. I could talk to you probably for a couple of days, but I won’t. So let’s just kind of end this one. I feel like you’ve given everyone a lot of information. There’ll be a link to your blog post, and there’s a ton of interesting content there. Like, if I’m someone, I’m a security researcher, I’m interested in kind of getting a start here, getting a look at some of these tools, what is your suggestion for the best way to kind of get going?

and dip my toe in this space.

Joshua (28:58) ⁓ So there are three tools that I sort of recommend in my blog that are let’s say the best that is Almanacs, Corgear and ZeroPath. They all have ⁓

fairly low pricing that anyone can just sign up for. I think they all have free trials as well. Like you can scan one repository or something like this. And you can scan 25 repositories for, I don’t know, $200 per month, which is like ridiculously low. Imagine having some project you really care about and you find 200 bugs and all you paid was, you know,

$9 or something like that for the project comparatively.

Josh Bressers (29:44) Right, right. Well, and I’m

going to interrupt for a moment and just mention, like, the historical, we’ll say traditional, SAST tooling is often measured in hundreds of thousands to millions of dollars in terms of pricing. So this, like, $200 is ridiculous. That’s free compared to that other stuff.

Joshua (30:00) Practically, yeah, that’s how I’ve sold it to some friends. said like, we could finish this lunch and we could have $200 saved to just buy this and have a lot more fun than just be at lunch at a nice restaurant. So I would say just try it. Just sign up. Otherwise, there are some open source.

like, small projects, I would say, that people have come up with, and they’re available on GitHub. You basically just connect some OpenAI key and it’s like 150 lines of Python or something like this. It’s not anything professional, but you sort of get the idea of how the whole system works and what it’s trying to do and the way it’s trying to solve this problem. It’s not very good, but you can at least see what’s going on under the hood with these

hobby projects, and they do actually work. Not very well, but, you know, they do something.
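
To give a sense of what those hobby scanners roughly look like, here’s a toy sketch in the same spirit, assuming the OpenAI Python SDK. The prompt, model name, and file filtering are made up for illustration rather than taken from any particular project, and there is none of the triage, context-building, or cross-file analysis the commercial tools do.

```python
# A toy "connect an OpenAI key, a few dozen lines of Python" scanner:
# walk a repository, send each source file to an LLM, and print whatever
# potential issues it reports. Illustrative only; no real triage here.
import os
import sys
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
EXTENSIONS = (".c", ".h", ".py", ".js", ".go")
MAX_CHARS = 40_000  # truncate huge files; a toy scanner has no chunking strategy

def scan_file(path: str) -> str:
    """Ask the model to review a single source file for potential security bugs."""
    with open(path, "r", errors="ignore") as f:
        source = f.read()[:MAX_CHARS]
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": (
                "Review the following source file for security bugs "
                "(memory safety, injection, logic errors). List each potential "
                "issue with the line it occurs on, or say 'no issues found'.\n\n"
                f"File: {path}\n\n{source}"
            ),
        }],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    repo = sys.argv[1] if len(sys.argv) > 1 else "."
    for root, _dirs, files in os.walk(repo):
        for name in files:
            if name.endswith(EXTENSIONS):
                path = os.path.join(root, name)
                print(f"== {path} ==")
                print(scan_file(path))
```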

Josh Bressers (31:00) Well, I mean, I’m shocked that a team of engineers at a startup could outperform a random open source project, right? So, well, not always, but okay. my goodness. I don’t even know what to say about this. Like I’m equal parts ecstatic and terrified because the like, the AI hater deep inside of me is like, this is nonsense. But you’ve presented a very reasonable description. It is clear.

Joshua (31:06) Indeed.

Josh Bressers (31:29) Like you’re not just some chuckle head that’s like shoveling something into AI and just dumping what comes out into, you know, GitHub issues or Bugzilla or whatever. So this, but at the same time, like, man, if this works, which it looks like it does, the, the overwhelming work that could come out of the other side of it, while I understand is important and probably necessary is holy crap. Like I,

I don’t even know what to say, man. Like, well done, I guess. You know, thank you for sharing. Thank you for talking to me. Thank you for writing this down in a coherent way that doesn’t come off as, like, a crazy person. Like, yeah, it’s awesome.

Joshua (32:11) My pleasure. As I said at the beginning of our talk today, it’s a way to give back to the community. At the end of the day, I didn’t make these tools, I’m just using them. So why not get others involved? Hopefully others that are capable.

Josh Bressers (32:20) Awesome.

We should

also add, I have a suspicion this is better marketing for these tools than the tools have been doing themselves, because no one can find them otherwise, but that’s really cool. All right, well, Joshua, I just want to thank you for the time. This has been amazing, and whatever you do next, we’ll have to have you come back and talk about it, because I’m excited to see where you go.

Joshua (32:37) Absolutely.

Sounds great.

Josh Bressers (32:48) Awesome. Thank you, sir. Until next time.

Joshua (32:50) Thanks so much. See you next time.