Josh chats with Brad Axen from Block about his creation Goose, as well as the Agentic AI Foundation (AAIF). I am quite skeptical of many AI claims, but Brad has a very pragmatic view of where things are today and where we might see them head. Donating Goose to the AAIF is great news, as is seeing MCP and AGENTS.md in the foundation. We discuss how to deal with the problem of raising up junior developers, the challenges of AI PRs, and some thoughts on how to get started if you’re interested in AI development.
Episode Links
This episode is also available as a podcast, search for “Open Source Security” on your favorite podcast player.
Episode Transcript
Josh Bressers (00:00) Today, Open Source Security is talking to Brad Axen, the principal engineer working on building AI agents at Block. And Brad is one of the founding members of the Agentic AI Foundation. I hope I got that right. And you are the creator of the Goose AI application. So Brad, I’m really excited to have you here today, because I’ll just lead off for everyone listening: I am a notorious AI skeptic is what I would say.
And Brad has been brave enough and kind enough and generous enough to be willing to talk to me. So Brad, I’ll let you tell us who you are and then we’ll kind of go from there, ’cause I think this is going to be a lovely, lovely discussion.
Brad (00:35) Yeah, great to meet you, Josh. And I actually really enjoy talking to the AI skeptics, because I think that’s part of the job, right? When you’re building AI agents, it’s my job to try to find value. And if there’s enough value there, I think it starts to speak for itself. So the people who are skeptical are telling me that I need to do better, and I appreciate that. But yeah, a little bit more about me.
I’ve been at Block for nine years and lately have been helping build our AI-driven products. So there are some of those out in the wild, like ManagerBot and MoneyBot. But I also work on tools for developers, and this open source tool Goose, which is kind of for everyone. I’d be happy to talk more about that as well.
Josh Bressers (01:18) Yeah. Yeah. So let’s start with Goose, because I think it’s maybe the most amusing of all the things. So I will try to describe this and you can correct all the things I get wrong. So the kind of magic agentic AI coding assistant, I’m not sure exactly what to call them, is Goose. I don’t know if it was the first, but it’s definitely one of the earliest. And it can run in the command line, and you have a GUI, which is nice because sometimes pretty things are pretty. And the idea, I guess, is you describe what you want it to build, and then it goes and builds code and tries to, you know, do all the things you describe. I lack the vocabulary to make sense here, maybe, but the idea is it can, like, edit code. It can grep for things. You can ask it to analyze code. The cool thing I like about Goose is it can work with kind of any backend. Like, for example, I have a Framework desktop, which is where I’m trying to learn all this stuff, ’cause I can’t be a proper skeptic if I don’t know what the hell I’m talking about. So I can just point Goose at the Framework desktop, and I won’t say it’s flawless, but it works. I’ll put “works” in quotes, because there are many caveats to this technology. But okay, what did I get wrong and what did I miss?
Brad (02:34) No, that’s great actually, and I love a Framework computer myself. So I think the only thing I would say there is that most people are using Goose exactly the way you said, which is to develop software. And you don’t necessarily have to be an engineer to do that, but you’re asking it to build something for you, and then you try it out, and you have a conversation until it’s doing what you want it to do. I will say, though, that we’ve made it general purpose in the sense that you can connect it to your Google Drive or your email, and you can use it just to help you write or to catch up. At Block, for example, we have around 8,000 people using Goose, and a lot of them don’t build anything with it. They’re just using it to catch up on their day or to get organized, all that kind of thing.
Josh Bressers (03:24) Interesting. I guess that makes sense. And that’s not something I’d really thought of, but that’s cool. Okay. So let’s start by asking: we just talked about agentic coding. Like, what does that even mean?
Brad (03:39) Yeah, so, and back to being skeptical, this is, I think, the place to be the least skeptical anymore, which is that these agents are getting really good at writing software, so much so that I think even experts in the field are happy to use them every day. You see that all over the place. But what I think is so compelling about this is that you don’t need to know anything about writing code to just talk to it and get something to come out the other side. I see the hesitation there. Yeah.
Josh Bressers (04:15) Okay. Okay. I
want to stop you there for just a moment. And here’s my experience at this. And maybe you can clarify or agree or disagree is in my usage of the, the, agents, as well as talking to other people, I know it, I feel like there’s this disconnect, right? Where there’s all this marketing, there’s all this hype that’s like, just
Say what you want, like just type in, build me a web browser and it’ll do it and it’s magic and everyone yay. But what I found is you have to basically be a software architect with these tools. if you just, if you give it, you know, not detailed instructions, if you’re vague on something, sometimes they go off the rails. Sometimes it’ll build you a thing that, that functionally works. But I found more often than not, they don’t.
And then it feels like when I try to get them to add the minor detail or something, then it like just the train falls off the bridge and it’s like, what have I just done? And so I will let you address that, but that is my current experience with these tools.
Brad (05:22) I really like the way you said that, that you have to be an architect. I think that’s the piece, because I definitely don’t want to imply, and maybe I already did, that it’s easy, because you’re right. If you want to get something high quality out on the other side, you’re going to spend hours. It’s just that the hours you’re spending, they’re not programming. They’re not what we would call traditional engineering.

So instead of spending those hours doing traditional programming, or learning a language, you’re kind of in the loop, asking it about what it already knows. And it is a weird way to say that, but it has all of the programming knowledge in there.

And it’s kind of the human’s job to be like, okay, I’m having this problem or I want to see this result. What do you recommend is the best way to do this? And then you say, okay, great, that sounds like a good recommendation. I don’t know what you mean, but I believe you. Go ahead and do that. And it’s weird how much that loop matters, of uncovering what the right strategy is and then asking it to implement the strategy. So I think that’s how you actually get that expertise. And it’s very different than learning a programming language yourself, but you very much build this skill of working with the AI to get good results out.
Josh Bressers (06:44) Yes. And I do think it’s easy to fall into the trap of thinking anyone can write anything by just typing in a couple sentences. And it is complicated. Well, software architecture is complicated, right? Whether you’re dealing with humans or an AI or anything, it’s hard to architect and describe this stuff.
Brad (07:05) And I think where we do ourselves a disservice is that, you know, on Twitter, we’re all collectively obsessed with one-shots. Like, I just wrote “I want a web browser” and then I got one out on the other side. And that’s a one-in-a-hundred situation, right? And then you get it and you try to make it do one more feature, and like you said, the whole thing stops working. So I definitely don’t want to convey that as the expected experience with any of these tools. But what is really cool, and I feel this myself, is that as a person who’s very comfortable now working with AI, but who has never done any app development or web development myself (I’m more of an ML person), I can go ask for and build desktop apps or new websites. And they’re of a quality that I never could have accomplished personally. And I never had to learn it, I just had to know how to work with the AI.
Josh Bressers (07:57) So I’ll agree with that. And I just had a discussion the other day with someone I work with about that. When I ask Goose, and specifically I’m working with Goose more often than not here, I will ask it to build me something. For example, I’ve been trying to make it do some work with the CVE vulnerability data. I would say with writing the descriptions and just the iterations of back and forth, plus being on a Framework desktop, which I know isn’t running the best models in the world. If I did this with, you know, Claude or OpenAI or something, I know it’d work better, but that’s not the point, right? I want to learn and I want to fail a lot. That’s what I’m doing this for. So for backend things, right, like working with the JSON data, serving the data over a REST API, things like that, I’m probably actually slower with my iteration, trying to go back and forth. But for building, like, the web UI on the other end, I’m infinitely more productive, because I literally cannot do that. Right, like that.
Brad (08:53) Like, yes.
That’s the part, that’s the skepticism part, where anyone who says, I don’t see the value, I’m like, just do something that you can’t do. And the fact that you can do it at all is such a giveaway that there’s something happening here.
Josh Bressers (09:09) Yeah.
Brad (09:09) And
I really appreciate that as someone who’s trying to branch out into more and more kinds of development myself. So there’s that. I think to your point on the, okay, what about when I’m racing it, talking about actual speed on the stuff where I am an actual expert: for me, I have a lot of experience in Python. So if we’re going to see who’s going to build a backend app faster in Python, until three months ago, I for sure was winning that race. I think with Opus 4.5 specifically, I might be losing now. And I don’t think it was close until Opus, and now I’m very impressed with the quality of that model. I think there are also a couple of others in that family that are nearly that compelling, like Gemini 3, for example. So I think that gap has started to close for me personally, and if we continue to see the kinds of improvements we’re talking about, I bet there will be an open source model that can do that this year. And I would be really excited to try that out.
Josh Bressers (10:12) I’d also be excited to try that out, because, number one, I’m kind of a cheapskate, so I don’t want to pay for tokens. But I also just feel like there’s something enjoyable about doing this kind of thing on my own stuff, if that makes sense.
Brad (10:16) Right.
Totally. Yeah, and I think…
I’m actually getting tempted to get some serious hardware. I wanted to see if I can run full-size GLM 4.7, for example, because that is an excellent model and I can’t really get it running at a feasible speed on my Mac. So it’s very tempting. And I think those families, like the largest-size Qwen or GLM models this year, will be the thing that really starts to turn the corner on, oh, this is just faster for me to describe what I want than it is for me to write what I want.
Josh Bressers (11:00) Interesting. Interesting. Okay. So I’m sorry, we kind of got off on a tangent. I want to get back to Goose. So we were talking about Goose, and you are the creator of Goose. And I think the other interesting thing, and we’ll get to this in just a moment, is that Block has donated Goose to the Agentic AI Foundation. I am always a fan of companies donating projects to foundations, for a variety of reasons. But I’ll also say: tell us kind of the history of this. Why did Block decide to make Goose open source? Because this feels like one of those where I bet there were meetings where someone was like, we could sell this, right?
Brad (11:36) Right? Yeah. I actually really appreciate working at a company like Block, because although that subtext was there, I think our whole leadership chain was very enthusiastic about getting it out into open source. And we’ve done that with a few other traditional software projects. So we’re just generally really happy to be contributing back to the community that powers all of tech. Everything all the way to the bottom is open source in our stack, in how we operate the business. So part of it was that we wanted to contribute some of the work that we do back into that community. But I also think we have this belief that building it in the open will eventually make it a better solution. And that is something we can take advantage of, because we use Goose to power our AI products at Block.
And so we’ve had many open source contributors coming to Goose, helping make it better. And now, I think it was hard to say this six months ago, but in the last month we’ve had more code contributed to Goose from outside of Block than inside of Block. And that’s the turning point we were looking for.
Josh Bressers (12:48) That’s fantastic. I mean, that’s the dream, right? That’s the dream of open source. Now, do you know how much code for Goose is written by Goose?
Brad (12:57) It’s a huge fraction, yeah. So this is actually a thing I talk about with the maintainers sometimes. Our code base is AI developed, certainly, but it was developed with old models, so maybe it’s just time to take a sweep, like, let’s refresh all of it with new models, just so that we’re at kind of a higher quality tier. But, and this goes back to what to be skeptical about, I think when you’re running an open source project like Goose, you want to encourage people to use the tool to build the tool.

And then the maintainer role, I think, is a lot about ensuring that we have the quality loop that makes it function, and making sure that the contributions are good. And so that means a lot of the core, most critical parts of Goose, like the agent loop or some of the trickier pieces of the GUI, are written by hand by humans.
Josh Bressers (13:52) Okay. And that’s kind of one of my questions as well. So I know a lot of open source developers, I’ve talked to a lot of open source developers. I mean, in the news right now is that Daniel Stenberg ended the curl bug bounty because of AI slop reports. There are a bunch of open source projects that are just straight up saying no AI contributions, we don’t want to deal with this. And so I’m curious about your thoughts; you’re running a project that does this, so obviously you’re going to have a very different bar than a lot of other projects. But I’m curious, how do you deal with, you know, bad pull requests or terrible PRs? I just said “PR pull requests,” but anyway. Bad bug descriptions, things like that. Because I have no doubt the volume of slop you see is significant, I would think.
Brad (14:39) Yeah, and I think the other thing that is tough is, how do you be a good actor in a community when no one necessarily admits whether it’s AI generated or not? Actually, we strongly encourage people to say when it’s generated by AI. It’s part of our pull request flow, there’s a checkbox for this, because I think it’s a nicer conversation to be able to say, hey, your AI made this mistake. You might want to try adding this to your prompt and come back to us with another submission. Which is, I think, a lot more constructive than, this looks like bad code, you know, what did you do wrong there? So that’s part of it. It’s normalizing the, hey, this is how I used AI to build this software, so that we as the maintainers can say, okay, your AI made these mistakes, and we recommend you try again like this, but we’re happy with where this contribution is going. So that’s one angle. I think talking more about how you use AI to build tools will help. The other is that I think we need better tools for reviewing code. And the job, I think, becomes about code review. This is also true at work, by the way.
We generate a lot of code with AI, and that means the same project might be getting ten times as many lines of code per week, right? So how do you keep up with that, knowing that we care about these products enough that every line has to be read by a person? It’s funny, I think several people on our team over the holiday break spent some heads-down time making better diff viewers. And it’s that kind of tech; I think we just need a really strong, streamlined flow. That’s a short-term thing, but having more time spent building AI tools that help you review code is, I think, going to be a necessary counterbalance to having AI tools that help you generate code.
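As a concrete sketch of the disclosure checkbox Brad describes, a repository’s pull request template can ask contributors to flag AI involvement up front. The wording below is hypothetical, not Goose’s actual template, but a `.github/PULL_REQUEST_TEMPLATE.md` along these lines is the general idea:

```markdown
## AI disclosure

- [ ] Parts of this change were generated by an AI tool (Goose, Claude Code, etc.)
- [ ] If so, I have described the tool and prompting approach in the PR description
- [ ] I (a human) have read and understood every line of this diff
```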
Josh Bressers (16:35) I would agree with that a hundred percent. And this is one of the things I’m working on at the moment: specifically, using LLMs to look for security vulnerabilities. Because as a human, I’ve done more code reviews than I can count, and it is brutal, it is slow, and you miss a ton of stuff. So I’m curious whether the robot can do it. Well, I know the robot is faster. And I guess at the same time, I suspect even if it does kind of a bad job, humans do a bad job at this too. So I’m very interested in this topic as well, yes. I think it’ll be really cool to see where it goes.
Brad (17:11) There’s a, this is a ⁓ bit of a crazier idea, but I’m starting to think about labeling specific files in our code base as this is low risk. We’re going to let this be fully regenerated and fully AI reviewed at any time. ⁓ And it would be, I think, an interesting experiment to see, like, does that code stay high enough quality after two months, five months? And…
Whereas like I mentioned earlier, some of the core stuff or anything touching security or your API tokens or whatever, those things, of course, we would have human review. ⁓ But it’s an interesting thing. Where can you let people experiment as fast as they want to and not really hold them back is an interesting question of balance that I’m trying to figure out.
Josh Bressers (18:01) I mean, I will be interested in watching that. ’Cause I know, just in the world of security in general, I’ll pick on crypto libraries, where, well, cryptography. I’m one of the people for whom crypto means cryptography in my world. But when you’re working on something like, let’s say, OpenSSL, there are
Brad (18:12) Yeah, yeah.
Josh Bressers (18:19) we’ll say layers of importance to these functions, right? And when you’re dealing with the actual cryptography math, that has a very different bar than the things way over here on the API side. So, yeah, I think that probably makes sense.
Okay. Okay. Anyway. All right. So, Goose, right? I want to use that to jump us into the Agentic AI Foundation, which is a new thing from the Linux Foundation. I’m incredibly pro open source foundations for a variety of reasons, because I feel like, especially from a governance perspective for a project like Goose, this makes sure that someday Block doesn’t say, well, we’ve decided Goose isn’t open source anymore, everyone. Haha, too bad. Right? Because now you’ve gifted it into somewhere where that isn’t going to happen. So tell us more about this foundation. I’m excited to hear about it.
Brad (19:10) Yeah, and this is exactly the same reason why we’re excited about it, especially when it comes to protocols, which are the other two initial projects going to the foundation. So the two protocols going along with Goose are agents.md, which is the spec for telling your agent how you want it to work on your code bases, and then Model Context Protocol, MCP, which is the spec for how an agent talks to various systems throughout the world. And MCP, of course, I probably don’t even need to tell anyone about that. It’s taken off like crazy in 2025, it seems like.
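For anyone who hasn’t seen it, agents.md is just a markdown file named `AGENTS.md` checked into a repository, which the agent reads for project-specific instructions. A minimal, made-up example (the file name comes from the spec; the contents here are purely illustrative, not from any real project):

```markdown
# AGENTS.md

## Build and test
- Run `cargo build` to compile and `cargo test` before proposing changes.

## Conventions
- Format code with `rustfmt`; keep functions small and focused.
- Never commit secrets; configuration lives in `config/`.
```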
Josh Bressers (19:48) No, no, I think
we should, it’s worth explaining for sure.
Brad (19:52) OK, great. So MCP is, I think, of the projects being contributed, the one that has had the most coverage, the most success in a lot of ways. to make it a really big pitch, it’s basically the idea is to write a protocol to let agents operate the internet. And you, um,
have a lot of these concerns that agents add, which is like you might want to be holding a continuous connection. You might want to, you certainly need to describe the systems in English, which like normal APIs kind of self-describe a little bit, but usually people skip it. And whereas it’s actually crucial when you have an agent trying to operate it. And so it’s way to, ⁓ you know, and I think it kind of, it’s one of those protocols that builds an ecosystem, right? So like as someone who’s maybe for Square, for example, we have a Square
our APIs have an MCP, so anybody who has an AI agent can come use it to operate their business with Square. Then in reverse, as someone building an agent, I get a lot of value out of people using MCP because Goose can go operate any tool. Like I mentioned earlier, it works for Google Drive or your email, and that’s because someone has written an MCP for Gmail or Google Docs or whatever.
So, yeah, MCP is how agents go communicate with and operate those systems. And Goose, the only non-protocol project, is, I think, an important piece of that initial foundation launch because it makes those other two things concrete.

Because if I just go say, MCP is great, it’s changing the world, and you come back and say, well, how can I use it? I’m like, well, you can’t. It’s a protocol, you have to build something. Goose is the way, I think, we can show people what it looks like to use MCP, and how we ground any changes or improvements in the protocol through real user experiences. So Goose is kind of the concrete project that fits the pieces together. And I think as a foundation, that also gives us a way to talk about what other protocols we need. What aren’t we covering today? Where could we be going? And kind of use Goose as a place to go build some of those integrations before we say, we should be endorsing this as a stable part of the picture of building AI solutions.
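To make “a protocol for agents to operate systems” slightly more concrete: MCP is built on JSON-RPC 2.0, and one of its core methods lets an agent ask a server which tools it offers, each described in plain English so the model can decide what to call. The field names below follow the spec’s `tools/list` method, but the example tool itself is invented for illustration:

```python
import json

# A client (the agent) asks an MCP server what tools it exposes.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# The server replies with tool descriptions. The English "description"
# is what the agent reads to decide whether and how to call the tool.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "list_invoices",  # hypothetical example tool
                "description": "List recent invoices for the authenticated merchant.",
                "inputSchema": {
                    "type": "object",
                    "properties": {"limit": {"type": "integer"}},
                },
            }
        ]
    },
}

# In a real deployment these messages travel over stdio or HTTP;
# here we just round-trip through JSON to show the wire shape.
wire = json.dumps(response)
tools = json.loads(wire)["result"]["tools"]
print([t["name"] for t in tools])
```

The agent would follow up with a `tools/call` request naming one of those tools; the continuous-connection and discovery concerns Brad mentions are what the surrounding protocol machinery handles.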
Josh Bressers (22:11) So I have no idea how to answer this, but you mentioned, you know, other technologies, kind of what’s next. Do you have thoughts on that? Is there something you think is kind of the next thing to look out for? ’Cause I have no idea.
Brad (22:26) Yeah, so I have one specifically that we’re working on at the moment, which is ACP, though I have to say there are two protocols with that name that are both related to AI agents. I mean the Agent Client Protocol from Zed. It’s a protocol that lets different front ends talk to an agent.

I think this is a great example of an evolving protocol. It’s why, with Zed, you can connect to Claude Code or AMP or Goose, so from your editor, you can be using the agent that you like. It’s kind of a way to tie into different interfaces. And we’re trying to rewrite Goose, for example, so that you can have a bunch of different experiences, like our terminal UI, our GUI, or something more fun, like maybe texting it from your phone or having a mobile app, and all of these things can connect to the same agent. So I think there’s some potential there, just as one example. But it’s one of those things where, as a foundation, you don’t really want to try to make technology bets; you want to endorse technology that is already succeeding. You want to see that it’s out there working. And so Goose has this ability to try out something and show the value, maybe a little bit in advance of it being part of the foundation. And then we would love to talk to the team about seeing where that could go as a protocol that other people start using too.
Josh Bressers (23:57) Yeah, that seems fair. And that would be nice. I will say, having the ability to work in, like, VS Code or the IDE of my choosing is more pleasant to me than trying to talk to Goose in a text box. But that’s also maybe just me.
Brad (24:12) Right. Yeah. I think also, one of the things we were talking about is how it’s a skill to use AI. And I think the thing that’s going to teach people that skill is coming up with better interfaces. And I really want to experiment on that.
Josh Bressers (24:32) Okay.
I think that might be part of the answer, but so I want to ask you that because not that I’m blaming you or I’m going to expect you to have an answer to this, but like one of the things I have in my brain is if you need to be like a very senior software architect to actually, think, make some of these tools successful. And at the same time, we’re
bleeding out on the low end now where, you know, the juniors and the people coming out of school are like, I can’t find jobs. There’s nothing to do. And, and, know, you’ve got a lot of senior leadership is like, great, we don’t need to hire more developers. We’ll just buy this tool and make the developers we have work harder. Like if I know that you could say with the easy answers, well, we’ll just make the tools better. And then it doesn’t matter how inexperienced they are, but it is a concern I have, right? That, that we’re, we’re ruining.
the junior field in this space.
Brad (25:29) That’s a big question. So let me say that before coming to tech, I was in academia, and I can’t imagine what…
Josh Bressers (25:29) I know it is.
Brad (25:39) college teachers are going through right now, like professors trying to get a bunch of 19-year-olds to write papers. ⁓ I think it’s the same problem. And to say that, ⁓ so, sorry, I’m going a little bit long-winded on this one, but I hope it comes around. So when I was in ⁓ undergrad, did undergrad and then grad school in physics. And the first several years of doing physics, we do these kind of problems that…
It turns out there is a piece of software that has existed for decades that can just do all of these problems, no questions asked, and it’s called Mathematica. And so it’s this giant lie, right? Where you get to year four and someone says, yeah, by the way, everything you’ve done so far, you could have just installed Mathematica and asked it for the answer. And I kind of suspect that we’re going to need to develop that same kind of approach, where people learn the fundamentals in whatever way is the most productive for learning, not for doing, right? It’s just so that you know what kinds of problems you’re solving and how you can inspect whether the results are good. And then you have that foundation to build off of. But it’s going to be a really tough five, ten years before we’ve figured that out. And like you’re saying, we’re not necessarily teaching the junior developers those skills right when they need them. So yeah, I do agree that it’s as much going to be about defining a new way to learn and then operate.

It’s kind of a new job, right? It’s not far removed; I think the people who are in software have a ton of advantages in doing that thing. But I do think you have to embrace that you’re learning how to do something new.
Josh Bressers (27:24) I’ll agree with that. And it’s funny you mentioned Mathematica, because I was a double E, an electrical engineer, in college. And I remember learning all about circuits and how to do all the various transforms, and Ohm’s law, and Kirchhoff’s voltage law, all that stuff. And then eventually they’re like, and here’s some software that does all this for you. And it’s like, wait, what? But I get it. You had to learn, you know.
Brad (27:43) Yeah, exactly.
Josh Bressers (27:47) Okay, man. Yeah, yeah. I mean, that was a good answer. I will not disagree with that answer because I think of all my opinions, the one that I suspect the two of us are in complete agreement with is whether you like agentic AI and LLMs and all this technology or not, it is almost definitely not going anywhere.
Brad (28:12) Yeah, I agree. And again, this is where you ask it to do something that you don’t normally do, because that’s fun instead of frustrating.
Josh Bressers (28:25) That’s true.
Brad (28:25) And then,
yeah, then you’re like, I can do something new. And I think it brings you around to this like, okay, there’s parts of the job that I’m good at, but I don’t like doing. Maybe it can do those for me. And you kind of warm up to it. And you know, maybe someday it’ll do everything. I kind of doubt that. I kind of think that it’ll do 90%, but you’ll focus on the 10 % most important things. And I think that’s still an interesting way to build software.
Josh Bressers (28:53) Yeah, yeah, for sure. I mean, look, the reality is, in most corporate environments, the vast majority of software development at this point has turned into making two open source packages work correctly together by going to Stack Overflow and looking for the answer on how to do that.
Brad (29:07) Yeah, right, right.
Josh Bressers (29:08) there’s nothing wrong with mind you like that’s I’ve done that many a time myself but yeah okay Brad we’ve come to the end this is I learned a lot and I want to thank you for your time like this has been cool but so in the last kind of closing thoughts I’ll give you the floor what do you think we should know what are some things we should watch out for coming up what are you know anything you want us to know anything you want us to go look at research understand anything it’s like completely up to you how you want to end this one
Brad (29:35) Yeah, mean, I so one thing that I’ll say is that I think look out for 2026 being the year where we figure out how to
get beyond a chat experience, where you’re not just having this back and forth and waiting for every message and reviewing everything the model says. I hope that the foundation and Goose as a project are both going to offer some alternatives to letting this thing work async and you having more of this experience where you describe what you want, you wait, you come back, and you have something more complete to work with. So I’m not exactly sure how we’re going to do that, but I think it’s probably the most interesting thing we can be working on is scaling
up how you can have these agents do the things you want and not have to be inspecting every message they send.
Josh Bressers (30:25) Is that out of curiosity? Is that something like Gastown that you’re thinking or are we thinking something even past that?
Brad (30:31) Gastone is a great example of this same goal. Like, agent orchestration is probably the shortest way to say it, but yeah, how do you come up with a new interface? And I would love for Goose to show some options, and I think several other people will as well.
Josh Bressers (30:35) Okay.
And I’ll put a link to Gastown in the show notes, because we are not going to explain that. We’re done. All right, Brad, I want to thank you so much. This has been lovely. I appreciate the time, and I’m excited to see where the foundation goes from here and what we see from Goose in the future. It should be a very interesting 2026 for sure.
Brad (30:51) Yeah, for sure.
Yeah, definitely. It’s been great chatting.