Live from re:Invent…it’s Stack Overflow!
This episode was recorded at AWS re:Invent 2025! Check out Ryan’s recap of events from the floor on our blog.
Connect with Prashanth on LinkedIn.
Connect with Michael on LinkedIn.
TRANSCRIPT
Ryan Donovan: Tired of database limitations and architectures that break when you scale? Think outside rows and columns. MongoDB is built for developers by developers. It’s ACID-compliant, enterprise-ready, and fluent in AI. Start building faster at mongodb.com/build.
[Intro Music]
Ryan Donovan: Hello, and welcome to the Stack Overflow Podcast. I’m Ryan Donovan, your host, and today, recording live from re:Invent in Las Vegas. And my guests today are a couple of returning guests and Stack Overflow regulars, Prashanth Chandrasekar, CEO, and Michael Foree, Director of Data Science. So, welcome back to the show. So, today we wanna talk about some of the stuff that happened at re:Invent, what it means to the larger tech community, that sort of thing. So, Prashanth, I’m gonna start with you. You watched Matt Garman’s keynote. What were your sort of big takeaways from the announcements?
Prashanth Chandrasekar: Yeah, no. Wonderful to be back here, and obviously, re:Invent, AWS re:Invent is an amazing show. Something like 60,000 people are here.
Ryan Donovan: Yeah, 67.
Prashanth Chandrasekar: And it was great because we timed our own AI assist announcement along with the re:Invent show, which we could talk about here today. But I think with Matt Garman’s keynote, I thought it was great. I particularly appreciated three points that he talked about. One was all around this notion of, clearly, agents, which he talked about, you know, these three frontier agents that are just launching, from an autonomous coding agent, to a security agent, to DevOps. I thought that was compelling ’cause you’re effectively moving, finally, in that direction, which I think is the conversation inside enterprises, as we have them with our Stack Internal product. And so, obviously, our own solution plays into that. That was one; the other aspect was he used this great analogy about raising teenage kids and this notion of ‘trust but verify.’ And I think it’s again, very especially true in the enterprise where there’s a lot of enthusiasm for AI agents, and assistants, and AI in general, but the trust level still is low, and for you to gain any sort of scale across enterprises, you’re gonna need to focus on trust. And again, it’s something that we think about at Stack quite a bit with our products. And maybe the third one is, obviously, for me, it was good to see the launch of their own frontier models, which I thought was a very important step for them. And they’re now sort of in the game, so to speak, of the big AI labs, et cetera. You know, AWS has a kind of a right to play there, no doubt, because obviously, they’ve got this great compounding set of forces around their Trainium chips, their cloud infrastructure, where they’ve obviously been a leader for a very long time. And now, of course, with data and agents, all coming together. So, I think that those are the three things that stood out to me.
Michael Foree: Yeah, I’d say there’s a heavy emphasis on agents this year. Everyone’s talking about agents, trusting AI, the analogy of teenagers – how AI is growing and becoming more trustworthy than it was before. It’s getting more mature, but at the teenage level, you still need guardrails to make sure that it’s doing what you’re expecting it to do and going in the right direction. And it’s kind of got a mind of its own at this point. It can go and do things. Non-determinism is a resounding point that I’ve also heard, where it doesn’t do the same thing every single time. It surprises you quite a bit.
Ryan Donovan: One thing that sort of struck me with the frontier agents is that I talk to a lot of AI startups that are doing proactive SRE and security agents, and this is gonna put a lot of them on their heels, I think.
Prashanth Chandrasekar: Yeah, I think that there’s always this story about, you know, AWS, or all these companies, right? With the big hyperscalers, there’s a lot of coopetition, generally speaking, ’cause you know, the platforms that they are building are ultimately serving the customer. Obviously, AWS is very customer-centric, much like we are at Stack Overflow. We’re user- and customer-centric, and I think for them, it’s ultimately: what does the customer want? They can fulfill that through a partner ecosystem where providers can do that, or obviously with their own platform, and they’ve got the ability to keep extending their reach. So, I think it’s only normal for them to extend their value into new areas that customers are asking for, but there’s also room for partners to excel. You know, we’re a partner of AWS, and so, I think there’s always room, because the partner’s probably the expert in the area, very deep, and always pushing the envelope on what’s possible in whatever area. Let’s take cyber as an example: you’ve gotta be 10 steps ahead. You gotta be thinking about 2030 now versus just today’s security resilience. And so, for a base-level cyber security foundation, yes, it makes sense for AWS to extend into that area with an agent, with their own security services, obviously. They’ve got plenty of managed services around that, but they also partner with large security companies. I was on a panel yesterday with Palo Alto Networks and AWS’s security leads. And so that was quite interesting to see, that they partner, right? And they participate at the same time.
Ryan Donovan: Yeah. It was interesting to me. I was part of this AMA with Jason Bennett, the VP and Global Head of their startups program. And I asked basically that: are you putting people out of business? Are you thinking about the startups you fund based on how they fit in your products? And he was like, ‘We’re a platform, right? We wanna serve the customer, and we don’t care. We have a coding agent, but Anthropic runs on AWS, right? They’re here today.’
Michael Foree: Yeah. You mentioned the changes in jobs and the infrastructure. I think that there’s an interesting analogy that I’ve picked up on: when the cloud became a thing about a decade ago, the risk profiles of what was launched changed from on-prem to the cloud. People’s jobs changed based on what was possible, what was easy, what was hard, and where problems cropped up. And I think that it’s gonna be similar with AI. I don’t think that jobs are gonna go away. I think that they’re gonna radically change, because AI is good at some things, so we don’t need people to do X, but it’s gonna be bad at some things, so we do need people to step in and do X. On the security front, there’s gonna be new security vectors and vulnerabilities that crop up that’re gonna need this type of attention. And the people that I’ve been talking to at the conference, one of the things that I hear frequently is, my job isn’t gonna change, but that job over there is gonna change. And I’ll even have people that are not directly pointing fingers at each other, but it’s like, ‘oh, the data engineers should be worried.’ And then I go, and I talk to a data engineer, and they’re like, ‘no, my job is rock solid. Those people over there, they should be concerned.’ I think the theme is that there’s going to be plenty of work, plenty of jobs, but also there’s gonna be a lot of disruption in what your job is and what it’s going to be going forward.
Ryan Donovan: Yeah, I think, like you said, there’s probably gonna be some change, but there are a lot of companies that are looking to cut workforce with AI. There was a panel yesterday with May Habib, CEO of Writer, and she said people call her up and say, ‘how do I eliminate 30% of my workforce?’ Prashanth, what do you think of the future for jobs?
Prashanth Chandrasekar: Yeah. I think that the number of jobs in aggregate will continue to increase, ’cause there’s only gonna be more possibilities, more problems to solve now that we’ve unlocked all this new capability. So, for example, I was talking to a CEO yesterday, his area is in life sciences. He’s building a frontier lab for life sciences, and that would never have existed a few years ago in the context of what he’s doing. And he’s like, ‘it’s a complete game changer for us to have access to this kind of end-to-end, integrated tech stack for the AI capability.’ And so, that only creates new companies like that in aggregate, right? So overall, there’ll be more companies created, and you’re seeing that in the startup ecosystem. You see that here, you know, it’s bustling with startups. And we see that on our own platform. You know, people are using Stack Overflow, a lot of early-stage startups for building new things, asking a lot of questions on AI, et cetera. And so, all those things are indications that the ecosystem will be increasing. But to Michael’s point, the types of jobs will change, most likely, because some of them will be automatable.
Ryan Donovan: Yeah.
Prashanth Chandrasekar: Anything that’s sort of more– let’s say you take the legal space as an example, right? If you’re a junior associate or a paralegal doing a lot of grunt work, and now you have tools like Harvey that can automate away a lot of the low-level tasks, then perhaps you need less of those kinds of roles. But it also suggests that, you know, you may need different kinds of roles and technology to build these new applications that are sitting on top of all this business logic that exists. So, more companies, different types of roles to handle different parts of the stack that have now been unlocked as part of all this innovation, and in aggregate, I believe there’ll be more developer jobs, even though the types of work the developers will be doing will be different, as often talked about. And this is obviously consistent with prior waves, like the DevOps wave, the outsourcing wave. In all those places, people have said jobs were going away, but in reality, jobs have increased because there are a lot more things to go do and work on.
Ryan Donovan: Yeah, and new jobs bring new questions. So, good news for us. So, another thing I’ve seen is a lot of physical AI. I’ve been getting a lot of pitches from robotics companies or abstraction layers for robotics, and there are a bunch of robotics companies here. AWS has a partnership with Nvidia. What do you think of that sort of wave coming?
Michael Foree: I certainly think that robotics is up and coming. It’s a powerful economic engine. Right now, it’s a hot spot for investing, and it’s one of the eight areas that Amazon itself has identified as an opportunity for investment. The number one, of course, being AI, right? And so, I think that combining AI with robotics is a natural question to ask and is absolutely a place where people should consider combining. One thing that limits LLMs today is that you have to be in front of a computer to interact with them, right? But the robot gets it out of that computer realm and allows it to interact with the world around it, right? That’s one thing that you can’t quite do with the cloud, or with data centers, or the internet, or so on and so forth. So, I think that people have been trying to interject robots and autonomous cars as an outstretch of that to interact with the actual human beings. And I think that the advancements with AI in computing are gonna help blend those worlds going forward. So, I, for one, am really intrigued to see how robotics, AI, [and] computing are gonna blend the digital and the physical space. And I think it’s absolutely gonna be a world ripe for disruption in the near future.
Prashanth Chandrasekar: Maybe to add, I would say that obviously there’s a tremendous amount of data being produced. Tesla’s a good example, or, you know, maybe Hillbot, the robotics company. The fact that they’re running around and cataloging literally every interaction—human interaction, machine interaction—with the real world is just a tremendous amount of data. [It’s] good for the ‘picks and shovels’ providers, like the AWSs or even the Databricks of the world. All that data ultimately is being used by AI models, and this is sort of similar, another compounding force. I mean, again, these are these S-curves building on each other, and robotics as an industry has been around, obviously, for decades. But again, you’re getting to this amazing inflection point when you have the ability to bring all these things together with all this data, with this computational power, cheaper and cheaper access to not only compute, but chips now, you know, as we see the cost of that is likely gonna drop because the profile will change as you have competitors like AWS, and Google, and others in that space. I think it’ll just become more accessible for this industry, specifically robotics, to really be autonomous, and I think it has very profound implications. Probably the job implications are a lot more profound in robotics, if you think about the white-collar jobs versus blue-collar jobs that exist. People often talk about how blue-collar jobs are back because white-collar jobs are gonna be more disrupted. I would say it sort of depends. Let’s run the math on what factory jobs are gonna look like, because it’s a large workforce that is in fact in manufacturing. And even though we’re more of a services economy in the US, if you think about it on a global basis, there’s a tremendous amount of job change for people who are in blue-collar jobs, especially when you’re using robotics for those purposes.
Ryan Donovan: Yeah, and this is interesting about the data. I think there were a few of the robotics companies that are either primarily or secondarily data plays, right? They’re just cameras on legs in places where it’s hard for people. But you talk about the blue-collar. There was one company that is working on the five-finger dexterity problem. So, they’re building robots that can cut a carrot, tickle a fish, or something. Do you think we’re approaching that Terminator moment? I watched Companion on the plane over here. Do you think we’re getting to the point where some of the sci-fi warnings are coming true?
Michael Foree: That’s a really interesting thought. You know, Ryan, I think that currently, we have a lot of AI capabilities locked inside of a computer, and it’s bounded only by what the computers can do, and moving into robotics allows that influence to expand. I think that currently, the influence and the ability of AI to do bad things is boxed in by the fact that the humans that built them are not inherently evil or bad. And that if bad things were going to happen with robotics, we would already be seeing those bad things happen in the current environment. There’s an XKCD comic, and the comic is like, ‘oh, with self-driving cars, somebody can go and paint a fake stop sign and trick the self-driving car,’ and well, people could do that already today, but people aren’t doing that. Right? If the robotic revolution is gonna bring on Terminator and a doomsday, we would be seeing that today without the robots, but just within the computer space, and I’m not seeing us do that, and I wouldn’t anticipate that robotics would change that in a meaningful amount. I believe genuinely in the goodness of humans and our desire to be collaborative and to work towards a collective good. And I think that if the people controlling the robots are bad, then bad things are gonna happen. And if the people controlling the robots are good, then good things are gonna happen. And I’m optimistic about humanity in general. So, I’m optimistic about the future of tech.
Ryan Donovan: I love the optimism. I do see a lot of swarm drone companies coming out there. It makes me nervous. I think, you know, sort of related: in the Swami Sivasubramanian keynote today, his announcements around policy and evaluation, the sort of guardrails they’re building into their Bedrock agents – I think those sorts of things are becoming more and more important for LLMs, but even more important for physical AI, too. I know we’ve done some work with our own sort of evaluation based on our data. What do you think of the future of evals and their growing importance?
Prashanth Chandrasekar: Overall, I think having guardrails is to promote trust within companies, as I mentioned previously on the scaling of these models inside enterprises, especially as we adopt them responsibly and so on. And so, I generally think it’s the right moment in time to talk about that, because if you talk to– you know, we have a lot of enterprise customers, and when we discuss with them their issues, and we ask them about their problems, a lot of it comes down to, ‘hey, we’re trying a whole bunch of AI agents, and assistants, and experiments, and POVs,’ but they are constrained, and for various reasons, mostly people reasons, and data reasons, and data infrastructure reasons, [they’re] unable to really expand them in a very large way across the company – point number one. Point number two is even if they have expanded it, and they’ve given open access to a lot of people to go use these things, maybe even throwing, let’s say, caution to the wind a little bit, they’re not seeing the ROI yet. And then, it’s a little bit of a confounding topic, because I think 2026 will probably be the year of rationalization around, ‘okay, why are we spending all this money as enterprises and what are we getting for it?’ And so, there’s that pressure in the system, clearly. So, something like policy guardrails, I think, is a well-timed innovation.
Ryan Donovan: Sure.
Prashanth Chandrasekar: Because it ultimately says, ‘here’s a path for you to feel comfortable about what it can or cannot do. These agents are never gonna go to the dark side. And so, you know, feel comfortable about that.’ And so, use it in your daily work, and then let’s get a real, true sample set of what this is actually producing in terms of return for the company, and of course, productivity improvements for the individual. And of course, there’s our own product, Stack Internal, which is the private version of Stack Overflow – the solution we launched two weeks ago, which has expanded from what was previously known as the Teams product. Stack Internal now has the knowledge ingestion capabilities to bring in content from other parts of the organization and convert it into this beautiful Q&A format that’s so useful for things like retrieval and search for AI use cases, as it breaks down longform content that may be getting out of date, and then uses the scoring algorithm that we built to capture various aspects of, ‘hey, this subject matter expert has answered this, and it’s done within this timeframe,’ so you can score it high. All that functionality going into Stack Overflow, and then it leveraging an MCP server to plug into AI assistants and agents, is our way of helping with this governance aspect by saying, ‘look, trust the data on Stack Overflow Internal inside your company because it’s gone through this painstaking process of curation, human curation, and knowledge intelligence. And then, feel free [and] a lot more open to sort of doing the same as us.’ So, that’s our version of what AWS is doing. And certainly, the response from enterprise customers on our side has been tremendously positive. So, I’m very optimistic that 2026 will be the year where people leverage these sorts of innovations to go faster.
Michael Foree: You ask about evaluations and guardrails, and I think that the evaluation of LLMs is hugely critical, and I think AWS’s framing of guardrails, not checkpoints, is an important progression that reflects the progress being made with LLMs. We are evaluating LLMs at Stack Overflow. My team is evaluating them. We’re trying to figure out where they do well, where they don’t do well. Spoiler alert: they’re not perfect. In one instance, I went to the Electrical Engineering Stack Exchange, and I grabbed a question about a circuit diagram, and I deliberately didn’t include the electrical schema, the pictures of the circuit board, that type of stuff, and the LLM assumed that it had all of the relevant information and provided an incorrect answer. It was correct for the context it had, but it didn’t realize that it was missing crucial information. LLMs don’t know what they don’t know. They need the human to come and supplement that, and that’s one of the great things about Stack Internal and the work that we released just a couple of weeks ago, where the human is there to make sure that the context is proper. You pull the human in where the human’s important, you pull the AI in where the AI is important. And with evaluations, I would say to all of our listeners: LLMs succeed in places that are unexpected, and they fail in places that are unexpected. The best thing that you can do is to try an LLM and see what it does, and document where it is reliable and where it is not. And here’s the hard part: you gotta do it again in six months because it’s gonna change.
Ryan Donovan: Yeah. People I’ve talked to here are like, ‘are people getting anything out of this? Or is this just work slop?’ Michael, I know you’re going around with the placard asking people. What have you found so far about AI use?
Michael Foree: I found that everybody is bullish on AI. Everybody wants to do it, and the headline of the article you quoted is ‘95% of the AI use cases fail.’ They don’t have an ROI. If you read more of the paper, it talks about how lots of people—and I’m seeing this at the conference—people use AI in their everyday, day-to-day life.
Ryan Donovan: Mm-hmm.
Michael Foree: I talked to one lady who uses it to help decorate her house. I talked to a CTO who uses it to write technical documents for his team. I talked to an executive at a pharma company that uses it to do research on trends in the open industry. I talked to a CIO who, with somebody on their team, built an agent that does industry-level research, and none of these things have an ROI. I don’t wanna say that they’re fun hobby projects ’cause they’re definitely not, they’re definitely used for work, but there’s not a dollar value that you get because you don’t have to read 20 different articles, and instead you have an AI read the articles and then you read a 10-page summary. There’s no money value, but there’s definitely something that I get out of it because I get to use it. And what I’m seeing is that when I talk to people with AI in production, very often the AI is a loss leader. It’s a new way of getting users to engage with your platform. United Airlines has a chatbot for certain types of their customers, and there’s not a direct dollar value, ‘oh, if you pay $5, you get to use the AI.’ Instead it’s, if you use the AI, you’re gonna buy the thing that you were gonna buy anyway, except that there’s a 40% less chance that you’re gonna attrite, so you actually buy on our platform, through United Airlines—and I’m making that 40% number up, by the way, don’t quote me on that. You hear that, internet? Don’t quote me on that.
Ryan Donovan: A percentage of the statistics are made up.
Michael Foree: Yeah, right. Yes. But you don’t charge for the AI, you give it away for free so that the customers buy on your platform instead of going to a different platform. United Airlines was able to go and do those ROI calculations, so I think that they’re showing some sort of positive ROI, but a lot of the other companies are launching AI into prod because they know intrinsically it’s the right thing to do to capture the users. And they don’t do the diligence to say, ‘this is the level of attrition that changed. This is the level of engagement that changed.’ It’s, ‘I’m gonna do it ’cause it’s the right thing to do, and now I’ll go back, and I’ll justify it later.’ So, I think that headline statistic is a little bit stilted because people are launching stuff into production knowing it’s the right thing, but without having the numbers to back it up. But also, the paper itself shows everybody wants to use AI in their day-to-day lives, even if they can’t justify it, like, ‘yeah, it helped redesign my house.’ There’s genuinely no dollar value in that, but I’m gonna do it because I want to do it and I get benefit.
Ryan Donovan: The chatbot instance, I think there is an ROI on that, where you don’t pay for the human to answer those questions. But on a side note, I talked to a company yesterday that has a bunch of models, including one that evaluates ROI on projects. I’m gonna give you a chance to respond.
Prashanth Chandrasekar: Yeah. I think this is just part of the regular cycles of people adopting new technology, right? Anything that people have done in the past, including maybe the cloud platform shift many years ago. It reminds me of that, when people were moving from on-prem to cloud. It was a little bit more obvious there in terms of the financial savings because, you know, you could say you’re going from CapEx to OpEx. So, that was very clear, but then the question became around human productivity, like, what are you actually seeing? Are you able to spin up your programs a lot faster? Yes, but it took a while, right? There’s this ‘crossing the chasm’ kind of equivalent, so, it takes a little bit of time. This one is similar but different at the same time in that there’s a lot more complexity, based on Michael’s point of the day-to-day usage. How does it ultimately compound in a way where it’s not just work slop? You have to try a lot of times with these tools, because they’re non-deterministic, so [it’s] almost like you’re spinning the roulette wheel a few times to see what actually works. So, there’s a lot of that, which is different from something like cloud computing, which is like, ‘hey look, you’re putting it on the cloud. You can spin up an instance very quickly. You can spin it down very quickly.’ There’s a big difference there. However, that also had an S-curve, an adoption curve. This one also [has] an adoption curve, but it’s a lot more, I would say, prolonged because of the current state of where all those components are relative to enterprise use, as we were talking about, with things like trust and all those elements. So, all of a sudden, there will be a flip of the switch, and then people will start using it. I think we’re pretty close to that point because enthusiasm is high in terms of the user base and the pressures in the system to use it. There’s a lot of money being spent to use it.
So, there’s pressure from all sides, and the question is like, ‘okay, something’s gotta come out of that pretty soon,’ in terms of productive application. I mean, we’re seeing it in the context of, for example, prototyping very heavily, right? Pretty much everybody is using it, including us at Stack. You know, a lot of the new features we launched this past year, whether that is our AI assist on our public platform, or any of the features like our chat experience, or challenges on the public platform, or even Stack Internal – many of the early versions of those we vibe-coded on our team, and we were evaluating them with users and asking them for their feedback, et cetera. So, we’re using it. It’s productive. It’s better to use that for a somewhat working product than a mock; you get much more real feedback. It’s like the equivalent of Michael’s placard, you know, getting user feedback. This is a little bit more real-time, so I think it’ll get there.
Ryan Donovan: Yeah. I mean, I think the easier that they make it to use it– the smartphones didn’t really kick off until the iPhone or the iPod came about, and it was the easiest version of using it. Do you think people are using agents in daily life, or are there steps to go?
Michael Foree: I think that there’s two different– the use cases are bimodal. Two very different use cases, and one is the normal everyday user that goes to ChatGPT, or their favorite website that hosts an LLM, and they interact with it and engage with it directly, and I think that the barrier to entry is extremely low.
Ryan Donovan: Mm-hmm.
Michael Foree: You just go, and you type some words, or you copy-paste something from an existing place, or you take a picture, and you upload it. For me, when I realized that I could use Gemini to identify plants in my yard, and figure out if something is a weed or not, or what poison ivy is – you know, ‘leaves of three, let it be.’ It’s like, okay, that’s not very practical, but Gemini is so easy. The other angle that’s much more challenging is, from a technical user, integrating the AI with the rest of your tech stack, because the boundary line where the AI starts and your normal tech stack ends is different than the normal tech stacks that we have. And it’s challenging to know when to start using the LLM, and for what purpose, and to build in those guardrails to make sure that it’s doing the thing that I want it to do. Prashanth, you mentioned non-determinism earlier, and the non-deterministic nature of LLMs is a real booger bear, right? If I have a calculator and I’m like, ‘what’s two plus two?’ I get four every single time, right? If I spin up an EC2 instance, I know that it’s gonna be available and that I can run X, Y, and Z, but if I stick an LLM into the middle of this, sometimes I go left, sometimes I go right, and I need to be okay with that. And it’s really quite stressful. The non-deterministic angle is really quite stressful from an engineering point of view. And it’s something that I’m looking to lean into: how can the non-determinism be an asset, an advantage? Instead of trying to box in the LLM, where can we really let that grow and expand? So, you’ve got these two bimodal cases. From a normal user, it’s so easy to use. From a technical angle, integrating it into my tech stack, I’m seeing a lot of friction when I talk to people about when to plug it in and how to use it properly.
Ryan Donovan: Yeah. One of my primary AI uses is identifying spiders, so.
Michael Foree: Yeah, I can identify a spider. It’s a spider.
Ryan Donovan: Yeah. I think people are trying to find ways to actually not use LLMs in LLM flows now. A lot of people are like, ‘this is not–‘ like you said, the two plus two equals four, or coding specific security tests. It’s like, ‘you don’t need an LLM for these.’
Prashanth Chandrasekar: I think it’s an exciting time to be in tech. The amount of innovation happening, and competition happening in the space, I think, is quite impressive. I mean, if you just imagine where we were in ‘23-‘24, and then even just a couple weeks ago with the advancements by Google responding very strongly with their Gemini 3 model, to this week with AWS launching its own frontier models. It’s just impressive to see. People are definitely dancing and making others dance, so to speak.
Ryan Donovan: That’s right.
Prashanth Chandrasekar: And so, I think it’s quite impressive to see that. And for us, I think we’re very proud that we work with all these big companies. They’re all partners of ours, whether that is in the context of our Stack Internal enterprise product, or it’s in our advertising space, or in our data licensing space. Many of these companies license our data to power their LLM models. We’re very proud to play that role, and our role is to really be the most trusted source for technologists, and that is really what we’re focused on. Whether or not that happens on our platform, or through other mechanisms, like through LLM models or through even applications like the code generation tools or, you know, coding agent tools like Cursor, et cetera; our Stack Overflow data now shows up in all these places through our MCP servers, and public data servers, and also our private servers. So, we’re just trying to serve our developer user and our technologist user to the best of our ability, wherever they are.
Michael Foree: I agree with you. Exactly. It’s a whirlwind. Going back just a couple of years, I was pretty sure I could anticipate the challenges in a data world, and then GPT-3.5 launched and turned everything on its head, and nothing’s been the same since. It’s been quite the ride. I wouldn’t trade it for anything. The excitement slowed down a little bit, or maybe I’ve just adapted, and I’ve gotten used to the rapid changes, but it’s quite exciting, and I’m leaning into it. And I think our listeners should also lean into the change.
Ryan Donovan: Right. Well, we live in exciting and interesting times. Thank you for listening, everyone, and talk to you next time.
