Craft and quality beat speed and scale, with or without agents


Linear is a tool for planning and building products. It streamlines issues, projects, and product roadmaps.

Connect with Tom on Twitter.

This episode’s shoutout goes to user ozz, who won a Populist badge for their answer to ‘Column width not working in DataTables bootstrap.’

TRANSCRIPT

Ryan Donovan: Edge AI is changing the way we live, work, and interact with the world. Create future-ready devices with Infineon PSOC Edge, the next generation of machine learning and Edge AI microcontrollers. Learn more at infineon.com/psocedge. That’s I-N-F-I-N-E-O-N dot com slash P-S-O-C-E-D-G-E.

[Intro Music]

Ryan Donovan: Hello, ladies and gentlemen, and welcome to the Stack Overflow Podcast, a place to talk all things software and technology. My name is Ryan Donovan, I’m your host, and today we are talking about AI agents, but how productive are they really? We’ve had some survey data that has shown mixed results. We’ve heard a lot from other people that maybe people aren’t using them, maybe they have an impact, but we’re gonna find out from somebody who is using them in the field. So, my guest today is Tom Moor, who is head of engineering at Linear. So, welcome to the show, Tom.

Tom Moor: Hey, Ryan. Thanks for having me.

Ryan Donovan: Of course, at the top of the show, we like to get to know our guests a little bit. How did you get into software and technology?

Tom Moor: Oh man. Okay, we’re going way back. Interestingly, I got into technology through music. I guess that’s maybe not too rare. Like very, very early, I was buying physical music magazines on a monthly basis, and one of them had a tutorial on how to make a website for your music. And it was literally HTML printed on dead tree, and you copied it by hand from paper into Windows Notepad at the time, and then you got this, like, magical moment of seeing it in Internet Explorer and then being able to tweak it and like, ‘oh, okay, I created this.’ And that was definitely a very foundational experience for me, and then I kind of got from there into building games, of course, like a lot of engineers do. Macromedia Flash: made a bunch of games, and actually managed to sell a few for sponsorship to the early Flash portals. Those are still around somewhere; somebody could dig them out, I’m sure. Yeah, then I built some Android games and eventually kind of worked my way into, like, the startup scene. I was very attracted by this idea of separating your income from your time. I’m sure that’s not an idea that appeals to everyone, but it really hit home with me. And then, yeah, I got into startups from there and entrepreneurialism, and, you know, after teaching myself to code, I had the skills to get into the industry. I came to San Francisco, was part of the founding team of a startup that got into an accelerator in San Francisco, and then stayed there for a decade and kind of made my way through the startup scene.

Ryan Donovan: Obviously, in that time, technology and the software development lifecycle have changed quite a bit, right? I mean, you were probably there at the beginning of the whole cloud-native movement, right?

Tom Moor: I think it had taken off by then, I’d say. So, the first startup I took part in was around 2010 or 2011, and I know exactly the era because it was MongoDB. We were hosting our own MongoDB, I believe, at that time; Mongo didn’t offer a hosted platform for it. And I remember this because it was the source of all of our woes. People hadn’t quite figured out scaling it, and certainly we hadn’t at that time. But yeah, like I said, when I started, you’d still see websites that were, like, proudly coded in Notepad, with those little banners on the website.

Ryan Donovan: That’s right. Or dropping the marquee tags all over the place.

Tom Moor: Yeah, yeah.

Ryan Donovan: And obviously, you know, today everybody’s talking about AI agents. That’s the big new hot technology. We’ve had some survey data where about 50% of our respondents said they’re seeing productivity gains of some sort from AI agents, and we’ve also seen that, basically, the more that people use AI, the less they trust it. Are AI agents really delivering the productivity gains that we were promised?

Tom Moor: I’m surprised at the second one. I’m not surprised at the first one necessarily, but, like, decreasing trust is definitely a bit surprising to me from my own experiences. One thing I would like to ask – I mean, I don’t know if the survey dug into this, but just from your own understanding of the definition of agents – are you thinking more of the agents that live in your editor and that you’re steering on a fairly controlled basis, or the kind of agents that we’re starting to see take shape, where they fully live in the cloud as, like, an alternative team member? Do you think these are one and the same? Do you think people are kind of conflating them?

Ryan Donovan: I think that’s an open question. We’ve talked about that before with folks, where it’s like, ‘what are we talking about when we talk about an agent,’ right? Like, is this, you know, Cursor doing all this stuff for you? Or is it something proactively doing things? Like, what exactly is it? So maybe that’s a good start. What do you talk about when you talk about agents?

Tom Moor: We haven’t covered Linear yet, so let me do a quick intro, ’cause it just adds a little bit of context for this. So, I work at Linear, and Linear is, like, a purpose-built platform for building software. So, it’s very meta, right? End-to-end. Ideally, you have ideas coming in one end from customers and people, and software shipping out the other side. But at its core, it’s an issue tracker, like a very nice issue tracker. So, that’s where the work is defined, where we hold the definition of the work and the context around it, and where you assign that work to people. We’ve definitely come at this from the angle of: obviously, three years ago, all work was assigned to humans. Now, we’re seeing some percentage of that work get assigned to agents. And the agents that we are working with are generally cloud-based agents, because it’s a team product, right? So, they live in the cloud, just like a teammate would. Those agents get triggered from Linear by assigning them work, and then they come back with a pull request, or they come back with an answer. So, I’m a little bit focused on that, ’cause I think that’s where the future is, but I do think that, today, the best agents are the ones that still live in your editor, because they still need a little bit of steering. There’s value to both, if you know which problems should be sent to which, as it were. So, I think of the agents that are in your editor as being just, like, really incredible autocomplete. We had the first version of autocomplete with Copilot – I dunno, that’s probably like five years ago now – when the first LLM autocompletes came out.

Ryan Donovan: I don’t know if it’s that long, but it certainly feels that long.

Tom Moor: I think you might be surprised. I seem to remember, ’cause I looked it up for a presentation I did about half a year ago, and it was like 2019 or something – the first version of some sort of LLM autocomplete. It was pre-ChatGPT. So, then ChatGPT came around, and then they started working on, like, the agent version of these things, right? And then Cursor and Claude Code, and all those, of course.

Ryan Donovan: The most basic definition of agents that I sort of work with is something that has, kind of, chain-of-thought reasoning, plus tool use. So, whether it’s [that] you kick it off or it works proactively, that’s the sort of base level of what I’ve mentally worked with.

Tom Moor: By definition, right? It has, like, some agency, right? Like, you give it a prompt, and then it can figure out the rest from there. That’s kind of where I’m thinking of it as, like, a super autocomplete: it’s taking away even writing the beginnings of the code. Although I’ll say, when I use an agent, I tend to give it some code. Like, you’re giving it method names, you’re giving it names of things that exist that you might pattern match against, the general approaches you would like it to take, like, ‘create a new file here. I want it to have this class in it. It’s gonna be modeled after this other class.’ I feel like to get the best experience, you kind of wanna write a small spec.

Ryan Donovan: The more constraints, the more information you give it, the less opportunities it has to make mistakes, right?

Tom Moor: I don’t know if the folks that have tried it have maybe just not put as much thought into, kind of, the context it needs. It’s a trite comparison at this point, but it’s like a junior engineer, right? Like, you can’t just say, ‘go and fix this thing.’ You need to give it more direction, and it needs to have the context that’s in your head. At the end of the day, it can’t read your mind, so you wanna make sure that that stuff is all detailed out. And particularly with the remote agents, because there are fewer opportunities to interrupt, which is why we’ve seen Linear be a good place for that, because there’s more context available already. You know, maybe the Sentry report is attached, you have a description from a support agent, you have some comments from engineers that have done some pre-analysis of things. Then, you give that to the agent, and it’s like, ‘okay, well now I have a stack trace, I have a description, I know generally where to look,’ and you get much better results that way, right?
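
To make that concrete, here is a minimal sketch of the kind of context bundle a remote agent might be handed when an issue is delegated to it. The field names and prompt format are hypothetical, not Linear’s actual schema.

```typescript
// Hypothetical sketch: assembling the context a remote agent receives when an
// issue is delegated to it. Field names are illustrative only.
interface IssueContext {
  title: string;
  description: string;     // e.g. written by a support agent
  stackTrace?: string;      // e.g. attached from a Sentry report
  engineerNotes: string[];  // pre-analysis comments from the team
}

function buildAgentPrompt(issue: IssueContext): string {
  const sections = [
    `## Task\n${issue.title}\n\n${issue.description}`,
    issue.stackTrace ? `## Stack trace\n${issue.stackTrace}` : null,
    issue.engineerNotes.length
      ? `## Prior analysis\n${issue.engineerNotes.join("\n")}`
      : null,
    `## Constraints\nOpen a pull request; keep the change minimal.`,
  ];
  // Drop empty sections so the agent only sees context that actually exists.
  return sections.filter((s): s is string => s !== null).join("\n\n");
}
```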

Ryan Donovan: You have this central place that aggregates all the information, all the context, and then feeds it to an agent. That’s definitely something we’ve been thinking about here at Stack Overflow with our internal product, which is a place to aggregate context: how do we feed that to agents to make them more effective? So, you’ve been using agents, you’ve been finding them useful, and building them into the product. What do you think drives the hesitancy for other people?

Tom Moor: What I’ve seen on our own team is once you have a couple of bad experiences, some people will kind of check out at that point, and they’re like, ‘I spent five or 10 minutes on this, I felt like it was a waste of time, and that’s time I could have spent solving it myself.’ And I can totally understand that. Particularly if you’ve already given the agent an issue which you felt was on the smaller side, that it could tackle, and it’s like, ‘oh man, 10 minutes. I could have just done it myself in that time, and why did I waste my time on this? I’m not gonna do that next time.’ That’s an attitude you could take, and I think that’s fair. You could maybe take a step back and say, you know, ‘let’s come back again in three or six months when things have gotten even better.’ Or you could kind of introspect the process and be like, ‘why did it fail? What was missing? Was it the way that I described the problem? And could that have been resolved by maybe just tweaking the prompt or the issue description to include a little bit more context?’ Maybe that would take me an extra 30 seconds, but now it’s saved me the time of checking out the branch, pulling down the code, making the change, writing a commit message, pushing it up, making a PR for review. I dunno, these all seem like small things, but they add up, right? Especially on a professional software team where you’re dealing with hundreds of bugs and your time is already tight from all the other things that you have to do.

Ryan Donovan: I mean, it sounds like people haven’t quite figured out how to, like, standardize those workflows for the automation of an agent, right? Figuring that out is a time-consuming problem. I heard a quote from a friend of mine: ‘never spend six minutes doing something by hand where you can spend six hours failing to automate it.’

Tom Moor: I never wanna be the luddite in these situations, and I think a lot of people come at it from the attitude of – there’s some sort of, like, negativity towards AI, whether it’s like environmental concerns, or they don’t like Sam Altman, or Elon Musk.

Ryan Donovan: Right. Whatever it is.

Tom Moor: Or one of these guys, and then they kind of project that onto the technology, you know? The technology is incredible. I think it’s an incredible tool that we’re still just, like, very early learning how to harness, and you know, as people are working in technology, we should embrace it, and figure out how to use it to make our lives easier.

Ryan Donovan: Folks in technology have been through a lot of hype cycles. And they’re getting very wary about the hype. And AI has a lot of hype behind it.

Tom Moor: Right. Like I’ve never seen anything as hyped. Even crypto has nothing in comparison.

Ryan Donovan: Yeah. You know, every time I see Sam Altman, he’s like, ‘we might be building God.’ Like, let’s be careful here. So, how do you think teams can sort of ground their AI agent processes and workflows in reality?

Tom Moor: Just be aware of the capabilities of the agents. What tools they have is one thing, right? Like, each agent is different in this respect, and again, that context. So, for the remote agents, we mainly feed them small bugs, even things as small as a spelling mistake. It sounds silly, but, you know, you get a support ticket in that talks about a spelling mistake on the homepage. What this has enabled is: now, someone on our support team can just assign that to an agent and have it go fix it in code that they maybe don’t even have access to, actually. And then a developer can come and review that pull request and just merge it. And that has turned something which, in some companies – I’ve reported spelling mistakes on websites, you go back a month later, it’s still there. Like, nobody fixed it. It was too much effort. There’s too much process. It got lost in the system. But now, we’re in a place where, you know, in Linear, I think that type of support request coming into our own team would likely be resolved in, like, 30 minutes. I mean, we have a pretty good bug process regardless, which we may or may not get to later, but that’s freed up the time that anybody would’ve spent on that to really focus on more important things, like craft.

Ryan Donovan: In the initial conversations we had, you talked about craft and quality being better than speed and scale, which is interesting, because I think when people talk about AI and AI agents, they’re looking at speed and scale. So, talk about how you maintain craft and quality. And then, you know, we can talk about the bug process too.

Tom Moor: I think it’s a big part of it. There are a couple of aspects: one is what I was mentioning just now, which is offloading a certain percentage of your work to agents. You’ve now freed up that time, right? And what are you gonna spend that time on? Our hope is that you’re gonna spend it on making the other features you’re working on, the other aspects of the product, better. I think the craft comes from moving fast, which is a little bit counterintuitive. Like, when you think of craft in the traditional sense of crafting a chair or something, you think of taking the time to very carefully make sure that every millimeter is correct as you’re sanding down the wood, or whatever. But in software, we want to make sure that we’re shipping constantly. So, at Linear, we deploy hundreds of times a day. Of course, we have continuous deployment; I think that should be standard for most folks that listen to this. And what that enables is that we iterate with customers really, really fast. That’s usually a very small subset of customers, like design partners, beta testers, things like this, and internal shipping. So, if we’re working on a new feature, we’ll be shipping that many times a day. We’ll be feeling it out internally, providing feedback, and then we gradually increase the number of people that see that feature, so that by the time it gets to GA it’s really, really polished. But in order to achieve that, you have to have lots and lots of cycles, hundreds and hundreds of cycles of refinement. If you’re moving slowly, you can’t do those hundreds of cycles of refinement. That’s where you can kind of make that connection. You can say, ‘okay, we’re using agents, whether remote or on the desktop, they’re helping us code faster, they’re helping us catch bugs before they even ship,’ in the case of AI code review, which we actually haven’t spoken about yet, but I think that’s been really, really powerful for us.
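
As a rough illustration of that gradual exposure, here is a minimal sketch of a percentage-based rollout with stable per-user buckets. The stage names and percentages are made up, and real products typically lean on a feature-flag service rather than hand-rolled hashing.

```typescript
// Minimal sketch of percentage-based rollout: a deterministic hash of the user
// ID decides who sees a feature, so exposure can grow from internal testers to
// beta users to GA without a redeploy. Purely illustrative.
import { createHash } from "crypto";

type Stage = { name: string; percent: number };

const ROLLOUT: Stage[] = [
  { name: "internal", percent: 1 },
  { name: "beta", percent: 10 },
  { name: "ga", percent: 100 },
];

function bucket(userId: string): number {
  // Stable 0-99 bucket per user, so a user never flips in and out of a feature.
  const hash = createHash("sha256").update(userId).digest();
  return hash.readUInt32BE(0) % 100;
}

function isEnabled(userId: string, stageName: string): boolean {
  const stage = ROLLOUT.find((s) => s.name === stageName);
  return stage !== undefined && bucket(userId) < stage.percent;
}

// Example: during the beta stage, roughly 10% of users see the feature.
console.log(isEnabled("user_42", "beta"));
```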

Ryan Donovan: The speed comes from the tighter feedback cycles, which is something I’ve heard talked about even before AI. Like, that’s the reason you have CI/CD: you want to get this really quick feedback cycle.

Tom Moor: It’s not a brand new thought in that respect, but yeah, AI just kind of accelerates that further. That’s how I can tie the speed and the quality together, at least.

Ryan Donovan: That is an interesting thing to talk about. That’s a good way to think about it. And you mentioned AI code review – that’s something where I think most developers are, like, ‘that’s where the human comes in,’ right? You get the PR, you do the code review. How do you have AI code review without having it make those mistakes?

Tom Moor: Like you say, the human’s still there, and then the human ends up reviewing the AI code review, if that makes sense. So, it’s, like, a review of the review.

Ryan Donovan: Watching the watchers.

Tom Moor: It doesn’t take that aspect away, but the types of things we’ve seen it catch are logical mistakes in complex logic, which I think are very easy for a human to overlook. You could say that, in a best-case scenario, that might be covered by a unit test, but we all know there are areas of every code base that aren’t unit tested and might still contain some complex logic, whether that’s a quite complex component on the front end, or an algorithm that just doesn’t have every edge case tested. So, those types of things. Security issues, I think, are a really important one that they’re very good at catching. The patterns of security failures are just incredibly well documented, right? But it’s hard for any individual engineer to hold all of those possible failure modes in their head. So, that’s something that we’ve seen caught a number of times: things like path traversal, and passing un-sanitized user input into places it shouldn’t go. They’re very good at being able to trace the code flow through. And of course, we have non-AI tools for this, too. I think ideally you wanna have a combination of all of these things to get really great coverage.
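
For a concrete sense of that security class, here is a small, self-contained example of a path traversal bug and one common fix. It is purely illustrative, not code from Linear.

```typescript
// Illustrative example of the kind of issue an AI reviewer tends to flag:
// un-sanitized user input used to build a filesystem path (path traversal).
import * as path from "path";
import * as fs from "fs";

const UPLOAD_DIR = "/var/app/uploads";

// Vulnerable: "?file=../../../etc/passwd" escapes the uploads directory.
function readUploadUnsafe(fileParam: string): Buffer {
  return fs.readFileSync(path.join(UPLOAD_DIR, fileParam));
}

// Safer: resolve the path and confirm it still lives inside UPLOAD_DIR.
function readUploadSafe(fileParam: string): Buffer {
  const resolved = path.resolve(UPLOAD_DIR, fileParam);
  if (!resolved.startsWith(UPLOAD_DIR + path.sep)) {
    throw new Error("Invalid file path");
  }
  return fs.readFileSync(resolved);
}
```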

Ryan Donovan: That’s an interesting thing I’ve seen as people get more comfortable with AI tools: there’s a strong push towards having a non-generative AI component in there, at least. Maybe it’s a chain of conditionals, maybe it’s some traditional ML.

Tom Moor: Particularly when you are working with AI on a day-to-day basis, like actually developing features with it, it’s easy to start seeing it as the ultimate hammer to smash against everything. And you have to know when to eject out of that. Like, with our own internal AI systems at Linear – you know, we have a lot of AI built into the product at this point to try and help you run the project management process – we often have to go, ‘oh, actually, maybe this is a place where we can step out to some heuristics. We could do some heuristics first, then only pass the remaining subset of things to an LLM,’ stuff like that. So, you constantly have to check yourself a little bit, because it’s such a magical box.
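
A minimal sketch of that ‘heuristics first, LLM only for the remainder’ idea, assuming a hypothetical classifyWithLLM call; the rules and labels are illustrative, not Linear’s.

```typescript
// Sketch of the "heuristics first, LLM second" gate: cheap, deterministic rules
// handle the obvious cases, and only the ambiguous remainder is sent to a model.
type Issue = { title: string; body: string };
type Label = "bug" | "feature-request" | "question" | "unknown";

const KEYWORD_RULES: Array<[RegExp, Label]> = [
  [/crash|exception|stack trace|500 error/i, "bug"],
  [/feature request|would be nice|please add/i, "feature-request"],
  [/how do i|how to|is it possible/i, "question"],
];

// Placeholder for a real model call; assumed to exist for this sketch.
async function classifyWithLLM(_issue: Issue): Promise<Label> {
  return "unknown";
}

async function classify(issue: Issue): Promise<Label> {
  const text = `${issue.title}\n${issue.body}`;
  // Heuristic pass: fast, deterministic, and free.
  for (const [pattern, label] of KEYWORD_RULES) {
    if (pattern.test(text)) return label;
  }
  // Only the subset the rules could not decide goes to the model.
  return classifyWithLLM(issue);
}
```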

Ryan Donovan: And I think you said at the beginning you were a little surprised at the decreasing trust as people get to know AI, but I think this is exactly it. They learn the capabilities, they get past the hype, and they’re like, ‘oh, okay, I’m gonna trust this a little bit less,’ because you should.

Tom Moor: It’s the ‘I tried using it, I now trust it less, so I’m not gonna use it’ type of angle. But I think the way that you just phrased it sounds sensible and good. You learn the technology and you know, like, okay, this is just a word predictor at the end of the day, so maybe, like, let’s not blindly trust it.

Ryan Donovan: Yeah, yeah. This is the fanciest statistics we’ve seen, you know.

Tom Moor: So, your answer is, like, you can’t trust it. This is why it is still valuable to be a software engineer, and it’s still valuable to know all of the fundamental underlying systems. I struggle to see how that side of things is gonna disappear in the near future, right? Like, the better software engineer you are, the better fundamentals you have, the better you’re gonna be at using these tools, because you can more quickly check them. You’re not gonna trust the mistakes, you’ll know when things are wrong and right, and you’ll know how to guide the systems, right? Like I was saying before, in terms of those mini specs, you’re not just saying, ‘make the button blue,’ or, like, ‘add this feature,’ right? You can give it a very thorough specification on how to build something, in a way that only a professional would be able to.

Ryan Donovan: We want everybody to sort of look at this as like a senior engineer, but the way you get senior engineers is you have junior engineers first, and if we’re sending all the junior engineering work to AI agents, how do we get senior engineers?

Tom Moor: I hope we’re not doing that. I mean, right now, like I say, I think they’re very capable of small features and small bugs. So, Linear – at the beginning, we hired pretty much all staff and senior folks, right? We wanted to make sure that people were very autonomous, ’cause we’re a remote company, and we just wanted to keep that bar really high. As we’ve grown – we’re now 50 engineers – we’ve started to hire juniors and mid-level people, and the juniors, of course, are a generation earlier in their careers at this point, and those folks are the best users of these tools, right? Like, they’re not being replaced by them. They are – at least, you know, the ones that we’re hiring – truly augmenting themselves and making sure that they’re teaching the senior engineers how to do these things. So, that’s how I see it going.

Ryan Donovan: So, the apprenticeship goes both ways.

Tom Moor: Yeah, absolutely. Yeah.

Ryan Donovan: So, you also, at Linear, have an interesting integration with Cursor. Cursor seems to have been, you know, one of the more talked-about vibe coding and AI agent coding tools. Can you tell us a little bit about how that agent integration works?

Tom Moor: At its core, Linear is an issue tracker, and what we’ve worked on over the last year is the ability to bring agents into your team so you can interact with them as team members; that’s the ideal scenario. So, we created a developer platform for this where agents can register, you know, can create their apps. We have specific APIs for it that folks can look at, and we’ve tried to make them as easy as possible, so whilst we have Cursor, one of the big guys, we also have enterprise teams and other teams that have just built their own agents and integrated them with Linear, which is really nice too. So, we have this platform that enables you to register an app, and then you can mention that app and assign that app in the same way that you would other users. So, an issue comes in from a customer, you assign it to Cursor, and then Cursor will immediately get started on attempting to fix it in their background agents. It can ask for more input if it needs it, and then you’ll get notified about that, and the issue still stays assigned to you. So, there’s always this, like, human responsibility element that we want to keep. It’s your issue. You’ve delegated the coding to the agent. It can go away, do some work. If it needs help, it will ask you, and you can iterate that way. And then when it’s ready, you’ll get a branch, and you can either put a pull request in for that if it looks like it did a good job, if it one-shotted it, which does happen; or you can take that and pull it down into Cursor on your local machine to make any adjustments that are needed, or finish it off. So, it gets that first percentage of an issue done, and it makes it much easier to go from there, generally. Cursor is probably the biggest name that we have on the system right now, and we have a number of other startups that have built agents that integrate with Linear and other things. The way I think about these agents is: they live in the cloud, which is really nice because, in an ideal world, they’re exposed in Linear, but they’re also exposed in your other tools – so Slack, Microsoft Teams, GitHub – and you should be able to have a conversation across those tools. So, you can start something in Slack with an agent, finish it in Linear, then go through a GitHub code review with the same agent, and it should feel like you’re interacting with one entity that’s got the code checked out remotely and is working through it with you.
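
To sketch that delegation loop (assign an issue to an agent app, the agent works in the background, a branch comes back, the human stays responsible), here is a hypothetical webhook handler. The endpoint, payload shape, and helper functions are assumptions for illustration, not Linear’s actual agent API.

```typescript
// Hypothetical sketch of an agent app receiving an "issue assigned" event and
// kicking off a background run that reports a branch back to the issue.
import express from "express";

interface AssignmentEvent {
  issueId: string;
  title: string;
  description: string;
  delegatedBy: string; // the human who stays responsible for the issue
}

// Placeholder standing in for a real agent runtime: starts a cloud run and
// eventually returns the branch the agent pushed. Assumed for illustration.
async function startBackgroundAgentRun(task: AssignmentEvent): Promise<{ branch: string }> {
  return { branch: `agent/${task.issueId}` };
}

// Placeholder for posting a progress comment back to the issue tracker.
async function postIssueComment(issueId: string, body: string): Promise<void> {
  console.log(`[${issueId}] ${body}`);
}

const app = express();
app.use(express.json());

app.post("/webhooks/agent-assigned", async (req, res) => {
  const event = req.body as AssignmentEvent;
  // Acknowledge quickly; the actual work happens asynchronously.
  res.sendStatus(202);

  const { branch } = await startBackgroundAgentRun(event);
  await postIssueComment(
    event.issueId,
    `Agent pushed branch \`${branch}\`; ${event.delegatedBy} can review it or pull it down locally.`
  );
});

app.listen(3000);
```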

Ryan Donovan: So Linear, and Slack, and all these things are sort of just routing work to these agents. Like, ‘here’s the work.’

Tom Moor: We’re kind of building the platform. We are building another orchestration platform, I suppose, but we already had the orchestration platform; it was just for humans. So, we were the orchestration platform for humans, and now we’re saying, ‘okay, well, if 10% or 20% of work is gonna be done by agents, then they should definitely be first-class citizens of this system.’ And we wanna make sure that you have visibility into what those agents are up to. So, that’s what we’ve enabled: this person is in charge of this issue, this is where the issue is up to. And in Linear, you can actually see all of the thought processes of the agents as well. So, you can dive into the details and see, ‘oh, okay, it’s grepping for files right now,’ that type of stuff that you would also see on the desktop. And then, of course, you have the collaboration aspect there, too. So, if somebody goes on holiday, someone else can pick it up and it’s not a problem, because it’s not living on somebody’s desktop. Or you can collaborate on interacting with one of these things, because it’s all, like, multiplayer, real-time collaboration. And then, I guess on the back end of that, we also have the system of automations around this stuff.

Ryan Donovan: With every sort of new technology, every new abstraction layer, we eventually build these orchestration layers on top of it, and it sounds like this is the orchestration upon the orchestrators, right? The agents being an orchestration platform themselves.

Tom Moor: Yeah. I suppose they’re an orchestration platform of tools, if you could look at it that way. Yeah, yeah.

Ryan Donovan: Or automation, maybe.

Tom Moor: I like that: agents have subagents, too. So, our approach is kind of split into three. We have this agent platform that I just mentioned; that’s, like, the one piece. The second piece is the intelligence layer that we’re baking into the core product. One part of that is something we call ‘product intelligence.’ Basically, we have the concept of triage, which is an inbox for all your incoming issues from support, from your Slack community, things like that. They all go in there. And then we have an agentic system that will basically do the research over those issues immediately. The second it hits, it will go and work in the same way that, ideally, a human would. It will go and try to find related issues, using vector similarity and other keys there, obviously. It will find labels, the project, a potential assignee, and things like this, so that once that’s all done, we put all of that on the issue, and then we notify the human: ‘hey, this thing has come into the inbox,’ and now it’s got this head start; it’s already been labeled, we have a suggested project, and the most important one, I think, is actually the suggested assignee. Especially on larger teams – let’s say you have 100 engineers, or even 1,000 engineers – who should work on this? ‘Who is knowledgeable about this area of the system?’ is often a really big question. We’ve seen, you know, customers have Excel spreadsheets, or they have Notion docs with a giant table of features and engineers in them. So, we take that part away and just say, ‘okay, we’ll do that, and we’ll show you who’s most likely responsible.’ So, I feel like that’s one of these iceberg features where, to the user, it’s just a little panel with some suggestions in it, and then behind the scenes we’re actually deep-researching this thing for up to two or three minutes in the background.
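
Here is a rough sketch of how a suggested-assignee signal like that could work: embed the incoming issue, find the most similar resolved issues, and vote among their assignees. The embeddings are assumed to come from some model elsewhere, and nothing here is Linear’s implementation. Keeping the top matches around is also what lets you show the ‘why’ behind a suggestion.

```typescript
// Illustrative sketch: suggest an assignee by voting among the owners of the
// most similar previously resolved issues (embeddings assumed precomputed).
type ResolvedIssue = { assignee: string; embedding: number[] };

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

function suggestAssignee(
  newIssueEmbedding: number[],
  history: ResolvedIssue[],
  k = 10
): string | null {
  // Rank past issues by similarity and vote among the top k assignees.
  const top = [...history]
    .sort(
      (x, y) =>
        cosine(newIssueEmbedding, y.embedding) - cosine(newIssueEmbedding, x.embedding)
    )
    .slice(0, k);

  const votes = new Map<string, number>();
  for (const issue of top) {
    votes.set(issue.assignee, (votes.get(issue.assignee) ?? 0) + 1);
  }

  let best: string | null = null;
  for (const [assignee, count] of votes) {
    if (best === null || count > (votes.get(best) ?? 0)) best = assignee;
  }
  return best;
}
```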

Ryan Donovan: Knowing who’s responsible for things is no joke. I dealt with that at a previous job where I was like, ‘I just need to know who to talk to,’ and that information didn’t exist. There were, like, 100 services, and there was no central place to answer ‘who owns this?’

Tom Moor: The other thing that it’s able to pull out is the ‘why.’ So, you have a suggested assignee that says, ‘oh, this is for Tom,’ right? You hover over it, and it says, ‘this is for Tom because he worked on this other issue that is really similar to this,’ or ‘this is for Tom because he’s responsible for this microservice.’ Those are the types of things it’s able to expose, which is really nice.

Ryan Donovan: It’s a future of, you know, not checking your inbox and just getting to work on the code, which is, I think, what all developers would hope for.

Tom Moor: We do that triage duty – or ‘goalie,’ as we sometimes call it – on a rotation, in the same way that you would do an on-call rotation. So, all of our product engineers have a week where they’re doing this on a regular basis, and the less time they spend researching, the more time they have for either fixing things or, you know, passing them off to an agent. The one thing that we’re hoping to see is, at some point, once you’ve had enough agents fixing bugs, it’ll suggest the agent, like, ‘oh, this bug looks like something Cursor could fix, actually, because it fixed these three other issues.’ And it’s like, ‘okay, great. Let’s try that.’

Ryan Donovan: Ladies and gentlemen, it is that time of the show again where we shout out somebody who came onto Stack Overflow, dropped a little knowledge, shared some curiosity, and earned themselves a badge. So, today, congrats to Populist badge winner ‘ozz.’ They dropped an answer that was so good it outscored the accepted answer, and they dropped it on ‘Column width not working in DataTables bootstrap.’ Curious about it? We’ll have it in the show notes. My name is Ryan Donovan. I edit the blog and host the podcast here at Stack Overflow. If you have questions, concerns, comments, or topics to cover, please email me at podcast@stackoverflow.com, and if you wanna reach out to me directly, you can find me on LinkedIn.

Tom Moor: Thanks for listening. I’m Tom Moor. I’m the head of engineering at Linear, and you can find me on Twitter at Tom Moor, T-O-M-M-O-O-R.

Ryan Donovan: How can they find Linear?

Tom Moor: Oh, Linear.app.

Ryan Donovan: All right, well, thank you for listening today, and we’ll talk to you next time.


