The “cracked coder” fetish
Greetings from Read Max HQ, now returned to its rightful place in Brooklyn! Today, we’re writing about Luke Farritor, “cracked coders,” and “epistemic arrogance.”
Read Max, in case you have forgotten, is funded almost entirely by paying subscribers. The reading, writing, panic attacks, anxiety scrolling, spacing out, etc. that goes into any given newsletter is extensive, and the support of generous readers helps make this a full-time job. Think of it this way: If you’re a free subscriber who’s found Read Max entertaining, informative, enjoyable, distracting, “not the worst thing you’ve read,” you can do the monetary equivalent of “buying me a beer” by subscribing, for almost the exact price of a beer. Plus, paying subscribers will get a second weekly newsletter featuring recommendations for underrated and overlooked movies, books, and music.
This week Bloomberg published a write-around profile of Luke Farritor, maybe the most prominent non-testicular name among the army of young men hired out of the tech industry to staff Elon Musk’s Department of Government Efficiency in the early days of the Trump administration. Farritor was relatively well-known even before his involvement with D.O.G.E. because he was a member of the team that won the Vesuvius Challenge Prize in 2024, having written the script that identified legible characters in the famously indecipherable Herculaneum Papyri. The framing of the article (written by Susan Berfield, Margi Murphy and Jason Leopold) is something like “how did this ambitious, smart, super-online, home-schooled libertarian kid who won a scientific-A.I. prize end up illegally gutting USAID and the N.I.H.?”
The answer, of course, is that he’s ambitious, smart, super-online, home-schooled, and libertarian; the reporting about Farritor’s pre-D.O.G.E. life suggests that he’s always been “cracked” (i.e., a great and obsessive programmer), hard-working, and abrasive, easily influenced by the whims and obsessions of his Twitter feed and his tech heroes:
On Politichat, a group chat set up by Raikes students, Farritor identified himself as “hopelessly libright.” That’s “lib” for libertarian. “He would be passionate about being contrary,” says one classmate. “I don’t know the extent that he believed in some of the things. He just wanted to push people.” That wasn’t always welcome. Like a lot of group chats, it became a “pretty rough echo chamber,” wrote one classmate who created a private chat, Calm Politichat, to allow dissenting opinions to be heard more openly. Farritor was invited to join.
Farritor’s political discussion seemed to mirror his own Twitter feed—among its prominent figures were the Silicon Valley venture capitalist Marc Andreessen, who would go all in for Donald Trump in 2024; Peter Thiel, the libertarian who helped fund Trump’s first campaign; and Elon Musk. “He thought that Musk was the closest thing to Iron Man. He loved that Musk was pushing the envelope in all these ways,” says one classmate. “I remember at one point Musk was going on a big series about how everyone needs to have more children,” says another. “Farritor started talking about that in Politichat. I remember thinking: ‘Why is this something we’re talking about?’ What he talked about was based on the whims of his algorithms. We were participating more in university life.”
Maybe unsurprisingly, his former classmates seem somewhat skeptical of Farritor’s sudden rise to power:
Former classmates were surprised he had joined DOGE. Some were disturbed, some angry, some proud. “Luke doesn’t represent Raikes, and he isn’t a product of it,” says one. “It felt cool for someone from the Raikes School to be in the limelight,” says another. “Then it very quickly turned to a little bit of concern.” A third texted Farritor something like: Hey man, saw the news and am rooting for you. Farritor replied something like: Thank you. Hope you and your family are doing well. Even some who believed he was doing harm were still concerned enough to inquire about him: Thanks, I’m OK.
But when a former friend posted an article critical of Farritor and DOGE on Instagram, Farritor replied with a meme of a crying baby and the caption: “When the corrupt elites can’t access USAID anymore.” Farritor was blocked by that account too. […]
Some of Farritor’s classmates wondered about the power he seemed to have been given. “Like the entirety of DOGE is scary. It’s very much like going into government and dismantling the core foundations. It’s scary from that perspective. And it’s scary that it’s 22- and 23-year-olds doing it. And I’m saying this as a 23-year-old,” one classmate says. “Normally I think experience shouldn’t matter all that much. But for the government I would like people to have experience.”
As interesting as the article itself, I would say, was the reaction from the broad universe of Musk-sympathetic tech posters on Twitter. Casey Handmer, a V.C. and Vesuvius Challenge collaborator who was quoted in the Bloomberg piece, complained on Twitter that the article was “irresponsible… defamatory” and “brings dishonor upon the profession” of journalism:
The anti-woke tech publicist Lulu Cheng Meservey, meanwhile, singled out for ridicule an anonymous government source who told Bloomberg’s reporters that “Luke’s résumé didn’t pass muster” because “you have to bring some expertise”:
As many have already pointed out on Twitter, what is so particularly upsetting about the article to people like Handmer and Meservey is less that it doesn’t credit Farritor with intellectual ability (it does, consistently) or that it doesn’t properly contextualize his glib cruelty with reference to the correct fever-dream conspiracy theories that would justify it, than that straightforward facts of the D.O.G.E. saga challenge one of the fundamental beliefs of the new Tech Right: That a sufficient number of sufficiently “cracked” programmers can solve any problem put in front of them. Or, put more broadly, that “intelligence” is a quality measurable on a single scale equally applicable across all spheres of human activity.
This is so obviously wrong it seems strange to even have to describe why: Writing a Python script to identify Greek characters, impressive though it certainly is, doesn’t translate in any direct way into “administering budget cuts across a range of government agencies.” But in Silicon Valley, steeped in I.Q. fetishism, an obsession with “agency,” and a moral universe still governed by fantasy high-school resentments, the belief that (heritable) single-vector “intelligence” endows one with full-spectrum authority (and, inversely, that failure to demonstrate this intelligence is delegitimizing) holds sway. “Just put 10 cracked programmers in charge of it” has become the (admittedly at least somewhat trollish) stance of the Tech Right when faced with any sufficiently un-deferential institution, enterprise, or bureaucracy. (Politically speaking, this idea overlaps appealingly and naturally with the widespread low-information-voter belief that a single sufficiently driven and common-sensical guy could “fix” the government–see, e.g., the movies Swing Vote, Dave, Man of the Year, or any interview with a swing Trump voter.)
But as the article pretty plainly demonstrates, D.O.G.E. is the highest-profile and most consequential example of how ineffective and destructive this idea is. The agency’s failure to succeed even on its own terms (it didn’t come anywhere close to making cuts of the size initially promised by Musk), and the fact that its legacy is, at best, the needless deaths of hundreds of thousands of people around the globe, are about as clear an indication as possible that “put 10 cracked programmers in charge” (let alone just one) is not a good solution to basically any problem faced by any large organization, and especially not particularly complex and sensitive ones like the U.S. government. Farritor is certainly very smart, in the sense of being a good programmer and problem-solver. But working effectively at a high-level position within a complex bureaucracy requires not just “cracked” coding ability and work ethic but domain expertise, relevant experience–and even those widely derided “soft skills.”
One particularly noteworthy aspect of both the article and the reaction was the way they revealed what I suppose you’d call epistemic arrogance–a total lack of curiosity about how the agencies D.O.G.E. was gutting might work, or why a government source might suggest that even an impressive “Python A.I. thing” is not a sufficient C.V. for this kind of government work. These tweets from Bo Austin help get at what I mean:
Epistemic arrogance is baked into the culture of Silicon Valley: Blind, foolhardy confidence may be terrible for operating within large and intricate systems, but it’s great for founding and investing in regulations-flouting software companies. Many of the industry’s leading lights are proud ignoramuses, completely unaware of the gaps and blind spots in their knowledge, and ambitious young hackers and programmers are no doubt modeling their own attitudes toward the world on the overconfident performance of genius by people like Elon Musk.
What seems particularly striking about this arrogance at this moment, though, is the extent to which it’s also baked into–and reinforced by–the L.L.M.-based chatbots now driving billions of dollars of investment. These chatbots are effectively epistemic-arrogance machines: They “themselves” have no idea what they “know” or don’t, and in many circumstances will generate baldly incorrect text before admitting to lacking knowledge. Their accuracy has improved significantly over the past three years, but an L.L.M. chatbot fundamentally can’t know what it doesn’t know.
Worse, they’re all but designed to negotiate around and even reinforce the epistemic arrogance of their users, obsequiously confirming their insights, praising their fluency, and overlooking their blind spots and knowledge gaps. I was agog, for example, to read that Travis Kalanick recently claimed that he’s come “close to some interesting breakthroughs” in quantum physics just by conversing with Grok–truly the definition of a man who doesn’t know what he doesn’t know:
“I’ll go down this thread with [Chat]GPT or Grok and I’ll start to get to the edge of what’s known in quantum physics and then I’m doing the equivalent of vibe coding, except it’s vibe physics,” Kalanick explained. “And we’re approaching what’s known. And I’m trying to poke and see if there’s breakthroughs to be had. And I’ve gotten pretty damn close to some interesting breakthroughs just doing that.”
What’s strange here is that epistemic humility is in some sense the single most important skill necessary to make good use of these chatbots–an awareness not just of their limitations, but also of your own. Unleashing the bots among a population of billionaires who share their precise weakness seems like a good way to compound the cruelty and destruction.