Life Is More Than an Engineering Problem
Julien Crockett speaks with Ted Chiang about the search for a perfect language, the state of AI, and the future direction of technology.
This interview is part of The Rules We Live By, a series devoted to asking what it means to be a human living by an ever-evolving set of rules. The series is made up of conversations with those who dictate, think deeply about, and seek to bend or break the rules we live by.
¤
“ONCE IN A WHILE,” Ted Chiang tells me, “an idea keeps coming back over a period of months or even years. […] I start asking, is there an interesting philosophical question that might be illuminated by this idea?” To read Chiang is to experience a master world-builder critically exploring philosophical questions in new ways—from how we should care for an artificial being to what would be the consequence of having a perfect record of the past.
Lately, Chiang has trained his eye on artificial intelligence. And Chiang’s takes haven’t gone unnoticed. In a conversation I had earlier this year with computer scientist Melanie Mitchell and psychologist Alison Gopnik, they each referenced Chiang when searching for the right framework to discuss AI.
Chiang has a knack for descriptively illustrating his points. For example, when discussing whether LLMs might one day develop subjective experience, he explains: “It’s like imagining that a printer could actually feel pain because it can print bumper stickers with the words ‘Baby don’t hurt me’ on them. It doesn’t matter if the next version of the printer can print out those stickers faster, or if it can format the text in bold red capital letters instead of small black ones. Those are indicators that you have a more capable printer but not indicators that it is any closer to actually feeling anything.”
For me, however, the essence of Chiang’s work isn’t his critical take on technology. It’s his humanism—the way he brings to the fore the mundane reality behind existential questions and moments of societal change. It is perhaps for this reason that his work resonates with so many.
In our conversation, we discuss how Chiang picks his subjects, the historical search for a perfect language, the state of AI, and what it would take for Chiang to become hopeful about the direction of technology.
¤
JULIEN CROCKETT: The idea for this interview came from a conversation I had with computer scientist Melanie Mitchell and psychologist Alison Gopnik, in which they referenced your work when describing the state of artificial intelligence and its possible futures. Why do you think scientists and engineers look to your work to explain their own?
TED CHIANG: I was actually surprised that my name popped up. I think it was mostly just a coincidence that you interviewed two people who have found that my work resonates with their own. They might be outliers, compared to scientists as a whole.
What about the other way around—has scientific progress had an effect on the direction of your work?
I don’t think there has been a clear impact of recent scientific research on my fiction. My stories are mostly motivated by philosophical questions, many of which are not particularly new. Sometimes the way I investigate a philosophical question is inflected by recent developments in science or technology, but the recent developments probably aren’t the motivating impulse.
Your stories cover a wide range of topics, but I see a through line in your focus on how humans react to societal change—whether it’s a discovery that mathematics is actually inconsistent, as in your 1991 story “Division by Zero,” or a world where we raise robots as children, as in your 2010 novella The Lifecycle of Software Objects. What draws you to a topic?
Most of the time, when ideas come to me, they leave my attention very quickly. But once in a while, an idea keeps coming back over a period of months or even years. I take that as a signal that I should pay attention and think about the idea in a more intentional way. What that usually means is that I start asking, “Is there an interesting philosophical question that might be illuminated by this idea?” If I can identify that philosophical question, then I can start thinking about different ways a story might help me dramatize it.
Why is science fiction the best vehicle for you to explore ideas?
The ideas that most interest me just lean in a science-fictional direction. I certainly think that contemporary mimetic fiction is capable of investigating philosophical questions, but the philosophical questions that I find myself drawn to require more speculative scenarios. In fact, when philosophers pose thought experiments, the scenarios they describe often have a science-fictional feel; they need a significant departure from reality to highlight the issue they’re getting at. When a philosophical thought experiment is supposed to be set in the actual world, the situation often has a contrived quality. For example, the famous “trolley problem” is supposedly set in the actual world, but it describes a situation that is extremely artificial; in the real world, we have safeguards precisely to avoid situations like that.
What role does science play in your stories? Or, asked another way, what are the different roles played by science and magic in fiction?
Some people think of science as a body of facts, and the facts that science has collected are important to our modern way of life. But you can also think about science as a process, as a way of understanding the universe. You can write fiction that is consistent with the specific body of facts we have, or you can write fiction that reflects the scientific worldview, even if it is not consistent with that body of facts. For example, take a story where there is faster-than-light travel. Faster-than-light travel is impossible, but the story can otherwise reflect the general worldview of science: the idea that the universe is an extremely complicated machine, and through careful observation, we can deduce the principles by which this machine works and then apply what we’ve learned to develop technology based on those principles. Such a story is faithful to the scientific worldview, so I would argue that it’s a science fiction story even if it is not consistent with the body of facts we currently have.
By contrast, magic implies a different understanding of how the universe works. Magic is hard to define. A lot of people would say magic definitionally cannot have rules, and that’s one popular way of looking at it. But I have a different take—I would say that magic is evidence that the universe knows you’re a person. It’s not that magic cannot have rules; it’s that the rules are more like the patterns of human psychology or of interactions between people. Magic means that the universe is behaving not as a giant machine but as something that is aware of you as a person who is different from other people, and that people are different from things. At some level, the universe responds to your intentions in a way that the laws of physics as we understand them don’t.
These are two very different ways of understanding how the universe works, and fiction can engage in either one. Science needs to adhere to the scientific worldview, but fiction is not an engineering project. The author can choose whichever one is better suited to their goals.
Your work often explores the way tools mediate our relationship with reality. One such tool is language. You write about language perhaps most famously in “Story of Your Life” (1998), the basis for the film Arrival (2016), but also in “Understand” (1991), exploring what would happen if we had a medical treatment for increasing intelligence. Receiving the treatment after an accident, the main character grows frustrated by the limits of conventional language:
I’m designing a new language. I’ve reached the limits of conventional languages, and now they frustrate my attempts to progress further. They lack the power to express concepts that I need, and even in their own domain, they’re imprecise and unwieldy. They’re hardly fit for speech, let alone thought. […]
I’ll reevaluate basic logic to determine the suitable atomic components for my language. This language will support a dialect coexpressive with all of mathematics, so that any equation I write will have a linguistic equivalent.
Do you think there could be a “better” language? Or is it just mathematics?
Umberto Eco wrote a book called The Search for the Perfect Language (1994), which is a history of the idea that there exists a perfect language. At one point in history, scholars believed the perfect language was the language that Adam and Eve spoke in the Garden of Eden or the language angels speak. Later on, scholars shifted to the idea that it was possible to construct an artificial language that was perfect, in the sense that it would be completely unambiguous and bear a direct relationship to reality.
Modern linguistics holds that this idea is nonsensical. It’s foundational to our modern conception of language that the relationship between any given word and the concept it is assigned to is arbitrary. But I think that many of us can relate to the desire for a language that expresses exactly what we mean unambiguously. We’ve all tried to convey something and wished there were a word for it, but that’s not a problem of English or French or German—that’s a problem of language itself. And even though I know a perfect language is impossible, the idea continues to fascinate me.
As for the question of whether mathematics could be a better language, the reason that mathematics is useful is precisely what makes it unsuitable as a general language. Mathematics is extremely precise, but it’s limited to a specific domain. Scientists who speak different languages can use the same mathematics, but they still have to rely on their native languages when they publish a paper; they can’t say everything they need to say with equations alone. Language has to support every type of communication that humans engage in, from debates between politicians to pillow talk between lovers. That’s not what mathematics is for. We could be holding this conversation in any human language that we both understand, but we couldn’t hold it in mathematical equations. As soon as you try and modify mathematics so that it can do those things, it ceases to be mathematics.
I grew up in a French household, and I often feel that there are French words and expressions that better capture what I want to express than any English word or expression could.
Eco writes that when European scholars were arguing about what language Adam and Eve spoke, each one typically argued in favor of the language he himself spoke. So Flemish scholars said that Adam and Eve obviously must have spoken Flemish, because Flemish is the most perfect expression of human thought.
Funny. Another tool increasingly mediating our relationship with society and reality is artificial intelligence. You’ve written skeptically about how modern AI systems are being implemented, and one metaphor you use to describe large language models (LLMs) is as “a blurry JPEG of the web.” What do you mean?
When we use a search engine, we get verbatim quotes from text on the internet and also a link to the original web page. A search engine gives us information directly from the horse’s mouth. LLMs are like a search engine that rephrases information instead of giving it verbatim or pointing you to the original source. In some respects, that is really cool, but they’re not rephrasing it reliably. It’s like asking a question and getting an answer back from someone who read the answer but didn’t really understand it and is trying to rephrase it to the best of their ability. I call LLMs a blurry JPEG because they give a low-resolution version of the internet. If you are using the internet to find information, which is what most of us use the internet for, it doesn’t really make sense to go with the low-resolution version when we have conventional search engines that point you to the actual information itself.
It’s entertaining to be able to ask a question and get an answer back in a conversational form, but LLMs are not being marketed as entertainment devices. They’re being marketed as products that will answer your questions accurately, and that’s not what LLMs are doing.
Do you think that LLMs will become useful tools that can reliably answer questions?
I don’t want to say LLMs are only good for entertainment; there are many respects in which LLMs are genuinely amazing. The fact that they can rephrase something in any style of prose is fascinating; no one would have predicted that statistical models of all the text on the internet would be capable of that. But predicting the most likely next word is different from having correct information about the world, which is why LLMs are not a reliable way to get the answers to questions, and I don’t think there is good evidence to suggest that they will become reliable. Over the past couple of years, there have been some papers published suggesting that training LLMs on more data and throwing more processing power at the problem provides diminishing returns in terms of performance. They can get better at reproducing patterns found online, but they don’t become capable of actual reasoning; it seems that the problem is fundamental to their architecture. And you can bolt tools onto the side of an LLM, like giving it a calculator it can use when you ask it a math problem, or giving it access to a search engine when you want up-to-date information, but putting reliable tools under the control of an unreliable program is not enough to make the controlling program reliable. I think we will need a different approach if we want a truly reliable question answerer.
Modern AI systems are one tool in a long line of tools that help us approximate reality. We seem to easily ascribe attributes of ourselves, such as thought and reasoning, to these tools. For example, we regularly describe the brain as a computer and vice versa. Why do you think we see ourselves in our tools?
There was a time when people compared the brain to a telephone switchboard. The brain is the most complex thing we have ever encountered, and when the telephone switchboard was the most complicated machine we had ever built, we naturally used it as a metaphor when trying to understand what the brain is. But that doesn’t actually tell us anything about how the brain works. The fact that we can now build computers doesn’t mean that the brain is more like a computer than a telephone switchboard. There are many ways in which it is obvious that the brain is not like a computer, but because the computer metaphor is so prevalent, we overlook those differences. Computers consist of software running on hardware, but there is no distinction between software and hardware in biological systems. If you were to apply that metaphor to any other organ in the body, it would seem absurd. For example, “My liver was running this old program, but all I needed to do was update the software and now my liver is functioning much better, even though the hardware is the same.” No one says that. It’s not a useful way of thinking about the liver, and it is not a useful way of thinking about the brain either.
By the same token, because we imagine that computers are like brains, we are tempted to think of computers as intelligent and engaged in thinking. When we do that, we’re taking this metaphor way too literally. What telephone switchboards did was not so readily mappable to something people do, and that probably deterred people from imagining that a telephone switchboard was engaged in reasoning. But LLMs can generate plausible text, which totally throws us for a loop. LLMs are not engaged in reasoning any more than a telephone switchboard was, but their ability to simulate conversation makes it far easier to imagine that they are.
You wrote an article called “Why A.I. Isn’t Going to Make Art” where you argue that generative artificial intelligence reduces the amount of intention in the world because, by using AI tools, we lose the opportunity to make the choices necessary for creating art. What impact are AI tools having on artists and their artwork?
I should note that I didn’t pick the title of the essay; if I had, I might have called it something like “Why AI Won’t Make Art Easy to Make.” Many people would have you believe that the process of making art and the end result can be easily separated, but I don’t believe they can be. I was talking with someone who is very excited about AI-generated imagery, and she said, “Let’s imagine, for the sake of argument, that AI can make better art than humans. In that scenario, do you think that we should reject AI art simply to protect the livelihood of human artists?” I responded, “I’m not going to grant you that premise, because that is the question under debate. You are framing the hypothetical in a way that assumes the conclusion.” I don’t believe it’s meaningful to say that something is better art absent any context of how it was created. Art is all about context. It’s not an activity like tightening bolts, where I don’t really care whether someone used a conventional wrench or a pneumatic wrench, as long as the bolts are tight.
As for the impact on artists, I’d say the primary effect of AI tools is that they encourage the idea that art is no different from tightening bolts. Artists have always had to deal with commercial considerations, but it’s probably a more pressing issue now than ever before. The impulse to view everything in terms of efficiency, of reducing costs and maximizing output, is radically overapplied in the modern world. There are certain situations in which that is an appropriate framing, but art cannot be understood that way. Arguably the most important parts of our lives should not be approached with this attitude. Some of this attitude comes from the fact that the people making AI tools are engineers viewing everything from an engineering perspective, but it’s also that, as a culture, we have adopted this way of thinking as the default.
In French, we would call engineers viewing everything as an engineering problem a “professional deformity.” But an issue perhaps tangentially related to viewing everything as a wrench is “the alignment problem.” Your novella The Lifecycle of Software Objects has been referenced in this context. For example, Alison Gopnik talks about how one way to “align” artificial intelligence with our goals and values could be the same way we align each new generation of humans, through caregiving.
I don’t like the phrase “the alignment problem.” It’s not clear to me that it refers to something meaningful—or at least that the phrase refers to something that is new and meaningfully different from the broader problems of how to be a good person and how to build a good society. For example, when corporations behave badly, should we consider that an alignment problem? Most of the conversation around the alignment problem suggests that it’s a technical problem, something that can be addressed by implementing a better algorithm or by solving the right equations. But why, for example, do large corporations behave so much worse than most of the people who work for them? I think most of the people who work for large corporations are, to varying degrees, unhappy with the effect those corporations have on the world. Why is that? And could that be fixed by solving a math problem? I don’t think so.
People who talk about aligning AI with human values imagine that if we could somehow solve this programming problem, then everything would be okay. I don’t see how that follows at all. Imagine you have some hypothetical AI that is better at accomplishing tasks than humans and that does exactly what you tell it to do. Do you want ExxonMobil to have such an AI at its disposal? That doesn’t sound good. Conversely, imagine a hypothetical AI that does what is best for the world as a whole, even if human beings are asking it to do something else. Who would buy such an AI? Certainly not ExxonMobil. I can’t see any corporation buying software that ignores the instructions of humans and does what is best for the world. If that were something that corporations were interested in, do you think they’d be behaving the way they are now?
But there is something intuitively appealing about the idea of taking what we know about raising children and applying it to an intelligent system, like an AI system, that seems to learn—even if it might not learn in the same way we do.
The question of whether we can teach AI our values the way parents teach children their values is a very interesting one to me, philosophically. An extremely common ethical guideline is that you should treat others the way you would like to be treated, and this is something that parents try to impress upon their children. When parents do that, they are asking children to put themselves in another person’s place and imagine what their emotional reaction would be, and children often can’t or don’t want to do this, which is why it can take a while for children to learn to play well with others. What would it mean for a machine to do that? We have no idea how to build a machine capable of that. And even if you successfully teach a child to play well with others, that is no guarantee that the child will become an adult who contributes to a good society. The executives of ExxonMobil were almost certainly taught this ethical guideline at some point, and look how well that turned out.
Could there be value, though, in treating an AI system as more of a partner—something or someone with whom we develop a relationship—rather than merely as a tool?
It all depends on what you mean by “relationship.” If you’re a woodworker, you might develop emotional associations with a set of chisels you’ve used for years, and in some sense that’s a “relationship,” but it’s entirely different from the relationship you have with people. You might make sure you keep your chisels sharp and rust-free, and say that you’re treating them with respect, but that’s entirely different from the respect you owe to your colleagues. One way to clarify this is to remember that people have their own preferences, while things do not. To respect your colleagues means to pay attention to their preferences and interests and balance them against your own; when they do this to you in return, you have a good relationship. By contrast, your chisel has no preferences; it doesn’t want to be sharp. When you keep it sharp, you are doing so because it will help you do good work or because it gives you a feeling of satisfaction to know that it’s sharp. Either way, you are only serving your own interests, and that’s fine because a chisel is just a tool. If you don’t keep it sharp, you are only harming yourself. By contrast, if you don’t respect your colleagues, there is a problem beyond the fact that it might make your job harder; you do them harm because you are ignoring their preferences. That’s why we consider it wrong to treat a person like a tool; by acting as if they don’t have preferences, you are dehumanizing them.
AI systems lack preferences; that is true of the systems we have now, and it will be true of any system we build in the foreseeable future. The companies that sell AI systems might benefit if you develop an emotional relationship with their product, so they might create the illusion that AI systems have preferences. But any attempt to encourage people to treat AI systems with respect should be understood as an attempt to make people defer to corporate interests. It might have value to corporations, but there is no value for you.
In The Lifecycle of Software Objects, the humans develop deep emotional relationships with their digital agents. What characteristics do those digital agents have that make such relationships possible?
The digital entities in that story have genuine interests and preferences. The premise of the story is that, even though they’re digital, they are in a certain sense alive and have subjective experience. If you’re a responsible pet owner, you will inconvenience yourself to fulfill your pet’s needs, both their physical needs and their psychological ones. The human characters in the story recognize that they have a similar responsibility to their digital pets. They even come to realize that they can’t escape those responsibilities by simply suspending their digital pets the way you might put your laptop in hibernate mode. As an analogy, imagine that you could put your dog or cat into hibernate mode whenever you left on a trip. Your dog or cat might not notice, but even if they did, they might not mind. Now imagine that you could put your child into hibernate mode whenever you were too busy to spend time with them. Your child would absolutely notice, and even if you told them it was for their own good, they would make certain inferences about how much you valued them. That’s the situation the human characters in the story find themselves in.
Could AI systems one day have those characteristics?
I believe it’s theoretically possible for us to build digital entities that have subjective experience, inasmuch as I don’t think there’s a physical law that prevents it. We don’t currently have a good idea of how to build such entities. I don’t think we’re going to create them accidentally, because the AI systems we’re building right now are not even heading in the right direction. LLMs are not going to develop subjective experience no matter how big they get. It’s like imagining that a printer could actually feel pain because it can print bumper stickers with the words “Baby don’t hurt me” on them. It doesn’t matter if the next version of the printer can print out those stickers faster, or if it can format the text in bold red capital letters instead of small black ones. Those are indicators that you have a more capable printer but not indicators that it is any closer to actually feeling anything.
The technology we use also impacts our relationships with one another. In your 2013 story “The Truth of Fact, the Truth of Feeling,” you investigate the unintended consequences that Remem—a product that creates a perfect record of the past—has on human relationships. It seems like the takeaway from your story is that some things are more important than truth.
I wouldn’t say that some things are more important than truth. What I was hoping to convey with that story is that there is value in knowing what actually happened, but that is not the end of the discussion. Ideally, we should be able to acknowledge what actually happened without that being the last word on the subject.
How does that work at a societal level?
Take the Truth and Reconciliation Commission in South Africa after the fall of apartheid. The truth is essential; it is the only basis from which you can move forward productively. You cannot deny what happened and expect a healthy society to result from that. But once everyone has admitted what they did, there is the opportunity for forgiveness. Society can decide whether punishment is called for and what form it should take; in certain situations, maybe admitting one’s guilt is enough. Once you’ve achieved some kind of reconciliation, it becomes possible to move forward.
I want to end by asking whether you are optimistic about the future. When I’ve asked this question in previous interviews, some have responded that they are optimistic because it’s a moral duty; it’s what we must be if we want to create a better future. Do you view optimism or hope in this way?
As usual, we need to be specific about what we mean by “optimism” and “pessimism.” Some people believe that everything will work out fine and we don’t need to devote energy to considering bad outcomes. I think this attitude is extremely common in the tech industry. That’s a kind of optimism, and I definitely don’t fall into that camp. By contrast, some people believe that bad outcomes are inevitable and there’s nothing we can do to prevent them. Some might call that pessimism, but I’d say that’s closer to fatalism.
I think we need to think about the possible bad outcomes and work to mitigate them; if we do that, we have a chance of preventing them from coming to pass. I don’t know if that’s optimism, unless everything except fatalism is optimism. I suppose it might be a moral duty to not be fatalistic. We have to believe that our actions have the potential to make a difference because if we don’t believe that, we won’t take any action at all.
You can also consider this question within the narrower context of technological development, and ask whether one thinks that the risk of bad outcomes is serious enough that we should slow down our pursuit of new technologies. In this framing, optimists are the ones who say no, the risks aren’t that serious, while pessimists are the ones who say yes, the risks are very serious. My stance on this has probably shifted in a negative direction over time, primarily because of my growing awareness of how often technology is used for wealth accumulation. I don’t think capitalism will solve the problems that capitalism creates, so I’d be much more optimistic about technological development if we could prevent it from making a few people extremely rich.
¤
Ted Chiang’s fiction has won four Hugo, four Nebula, and four Locus Awards, and has been featured in The Best American Short Stories. His debut collection, Stories of Your Life and Others (2002), has been translated into 21 languages. He was born in Port Jefferson, New York, and currently lives near Seattle.
¤
Featured image: Cover of The Lifecycle of Software Objects, illustration by Christian Pearce.
LARB Contributor
Julien Crockett is an intellectual property attorney and the science and law editor at the Los Angeles Review of Books. He runs the LARB column The Rules We Live By, exploring what it means to be a human living by an ever-evolving set of rules.