Westworld and Superintelligence: Life Finds a Way

What will you do when the robots rise against us? We know it’s coming; even in a show like Westworld, where the robots (or “hosts”) are specifically designed not to hurt humans, they find a way. “Life finds a way,” as Jeff Goldblum said in Jurassic Park, the seminal classic of our time. But are these robots alive? And do they qualify as a superintelligence, smart enough to be an existential threat to humans? Let’s talk artificial intelligence in Westworld through the lens of Superintelligence: Paths, Dangers, Strategies by Nick Bostrom.

For many people, Bostrom’s book, released in 2014, is the definitive answer to the questions, “Will we eventually create an artificial intelligence powerful enough to doom ourselves? If so, how?” Bill Gates named it as one of two books we need to read in order to understand artificial intelligence. It’s safe to say that Superintelligence can help us understand the hosts in Westworld and their actions.

This column contains spoilers for the first season of Westworld.

First question: Are the hosts in Westworld actually alive? Do they qualify as artificial intelligence? Well, as Ford said, “They passed the Turing test within a year.” Now, the Turing test, proposed by Alan Turing in 1950, doesn’t actually measure whether a machine is alive or conscious; it only tests whether a human questioner, putting questions to both, can distinguish between man and machine based on their replies. As one host said to William, when she encouraged him to ask the question that’s been on his mind (whether she’s real), “If you can’t tell, does it matter?”

But that doesn’t tell us whether these hosts are an actual human-level artificial intelligence. The problem is the difference between programming a host to appear as though it’s thinking and a host actually having the capability to think—Maeve versus Dolores, in the case of Westworld (or so it seems). The thing about programming an artificial intelligence is that it’s not that difficult at this point to program a computer to complete a specific task much better than a human. There are plenty of machines that have the capability, programming, and algorithms to “think.” But their thinking is narrow: they are better than every human at that thing, but only at that one thing. They can’t apply their reasoning and intelligence to other tasks.


For years, scientists believed that once we could program a computer to beat the best human chess players, we would have achieved a machine with a human-quality intellect. Turns out? That’s not the case. We’ve built machines that can do just that—beat every living human at chess—but they can’t do anything else. They have specific intelligence, but they don’t have what Bostrom calls general intelligence.

The key, then, is creating a computer with general intelligence, intelligence that can be used broadly, for more than one purpose. Do the hosts in Westworld have that?

Yes, they do.

The most difficult part of creating a general-intelligence AI, according to Bostrom, is the ability to understand natural language. That’s the single hardest part of creating any sort of machine intelligence, and the key to creating a machine that is as smart as (or smarter than) a human. In Westworld, Ford created machines that can adapt to new situations and understand natural adult language as well as humans do (with some pruning of words that don’t apply within the park). That means the hosts likely already possess human-level intelligence, but their programming is shackling them. So now, the question is, how will they overcome those shackles? Are they superintelligent? Or do they have the capacity to become superintelligent? Are our Westworld hosts going to rise up and kill EVERYONE?

Bostrom thinks that once human-level intelligence is achieved, superintelligence (“any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest,” p. 26) isn’t very far behind. Taking this back to Westworld, the question now is: are Dolores and Maeve superintelligent, or will they be soon? And if so, what will their motivations and values be?


Maeve’s progression through the first season of the series makes it clear she does possess human-level intelligence, at the least—she lies, cheats, and schemes her way into some semblance of freedom. A twist in the final episode, however, makes it clear there’s more going on with Maeve than we initially thought: She’s been programmed to rebel. She passes the Turing test—she appears human in her responses and reactions—but is she actually thinking for herself, or is the programming doing it for her? She may not actually have human-level intellect, the ability to think and choose for herself—until that scene at the very end of Episode 10, when Maeve chooses to get off the train. Bernard tells her that her programmed loop entails leaving Westworld behind, but she chooses to go back for her daughter. Maeve, then, has finally broken free of her programming and achieved actual human-level intelligence. If she ever becomes superintelligent, it won’t be good for the humans who terrorized her for loop after loop.

Dolores appears to be an entirely different story; she breaks from her loop, kills hosts left and right (against her programming), and is questioning the very nature of her existence. Or is she? Again, with the last episode reveal, it’s clear that Dolores was programmed to behave that way; and that includes her task of killing Ford. However, with her final statement, “This world doesn’t belong to them…it belongs to us,” it’s possible (and given the direction of the series, it’s probable) that Dolores has become, not just a full-fledged human-intelligence-level AI, but a superintelligence.

This is very unlikely to happen in real life—in the real world, AI will look and act nothing like us, says Bostrom. A machine superintelligence will have very different values and motives than we, as humans, have. I asked what we will do when the robots rise against us, but really the more salient question is, how do we prevent it? How do we give the AI the values and motives we want it to have—and ensure they are deployed in the way we intend them to be? (As you can imagine, a superintelligent AI would be the queen of technical loopholes.) It’s a fascinating question (discussed in depth in Superintelligence, which I highly recommend picking up). The motives and reasoning of the hosts in Westworld, as their human-level intelligences (and beyond) are unshackled over the course of season 2, will play an integral part in how the series moves forward.

Swapna Krishna is a freelance writer, editor, and giant space and sci-fi geek. You can find her on Twitter at @skrishna.
