By all accounts, a robot that has named itself Murderbot should have absolutely no camaraderie with humans. Not that it even wants to—the SecUnit at the heart of Martha Wells’ All Systems Red does the bare minimum of its job, i.e., keeps its human clients alive, then immediately ducks into its cubicle to stream the latest episode of Sanctuary Moon. This is no C-3PO, human-cyborg relations, fluent in over six million forms of communication. Murderbot can’t even adequately express its desire for privacy, stumbling through conversation with its clients while holding its gruesomely half-healed organic parts together. It possesses no subtlety, and no interest in refining that aspect of its communication.
Ironically, that awkwardness is exactly what will keep Murderbot from getting taken offline.
Slight spoilers for Martha Wells’ All Systems Red and Annalee Newitz’s Autonomous.
Every smooth-talking Ava from Ex Machina or guileless David from A.I.—manufactured to look like mates we want to win or children we want to protect—will trip up at some point and raise humans’ hackles. Because flawless robots aren’t just unconvincing, they’re chilling. So long as roboticists attempt to make their creations pass every test, Turing and otherwise, with impossibly high marks, these robots will founder in the uncanny valley.
But the robots who make mistakes at the start? Those are the experiments that will succeed. The robots who will earn a spot alongside humans are the ones who would like nothing better than to flee the room and watch TV alone. The creations that humans will be able to look at with empathy rather than fear are the androids who can’t maintain eye contact. Awkward robots are our future—or our present, judging from more than a few fictional bots who are charmingly imperfect.
Despite working with a half dozen scientists, Murderbot treats the actors in its favorite serial soap opera as an accurate representation of human drama. When it is forced to interact with flesh-and-blood people, it filters real-life events through the narrative arc of television: “on the entertainment feed, this is what they call an ‘oh shit’ moment,” it considers after revealing a key piece of information it probably should have kept to itself. In contrast to the grand stories of honor and heroics it watches, the Murderbot does things like save its clients from a bloodthirsty beast lurking in a crater simply because it’s paid to do so. And when the humans attempt to reciprocate by offering that the Murderbot can hang out with them in what amounts to their living room, the Murderbot—which has foolishly dispensed with its usual opaque helmet—wears such a look of horror on its organic face that it strikes everyone silent with the sheer lack of subterfuge in its response.
That should be the end of it, an awkwardness weighing so heavily that no one should even try to dislodge it, and yet Murderbot’s faux pas is what endears it to the humans. Despite themselves, they’re charmed, and curious enough to prod, with questions of “why are you upset?” and “what can we do to make you feel better?” Instead of fearing that they’ve angered a being that refers to itself as Murderbot and could gun them all down for the offense, they are strangely protective of its emotional state.
The funny thing about the uncanny valley is that robots almost pass the test; after all, there have to be edges to the valley. Humans will engage with a robot that resembles them to the point that their brains nearly make the leap to accepting this other being as something familiar—then all it takes is a jerky twitch or rictus smile, and human empathy goes into freefall. The Murderbot’s face should have repulsed its clients—not because of its expression of horror, but because its looks are an approximation of some other human out in the universe, placed on top of an armored body with guns for arms. But because of the naked awkwardness of turning down an invite to socialize, the Murderbot manages to completely veer away from the uncanny valley.
In fact, the most effective robots neither need to resemble humans (in part or at all) nor act like some flawless, upgraded version of them. A recent study from the University of Salzburg’s Center for Human-Computer Interaction found that people actually preferred a robot that was flawed, that made mistakes, that looked to humans for social cues instead of having the answers preprogrammed. This uncertainty or these small failures on a robot’s part confirmed the Pratfall Effect, explained PhD candidate Nicole Mirnig, a corresponding author on the study: The theory “states that people’s attractiveness increases when they make a mistake.”
Annalee Newitz’s Autonomous introduces us to Paladin, a sympathetically gawky military bot fine-tuning his identity as he goes along. Though he’s top-grade for his function—that is, tracking down pharmaceutical pirates—Paladin’s human intelligence skills are sorely lacking. He constantly mines interactions with humans, from his partner Eliasz to the various targets they pursue undercover, for insights into the complexities of human interaction. Most importantly, he conducts personal mini-experiments, relying on Eliasz for guidance in social cues, with the expectation of failure. In one self-imposed human social communications “test,” which takes place during a firing range exercise, Paladin decides not to speak to Eliasz at all, instead learning everything he needs to know from his partner’s unconscious physical responses to being pressed that close to a robot as bulky and inhuman-looking as Paladin.
Despite not resembling a human at all, aside from the brain housed inside his carapace, Paladin appears no less anthropomorphic because of his trial-and-error approaches to socializing. By asking questions, attempting solutions, and making up for missteps, Paladin seems more human than a machine that already possesses the algorithms or data banks from which to draw the correct answer on the first try. That checks out with the real-world study, which found that the faulty robots were not considered less anthropomorphic or less intelligent than their perfectly performing counterparts. They contain multitudes, just like people.
In fact, part of Paladin exploring his identity is engaging in one of the ultimate instances of human trial-and-error: He gets into a relationship, complete with an awkward navigation of both parties’ emotional and sexual needs, plus questions about his own autonomy in this partnership that keep him up at night when he really should be using his human brain for something more productive.
But that deep curiosity, that existential experimentation, is what makes Paladin compelling, just like Murderbot’s preference for serials and self-care over painfully stilted conversation. And while those interactions are messier and more awkward than those of a robot that smoothly follows protocol, they establish deeper relationships with humans—with both their professional and personal partners, and also the humans who read these stories then tab over to “aww” at the security robot that “drowned” itself and then earned a memorial service.
Today’s robots are overcoming the uncanny valley, not by leaping over the chasm of almost-but-not-quite but by bridging the divide with very human awkwardness. It’s equal parts charming and disarming. The robots that trip our internal alarms are the ones who are programmed to be smarter than us, stronger, indestructible—the ones we have to worry about superseding humanity. But the bots who reflect back our own flaws, who mirror our own stumbles in social situations—those robots have staying power. Whether our future holds evolved versions of Siri and Alexa or sentient beings closer to Paladin and Murderbot, our best robot peers will be the most awkward ones.
Natalie Zutter has fun trying to trip up the Alexa at her apartment, though so far the wily thing won’t play along. Talk awkward robots with her on Twitter!