As an autistic lover of sci-fi, I really relate to robots. When handled well, they can be a fascinating exploration of the way that somebody can be very unlike the traditional standard of “human” but still be a person worthy of respect. However, robots who explicitly share traits with autistic people can get… murky.
The issue here is that autistic people being compared to robots—because we’re “emotionless” and “incapable of love”—is a very real and very dangerous stereotype. There’s a common misconception that autistic people are completely devoid of feelings: that we’re incapable of being kind and loving and considerate, that we never feel pain or sorrow or grief. This causes autistic people to face everything from social isolation from our peers to abuse from our partners and caregivers. Why would you be friends with someone who is incapable of kindness? Why should you feel bad about hurting someone who is incapable of feeling pain? Because of this, many autistic people think that any autistic-coded robot is inherently “bad representation.”
But I disagree! I think that the topic can, when handled correctly, be done very well—and I think that Martha Wells’ The Murderbot Diaries series is an excellent example.
Note: This essay contains some spoilers for The Murderbot Diaries.
In The Murderbot Diaries, we follow the titular Murderbot: a security unit (SecUnit) living in a sci-fi dystopia known as the Corporation Rim, where capitalism runs even more disastrously rampant than it does in our world. Our friend Murderbot is a construct—a living, sentient being created in a lab with a mix of mechanical and organic parts. In the Corporation Rim, SecUnits are considered property and have no rights; essentially, they’re lab-built slaves. It’s a dark setting with a dark plot that’s saved from being overwhelmingly miserable by Murderbot’s humorous and often bitingly sarcastic commentary, which forms the books’ first-person narration.
From the earliest pages of the first book, I was thinking, “Wow, Murderbot is very autistic.” It (Murderbot chooses to use it/its pronouns) displays traits that are prevalent in real-life autistic people: it has a special interest in the in-universe equivalent of soap operas; it hates being touched by anyone, even people it likes; it feels uncomfortable in social situations because it doesn’t know how to interact with people; it hates eye contact to such an extent that it will hack into the nearest security camera to view somebody’s face instead of looking at them directly (which, side note, is something that I would do in a heartbeat if I had the capability).
The central conflict of the series is the issue of Murderbot’s personhood. While SecUnits are legally and socially considered objects, the reality is that they’re living, sentient beings. The first humans we see realize this in-story are from a planet called Preservation, where constructs have (slightly) more rights than in the Corporation Rim. Eager to help, they make a well-intentioned attempt to save Murderbot by doing what they think is best for it: Dr. Mensah, the leader of the group, purchases Murderbot with the intention of letting it live with her family on Preservation. As Murderbot talks to the humans about what living on Preservation would be like—a quiet, peaceful life on a farm—it realizes that it doesn’t want that. It slips away in the middle of the night, sneaking onto a spaceship and leaving Dr. Mensah (its “favorite human”) with a note explaining why it needed to leave.
As an autistic person, I recognized so much of Murderbot in myself. Since my early childhood, my life has been full of non-autistic people who think that they know what’s best for me without ever bothering to ask me what I want. There’s this very prevalent idea that autistic people are “eternal children” who are incapable of making decisions for themselves. Even people who don’t consciously believe that and know it’s harmful can very easily fall into thinking that they know better than us because they’ve internalized this idea. If you asked them, “Do you think autistic people are capable of making their own decisions?”, they’d say yes. But in practice, they still default to making decisions for the autistic people in their lives because they subconsciously believe that they know better.
Likewise, if you had asked the Preservation humans, “Do you think Murderbot is a real person who is capable of making its own decisions?”, all of them undoubtedly would have said yes—even Gurathin, the member of the Preservation team who has the most contentious relationship with Murderbot, still views it as a person:
“You have to think of it as a person,” Pin-Lee said to Gurathin.
“It is a person,” Arada insisted.
“I do think of it as a person,” Gurathin said. “An angry, heavily armed person who has no reason to trust us.”
“Then stop being mean to it,” Ratthi told him. “That might help.”
But even though the Preservation humans all consciously acknowledge that Murderbot is a person, they still fall into the trap of thinking that they know what it needs better than it does. Ultimately—and very importantly—this line of thinking is shown to be incorrect. It’s made clear that the Preservation humans never should have assumed to know what’s best for Murderbot. It is, at the end of the day, a fully sentient person who has the right to decide what its own life is going to look like.
Even so, the series could have been a poor portrayal of an autistic-coded robot if the story’s overall message had been different. In many stories about benign non-humans interacting with humans—whether they’re robots or aliens or dragons—the message is often, “This non-human is worthy of respect because they’re actually not that different from humans!” We see this in media like Star Trek: The Next Generation, where a major part of the android Data’s arc is seeing him start to do more “human” things, like writing poetry, adopting a cat, and even (in one episode) having a child. While presumably well-intentioned, this has always felt hollow to me as an autistic person. When I see this trope, all I can think of is the non-autistic people who try to voice their support for autistic people by saying that we’re just like them, really, we’re basically the same!
But we’re not the same. That’s the entire point: our brains just plain don’t work the way that non-autistic brains do. And, quite frankly, I’m tired of people ignoring that and basing their advocacy for us and their respect of us on the false idea that we’re just like them—particularly because that means that autistic people who are even less like your typical non-autistic person get left behind. I don’t want you to respect me because I’m like you; I want you to respect me because my being different from you doesn’t make me less of a person.
That’s why, when I was first reading the Murderbot series, I was a little trepidatious about how Murderbot’s identity crisis would be handled. I worried that Murderbot’s arc would be it learning a Very Special Lesson about how it’s actually just like humans and should consider itself a human and want to do human things. I was so deeply, blissfully relieved when that turned out not to be the case.
Through the course of the series, Murderbot never starts considering itself human and it never bases its wants and desires around what a human would want. Rather, it realizes that even though it’s not human, it’s still a person. Though it takes them a few books, the Preservation humans realize this, too. In the fourth novella, Exit Strategy, Murderbot and Dr. Mensah have one of my favorite exchanges in the series:
“I don’t want to be human.”
Dr. Mensah said, “That’s not an attitude a lot of humans are going to understand. We tend to think that because a bot or a construct looks human, its ultimate goal would be to become human.”
“That’s the dumbest thing I’ve ever heard.”
Something I want to highlight in this analysis is that the narrative treats all machine intelligences like people, not just the ones (like Murderbot) who look physically similar to humans. This grace extends to characters like ART, an AI who pilots a spaceship that Murderbot hitches a ride on. ART (a nickname bestowed by Murderbot, short for “Asshole Research Transport”) is an anomaly in the series: in contrast to all the other bot pilots, who communicate in strings of code, it speaks in full sentences, it uses sarcasm as much as Murderbot does, and it has very human-like emotions, showing things like affection for its crew and fear for their safety.
But even those bot pilots who communicate in code have personhood, too: while they can’t use words, Murderbot still communicates with them. When a bot pilot is deleted by a virus in Artificial Condition, that’s not akin to deleting a video game from your computer—it’s the murder of a sentient being.
This, too, feels meaningful to me as an autistic person. A lot of autistic people are entirely or partially nonverbal, and verbal autistic people can temporarily lose their ability to speak during times of stress. Even when we can speak, many of us still don’t communicate in ways that non-autistic people consider acceptable: we operate off scripts and flounder if we have to deviate; we take refuge in songs and poems and stories that describe our feelings better than we can; we struggle to understand sarcasm, even when we can use it ourselves; we’re blunt because we don’t see the point in being subtle; and if you don’t get something we’re saying, we’ll just repeat the exact same words until you do because we can’t find another way to word it.
Some nonverbal autistic people use AAC (Augmentative and Alternative Communication) to communicate—like using a text-to-speech program, pointing at a letter board to spell words, writing/drawing, or using physical gestures, facial expressions, and sounds. Whatever method an autistic person uses, it doesn’t say anything about their ability to think or how much of a person they are. All it says is that they need accommodations. This doesn’t just extend to autistic people, either: many people with a variety of different disabilities use AAC because they can’t communicate verbally (not to mention deaf people who communicate via their local sign language).
Like many aspects of disability that mark us as different from abled people, this is one aspect of our brains that people use to demonize and infantilize us: because we can’t communicate in ways that they consider “right”, they don’t believe we’re capable of thinking or feeling like they do—some of them, even on just a subconscious level, don’t consider us human at all.
Because of this, it feels deeply meaningful to me that The Murderbot Diaries shows characters who can’t communicate with words and still treats them as people. When Murderbot hops on a bot-driven transport, it can’t talk to it with words, but it can watch movies with it. In real life, a non-autistic person may have an autistic loved one who they can’t communicate with verbally, but they can read the same books or watch the same movies and bond through them.
The central tenet of The Murderbot Diaries is not “machine intelligences are evil,” but it’s also not “machine intelligences are good because they’re basically human.” What the message of the story comes down to (in addition to the classic sci-fi “capitalism sucks” message that I love so very much) is “Machine intelligences are not human, they will never be human, they will always be different, but they’re still people and they’re still worthy of respect.” While it takes a bit of time, the Preservation humans do eventually understand this: the fourth book, Exit Strategy, even ends with Dr. Bharadwaj—a Preservation human who Murderbot saves from death in the opening scene of the series—deciding that she’s going to make a documentary about constructs and bots to try to make other people see this, too.
At the end of the day, that’s what I want for real-life autistic people. I don’t want parents who put their autistic children through abusive programs to try to force them to stop being autistic. I don’t want “allies” whose support of us hinges upon us not acting “too autistic”. I don’t want anyone to accept me if that acceptance is based on a false idea of who I am, on the idea that there’s a hidden “real me” buried underneath my autism and only abuse can uncover it. I don’t want to be around people who like a fake version of me that only exists in their head. Like Murderbot, I don’t want people to like me because they’re ignoring something fundamental about me—I want them to understand who I really am and love me for it.
I want people to look at me as an autistic person and say, “You are not like me, and that is fine, and you are still a person.” That, to me, is the ultimate goal of all disability activism: to create a kinder world where there is no standard for what being a “real person” entails and basic respect is afforded to everyone because of their intrinsic value as a living being.
When I see non-autistic people who refuse to acknowledge the humanity of autistic people, I want to suggest they read The Murderbot Diaries. If they did, I think that this robot could teach them something important about being human.
Cassie Josephs (he/they/she) is a Seattle-based writer who’s carried a deep passion for storytelling since their seven-year-old-self made their Barbies act out tales of murder, betrayal, and political intrigue. These days, her tales are actually a bit lighter. (But only a bit.) He can be found on Twitter @cassjosephs and on his website https://cassjosephs.com.