
Artificial Intelligence in Fiction, Fact, and Our Dreams of the Future

By Fran Wilde

Published on April 12, 2022

Artist's visualization of an artificial neural network (Image by Mike MacKenzie, CC BY 2.0)

Originally published February 2020.

To celebrate the launch of Machina, a new Serial Box story about the race to build the robots and AI that will take us to Mars, co-authors Fran Wilde (The Bone Universe, The Gemworld), Malka Older (The Centenal Cycle), Martha Wells (The Murderbot Diaries), and Curtis Chen (Waypoint Kangaroo) sat down with Naomi Kritzer (Catfishing on CatNet) and Max Gladstone (The Empress of Forever, The Craft Sequence) for a Tor.com roundtable to talk about AI as it appears in fiction, in fact, and in our dreams of the future.

 

Fran Wilde: Iteration: When we think of AI, we often forget that the humans building and designing current models, with all of their flaws, are part of the equation. What could possibly go wrong? What’s your favorite recent fiction for that?

Malka Older: There’s a parallel here with fiction, which we sometimes forget registers the biases and flaws of its author and era. These might be largely invisible to those contemporary readers who share them, while embarrassingly clear with more cultural and/or temporal distance. I’d like to see more fiction that imagines a complicated evolutionary history for AI, with trends and missteps.

Martha Wells: There are so many things that can go wrong, and I think we haven’t even touched on a small percentage of them yet in fiction. For instance, the idea of an AI adopting the racism and misogyny of the online users it was meant to moderate, and how much damage it could do. There was a TV show in 2010 called Better Off Ted that touched on this: the new AI that controls the elevators for the evil corporation is only trained to recognize the white employees, so no one else is able to get around the building.

Naomi Kritzer: One of the many (many, many, many) ways in which humans screw up is that we make decisions that make perfect sense in the short term and will massively increase our problems in the long term. A recent piece of fiction I very much enjoyed that explores this problem (in conjunction with AI and AI-adjacent technologies) was Fonda Lee’s short story “I (28M) created a deepfake girlfriend and now my parents think we’re getting married.”

Curtis C. Chen: OMG I loved Better Off Ted and I love Fonda’s deepfake story. Many people underestimate the power that humans have to build in fundamental flaws that then get machine-multiplied in AI systems with amoral efficiency. Those problems often happen in hidden ways, inside software where no user can see them, and are therefore difficult to even diagnose. We need to get better at asking how these systems are built and demanding proper audits when things go wrong, and IMHO governments really need to seriously regulate tech companies. (Full disclosure: I am a former Google web apps engineer.)

Max Gladstone: We’re really talking about two related issues when we talk about AI in science fiction. The first is the actual shape “artificial intelligence” has taken so far—neural-network-based reinforcement learning as in AlphaGo, for example, or deepfakes. These tools let us point complex computer systems at a problem, like “win a game of Go” or “turn this video of Nicolas Cage into a video of Jennifer Lawrence,” and get out of the way. They’re cognitive exoskeletons, like the power loader in Aliens, only for our wills, and they’re changing the world by letting us do the things we want faster and more decisively—which then lets us want more and different sorts of things. In a way that’s the story of every tool a human being has ever built. (With some neat side effects—I do love the fact that pro-level players can now get stronger at Go than ever before in human history, because it’s possible to play a superior opponent basically on demand.) Then there’s the question of real AI—what happens when the machines with these capabilities start making decisions and interpreting the world for themselves? To my mind, that’s not a story about maker and machine; it’s a story about parents and children—like o.g. Frankenstein, I suppose. When I think about AI I’m drawn to powerful depictions of strained parenthood, with children coming into their own and facing the failures of their parents… So-called “Dad games”—Witcher 3, Bioshock Infinite—cover a lot of this territory.
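
Max’s “point a system at a problem and get out of the way” pattern is easy to see in miniature. Below is a toy sketch of the idea (hypothetical, and using tabular Q-learning on a five-state corridor rather than anything like AlphaGo’s neural networks): the only thing the human specifies is the reward, and the system discovers the behavior on its own.

```python
import random

# Five states in a row; reaching state 4 is the goal we "point the system at".
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]                     # step left or step right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.3  # learning rate, discount, exploration

for episode in range(500):
    s = 0
    while s != GOAL:
        # Explore occasionally; otherwise exploit what has been learned so far.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0  # the reward is the only human input
        q[(s, a)] += alpha * (r + gamma * max(q[(s2, b)] for b in ACTIONS) - q[(s, a)])
        s = s2

# The learned policy marches straight toward the goal: [1, 1, 1, 1]
print([max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_STATES - 1)])
```

Nobody tells the system how to reach the goal; the behavior falls out of chasing the reward, which is exactly why the choice of reward matters so much.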

 

Buy the Book: Catfishing on CatNet

Naomi Kritzer: Can we talk about those times when a computer decides it knows better than you what you need? This happens all the time with current technology—what’s it going to be like when we have actual strong AI that thinks (maybe even correctly) that it’s smarter than we are and better informed about our needs than we are?

Malka Older: This gets to the crux of the tension around AI: we want something smarter than we are, to solve our problems, but we want to control it. We don’t trust AI—no ethics, no “humanity”—but we don’t trust ourselves either—flawed, fallible, too emotional, too “human.”

Martha Wells: I think it’s frustrating enough dealing with an answering system for an airline or pharmacy that wants you to talk to it but can’t understand your accent; it’s going to be so much worse when that system is making decisions for you based on a flawed understanding of what you need.

Fran Wilde: You mean like when the online bookstore AI recs me my own novels? Or when a database gets hold of an old address and won’t let go, so all my important mail goes to a place I haven’t lived in ten years? I… don’t even want to talk about healthcare billing and AI. Elizabeth Bear’s “Okay, Glory” is one story that is sort of related, in that these systems can still be gamed all to heck. Another direction this can go, of course, is the overly helpful AI Tilly, as developed by Ken Liu in “The Perfect Match”—what if what we want is not to know what we want, and to discover it along the way?

Max Gladstone: When we say “the computer knows what you need,” though, how often is it the computer that knows, and how often is it the business development office? I don’t know anyone who would rather have an algorithmically structured news feed than a news feed that updates in reverse chronological order. But apparently algorithmic news feeds help ad conversions—or something.

Curtis C. Chen: For me, it entirely depends on the help being offered. I’m perfectly happy to let Google Maps tell me what route to take when driving, since it knows more about road closures and real-time traffic conditions than I do, and I can’t wait until self-driving cars are the default. But I will want some kind of manual override, because there will always be things in the real world that a system or its creators couldn’t anticipate during development.

 

Buy the Book: Network Effect

Martha Wells: Are there any proposed solutions, in fiction or reality, for countering the bias that an AI might pick up from social media trolls, bots, etc.? Or, alternately, does anyone know of any other examples of this happening, in fiction or reality?

Malka Older: We could ask first whether we’ve found any solution for countering this in humans. After all, if we build a better social media environment, that’s what the AI will be taking its cues from.

Curtis C. Chen: If I may put on my old-man hat for a moment, I remember when Snopes.com was the authority for fact-checking any kind of internet rumor that was going around. I suspect there’s not a lot of research being done presently on auto-debunking tools, since that kind of work involves judgment calls that are tough even for full-grown humans. But maybe that’s what future “semantic web” efforts ought to focus on.

Naomi Kritzer: I think one of the most critical pieces of this is a willingness to acknowledge that the problem exists—I’ve seen people online (mostly white people online) in complete denial of the problem of algorithmic bias. I think there are ways to counter this problem but we can’t do it if we’re committed to the idea that an algorithm is some sort of pure, untouched-by-human-prejudice sort of thing.

Fran Wilde: A team at Caltech has been working on using machine learning to identify rapidly evolving online trolling, and another at Stanford is developing tools to predict online conflict. But given what happened to Microsoft’s Tay bot in 2016—once exposed to Twitter, it went from “The more humans share with me, the more I learn” to being carted off the internet spewing profanity in less than 24 hours—this is a really good question. Everyday tools are learning from us and our usage, not just online but on our phones, and if autocorrect is any predictor, that’s a terrifying thing. Something I’d like to see is a human-AI learning team that could curate a sense for what is and isn’t biased based on context; I think this would help reduce cascading error problems.
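
Tay’s internals were never made public, so what follows is only a cartoon of the failure mode Fran describes (a hypothetical sketch, with every name invented): a bot that treats each user message as trusted training data, so whoever talks to it most and loudest shapes what it says.

```python
import random
from collections import defaultdict

class NaiveChatBot:
    """A toy bot that 'learns' by parroting phrases users feed it.
    It has no filter and no notion of bias: it simply repeats what
    it has heard, weighted by how often it has heard it."""

    def __init__(self):
        self.phrases = defaultdict(int)  # phrase -> times seen

    def learn(self, message: str) -> None:
        # Every user message is treated as trusted training data.
        self.phrases[message.strip().lower()] += 1

    def reply(self) -> str:
        if not self.phrases:
            return "The more humans share with me, the more I learn!"
        # Sample proportionally to frequency, so a coordinated group
        # of trolls can dominate the bot's output within hours.
        phrases, counts = zip(*self.phrases.items())
        return random.choices(phrases, weights=counts, k=1)[0]

bot = NaiveChatBot()
for msg in ["hello friend", "hello friend", "something awful"]:
    bot.learn(msg)
print(bot.reply())  # usually "hello friend" -- until the trolls outnumber the friends
```

A human-AI curation team of the kind Fran imagines would sit between the incoming messages and learn(), deciding what the bot is allowed to absorb.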

Max Gladstone: I’m a relatively new parent, so I admit that these days I see everything through the lens of parenting. But here, we’re really talking about a question of parenting. Anyone who looks at the world sees that it is unjust, biased, and often cruel. But most of us don’t think that’s the way the world should be. I wonder if we’ll be able to teach young algorithms to tell the difference between is and ought.

 

Buy the Book: Infomocracy

Malka Older: We have an (understandable) tendency to anthropomorphize AI, imagining the intelligence as just like us—even wanting to be more like us—only faster. How is AI going to be alien to humans? How can we conceive of a significantly different intelligence? Are there any books/movies/shows that do this well?

Curtis C. Chen: The movie Her was mostly about other aspects of AI, but (SPOILERS) I did like how, at the end, the AIs were portrayed as having their own culture and concerns entirely separate from any human affairs.

Martha Wells: That’s why I don’t like the trope of the AI that wants to be human, when you think about what an AI would be giving up to have its consciousness squished down into a human body. I like the way this is handled in Ancillary Justice, where Breq has no choice, and has to deal with losing its ship-body and the multiple perspectives of its ancillaries.

Naomi Kritzer: As sort of an interesting reverse of this trope, Ada Palmer’s Terra Ignota series has humans who were essentially raised from infancy to be extremely powerful computers; they are both human, and very alien. It’s a deeply controversial practice in the world of the book, but the people on whom it was done all defend their lives as better, not worse, than those of other people. (I haven’t read the third book yet, so it’s possible there were further revelations about the set-sets I haven’t gotten to.)

Fran Wilde: I love Curtis’ example. Also, much of what we find amusing or threatening goes back to the ways we interact with the world (similarly, see: our most popular four-letter words). AI, without these physical referents and threats, will only have inferred meaning there. I think writers like Greg Egan and William Gibson have touched on some of the potential strangeness that could ensue, but I also suspect that whatever it is, we won’t be able to recognize it—at least at first—because it may be kind of a Flatland problem: how does a sphere intersect with a square? Only at certain points.

Max Gladstone: How would that sort of real AI—an entity born on the sea of information, something that uses our silicon networks as a substrate—even know we exist? Humans spent hundreds of thousands of years not understanding electricity or chemistry, and when it comes to understanding why and how our brains do the things they do, we’re still more or less at the venturing-into-the-dark-with-torches-and-a-pointy-stick stage of development. We anthropomorphize AI because, I think, inheritance and continuity are among our main interests as a species. You find titanomachies everywhere. When you start asking what an AI would really be like, I think you have to be ready to abandon many of your preconceptions about consciousness.

 

Buy the Book: Updraft

Fran Wilde: Extrapolation: What might AI look like in the future that we don’t expect now? What if they have a sense of humor… or not? Will all our office in-jokes become literal?

Malka Older: I wonder about emotions. Fictional representations tend to portray them as a sort of final hurdle to becoming human—or, as with Marvin the paranoid android, a one-note effort. But we keep learning about the importance of emotions in our own, for lack of a better word, processing. AI might find them useful too.

Max Gladstone: I’m waiting for the day when an AI comedian pulls out the equivalent of AlphaGo’s Game 2 Move 37 against Lee Sedol: an ineffably hilarious joke, one that cracks up everyone in the room and nobody can explain why.

Curtis C. Chen: For my money it’s past time to retire the “AIs have no sense of humor” trope. I know humans who don’t have a sense of humor, so that’s not a good metric for personhood. If we do develop AI systems with more fully formed personalities, I would expect to see things along the lines of cultural differences—similar to how people from non-US countries don’t understand American idiosyncrasies like all-you-can-eat buffets or strip mall liquor stores. Would a non-biological entity understand all our ingrained references to food, eating, or even smells?

Naomi Kritzer: One of the things that strikes me—I think on some level we all assume that even with very good AI, we will always be able to tell the difference between a real person and a technological imitation. In fact, computers have been passing the Turing Test (at least with some humans) since the era of ELIZA, which was not even a particularly good fake.

Martha Wells: For the past several years I’ve seen people online having arguments with very simplistic bots, so like Naomi, I don’t like our chances of being able to tell the difference between a person and a more sophisticated AI.
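
Naomi’s ELIZA point is easy to make concrete. Here is a minimal sketch in the spirit of Joseph Weizenbaum’s 1966 program (these few reflection rules are invented for illustration, not his actual script): it understands nothing, yet this kind of keyword matching was enough to convince some users they were talking to a person.

```python
import random
import re

# A few ELIZA-style rules: match a keyword pattern, reflect it back as a question.
RULES = [
    (re.compile(r"i need (.+)", re.I),
     ["Why do you need {0}?", "Would getting {0} really help you?"]),
    (re.compile(r"i am (.+)", re.I),
     ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (re.compile(r"my (.+)", re.I),
     ["Tell me more about your {0}."]),
]
FALLBACKS = ["Please go on.", "How does that make you feel?", "I see."]

def respond(message: str) -> str:
    """Reflect the user's own words back; no understanding required."""
    for pattern, templates in RULES:
        match = pattern.search(message)
        if match:
            return random.choice(templates).format(match.group(1).rstrip("."))
    return random.choice(FALLBACKS)

print(respond("I am worried about AI"))  # e.g. "Why do you think you are worried about AI?"
```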

 

Buy the Book: Waypoint Kangaroo

Curtis C. Chen: What are your thoughts on The Campaign to Stop Killer Robots?

Max Gladstone: I was really worried this was going to be one of those Effective Altruist orgs that go off on the wacky utilitarian tangent where the only moral things to do with time and resources are building rocket ships and stopping basilisk-style AIs, since that would alleviate infinite suffering by saving the human race, so we shouldn’t be worried about, say, civil rights or clean drinking water or climate change. (Which logic is part of the reason Isaiah Berlin argues against conceiving of ideal forms of government… anyway.) But this seems like an important org with a good cause. Though I’d argue that a lot of ‘The problem’ on their website is already raised by current drone warfare tech.

Martha Wells: I think it’s an issue that’s going to become even more urgent as time goes on.

Naomi Kritzer: An international treaty against fully autonomous weapons seems like a self-evidently good idea—the contemporary equivalent to banning biological weapons.

Fran Wilde: I think outsourcing the moral burden of pulling the trigger is already happening with drones… so outsourcing the decision to outsource is another short, terrible hop away. And I think “the algorithm did it” is already being used as a defense. Those are kind of stops on the way to Skynet/Terminator territory, at least in my mind, so a group that raises awareness on the topic is a pretty good idea.

 

Buy the Book: The Ruin of Angels

Malka Older: How do you see the tension between specific-use AI and generalized, we-don’t-know-what-it-will-do-for-us-let’s-just-see-how-smart-we-can-make-it AI playing out into the future? Fictional examples?

Max Gladstone: I’m trying to remember where I first encountered the concept of ‘governors’ on AI—tools used to stop purpose-built systems from acquiring that generalized intelligence. Maybe in MJ Locke’s Up Against It? Oh, and this is a plot element in Mass Effect of course.

Curtis C. Chen: My personal impression (which may be wrong) is that it seems like most cautionary tales about AI are about general-purpose systems that magically achieve godlike sentience and can immediately control all other technology. Um, that’s not how anything works? I’m more interested in the idea, which hasn’t been explored very much in fiction AFAIK, of specific-use AIs that have to deal with their own blind spots when confronted with generalized problems. Which, of course, would be similar to how humans often have trouble proverbially walking a mile in another person’s shoes.

Naomi Kritzer: One aspect of specific-use AI that lends itself to fiction is the problem of unintended consequences. Problems that no one saw coming, of course, but also new applications that are found, and weaknesses that are exploited. David D. Levine’s short story “Damage” tells the story of a very specific-use AI (the brain of a warship, intended to obey her pilot) that acts independently in ways that were not intended by her creators.

Fran Wilde: I suspect budgets for push-some-buttons, see-what-happens development beyond particular-use AI are pretty tight, so the restrictions on buckshot development (except in the research lab) may be financial. That said, the Librarian in Snow Crash was pretty darn Swiss-Army-knife useful (for plot reasons), and—if you look at the protomolecule from The Expanse as a rogue AI with an unstated mission—researchers just kind of dropped that on humanity to see what would happen. So, I suspect our desire for a one-AI-to-rule-them-all is still there, even if our capacity to fund that development isn’t.

 

Curtis C. Chen: Is there a particular AI application that you think would be spectacularly useful, but that nobody is currently working on as far as you know?

Malka Older: I’ve said elsewhere that AI is the new bureaucracy—impersonal, impervious to blame, mystifying if you don’t have access to see inside the black box—so I’d like one that effectively deals with the old bureaucracy, please. Let it figure out the phone menu and talk to the customer service representative and be recorded for training purposes.

Max Gladstone: If anyone’s working on an AI that would help me meal plan, I want to know about it.

Naomi Kritzer: The thing that strikes me periodically is that for all that the computers are tracking our every move, sometimes in very creepy ways, they’re not using that information in the ways that would actually make my life more convenient. I grocery shop the same day every week, at the same shopping plaza, and my Android phone is well aware of this fact, and yet there’s a liquor store at that shopping plaza that is not pushing coupons to my phone in a bid to get me to add it to my weekly routine. Why not? That would be creepy but useful instead of just creepy.

Fran Wilde: I’d like an AI that helps curate my old photos, books, and music so I can find things when I want them, and generally enjoy a few moments of memory serendipity without too much effort. Kind of like those Snapfish emails from 14 years ago, but more tailored to my mood and sensibilities.

 

Machina is a Serial Box original—join the future race to Mars here & now

 


Fran Wilde is the creator and co-author of Machina, a race to send autonomous robots to space. Her novels and short fiction have won the Nebula, Compton Crook, and Eugie Foster awards, and have been finalists for four Nebulas, two Hugos, two Locii, and a World Fantasy Award. She writes for publications including The Washington Post, The New York Times, Asimov’s, Nature Magazine, Uncanny Magazine, Tor.com, GeekMom, and io9. Fran’s double master’s degrees in poetry and in information architecture and interaction design mean she’s a card-carrying code poet. She is Director of the Genre MFA at Western Colorado University. You can find her at her website.

Naomi Kritzer has been writing science fiction and fantasy for twenty years. Her novelette “The Thing About Ghost Stories” was a finalist for the 2019 Hugo Award; her short story “Cat Pictures Please” won the 2016 Hugo and Locus Awards and was nominated for the Nebula Award. Her YA novel Catfishing on CatNet (based on “Cat Pictures Please”) came out from Tor Teen in November 2019. She lives in St. Paul, Minnesota with her spouse, two kids, and four cats. The number of cats is subject to change without notice.

Martha Wells has written many fantasy novels, including The Books of the Raksura series (beginning with The Cloud Roads), the Ile-Rien series (including The Death of the Necromancer) as well as science fiction (The Murderbot Diaries series), YA fantasy novels, short stories, media tie-ins (for Star Wars and Stargate: Atlantis), and non-fiction. She was also the lead writer for the story team of Magic: the Gathering‘s Dominaria expansion in 2018. She has won a Nebula Award, two Hugo Awards, an ALA/YALSA Alex Award, two Locus Awards, and her work has appeared on the Philip K. Dick Award ballot, the BSFA Award ballot, the USA Today Bestseller List, and the New York Times Bestseller List.

Once a Silicon Valley software engineer, Curtis C. Chen (陳致宇) now writes speculative fiction and runs puzzle games near Portland, Oregon. His debut novel Waypoint Kangaroo (a 2017 Locus Awards Finalist) is a science fiction spy thriller about a superpowered secret agent facing his toughest mission yet: vacation. Curtis’ short stories have appeared in Playboy Magazine, Daily Science Fiction, and Oregon Reads Aloud. He is a graduate of the Clarion West and Viable Paradise writers’ workshops. You can find Curtis at Puzzled Pint on the second Tuesday of most every month. Visit him online.

Max Gladstone has been thrown from a horse in Mongolia and nominated for the Hugo, John W. Campbell, and Lambda Awards. A narrative designer, writer, and consultant, Max is the author of the Hugo-nominated Craft Sequence (starting with Three Parts Dead and most recently continuing with Ruin of Angels), the intergalactic adventure Empress of Forever, and, with Amal El-Mohtar, the time-travel epistolary spy-vs-spy novella This Is How You Lose the Time War. He has written games, comics, short fiction, and interactive television. He is the lead writer of the fantasy procedural series Bookburners, and the creator of the Eko Interactive series Wizard School Dropout, directed by Sandeep Parikh.

Malka Older is a writer, aid worker, and sociologist. Her science-fiction political thriller Infomocracy was named one of the best books of 2016 by Kirkus, Book Riot, and the Washington Post. With the sequels Null States (2017) and State Tectonics (2018), she completed the Centenal Cycle trilogy, a finalist for the Hugo Best Series Award of 2018. She is also the creator of the serial Ninth Step Station, currently running on Serial Box, and her short story collection And Other Disasters came out in November 2019. Named Senior Fellow for Technology and Risk at the Carnegie Council for Ethics in International Affairs for 2015, she is currently an Affiliated Research Fellow at the Center for the Sociology of Organizations at Sciences Po, where her doctoral work explored the dynamics of post-disaster improvisation in governments. She has more than a decade of field experience in humanitarian aid and development, and has written for The New York Times, The Nation, Foreign Policy, and NBC THINK.
