When One Trick Pony writer/illustrator Nathan Hale did his research on how to draw a futuristic robot, he definitely shied away from giant, cool-looking mechas, because, in his words, they don’t seem feasible. After all, he reasoned, science fiction couldn’t even get a phone right: After 100 years of movies and television showing futuristic communications, not a single one predicted the small, black rectangle that’s become so ubiquitous.
“Like the stupid holographic chalkboard Tom Cruise uses [in Minority Report]—no one wants that!” he joke-raged at the NYCC panel It’s Technical: Our Future with Robots and More. “Nothing in the Star Wars universe is usable—that’s all garbage stuff. R2-D2 wouldn’t be able to get into the [NYCC] convention center!”
“I write giant, unbelievable robots,” Sleeping Giants author Sylvain Neuvel wryly interjected. Which illustrates exactly the reasoning behind the choice of panelists discussing our relationship to tech in the present and the future: They all have different takes, from the aforementioned robot debate to whether we’re more likely to travel via self-driving cars or virtual reality.
In 2017, we live in a unique era, moderator Maryelizabeth Yturralde explained: We’re constantly catching up to the technology of science fiction, yet we haven’t hit the quintessential “flying cars” future, either. But our advancements put us in the beneficial position of being able to anticipate future technology—as well as the ethical dilemmas it inspires.
A fun icebreaker was to have each panelist describe the futuristic tech that they would love to see. The group seemed to be split pretty evenly between biological and technological. Neuvel mentioned a recent MIT triumph concerning molecules that can target cells infected with a virus and perhaps someday cure all viral infections. Kirsten Miller, whose YA sci-fi thriller Otherworld (co-written with Jason Segel) explores virtual realities, praised the “absolutely astounding” advances in prosthetics, including “the fact that 3D printers are going to be changing people’s lives, changing people’s faces, helping people move and live their lives in ways that weren’t possible years ago.” Adam Christopher, author of the Ray Electromatic Mysteries, agreed about medical advances but added that “a lot of what’s happening now is so early, it’s very easy for the media to latch on to something and claim it’s gonna be this miracle cure” when it might still be decades before it becomes more useful.
Hale went much more sci-fi, albeit grounded in real-world context: personal force fields, in light of talk in the news about the difficulty of changing gun control laws. And while Autonomous author Annalee Newitz confessed that “I’m still excited about self-driving cars,” it wasn’t just because they look cool: “When we actually do get them, when they’re actually good enough to be driving on the streets, that will mean we’ll have sufficient advances in machine learning that we’re gonna have some interesting AI algorithms that will be applied to other technologies.” Such as transporting people who wouldn’t otherwise be able to travel from their homes to bus stops and train stations. “I just want really good public transit,” she said.
A recurring theme of the panel was how technology changes our interactions with other people, from how we perceive them to how we communicate. Newitz mentioned Sarah Gailey’s recent essay about Blade Runner and how it raises the question of who gets to be a person.
A person, Newitz clarified, as opposed to a human: “Because I think you can be a person without being a human.” Considering that Autonomous establishes a future in which human-equivalent AIs are indentured for up to ten years, and it’s an “honor” for humans to also be indentured, she’s been considering these lines of thought for some time.
How we treat AIs can be predicted by how we treat people online, Miller theorized, citing the “veil of anonymity” and conversations that devolve into shouting and name-calling. “How is that going to work,” she asked, “when you’re dealing with something that isn’t necessarily biological on the other end?”
Christopher suggested that society is approaching a plateau with regard to social media—even, or especially, with the younger generations who have grown up with every moment recorded for posterity. However, Newitz countered that “talking about the future of social media is like talking about television”—as if it’s one big blob instead of disparate parts. She predicted a diversification in platforms, with entirely new networks cropping up, each with their own social norms and acceptable behaviors, further fragmenting the ways that people interact with one another in those spaces.
Hale’s ideal futuristic robot hero, it turned out, was a pony because, as he explained, it allowed him to completely sidestep the uncanny valley: “We all have [had] a toy pony at some point in our lives that we all felt really good about.”
A fascinating choice, given that Newitz argued at another point in the panel that questions about threatening AIs have been framed completely wrong: “We’re not going to invent an AI and have it in a box, then plug it into something. If we ever get anything that is a living artificial intelligence, it’s going to be emergent, and it’s going to emerge out of human data. It’s not going to emerge out of some godlike data over here that’s unlike humans. It’s going to be trained on all of our thoughts, all of our data, all of our information. It’s not gonna be a super intelligence; it’s gonna be as neurotic and screwed-up as we are.” Instead of looking for an AI like the one on Person of Interest, she said, “we should be looking for a cat—something like an animal emerging out of our tech.”
Though these authors live in 2017 and write sci-fi with perhaps greater self-awareness about the future than their forebears had, the question remains: What if they get it wrong?
“If the story’s good,” Hale said, “I don’t care if I get the technology wrong!”
Christopher, whose research for the Ray Electromatic Mysteries was entirely retro, said that while it was easy to laugh at what the Golden Age sci-fi writers got wrong, they were still writing it for the time. “In 20, 30, 40 years’ time,” he said of the genre’s current offerings, “it’s gonna be just as laughable.”
The bigger issue, Miller said, is posing the ethical questions that need to be asked. Newitz agreed: “Science fiction is always about the present. I can’t predict the future, but I think it’s interesting to look at some of the debates we have in the real world as examples of this.” For instance, she said, talk of AI safety mechanisms and how to limit AI before it comes to life can be applied to a much more pressing issue: “These are the conversations we ought to be having about gun control, but we’re having them about AI. Maybe this is a safe way to have these conversations,” she said, suggesting that at some point in the future, “a light bulb will go on over people’s heads.”