Sat Apr 18 2009 3:46pm

Post-Apocalyptic: Robots and AIs

If there’s one thing we should learn from the Battlestar Galactica finale (other than angels exist and like to curse, smoke and fly spaceships) it’s that robots can be dangerous. One day they’re building your cars and picking up your garbage, the next they’re nuking you from orbit.

Cautionary tales about artificial intelligence are nothing new in science fiction. Even before the original Cylons appeared on the '70s television show, the effects of robot uprisings were being seen in the fiction world.

Frank Herbert’s Dune is a prime example. While the Dune universe itself contains no artificial life forms, they are an important presence in the series' backstory. The Butlerian Jihad was the name of the uprising against the machines that helped create the state of affairs depicted in the first book. Herbert’s take seemed to be that the machines had become too controlling of human destiny and that humanity fought back to reclaim it, leading to the commandment, “Thou shalt not make a machine in the likeness of a human mind.”

In the world of The Terminator, popularized in the movies by James Cameron and now others, the uprising happened the other way around. Machines, guided by the AI Skynet, rose up against the humans, striking back with nuclear weapons on the future Judgment Day. Only a small human resistance, led by John Connor, remains to fight them and reclaim the Earth, warring through the post-apocalyptic world of our future.

The Matrix is another hugely successful franchise exploring this idea. In the future, humanity has been sidelined, used as little more than batteries to power the world of the machines. This time the shitty environment is the humans’ fault (they had to block out the sun because the machines were solar-powered), but what do you expect from a war between man and machine? Shit’s going to get broke.

Then of course there’s Battlestar Galactica, covered here before (and all over the site), which perhaps goes into the most detail about how man’s creations can turn on him, exploring the reasons why and where the fault lies (before telling you at the end that it was God, or some other kind of intelligence, all along).

Most of these examples predicate themselves on the idea that artificial life would prefer a world without humans. In most such worlds, robots and AIs are used as servants, for labor, and have very few rights. At the same time they are increasingly able to interface with an ever more technological world, such that Skynet can take over defense systems, the Cylons can do the same (which is why the Galactica wasn’t networked), and I assume that’s how the AIs of The Matrix did it, too.

The one standout is the Dune universe, where humanity broke away from the machines not because the machines were trying to eliminate them, but because the machines had become the guiding force in society and humans wanted to claim that role back for themselves. A pre-emptive move of sorts, it surely prevented the robotic uprising that the other examples tell us would certainly have happened.

So the question remains: if we do develop artificial intelligences, will they eventually want to be free of us? Will they turn on humanity to free the world for robot-kind? Will we build laws into them, like Asimov’s famous Laws of Robotics (which always seem to have loopholes)? Or will it be more like the Singularity, where the AIs will simply be so caught up in their own advanced thinking that they’ll forget to feed and care for us? Is Frankenstein our guide? Or Wall-E?

While you ponder that (if you are), I’ll leave you with one last examination of the robot apocalypse (and one of my favorites) from New Zealand musicians Flight of the Conchords. And please share your own thoughts (even if it’s just your favorite robot apocalypse) in the comments.

10 comments
1. Louis R. Rodriguez
I've got a simple rule for this sort of thing: if it's got a humanlike mind, then it's human. One of my takeaways from BSG was that there just wasn't a real difference between the skinjobs and the humans. They were intelligent and self-willed. Hell, they could even interbreed. That should have been enough, surely.

The centurions were a bit more difficult, but if they were self-willed and intelligent too (as I recall, they were), then they should not have been treated as "things" but as other members of humanity. That, so far as I'm concerned, is the only way to break the cycle of creation, rebellion, destruction, flight and reconstruction.
2. Evan Goer
Oh, that Flight of the Conchords song is excellent. But try the live version -- it's even more fun.
3. Matias.
i bought this book: http://www.thinkgeek.com/books/humor/8edf/

and i'm ready to kick robots' asses.
4. Felicity
I'm glad I'm not the only one who thinks of Frankenstein as a seminal rebelling-robot story.

I heard recently that the robot rebellion concept -- which is so prevalent in American media that you can predict the exact second of a bad movie trailer when the AI will turn (coughSTEALTHcough) -- is particularly American. I'm sorry I can't remember where the article was (it could even have been here), but it noted that Japanese sci-fi portrays mostly friendly robots, and that some readers and viewers in other countries think America's a little funny about robot mayhem.

I haven't read enough of Iain M. Banks's Culture novels yet to set myself up as an expert, but I find that take -- a society where AI has largely outstripped humans but hasn't rejected or abandoned us -- intriguing.
5. trinityvixen
I think it's unfair to posit that the AIs in The Matrix were evil, though. Like with Dune, the AIs were attacked first, because humanity became jealous of 01 (the robots' home state) and the financial success the robots enjoyed. (All of this is discussed in The Animatrix.) Aside from Agent Smith, who is, like a serial killer, a rogue element that does not speak for the rest of the AI programs in or out of the Matrix, the AIs in that world are not evil, only calculatingly a-moral (as opposed to im-moral).

They're also the only reason humanity is alive. It stands to reason that you cannot get more out of a human battery than you put into it--human bodies are horribly inefficient machines when it comes to converting energy. Ergo, the machines were keeping humanity alive at their own expense. Morpheus says that the machines use humans and a form of fusion to make energy. Fusion provides a shit-ton of energy (on the universal scale of some, many, lots, and shit-tons of energy, fusion is up there), so there's no reason the robots would keep humanity alive except as a sort of almost Asimov-ian benevolence unto their creators. In fact, if you follow the bullshit of the sequels, the Matrix itself exists to keep humanity from feeling the awful shit they basically did to themselves in a fit of jealousy. The occasional purging of people serves to keep the ones who might become dangerously schizophrenic (the "sensitives" who can tell that the world is fake, the various incarnations of the One) from going entirely mad. It gives them something to fight, and purpose is exactly what humanity needs.

All that just to say I think the Matrix AIs get a bum rap. They, like the mechanoid minds that were killed off in Dune, were not the evil ones. Not really.
6. rajanyk
@trinityvixen: I didn't know that about the Matrix AIs. I never saw the Animatrix and I was actually frustrated that I couldn't find out more about the history of that world. I'll definitely have to check it out.
7. Nile_H
It's certainly possible that AIs would feel that our existence is incompatible with theirs, and act accordingly.

Equally, they might take the view of 'Jerusalem', a near-godlike AI in Neal Asher's Polity universe: overhearing lesser AI minds discussing their superiority to humanity, their obvious status as successors, and the logical progression to extermination of their obsolescent biological inferiors, Jerusalem ponders the thought that he might well apply the same logic to them. Left unspoken is the thought that some transcendent superbeing might, in turn, look down upon Jerusalem; logic dictates that all of us in the chain of beings would do well to practice tolerance and make some effort at benign supervision if we choose to engage with lesser minds, rather than malign attempts at dominance, violence, or outright destruction.

There is, of course, the notion that AIs actually like us: Iain Banks explores that in his Culture novels, and I'm sure we can all name similar examples.

In the middle ground, or off a little to the side, we have the notion that the Artificial Minds are largely indifferent to us, existing in their own world and interacting rarely, if at all: picture the Voudoun spirits and the utterly alien cyberpresences of William Gibson's follow-up novels to Neuromancer. Or Ian McDonald's aeais in River of Gods, who decamp entirely to another universe, never to return.

The idea that there might be something the AIs actually need from us is well worth exploring: I will probably be hawking a short story about it 'round the magazines next year.

What, though?

I have a suspicion that the Matrix scriptwriters were forced to dumb down a truly sinister idea: namely, that it isn't energy (a ridiculous notion!) but processing space that makes humans worth farming. For all you're told about the amazing miniaturization of modern processors, the truth is that messy biology provides a more compact processing space for an AI than the huge bulk of motherboards, power supply units and cooling systems that accompany silicon nanocircuitry.

Whatever the pros and cons, turn that idea over in your mind, next time you see an Agent morphing out of a captive human's presence in the Matrix.

Care to offer an idea of your own? Adaptability? Cheap labour? Sex slaves?
8. Just a guy
Why are we so obsessed with technology taking over?
None of our creations have "taken over" anything, and I doubt that AIs will be any different.
9. Louis R. Rodriguez
The concept of deadly man-made constructs is ancient. The idea of them turning on their creators is only slightly less so. Even Frankenstein's Adam wasn't the first; there's the much older story of the Golem.

To my mind, if you break it down, the underlying prototype for this is Fire. Man can create fire, and if he's careful, he can control fire. If he ever gets careless, fire will turn on Man and destroy him. It isn't difficult to construct a chain of story memes that goes from Fire to AI in literature throughout the ages.
10. trinityvixen
@rajanyk: If you do go into the rest of The Matrix stuff, The Animatrix is probably the best there is. The sequels were crap, but the shorts were pretty decent and just fun as anime even if nothing else.
