
Intelligence, algorithms, and anthropomorphism

Why is it we don’t have artificial intelligence? The wizards have talked about AI since the dawn of digital computing. That’s been more than a half century.

Is AI a mere trope? I think not, but the jury remains out.

Oh, many technologies are (or were) hyped as AI: expert systems, chess-playing programs, and language translators. Are any of them really intelligent? By a layperson’s—or a dictionary’s—definition of intelligence, surely not. All these programs are confined to very narrow specialties. None shows common sense: an understanding of the world. Even in their designated narrow specialties, these programs can exhibit breathtaking ineptitude. Think of some of the more amusing online language translations you’ve seen.

Note that challenges once seen as artificial intelligence tend, once solved, to be demoted. That is, we keep moving the (figurative) goal posts. Take chess playing: once programmers wrote software to defeat chess masters, many discounted the success as mere algorithm. Intuitively, that’s an acknowledgment that the chosen software solution—brute-force assessment of many options, looking ahead a vast number of moves—differs fundamentally from how a human expert plays. It’s as though “artificial” intelligence must ape (pun intended) “real” intelligence in addition to solving the problem that—in nature—only human intelligence can imagine or solve.
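(A concrete aside, not a method from the column: the "mere algorithm" in question is exhaustive look-ahead, essentially minimax search over a game tree. The toy game below, in which players alternately take one or two stones and whoever takes the last stone wins, stands in for chess only to keep the sketch short and runnable; every name in it is invented for illustration.)

    # Minimal sketch of brute-force look-ahead (plain minimax), assuming a toy
    # take-away game in place of chess: players alternately take 1 or 2 stones,
    # and whoever takes the last stone wins.

    def legal_moves(stones):
        """Moves available to the player on turn: take 1 or 2 stones."""
        return [take for take in (1, 2) if take <= stones]

    def minimax(stones, maximizing):
        """Score a position by exhaustive look-ahead: +1 if the root player wins, -1 if not."""
        if stones == 0:
            # No stones remain: the previous player took the last one and won.
            return -1 if maximizing else 1
        scores = [minimax(stones - take, not maximizing) for take in legal_moves(stones)]
        return max(scores) if maximizing else min(scores)

    def best_move(stones):
        """Choose the move with the best guaranteed outcome for the player on turn."""
        return max(legal_moves(stones), key=lambda take: minimax(stones - take, maximizing=False))

    print(best_move(7))  # prints 1: taking one stone leaves the opponent a lost position

Nothing in that search understands stones, chess, or anything else; it simply grinds through every continuation, which is exactly why the success gets discounted as mere algorithm.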

It would help if human experts agreed on the meaning of such basic terms as intelligence, consciousness, or awareness. They don’t. It’s hard to build something that’s incompletely defined. Maybe AI is like obscenity: we’ll know it when we see it.

Most discussions of AI, and of when we will achieve it, eventually fall back on the Turing test: if a person in isolation can't tell whether she's talking (or exchanging text messages) with a human or with a computer, then that computer qualifies as an artificial intelligence.

What kind of anthropomorphic standard is that?

This is an SFnal community, and we’re accustomed to thinking about meeting intelligent aliens. Suppose we encounter space-traveling aliens and one of them communicates with us via text messages. Either the alien passes the Turing test, or it doesn’t. Either way, what does it mean?

If the alien fails—if we can tell it’s an alien!—are we to conclude, despite its ability to design and build spaceships, that it is unintelligent? That hardly seems right. Alternatively, suppose the alien passes. It reads and writes just like one of us. In SFnal literary criticism, we would call such aliens, disparagingly, “humans in rubber suits” and find them unbelievable. We don’t expect a creature native to a very different environment to end up acting just like one of us.

Unless, apparently, that alien environment is inside a computer. Why in the world(s) should we consider the Turing test useful for characterizing computer-resident intelligences? In my novel Fools’ Experiments, the hero wrestles with this dilemma. He opines about the Turing test:

What kind of criterion was that? Human languages were morasses of homonyms and synonyms, dialects and slang, moods and cases and irregular verbs. Human language shifted over time, often for no better reason than that people could not be bothered to enunciate. “I could care less” and “I couldn’t care less” somehow meant the same thing. If researchers weren’t so anthropomorphic in their thinking, maybe the world would have AI. Any reasoning creature would take one look at natural language and question human intelligence.

(And in the book, we do get to artificial intelligence—but not by traveling any road the mainstream AI community has considered.)

Herewith Lerner’s test: if an artificial thing understands how to perform many tasks that, in nature, only a human can understand—no matter the means by which the thing analyzes those tasks—that thing is an artificial intelligence.

We’ll leave consciousness and self-awareness for another day.


Edward M. Lerner worked in high tech for thirty years, as everything from engineer to senior vice president. He writes near-future techno-thrillers, most recently Fools’ Experiments and Small Miracles, and far-future space epics like the Fleet of Worlds series with colleague Larry Niven. Ed blogs regularly at SF and Nonsense.
