Wed Oct 21 2009 11:45am

Intelligence, algorithms, and anthropomorphism

Why is it we don’t have artificial intelligence? The wizards have talked about AI since the dawn of digital computing. That’s been more than a half century.

Is AI a mere trope? I think not, but the jury remains out.

Oh, many technologies are (or were) hyped as AI: expert systems, chess-playing programs, and language translators. Are any of them really intelligent? By a layperson’s—or a dictionary’s—definition of intelligence, surely not. All these programs are confined to very narrow specialties. None shows common sense: an understanding of the world. Even in their designated narrow specialties, these programs can exhibit breathtaking ineptitude. Think of some of the more amusing online language translations you’ve seen.

Note that challenges once seen as artificial intelligence tend, once solved, to be demoted. That is, we keep moving the (figurative) goal posts. Take chess playing: once programmers wrote software to defeat chess masters, many discounted the success as mere algorithm. Intuitively, that’s an acknowledgment that the chosen software solution—brute-force assessment of many options, looking ahead a vast number of moves—differs fundamentally from how a human expert plays. It’s as though “artificial” intelligence must ape (pun intended) “real” intelligence in addition to solving the problem that—in nature—only human intelligence can imagine or solve.
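To make that concrete, here’s a minimal sketch of the brute-force style of play (a toy in Python, emphatically not Deep Blue’s code), applied to the simple game of Nim, in which players alternate taking one to three stones and whoever takes the last stone wins. The program tries every legal move and looks ahead to the end of the game; no intuition anywhere in sight.

    def best_move(stones):
        """Return (move, score) for the player to move; +1 is a forced win."""
        moves = [t for t in (1, 2, 3) if t <= stones]
        if not moves:
            return None, -1  # no stones left: the player to move has lost
        # Assess every option, looking ahead to the end of the game;
        # the opponent's best outcome is our worst, hence the negation.
        scored = [(t, -best_move(stones - t)[1]) for t in moves]
        return max(scored, key=lambda ts: ts[1])

    print(best_move(10))  # (2, 1): taking 2 stones forces a win

Nothing in there resembles how a person reasons about a game; it is exhaustive bookkeeping, which is exactly why the chess victories got demoted to “mere algorithm.”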

It would help if human experts agreed on the meaning of such basic terms as intelligence, consciousness, or awareness. They don’t. It’s hard to build something that’s incompletely defined. Maybe AI is like obscenity: we’ll know it when we see it.

Most discussions of AI and when we will achieve it eventually fall back on the Turing test: if a person in isolation can’t tell whether she’s talking (or exchanging text messages) with a person or a machine, then the machine qualifies as an artificial intelligence.
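The protocol itself is easy to sketch (a hedged toy; the judge, human, and machine below are caller-supplied functions, none of them from Turing):

    import random

    def turing_test(judge, human, machine, questions, trials=100):
        """Fraction of machine trials in which the judge guesses 'human.'"""
        machine_trials = fooled = 0
        for _ in range(trials):
            hidden_is_machine = random.random() < 0.5  # coin-flip the hidden party
            respond = machine if hidden_is_machine else human
            transcript = [(q, respond(q)) for q in questions]
            if hidden_is_machine:
                machine_trials += 1
                if not judge(transcript):  # judge says "human"
                    fooled += 1
        return fooled / max(machine_trials, 1)

Notice what the sketch makes explicit: the entire criterion lives inside judge, a human evaluator. Nothing in the protocol defines intelligence itself.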

What kind of anthropomorphic standard is that?

This is an SFnal community, and we’re accustomed to thinking about meeting intelligent aliens. Suppose we encounter space-traveling aliens and one of them communicates with us via text messages. Either the alien passes the Turing test, or it doesn’t. Either way, what does it mean?

If the alien fails—if we can tell it’s an alien!—are we to conclude, despite its ability to design and build spaceships, that it is unintelligent? That hardly seems right. Alternatively, suppose the alien passes. It reads and writes just like one of us. In SFnal literary criticism, we would call such aliens, disparagingly, “humans in rubber suits” and find them unbelievable. We don’t expect a creature native to a very different environment to end up acting just like one of us.

Unless, apparently, that alien environment is inside a computer. Why in the world(s) should we consider the Turing test useful for characterizing computer-resident intelligences? In my novel Fools’ Experiments, the hero wrestles with this dilemma. He opines about the Turing test:

What kind of criterion was that? Human languages were morasses of homonyms and synonyms, dialects and slang, moods and cases and irregular verbs. Human language shifted over time, often for no better reason than that people could not be bothered to enunciate. “I could care less” and “I couldn’t care less” somehow meant the same thing. If researchers weren’t so anthropomorphic in their thinking, maybe the world would have AI. Any reasoning creature would take one look at natural language and question human intelligence.

(And in the book, we do get to artificial intelligence—but not by traveling any road the mainstream AI community has considered.)

Herewith Lerner’s test: if an artificial thing understands how to perform many tasks that, in nature, only a human can understand—no matter the means by which the thing analyzes those tasks—that thing is an artificial intelligence.

We’ll leave consciousness and self-awareness for another day.


Edward M. Lerner worked in high tech for thirty years, as everything from engineer to senior vice president. He writes near-future techno-thrillers, most recently Fools’ Experiments and Small Miracles, and far-future space epics like the Fleet of Worlds series with colleague Larry Niven. Ed blogs regularly at SF and Nonsense.

19 comments
Josh Kidd
1. joshkidd
I think that the distinction you are trying to make here is between strong AI and weak AI. In many ways, a chess-playing computer is artificially intelligent. You are right, though: it is not what we think of when we think of AI.

My personal thought is that we don't have a clear enough idea yet of what it means for an intelligence to be embodied. There's an interesting book that explores this called Out of Our Heads: Why You Are Not Your Brain by Alva Noë.
Brian Kaul
2. bkaul
I think one advantage the Turing test has is broad applicability - by the "Lerner test," any device that's programmed to solve a sufficient number of specific problems would pass as "intelligent" even if totally unable to adapt to other situations and analyze new problems. If a device made by humans is programmed to be able to perform a task that is normally understood only by humans, that doesn't imply that the device is intelligent, but that the humans who created it were. If, on the other hand, a device is able to, like humans, analyze new situations which it hasn't encountered before, and generate/obtain the knowledge of how to perform the appropriate tasks to solve the problems presented, without human interference, then perhaps it should be considered intelligent.
Edward M. Lerner
3. EdwardMLerner
Joshkidd, bkaul: IMO, the challenge is defining intelligence without being self-referential ("I know it when I see it") and/or case-by-case recourse to human evaluators. The AI community has failed at that.

Without claiming the post's Lerner test is perfect, it's no more subjective than the Turing test. Wearing my SFnal hat, I'll suggest there are tasks alien intelligences would consider normal and yet humans might not necessarily be able to handle.

For example, autonomous navigation around the natural world is often considered something an AI should be able to accomplish. But what is the natural world? Intelligences native to vacuum or ocean -- or a computer network -- will have different ideas about nature.

Does that ability-to-navigate criterion make humans unintelligent? We'd like to think we *are* intelligent and discount those "other" environments. Then does the ability-to-navigate criterion mean intelligence is species-specific? Are we doomed not to recognize other species -- and they, us -- as intelligent even though both species can solve many of the same problems (if perhaps in different ways)?

To paraphrase something in the original post: *experts* can't agree what natural intelligence means. That being so, how can anyone know if/when software developers replicate intelligence?
Mike McD
4. msmcdon
"We don’t expect a creature native to a very different environment to end up acting just like one of us. Unless, apparently, that alien environment is inside a computer. "

Well, unlike computers, we don't go and create aliens from other planets ourselves, except in books. I'd say a healthy portion of our expectation that AI will display a human-like intelligence comes from our setting out with the goal of mimicking our own intelligence / consciousness / awareness. We're setting out in the first place with a goal of creating humans in silicon suits.
Rick Rutherford
5. rutherfordr
Exactly -- "Artificial Intelligence" is being used as shorthand for "Artificial Human Intelligence".

The interesting question (i.e. the problem to be solved by an SF author) is how to come up with a meaningful definition of intelligence without reference to human intelligence.
Edward M. Lerner
6. EdwardMLerner
Agreed, AI is usually shorthand for "human-mimicking artificial intelligence." And that, IMO, is the problem.

Each of us is, to some extent, a product of his/her environment. We don't use the Turing test to decide that a human born and bred a continent away from us isn't intelligent. We make allowances for those differences in environment and upbringing. A human-mimicking AI needs common sense -- and common sense means quite different things in Manhattan and in the depths of the Amazon jungle.

For "intelligence" to be a scientifically meaning concept -- whether used to describe "natural" alien intelligences, AIs (theirs or ours), uplifted dolphins and apes, or technology-enhanced transhumans -- we need to allow for beings who are the products of very different environments.

AFAIK, there is no workable definition of intelligence that spans that spectrum of possible intelligences.
David Goldfarb
7. David_Goldfarb
Note that Turing himself didn't say "A computer must pass the test to be intelligent" -- he put it the other way: "If a computer can pass the test, we must admit that it is intelligent." And that's precisely because natural language is so wonky. But Turing certainly left open the possibility that there could be AI that couldn't necessarily pass the Turing Test.
philip hodgson
8. hodgsopg
The question of whether a computer can think is no more interesting than the question of whether a submarine can swim. - Edsger Dijkstra

We don't build submersible vehicles that use fins to swim (we probably could), but the submarine is a more efficient design. Computers in the future (and today) won't, in general, mimic the way humans think, but will have their own way of working.

Perhaps a better test is to look at emergent behaviour, that is behaviour of the system that is not directly programmed into it, but arises from the interaction of sub-sections. A computer system with a large amount of emergent behaviour could seem to be intelligent.
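Conway's Game of Life is the classic illustration. The sketch below (a Python toy, offered only as an example of the idea) encodes nothing but the local birth/survival rules, yet a "glider" pattern travels across the grid; movement is nowhere in the program.

    from collections import Counter

    def step(live):
        """One Game of Life generation; `live` is a set of (x, y) cells."""
        neighbours = Counter((x + dx, y + dy)
                             for x, y in live
                             for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                             if (dx, dy) != (0, 0))
        # Birth on exactly 3 neighbours; survival on 2 or 3.
        # That's the entire rule set -- nothing here mentions movement.
        return {cell for cell, n in neighbours.items()
                if n == 3 or (n == 2 and cell in live)}

    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    for _ in range(4):
        glider = step(glider)
    print(sorted(glider))  # the same five cells, shifted one step diagonally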
Feste
9. Feste
Then there's the question of whether you even want conscious intelligence: what use is it?

Considering that we already have adaptable, conscious computers (i.e. us), what actual benefits do you gain from making more of them that are just a bit different?
Jason Ramboz
10. jramboz
The intro to this post reminded me of the "Kasparov vs. Deep Blue" track from Moxy Früvous' Live Noise album. The point that Jian (if I remember correctly) makes is, essentially, "So what? It won at chess. That just makes it better at doing one thing than a human, just like other tools can. It's still not intelligent."

It also ends with a hilarious discussion of disaster movie scenes involving Kasparov being able to run away laughing while Deep Blue sits there and does nothing. Really, you just have to hear it.
Dan Sparks
11. RedHanded
@ Edward

Intelligence - the ability to deal with a broad range of abstractions, the ability to grasp the facts of reality and to deal with them long-range (i.e., conceptually).

Consciousness - the faculty of perceiving that which exists, also the faculty of awareness

Both of these concepts (like all concepts) have a certain value when referring to a specific example but cover the scope of all possible values in general (a higher or lesser ability to deal with reality or of perceiving that which exists). I would say a being who has the ability to form a wide variety of concepts would be considered to have human intelligence; the question that remains is how high a human intelligence.

Regardless of environment the ability to deal with a broad range of abstractions is the mark of intelligence. To view the facts of reality and be able to deal with them conceptually, that is the mark of intelligence.

Is that not a definition that could discount all inessential data (place of origin, culture, upbringing)?
Edward M. Lerner
12. EdwardMLerner
Redhanded: IMO, all definitions of intelligence -- yours and mine included -- are subjective. Consider your suggested criteria: will everyone agree on the meaning of a "broad range of abstractions" or a "wide variety of concepts"?

I think awareness/consciousness is likewise hard to prove. If I am skeptical, how does an AI convince me it knows it exists (rather than being programmed to assert it exists)?
Kevin Carlin
13. kcarlin
Having studied Piaget's theories of child cognitive development many, many years ago, I can say the mechanics of human intelligence are not fully understood and are far more interesting than the average thoughtful critic might realize. Memories as we generally think of them, for instance, are products of having developed, through a number of steps that occur uniquely for each person, cognitive symbol skills that enable the bundling of the component memories (and speech, and so on). The hardest part of the class was to think in terms of what a child of one or three or six years is actually capable of processing with the cognitive skills bundle they have available. Those mirror neurons that give us automagic empathy with what we see and hear have to be patiently and systematically suppressed to get to the answers analytically. As adults, it is very hard to make that leap; even the very gifted graduate psychology students in the course struggled for several weeks. And as we tumble down those skill trees (and kick me for over-simplifying here), symbol manipulation and speech and consciousness all play together, but do not necessarily travel together.

In Julian Jaynes' The Origin of Consciousness in the Breakdown of the Bicameral Mind the case is made for a state of being, historically present and marginalized by the introduction of writing, where the left and right hemispheres of the human brain cooperated in very different fashion than they do in normal modern experience. "Voices in the head" issuing commands in a fashion similar to a modern schizophrenic, from one hemisphere to the other. A few decades further down the road the theory still seems to stand up as an intriguing possibility with some followers and new published articles popping up now and again.

Intelligence is about being able to do certain kinds of things, and psychology seems prepared to at least admit the possibility of intelligence without consciousness. Greater intelligence seems to be the ability to do higher order things, and we like to throw some benchmarks out there. Tool users. Tool makers. Algebra users. Calculus users. Space travelers. But intelligence does not exist without a referent. Intelligent with regard to what task or skill? And tests must have context.

The stories that base initial communications on numeracy seem to be on the most solid footing. Numeracy, astronomy, stone henges/astronomically-based architecture is a very common chain of development in human civilizations.

These AI tasks, chess playing, neural nets, expert systems, tend to be about developing "higher order" skills in our speedy but literal-minded binary brained buddies.

Alien life will be intelligent in their survival skills, as we are in ours. Where there is overlap, there will be commonality. Things like physics and chemistry between star-faring races should see a lot of commonality. Biology may make for much longer conversations.
Edward M. Lerner
14. EdwardMLerner
kcarlin: a very thoughtful (and thought-provoking) comment.
Dan Sparks
15. RedHanded
@ 12

Not everyone may agree with what constitutes a "broad range of abstractions" or a "wide variety of concepts," but regardless there is an objective meaning behind words, as in A is A; things are what they are regardless of opinion. Otherwise how could humans communicate with one another? And we do communicate with each other: we can convey meaning and understand another's meaning. We perceive sensory information and group that information into concepts in order to deal with reality. To state it simply, intelligence is the ability to deal with reality in the long term and goes hand in hand with consciousness. I would call that general intelligence, as opposed to saying that someone is intelligent in a specific area.

Consciousness is an axiom, meaning that it is fundamental to reality and that in order to even question the concept of consciousness you have to use it, as it is implied in every statement and action. Existence exists (another axiom: A is A, things are what they are, a thing cannot be all brown and all red at the same time, etc.); consciousness is that which perceives what exists.

Really, the question you asked could be applied to any person as well as an AI. How do you know you exist and are aware? How would you explain that to another person, or to an alien race that questioned whether you (as in humans) are intelligent or aware or have consciousness? Personally I think it comes down to volition: being able to choose in the face of options. Humans are guaranteed nothing; we are not born knowing all we need to know in order to survive. (A plant automatically acts for its survival, using sunlight, carbon dioxide, and nutrients to grow and to maintain and further its life; animals use instinct (fight or flight, the pain/pleasure mechanism) to further their lives and to find what is good or bad for them, and in situations where that isn't enough, they die.) Only humans can choose whether or not to further their lives and to find what is good for them; only humans have to figure out what foods are poisonous, what plants are edible, how to build a fire, a house, a spear, or a car. That choice is what makes us aware, because we KNOW there are options and we have to figure out which are right for us. I'd say for any AI it would be the same: it would be able to choose in the face of options based on the information it has, and not automatically know what is good or bad for its life but have to discover it. A program or a robot that only does what it is programmed to do and cannot choose another path would not be aware or conscious, because if it were, it would have a choice.

This makes me think of the movie I, Robot. If you haven't seen it, I recommend it. The movie shows the difference between what a merely programmed robot does as opposed to a robot with consciousness. You will notice that the robot with consciousness has to discover what is good for it, is aware that there are choices, and realizes that it has to make them. The other robots do what they are programmed to do and have no choice in the matter.
Edward M. Lerner
16. EdwardMLerner
Redhanded: I have seen (and I enjoyed) the movie I, Robot.

I infer that you see a sharp dividing line between consciousness and intelligence vs. not. I suspect it'll be much fuzzier -- more ambiguous -- than that. Is a primate that uses sign language, and primitive tools, intelligent? (There must be some choices made in there.) If yes, how about its less accomplished brother or sister?

For most of history, different humans could not agree whether other races or groups (us vs. the barbarians) were truly the same as us.
Dan Sparks
17. RedHanded
@ Edward

I do see a dividing line between consciousness and intelligence vs. not, and I admit it can be fuzzy, but that's where I think the context of the situation would come into play. I would say that the primate in question is intelligent, but like I said before, it's a question of how intelligent, as in there is a scale going from 1 to who knows how far. I would compare the intelligence of the primate in your example to that of a toddler. The less accomplished brothers and sisters may or may not be intelligent, but they would have the potential to show intelligence based on their ability to deal with reality and learn. I think we can agree that the primate and a human being are not on the same level of intelligence, as we go way beyond the reasoning of childhood (if we choose to, of course).

So different humans have split humanity apart into "us" and "them" and probably always will, but that won't change the fact that they were humans.
Edward M. Lerner
18. EdwardMLerner
Redhanded: you concluded, "So different humans have split humanity apart into "us" and "them" and probably always will, but that won't change the fact that they were humans."

That dichotomy is precisely my point. Our working definitions of intelligence, and awareness, and consciousness, and humanity keep boiling down to "I'll know it when I see it." History shows us that who we see as our equals is a moving target...

And so, going back to the original post, my concern that our ability to build or recognize an AI remains entangled in anthropomorphism.
Clovis Simard
19. clovis simard
Hello,

Description: My blog presents the mathematical development of consciousness, that is, a presentation of the theory of the Fermaton. Among the most important mathematical questions for the century to come, No. 18 on Smale's list is: what are the limits of intelligence, both human and artificial?

(fermaton.over-blog.com)

Regards,
Clovis Simard
