In 2014, artificial intelligence experts at the University of Reading celebrated as their AI program managed to pass the Turing test. Proposed by Alan Turing in a 1950 paper, the test requires that a computer convince testers it is human at least 30% of the time through keyboard conversations. That apparent triumph has since been disputed, with critics pointing out that the AI program was designed to act like a 13-year-old Ukrainian boy, which constrained the conversation from the start. Now, a new research article in the latest issue of Science claims that an AI program has passed the Turing test—but a visual test, not a conversational one.
The test was fairly straightforward: Both a human and the computer system were shown a character that doesn’t belong to any of the world’s alphabets but looks like it could be part of a fictional language; i.e., it shares features with preexisting letters. Each was then asked to redraw the character, but with subtle differences; as you can see below, that means changing the proportions while maintaining the original form. In other tests, both the software and the person were given a set of unfamiliar characters (again, not real letters) and asked to create a new one that matches the others in the series.
A team of human judges was then asked to guess which set belonged to the human and which to the AI. Here’s the kicker: They could identify the AI’s characters only about 50 percent of the time—no better than chance.
The visual test is deceptively simple, and that simplicity is exactly what supports the scientists’ reasoning. As the researchers explained in the Science paper,
People learning new concepts can often generalize successfully from just a single example, yet machine learning algorithms typically require tens or hundreds of examples to perform with similar accuracy. People can also use learned concepts in richer ways than conventional algorithms—for action, imagination, and explanation. We present a computational model that captures these human learning abilities for a large class of simple visual concepts: handwritten characters from the world’s alphabets.
Rather than approaching the problem the way a conventional algorithm would, the AI mimicked humans’ elasticity of learning, including the ability to learn new concepts from just a few examples. This computational model is called a probabilistic program, the researchers further explained in a press release from MIT. Josh Tenenbaum, one of the system’s co-developers, who comes from the MIT Center for Brains, Minds, and Machines, lays it out: “In the current AI landscape, there’s been a lot of focus on classifying patterns. But what’s been lost is that intelligence isn’t just about classifying or recognizing; it’s about thinking.”
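To make the idea of a probabilistic program concrete, here is a toy sketch (an illustration of the general approach, not the authors’ actual model): a character concept is represented as a small generative program—a list of hypothetical stroke primitives with parameters—and new exemplars are produced by re-running that program with motor noise, so the structure stays the same while the proportions vary, much like a person redrawing a character.

```python
import random

# Hypothetical stroke part types, for illustration only.
STROKE_PRIMITIVES = ["arc", "line", "hook", "loop"]

def sample_concept(rng):
    """Sample a new character concept: a few strokes with nominal parameters."""
    n_strokes = rng.randint(1, 3)
    return [
        {
            "primitive": rng.choice(STROKE_PRIMITIVES),
            "start": (rng.random(), rng.random()),   # nominal start point
            "scale": rng.uniform(0.5, 1.5),
        }
        for _ in range(n_strokes)
    ]

def sample_exemplar(concept, rng, jitter=0.05):
    """Re-run the concept's program with noise: same stroke structure,
    slightly different proportions -- like a human redrawing a character."""
    return [
        {
            "primitive": stroke["primitive"],        # structure preserved
            "start": (stroke["start"][0] + rng.gauss(0, jitter),
                      stroke["start"][1] + rng.gauss(0, jitter)),
            "scale": stroke["scale"] * (1 + rng.gauss(0, jitter)),
        }
        for stroke in concept
    ]

rng = random.Random(0)
concept = sample_concept(rng)                        # "learn" one new character
copies = [sample_exemplar(concept, rng) for _ in range(4)]

# Every exemplar keeps the same stroke structure...
assert all(len(c) == len(concept) for c in copies)
assert all(c[0]["primitive"] == concept[0]["primitive"] for c in copies)
```

The point of the sketch is the one-shot flavor: once a concept is captured as a program, generating plausible new variants is just sampling, rather than retraining on hundreds of labeled examples.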
Lead author Brenden Lake (who earned his PhD in cognitive science from MIT; the system is his thesis work) described the process as “learning to learn.” The team hopes to jump from copying characters to more high-concept applications of the program. While they didn’t say what those would be, Tenenbaum explained that it’s all about building blocks:
This is partly why, even though we’re studying hand-written characters, we’re not shy about using a word like “concept.” Because there are a bunch of things that we do with even much richer, more complex concepts that we can do with these characters. We can understand what they’re built out of. We can understand the parts. We can understand how to use them in different ways, how to make new ones.