
In Brief: Chatbot "Passes" Turing Test, But Does It Matter?

By Norman Chan

It's an outdated milestone for artificial intelligence.

I'm glad that a chatbot has finally succeeded in passing the Turing Test. It means that cognitive scientists and A.I. researchers can finally move on from this outdated milestone of artificial intelligence and focus on metrics that really matter. The Turing Test, as we've discussed at length in the past, was proposed by Alan Turing as a way to determine if computers could "think". The actual test, which puts a chat program in front of 30 judges for conversation, requires only that it convince 30% of those judges that it's a real person.

The winning program, a chatbot named Eugene Goostman, convinced 33% of the judges by playing the role of a 13-year-old Ukrainian boy without a mastery of English. In other words, it had an advantage in fooling the judges because its purported identity set the terms of its "intelligence" in advance. That didn't stop the University of Reading, where the challenge was held, from boasting about the significance of the achievement. (It probably doesn't hurt that a biopic about Turing's life is coming out later this year, either.)

The larger problem with the victory is that the Turing Test is more a statement about our own limits of perception and language comprehension than about computational prowess. Chatbots can do a good job of imitating intelligence through effective scripting, not by modeling the human brain or our linguistic systems. It's definitely not proof of anything close to consciousness. Good for Eugene Goostman and its creators, but it's nothing more than a fancy Chinese Room.