It seems like every few years, we hear about computer scientists exclaiming that they're on the verge of creating an artificial intelligence system that can finally pass the Turing test. This benchmark in AI research, first proposed by Alan Turing in 1950, has been a sort of holy grail for cognitive scientists. The test is simple: if a computer system can convince a human in conversation that it is also human, the machine can be considered "intelligent". Regardless of the test's relevance for modern computing systems and how we implement forms of artificial intelligence (it's already everywhere--just look at Google Search), there is a romantic notion shared among some AI researchers that passing the Turing test will be a major milestone in computing.
In an April 12th essay for the journal Science, cognitive scientist Robert French argued that two revolutionary advances in information technology warrant revisiting the idea of formally challenging the Turing test. The first is the vast availability of data, including samples of conversation in the form of video, audio, and text--on the petabyte scale. The second is that we now have the computing power and data-processing algorithms to collect and organize that data for use by a computer to "learn" to mimic human conversation. From French's essay in the journal:
Suppose, for a moment, that all the words you have ever spoken, heard, written, or read, as well as all the visual scenes and all the sounds you have ever experienced, were recorded and accessible, along with similar data for hundreds of thousands, even millions, of other people. Ultimately, tactile, and olfactory sensors could also be added to complete this record of sensory experience over time.
Assume also that the software exists to catalog, analyze, correlate, and cross-link everything in this sea of data. These data and the capacity to analyze them appropriately could allow a machine to answer heretofore computer-unanswerable questions.
What French is proposing is essentially the machine in philosopher John Searle's Chinese Room thought experiment. The experiment imagines a room containing a vast store of data representing the appropriate responses for conducting a conversation in Chinese. A person (or computer system) inside the room can use the data to fetch a response for any input given from the outside world, but Searle argues that the person inside would not understand Chinese--they would just be matching inputs with outputs. Similarly, an AI that can process a massive data set of recorded human experiences may convincingly pass the Turing test, but this form of intelligence does not imply genuine understanding or consciousness.
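The core mechanism of the Chinese Room--matching inputs to stored outputs with no comprehension anywhere in the loop--can be sketched as a toy lookup table. This is only an illustrative sketch; the phrases, responses, and function name here are invented for the example, not drawn from Searle or French:

```python
# Toy "Chinese Room": a rule book maps incoming messages to canned replies.
# The program can produce plausible responses without representing the
# meaning of any of them. All phrases below are invented for illustration.
RULE_BOOK = {
    "你好": "你好！",                # "Hello" -> "Hello!"
    "你会说中文吗？": "会一点。",    # "Do you speak Chinese?" -> "A little."
}

def chinese_room(message: str) -> str:
    """Return the rule book's reply, or a stock fallback for unknown input."""
    return RULE_BOOK.get(message, "请再说一遍。")  # "Please say that again."
```

For any input covered by its rules, the room answers convincingly; yet nothing in the code understands Chinese--it is pure symbol matching, which is exactly Searle's point about a data-driven conversational AI.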
Still, even though most computer scientists do not see the Turing test as a research goal, there are many practical benefits to creating an AI that, at least on its surface, can mimic the conversational skills of humans. Human-computer interaction is an important field of study as digital technology becomes more pervasive in our day-to-day lives. For example, a version of Apple's Siri that could pass the Turing test would be much easier to use than its current implementation. Even if it can't truly understand or carry out all of your commands, it could at least provide a less frustrating user experience.