This article was last modified on February 18, 2011.


Turing on the “Imitation Game”

The following is a paper written by Noam Chomsky, not myself, for a symposium on the Turing Test. It is shared here courtesy of Professor Chomsky; to the best of my knowledge, this is the only place you can find it online without paying a subscriber fee. Enjoy!

***

Turing on the “Imitation Game”

In his justly famous 1950 paper “Computing Machinery and Intelligence,” A.M. Turing formulated what he called “the ‘imitation game’,” later known as “the Turing test,” a “new form of the question” whether machines can think, designed to focus attention on “the intellectual capacities of a man.” This “new question [is] a worthy one to investigate,” Turing urged, offering several “conjectures” on machine potential that should “suggest useful lines of research.” Human intellectual capacities might be illuminated by pursuit of the task he outlined, which also might advance the welcome prospect “that machines will eventually compete with men in all purely intellectual fields.”
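As an editorial illustration (none of these function names appear in Turing's paper or Chomsky's discussion), the imitation game can be pictured as a simple interaction protocol: an interrogator puts typed questions to two unseen players, receives their answers under anonymous labels, and must decide which player is the machine. A minimal sketch, assuming the interrogator and players are supplied as hypothetical callables:

```python
import random

def imitation_game(human_respond, machine_respond, judge_questions, judge_identify):
    """Sketch of the imitation game as a protocol.

    human_respond / machine_respond: callables mapping a question to an answer.
    judge_questions: the interrogator's list of questions.
    judge_identify: callable taking the full transcript and returning
                    the label ("X" or "Y") it believes is the machine.
    Returns True if the interrogator correctly unmasked the machine.
    """
    # Hide the two players behind anonymous labels, assigned at random.
    assignment = {"X": human_respond, "Y": machine_respond}
    if random.random() < 0.5:
        assignment = {"X": machine_respond, "Y": human_respond}

    # The interrogator questions both players; all exchanges are typed text.
    transcript = {"X": [], "Y": []}
    for question in judge_questions:
        for label in ("X", "Y"):
            transcript[label].append((question, assignment[label](question)))

    guess = judge_identify(transcript)  # interrogator's verdict: "X" or "Y"
    machine_label = "X" if assignment["X"] is machine_respond else "Y"
    return guess == machine_label
```

Nothing in this framing turns on how the machine produces its answers; the protocol inspects only input-output behavior, which is precisely the feature of the test that the discussion below examines.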

The dual significance of the enterprise — constructing better machines, gaining insight into human intelligence — should no longer be in doubt, if it ever was. There are, however, questions about just where its significance lies, about its antecedents, and about the specific research strategy that Turing proposes.

On the matter of significance, Turing expressed his views lucidly and concisely. He began by proposing “to consider the question, ‘Can machines think?’” but went on to explain that he would not address this question because he believed it “to be too meaningless to deserve discussion,” though “at the end of the century,” he believed, “the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.” He explained further that for his purposes at least, it would be “absurd” to resolve the issue by determining how the words machine and think “are commonly used” (a project he conceived much too narrowly, though that is not relevant here).

Turing said nothing more about why he considered the question he posed at the outset — “Can machines think?” — “to be too meaningless to deserve discussion,” or why he felt that it would be “absurd” to settle it in terms of “common usage.” Perhaps he agreed with Wittgenstein that “We can only say of a human being and what is like one that it thinks”; that is the way the tools are used, and further clarification of their use will not advance the dual purposes of Turing’s enterprise. One can choose to use different tools, as Turing suggested might happen in 50 years, but no empirical or conceptual issues arise. It is as if we were to debate whether space shuttles fly or submarines swim. These are idle questions. Similarly, it is idle to ask whether legs take walks or brains plan vacations; or whether robots can murder, act honorably, or worry about the future. Our modes of thought and expression attribute such actions and states to persons, or what we might regard as similar enough to persons. And person, as Locke observed, is not a term of natural science but “a forensic term,…appropriating actions and their merit; and so belongs only to intelligent agents, capable of a law, and happiness, and misery,” as well as accountability for actions, and much else. It would be a confusion to seek “empirical evidence” for or against the conclusion that brains or machines understand English or play chess; say, by resort to some performance criterion. That seems a fair rendition of Turing’s view.

Of the two “useful lines of research” that Turing contemplated, one — improvement of the capacities of machines — is uncontroversial, and if his imitation game stimulates such research, well and good. The second line of research — investigating “the intellectual capacities of a man” — is a more complex affair, though of a kind that is familiar in the sciences, which commonly use simulation as a guide to understanding. From this point of view, a machine is a kind of theory, to be evaluated by the standard (and obscure) criteria to determine whether the computational procedure provides insight into the topic under investigation: the way humans understand English or play chess, for example. Imitation of some range of phenomena may contribute to this end, or may be beside the point, as in any other domain.

For the reasons that Turing seemed to have in mind, we also learn nothing about whether Jones’s brain uses computational procedures for vision, understanding English, solving arithmetic problems, organizing motor action, etc., by observing that, in accord with our ordinary modes of thought and expression, we would not say that a machine carries out the activities, imitating people. Or, for that matter, by observing that we would not say that Jones himself is performing these actions if he follows instructions that mean nothing to him with input-output relations interpreted by an experimenter as matching human performance of the actions; say, in an “arithmetic room” of the style suggested by John Searle, in which Jones implements an algorithm for long division, perhaps modeled on the algorithm he consciously employs; or a “writing room” in which Jones mechanically carries out instructions that map coded sound inputs to outputs interpreted as letters in sequence, instructions that might be a close counterpart to an algorithm implemented by Jones’s sensorimotor and linguistic systems when he writes down what he hears. No meaningful question is posed as to whether the complex including Jones is doing long division or writing, so there are no answers, whether or not the procedure articulates in an instructive way what the brain is actually doing.
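The “arithmetic room” can be made concrete with an editorial sketch (not part of the paper): a digit-by-digit long-division procedure of the kind Jones might follow by rote, each step a purely mechanical rule applied to symbols. That its input-output behavior matches a human calculator's is exactly the point at issue; matching tells us nothing, by itself, about what a person who divides understands or what the brain is doing.

```python
def long_division(dividend: int, divisor: int) -> tuple[int, int]:
    """Schoolbook long division, applied mechanically one digit at a time.

    Each step is a rote symbol manipulation: bring down the next digit,
    find the largest multiple of the divisor that fits, subtract, record
    the quotient digit. Returns (quotient, remainder).
    """
    quotient = 0
    remainder = 0
    for digit in str(dividend):                  # process digits left to right
        remainder = remainder * 10 + int(digit)  # "bring down" the next digit
        q_digit = remainder // divisor           # largest multiple that fits
        remainder -= q_digit * divisor           # subtract it off
        quotient = quotient * 10 + q_digit       # append the quotient digit
    return quotient, remainder
```

Jones, following these instructions in the room, would produce the same outputs as someone doing long division with understanding; the question of whether the complex including Jones “is doing long division” is, on the view sketched above, not a meaningful one.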

Questions about computational-representational properties of the brain are interesting and, it seems, important, and simulation might advance theoretical understanding. But success in the imitation game in itself tells us nothing about these matters. Perhaps, as Turing believed, the imitation game would provide a stimulus for pursuit of the two “useful lines of research” he advocated; he said little about why this research strategy is preferable to other ways to improve machine capacity and study human intelligence, and it does not seem obvious, apart from some cultural peculiarities that an outside observer might assess with a critical eye.

Turning to antecedents, Turing’s imitation game is reminiscent of ideas that were discussed and pursued during what we might call “the first cognitive revolution” of the 17th century, within the context of “the mechanical philosophy,” which was based on the conception of matter as inert and governed by principles of contact mechanics. Descartes and his followers attempted to show that the natural world could be incorporated within this framework, including a good part of human perception and action but not workings of the human mind, notably “free will,” which “is itself the noblest thing we can have,” Descartes held, and is manifested most strikingly in the ordinary use of language.

The conception raised questions about the existence of other minds: How do we decide whether some creature is a complex mechanism, or is endowed with a mind as well (as we are, we discover in other ways)? To answer this question, experimental tests were proposed to determine whether the creature exhibits properties (mainly language-related) that transcend the limits of mechanism. If it passes the hardest experiments I can devise to test whether it expresses and interprets new thoughts coherently and appropriately as I would, the Cartesians argued, it would be “unreasonable” to doubt that the creature has a mind like mine.

Though similar in some ways to Turing’s imitation game, the Cartesian tests for other minds are posed within an entirely different framework. These tests are ordinary science, designed to determine whether some object has a particular property, rather like a litmus test for acidity. The project collapsed when Newton undermined the mechanical world view, so that the mind/body problem could not even be formulated in Cartesian terms; or any others, so it appears, at least until some new concept of “physical” or “material” is formulated. The natural conclusion, spelled out in the years that followed, is that thinking is a property of organized matter, alongside of other mysterious properties like attraction and repulsion. Thought in humans “is a property of the nervous system, or rather of the brain,” as much “the necessary result of a particular organization [as] sound is the necessary result of a particular concussion of the air” (Joseph Priestley). More cautiously, we may say that people think, not their brains, though their brains provide the mechanisms of thought. As noted, it is a great leap, which often gives rise to pointless questions, to pass from common sense intentional attributions to people, to such attributions to parts of people, and then to other objects.

Throughout the same period, the project of machine simulation was actively pursued, understood as a way to find out something about the world. The great artisan Jacques de Vaucanson did not seek to fool his audience into believing that his mechanical duck was digesting food, but rather to learn something about living things by construction of models, as is standard in the sciences. Turing’s intentions seem similar in this regard.

Turing’s two “useful lines of research” have proven to be eminently worth pursuing, however one evaluates the research strategy he proposed. Turing’s sensible admonitions should also be borne in mind, more seriously than they sometimes have been, in my opinion.


4 Responses to “Turing on the ‘Imitation Game’”

  1. The Framing Business » Noam Chomsky v. IBM’s Watson Computer Says:

    […] discussed computer intelligence in a paper I wrote for a Turing symposium. (Which you can find here — the only free location on the Web, […]

  2. plate Says:

    Does anyone know what Chomsky is referring to in the sixth paragraph when he mentions “Jones’s brain”? He seems to enjoy typing and connecting words into sentences while saying nothing except to keep “Turing’s sensible admonitions” in mind, which he mentions in the first paragraph. Leave it to a world-famous linguist to take a full page to say (at least I think this is his point) that what humans do can be called thinking and what computers do is called something else. If you can’t say something simply and concisely, then you don’t know what you’re talking about.

  3. Rameez Rahman Says:

    Hey, Plate, you missed Chomsky’s point. He is not saying what you think he is saying. One purpose of AI, as Chomsky notes elsewhere and as Herbert Simon also stated, was to understand the way humans think. And in this respect, mainstream AI (which focused on nifty robots or algorithms that could imitate humans according to some superficial notion) does not help at all. The computation that goes on in the human mind is not a matter of simple, adaptive rules.

    If it is still not clear, in response to your question about Jones’s brain: what Chomsky is saying is that having nifty robots or programs imitate Jones (at least superficially) tells us NOTHING about the computational system inside Jones’s brain.

    As for your point about whether what people do can be called thinking and what machines do something else, as both Turing and Chomsky point out: the question is MEANINGLESS!

  4. Dr. Farogh Dovlatshahi Says:

    GAVIN, thanks for posting Chomsky’s article on the Imitation Game.
    COULD YOU give the date, etc., so that it may be included in a reference list?
