Conversation from January/February 2011, between myself and Noam Chomsky:
GS: As the world’s leading linguist, what are your thoughts on Watson, the robot that will be appearing on “Jeopardy”? This appears to be the most advanced form of AI to date.
NC: I’m not impressed by a bigger steamroller.
GS: I assume that “a bigger steamroller” is a reference to Deep Blue. Watson understands spoken language and adapts its knowledge based on human interaction. What level of AI would be required to impress you?
NC: Watson understands nothing. It’s a bigger steamroller. Actually, I work in AI, and a lot of what is done impresses me, but not these devices to sell computers.
GS: What do you think of the Turing Test?
NC: Exactly what Turing did in the 8-page paper in which he outlined it, and which no one seems to be able to read. As he made clear, the question whether machines think “is too meaningless to deserve discussion” (from memory, but something like that).
GS: What work are you doing in the field of AI?
NC: The work I do on language involves computational systems of mind. Same with many others. Same with work of the David Marr school on vision. And much more in the cognitive sciences. That’s the serious part of AI, I think.
GS: As you consider the idea of questioning if a computer can have intelligence pointless, do you feel that AI needs a new name?
NC: There was a famous debate about 40 years ago between Marvin Minsky, the guru of AI, and Jerry Lettvin, a biologist at MIT. At one point Jerry said he thought the field was misnamed: it should be called “natural stupidity”. I didn’t say that I regarded the Turing test as pointless. Rather, I agree with Turing, who thought there was a point to his “imitation game” (“Turing test”).
GS: My apologies if I incorrectly attributed the “pointless” remark to you… I took it from Jack Copeland, who writes, “Can a computer possibly be intelligent, think and understand? Noam Chomsky suggests that debating this question is pointless, for it is a question of decision, not fact: decision as to whether to adopt a certain extension of common usage. There is, Chomsky claims, no factual question as to whether any such decision is right or wrong, just as there is no question as to whether our decision to say that aeroplanes fly is right, or our decision not to say that ships swim is wrong.” Is he right in his interpretation of your views?
NC: I don’t know who Copeland is, but he should take the trouble to read Turing’s paper; after all, it’s quite short. I simply endorsed what he said. I won’t comment on the rest.
(Gavin notes: Jack Copeland is a Professor of Philosophy at the University of Canterbury and the Director of the Turing Archive for the History of Computing. He has undoubtedly read the paper in question.)
GS: Watson doesn’t “know” anything, as you said. Is it always wrong to use “knowledge” or “intelligence” when discussing computers, no matter how advanced? Are these terms strictly human?
NC: The computer in itself is maybe a good paperweight. It’s the program that is doing everything. A program is a theory written in a weird notation, so it can be executed by computer. By the standards of theories, this one is an awful theory.
I discussed computer intelligence in a paper I wrote for a Turing symposium. (Which you can find here — the only free location on the Web, folks!)
GS: If Watson is a “bigger steamroller” compared to Deep Blue, is the human brain a “bigger steamroller” than a primate brain? If not, why does the analogy not work?
NC: Insects can carry out intellectual feats that you and I can’t. That doesn’t mean that their brains are bigger steamrollers than human brains. They function differently, and it’s a hard problem to discover how.
Response from Ray Kurzweil
Ray Kurzweil, a notable futurist and inventor, responded to Chomsky’s words on February 13, 2011 (the eve of the competition’s airing).
Kurzweil says that Chomsky’s “answers are so brief that it is difficult to understand what he is trying to say. I would say that Watson is clearly not yet ‘strong AI’, but it is an important step in that direction. It is the clearest demonstration I’ve seen of computers handling the subtleties of language including metaphors, puns and jokes, something people had said would not be possible. I don’t agree with Chomsky that Watson is not impressive in that regard. As long as AI has any flaws or limitations, people will jump on these. By the time that the set of these limitations is nil, AI will have long since surpassed unaided human intelligence.”