So we learned a bit about the Turing Test in my AI class, and it got me thinking. I can see a few limitations with it:

  1. It's limited to a certain context. What if I'm not designing an AI to converse with humans?
  2. It favors acting humanly over acting rationally. For example, if I'm designing an AI to control nuclear missiles, do I really want it to act human? Granted, this is an extreme example, but you get the idea.
  3. It could be influenced by factors that don't indicate whether the computer can think humanly. For example, suppose I ask what 2334 * 321 is. I could tell the device is a computer because it will probably answer fairly quickly, while a human would have to work it out. The solution? Make the computer pause (a toy sketch of this follows the list).
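
Here's a minimal sketch of that "pause" trick in Python. The delay heuristic is entirely made up, and the question-parsing is skipped (`a` and `b` are assumed to be already extracted from the question):

```python
import random
import time

def humanlike_answer(a: int, b: int) -> str:
    """Answer an arithmetic question, but stall roughly as long as a
    person working it out on paper might (a toy heuristic)."""
    answer = a * b  # instant for the machine: 2334 * 321 = 749214
    # Made-up latency model: bigger operands mean longer "thinking" time,
    # plus some jitter so the delay isn't suspiciously uniform.
    thinking_time = 5.0 + 0.001 * (a + b) + random.uniform(0.0, 10.0)
    time.sleep(thinking_time)
    return str(answer)

print(humanlike_answer(2334, 321))  # ...eventually prints 749214
```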

Now, I'm sure that the Turing Test still has its place in determining machine intelligence. But I see it as being fairly limited in scope. Are there any alternatives? For that matter, am I wrong as to what I perceive to be its limitations?

EDIT: Let me be clear: I'm not suggesting that the Turing Test should be abandoned. I'm just curious if there are any other tests that overcome its limitations (probably trading them for other limitations).

+2  A: 

Depends on how you define a conversation.

The question-and-answer context used in Turing's thought experiment might seem limited and contrived. But what if you switched it to a medical diagnostic session or a financial portfolio optimization problem? Those feel less like Q&A sessions, but it would still be difficult for a machine to be indistinguishable from a human.

What about natural language processing? Still not a solved problem, because natural grammars aren't context-free.

I think the Turing test still holds up as a way of thinking about the problem.

duffymo
+5  A: 

I think you are missing the point of the Turing Test. It's not meant to judge the quality of an AI algorithm in general, but rather the success of an algorithm meant to simulate human intelligence. In that sense it is really more a test of the state of the art in AI than of any particular algorithm. That is, if we can design an algorithm that passes this test, then we can say that AI is able to produce machines with human intelligence.

It's reasonable to assume there are other tests that would serve equally well, but this one is elegant in its simplicity and relative lack of constraints: there are basically no constraints on the inputs except their format.

tvanfosson
+1 totally agreed.
chakrit
+7  A: 

Tell you what: before we answer your question, define "intelligence".

The Turing Test, as originally described, had some other problems too, the most notable being that it isn't "effective": there is no way to tell when the test is over.

Now, look at your (quite reasonable) objections: on one hand, if the machine gives the right answer too quickly, that makes you suspicious; on the other, you're not sure it would be any better if it gave wrong answers, even though that might make you think it was "intelligent."

But, now, consider our interaction: you don't know that I'm not an intelligent computer. How about, for a Star Trek reference, Mr Data on ST:TNG? He's certainly distinguishable from a human and doesn't give human responses at all times, but mostly passes.

Now, let's for a moment consider a person you meet who, instead of being intelligent, is completely a mechanism: no "consciousness", no "soul." (This kind of entity is called a "philosophical zombie" in the literature.) Except for that missing "consciousness", this person, or simulacrum of a person, acts like a person in all other ways: expresses pain on an injury, shows pleasure when eating a good meal, shows affection to kittens and small children. (Corrected because I want to pass this test myself.)

How could you tell that this philosophical zombie wasn't "intelligent"?

The point here is that you've got good questions, but there aren't necessarily well-accepted answers. My own opinion is that the Turing Test is a valid one, because its point, as Turing himself said, is that if you can't tell the difference between a computer and a genuinely intelligent, sentient entity, then you have to assume there is no difference.

Charlie Martin
+3  A: 

Philip K Dick would have a field day with this one...

1). What other contexts exist? AI-Dolphin? Human intelligence is the only kind we have direct experience of (and philosophy still isn't sure about that), and the only one where we can even begin to approach anything empirical and measurable. Everything else would be second-order abstraction, at best.

2). Do you really want AI to act rationally? Rationality says nothing of morality, which can take you to some scary places; humans who act purely rationally do some pretty fecked-up things. And even if you broaden rationality to logicality - the two are not the same - there's still something ineffably more to being human.

3). Well, that's the trick, isn't it? You can't just pause, because most real humans will not be able to answer that at all, or will answer in very human terms - "err, about 750k ish". Turing is all about appearances. (A toy sketch of that kind of estimate follows.)
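
As a sketch of what such a deliberately vague answer might look like (the `humanish_estimate` helper is hypothetical, not anything from the original test):

```python
import math

def humanish_estimate(a: int, b: int) -> str:
    """Report the product rounded to two significant figures, the way
    a person might eyeball it, instead of the exact result."""
    exact = a * b
    magnitude = 10 ** (int(math.log10(exact)) - 1)  # scale for 2 significant figures
    rough = round(exact / magnitude) * magnitude
    return f"err, about {rough:,} ish"

print(humanish_estimate(2334, 321))  # -> "err, about 750,000 ish"
```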

Don't get too caught up in the technical details; it's more of a philosophical hypothesis.

annakata
Humans will answer that math question in many ways: some will google it, some will whip out a calculator, some will refuse to answer it, some will come back with an answer after a while, and my mother will guess and get it right O_o
Kent Fredric
With regard to #1: I was thinking more in terms of writing an AI to fly an airplane or to be a bad guy in a video game. These may not involve conversation with humans.
Jason Baker
@Jason - but those *are* conversations with humans, just not literally.
annakata
@jason, lol, I can see it now: the enemy AI randomly surrenders/defects or wants to be friends and drink tea, and/or stops fighting when date == Dec25 and asks you for a game of footy instead.
Kent Fredric
+1  A: 

There should be alternatives; in a way, the Turing test measures AI against "human intelligence", which isn't the only possible type. We're mostly intelligent, but often just barely ahead of creatures like dogs, who have their own type of "intelligence" and their own form of "logic". If you questioned an AI that could pass the Turing test long enough, you'd expect that, like any other human, it would be wrong about a few things. None of us is perfect; that is intrinsically built into our existence (Turing tests can fail from both sides).

The following post might be interesting: The Necessity of Determinism, since it questions the underlying need for artificial intelligence anyway.

As for an alternative, it would be interesting to have a test that compares "intelligent" behavior at various levels, such as insects, dogs, humans and above. The real issue is how the intelligence behaves at its edges. The dynamic ability of humans to work their way through things, even when they don't have enough information, is one of our defining properties. You'd notice the computer by the way it screws up, not by the way it works (which is trying to mimic people).

So if you're really looking for intelligence, it becomes an issue of how dynamically the system behaves when it is dysfunctional. If you can't tell what the error conditions are, then it is dynamic enough to handle an effectively unlimited range of input. If there is some input that causes an obvious, telltale change in output, then it's not dynamic enough. (A toy sketch of such a probe follows.)
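
As a toy illustration of probing those edges (the `agent` callable is entirely hypothetical - any string-in, string-out responder would do):

```python
import random
import string

def probe_edges(agent, rounds: int = 100) -> float:
    """Feed an agent malformed input and count how often it fails in an
    obvious, machine-like way; returns the fraction of telltale failures."""
    telltale = 0
    for _ in range(rounds):
        garbage = "".join(random.choices(string.printable, k=random.randint(1, 80)))
        try:
            reply = agent(garbage)
        except Exception:
            telltale += 1  # crashing outright is a dead giveaway
            continue
        if not reply.strip() or "error" in reply.lower():
            telltale += 1  # a canned error string is nearly as telling
    return telltale / rounds  # a human rarely fails this visibly

print(probe_edges(lambda s: "I beg your pardon?"))  # canned deflection -> 0.0
```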

Beyond being dynamic, you could measure the quality of the "thinking" itself. As the "thought" becomes more logical and rational, it moves up a spectrum of higher-order existence that runs from insects and dogs through people and Vulcans (yes, from Star Trek) to pure math. We're not the only intelligent beings; we're just way more logical and rational than the other creatures. Turing presumed we were at the peak, but I think that is easily debatable. A good test could show the true level of intelligence.

Paul.

Paul W Homer