We covered the Turing Test in my AI class, and it got me thinking. I can see a few limitations with it:
- It's limited to a certain context. What if I'm not designing an AI to converse with humans?
- It favors acting humanly over acting rationally. For example, if I'm designing an AI to control nuclear missiles, do I really want it to act human? Granted, this is an extreme example, but you get the idea.
- It could be influenced by factors that don't actually indicate whether the machine thinks like a human. For example, suppose I ask what 2334 * 321 is. I could tell the respondent is a computer because it would answer almost instantly, while a human would need time to work it out. The solution? Make the computer pause.
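That last workaround is trivial to implement, which is part of why it feels like a weakness of the test: the machine isn't reasoning more humanly, it's just stalling. A minimal sketch (the delay range is an arbitrary assumption, not from any actual Turing Test setup):

```python
import random
import time

def humanlike_answer(a, b, max_delay=10.0):
    """Compute a product instantly, then pause for a plausible
    'human' delay before answering. The product is known immediately;
    only the response is delayed."""
    delay = random.uniform(0.5 * max_delay, max_delay)
    time.sleep(delay)  # pretend to work it out on paper
    return a * b

# humanlike_answer(2334, 321) eventually returns 749214,
# after a pause an interrogator might mistake for mental arithmetic.
```

The point is that passing this part of the test measures mimicry of human response characteristics, not intelligence.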
Now, I'm sure the Turing Test still has its place in assessing machine intelligence, but I see it as fairly limited in scope. Are there any alternatives? For that matter, am I wrong about the limitations I perceive?
EDIT: Let me be clear: I'm not suggesting that the Turing Test should be abandoned. I'm just curious if there are any other tests that overcome its limitations (probably trading them for other limitations).