views: 686

answers: 24

When, if at all, do you believe that we will see AI systems that are capable of passing the Turing Test?

If a machine acts as intelligently as a human being, then it is as intelligent as a human being

This concept forms the basis of the Turing Test, a means of evaluating whether an artificial intelligence is indistinguishable from a human being by observing its behavior and interactions with human beings. As yet, no system has been able to pass as human (although numerous humans have been mistaken for machines).

If you believe that we will eventually build such systems, I would be curious to know what approach you think will be successful.

If you don't think such a system is likely to be created any time soon, what do you think the obstacles are?

+15  A: 

Six to eight weeks.

Robert S.
+12  A: 

December 21, 2012

Michael
http://www.sculptors.com/~salsbury/Articles/singularity.html
Lou Franco
ahh, that's when magic comes back not..... oh I get it.
WolfmanDragon
+20  A: 

Jon Skeet is the only computer that passes the Turing Test.

Rich Bradshaw
hahaha, I lol'd
Carson Myers
+34  A: 

Questions like this remind me of a quote a friend of mine had in college

4 years spent in AI research will really make you question your atheism

JaredPar
+7  A: 

I once worked with - well, alongside - a guy who I swear was trying to pass the TT in reverse. You do get a lot of socially challenged people in this profession...

Carl Manaster
Along those lines, the singularity might well occur not when there's a computer that is smarter than the average human programmer, but when there's a computer that is on par with or slightly dumber than a human programmer but has better social skills. It will advance up the industry hierarchy much quicker than us humans, and it will all be downhill from there.
JoeCool
+3  A: 

Ten years from now. And in ten years, I reserve the right to answer "10 years from now" again.

Mark Brittingham
Every 5 years, Doug Lenat claims we'll solve it in 5 years...
Chadwick
A: 

I don't think that the Turing test is necessarily a good test of AI, as it could be beaten by a great graphics-parsing algorithm... not a thinking computer. I think these concepts are totally different.

Andrew Siemer
Um, why would a great graphics parsing algorithm be indistinguishable in conversation from a human?
David Thornley
Because a picture is worth a thousand words.
Nosredna
@Nosredna - my point exactly. A parser has a goal in mind... to locate a string of alphanumeric figures to get past a locked door. If I gave it a captcha image and said "describe what you see," it could do this easily (assuming the right algorithm). However, given a picture of my 6 kids sitting on a picnic bench... it would fail entirely. It wouldn't even know where to begin! This is what a human excels at! When a computer's AI is capable of similar tasks, then AI is truly capable of passing *any* Turing test!
Andrew Siemer
You could have the best image processing program ever invented, but if it couldn't have a convincing conversation with me about whatever topic I wanted to discuss, it wouldn't pass the test even if it could identify my uncle in a family photo.
Joel Mueller
A: 

Several chat programs can fool actual people. "Eliza" is probably the first, and seems primitive now, although she can actually fool some people. And these programs continue to get more sophisticated.
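
(A minimal sketch of the keyword-and-reflection trick these programs use - the rules below are illustrative stand-ins, not Weizenbaum's original script:)

    import re

    # ELIZA-style responder: match a keyword pattern, then reflect the
    # user's own words back as a question. Rules are illustrative only.
    REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

    RULES = [
        (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
        (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
        (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
    ]

    def reflect(fragment):
        # Swap first- and second-person words so the echo reads naturally.
        return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

    def respond(utterance):
        for pattern, template in RULES:
            match = pattern.search(utterance)
            if match:
                return template.format(reflect(match.group(1)))
        return "Please go on."  # stock prompt when nothing matches

    print(respond("I feel nobody understands my code"))
    # -> Why do you feel nobody understands your code?

There is no understanding anywhere in there - just pattern matching and a stock fallback - yet this is the mechanism that has fooled people.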

NetHawk
No chat program has consistently fooled people who know that they should be trying to tell the difference between a person and a program. A formal Turing test has been held every year since 1990, and nobody has ever taken home the gold-medal Loebner Prize. http://www.loebner.net/Prizef/loebner-prize.html
Joel Mueller
A: 

I just want to note that the validity of the Turing test as a true test of intelligence has been under fire for quite some time (I believe since it was first stated!), so it's not necessarily recognized as the perfect and complete human-equivalent-intelligence test (though certainly still the most famous). Also, though people seem to think computers are all-powerful given enough resources and time, many problems are provably incomputable by a Turing machine - and problems far more practical than simply the well-known Halting Problem.

Isaac Asimov wrote what is perhaps the most famous sci-fi short story ever written, "The Last Question." It dealt with a computer that increased exponentially in intelligence until it became God-like. I found this discussion very interesting on why Asimov's computer could be considered an exaggeration due to incomputable problems.

JoeCool
I haven't seen evidence that humans are capable of computing the uncomputable, so that's not a distinction.
David Thornley
True. I was simply noting incomputability because most people start out with this general intuition that in the year 4234 AD, when we have Deep Thought, we can feed it any problem and it will solve it instantly. Which is not the case.
JoeCool
Even Deep Thought couldn't compute the question to the ultimate answer.
David Thornley
But our planet earth will answer the question if it is not destroyed by humans or aliens before it is done.
frast
There are also problems that are provably incomputable by a Turing machine but trivially computable by a human. At least, that's the chapter of Gödel, Escher, Bach that I'm currently on.
Breton
Breton, I would love to hear a little bit more about that... without buying the book haha. Do you happen to have a link to something related or could describe a quick example from the book? To me the only type of problem I can imagine being as you describe would be "fuzzy logic" type problems where humans are actually cleverly estimating a solution to a problem that is incomputable. If this is the case, then soft computing can (or will be able to eventually) do the same.
JoeCool
+7  A: 

If a machine acts as intelligently as a human being, then it is as intelligent as a human being

You're going to have to write a lot more conditionals to get that program working.

Ólafur Waage
Interesting, like what?
LBushkin
It's a joke.
Ólafur Waage
A: 

When we design learning computers, it turns out that they don't and we do.

Dinah
+4  A: 

[image: comic with the alt text quoted below]

Carson Myers
Alt-text: "Hit Turing right in his test-ees."
Chris Lutz
Thanks for that -- I updated the post.
Carson Myers
A: 

The level of progress made on this front has slowed, but I think we will eventually reach the point where this is possible. Instead of directly creating the AI ourselves, I think the human race could reach this goal indirectly through the singularity.

The theory goes that through technical advancement we will eventually create a machine that is capable of re-designing an improved version of itself. This will result in an "intelligence explosion" whereby, through quick iterations of accelerating pace, this machine will very quickly surpass human intelligence.

In a recent interview with New Scientist, Ray Kurzweil stated that he believed the singularity would occur sometime midway through this century.

Simon P Stevens
If we're lucky, by 2038, so we can deal with the Unix equivalent of Y2K.
David Thornley
A: 

When the judge is Turing....

LarsOn
+1  A: 

At the current rate, > 100 years. It's important to make big predictions so that somebody can prove you massively wrong in the next year or so ;-)

David Plumpton
+1  A: 

We don't know enough about human intelligence to effectively simulate it. (It's at least arguable that we can't know enough about human intelligence, that intelligent beings can only understand things significantly less complex than they are.)

This means that we will get intelligent machines through some thoroughly empirical process using some sort of evolutionary algorithm.

This means that they will think considerably differently from humans, and will be distinguishable in a Turing test. However, neither will a human be able to pretend to be one of those AIs.

Therefore, if the AIs ever develop the ability to be indistinguishable from humans, they'll be a whole lot smarter than we are.
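
(A toy illustration of what "some sort of evolutionary algorithm" means mechanically - the target string, rates, and sizes below are arbitrary assumptions; this shows only the mutate-and-select cycle, nothing approaching intelligence:)

    import random

    # Evolve a random string toward a fixed target by mutation + selection.
    TARGET = "turing"
    ALPHABET = "abcdefghijklmnopqrstuvwxyz"

    def fitness(candidate):
        # Count positions that already match the target.
        return sum(a == b for a, b in zip(candidate, TARGET))

    def mutate(candidate, rate=0.2):
        return "".join(random.choice(ALPHABET) if random.random() < rate else c
                       for c in candidate)

    population = ["".join(random.choice(ALPHABET) for _ in TARGET)
                  for _ in range(50)]
    for generation in range(1000):
        population.sort(key=fitness, reverse=True)
        if population[0] == TARGET:
            break
        # Keep the ten fittest, refill the population with their mutants.
        parents = population[:10]
        population = parents + [mutate(random.choice(parents)) for _ in range(40)]

    print(generation, population[0])

The point of the analogy: nobody designs the winning string; selection finds it. An AI evolved empirically would likewise owe us no resemblance.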

David Thornley
If the evolutionary process is aimed at passing the Turing Test, you might get one that passes it, but is limited in other ways.
Kathy Van Stone
Humans aren't all-capable either. Humans and hypothetical AIs will both have limitations and abilities, and these will be different. An AI capable of writing C++ code and haiku like I do, and of coming up with insights about H.P. Lovecraft, WWII strategy, and politics, will have my abilities and more besides.
David Thornley
A: 

I suggest reading some Ray Kurzweil books. The Age of Spiritual Machines is great (from back in 1999), as is The Singularity is Near (2005, I think). In both books, he predicts that in around 20 years we'll have computers that can perform as many calculations per second as many scientists feel the human brain can. And then he believes it won't be long after that before we start seeing true AIs.

I've also read that the size, complexity, and sheer computing power of all the computers connected to the internet could bring about some form of global computing awareness. I'm not sure if that would pass the Turing Test, but it'd sure be interesting!

My own prediction is we'll have human level AI within 30 years, as I expect the Singularity to happen within that time.

Terry Donaghe
It's good to see someone else mentioning the singularity, at least I know I'm not entirely loopy.
Simon P Stevens
It's just a pity the singularity is at odds with thermodynamics and therefore will never happen.
David Plumpton
David, I have no idea what you're talking about. The Singularity is simply defined as the point at which technology is accelerating so quickly we can't make any educated guesses about what comes next. I don't see what that has to do with thermodynamics...
Terry Donaghe
+3  A: 

While this may seem a laudable goal, I question whether it's actually of any value. Do we really need another human?

The history of tool development has mostly been about extending the ability of humans to do things an unaided human can't do well: to lift heavy objects, move water uphill, travel faster than the fastest horse, breathe underwater, compute where a bomb will land if thrown just so, or unscramble a cunningly concealed message.

How about making an AI that can tell really good jokes, every time? Or one that can tactfully choose my wardrobe every morning? How about an AI that can govern a city without corruption or prejudice?

Aim higher please.

Alex Brown
For your sake, I hope the wardrobe-choosing AI is not the same as the joke-telling AI.
Nosredna
+3  A: 

I don't believe it will ever happen.

(Sorry to the enthusiasts)

The full reason could be its own book. The short answer is that a person can only understand intelligence that is either equal to or less than their own. Anybody who is in "gifted" programs, or especially at genius level, can tell you that.

You can say "That person is smarter than me" and sort of know that it is true, but to actually understand how or why means you would have to have equal intelligence to the thing you are understanding.

That said, to create an intelligent machine you have to actually comprehend more than the machine you create can comprehend. Just like any transfer of energy, the transfer of intellect will incur loss.

If it helps, take a step back for a little bit. How well do you understand your own intellect? If you are honest you have to say you do not understand it as well as you can use it.

Jeff Davis
Any reason why?
Chris Lutz
+8  A: 

Yes, absolutely, but not for the commonly understood reasons. The problem is that although the intended way for a program to pass the Turing Test is to simulate or emulate human intelligence through the (for now) uniquely human activity of communication via human language, the Turing Test is flawed and allows a significant shortcut:

Instead of focusing on emulating intelligence, a program can instead focus on fooling humans, which it turns out is not all that hard (as an endless number of "Nigerian Finance Ministers" have proven). ELIZA is the example most often given to demonstrate this, but IMHO, PARRY is the much more significant case: the ELIZA concept is limited by the difficulty of extending it to larger communication & knowledge spaces, whereas there isn't really anything about PARRY that would prevent it from being scaled up to a huge level.

That this is possible was demonstrated some years ago by a Turing Test in which the part of the "Computer" being tested was played by a program that was also intercepting the communications to and from the humans under test, and then simply copying the stored answer whose question most closely matched the one it had just been asked. This (again, IMHO) is a shallower, though much broader, example of the PARRY approach.

The inescapable conclusion is that we could readily develop a program, today, that could handily pass most of the even moderately limited Turing Tests if we just put the time and money into it (note: most Turing Tests conducted today are not moderately limited, but severely limited tests). How could this most easily be done? As follows:

Create a version of the Google spiders that constantly scans the Internet for textual exchanges and then stores and categorizes the responses, by semantic analysis of the questions (or initiating text), in a specialized version of the Google databases. Next, constantly invite people to participate in on-line Turing Tests as both testers and testees. Record and catalog the dialogs as before, but give more weight to these exchanges.

Next, participate in Turing Tests, scoring itself on how often testers guessed that it was a computer. Finally, use the type of massive tree-branching weighting & look-ahead search and move-selection algorithms used by Chess & Checkers programs today (adapted for semantic analysis and exchange, of course).

Now our program is ready for the real public Turing Tests. And of course, while it is doing that, it continues to run low-profile Turing Tests of its own around the world, feeding the questions/prompts that it receives from the real testers to those other folks and storing their responses as a backup for more complicated and involved situations. And of course, apply the first principle of Chess, Checkers, and PARRY: take control of the engagement, thus limiting the other side's options.

So, in short: the Internet can pass the Turing Test.
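
(To make the retrieval idea concrete, here is a toy sketch - the corpus, the bag-of-words matching, and every name in it are my illustrative assumptions, nothing like the production-scale semantic analysis described above:)

    import math
    import re
    from collections import Counter

    # Hypothetical harvested exchanges; the real system would hold billions.
    CORPUS = [
        ("what is your favorite color", "Blue, though it changes with my mood."),
        ("do you like programming", "I do, mostly when the tests pass."),
        ("where did you grow up", "A small town you've probably never heard of."),
    ]

    def vectorize(text):
        # Crude bag-of-words stand-in for the semantic-analysis step.
        return Counter(re.findall(r"[a-z']+", text.lower()))

    def cosine(a, b):
        dot = sum(a[w] * b[w] for w in set(a) & set(b))
        norm = (math.sqrt(sum(v * v for v in a.values()))
                * math.sqrt(sum(v * v for v in b.values())))
        return dot / norm if norm else 0.0

    def respond(prompt):
        # Answer with the stored response whose prompt best matches the input.
        query = vectorize(prompt)
        best = max(CORPUS, key=lambda pair: cosine(query, vectorize(pair[0])))
        return best[1]

    print(respond("so, do you enjoy programming?"))
    # -> I do, mostly when the tests pass.

Scale the corpus up by a few billion exchanges and add the weighting and look-ahead described above, and you have the outline of the shortcut.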

RBarryYoung
Then someone should do it and claim the $100,000 Loebner Prize, and get themselves untold amounts of publicity and speaking engagements. They'd be set for life. Until that happens, I don't buy the idea that a sufficiently big database is all we need to convince a skeptical person that they are conversing with another human being.
Joel Mueller
A) I am talking about a Corporate Enterprise level project. $100k is chump change at that level. $10-$30MM would be my SWAG. B) No, they would not be "set for life" any more than the creators of PARRY or even ELIZA were. They would still have to work for a living, because it would NOT demonstrate real artificial intelligence, nor be practical for anything other than maybe support forums (but liability issues would kill that). And that was my point: easily technically feasible, but economically out of the question (no return, financial or scientific).
RBarryYoung
That's like saying antigravity cars are easily technically feasible, but wouldn't be cost-effective, so nobody will build one. I find your argument unconvincing and hand-wavy. I don't believe that your proposed project could fool me under Alan Turing's proposed conditions, no matter how big you made the database.
Joel Mueller
...and this is why: if I was participating in a Turing test, I wouldn't sign off on a conversation partner being a human until we had conversed long enough that the two of us had become friends. You haven't explained how your proposed program would make the transition from "an extremely large database of conversation snippets" to "forming meaningful relationships with human beings."
Joel Mueller
"Forming meaningful relationships with human beings" is not a requisite part of the Turing Test. Also, Turing Tests are usually short: 5 min is typical, 20 min would be extremely long. Finally, even imposing you addl relationship as a restriction, plenty of humans would not want to develop a relationship with the tester. That doesn't make them inhuman and it doesn't necessarily distinguish between computers and humans better. (Consider PARRY which parroted paranoid-schizophrenic commincations. Schizophrenics are still human.)
RBarryYoung
Joel, my argument has nothing to do with anti-gravity, because unlike A-G cranks, I have presented a rational argument in a reasonable monetary range. If you think my argument is weak, then point out the weaknesses; don't try to smear it with pseudoscience attacks. And if you just don't agree with my argument, fine. But don't try to pretend that you're being rational about it, if all you are basing it on is a subjective reaction.
RBarryYoung
Of course friendships are not a formal part of the test! Neither are time limits. The point of the test is to fool me into thinking I'm talking to a human when I'm not, and I know that going in. I get no penalty points for incorrectly classifying a terse human as a computer. Since I'm aware that programs probably have large databases of conversation snippets, my strategy would be to try to get to know the 'person' on the other end. Forming a meaningful relationship with me is a requisite part of fooling me, and I still see no reason to think your program can fool me.
Joel Mueller
The point about antigravity is that neither that nor a program that can pass the Turing test has *actually been done*. Until a program has actually passed the Turing test, people claiming it can be done have exactly as much actual evidence to support their position as the ones that say antigravity is easy.
Joel Mueller
http://users.ecs.soton.ac.uk/harnad/Papers/Harnad/harnad92.turing.html
Joel Mueller
Joel: when you misidentify a human as a computer in the Turing Test, you do "lose points" because what the computer is trying to do is to be ID'd as a human, **as often as humans are**. So if you reject 50% of the humans as "computers" for being uncooperative, then you've lowered the bar by that much for the computers.
RBarryYoung
Joel: Your A/G point then is that questions like this are always wrong because until it has actually been done, no one can know if something can be done? That isn't an argument against my position or even A/G, it's an argument against all science, logic, reasoning, projections, and even simple planning. "We can reach the Moon within 10 years (c.1961)." "Doing 'X' will help our economy." "I will need an umbrella today." "My project can be finished in 6 months." You're saying that all of these claims are the same as "I can make Anti-Gravity" because none of them had happened yet? And I'm the hand-waver?
RBarryYoung
People have tried many times to write a program that can pass the Turing test, but 100% of them have failed. People have also tried many times to invent antigravity, and 100% of them have failed as well. You don't see the similarity, really? This doesn't mean that it's crazy to try, it just means that after so many failures, claiming you've solved the problem without actually solving it is going to be met with a justifiable degree of skepticism. "Show me your results" is the very _essence_ of the scientific method, not, as you seem to imply, its opposite.
Joel Mueller
Oh, and Harnad, 1992 (http://users.ecs.soton.ac.uk/harnad/Papers/Harnad/harnad92.turing.html) actually agrees with my point: the Turing Test as currently conceived is flawed because it is too easy to fool. Harnad uses this to assert that the Turing Test should be supplanted with a similar but much improved test he calls the "Total Turing Test" (TTT). So since the scholarly paper that you are citing is premised entirely on the same thing that my post is (that the TT is too easy to fool), and that you have been vehemently disagreeing with, may I assume that you are now prepared to agree with me? :-)
RBarryYoung
I didn't say it can't be done, not even once. I said that it hasn't been done, and that your untested hypothesis doesn't seem very plausible to me. The part that is not plausible is the idea that one doesn't need true AI to pass the Turing test, because that goes against the very definition of the Turing test, which is basically "how do we know when we have true AI?" Maybe you're right, and the Turing test is flawed, but I don't think so, and if you read carefully, neither does Professor Harnad. I would be very interested to be proved wrong, but until then, stop pretending your idea is proof.
Joel Mueller
For example, please give me a plausible reason to believe that a sophisticated pattern-matching algorithm with a large database of conversation snippets would be able to fool someone who won't agree that they're talking to a human until they've become friends with the person on the other end.
Joel Mueller
ELIZA proves nothing, by the way. It's nothing more than a fun parlor trick that repeats everything you say to it in the form of a question. I've written implementations of ELIZA, hooked them up to IRC, and watched people try to have cyber-sex with ELIZA for half an hour before they realized they were talking to a computer. Does that mean that I passed the Turing Test? Of course not! Those people didn't have reason to suspect a computer, and they still figured it out eventually. As Harnad suggests, I would NEVER discover that my pen pal is a computer if it could really pass the Turing Test.
Joel Mueller
"So since the scholarly paper that you are citing is premised entirely on the same thing that my post is" BZZT. Read the scholarly paper again. The only flaw Harnad points out in the TT is that it is limited to typewritten communication only. His example is sending your pen pal a physical birthday card with a flower enclosed. The program would need to have robotic arms and cameras in order to open the letter and know what kind of flower you mailed. That's what he calls the Total Turing Test - adding robotics, because programs alone wouldn't be enough to fool people. Nice try, though.
Joel Mueller
+1  A: 

Mr. Data could probably pass a Turing test, although not every time, and he's from the 24th century, I think. I think that qualifies as an authoritative answer.

David Berger
+1  A: 

I don't think computer programs have to get smarter to pass the Turing test. I think we just need a little bit more of a decline in human intelligence.

Nosredna
A: 

As long as AI researchers maintain the idea that human intelligence is based entirely within the brain, we'll make no progress. Psychologists have progressed beyond that idea, but AI researchers, as far as I've seen, seem stuck in that mindset, and are pursuing primarily brain based theories.

I've seen rather convincing arguments for the idea that human intelligence works a bit more like this: While a great deal of sophisticated algorithms are located within the brain, quite a lot of what we recognize as human behavior is the result of the brain, the body, and the environment all working in concert. If you wanted to create a computer that could convincingly simulate a human, the computer would need a body, an environment, and a childhood.

With enough advances in hardware, it's conceivable that a clever programmer could simulate the body, environment, and childhood in a computer on a vastly accelerated timescale. A conversation could be staged by having a simulated AI researcher ask that simulated human to be an examiner in a Turing test.

When the hardware gets good enough to do that, and the AI researchers get clever enough to do that, then we'll be 10 years off.

To contrast, imagine you just heralded in your firstborn child, a lovely 8-pound boy, and immediately extracted his brain and put it in a box wherein the only stimulus the brain received was ASCII-encoded text. Just how intelligible would you expect that person to be immediately? How intelligible would he be after 10 years of that treatment? 20? It seems to me that's basically what AI researchers are trying to do, in a sense.

Breton
I don't think AI researchers, as a group, are trying to emulate the human brain. I think there's been some interest in seeing how the brain works and copying that, but that was a result of other dead ends. I'd say the field of AI is pretty diverse.
Nosredna
You're absolutely right, but I have no idea how to say such things, while keeping my original point, and also keeping it brief and punchy.
Breton
Haha. Well that makes sense. A rhetorical device. OK, Breton, point taken. You pass the Turing Test in my book. :-)
Nosredna
A: 

Passing the Turing test isn't a precise measure of the state of artificial intelligence science/technology. It would depend on the examiner & the topic of discussion.

A fundamental problem that isn't often brought up is that computers are deterministic - little more than state machines, actually - but we don't know yet whether humans are or not. If humans are capable of free choice, then we are by definition non-deterministic, which means there's a possibility that we can solve a wider class of problems than any Turing machine. If that's the case, deterministic computers will never achieve human-level intelligence.

Cybis