I have always loved the idea of AI and evolutionary algorithms. Unfortunately, as we all know, the field hasn't developed nearly as fast as expected in the early days.

What I am looking for are some examples that have the wow factor:

  • Self-directed learning systems that adapted in unexpected ways.
  • Game agents that were particularly dynamic and produced unexpected strategies.
  • Symbolic representation systems that actually produced meaningful and insightful output.
  • Interesting emergent behaviour in multi-agent systems.

Let's not get into the semantics of what defines AI. If it looks or sounds like AI, let's hear about it.

I'll go first with a story from 1997. Dr Adrian Thompson is trying to use genetic algorithms to create a voice-recognition circuit in an FPGA. After a few thousand generations, he succeeds in having the device distinguish between 'stop' and 'go' voice commands. He examines the structure of the device and finds that some active logic gates are disconnected from the rest of the circuit. When he disables these supposedly useless gates, the circuit stops working... http://www.damninteresting.com/on-the-origin-of-circuits

Dr Adrian Thompson: http://www.cogs.susx.ac.uk/users/adrianth/ade.html
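For anyone who hasn't seen one, the evolutionary loop itself is tiny. Here's a minimal sketch in Python; the bit-string "circuit" and the fitness function are made up for illustration, since in the real experiment Thompson scored candidates on the FPGA itself:

```python
import random

# Toy illustration of the generic evolutionary loop: evaluate a population,
# keep the fittest, refill with mutated copies, repeat. The "circuit" here
# is just a bit string and TARGET is an invented goal.

TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1]

def fitness(genome):
    # Count the bits that match the target behaviour.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    # Flip each bit independently with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(pop_size=50, generations=200):
    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == len(TARGET):
            break  # perfect score reached
        # Elitism: keep the top 10 unchanged, refill with mutated copies.
        elite = population[:10]
        population = elite + [mutate(random.choice(elite))
                              for _ in range(pop_size - 10)]
    return max(population, key=fitness)

best = evolve()
print(fitness(best), "of", len(TARGET), "bits correct")
```

All the interesting results come from what you plug in as the fitness function, not from the loop itself.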

edit: Can we try and keep the discussion to techniques/algorithms that produced something impressive? I can search Google if I want to read about the thousands of AI technologies that are 'in the early stages but showing promise', as MSalters points out.

+2  A: 

Slightly outside the traditional AI realm are HTMs (Hierarchical Temporal Memories), as developed at Numenta. This technology is still in its early stages but shows promise in the targeted "WOW factor" areas.

mjv
_Everything_ in AI is "technology in early stages but showing promise".
MSalters
@MSalters: too true. 40 years ago Marvin Minsky started saying "someday we'll have computers powerful enough to make all this stuff work". He's still saying that today (unless he's dead).
MusiGenesis
@MSalters: Everything in AI is technology in early stages but showing promise. If it gets further than that, it's no longer considered AI.
David Thornley
+1 because Numenta is at least attempting to draw its cues from actual brain structures (I don't know if they're simulating neurons, too).
MusiGenesis
A: 

I found the recent research on evolution and cooperation among robots very intriguing. This blog entry gives a good summary of the experiment and its results. Most interesting to me was the observed behavior of both martyr AI and "evil" AI.

marco0009
"A robot Hitler"? Sometimes I'm glad AI has been pretty much a flash-in-the-pan so far.
MusiGenesis
I call Godwin's law!
Marc Gravell
@Marc: marco started it, not me (it's in the title of his linked blog entry).
MusiGenesis
+1  A: 

Some time ago, I found this series of articles: Designing Emergent AI.

The author of these articles created the game "AI War: Fleet Command", which features an emergent AI. Maybe you'll find this interesting.

cedrou
+2  A: 

So far the most impressive aspect of AI has been the ratio of promises to deliveries. In my opinion, the only truly viable approach to computer-based intelligence is simulated neural networks, because all of the things in the real world that we consider "intelligent" (humans, chimpanzees, dogs, cockroaches, etc.) possess variants of the same basic control system: a big mess of neurons hooked up to input and output devices.

Amazingly, despite this apparent truth, the Computer Science field that calls itself "neural networks" has pretty much abandoned the attempt to simulate actual biological neurons and neuronal structures. I couldn't begin to tell you why this is the case, although I suspect it's because programmers in general do not like going outside their comfort zones and learning about topics outside of Computer Science.

The only upside to this is that Terminator is still just a movie.
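To give a flavour of what simulating an actual neuron (rather than a weighted-sum "unit") looks like at the very simplest level, here is a leaky integrate-and-fire model. This is a standard textbook abstraction, and the parameter values below are illustrative, not fitted to any real cell:

```python
# A minimal leaky integrate-and-fire neuron: it keeps some biological
# flavour (membrane potential, leak toward rest, spike, reset) that a
# pure weighted-sum unit throws away.

def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=-65.0,
                 v_threshold=-50.0, v_reset=-70.0, resistance=10.0):
    """Return spike times for a list of input current samples."""
    v = v_rest
    spikes = []
    for step, i_in in enumerate(input_current):
        # Leak toward the resting potential, plus the driving input.
        dv = (-(v - v_rest) + resistance * i_in) / tau
        v += dv * dt
        if v >= v_threshold:          # threshold crossed: emit a spike
            spikes.append(step * dt)
            v = v_reset               # and reset the membrane potential
    return spikes

# A constant drive for 100 ms produces regular spiking.
spike_times = simulate_lif([2.0] * 100)
print(spike_times)
```

Even this crude model gives you timing and refractory-like behaviour for free, which is exactly the sort of thing the weighted-sum abstraction discards.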

MusiGenesis
Why should computer neural networks try to resemble biological neural networks, when there are useful things to be done with the computer version? There are people who do try to model biological neurons: biologists, for one, and Cognitive Scientists. Cognitive Science is something of a multidisciplinary approach to understanding the mind, primarily put together from computer science and experimental psychology, but also philosophy, linguistics, child psychology, and other fields I can't remember offhand.
David Thornley
@David: what useful things have the computer versions of neural nets accomplished that remotely compares to what biological neural networks have accomplished? As far as modeling biological neurons, it's better to be a programmer who can read a Neuroanatomy textbook than a Biologist who doesn't know how to program in an object-oriented language. However, I'll happily criticize Biologists for not doing this, as well.
MusiGenesis
Also, I'm a proponent of Radical Behaviorism, so when I pass a Cognitive Scientist on the street, I'm supposed to cross over to the other side. :)
MusiGenesis
I studied cog sci for a couple of years at university. Our class had maths guys, comp sci guys, linguists, physiologists, philosophers and even a couple of law students. It was an interesting class, but the activities tended to the lowest common denominator due to the diversity of the students. Our class project required us to form groups and create a program for a Lego robot. Have you ever tried programming a robot with a lawyer and a philosopher? We ended up with a 50-page report in legalese and an IR sensor pointing at the robot to make it self-aware. That's when I switched to comp sci.
Alex
@Alex: LOL. You're lucky you didn't have any business school students in there with you. Here's one of my all-time favorite quotes from a business textbook I read once: "your skill at negotiating will affect the outcome of the negotiations".
MusiGenesis
@MusiGenesis: Computational neural nets do some neat things in themselves, without regard to what the biological versions do. It's sort of like the disconnect between designing passenger jets and figuring out how birds fly. BTW, at the U of Minnesota, one Cog Sci prof told us he loved to hold discussions etc. in the B.F. Skinner room.
David Thornley
@David: like Hitler loved to dance a jig in front of the railcar where the Treaty of Versailles was signed. Doh! Godwin's Law again! :)
MusiGenesis
Intuitively, it makes sense that a computer neuron should be a model of something that the computer does well (i.e. binary logic). There are neural networks that do just that; they are lightning fast, deterministic and provable. Modeling biology just adds another unnecessary layer of abstraction.
Robert Harvey
@Robert: I agree that a computer neuron *can* be (not necessarily *should* be) a model of binary logic, and that any model that does anything interesting and useful is worthy of respect. However, living things are most surely *not* "deterministic and provable", at least so far as those terms are usually applied to computers, so it is no surprise that computational neural nets do not produce behavior that is like living things. Thus, they don't represent a good approach to artificial intelligence (at least as I understand the term).
MusiGenesis
@Robert: also, I think it makes sense that a computer neuron can be a model of something else that the computer does well: object-oriented code. All of the salient properties of biological neurons (action potentials, axon lengths, neurotransmitters, synapses etc.) can be modeled quite easily in an object-oriented language.
MusiGenesis
@Robert: I disagree that a neuron should be a model of binary computation. Where else do you see binary systems in nature? Binary logic is not the only model for computation, and indeed digital computers cannot model a chaotic circuit. Remember that our ideas about computation are not very old, a few hundred years at most. The brain has evolved over millions of years. I think it would be a little arrogant of us to assume that our current computational paradigms can completely describe cognition. I'm not saying it isn't possible, but I think it's dangerous to take it as an assumption.
Alex
The existence of binary neural networks such as ALNs is proof of concept. In some cases these networks are orders of magnitude faster than the equivalent biological models. If your goal is to model biology, then of course you would use a biological model. If your goal is to model intelligence or problem-solving, there might be solutions that take better advantage of computer circuits. Nobody argues that they are modeling a biological system when they are writing an ordinary program, or that biological neurons are not the best currently available computational model for intelligence.
Robert Harvey
There is overhead in computing what an actual biological neuron does. Like any other analog artifact (such as a photograph or an audio clip), the state of a neuron (and its subsequent states) can be represented digitally, as can the collective state of its neighboring neurons. We claim that we understand something of this process, and that we are faithfully modeling the actual behavior of neurons, but the truth is our existing models are quite crude, and bear only a passing resemblance to the complexity of what actually takes place inside a real neuron.
Robert Harvey
@Robert: I'm a bit confused as to whether you're arguing with me or agreeing with me in your comments. My personal goal is not to model Biology for its own sake; rather, I am saying that what we normally think of as "intelligence" is a property of living things, and thus is a property of *real* neural networks, which are just masses of *real* neurons. The comp sci field called "neural networks" relies on crude models that bear almost no relation to real neurons, and their results bear almost no relation to *real* intelligence.
MusiGenesis
@Robert: ALNs may be orders of magnitude faster than living things, but so what? They aren't good at: walking around, finding food, evading predators, or in the human realm, recognizing words written in a variety of hands and fonts, recognizing spoken words, music, telling one artist's work from another's, mastering Calculus, going to the moon etc. etc. Despite the amazing complexity and diversity of human activities, it's all the result of a bunch of neurons going off. Every attempt to model these human intellectual activities ("AI" in other words) has failed *miserably*.
MusiGenesis
@Robert: here, then, is my fundamental point - by emulating *real* neurons and *real* conglomerations of real neurons, it may be *possible* to emulate real human intelligence. I think all other approaches are doomed to miserable failure. I am not saying that current "neural networks" do not have a value all their own. I'm saying that they are *not* a viable approach to Artificial Intelligence. I'm truly not sure whether you agree or disagree with this point.
MusiGenesis
I agree with you, for the most part. To be honest, I'm not sure that I find the idea of creating an intelligent android all that interesting. What I do find interesting are novel, computer-based point solutions that produce unexpected results, like the Genetic Algorithm answer here. http://stackoverflow.com/questions/1394017/what-are-some-impressive-algorithms-or-software-in-the-world-of-ai/1395453#1395453.
Robert Harvey
+7  A: 

I built an evolutionary algorithm for retail inventory replenishment in a product targeted at huge plant nurseries (and there are some really big, smart ones -- $200m companies).

It was probably the coolest thing I've ever worked on. Using three years of historical data, it crunched and evolved for a week straight while I was on vacation.

The end results were both positive and bizarre. Actually, I was pretty sure it was broken at first.

The algorithm was ignoring sales from the previous few weeks, giving them a weight of 0 for all indicators (which is at odds with how these guys currently work -- right now they consider the same week in the previous year and also factor in recent trends).

Eventually I realized what was going on. With the indicators the organism had to work with, over time it was more efficient to look at the same part of the previous month and ignore recent trends.

So instead of looking at the last several days, it looked at the same week in the previous month because there were some subtle but steady trends that repeat every 30 days. And they were more reliable than the more volatile day-to-day trends.

And the result was a significant and reproducible improvement in efficiency.

Unfortunately, I was so excited by this that I told the customer about it and they cancelled the project. That first run was extremely promising, but it was hard to sell as proof, even though you could crunch almost any data from the last three years and see that the algorithm magically improved efficiency. EAs are not hard, but people find them convoluted at first, and the idea of doing something so arcane was just a little bit too much to swallow.

The big takeaway for me was that if I ever create something that appears a bit too magical, I should hold off on talking about it until I can put together a good presentation. :)
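For the curious, the core idea can be sketched like this. This is a toy reconstruction, not the production code: the indicator set, the synthetic sales history, and all the parameters below are invented for illustration. The point is that a genome is just a set of weights over indicators, and evolution is free to drive any weight to zero:

```python
import random

random.seed(42)

# Synthetic sales history with a strong 30-day cycle plus noise.
history = [100 + 40 * ((day % 30) < 7) + random.uniform(-5, 5)
           for day in range(2 * 365)]

def indicators(day):
    # Hypothetical indicator set: sales 7 days ago and 30 days ago.
    return [history[day - 7], history[day - 30]]

def forecast_error(weights):
    # Fitness: total absolute forecast error over the whole history.
    error = 0.0
    for day in range(30, len(history)):
        prediction = sum(w * x for w, x in zip(weights, indicators(day)))
        error += abs(prediction - history[day])
    return error

def evolve(pop_size=30, generations=50):
    population = [[random.random(), random.random()] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=forecast_error)
        parents = population[:10]
        # Refill with mutated copies of the survivors; a weight can hit 0.
        population = parents + [
            [max(0.0, w + random.gauss(0, 0.05)) for w in random.choice(parents)]
            for _ in range(pop_size - 10)
        ]
    return min(population, key=forecast_error)

best = evolve()
print(best)  # the 7-day weight tends toward 0, the 30-day weight toward 1
```

Because this fake history repeats every 30 days, evolution learns to lean on the 30-day-ago indicator and to zero out the recent one, which is the same kind of "broken-looking but correct" result described above.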

Brian MacKay
Brian did you try any other approaches? Any reason why you went with GAs over a traditional statistical approach?
Alex
+1  A: 

One of the most interesting things in AI for me is a very old discussion started by Rodney Brooks about his behavioural architecture, the Subsumption architecture. He completely abandons symbolic representation, arguing that you should use the world as its own model. This saves the robot from generating a wrong world view and from all the complicated issues of correcting that model. He has published many interesting books and was one of the first people in the embodied cognition approach, which is used a lot in research at the moment. Interesting reading material can be found at http://people.csail.mit.edu/brooks/index.html. Some of his later publications get very philosophical, but the earlier descriptions of the robots and how their behavior emerged from a simple set of rules and actions are worth reading.
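To make the idea concrete, here's a toy subsumption-style controller (my own illustrative example, not Brooks's actual robot code). Each behaviour maps the current sensor readings directly to an action, and a higher layer that produces an action suppresses -- "subsumes" -- everything below it. There is no world model anywhere, only the current reading:

```python
def wander(sensors):
    # Lowest layer: always-applicable default behaviour.
    return "move-forward"

def avoid(sensors):
    # Middle layer: only fires when an obstacle is close.
    if sensors["obstacle_distance"] < 0.5:
        return "turn-left"
    return None  # defer to lower layers

def escape(sensors):
    # Highest layer: bumper contact overrides everything.
    if sensors["bumper_pressed"]:
        return "reverse"
    return None

LAYERS = [escape, avoid, wander]  # highest priority first

def control(sensors):
    for layer in LAYERS:
        action = layer(sensors)
        if action is not None:   # this layer subsumes the ones below
            return action

print(control({"obstacle_distance": 2.0, "bumper_pressed": False}))  # move-forward
print(control({"obstacle_distance": 0.3, "bumper_pressed": False}))  # turn-left
print(control({"obstacle_distance": 0.3, "bumper_pressed": True}))   # reverse
```

Surprisingly lifelike wandering/avoiding behaviour emerges from just these few reactive rules, which is the heart of Brooks's argument.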

Janusz
Interestingly, I heard a few years back that Brooks was beginning to move towards Radical Behaviorism (B.F. Skinner's branch of Psychology), which makes total sense for a roboticist, given behaviorism's focus on the relationships between behavior and environmental stimuli.
MusiGenesis
+1 for Rodney Brooks.
MusiGenesis
A: 

There is an ambitious open-source Java library called CIlib that provides a host of Computational Intelligence methods. It is currently being used at the university level by a research group to advance their own research.

gpampara
LOL @ Gary...you had to advertise your own product here on SO....
The Elite Gentleman