views:

444

answers:

7
+5  Q: 

Brain modelling

Just wondering: we've reached 1 teraflop per PC, and we are still not able to model an insect's brain. Has anyone seen a decent implementation of a self-learning, self-developing neural network?

+3  A: 

I saw an interesting experiment mapping the physical neural layout of a rat's brain to a digital neural network, with weighting modelled on the neuron chemistry of each component, captured using MRI and other imaging techniques. Quite interesting. (New Scientist or Focus, two issues ago?)

IBM Blue Brain comes to mind http://news.bbc.co.uk/1/hi/sci/tech/8012496.stm

The problem is computation power, as you rightly point out. For a sequence of stimuli fed to a neural network, the number of calculations tends to grow exponentially as those stimuli reach deeper nested nodes, and any complex weighting algorithm means the time spent at each node can get expensive. Domain-specific neural maps tend to be quicker because they are specialized. Mammalian brains have many general paths, which makes it harder to teach them, and harder for a computer to model a real mammalian brain in a given amount of space/time.
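To make the per-node cost concrete, here is a minimal sketch of a single artificial neuron's evaluation (hypothetical toy code, not from any project mentioned here): each node computes a weighted sum of its inputs plus a bias, then applies an activation function, and every extra layer multiplies the number of these evaluations.

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of inputs, squashed by a sigmoid activation.
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# One forward step for a two-input neuron.
print(neuron([1.0, 0.5], [0.4, -0.6], 0.1))  # ≈ 0.5498
```

A network with many layers repeats this for every node on every stimulus, which is where the cost blows up.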

Real brains also have tons of cross-talk, like static (some people think this is where creativity or original thought stems from). Brains also don't learn using 'direct' stimulus/reward; they use past experience of unrelated matters to drive their own learning. Recreating the neurons in a computational space is one thing; recreating accurate learning is another. Never mind dopamine (octopamine in insects) and the other neurological chemicals.

Imagine giving a digital brain LSD or anti-depressants, as a real simulation. Awesome. That would be a complex simulation, I suspect.

Aiden Bell
+4  A: 

I think you're kind of assuming that our idea of how neural networks work is a good model for the brain at a large scale; I'm not sure that's a good assumption. Hell, not too many years ago we didn't think glial cells were important to mental function, and for a long time the accepted idea was that there is no neurogenesis after the brain matures.

On the other hand, neural networks do seem to handle some apparently complex functions pretty well.

So, here's a little puzzle question for you: how many teraflops or petaflops do you think a human brain's computation represents?

Charlie Martin
More than we have. We stand a better chance of growing a human brain and giving it digital input/output. Maybe overclocking it or specializing it a bit. Emotion-driven computing using artificial chemical stimuli; it could invoke pain receptors on wrong predictions.
Aiden Bell
@Aiden: I really hope you don't have any children. :-)
McWafflestix
:) If the brain has a problem with it, I will plug in my neuro-leads and have a virtual duel with our virtual bodies. Unless it's female. Then it's different.
Aiden Bell
Try exa-, or even zetta-, flops!
DeadHead
Well, we can make an estimate. The brain has about 100 billion = 10^11 neurons, and each neuron solves a Hodgkin-Huxley equation roughly every 100 milliseconds. One HH solve takes on the order of 10,000 floating-point operations. So I get 10^16 FLOPS; that's what, 10 petaFLOPS, no? Of course, then the question is *what values* to feed to each of those HH solutions.
Charlie Martin
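The estimate above is easy to check with quick arithmetic (all figures are the rough assumptions from the comment, not measurements):

```python
neurons = 1e11          # ~100 billion neurons
solves_per_sec = 10     # one Hodgkin-Huxley solve per ~100 ms
flops_per_solve = 1e4   # ~10,000 floating-point ops per solve

total_flops = neurons * solves_per_sec * flops_per_solve
print(f"{total_flops:.0e} FLOPS = {total_flops / 1e15:.0f} petaFLOPS")
# 1e+16 FLOPS = 10 petaFLOPS
```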
Haha, well, personally I don't believe computing power is going to be the issue here. The major problem is designing the AI so that it functions properly and learns as fast as humans (or other animals) do.
DeadHead
PetaFLOPS is not the right kind of benchmark here. It would be a game-changing event if we made something that could do the same kinds of things as a human brain, only 100x slower.
Albinofrenchy
Maybe one day we will see computers rated in units such as 12 GigaEinsteins.
Aiden Bell
+1  A: 

Yup: OpenCog is working on it.

sean riley
It's not really... implemented yet, and won't be for quite a while...
DeadHead
A: 

In 2007, they simulated the equivalent of half a mouse brain for 10 seconds at half the actual speed: http://news.bbc.co.uk/1/hi/technology/6600965.stm

Mark Cidade
That's the one (see my answer)
Aiden Bell
+1  A: 

Jeff Hawkins would say that a neural net is a poor approximation of a brain. His "On Intelligence" is a terrific read.

duffymo
Yes! You may want to check www.numenta.com and their NuPIC software. It is based on Hierarchical Temporal Memory technology, itself based on concepts developed by Jeff Hawkins in that book.
mjv
A: 

It's the structure. Even if we had computers today with the same or higher performance than a human brain (predictions differ on when we'll get there, but it's still a few years off), we would still need to program them. And while we know a lot about the brain today, there are still many, many things we do not know. And these aren't just details, but large areas that are not understood at all.

Focusing only on Tera-/Peta-FLOPS is like looking only at megapixels in digital cameras: it fixates on one value when many factors are involved (and a brain has a few more of them than a camera). I also believe that many of the estimates of just how many FLOPS would be needed to simulate a brain are way off, but that's a different discussion altogether.

Robert Kosara
Robert, that's exactly what I meant by the initial question: the processing power is sort of there, but we realise there is absolutely no understanding of how to use that power to model a simple learning process. (An idea for a startup? :-)
Andy
You might be able to evaluate the potential for a startup if you are an AI expert. These things are usually university spin-outs.
Aiden Bell
+1  A: 

Just wondering: we've reached 1 teraflop per PC, and we are still not able to model an insect's brain. Has anyone seen a decent implementation of a self-learning, self-developing neural network?

We can already model brains. The question these days, is how fast, and how accurate.

In the beginning, effort was expended on finding the most abstract representation of a neuron that kept the fewest physical properties needed.

This led to the invention of the perceptron at Cornell University, which is a very simple model indeed. In fact, it may have been too simple: the famous MIT AI professor Marvin Minsky co-wrote a book showing that a single-layer perceptron cannot learn XOR (a basic logic gate that every computer we have today can emulate). Unfortunately, that result plunged neural network research into the dark ages for at least 10 years.
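The XOR limitation is easy to demonstrate with a toy single-layer perceptron (a hypothetical illustration, not code from any of the work mentioned): the same training rule that masters the linearly separable AND function can never get all four XOR cases right, because no single line separates XOR's classes.

```python
def train_perceptron(samples, epochs=50, lr=0.1):
    # Classic perceptron learning rule on two inputs.
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

w, b = train_perceptron(AND)
print([predict(w, b, *x) for x, _ in AND])  # [0, 0, 0, 1] — learns AND

w, b = train_perceptron(XOR)
print([predict(w, b, *x) for x, _ in XOR])  # never [0, 1, 1, 0] — XOR is not linearly separable
```

Adding a hidden layer removes the limitation, which is exactly what later multi-layer networks did.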

While probably not as impressive as many would like, there are already learning networks in existence that can do visual and speech learning and recognition.

And even though we have faster CPUs, a CPU is still not the same as a neuron. Neurons in our brain are, at the very least, parallel adder units. So imagine 100 billion simulated human neurons, each adding and sending its output across 100 trillion connections, with a "clock" of about 20 Hz. The amount of computation going on here far exceeds the petaFLOPS of processing power we have, especially since our CPUs are mostly serial rather than parallel.
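Rough arithmetic on those figures (the counts are the answer's assumptions, not measurements) shows the scale of the parallelism involved:

```python
connections = 1e14   # ~100 trillion synaptic connections
clock_hz = 20        # ~20 updates per second

events_per_sec = connections * clock_hz
print(f"~{events_per_sec:.0e} synaptic events per second")
# ~2e+15 synaptic events per second
```

Even at a single floating-point operation per synaptic event, that is on the order of 2 petaFLOPS of fully parallel work every second.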

Unknown