While reading some papers about the Turing completeness of recurrent neural nets (for example: Turing computability with neural nets, Hava T. Siegelmann and Eduardo D. Sontag, 1991), I got the feeling that the proof given there is not really that practical. For example, the referenced paper needs a neural network whose neuron activations must have infinite precision (to reliably represent any rational number). Other proofs need a neural network of infinite size. Clearly, that is not really practical.
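
To make the precision issue concrete, here is a toy sketch (my own simplification, not the exact construction from the paper) of how such a proof stores a whole stack of bits inside a single activation value in [0, 1], and why finite floating-point precision breaks it:

    from fractions import Fraction

    # Toy sketch: the bit string b1 b2 ... bn is stored as sum_i (2*b_i + 1) / 4**i.
    # A Siegelmann/Sontag-style network computes these updates with affine maps
    # and saturations; here I just do the arithmetic directly.

    def push(s, b):
        return s / 4 + Fraction(2 * b + 1, 4)   # prepend bit b

    def top(s):
        return 1 if s > Fraction(1, 2) else 0   # read the first bit

    def pop(s):
        return 4 * s - (2 * top(s) + 1)         # drop the first bit

    s = Fraction(0)
    for b in [1, 0, 1, 1, 0, 0, 1]:
        s = push(s, b)

    out = []
    while s > 0:
        out.append(top(s))
        s = pop(s)
    print(out)   # [1, 0, 0, 1, 1, 0, 1] -- the pushed bits in LIFO order

    # With 64-bit floats instead of exact rationals, every push consumes about
    # two bits of mantissa, so after a few dozen pushes the "stack" is silently
    # corrupted -- which is exactly the infinite-exactness requirement in practice.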

But now I have started to wonder whether it makes sense at all to ask for Turing completeness. By the strict definition, no computer system today is Turing complete, because none of them can simulate the infinite tape.

Interestingly, programming language specifications most often leave open whether the language is Turing complete or not. It all boils down to the question of whether it will always be able to allocate more memory and whether the function call stack can grow without bound. Most specifications don't really specify this. Of course, all available implementations are limited here, so all practical implementations of programming languages are not Turing complete.

So what you can say is that all computer systems are just as powerful as finite state machines and not more.

And that brings me to the question: How useful is the term Turing complete at all?

And back to neural nets: any practical implementation of a neural net (including our own brain) will not be able to represent an infinite number of states, i.e. by the strict definition of Turing completeness, they are not Turing complete. So does the question whether neural nets are Turing complete make sense at all?

The question whether they are as powerful as finite state machines was answered much earlier (by Minsky in 1954; the answer, of course: yes) and also seems easier to answer. I.e., at least in theory, that was already the proof that they are as powerful as any practical computer.
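
For illustration (my own toy example, not Minsky's construction): threshold units already give you boolean gates, and networks of such gates with feedback and a clock give you any finite state machine.

    # Toy threshold "neurons": output 1 iff the weighted sum of inputs exceeds 0.
    def unit(weights, bias):
        return lambda *x: int(sum(w * xi for w, xi in zip(weights, x)) + bias > 0)

    AND = unit([1, 1], -1.5)
    OR  = unit([1, 1], -0.5)
    NOT = unit([-1], 0.5)

    print(AND(1, 1), OR(0, 1), NOT(1))   # 1 1 0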


Some other questions which are more about what I really want to know:

  • Is there any theoretical concept which says something more specific about the computational power of a computer (given its limited memory)?

  • How can you compare the computational power of practical implementations of neural nets with that of computers? (Turing completeness is not useful here, as argued above.)

A: 

Basically it means that with a programming language or architecture that is Turing complete, you can execute a wide variety of algorithms... mostly, any kind of them.

Non-Turing-complete languages are much more limited in what they can express.

The OP obviously understands what Turing complete means...
BlueRaja - Danny Pflughoeft
+6  A: 

When modern computers are said to be Turing complete, there is an unspoken exception for the infinite storage device Turing described, which is obviously impossible on a finite physical computation device. If a computation device can do everything a Turing machine can do (infinite storage notwithstanding), it is Turing complete for all practical intents and purposes. By this less strict definition of Turing completeness, yes, it's possible that many neural networks are Turing complete.

It is of course possible to create one that is not Turing complete.

codeelegance
+4  A: 

To partially address your second question:

Neural networks have the property of being universal approximators - that is, they can approximate any (continuous) function to an arbitrary degree of precision. It's the "degree of precision" part that keeps the neural network from needing to be infinite.
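
A quick sketch of that trade-off (my own toy example, using random hidden weights and a least-squares fit of the output weights just to keep it short): the more hidden units the network has, the more precisely it approximates the target function.

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(-np.pi, np.pi, 400)[:, None]
    y = np.sin(x).ravel()                         # target function to approximate

    for n_hidden in (2, 8, 32, 128):
        W = rng.normal(size=(1, n_hidden))        # random input-to-hidden weights
        b = rng.normal(size=n_hidden)             # random hidden biases
        H = np.tanh(x @ W + b)                    # hidden activations
        beta, *_ = np.linalg.lstsq(H, y, rcond=None)   # hidden-to-output weights
        err = np.max(np.abs(H @ beta - y))
        print(f"{n_hidden:4d} hidden units -> max error {err:.4f}")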

Gabe Moothart
+15  A: 

The point of stating that a mathematical model is Turing complete is to reveal the capability of the model to perform any computation, given a sufficient (i.e. unbounded) amount of resources, not to show whether a specific implementation of the model actually has those resources. Non-Turing-complete models cannot handle a certain set of computations, even with unlimited resources, which reveals a difference in the way the two kinds of models operate, even when they have limited resources. Of course, to prove this property, you do have to assume that the models are able to use an unbounded amount of resources, but the property is relevant even when resources are limited.

metaliving
+1: Like all things in math involving infinity, it may not be physically realizable, but that does not make it a useless concept.
BlueRaja - Danny Pflughoeft
I'm still a bit curious about what "given a sufficient amount of resources" exactly means (in a mathematical sense), and also what it would mean for a neural network (adding more neurons to it is not really comparable to adding more memory to a computer).
Albert
Well, in feedforward multi-layer neural networks, you can think of adding more neurons as improving the precision of the network. In other words, the precision of the network on a given problem is (non-linearly) correlated with the number of neurons and layers. This is in contrast with a traditional algorithm, where there is no precision issue: there is a certain threshold in the amount of resources, above which the problem can be solved and below which solving the problem with the given algorithm is impossible.
metaliving
I don't really have feedforward networks in mind. Those are clearly (under any relaxed interpretation) not Turing complete. You need a recurrent network to get Turing computational power.
Albert
+1  A: 

It is almost always good to know which class in the Chomsky hierarchy your system is in. This is especially important for the more restricted classes, like regular languages / finite automata vs. context-free languages. Being able to recognize which class the problem you are trying to solve belongs to is also important; otherwise one might try to do silly things like parsing HTML or XML with regular expressions only, which is impossible.
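
For example (a toy sketch of that boundary, not a real HTML parser): checking that brackets nest properly needs at least a counter or a stack, which a finite automaton - and hence a classic regular expression - does not have.

    import re

    def balanced(s):
        depth = 0                      # the unbounded counter a regular language lacks
        for ch in s:
            if ch == "<":
                depth += 1
            elif ch == ">":
                depth -= 1
                if depth < 0:
                    return False
        return depth == 0

    # Any fixed regular expression can only "count" up to some bounded depth;
    # this one handles nesting of depth <= 3 and nothing deeper.
    shallow = re.compile(r"^(<(<(<>)*>)*>)*$")

    print(balanced("<<<<>>>>"), bool(shallow.match("<<<<>>>>")))   # True False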

Knowing that your formalism or system is Turing complete makes a statement that you can build whatever you want with it. It does not say anything about practicality, just about the possibility or impossibility of solving problems. This is painfully true when considering Turing tarpits, and there are also many Turing-complete systems that are made for specific niche purposes which no one should ever dream of using for general-purpose work in a production setting.

In short, good knowledge of the Chomsky hierarchy will help you in very many situations, not only for choosing the right type of parser (regexp, pushdown/CFG, or more powerful), but also for choosing the right type of machine or formalism for expressing processes in general.

johanbev
+5  A: 

I think the concept of Turing completeness is not intended to tell us whether a particular computer can perform a particular task.

Rather, its purpose is to tell whether a particular language is capable of expressing a particular task. That is, I'd say it's really about expressing an algorithm, not performing it.

As neural networks don't have a language, it's a question of expressing an algorithm in terms of a neural network, rather than the capability of that network. So I don't know the answer to the last bit of your question!

Sanjay Manohar
+2  A: 

I think an important point about the Turing machine is that for any given input and program, the machine will only need a finite amount of tape, assuming it halts at some point. That's why I would say the term "Turing complete" is useful: you only need finite memory to run one specific Turing-complete program on some specific input (if the program halts). But if you have a non-Turing-complete machine/language/technology, it won't be able to simulate certain algorithms, no matter how much memory you add.
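
To illustrate the "finite tape for a halting run" point (a toy simulator of my own, not a formal argument): if the tape is grown lazily, a halting run only ever allocates the cells its head actually visits.

    # Minimal Turing machine runner with a lazily grown, one-sided tape.
    def run(program, tape, state="start", head=0, max_steps=10_000):
        for _ in range(max_steps):
            if state == "halt":
                return tape
            symbol = tape[head] if head < len(tape) else 0
            write, move, state = program[(state, symbol)]
            if head == len(tape):
                tape.append(0)          # allocate a cell only when the head reaches it
            tape[head] = write
            head += move
        raise RuntimeError("did not halt within max_steps")

    # Unary increment: skip over the 1s, write one more 1, halt.
    inc = {
        ("start", 1): (1, +1, "start"),
        ("start", 0): (1,  0, "halt"),
    }

    print(run(inc, [1, 1, 1]))   # [1, 1, 1, 1] -- only one extra cell was needed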

nikie
+1 I was just in the process of writing almost exactly the same answer when yours came in.
JeremyP