I've read on Wikipedia that neural-network functions defined over a field of arbitrary real/rational numbers (along with algorithmic schemas and the speculative 'transrecursive' models) have more computational power than the computers we use today. Admittedly it was a page of the Russian Wikipedia (ru.wikipedia.org) and may not be properly sourced, but that's not the only place I've seen such... rumors.

Now, the thing that I really do not understand is: how can a string-rewriting machine (NNs are string-rewriting machines just as Turing machines are; only the programming language is different) be more powerful than a universal Turing machine?

Yes, the descriptive instrument is really different, but the fact is that any function of this class can be (easily or not) converted into a legitimate Turing machine. Am I wrong? Am I missing something important?
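To illustrate the point being asked about: any neural net with finite-precision weights computes an ordinary computable function, so it can be simulated step by step on a conventional (Turing-equivalent) computer. A minimal sketch in plain Python, with made-up weights chosen purely for illustration:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(weights, biases, inputs):
    """Evaluate a feedforward net layer by layer.

    This is an ordinary effectively computable function, hence
    simulable by a Turing machine -- nothing 'trans-recursive' here.
    """
    activations = inputs
    for W, b in zip(weights, biases):
        activations = [
            sigmoid(sum(w * a for w, a in zip(row, activations)) + bi)
            for row, bi in zip(W, b)
        ]
    return activations

# Hypothetical 2-2-1 network; the weight values are arbitrary.
weights = [[[1.0, -1.0], [0.5, 0.5]], [[2.0, -2.0]]]
biases  = [[0.0, 0.0], [0.0]]
out = forward(weights, biases, [1.0, 0.0])
```

The super-Turing claims in the literature rely on weights being exact arbitrary real numbers, which is precisely what this (or any physical) simulation cannot provide.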

What causes people to say that? I do know that the phenomenon of undecidability is widely accepted today (though, from what I've read, not consistently proven), but I really don't see the smallest chance of NNs being able to solve that particular problem.

Add-in: By "not consistently proven according to what I've read" I meant that you might want to take a look at the papers of A. Zenkin (a Russian mathematician) from the mid-90s onward, where he persuasively argues against G. Cantor's concepts, including transfinite sets, uncountable sets, the diagonalization method (the method used in Turing's proof of undecidability), and maybe others. Even Gödel's incompleteness theorems were only proven rigorously in the 21st century... That's all just to bring Zenkin's work into the post, since I don't know how widespread that knowledge is in the CS community, so forgive me if it looked stupid.

Thank you!

+3  A: 

From what little research I've done, most of these claims of trans-Turing systems, or of the incorrectness of Cantor's diagonalization proof, etc. are, shall we say, "controversial" in legitimate mathematical circles. Words like "crank" get thrown around frequently.

Obviously, the strong Church-Turing thesis remains unproven, but as you pointed out, there's really no good reason to believe that artificial neural networks possess computational capabilities beyond general recursion/UTMs/lambda calculus/etc.

Derrick Turk
I totally agree with what you said about trans-Turing etc., but the incorrectness of Cantor's work is a separate topic, and I could prove it in a few lines here if you'd like :) And, again, my belief in the rise (uh... resurrection) of logicism goes along with your belief in the Church-Turing thesis :)
Bubba88
Thx for your answer BTW
Bubba88
@Bubba88: I am curious, what is your proof of the incorrectness of Cantor? I am just wondering if this is not a case of something like not accepting the Axiom of Choice or the like.
Ukko
@Ukko: Actually, I'm not so confident in what I said at the moment; I had a conversation with a modern scientist and he rather convinced me that I was mistaken. Or at least not totally correct :) But if you're curious, you could take a look at Zenkin's interview (in English) and the actual thesis (in Russian, sorry :)). To explain it in just two words: it's a sort of rejection of actual infinity itself, while still being a 'classical' and not a 'constructive' (AFAIK) mathematician. That's what the 'disproof' is based on. But that's really a great subject to discuss and I'm not well versed in it :)
Bubba88
Here are the links. A website dedicated to A.A. Zenkin: http://webcache.googleusercontent.com/search?q=cache:http://alexzen.by.ru/ The interview: http://webcache.googleusercontent.com/search?q=cache:http://alexzen.by.ru/papers/ng-02/contr_rev.htm And... everything else was not found in the cache. Anyway, there is a paragraph on alexzen.by.ru starting with: 'III.02. WHETHER THE LORD EXISTS IN THE G.CANTOR TRANSFINITE PARADISE?' Thx for your attention :)
Bubba88
In that case this is actually an old position; he probably also does not accept proof by induction and certain types of limits. Mathematically there is nothing wrong with that; it's like asking "what can I prove if all sets are finite?" The reason these positions lost historically is that the answer is usually "nothing much of interest." I think this often comes from an intuitionist starting point: a sort of "I can't see infinity, so it doesn't exist." Math doesn't care, though: even if the universe is proven to be finite, Cantor is still correct.
Ukko
+2  A: 

From a theoretical viewpoint, I think you're absolutely correct -- neural networks provide very little that's new or different.

From a practical viewpoint, neural networks are simply a way of casting solutions into a form where parallel execution is natural and easy, whereas Turing machines are sequential in nature, and executing their sequences in parallel is relatively difficult. In fact, most of what's been done in CPU development over the last few decades has basically been figuring out ways to execute code in parallel while maintaining the illusion that it's executing in sequence. A lot of the hardware in a modern CPU is devoted to maintaining that illusion, and the degree to which parallel execution has become explicit is mostly an admission that maintaining the illusion has become prohibitively expensive.
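The parallelism point above can be sketched concretely: within one layer, each neuron's output depends only on the previous layer, so all neurons in a layer can be evaluated independently. A minimal illustration (the ReLU activation and the weight values are arbitrary choices for the sketch):

```python
from concurrent.futures import ThreadPoolExecutor

def neuron(weights, bias, inputs):
    # Each neuron reads only the previous layer's outputs,
    # so neurons within a layer have no data dependencies.
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return max(0.0, s)  # ReLU, chosen arbitrarily for this sketch

def layer_parallel(W, b, inputs):
    # Evaluate every neuron of the layer concurrently.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(lambda wb: neuron(wb[0], wb[1], inputs),
                             zip(W, b)))

out = layer_parallel([[1.0, -1.0], [0.5, 0.5]], [0.0, 0.0], [2.0, 1.0])
# out == [1.0, 1.5]
```

This changes how fast an answer arrives, not what is computable: the same function runs sequentially with identical results.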

Jerry Coffin
Yes) There is of course much more to NNs from a practical viewpoint (expressiveness, grounding of symbols, and much more), but the question is not about that. I just thought that there _might_ be a source that could give a stimulus to saying that NNs are transrecursive, and decided to ask on SO)
Bubba88
@Bubba88: Right -- my point was that I haven't seen anything very convincing in that direction.
Jerry Coffin
The other thing about NNs is that they are programmed differently from von Neumann machines, and therefore are better in some areas and worse in others. However, they can easily be implemented on von Neumann machines, and therefore can have no greater descriptive power.
David Thornley
+1  A: 

From a layman's perspective, I see that

  • NNs can be more effective at solving some types of problems than a Turing machine, but they are not computationally more powerful.
  • Even if NNs were provably more powerful than TMs, execution on current hardware would render them less powerful, since current hardware is only an approximation of a TM and can only solve problems computable by a bounded TM.
mdma
+1  A: 

Anyone who "proves" that Cantor's diagonal method doesn't work proves only their own incompetence. Cf. Wilfred Hodges' An editor recalls some hopeless papers for a surprisingly sympathetic explanation of what kind of thing is going wrong with these attempts.
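The diagonal method mentioned above has a finite core that is easy to demonstrate: given any list of n binary sequences, flipping the i-th digit of the i-th sequence yields a sequence that differs from every entry in the list. A minimal sketch:

```python
def diagonal_complement(rows):
    """Build a sequence that differs from rows[i] at position i.

    This is the finite core of Cantor's diagonal argument: no list
    can contain its own diagonal complement.
    """
    return [1 - rows[i][i] for i in range(len(rows))]

# Arbitrary example list of four binary sequences.
rows = [
    [0, 1, 0, 1],
    [1, 1, 0, 0],
    [0, 0, 0, 1],
    [1, 0, 1, 1],
]
d = diagonal_complement(rows)
```

The infinite case works identically: any claimed enumeration of all infinite binary sequences omits its own diagonal complement, so no such enumeration exists.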

You can provide speculative descriptions of hyper-Turing neural nets, just as you can of other kinds of hyper-Turing computers: there's nothing incoherent in the idea that hypercomputation is possible. Speculative descriptions of mechanical hypercomputers have been given in which the machine is stipulated to have infinitely fine engravings that encode an oracle for the halting problem; the existence of such a machine is consistent with Newtonian mechanics, though not with quantum mechanics. The Church-Turing thesis, rather, says that such machines can't be constructed, and there are two reasons to believe it is correct:

  1. No such machines have ever been constructed; and
  2. Work has been done connecting models of physics to models of computation, going back to work in the early 1970s by Robin Gandy, with recent work by people such as David Deutsch (e.g., Machines, Logic and Quantum Physics) and John Tucker (e.g., Computations via experiments with kinematic systems), which argues that physics doesn't support hypercomputation.

The main point is that the truth of the Church-Turing thesis is an empirical fact, and not a mathematical fact. It's one that we can have confidence is true, but not certainty.

Charles Stewart
I did like your description of hyper-computers, though the idea of a mechanical oracle is one I hadn't come across while searching for such models. I just don't want to argue much, but to clarify things I must say (about Cantor's 'diagonalization'): the point of Zenkin's work was to prove that there are no uncountable sets, and that Cantor's 'transfinite paradise' was one of science's greatest failures of the last 100 years. My belief in the correctness of Zenkin's work is very firm, though that may not sound very convincing; I could translate his work (or find a translation) if that would be helpful. Thx
Bubba88
+1  A: 

You may be interested in S. Franklin and M. Garzon, Neural computability. There is a preview on Google. It discusses the computational power of neural nets and also states that it is rumored that neural nets are strictly more powerful than Turing machines.

Albert
Is that for some expanded definition of neural net? Because the ones we use at present run inside Turing-equivalent machines (even a smidge weaker, lacking infinite memory), so a claim of strictly more power is a bit tricky. Are they assuming infinite precision or something like that?
Ukko
I haven't read the full paper (the preview strips out some parts and I haven't found it elsewhere), so I haven't really read the arguments. Anyway, you have to distinguish strictly between practical implementations and the theory. And you also have to realize that a computer in practice is not equivalent to a Turing machine (because it has no infinite memory). For neural nets, it can make a difference, for example, if you are calculating without any bound on exactness; all simulations on a computer are inexact to some degree.
Albert
It really isn't a very interesting result, then, if you think about it. It says that a computational machine built from an uncountable set of objects can calculate something that cannot be calculated by a countably infinite machine. I am not really sure what such a thing would look like; it would be very interesting to see something in that set, though.
Ukko