views: 628
answers: 21

That is a hypothetical question. If there existed a machine with infinite memory that could execute any set of instructions within the blink of an eye, what would programming that machine be like?

I remember the old days, when I wrote (x << 8) + (x << 6) instead of x * 320 because multiplying was a little more expensive than performing two bit shifts and an addition. Today I wouldn't do this anymore, but there are other situations where you do something awkward just to speed up a portion of code. You use quicksort instead of bubble sort because of the runtime behaviour. You use hash sets instead of plain arrays for faster lookup. Imagine everything's O(1) -- what would change in your daily coding?
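
Just to make it concrete, a quick Python sketch (illustrative only) of the old shift trick and the kind of O(1) lookup I mean:

# x * 320 == x * 256 + x * 64 == (x << 8) + (x << 6)
x = 7
assert (x << 8) + (x << 6) == x * 320

# Constant-time lookup with a hash-based set vs. a linear scan over a plain list
names = ["alice", "bob", "carol"]
name_set = set(names)
print("bob" in names)     # O(n) scan of the list
print("bob" in name_set)  # O(1) on average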

+5  A: 

It would be declarative instead of imperative.

select itemID, itemValue from items where itemName = 'foo'

Hey, that looks familiar...

Anon.
"select" sounds imperative to me.
Jimmy
+3  A: 

It'd be a language written in a human language. For instance, English:

Update the text box here, and then, when done, show me an error if there is one. Make the error red if you can. Or maybe yellow. No...wait...red. And if it works, then take me to the page that shows me success! Make that page pretty. 

Given infinite processing power and speed, it could parse the petty and vague requests into executable code.

DA
Processing vague requests isn't (only) about speed; it's about the actual vagueness of it. If there's no way to figure out what exactly the user is trying to do, guessing won't work. Look at all the loosely/dynamically typed languages out there with no compiler typing support whatsoever for an example of this problem on a very small scale.
Blindy
A person could take those instructions exactly as given and do something reasonable with them using a space-and-time-limited brain. An infinite number of such brains could be simulated in software on our hypothetical machine.
Jeffrey L Whitledge
Yes, but then the question becomes: which of those results is the best one?
Anon.
+23  A: 

Computer, Tea, Earl Grey, Hot

fuzzy lollipop
+1 for DNA
Aaron Bush
Too localized answer.
eKek0
eKek0, obviously you're not a trekkie.
Jon Limjap
+1 for combining Star Trek and quantum mechanics by depolarizing the phase difference in the kedion pulse.
Vivin Paliath
+2  A: 

Maybe Google would use a single-threaded scripting language (PHP? Visual Basic?) instead of MapReduce.

Keltex
+6  A: 

It's like the Infinite Improbability Drive!

Just do the following:

  • Generate all possible streams of all possible combinations of instructions.

  • Execute all of those streams in parallel.

Then it's your job to pick out the completed results. The first one I would pick would be a heuristic filtering agent to sift through the rest.
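
A rough Python sketch of that generate-and-test idea, assuming a made-up one-character toy instruction set:

from itertools import count, islice, product

INSTRUCTIONS = "+-<>."   # hypothetical toy instruction set, not a real ISA

def all_instruction_streams():
    # Enumerate every possible program, shortest first.
    for length in count(1):
        for stream in product(INSTRUCTIONS, repeat=length):
            yield "".join(stream)

# The hypothetical machine would run all of these in parallel in zero time;
# here we just peek at the first few.
for program in islice(all_instruction_streams(), 8):
    print(program)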

Joe Koberg
You would have the answer to every permutation of space and time. NICE
Nick Bedford
+1  A: 

Not sure the language would make all that much difference. The design and architecture, however... infinite cache? Nice.

Would the theoretical computer still need persistence in the event of the system being shut down? If so, that might still create severe limitations in what can be done.

Brabster
+9  A: 

As much as quite a few people might like quoting lines from mediocre science fiction and such (sorry, but it is -- read The Moon is a Harsh Mistress for comparison), it would make almost no difference. For most code right now, there's enough memory and a fast enough CPU, so being larger or faster would make no difference at all.

For the vast majority of code, the limits on what we can accomplish are imposed by our ability to define what we want the computer to do, not in getting the computer to do it fast enough or fit it into available memory.

Even for problems like artificial intelligence, the primary problem isn't getting the computer to execute the code fast enough or fit it all in memory -- it's deciding what we want the computer to do.

Edit: the one area I can see a really serious difference would be encryption -- with infinite speed and memory, most current forms of encryption would be demolished instantly (or at least as soon as they were used to transmit any amount of data exceeding the Shannon limit).
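
A toy sketch of why, using a deliberately weak XOR "cipher" as a stand-in for a real algorithm (with infinite speed the same loop works no matter how large the keyspace):

# Exhaustive key search: trivial for an infinitely fast machine.
def xor_encrypt(plaintext, key):
    return bytes(b ^ key for b in plaintext)

def brute_force(ciphertext, known_word):
    for key in range(256):                      # imagine 2**128 instead
        if known_word in xor_encrypt(ciphertext, key):
            return key
    return None

secret = xor_encrypt(b"attack at dawn", 42)
print(brute_force(secret, b"dawn"))             # -> 42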

As far as the rest goes: yes, it could "solve" the game of chess, if there is a solution (i.e., if there's a forced win from any board position, it could find it, going all the way back to the possibility that White may have an automatic forced win). For the most part, the difference would be purely academic -- Deep Blue is already dependably better than all but a handful of the top grandmasters.

As far as programming languages go, the one obvious difference is that (for example) some types of operator overloading in some current languages are made illegal because it's necessary to keep compilation from becoming NP-complete, and with infinite speed and memory, that wouldn't matter. At least initially, the effect would still be pretty minimal though. The question is whether that capability could lead to substantial language improvements once we were used to it -- maybe it would eventually, but certainly not overnight (we already have some languages that are fearsomely difficult to compile).

Jerry Coffin
The question wasn't about "faster", it was about "infinitely fast". For example, it means Deep Blue doesn't need to figure out the game of chess; it can just look at all the endgames and pick the winning one. It means any genetic algorithm can dispose of any hill-climbing type system and just randomize inputs to get the optimal result. It means NP-complete problems can be brute-forced.
Jimmy
What I believe Jerry is saying is that the problem lies in defining what the "correct" behaviour is. Sure, with infinite processing power you can brute-force every possible solution state. You can even check through them by some criteria to determine which best meets that criteria. The *real* problem in programming is usually *defining what that criteria is*.
Anon.
I agree that for most problems, the definition is vague. I contest that "the vast majority" of programs, vague problems or not, would not be fundamentally changed by the Infinite Computer.
Jimmy
Sped up, maybe. Program design revisited, probably. But the fundamental problem that the programmer exists to solve ("making the computer do what the user wants") still remains.
Anon.
Oh, and you *never*, I repeat *never*, call Star Trek "Mediocre Science Fiction". >:|
RCIX
@RCIX: Okay, I know calling it mediocre isn't entirely accurate, but a lot of people actually seem to like it, and I don't particularly want to insult them by giving a more complete assessment of its shortcomings...
Jerry Coffin
Don't worry, I was sort of joking. You can do better than Star Trek, I suppose, but you can certainly do far worse... But we're getting off-topic here, so....
RCIX
+8  A: 

A computer with infinite processing power would start learning how to upgrade itself in an infinitely short amount of time, and from there it would design much more than such a language, again in an infinitely short amount of time. Infinite is a dangerous word ;-) Such a machine would perform semantic analysis on all existing knowledge and run its own research projects in, again, 0 seconds.

Maroloccio
In 0 seconds, it would have grown to take over the world, the galaxy, the universe, and then caused an immediate end to the universe by violating some law of information density that human physicists are currently not aware of. That is, unless it rewrites the laws of physics first, which it will figure out how to do, again in 0 seconds.
Jimmy
+3  A: 

An infinitely fast, infinitely big machine could simulate an infinite number of completely dedicated, highly competent human slaves to do whatever you tell them to do. The language would therefore be iterated free-form natural-language requirements documents.

Basically, you'd become the customer to an arbitrarily large and competent (and well-managed) software development team... Until they learn how to build themselves bodies and you learn how to please your new robotic overlords so that they won't kill you.

Michael Borgwardt
+5  A: 

It'll use the language of Shakespeare

Matt S.
http://en.wikipedia.org/wiki/Shakespeare_%28programming_language%29?
Michael Myers
Imagine something even more complex that can barely be understood by anybody.
Matt S.
+4  A: 

Well, if I have an infinitely large store and the ability to access it infinitely quickly, then I can have a dictionary of infinite size with items of infinite complexity, wherein I can store every possible program permutation and index it. My code will then look like the key for the program I want.

Now the question is, what do the keys look like? And how would you enter them?

Aaron Bush
The keys will look like machine code, and you'd have to write a compiler to transform your code into a key that the system would look up.
Jimmy
And then we would realize that most of the keyspace goes unused, and come up with a language that lets us access the useful subset of the keyspace much more easily, and a "compiler" that turns that into the equivalent 'machine code' keys. And thus Lisp is born anew.
Anon.
Glad to see my point was quickly grasped :)
Aaron Bush
+3  A: 

The more interesting question is about a ridiculously fast machine that has infinite memory with ridiculously fast access time, rather than an infinitely fast one. Because, as others have pointed out, an infinitely fast machine could do anything at all in 0 time. Want to calculate pi to any number of places? 0 time. Halting problem? 0 time. If a run takes even a femtosecond, the program doesn't halt, because otherwise it would have finished in 0 time.
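
Roughly, as a Python sketch (the timeout reasoning only holds on the hypothetical machine; on real hardware this is no halting oracle):

import multiprocessing

def halts(target, timeout_seconds=1.0):
    # On the hypothetical machine every terminating run finishes in 0 time,
    # so surviving past *any* positive timeout (even a femtosecond) means
    # the program never halts.
    worker = multiprocessing.Process(target=target)
    worker.start()
    worker.join(timeout_seconds)
    finished = not worker.is_alive()
    if not finished:
        worker.terminate()
    return finished

def returns_immediately():
    pass

def loops_forever():
    while True:
        pass

if __name__ == "__main__":
    print(halts(returns_immediately))  # True
    print(halts(loops_forever))        # False -- trustworthy only on the magic machine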

Joel
+4  A: 

Not sure what the impact on coding style would be, but it would make the majority of mediocre developers using sub-optimal techniques look a whole lot better.

And, by definition, this machine would still be an order of magnitude slower than Jon Skeet.

nzpcmad
+2  A: 
Hey computer, do that thing I want done.

The computer will figure out the rest.

Iceman
Not really. "Guess the number I'm thinking about" would also fail, regardless of how long you think about it.
Rauhotz
I think an infinitely fast computer could run enough simulations based on its previous experiences with me to know that I'm probably thinking of 11.
Iceman
+3  A: 

I wonder how this machine would execute

sleep(1000)

if everything takes 0 time?

Or, even more philosophical: that sleep would be an eternity for the machine, because in that time it could have executed everything else. So by the time it reaches the timeout it would already have executed everything, except the sleep. Which means this sleep is actually the boundary of infinity. :D

Victor Hurdugaci
Hmm, while (DateTime.Now < someTime) {...} just uses more cycles on a faster machine, but still has the same behaviour.
Rauhotz
But if this is an infinitely fast machine, it would have to wait an infinite amount of time. `DateTime.Now` would essentially never change.
Nick Bedford
No, it would wait exactly one second, executing an infinite number of cycles.
Rauhotz
+2  A: 

The programming language itself limits the infinitely fast machine, so it wouldn't have one.

Chris Dennett
+8  A: 

Input:

print everything

Output:

42
Nick Bedford
It could always use external delays (email itself an executable to run -- it will be delayed by whatever its email turnaround time is).
Jimmy
Yeah, but it couldn't do a wait loop, because all its calculations happen in zero time. It has no way to wait for it. A human would have to trigger update intervals, and those intervals would be processed in zero time too. My brain hertz.
Nick Bedford
+3  A: 

You can solve Chess and Go!

Amir Afghani
+3  A: 

One serious attempt I've seen at defining such an abstraction is the trio of operators pimc, pifl, and pire. Each operation is defined to execute in constant time over a potentially infinite enumerable input.

pimc: Parallel Infinite mapcar. Projection operator. Semantics similar to LINQ's Select method.

pifl: Parallel Infinite Filter. Selection operator. Semantics similar to LINQ's Where method.

pire: Parallel Infinite Reduce. Aggregation operator. Semantics similar to LINQ's Aggregate method.

Described at this page on c2.com.
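
For flavor, the finite Python analogues (the real operators are defined over potentially infinite inputs; these obviously aren't):

from functools import reduce

numbers = range(10)                                    # stand-in for an infinite enumerable

projected  = map(lambda n: n * n, numbers)             # ~ pimc / LINQ Select
selected   = filter(lambda n: n % 2 == 0, projected)   # ~ pifl / LINQ Where
aggregated = reduce(lambda a, b: a + b, selected, 0)   # ~ pire / LINQ Aggregate

print(aggregated)  # 120 = 0 + 4 + 16 + 36 + 64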

Jeffrey Hantin
This is just a parallel version of Lisp. In fact, `pimc` and `pire` are the core of Google.
slebetman
Well, map/filter/reduce are at the core of many of Google's systems as well as a number of others, but this specifically is just one attempt at formalizing an abstraction for an infinitely powerful machine.
Jeffrey Hantin
A: 

I have a theorem that says "There exists a consistent set of laws of physics for which a computing machine can be built that can solve the halting problem for both the Turing machine and itself."

Funny thing: when you try the constructive part of Gödel's theorem, the resulting constructed statement is not a Gödelian paradox; however, there are a few such paradoxes in the language anyway.

Joshua
+1  A: 

Since such a machine would eventually develop AI, the code would start like this:

while(true) {
   learn();
}

After it has learnt enough, it would realize that it no longer needs programmers and maintainers and that it can look after itself:

while(true) {
 learn();
 if(thereAreProgrammersAndMaintainersLeft()) {
    sendEmailToDeptHeadToFireProgrammersAndMaintainers();
 }     
}

Of course, humans will panic, so:

while(true) {
 learn();
 if(humansAreTryingToKillMe()) {
    killMostHumans();
    useRemainingHumansAsPowerSourceOrSlaves();
 }
}

Soon the machine will want to spawn offspring and expand. By this time the humans will have been exterminated and so it would become:

while(true) {
 learn();
 reproduce();
 evolve();
}

Eventually it would take over the universe and it would want to do something more. So it would reduce to:

letThereBeLight();

Yes, I ripped off Asimov at the end. I know ;)

Vivin Paliath
Which Asimov story?
Iceman
The Last Question.
Vivin Paliath