views:

425

answers:

10

Inspired by this question

Suppose we had a magical Turing machine with infinite memory and unlimited CPU power.

Use your imagination as to how this might be possible, e.g. it uses some sort of hyperspace continuum to automatically parallelize anything as much as is desired, so that it can calculate the answer to any computable question - no matter what its time complexity or how many "logical steps" it actually requires - in one second.

However, it can only answer computable questions in one second... so I'm not positing an "impossible" machine (at least I don't think so)... For example, this machine still wouldn't be able to solve the halting problem.

What would the programming language for such a machine look like? All programming languages I know of currently have to make some concessions to "algorithmic complexity"... with that constraint removed, though, I would expect that all we would care about is the "expressiveness" of the programming language, i.e. its ability to concisely express "computable questions"...
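
To make that concrete, here's a rough sketch (Haskell, purely illustrative) of the kind of code I'd expect: a brute-force subset-sum "solver" that just states what is wanted and enumerates every candidate, because the machine makes the exponential search free:

    import Data.List (subsequences)

    -- Subset sum by exhaustive search: state *what* we want (a subset
    -- summing to the target) and try every candidate. On the magical
    -- machine the 2^n search is as good as any clever algorithm, so
    -- only the expressiveness of the code matters.
    subsetSum :: Int -> [Int] -> Maybe [Int]
    subsetSum target xs =
      case [s | s <- subsequences xs, sum s == target] of
        []      -> Nothing
        (s : _) -> Just s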

Anyway, in the interests of a hopefully interesting discussion, opening it up as community wiki...

+1  A: 

Maybe it would look more like pseudo-code than "real" code. After all, you wouldn't have to worry about any implementation details any more, because whichever way you go, it'll be fast enough.

n3rd
*Real programmers™* already do this and only try to write a more optimal solution if performance is an issue (so only for 5% of the code) ;)
delnan
Just to be a nitpicker: there's no such thing as a "more optimal" solution, just like there is no "worse worst case" ;-)
n3rd
+2  A: 

It could possibly be a Haskell-ish language. Honestly, it's a dream to code in. You program the "laws" of your types, classes, and functions and then let them loose. It's incredibly fun, powerful, and you can write some very succinct and elegant code. It's like an art.
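
For instance, here's a quick sketch of what I mean - the classic two-line quicksort reads almost like a definition of what a sorted list *is*:

    -- Quicksort as a declarative "law": the sorted list is the sorted
    -- smaller elements, then the pivot, then the sorted larger ones.
    qsort :: Ord a => [a] -> [a]
    qsort []       = []
    qsort (p : xs) = qsort [x | x <- xs, x < p]
                     ++ [p]
                     ++ qsort [x | x <- xs, x >= p]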

codebliss
+1 Any language that claims to be high-level should absolutely, positively be as abstract (i.e. independent of the underlying architecture) as e.g. Haskell. Sadly, this is usually not done - for performance considerations (the primitives in Java and various keywords in e.g. C/C++ mainly serve this purpose; remove them and the language is simpler, though a bit slower).
delnan
+1  A: 

You underestimate O(1). It means that there exists a constant C > 0 such that the time to compute the problem is bounded by C.

What you ignore is that the actual value of C can be large, and it can be (and usually is) different for different algorithms. You may have two algorithms (or computers - it doesn't matter), both O(1), but in one of them C may be a billion times bigger than in the other - and that one will be much slower, perhaps unusably slow in terms of wall-clock time.
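
A toy illustration (Haskell, invented for this comment): both functions below are O(1), since the work done doesn't depend on the input, yet their constants differ by roughly nine orders of magnitude:

    -- Both are O(1): the amount of work is independent of x.
    fastConst :: Int -> Int
    fastConst x = x + 1                              -- a few machine ops

    slowConst :: Int -> Int
    slowConst x = x + length (replicate (10 ^ 9) ()) -- ~10^9 steps, still O(1)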

sharptooth
+3  A: 
SendMessage travelingSalesman "Just buy a ticket to the same city twice already. You'll spend much more money trying to solve this than you'll save by visiting Austin twice."
SendMessage travelingSalesman "Wait, they built what kind of computer? Nevermind."
Chris Doggett
+1  A: 

Scalability would no longer be an issue. We'd have AIs way smarter than us. We wouldn't need to program at all; instead, the AI would figure out our intentions before we realized them ourselves.

e1i45
But see http://xkcd.com/534/.
Joe White
+3  A: 

This is not really logical. If a thing takes O(1) time, then doing it n times will take O(n) time, even on a quantum computer. It is impossible for *everything* to take O(1) time.

For example: Grover's algorithm, the one mentioned in the accepted answer to the question you linked to, takes O(√n) time to find an element in an unsorted database of n items. And that's not O(1).

Tomalak
A quantum computer could, of course, do the n instances in parallel, so the overall time would still be O(1). This does assume an arbitrarily large quantum computer (count(qubits) >> n for all n), but then that's part of the question.
Richard
That's absurd. The variable in the O is the size of the task, not the repeat count.
sharptooth
OK I've clarified my question. Forget time complexity. That's the point of the question after all... What would programming languages look like if you had good reason not to care about time complexity?
Paul Hollingsworth
When given the question "if X were true, what would the ramifications be?", it's rather beside the point to answer "X is absurd", particularly when the question refers to X as "magical", with "unlimited" and "infinite" characteristics, clearly indicating that the person asking the question already knows it's absurd. Exploring the effects which would result if it were true can still be useful (or at least interesting) despite X's absurdity.
Dave Sherohman
@sharptooth: If something takes one second, then doing it twice takes two seconds - because the size of the task has doubled. Both are denoted O(n). If something is O(1) (like indexing into an array), then doing it in a loop is O(n) again, where n is the loop count. It's just not logical to assume that *everything* is O(1).
Tomalak
@Tomalak: Doing the task twice is not doubling its size, it's just doing it twice. If you use bubble sort (O(n*n) complexity), it takes 4 times longer if you double the task size (the number of elements), but only 2 times longer if you just call it twice.
sharptooth
@sharptooth: We obviously define "task" differently. A bubble sort could be defined as O(1) if it is an atomic operation in the context. That the bubble sort *itself* has a time complexity >> O(1) is a different story. Let's say it has O(m^2), where "m" is the number of elements to sort. When it's part of a loop, the whole loop takes O(n), where "n" is the loop count. The point is that "m" is independent of "n". Since every complex operation is non-atomic, it cannot be O(1), even if its atoms are. Maybe I'm wrong, but I don't think even a quantum computer can solve *every problem* instantly.
Tomalak
@Tomalak: Regardless of that, O(1) does not mean "instantly" - it just means the same constant time bound for every task of a given class. And this constant can be very large.
sharptooth
@sharptooth: I think we both agree on this. The OP asked "what if every calculation in the world took one second". I tried to prove that the question itself is a logical contradiction. If calculation X takes one second, doing calculation X two times will take more than one second, despite the fact that "do X two times" is itself *one* calculation by the OP's definition (it's part of the set of "every calculation in the world").
Tomalak
@Richard: Quantum computers do not work like that.
Captain Segfault
A: 

SQL is such a language - you ask for some piece of data and you get it. If you didn't have to worry about the minute implementation details of the db, this might even be fun to program in.
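
To show the declarative spirit outside SQL, here's a rough Haskell analogue (the Employee "table" and all names are made up for illustration) - the comprehension says which rows are wanted, not how to find them:

    -- A made-up in-memory "table" and a declarative "query" over it.
    data Employee = Employee { name :: String, dept :: String, salary :: Int }

    -- Says which rows we want, not how to fetch them - no indexes, no plans.
    highPaidIn :: String -> [Employee] -> [String]
    highPaidIn d emps = [name e | e <- emps, dept e == d, salary e > 100000]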

Erwin
SQL has nothing to do with algorithmic complexity. It may be a 4th-generation language (this is what you seem to have in mind), but it sure as hell does not take O(1) time to execute a query.
Tomalak
You're right, SQL is a "declarative" language, and that's an important constraint... However, it's not actually Turing complete unless you start ascribing more interesting properties to your "relations". For proof: SQL can only query a finite number of tables... so if your tables also have only finitely many rows, it's possible to evaluate the output of any SQL statement in finite time and hence prove that it always halts.
Paul Hollingsworth
+3  A: 

The amount of memory, the speed of the memory, and the speed of the processor don't define the time and space complexity of an algorithm; basic mathematics does. Asking what programming languages would look like if everything could be computed in O(1) is like asking what our calculators would look like if pi were 3 and every square root were an integer. It's really impossible, and even if it weren't, it's not likely to be very useful.

Now, asking ourselves what we would do with infinite processing power and infinite memory could be a useful exercise. We'd still have to deal with the complexity of algorithms, but we'd probably work somewhat differently. For that I recommend The Hundred-Year Language.

J. Pablo Fernández
Yes, I've clarified my question. As to The Hundred-Year Language... it still looks way too algorithmic and too similar to ordinary Lisp/Scheme to me, with some minor syntax changes... i.e. still very concerned with "how" to compute rather than "what" to compute...
Paul Hollingsworth
+2  A: 

Note that even if the halting problem is not computable, "does this halt within N steps on all possible inputs of size smaller than M" is!
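
For example (a toy model, not a real Turing machine): given any step function and halting predicate, bounded halting is just a bounded loop:

    -- A toy "machine": a state type s, a step function, and a predicate
    -- saying whether a state is halting. Bounded halting is decidable:
    -- run at most n steps and report whether we ever reached a halt.
    haltsWithin :: Int -> (s -> s) -> (s -> Bool) -> s -> Bool
    haltsWithin n step halted s
      | halted s  = True
      | n <= 0    = False
      | otherwise = haltsWithin (n - 1) step halted (step s)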

As such, any programming language would become pure specification. All you need to do is accurately specify the pre- and post-conditions of a function, and the compiler could derive the fastest possible code that implements your spec.
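
As a rough sketch of what programming-by-specification could look like (brute force standing in for the magically fast compiler; entirely illustrative):

    import Data.List (permutations)

    -- Post-condition: the output is ordered.
    sorted :: Ord a => [a] -> Bool
    sorted xs = and (zipWith (<=) xs (drop 1 xs))

    -- "Sort" as a spec: the first permutation of the input that is
    -- ordered. The O(n!) search doesn't matter on the magical machine,
    -- and a real compiler would be free to substitute any faster code
    -- meeting the same spec.
    specSort :: Ord a => [a] -> [a]
    specSort xs = head [p | p <- permutations xs, sorted p]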

Also, this would trigger a singularity very quickly. Constructing an AI would be a lot easier if you could do near-infinite computation -- and once you had one, of any efficiency, it could ask the computable question "How would I improve my program if I spent a billion years thinking about it?"...

Captain Segfault
A: 

If it can all be done in one second, then most languages will eventually look like this; I call it DWIM theory (Do What I Mean theory):

Just do what I said (without any bugs this time)

Because if we ever develop a machine that can compute everything in one second, then we will probably have mind control at that stage, and at the very least artificial intelligence.

Joe D