views: 407
answers: 6
The problem is to teach a computer to do addition. The computer is given as input knowledge of the numbers: it "knows" that after 1 comes 2, after 2 comes 3, and so on. From that data the computer can easily get the next number. The computer is also given the axioms x+0=x and x+(y+1)=(x+1)+y, which let it perform addition. For example, to add 5 and 3, the computer does the following: 5+3 = 5+(2+1) = (5+1)+2 = 6+2 = 6+(1+1) = (6+1)+1 = 7+1 = 8. But adding numbers this way takes too long. The problem is to develop a program that can improve this method of addition using the rules of mathematics and logic. The goal: addition must execute in O(log(N)) time, not O(N) time, where N is the magnitude of the numbers being added.
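A minimal sketch of the axiom-based method described above (the names `succ` and `add` are illustrative, not from the question). It makes the O(N) cost visible: one successor step per unit of y.

```python
# Addition from the two axioms in the question:
#   x + 0 = x
#   x + (y + 1) = (x + 1) + y
# The only primitive operation the "computer" knows is the successor.

def succ(n):
    """The one built-in: given n, return the next number."""
    return n + 1

def add(x, y):
    """Add by repeatedly applying the second axiom; takes y steps, i.e. O(N)."""
    steps = []
    while y != 0:
        x, y = succ(x), y - 1      # rewrite x + (y+1) as (x+1) + y
        steps.append((x, y))
    return x, steps                # base case: x + 0 = x

result, trace = add(5, 3)
print(result)       # 8
print(len(trace))   # 3 applications of the second axiom
```

The trace shows exactly the chain from the question (5+3 → 6+2 → 7+1 → 8+0); getting to O(log N) requires switching to a positional (e.g. binary) representation, as later answers discuss.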

Does such a program have any scientific value? Can any program do such things?

+4  A: 

I don't think much intelligence lies here. Your "computer" can remember the results of previous additions. Having infinite memory, and a long enough training period, it will build up a map of (X,Y) -> X+Y for any two numbers, which allows executing the addition in O(1). No intelligence can beat that.
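A sketch of this memorizing approach, with illustrative names. Note the trade-off: the table grows quadratically in the training range, which is what the memory discussion in the comments is about.

```python
# Build a lookup table of additions during "training", then answer
# queries by pure table lookup with no arithmetic at query time.

def slow_add(x, y):
    # the axiom-based method: y successor steps
    for _ in range(y):
        x += 1
    return x

table = {}

def train(max_n):
    """Precompute x+y for all pairs up to max_n: O(max_n^2) entries."""
    for x in range(max_n + 1):
        for y in range(max_n + 1):
            table[(x, y)] = slow_add(x, y)

def lookup_add(x, y):
    """O(1) dictionary lookup -- assuming (x, y) was seen in training."""
    return table[(x, y)]

train(10)
print(lookup_add(5, 3))   # 8
```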

Zed
Side remark: you can't have constant time access to infinite memory. Even assuming a perfect hashtable, the key length needs to grow beyond what your machine can compare in one, two, ... operations for the collisions to be avoided as the quantity of data stored in the table increases to infinity.
Pascal Cuoq
For sufficiently large integers X and Y, adding X and Y is O(log N), where N is max(|X|, |Y|)
Stephen C
@Pascal: So can you tell me the largest amount of memory to which constant-time access is possible? And then please explain why it is impossible for twice as much memory...
Zed
A lookup is a lot more costly than an addition.
Christian
@Christian: Did you actually read the question?
Zed
How do you do a constant-time lookup without using arithmetic on the memory addresses? And if you can do memory address arithmetic, why not just use the result? (Quite common in x86 assembler.)
Pete Kirkham
By using the input as the memory address.
Zed
The time required to install more memory grows linearly in the largest input that you need to support (because there is a finite largest unit of RAM available on the current market). Thus, the lookup table is asymptotically O(N) =)
Stephen Canon
@Zed 2^n, where n is the number of bits your machine can handle in one operation -- say, 32. And it is not possible to access twice as much memory in the same time, because your machine cannot handle 33-bit addresses in one operation. You said infinite memory, not operations over infinitely long words. And if operations over infinitely long words is what you meant, using your machine to compute sums by lookups in a map is kind of a waste...
Pascal Cuoq
Okay, sorry, didn't see this theoretical question was bound to 32-bit architectures.
Zed
A: 

I want to make the problem more specific: the computer has limited memory and limited time )))

@ifar: so do humans ... even Jon Skeet :-)
Stephen C
+1  A: 

Well, consider this: computers just work with binary numbers, so all calculations are done at the binary level. When comparing two numbers, the computer first checks whether both are the same length, padding the shorter one with 0's on the left. Then, once both are the same length, the computer compares bits from left to right. As long as both bits are 1's, they're equal; if both are 0's, they're equal. If one is 0 and the other is 1, the number with the 0 is the smaller one and the other is the bigger one. This is how you determine the order of two numbers.

Now, adding two numbers. This time you start from the right: if both bits are 0, the result is 0. If one is 1 and the other is 0, the result is 1. If both are 1's, the result is 0 and a 1 is carried to the next pair of bits on the left. Move one position to the left and repeat, adding the carried 1 to that result bit, which might produce yet another carry to the left. The interesting part is that you only ever have to carry a single 1 to the left; in no case do you have to carry two.

And basically, this is how processors learned to add two numbers.

When you start to work with numbers bigger than 0 and 1, you just add to the complexity of the math problem. And in your example, you're already splitting the numbers up into 1's: adding 5+3 splits into (1+1+1+1+1)+(1+1+1), thus eight 1's. Translate it to binary and you get 101+011. The two ones on the right add to 0, carry 1. Then 1+0 is 1; add the carried 1 and it's back to 0, carrying 1 to the left. Then you get 0+1, which is 1 again; plus the 1 you remembered, that's 0 again, carry 1 to the left. There are no bits left there, so assume both to be 0: 0 plus the carried 1 is 1. No more carries, so the calculation is done, and you get 1000.

What you're thinking about might have been considered many years ago when the first computers were developed, but adding numbers the binary way is more efficient. (Especially when dealing with huge numbers.)
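The ripple-carry process described above can be sketched as follows (the function name and bit-string representation are illustrative):

```python
# Ripple-carry addition on bit strings, right to left, with one carry bit.

def ripple_add(a_bits, b_bits):
    # pad the shorter number with 0's on the left, as described above
    width = max(len(a_bits), len(b_bits))
    a = a_bits.zfill(width)
    b = b_bits.zfill(width)
    carry = 0
    out = []
    for i in range(width - 1, -1, -1):   # rightmost bit first
        s = int(a[i]) + int(b[i]) + carry
        out.append(str(s % 2))           # result bit
        carry = s // 2                   # never more than a single 1
    if carry:
        out.append('1')                  # final carry becomes a new leading bit
    return ''.join(reversed(out))

print(ripple_add('101', '011'))   # '1000'  (5 + 3 = 8)
```

Each bit position is visited once, so the running time is O(log N) in the magnitude N of the operands -- exactly the bound the question asks for.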

Workshop Alex
+7  A: 

Those automatic theorem provers that do not understand arithmetic do what you suggest: they try to reinvent it from the definitions each time they need it. The result? It does not work very well. You can help these provers by providing higher-level general facts about arithmetic (either as axioms, or, if you are rigorous, as lemmas proved separately). Examples: associativity, commutativity, ...

It still doesn't work very well. There always seems to be one more intuitively obvious fact that you need to provide to the tool for the particular proof you are interested in. For instance: x>y => x>=y, z is either odd or even, properties like that...

To compensate for this problem, some automatic theorem provers handle arithmetic internally. In that case, the results are better. Simplify and Alt-Ergo are two examples of such provers.

Pascal Cuoq
A: 

O(log(N)) time says one thing to me: Binary Search Tree.

Treat the problem as in the first answer, but as you accumulate results, arrange them into a tree in which you search for x, and then search the right subtree of x for x+y, building new nodes as necessary.

If it's not necessary to generate the tree on the fly from the axioms, you can create a balanced tree for your input data set, and you're done.
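One possible reading of this sketch (the two-level structure and all names are assumptions, not the answer's exact design): binary search at both levels, so each lookup costs O(log n) in the number of stored results.

```python
# A two-level searchable store of known additions: binary-search for x
# in the outer level, then binary-search for y within x's entries.
import bisect

xs = []        # sorted list of known x values
inner = {}     # x -> (sorted list of y values, parallel list of x+y results)

def remember(x, y, total):
    """Accumulate a result, building new nodes as necessary."""
    if x not in inner:
        bisect.insort(xs, x)
        inner[x] = ([], [])
    ys, sums = inner[x]
    i = bisect.bisect_left(ys, y)
    if i == len(ys) or ys[i] != y:
        ys.insert(i, y)
        sums.insert(i, total)

def recall(x, y):
    """Return a stored x+y, or None if it was never accumulated."""
    i = bisect.bisect_left(xs, x)       # O(log n) search for x
    if i == len(xs) or xs[i] != x:
        return None
    ys, sums = inner[x]
    j = bisect.bisect_left(ys, y)       # O(log n) search within x's entries
    if j == len(ys) or ys[j] != y:
        return None
    return sums[j]

remember(5, 3, 8)
print(recall(5, 3))   # 8
print(recall(5, 4))   # None
```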

BnWasteland
A: 

Your assumption about how computers do addition is completely wrong.

Computers store numbers in binary and do binary addition. If we have two 2-byte integers, store 5 in one and 3 in the other, and then add the two, it looks like:

00000000 00000101
00000000 00000011
_________________
00000000 00001000

Adding two 2-byte integers costs the same regardless of whether you add 200 and 1143 or whether you add 5 and 3.
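A quick illustration of that fixed cost (the function name and the choice of a 16-bit width are illustrative): counting loop iterations shows both additions do identical work.

```python
# Fixed-width binary addition: always exactly 16 bit positions,
# independent of the operand values.

def add16(x, y):
    carry = 0
    result = 0
    ops = 0
    for i in range(16):            # one pass per bit, always 16
        a = (x >> i) & 1
        b = (y >> i) & 1
        s = a + b + carry
        result |= (s & 1) << i     # this position's result bit
        carry = s >> 1             # carry into the next position
        ops += 1
    return result, ops

print(add16(5, 3))        # (8, 16)
print(add16(200, 1143))   # (1343, 16)
```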

Christian
Can't wait to see your comments on a Turing machine ;-)
Zed
I don't think that you need any additional axioms to be able to do binary addition.
Christian