Well, consider this: computers only work with binary numbers, so all calculations happen at the binary level. To compare two numbers, the computer first makes sure both are the same length, padding the shorter one with 0's on the left. Once they're the same length, it compares bits from left to right. If both bits are 1, they're equal so far; if both are 0, they're equal so far. At the first position where the bits differ, the number with the 0 is the smaller one and the other is the bigger one. This is how you determine the order of two numbers.
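Here's a rough Python sketch of that comparison idea (the name compare_binary is just something I made up for illustration, it's not any standard function):

```python
def compare_binary(a: str, b: str) -> int:
    """Compare two binary strings; return -1, 0, or 1."""
    # Pad the shorter number with 0's on the left so both are the same length.
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)
    # Walk the bits from left (most significant) to right.
    for bit_a, bit_b in zip(a, b):
        if bit_a != bit_b:
            # The number holding the 0 at this position is the smaller one.
            return -1 if bit_a == '0' else 1
    return 0  # every bit matched, so the numbers are equal

print(compare_binary('101', '11'))    # 1  (5 > 3)
print(compare_binary('0110', '110'))  # 0  (both are 6)
```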
Now, adding two numbers. This time you start from the right. If both bits are 0, the result is 0. If one is 1 and the other is 0, the result is 1. If both are 1, the result is 0 and a 1 is carried to the next position on the left. Move one position to the left and repeat, then add the carried 1 to that column's result, which might produce yet another carry to the left.
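To make that concrete, here's a small Python sketch of that right-to-left addition with a carry (add_binary is just a name I picked for illustration):

```python
def add_binary(a: str, b: str) -> str:
    """Add two binary strings right to left, carrying a 1 when needed."""
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)
    result, carry = [], 0
    # Walk the bits from right (least significant) to left.
    for bit_a, bit_b in zip(reversed(a), reversed(b)):
        total = int(bit_a) + int(bit_b) + carry  # 0, 1, 2 or 3
        result.append(str(total % 2))            # bit written in this column
        carry = total // 2                       # at most a single 1 carried left
    if carry:
        result.append('1')                       # final carry becomes a new leftmost bit
    return ''.join(reversed(result))

print(add_binary('101', '011'))  # '1000'  (5 + 3 = 8)
```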
The interesting part is that you only ever have to carry a single 1 to the left. Even in the worst case, 1 + 1 plus a carried 1 is 3, which is 11 in binary, so the carry out of any column is never more than one 1. In no case would you ever have to carry two.
And basically, this is how processors learned to add two numbers.
When you start to work with numbers bigger than 0 and 1, you just add to the complexity of the math problem. And considering your example, you're already splitting things up into 1's. Basically, if you're adding 5+3, you're splitting it up into (1+1+1+1+1)+(1+1+1), so eight 1's. Translate it to binary and you get 101+011. The two 1's on the right add up to 0, carry 1. Next, 0+1 is 1; add the carried 1 and it's back to 0, carry 1 to the left. Then you get 1+0, which is 1 again; plus the 1 you remembered, that's 0 again, carry 1 to the left. There are no bits left there, so treat both as 0: 0 plus the carried 1 is 1. No more carries, so the calculation is done, and you get 1000, which is 8.
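If you want to watch those steps happen, here's a small, purely illustrative Python snippet that traces the same 5+3 example column by column:

```python
a, b = '101', '011'   # 5 and 3 in binary
carry, result = 0, ''
# Go through the columns from right to left, printing what happens in each.
for bit_a, bit_b in zip(reversed(a), reversed(b)):
    total = int(bit_a) + int(bit_b) + carry
    print(f"{bit_a} + {bit_b} + carry {carry} -> write {total % 2}, carry {total // 2}")
    result = str(total % 2) + result
    carry = total // 2
if carry:
    result = '1' + result  # the final carry lands in a brand-new leftmost column
print(result)  # 1000, which is 8
```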
What you've thought about might well have been considered many years ago when the first computers were developed, but adding numbers the binary way is more efficient, especially when dealing with huge numbers.