views: 114

answers: 4
As far as I understand it, BigInts are usually implemented in most programming languages as arrays of digits, where, when adding two of them, each digit is added one after another, like we know it from school, e.g.:

 246
 816
 * *
----
1062

Where * marks that there was an overflow (a carry into the next column). I learned it this way at school, and all the BigInt addition functions I've implemented work similarly to the example above.
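
In code, such an addition looks roughly like this (a simplified sketch in C, assuming both numbers have n digits stored least-significant-first; the names are just illustrative):

    #include <stddef.h>

    /* Schoolbook addition: add two big integers stored as arrays of
       decimal digits (least significant digit first). sum must have
       room for n + 1 digits; returns the number of digits written. */
    size_t bigint_add(unsigned char *sum, const unsigned char *a,
                      const unsigned char *b, size_t n)
    {
        unsigned carry = 0;               /* the "*" in the example    */
        for (size_t i = 0; i < n; i++) {
            unsigned d = a[i] + b[i] + carry;
            sum[i] = d % 10;              /* digit that stays          */
            carry  = d / 10;              /* overflow into next column */
        }
        if (carry) {                      /* e.g. 246 + 816 = 1062     */
            sum[n] = carry;
            return n + 1;
        }
        return n;
    }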

So we all know that our processors can only natively manage integers from 0 to 2^32 - 1 or 2^64 - 1, depending on the architecture.

That means that most scripting languages, in order to be high-level and offer arithmetic with big integers, have to implement or use BigInt libraries that work with integers as arrays like the one above. But of course this means they'll be far slower than the processor's native arithmetic.

So what I've asked myself is:

  • Why doesn't my processor have a built-in BigInt function?

It would work like any other BigInt library, only (a lot) faster and at a lower level: the processor fetches one digit from the cache/RAM, adds it, and writes the result back again.

Seems like a fine idea to me, so why isn't there something like that?

+3  A: 

Binary Coded Decimal is a form of string math. The Intel x86 processors have opcodes for direct BCD arithmetic operations.
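
For example, what an ADD followed by the DAA adjustment does to one packed-BCD byte (two decimal digits) is roughly the following in software - a sketch with a made-up helper name; the real opcodes (DAA, DAS, AAA, ...) work on the AL register and were dropped in 64-bit mode:

    /* Add two packed-BCD bytes plus a carry, adjusting each decimal
       digit the way the x86 DAA instruction does. */
    unsigned bcd_add_byte(unsigned a, unsigned b, unsigned *carry)
    {
        unsigned lo = (a & 0x0F) + (b & 0x0F) + *carry; /* ones digit    */
        unsigned hi = (a >> 4)   + (b >> 4);            /* tens digit    */
        if (lo > 9) { lo -= 10; hi += 1; }              /* decimal carry */
        if (hi > 9) { hi -= 10; *carry = 1; } else { *carry = 0; }
        return (hi << 4) | lo;                          /* repack as BCD */
    }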

dthorpe
Which hardly anyone uses anyway.
Michael Myers
@mmyers - wasn't there a spate of computers showing the year as 2016 this year because somebody in the Windows team screwed up the BCD to decimal conversion?
Paul Tomblin
http://teck.in/year-2016-y2k16-computer-bug-hits-texting-in-us-and-banking-in-australia.html
Paul Tomblin
@mmyers: OP wasn't asking if it was popular. :P BCD is/was used in financial applications for fixed-point math to avoid loss of precision / roundoff errors.
dthorpe
@dthorpe: Yes, but if BCD isn't used very much, why would processor makers implement another alternative? At least that's what I would think.
Michael Myers
@mmyers: True, BCD was not as popular as Intel had hoped, and as you suggest probably casts a shadow over anybody else heading down that path.
dthorpe
+1 for the mention. Also, interestingly (or not), IEEE-754 defines both Binary and Decimal radix variants. http://en.wikipedia.org/wiki/IEEE_754-2008.
pst
+7  A: 

There are simply too many issues: it would require the processor to deal with a ton of stuff which isn't its job.

Suppose that the processor DID have that feature. We can work out a system where we know how many bytes are used by a given BigInt - just use the same principle as most string libraries and record the length.

But what would happen if the result of a BigInt operation exceeded the amount of space reserved?

There are two options: 1) It'll wrap around inside the space it does have or 2) It'll use more memory.

The thing is, if it did 1), then it's useless - you'd have to know how much space was required beforehand, and not being limited by that is part of the reason you'd want a BigInt in the first place.

If it did 2), then it'll have to allocate that memory somehow. Memory allocation is not done in the same way across OSes, but even if it were, the processor would still have to update all pointers to the old location. How would it know which values were pointers to the BigInt, and which were simply integers that happen to contain the same value as the memory address in question?
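
For illustration, the kind of length-recording representation described above might look like this (a hypothetical sketch, not any particular library's layout):

    #include <stddef.h>
    #include <stdint.h>

    /* A length-prefixed BigInt, analogous to how string libraries
       record their length. Growing `limbs` on overflow means
       reallocating and fixing up every pointer to it - exactly the
       step the CPU can't perform by itself, because allocation
       policy belongs to the OS and the language runtime. */
    typedef struct {
        size_t    nlimbs;  /* limbs currently in use       */
        size_t    cap;     /* limbs allocated              */
        uint64_t *limbs;   /* least significant limb first */
    } BigInt;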

Michael Madsen
A: 

The main reason bigint support isn't built into modern processors is, I think, the desire to keep the ISA small and leave as few instructions as possible to be fetched, decoded, and executed at full throttle. By the way, x86-family processors include a set of instructions that makes writing a bigint library a single day's work (see the sketch below). Another reason, I think, is price: it's much cheaper to save space on the wafer by dropping redundant operations that can easily be implemented at a higher level.
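
The instructions in question are ADD and ADC (add with carry): the carry flag propagates the overflow from one machine word to the next, so the core of a big-int addition is just a few lines. A sketch assuming x86-64, equal-length operands, and a compiler (GCC, Clang, or MSVC) that provides the _addcarry_u64 intrinsic:

    #include <immintrin.h>
    #include <stddef.h>

    /* Add two n-limb big integers; a, b, and out each point to n
       64-bit limbs, least significant first. Returns the final carry. */
    unsigned char bigint_add64(unsigned long long *out,
                               const unsigned long long *a,
                               const unsigned long long *b, size_t n)
    {
        unsigned char carry = 0;
        for (size_t i = 0; i < n; i++)
            carry = _addcarry_u64(carry, a[i], b[i], &out[i]);
        return carry;
    }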

Negai
I think a processor that handles x86/x87/MMX/SSE/SSE2/SSE3/SSSE3 wasn't necessarily designed with the goal of minimizing the instruction set.
Jeffrey L Whitledge
+1  A: 

Suppose the result of the multiplication needed 3 times the space (memory) to be stored - where would the processor store that result? How would users of that result, including all pointers to it, know that its size suddenly changed? Changing the size might also require relocating it in memory, because extending the current location would clash with another variable.

This would create a lot of interaction between the processor, the OS's memory management, and the compiler that would be hard to make both general and efficient.

Managing the memory of application types is not something the processor should do.

leeeroy
IIRC, a multiplication result will only ever require a number of bits equal to the sum of the widths of the operands, since (2^n - 1) * (2^m - 1) < 2^(n+m). So assuming we're multiplying two equal-sized variables, you'll never need more than 2X the space for the result.
rmeador
Sure, but you don't know that size until runtime, which is the hard issue.
leeeroy