Haven't seen this "feature" anywhere else. I know that the 32nd bit is used for garbage collection. But why is it that way only for ints and not for the other basic types?

+5  A: 

It's not exactly "used for garbage collection." It's used for internally distinguishing between a pointer and an unboxed integer.

Chuck
And the corollary to that is that it *is* that way for at least one other type, namely pointers. If floats aren't also 31 bits, then I assume it's because they're stored as objects on the heap and referred to with pointers. I would guess there's a compact form for arrays of them, though.
Tom Anderson
@Tom Anderson: you guess correctly.
Porculus
+8  A: 

See the "representation of integers, tag bits, heap-allocated values" section of http://www.ocaml-tutorial.org/performance_and_profiling for a good description.

The short answer is that it is for performance. When an argument is passed to a function, it is passed either as an integer or as a pointer. At the machine level there is no way to tell whether a register contains an integer or a pointer; it is just a 32- or 64-bit value. So the OCaml runtime checks the tag bit to determine what it received: if the tag bit is set, the value is an integer and is handled directly; otherwise it is a pointer, and the type of the value it points to is looked up.

Why do only integers have this tag? Because everything else is passed as a pointer. What is passed is either an integer or a pointer to some other data type. With only one tag bit, there can be only two cases.
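As a quick illustration, OCaml's Obj introspection module makes the split visible from the language itself (a sketch for the toplevel, not something to rely on in real code):

    (* Check which values are immediate (tagged) ints and which are
       heap-allocated blocks. Output assumes a standard 64-bit OCaml;
       on a 32-bit system max_int is 2^30 - 1 instead of 2^62 - 1. *)
    let () =
      Printf.printf "42     is_int:   %b\n" (Obj.is_int (Obj.repr 42));
      Printf.printf "3.14   is_block: %b\n" (Obj.is_block (Obj.repr 3.14));
      Printf.printf "\"hi\"   is_block: %b\n" (Obj.is_block (Obj.repr "hi"));
      (* one bit of the machine word is given up for the tag *)
      Printf.printf "max_int = %d\n" max_int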

shf301
+13  A: 

This is called a tagged pointer representation, and it is a pretty common optimization trick that has been used in many different interpreters, VMs and runtime systems for decades. Pretty much every Lisp implementation uses it, as do many Smalltalk VMs, many Ruby interpreters, and so on.

Usually, in those languages, you always pass around pointers to objects. An object itself consists of an object header, which contains object metadata (like the type of an object, its class(es), maybe access control restrictions or security annotations and so on), and then the actual object data itself. So, a simple integer would be represented as a pointer plus an object consisting of metadata and the actual integer. Even with a very compact representation, that's something like 6 bytes for a simple integer.

Also, you cannot pass such an integer object to the CPU to perform fast integer arithmetic. If you want to add two integers, you really only have two pointers, which point to the beginning of the object headers of the two integer objects you want to add. So, you first need to add to the first pointer the offset within the object at which the integer data is stored. Then you have to dereference that address. Do the same again with the second integer. Now you have two integers you can actually ask the CPU to add. Of course, you now need to construct a new integer object to hold the result.

So, in order to perform one integer addition, you actually need to perform three integer additions plus two pointer dereferences plus one object construction. And you take up almost 20 bytes.
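To make that cost concrete, here is a toy model in OCaml of a naive "everything is a boxed object" scheme (the record type and names are invented for illustration; a real VM works on raw machine words, not records):

    (* Every integer is a heap block carrying a header with metadata. *)
    type boxed = { header : string; payload : int }

    let box n = { header = "Integer"; payload = n }        (* fresh allocation *)

    let boxed_add a b =
      (* two loads to fetch the payloads, plus one allocation for the result *)
      box (a.payload + b.payload)

    let () =
      let r = boxed_add (box 2) (box 3) in
      Printf.printf "%s: %d\n" r.header r.payload          (* Integer: 5 *)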

However, the trick is that with so-called immutable value types like integers, you usually don't need all the metadata in the object header: you can just leave all that stuff out, and simply synthesize it (which is VM-nerd-speak for "fake it"), when anyone cares to look. An integer will always have class Integer, there's no need to separately store that information. If someone uses reflection to figure out the class of an integer, you simply reply Integer and nobody will ever know that you didn't actually store that information in the object header and that in fact, there isn't even an object header (or an object).

So, the trick is to store the value of the object within the pointer to the object, effectively collapsing the two into one.

There are CPUs which actually have additional space within a pointer (so-called tag bits) that allow you to store extra information about the pointer within the pointer itself. Extra information like "this isn't actually a pointer, this is an integer". Examples include the Burroughs B5000, the various Lisp Machines or the AS/400. Unfortunately, most of the current mainstream CPUs don't have that feature.

However, there is a way out: most current mainstream CPUs work significantly slower when addresses aren't aligned on word boundaries. Some don't even support unaligned access at all.

What this means is that in practice, all pointers will be divisible by 4, which means they will always end with two 0 bits. This allows us to distinguish between real pointers (that end in 00) and pointers which are actually integers in disguise (those that end with 1). And it still leaves us with all pointers that end in 10 free to do other stuff. Also, most modern operating systems reserve the very low addresses for themselves, which gives us another area to mess around with (pointers that start with, say, 24 0s and end with 00).

So, you can encode a 31-bit integer into a pointer, by simply shifting it 1 bit to the left and adding 1 to it. And you can perform very fast integer arithmetic with those, by simply shifting them appropriately (sometimes not even that is necessary).
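Here is that encoding simulated on ordinary OCaml ints (the real thing happens inside the runtime on raw machine words; the helper names are made up):

    (* A tagged integer is (n lsl 1) lor 1, so it always ends in 1;
       word-aligned pointers end in 00. *)
    let tag n = (n lsl 1) lor 1
    let untag v = v asr 1

    (* Addition barely needs any adjustment:
       (2a + 1) + (2b + 1) - 1 = 2(a + b) + 1 *)
    let tagged_add a b = a + b - 1

    let () =
      let a = tag 20 and b = tag 22 in
      Printf.printf "%d\n" (untag (tagged_add a b))        (* prints 42 *)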

What do we do with those other address spaces? Well, typical examples include encoding floats in the other large address space and a number of special objects like true, false, nil, the 127 ASCII characters, some commonly used short strings, the empty list, the empty object, the empty array and so on near the 0 address.

For example, in the MRI, YARV and Rubinius Ruby interpreters, integers are encoded the way I described above, false is encoded as address 0 (which just so happens also to be the representation of false in C), true as address 2 (which just so happens to be the C representation of true shifted by one bit) and nil as 4.
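A rough sketch of how such a word gets classified under that scheme (mirroring the description above, not the actual MRI macros):

    (* Odd words are integers, a few small even values are reserved
       constants, and everything else is treated as a heap pointer. *)
    let describe v =
      if v land 1 = 1 then Printf.sprintf "integer %d" (v asr 1)
      else match v with
        | 0 -> "false"
        | 2 -> "true"
        | 4 -> "nil"
        | _ -> "heap pointer"

    let () =
      List.iter
        (fun v -> Printf.printf "%7d -> %s\n" v (describe v))
        [0; 2; 4; 85; 1048576]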

Jörg W Mittag