views: 144 · answers: 4

Can we somehow change the size of a pointer from 2 bytes so that it occupies more than 2 bytes?

+10  A: 

Sure, compile for a 32 (or 64) bit platform :-)

The size of pointers is platform specific, it would be 2 bytes only on 16-bit platforms (which have not been widely used for more than a decade - nowadays all mainstream [update](desktop / laptop / server)[/update] platforms are at least 32 bits).

Péter Török
I was wondering if I should use a far pointer in this regard
fahad
@fahad: you'd do best to get off MS DOS and Turbo C.
Jonathan Leffler
Nice and simple :-)
Kinderchocolate
all mainstream *desktop* platforms :)
mikecsh
@mikecsh, why, are there still 16-bit servers out there? :-) Seriously though, I edited my answer to clarify that embedded platforms are not considered here.
Péter Török
Well, if anyone makes a compiler that implements char as 16 bit values (a Unicode compiler?), then you'd have two byte pointers on a 32bit system!
Skizz
@Skizz, even such a compiler can't change the size of a byte, could it? :-) But it is true that `sizeof(char*)` would be 2 on such a platform.
Péter Török
@Péter: This is one of those tricky little areas of the standards. A byte is not defined as 8 bits, but rather the smallest addressable unit of memory (or words to that effect and probably subject to implementation) and that a char is always one byte (i.e. sizeof (char) is always 1). The number of bits in a byte is implementation defined. All other sizes are in multiples of one byte, so in a 16 bit byte system, a 32 bit value has a sizeof 2 and therefore sizeof(char*) would be 2 if it was a 32 bit value. sizeof returns the number of addressable units required to store the value.
Skizz
@Skizz, hmmm, I see. I thought the standard does not actually include the term "byte", precisely because it is generally used to mean 8 bits, so it would create confusion to define it differently. And this is why they define `sizeof(char)` as the basic unit of memory. But I may easily be wrong on this count.
Péter Török
+2  A: 

If your pointer size is 2 bytes, that means you're running on a 16-bit system.

The only way to increase the pointer size is to use a 32-bit or 64-bit system instead (which would mean any desktop or laptop computer built in the last 15 years or so).

If you're running on some 16-bit embedded device, your only option is to switch to another device that uses 32 bits (or just live with your pointers being 16-bit).

sepp2k
+1  A: 

When a processor is said to be "X-bit" (where X is 16, 32, 64, etc), that X refers to the size of the memory address register. Thus a 16-bit system has a memory address register of 2 bytes.

You cannot cast a 4-byte address to anything smaller because it would lose part of where it's pointing to. (A 2-byte memory address register can only point to 2^16=64KB of memory, whereas a 4-byte register can point to 2^32=4GB of memory.)

You can always "step-up" (ie, run a 32-bit software application on a 64-bit computer) because there's no loss in pointer range. But you can never step down, which is why 64-bit programs don't run on 32-bit systems.

chrisaycock
So how come the 16 bit 8086 could address 1Mb and not just 64k? If only 'bit'ness was this easy. Marketing departments have a lot to answer for.
Skizz
A: 

Think of a pointer as a number, only instead of an actual value used for computation, it's the number of a 'slot' in the memory map of the system.

A pointer must be able to represent the highest position in the memory map. That is, it must be at least as many bytes as are required to represent the number of that highest position.

In a 16-bit system, the highest possible position is 0xFFFF (a 16-bit number with all the bits set to 1). A pointer must also have 16 bits, so it can reach that number.

Generalizing, in an X-bit system, a pointer will have X bits.

You can store a pointer in a larger variable, the same way you can store the number 1 in a char, an int, or an unsigned long long if you wanted to; but there's little point in doing so: just as a shorter pointer can't reach the highest memory position, a longer pointer would be able to point to addresses that can't actually exist in memory, so why have it?

Also, you'd have to 'trick' the compiler for that. If you use the pointer notation in your code, the compiler will always use the correct amount of bytes for it. You can instruct the compiler to compile for another platform, though.

Santiago Lezica