During the transition from 16-bit to 32-bit in the '80s, int was either 16 or 32 bits. Using the current 64-bit transition nomenclature, I understand there was a pretty even spread of ILP32 and LP32 machines. At the time I believe it was understood that int would always follow the register or pointer width for any given architecture and that long would remain 32 bits.

Fast forward 25 years and I see that LP64 is pretty mainstream, but until I encountered 64-bit platforms [my discovery of desktop Linux in 2007 :)], I always expected IP64 to be the next logical step.

  1. Was this (LP64) the expected evolution for 64-bit?
    • How does the char <= short <= int <= long relationship fit into this emerging scheme of fixing an integer type to each platform we leave behind?
    • How do these transition schemes relate to the use of (your choice of {l,u}case) WORD/DWORD on various platforms?
    • Some areas of Windows still contain INT forms that are 16-bit. Will Windows grow out of LLP64 or is it too late?
    • Why was int chosen to be left behind this time, as opposed to during the 32-bit transition?
+2  A: 

The way I see it, Windows is an oddball in the whole x64 transition. But putting that aside, C and C++ never defined the integral types to be fixed-length. I find the whole int/long/pointer thing quite understandable if you look at it this way:

  • int: mostly 32 bits (Linux, Mac and Windows)
  • long: 64 bits on Mac and Linux, 32 bits on Windows
  • long long: 64 bits on Mac, Linux, and Windows x64
  • (u)intptr_t: exactly the length of a pointer (32 bits on 32-bit, 64 bits on 64-bit systems)
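
For illustration, here is a minimal program (assuming a C99 compiler, for `%zu` and `<stdint.h>`) that prints these sizes under whatever data model you compile it on:

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* Sizes in bytes; the values depend on the platform's data model
       (ILP32, LP64, LLP64, ...), not on the C standard itself. */
    printf("int       : %zu\n", sizeof(int));
    printf("long      : %zu\n", sizeof(long));
    printf("long long : %zu\n", sizeof(long long));
    printf("void *    : %zu\n", sizeof(void *));
    printf("uintptr_t : %zu\n", sizeof(uintptr_t));
    return 0;
}
```

On LP64 Linux or Mac this typically prints 4, 8, 8, 8, 8; on LLP64 Windows x64 it prints 4, 4, 8, 8, 8.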

I only use char in the context of strings and never use short, as it's as long as an int on most desktop systems anyway.

`WORD` and `DWORD` are ugly, and should be avoided. If the API forces you to use them, replace `DWORD` with `DWORD_PTR` when you're dealing with... well, pointers. It was never correct to use `(D)WORD` there in the first place IMHO.
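
A minimal sketch of the substitution being suggested (Windows headers assumed; `store_pointer` is just a made-up name for the example):

```c
#include <windows.h>

/* Sketch only: keeping a pointer in an integer, the usual reason DWORD_PTR exists. */
void store_pointer(void *p)
{
    /* DWORD is always 32 bits, so this would truncate the pointer on Win64: */
    /* DWORD truncated = (DWORD)p; */

    /* DWORD_PTR grows with the pointer size (like uintptr_t), so it is safe
       on both Win32 and Win64. */
    DWORD_PTR safe = (DWORD_PTR)p;
    (void)safe;
}
```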

I don't think Windows will ever change its decision. Too much trouble already.

Why was int left behind? Why does Venus rotate in the opposite direction? The answer to the first question is found here (I believe); the second is a bit more complicated ;)

rubenvb
-1 for "Avoid DWORD and friends whenever possible" ... YOU might not like how they look, but they are idiomatic in Windows code.
Billy ONeal
@Billy: I think you missed the next sentence, and please don't put words in my mouth.
rubenvb
@rubenvb: There are plenty of places where the API doesn't "force" you to use them, but where it is idiomatic to do so anyway.
Billy ONeal
+1 to compensate for the Windows asshole.
R..
Let's all hold hands and think of Unix and the bad man will go away ;)
Matt Joiner
Billy is perfectly correct that `DWORD` should be used when you're calling APIs typed as `DWORD`.
Ben Voigt
There is never a reason to create variables of type `DWORD`, just like there's never a reason to create a variable of type `gint` (just to treat Windows idiocy and Linux idiocy equally). The prototype of the Windows API function will take care of making sure your sanely-typed variable is pushed correctly onto the stack when making the function call.
R..
@R: Obviously you've never written Windows code. There are not only parameters passed to Windows APIs, but incoming parameters from callbacks such as `WndProc`, and return values from APIs. The ONLY reliable way to pass `WndProc` parameters unchanged to `DefWindowProc` is by making them the correct Windows-defined type (`LPARAM` and `WPARAM`). If you used any other type, your code broke moving from Win16 to Win32 and again from Win32 to Win64.
Ben Voigt
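
For illustration, a minimal sketch of the forwarding pattern described above (standard Win32 declarations; `MyWndProc` is a made-up name):

```c
#include <windows.h>

/* Keeping the Windows-defined types (WPARAM/LPARAM change width across
   Win16/Win32/Win64) lets the arguments pass through to DefWindowProc unchanged. */
LRESULT CALLBACK MyWndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg) {
    case WM_DESTROY:
        PostQuitMessage(0);
        return 0;
    default:
        return DefWindowProc(hwnd, msg, wParam, lParam);
    }
}
```
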
@R. I'm unsure how I'm a "windows asshole". Write Windows code for Windows. Write *nix code for *nix. Don't avoid the idiomatic types simply because you don't think they are "pretty". If you absolutely need to be cross-platform, then fall back on the C types. But if you're truly cross-platform, **you should not be caring how many bits are in the type in the first place, because that is unspecified**. At least the platform-specific types have well-defined behavior.
Billy ONeal
Well, all of NPAPI has been wrong for a long time (for Win64) because they used bad types for exactly WPARAM and LPARAM (which are just `uintptr_t`s, by the way...), see https://bugzilla.mozilla.org/show_bug.cgi?id=560298 . There's really no need for API-specific types IMHO, but as I said in my original answer, use them when the API calls for them; when abstracting APIs, drop the suckers (see for example the uword typedef'd at the beginning of http://trac.webkit.org/browser/trunk/WebCore/platform/Arena.h ; it had to be fixed for Win64 because it *was* an `unsigned long` instead of a `uintptr_t`).
rubenvb
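
The kind of fix being described might look roughly like this (the `uword` name follows the comment above; this is a sketch, not the actual WebKit patch):

```c
#include <stdint.h>

/* Broken under LLP64 (Win64): long stays 32-bit while pointers are 64-bit. */
/* typedef unsigned long uword; */

/* Portable across ILP32, LP64 and LLP64: always wide enough to hold a pointer. */
typedef uintptr_t uword;
```
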
I always use the API types myself, but only on the interface boundary, and only if the conversion from my own types is non-trivial. It would be foolish and dangerous to go arbitrarily casting to your own types if you need to reuse values (such as handles) with the same API later.
Matt Joiner
The link you've provided is very enlightening.
Matt Joiner
A: 

Instead of looking at this as int being "left behind", I would say you should look at it in terms of not being able to leave behind any size type that might be needed. I suppose compilers could define `int32_t` in terms of some internal `__int32_t` extension type, but with C99 still not being widely supported, it would have been a major pain for apps to have to work around missing `int32_t` definitions when their build systems couldn't find a 32-bit type among the standard types. And having a 32-bit type is essential, regardless of what your native word size is (for instance it's the only correct type for Unicode codepoint values).
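
As a rough sketch of the kind of fallback projects had to carry before `<stdint.h>` was dependable (names like `my_int32` are made up for the example):

```c
#include <limits.h>

/* Pick a 32-bit type from the standard types when <stdint.h> is unavailable.
   Real projects often did this via configure-time checks instead. */
#if UINT_MAX == 0xFFFFFFFFUL
typedef int           my_int32;   /* int is 32 bits on ILP32, LP64 and LLP64 */
typedef unsigned int  my_uint32;
#elif ULONG_MAX == 0xFFFFFFFFUL
typedef long          my_int32;   /* e.g. 16-bit-int targets where long is 32 bits */
typedef unsigned long my_uint32;
#else
#error "no 32-bit integer type found among the standard types"
#endif
```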

For the same reason, it would not be feasible to have made short 32-bit and int 64-bit: a 16-bit type is essential for many things, audio processing being the first that comes to mind. (Not to mention Windows'/Java's ugly UTF-16 obsession..)
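
For a concrete (made-up) illustration of why an exact 16-bit type matters, consider halving the volume of 16-bit PCM samples:

```c
#include <stdint.h>
#include <stddef.h>

/* CD-quality PCM audio is a stream of signed 16-bit samples; both the storage
   and the arithmetic need a type that is exactly 16 bits wide. */
static void halve_volume(int16_t *samples, size_t count)
{
    for (size_t i = 0; i < count; ++i)
        samples[i] = (int16_t)(samples[i] / 2);
}
```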

Really, I don't think the 16-to-32-bit and 32-to-64-bit transitions are at all comparable. Leaving behind 16-bit was leaving behind a system where most numbers encountered in ordinary, everyday life would not fit in a basic type and where hacks like "far" pointers had to be used to work with nontrivial data sets. On the other hand, most applications have minimal need for 64-bit types. Large monetary figures, multimedia file sizes/offsets, disk positions, high-end databases, memory-mapped access to large files, etc. are some specialized applications that come to mind, but there's no reason to think that a word processor would ever need billions of characters or that a web page would ever need billions of HTML elements. There are simply fundamental differences in the relationship of the numeric magnitudes to the realities of the physical world, the human mind, etc.

R..
I suppose you could call C99 "not widely supported" in the embedded and Windoze sense, but it feels strange to say that while using it myself every day. I think you're missing the distinction between the common and stdint types; I don't think a type was left behind only so that we could still use 32-bit integers.
Matt Joiner
When the "decision" was made, the lack of `stdint.h`/`inttypes.h` was a major practical consideration everywhere except the modern free unices. Why do you think `configure` (autoconf, etc.) was/is always doing useless pre-build checks for the sizes of all the standard types?
R..
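
For what it's worth, those configure-time size checks typically fed a project-local typedef along these lines (sketch only; `project_int32` is a hypothetical name, and autoconf's `AC_CHECK_SIZEOF(int)` is what defines `SIZEOF_INT` in config.h):

```c
#include "config.h"   /* generated by configure; defines SIZEOF_INT, SIZEOF_LONG */

#if SIZEOF_INT == 4
typedef int  project_int32;
#elif SIZEOF_LONG == 4
typedef long project_int32;
#else
#error "configure found no 32-bit type"
#endif
```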