tags:
views: 249
answers: 5

I'm new to the Win32 API, and the many new types are beginning to confuse me.

Some functions take one or two ints and three UINTs as arguments.

  • Why can't they just use ints? What are UINTs?

Then, there are those other types:

DWORD LPCWSTR LPBOOL 
  • Again, I think the "primitive" C types would be enough - why introduce 100 new types?

This one was a pain: WCHAR*

I had to iterate through it and push_back every character into an std::string, as there didn't seem to be any other way to convert it. Horrible.

  • Why WCHAR? Why reinvent the wheel? Couldn't they have just used char* instead?
+1  A: 

A coworker of mine would say, "There is no problem that can't be solved (obfuscated?) by a level of indirection." In Win32 you'll be dealing with WCHAR, UINT, etc., and you'll get used to it. You won't have to worry, when you deploy that DLL, about which basic type a WCHAR or UINT compiles to - it will "just work".

It's best to read through some of the documentation to get used to it, especially the wide-character support (WCHAR, etc.). There's a nice definition of WCHAR on MSDN.

qor72
"There is no complexity problem in programming that cannot be eased by adding a layer of indirection. And there is no performance problem in programming that cannot be eased by removing a layer of indirection."- Donald Knuth
Simon Buchan
A: 

UINT is an unsigned integer. If a parameter value will not / cannot be negative, it makes sense to declare it unsigned. LPCWSTR is a pointer to a const wide-character string, while a WCHAR* points to writable characters.

You should probably compile your app for UNICODE when working with wide characters, or use a conversion routine to convert between narrow and wide strings:
http://msdn.microsoft.com/en-us/library/dd319072%28VS.85%29.aspx

http://msdn.microsoft.com/en-us/library/dd374083%28v=VS.85%29.aspx

Kyle Alons
+13  A: 

Remember that the Windows API was first created back in the 1980s and has had to support several different CPU architectures and compilers over the years. It has gone from single-user, single-process standalone systems to networked, multi-user, multi-core, security-conscious systems. They had to work around issues with 16-bit vs. 32-bit processors, and now 64-bit processors. They had to work around issues with pre-ANSI C compilers. They had to support C++ compilers in the early, unstandardized times. They had to deal with segmented memory. They had to support internationalization before Unicode existed. They had to support some source-level compatibility with MS-DOS, with OS/2, and with Mac OS. They've had to run on several generations of Intel chips, plus PowerPC, MIPS, and Alpha. The same basic API is also used on mobile phones, handhelds, and many types of embedded systems.

Also, back in the 1980s, C was considered a high-level language (yes, really!), and many people considered it good form to use abstractions rather than just specifying everything as an int, a char, or a void*. Back when we didn't have IntelliSense, infotips, code browsers, online documentation, and the like, such usage hints were very helpful.

Yes, it's a horrible mess, but that doesn't mean they did anything wrong.

Kristopher Johnson
One of the more obvious artifacts of the Windows API heritage is the '`LP`' prefix used on many pointer types - that prefix stands for 'long pointer' (also known as a 'far pointer') and was required for many parameters due to Win16's underlying segmented architecture, where a pointer could be 'near' (pointing within an assumed segment) or 'far' (where the segment was specified as part of the pointer). Near and far pointers are long gone with Win32, but the names remain the same.
Michael Burr
It's kind of funny: the Windows platform headers still have defines for FAR pointers. Strange that they still haven't cleaned up the mess after 20 or so years. The C Win32 API feels left behind and forgotten.
Mads Elvheim
@Mads: Forgotten? Hardly. Keeping the old definitions allows older apps to be updated without having to rewrite them.
Adrian McCarthy
Another obvious artifact is the WPARAM type. The "W" originally stood for "word", meaning a 16-bit value. It is now a pointer-sized value (32-bit on Win32, 64-bit on Win64), but the "W" prefix remains. See http://blogs.msdn.com/oldnewthing/archive/2003/11/25/55850.aspx for other commentary.
Kristopher Johnson
A: 

UINTs are unsigned ints. An int value can be either positive or negative; an unsigned int, by definition, has no sign and is therefore always non-negative. This means that a UINT's range (the set of numbers it can represent) is different from, though equal in size to, a signed int's range.

Matt Ball
+2  A: 

Win32 actually has very few primitive types. What you're looking at is decades of accumulated #defines, typedefs, and Hungarian notation. Because there were so few types and little or no IntelliSense, developers gave themselves "clues" about what a particular variable was actually supposed to hold.

For example, there is no boolean type, but there is an "aliased" representation of an integer that tells you a particular variable is supposed to be treated as a boolean. Take a look at the contents of WinDef.h to see what I mean.

You can take a look here: http://msdn.microsoft.com/en-us/library/aa383751(VS.85).aspx for a peek at the tip of the iceberg. For example, notice how HANDLE is the base typedef for every other "handle" to a Windows object. HANDLE itself, of course, is defined elsewhere in terms of a primitive type.

Paul Sasik
Just make sure you don't confuse Windef.h's `unsigned int` `BOOL` and WinNT.h's `unsigned char` `BOOLEAN` :).
Simon Buchan