From what I understand, the width of intptr_t varies by architecture: it is guaranteed to be wide enough that any pointer into the process's uniform address space can be converted to it and back without loss.

Nginx (the popular open-source web server) defines a type, ngx_flag_t, that is used as a flag (boolean), and it is a typedef for intptr_t. Now, using the x86-64 architecture as an example (which has access to a plethora of instructions covering operands of all sizes), why define the flag as intptr_t? Surely the traditional 32-bit bool type would fit the bill just as well?
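
For reference, the typedef in question looks like this in the nginx source (in src/core/ngx_config.h, if memory serves; the exact location may vary between versions):

    typedef intptr_t        ngx_int_t;
    typedef uintptr_t       ngx_uint_t;
    typedef intptr_t        ngx_flag_t;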

I went over the 32-bit vs. 8-bit bool argument myself when I was a new developer, and the conclusion was that 32-bit bools perform better in the common case because of the intricacies of processor design. Why then do we need to move to 64-bit bools?

+1  A: 

The only people who really know why nginx uses intptr_t for a boolean type are the nginx developers.

As you say, 32-bit bools often perform better than 8-bit bools in the common case. I have done no benchmarking myself, but it does not sound unreasonable to me that in certain situations on x86-64 a 64-bit bool beats a 32-bit bool. For example, in the nginx source I noticed that most ngx_flag_t members occur in structs alongside other (u)intptr_t-typedef'ed types. A 32-bit bool might not save any space there due to alignment padding.
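
To make the padding argument concrete, here is a small sketch (the struct names are made up for illustration). On a typical LP64 platform such as x86-64 Linux, pointers are 8 bytes and 8-byte aligned, so a 4-byte flag sandwiched between pointers costs a full 8-byte slot anyway:

    #include <stdio.h>
    #include <stdint.h>

    struct with_int32_flag {
        void    *a;
        int32_t  flag;   /* 4 bytes of data ...                      */
                         /* ... plus 4 bytes of padding before 'b'   */
        void    *b;
    };

    struct with_intptr_flag {
        void     *a;
        intptr_t  flag;  /* fills the whole 8-byte slot, no padding  */
        void     *b;
    };

    int main(void)
    {
        /* On a typical x86-64 build both print 24: the narrower
           flag saves nothing once alignment is accounted for. */
        printf("%zu %zu\n",
               sizeof(struct with_int32_flag),
               sizeof(struct with_intptr_flag));
        return 0;
    }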

I do find the choice of intptr_t a bit odd, as it is an optional C99 type whose intended purpose is round-trip conversion to/from void *. But as far as I can see it is never used that way here. Perhaps this type simply gives the best approximation of a 'native' word-sized type?
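
For comparison, this is the use case intptr_t was actually specified for, a minimal sketch with nothing nginx-specific in it: converting a pointer to an integer and back losslessly.

    #include <stdint.h>

    int main(void)
    {
        int       x  = 42;
        void     *p  = &x;
        intptr_t  ip = (intptr_t)p;   /* pointer -> integer          */
        void     *q  = (void *)ip;    /* integer -> pointer, q == p  */
        return q == p ? 0 : 1;
    }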

schot