views:

1979

answers:

8

This seems like a simple question, but I can't find it with the Stack Overflow search or Google. What does a type followed by a _t mean? Such as

int_t anInt;

I see it a lot in C code meant to deal closely with hardware—I can't help but think that they're related.

+2  A: 

It means type. size_t is the size type.
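For instance, size_t is the unsigned type that the sizeof operator yields; a minimal illustration:

#include <stddef.h>
#include <stdio.h>

int main(void)
{
  size_t n = sizeof(int);           /* sizeof evaluates to a size_t */
  printf("sizeof(int) = %zu\n", n); /* %zu is the size_t format specifier (C99) */
  return 0;
}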

Douglas Mayle
+2  A: 

It's just a convention which means "type". It means nothing special to the compiler.

Matt Curtis
+7  A: 

It's a convention used for naming data types, e.g. with typedef:


typedef struct {
  char* model;
  int year;
} car_t;
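A variable of the new type can then be declared and initialized directly (the values here are just illustrative):

car_t myCar = { "Model T", 1908 };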

mmacaulay
Would that then be a model _T (which, btw, turned 100 this month)? <gdr>.
Joel Coehoorn
+3  A: 

If you're dealing with hardware interface code, the author of the code you're looking at might have defined int_t to be an integer of a specific size. The C standard doesn't assign a specific size to the int type (it can vary with your compiler and target platform), and using a dedicated int_t type would avoid that portability problem.

This is a particularly important consideration for hardware interface code, which may be why you've first noticed the convention there.
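For example, the author might have pinned the width down with a typedef like this (a sketch; int_t is not a standard name, and 32 bits is just one plausible choice):

#include <stdint.h>

/* Hypothetical project typedef: int_t is exactly 32 bits,
   regardless of what plain int is on this platform. */
typedef int32_t int_t;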

Greg Hewgill
This would not be very good practice; I would expect one to define [u]int[8/16/32]_t to make clear what size you are defining.
Ilya
You're quite right: "int_t" by itself tells the programmer that it's a user-defined type, but not what it really is!
Greg Hewgill
+12  A: 

As Douglas Mayle noted, it basically denotes a type name. Consequently, you would be ill-advised to end variable or function names with '_t' since it could cause some confusion. As well as size_t, the C89 standard defines wchar_t, ptrdiff_t, time_t, and probably some others I've forgotten (off_t comes from POSIX rather than ISO C). The C99 standard defines a lot of extra types, such as uintptr_t, intmax_t, int8_t, uint_least16_t, uint_fast32_t, and so on. These new types are formally defined in <stdint.h>, but most often you will use <inttypes.h>, which (unusually for standard C headers) includes <stdint.h>. <inttypes.h> also defines macros for use with printf() and scanf().
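For instance, the format macros expand to the correct conversion specifier for each fixed-width type on the current platform; a minimal sketch:

#include <inttypes.h>
#include <stdio.h>

int main(void)
{
  int64_t big = 9000000000;  /* too large for a 32-bit int */

  /* PRId64 expands to "lld", "ld", or whatever int64_t needs here. */
  printf("big = %" PRId64 "\n", big);
  return 0;
}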

As Matt Curtis noted, there is no significance to the compiler in the suffix; it is a human-oriented convention.

However, you should also note that POSIX defines a lot of extra type names ending in '_t', and reserves the suffix for the implementation. That means that if you are working on POSIX-related systems, defining your own type names with the convention is ill-advised. The system I work on has done it (for more than 20 years); we regularly get tripped up by systems defining types with the same name as we define.

Jonathan Leffler
It seems reasonable that the OS and common runtime libraries define types with generic-ish names; but shouldn't your company's types also carry a distinguishing prefix or something?
Toybuilder
Yes, they should. Unfortunately, I wasn't in charge of the naming convention at the time when it was used. And global search and replace isn't something that's sanctioned - though it could be used to fix a lot of problems, even in a monstrous code base.
Jonathan Leffler
I use _type instead of _t on my typedefs precisely to avoid that.
CesarB
+1 for the POSIX info.
mskfisher
+3  A: 

It is a standard naming convention for data types, usually applied with typedefs. A lot of C code that deals with hardware registers uses the C99 standard names for signed and unsigned fixed-size data types. These names are defined in the standard header <stdint.h> and, by convention, end with _t.
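The typical pattern in such register code looks something like this (a sketch; the register name and address are made up for illustration):

#include <stdint.h>

/* Hypothetical memory-mapped 32-bit control register;
   the address is invented purely for illustration. */
#define CTRL_REG (*(volatile uint32_t *)0x40001000u)

void enable_device(void)
{
  CTRL_REG |= 1u << 0;  /* set the (assumed) enable bit */
}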

mkClark
+1  A: 

The "_t" does not inherently have any special meaning. But it has fallen into common use to add the _t suffix to typedef's.

You may be more familiar with common C practices for variable naming: it's similar to how it's common to stick a p at the front of a pointer name, to use a leading underscore for global variables (this is a bit less common), and to use the variable names i, j, and k for temporary loop variables.

In code where word size and byte ordering are important, it's very common to use custom-defined types with explicit sizes, such as "BYTE" (8 bits), "WORD" (normally 16 bits), and "DWORD" (32 bits).
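Those usually boil down to typedefs along these lines (a sketch; real headers vary by toolchain, and the widths assume a typical 32-bit platform):

typedef unsigned char  BYTE;   /* 8 bits */
typedef unsigned short WORD;   /* 16 bits, on this assumed platform */
typedef unsigned long  DWORD;  /* 32 bits, on this assumed platform */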

'int_t' is not so good, because the definition of "int" varies between platforms -- so whose "int" are you conforming to? (Although most PC-centric development these days treats it as 32 bits, much non-PC development still treats int as 16 bits.)

Toybuilder
+1  A: 

There have been a few good explanations of the subject already.
Just to add another reason for redefining the types: in many embedded projects, all types are redefined to give them correct, explicit sizes and to improve portability across different platforms (i.e., hardware and compilers).
Another reason is to make your code portable across different OSes and to avoid collisions with existing types in the OS you are integrating your code with; for this, a prefix that is as unique as possible is usually added. For example:

typedef unsigned long dc_uint32_t;  /* assumes unsigned long is 32 bits on this platform */
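A fuller project header along those lines might look like this (a sketch; the dc_ prefix and the underlying widths are assumptions about one particular platform):

/* dc_types.h -- hypothetical project-wide fixed-width types,
   prefixed with dc_ to avoid collisions with OS headers. */
#ifndef DC_TYPES_H
#define DC_TYPES_H

typedef unsigned char  dc_uint8_t;   /* assumes char is 8 bits */
typedef unsigned short dc_uint16_t;  /* assumes short is 16 bits */
typedef unsigned long  dc_uint32_t;  /* assumes long is 32 bits */

#endif /* DC_TYPES_H */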

Ilya