+7  Q: 

What is a Type?

A newb question ...
In programming, we come across the word Type frequently enough.

What is it meant to convey, and when should it not be used?

+6  A: 

There are many ways to answer this question, and each way is applicable to some models, but not to others. My favorite general definition is that a type is some subset of all values in the world (for example, the type "positive integers" includes the value 1 but not the value "Stack Overflow"). Types can obviously overlap (the value 1 belongs both to the integer type and to the positive-integer type). This definition gives a good intuitive sense of "bigger" (more inclusive) and "smaller" types, which helps in understanding covariance and contravariance.
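
As a rough sketch of this view in Python (the predicates here are just my own illustration), a type can be modelled as a membership test over values, and a "smaller" type is simply a more restrictive test:

    # A type as a subset of all values: membership is just a predicate.
    def is_integer(v):
        return isinstance(v, int)

    def is_positive_integer(v):          # a "smaller", less inclusive type
        return isinstance(v, int) and v > 0

    print(is_integer(1), is_positive_integer(1))                  # True True
    print(is_integer("Stack Overflow"), is_positive_integer(-3))  # False False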

Craig Stuntz
+7  A: 

I always learned it as "A Type defines how the data is stored in memory and the actions that can be performed on it."

If you think about a Class with local variables and methods, this makes sense.
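
A minimal Python sketch of that idea (the class and its names are purely illustrative): the fields describe what data an instance holds, and the methods describe the actions you can perform on it.

    class Account:
        """The type: what an Account stores, and what you may do with it."""
        def __init__(self, balance):
            self.balance = balance      # the data the type holds

        def deposit(self, amount):      # an action the type permits
            self.balance += amount

    acct = Account(100)
    acct.deposit(25)
    print(acct.balance)                 # 125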

Chet
That's probably the most succinct way to describe a type I've seen in a while. +1
Randolpho
Properly, a type is a set of values and operations. Secondarily, there is a representation in memory, and perhaps on file systems.
S.Lott
You can't define a value for a type without specifying how it's stored in memory. Endianness of the system comes into play with that as well.
Randolpho
@randolpho: things such as endianness and memory layout are supposed to be abstracted away. that's why you're using your language's Type functionality in the first place, instead of simply manipulating a chunk of bytes by hand.
rmeador
@rmeador: Er... exactly? I think we just said the same thing. The whole point of defining a type is to allow you to work abstractly using the semantics that were defined around the type. When you define a type you do one of two things: A) you define the semantics, encoding, memory layout and other low-level stuff surrounding the type or B) you define the type in relation to other types. I think you're focusing around B since that can be programmer-defined, but A is just as important.
Randolpho
semantics have nothing to do with representation. Two different layers of abstraction. Domain of values and operations exist independently of any specific representation. Representation is not unimportant. It's just separate.
S.Lott
A: 

Data types: e.g. int, bool, float, char, string (the names will differ between languages).

Type is short for Data Type. These can be divided into two basic categories: native and user-defined. A data type describes what kind of data can be held in a variable and the operations that you can perform on that data.

Native data types are already defined in the language. Often these include integer, float, boolean, character, string, or something similarly named. Different languages have different sets of native data types; some languages don't have a boolean, for example, and others don't have a native string type.

User-defined (custom) data types are the ones you define yourself. You can define a data type for storing any kind of information, together with the operators that act on those values. These are typically classes or structures.
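
A small Python sketch contrasting the two categories (the Point class is just an example I made up): built-in types come with the language, while a user-defined type also declares the operations that act on its values.

    # Native (built-in) types come with the language...
    n = 42            # int
    flag = True       # bool
    name = "type"     # str

    # ...user-defined types are the ones you declare yourself,
    # including the operations that act on their values.
    class Point:
        def __init__(self, x, y):
            self.x, self.y = x, y

        def __add__(self, other):       # define what "+" means for Points
            return Point(self.x + other.x, self.y + other.y)

    p = Point(1, 2) + Point(3, 4)
    print(type(n).__name__, type(p).__name__)   # int Point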

Xetius
+1  A: 

In terms of data types, it's the format in which the data is stored in memory, and it conveys the operations that can be performed on and with the data.

For example, an 'unsigned integer' is a data type that can only store non-negative whole numbers (i.e. 0, 1, 2, 3, ...), usually up to a specific maximum, because the memory allocated to the unsigned integer is limited.
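
Python's own integers are unbounded, but the fixed-size behaviour described above can be seen with the standard ctypes module (this is just an illustration, not how Python normally stores numbers):

    import ctypes

    # A 32-bit unsigned integer can only hold 0 .. 4294967295; anything
    # larger wraps around, because only 32 bits of memory are used.
    print(ctypes.c_uint32(4294967295).value)      # 4294967295
    print(ctypes.c_uint32(4294967295 + 1).value)  # 0 -- the 33rd bit doesn't fit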

sixfoottallrabbit
+1  A: 

@divo said it well enough, but I'll try to sum up:

A type is a set of data (it can even be made up of other types) that has been given semantic meaning. That's important -- a type is a definition of semantic meaning. This type is different from that type because I say so. The semantic meaning of the type defines how you can use it, what operations you can perform against it, that sort of thing.

At its lowest form, a type is just an encoding against a grouping of bits. For example, an integer (int in many languages) is (typically these days) 32 bits of data, encoded in two's-complement form. Floats are 32 or 64 bits encoded according to the IEEE 754 floating-point standard. Chars are 8 or 16 (more frequently 16) bits encoded in ASCII or UTF-8/UTF-16. A string is an array of characters. And so forth.

A complex type (which is what most people think of when they see/hear the word "type") is made up of one or more other types. In most languages, a type can be defined as either an alias of another type, or as a data structure or class.
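
To make the "encoding against a grouping of bits" point concrete, here is a small Python sketch using the standard struct module (purely illustrative):

    import struct

    # The same abstract value, encoded under different types:
    print(struct.pack("<i", 6))     # 32-bit two's-complement int: b'\x06\x00\x00\x00'
    print(struct.pack("<f", 6.0))   # 32-bit IEEE 754 float:       b'\x00\x00\xc0@'
    print("A".encode("utf-8"))      # a character as encoded bytes: b'A'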

Randolpho
+1  A: 

Informally, a Type is used to name a category of objects with similar characteristics, as with "Chair" for a type of furniture. A Chair is typically for sitting on, and so has a flat horizontal space. Chairs often have four legs, but not always. A chair has a certain color or set of colors. etc.

So, if I tell you I have a Chair, you know a lot about the object I am referring to.

Taking the analogy a step further, chairs have functionality (you can sit on a chair) and properties (number of legs, color). Further, common configurations of a chair's properties can be named as a sub-Type (or Subclass): e.g. a Stool is a three-legged chair with no back.

Types are a short-hand for describing computer objects so that all the properties and actions (methods) don't need to be specified for each individual object. By declaring that a certain object has a certain type, programmers (and the computer) assume commonality based on the Type, making the programming process cheaper/better/faster.
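
The analogy translates almost directly into code. A rough Python sketch (Chair and Stool here are just the analogy written out, nothing standard):

    class Chair:
        """Knowing something is a Chair already tells you a lot about it."""
        def __init__(self, legs=4, color="brown"):
            self.legs, self.color = legs, color

        def sit_on(self):
            return "You sit down."

    class Stool(Chair):
        """A sub-Type: a three-legged chair with no back."""
        def __init__(self, color="brown"):
            super().__init__(legs=3, color=color)

    s = Stool()
    print(isinstance(s, Chair), s.legs)   # True 3 -- a Stool is still a Chair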

Mark Laff
To me, "type" implies primitive values, while "class" implies objects. Your description seems a lot like the latter to me. Maybe I'm way off-base here?
Imagist
@Imagist: yep, you're way off base. Type means both "primitive" values and "objects". They're synonymous.
Randolpho
+5  A: 

Data is nothing but a collection of bits. A type tells you what those bits represent, like int, char, or Boolean.
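
For example, the very same bits give different values depending on the type you read them as; a quick Python illustration using the struct module:

    import struct

    raw = b'\x42\x00\x00\x00'            # 32 bits; meaningless on their own
    print(struct.unpack("<i", raw)[0])   # read as a little-endian int: 66
    print(struct.unpack("<f", raw)[0])   # read as an IEEE 754 float: ~9.2e-44
    print(raw[:1].decode("ascii"))       # first byte read as a char: B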

lune
Bit strings are only one of many possible representations of data.
Apocalisp
and the others would be?
lune
Different ways of encoding data are limited only by your imagination. Some computers represent data as pairs of amino acids. The point is that bit strings are an encoding, but the data is not the encoding.
Apocalisp
Am I getting it?
lune
A: 

From the perspective of a beginning programmer, you can think of the purpose of a type as limiting what information can be stored in a particular variable. For example (ignoring odd environments), in C:

  • a char is an 8-bit value that can represent a number ranging from -128 to 127.
  • an unsigned short is a 16-bit value that can represent a number ranging from 0 to 65535.

It's worth noting that not all languages handle typing in the same way. A language that strictly limits what values can be stored in variables based on types is considered strongly typed. An example of a language that is not strongly typed is Perl - if you define a variable, Perl will do magic and treat its value as either a string or a number based on the context.
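
Python's integers don't have these fixed limits, but the C-style ranges above can be imitated with the standard ctypes module, just to see the limits in action (illustrative only):

    import ctypes

    # Stand-ins for the C types mentioned above; out-of-range values wrap.
    print(ctypes.c_byte(127).value, ctypes.c_byte(128).value)          # 127 -128
    print(ctypes.c_ushort(65535).value, ctypes.c_ushort(65536).value)  # 65535 0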

atk
That's not entirely accurate. *All* languages care about typing, but some specify and encode types dynamically at runtime, rather than statically at compile time.
Randolpho
I guess I worded that poorly. Revised to be a little more clear.
atk
A: 

A type is the name given to the "description" of a class or object (instance of a class).

In .NET, the type tells you information like the class's name, fields, properties, methods, where it is etc. It can also lead to information like what assembly (DLL) it is located in and the directory it is in.

The type is very important as the compiler knows what can and cannot be done with an object at compile time. This eases development significantly, ensures that problems are raised sooner, and makes developers less likely to do the wrong things with the wrong objects.

Some examples of so-called built-in types are "int, double, string, float, byte and short".
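
The answer above is about .NET, but the same kind of type information can be inspected at runtime in other environments too. A loose Python analogue (my own sketch, not .NET code):

    import inspect
    import collections

    t = collections.OrderedDict            # any type will do
    print(t.__name__)                       # OrderedDict -- the type's name
    print("move_to_end" in dir(t))          # True -- one of its methods
    print(inspect.getmodule(t).__name__)    # collections -- where it lives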

Dominic Zukiewicz
A: 

A "Type" is meant to convey the flavor of an object; its limits and expected defaults.

An int Type means that it's a number and in many languages defaults to zero. A string Type, by contrast, is a sequence of characters that may resemble an int but doesn't have to; the default is an empty string or a null value, depending on the language.

"Type" is also, often, used to refer to a custom object or class, not just int, bool, string, etc. Is there a case where you shouldn't use "Type"?

shanabus
A: 

A type is a type, according to the Python view of the world. In other words, it is something that defines itself as the basis of a hierarchy of concepts. It's a very abstract concept, an "upper ontological" entity that defines the concepts of the programming world you are describing. In some sense, the concept of type is the big bang of your programming environment.
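
In Python this is quite literal; every class is an instance of type, and type is even an instance of itself:

    print(type(1))             # <class 'int'>
    print(type(int))           # <class 'type'>
    print(type(type) is type)  # True -- the "big bang" at the top of the hierarchy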

I suggest this very insightful article:

http://www.cafepy.com/article/python_types_and_objects/python_types_and_objects.html

Stefano Borini
A: 

Types came about from how data is stored in memory. An integer stored in memory looks like a regular number: x = 6 is translated into memory as 00000110. But what does 6.5 look like? How about the letter x?

Since we can only store things as a series of ones and zeros, we need to know what those ones and zeros mean; this is where types come in. Otherwise I might store a number like x = 66 and get back the letter B.
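
You can see exactly this in Python (a small illustration of the bits above):

    print(format(6, "08b"))    # 00000110 -- the number 6 as raw bits
    print(format(66, "08b"))   # 01000010 -- the number 66 as raw bits
    print(chr(66))             # B -- the same value interpreted as a character code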

Jweede
+1  A: 

Here's the best definition I have ever come across:

A proof is a program. The formula that it proves is a type for the program.

Here, "program" is meant very generally, and refers to any construct in your programming language that can be reasoned about in that language (be it an irreducible value, an expression, a function, or an entire application).

Some programming languages, so-called "statically typed" languages, include an ancillary language (called a type system) for making statements about programs. Statements that, if the program is correct, should always be true. So, in a sense, types are also programs, interpreted by a further program called a type-checker. Some type systems require the programmer to make explicit statements about types, where the type-checker ensures that your programs correspond with those statements and will give you an error if they don't. Other systems try to infer the most general type for your programs automatically and will give you an error if no such type can be inferred.
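
As a small illustration of that last idea in Python (the function is made up; the annotations are optional and are checked by an external tool such as mypy, not by Python itself):

    # The annotation is a statement about the program: "given an int,
    # this returns an int". A type-checker verifies it without running the code.
    def double(n: int) -> int:
        return n * 2

    ok: int = double(21)         # consistent with the stated types
    # bad: int = double("oops")  # a type-checker would reject this line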

Apocalisp