You often see database fields set to a maximum length of 255 characters; what is the traditional/historic reason why? I assume it's something to do with paging/memory limits and performance, but the distinction between 255 and 256 has always confused me.

varchar(255)

Considering this is a capacity (a maximum length), not a zero-based index, why is 255 preferred over 256? Is a byte reserved for some purpose (a terminator, a null, or something similar)?

Presumably varchar(0) is nonsense (it has zero capacity)? If so, surely the 2^8 possible values should allow a capacity of 256?

Are there other lengths that provide performance benefits? For example, is varchar(512) less performant than varchar(511) or varchar(510)?

Is this value the same for all relational databases, old and new?

Disclaimer: I'm a developer, not a DBA. I use SQL Server 2005 with field sizes and types that suit my business logic where that is known, but I'd like to know the historic reason for this preference, even if it's no longer relevant (and even more so if it still is).

Edit:

Thanks for the answers. There seems to be some consensus that a byte is used to store the size, but this doesn't settle the matter definitively in my mind.

If the metadata (the string length) is stored in the same contiguous memory/disk block as the data, it makes some sense: 1 byte of metadata and 255 bytes of string data suit each other very nicely and fit into 256 contiguous bytes of storage, which is presumably neat and tidy.
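To make that concrete, here is a minimal C sketch of such a contiguous layout (the struct and its field names are invented for illustration; real DBMS row formats differ and are usually more elaborate):

```c
#include <stdint.h>

/* Hypothetical fixed-size slot for one varchar(255) value:
 * a 1-byte length prefix followed by up to 255 bytes of data,
 * so the whole slot occupies exactly 256 contiguous bytes.    */
struct varchar255_slot {
    uint8_t length;     /* 0..255: number of data bytes actually in use */
    char    data[255];  /* string bytes, not NUL-terminated             */
};

/* sizeof(struct varchar255_slot) == 256 on typical platforms */
```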

But if the metadata (the string length) is stored separately from the actual string data (in a master table, perhaps), then constraining the string data to 255 bytes just because it's easier to store a 1-byte integer of metadata seems a bit odd.

In both cases, it would seem to be a subtlety that probably depends on the DB implementation. The practice of using 255 seems pretty widespread, so someone somewhere must have argued a good case for it in the beginning; can anyone remember what that case was (or is)? Programmers won't adopt a new practice without a reason, and this must have been new once.

+5  A: 

255 is the largest value that can be stored in a single-byte unsigned integer (assuming 8-bit bytes); hence, applications that store the length of a string for some purpose prefer 255 over 256, because it means they only have to allocate 1 byte for the "size" variable.

Amber
+4  A: 

255 is the maximum value of an 8-bit unsigned integer: 11111111 in binary = 255.

remi bourgarel
+9  A: 

With a maximum length of 255 characters, the DBMS can choose to use a single byte to indicate the length of the data in the field. If the limit were 256 or greater, two bytes would be needed.

A value of length zero is certainly valid for varchar data (unless constrained otherwise). Most systems treat such an empty string as distinct from NULL, but some systems (notably Oracle) treat an empty string identically to NULL. For systems where an empty string is not NULL, an additional bit somewhere in the row would be needed to indicate whether the value should be considered NULL or not.
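As a rough illustration of that extra indicator (a hypothetical layout, not how any particular DBMS actually stores its rows):

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical slot for a system where NULL and the empty string are
 * distinct: the is_null flag plays the role of the "additional bit
 * somewhere in the row", while is_null == false and length == 0
 * together represent an empty string.                                */
struct nullable_varchar255 {
    bool    is_null;    /* true -> the value is SQL NULL            */
    uint8_t length;     /* 0..255 data bytes when is_null is false  */
    char    data[255];  /* string bytes, not NUL-terminated         */
};
```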

As you note, this is a historical optimisation and is probably not relevant to most systems today.

Greg Hewgill
+1 for answering to the question about `varchar(0)`
ewernli
Reserving a byte for the length makes sense, but WRT your second paragraph: presumably a /value/ of length zero is valid, but is a /capacity/ of zero valid?
Andrew M
@Andrew: I just tried and PostgreSQL rejects `varchar(0)`. It's probably not that useful because the value could only be two things, the empty string or NULL, and so you might as well just use a `bit` for that.
Greg Hewgill
So is it true to assume that the capacity metadata is stored in the same contiguous block as the data itself, and therefore there is an advantage to the DB to keep the total of those two things (data and metadata) within one page (presumably 256 bytes)?
Andrew M
@Andrew: That's an assumption that may or may not be true, depending on the implementation details of the DBMS in question. Page sizes are typically much larger than 256 bytes. As I mentioned, this sort of optimisation is sometimes important (eg. if you're storing billions of small rows), but most of the time it's not worth worrying about.
Greg Hewgill
+3  A: 

Varchars are often implemented as Pascal strings, holding the actual length in byte #0. The length was therefore bounded by 255. (The value of a byte ranges from 0 to 255.)

Vlad
+1  A: 

I think it has to do with old-school programmers; I can't even remember why we did it.

Grumpy
In a funny way, I think this is the most correct answer of all :)
Andrew M
Hahaha, very enlightening.
Bernhof
+5  A: 

255 was the varchar limit in MySQL 4 and earlier.

Also, 255 chars + null terminator = 256.

Or: a 1-byte length descriptor gives a possible range of 0-255 chars.

CuriousPanda
+2  A: 

8 bits unsigned = 256 possible values

255 characters + byte 0 for the length

gbn
+2  A: 

A maximum length of 255 allows the database engine to use only 1 byte to store the length of each field. You are correct that 1 byte of space allows you to store 2^8=256 distinct values for the length of the string.

But if you allow the field to store zero-length text strings, you need to be able to store zero in the length. So you can allow 256 distinct length values, starting at zero: 0-255.
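A rough C sketch of that reasoning (purely illustrative; the helper below is invented, not taken from any real engine):

```c
#include <stdint.h>

/* Bytes needed for the length prefix of a varchar(n) column:
 * a single unsigned byte covers every length from 0 (the empty string)
 * through 255, i.e. 256 distinct values, so one byte suffices while
 * n <= 255; from 256 upward a second byte is required.                */
unsigned length_prefix_bytes(uint32_t declared_max)
{
    return (declared_max <= UINT8_MAX) ? 1u : 2u;
}
```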

MarkJ