views:

107

answers:

7

What are good sizes for data types in SQL Server? When defining columns, I see data types with sizes of 50 as one of the default sizes (e.g. nvarchar(50), binary(50)). What is the significance of 50? I'm tempted to use sizes that are powers of 2 — is that better, or just useless?

Update 1 Alright thanks for your input guys. I just wanted to know the best way of defining the size of a datatype for a column.

+3  A: 

The size of a field should be appropriate for the data you are planning to store there, global defaults are not a good idea.

Lazarus
+2  A: 

This totally depends on what you are storing. If you need x chars use x not some arbitrarily predefined amount.

John Nolan
+5  A: 

There is no reason to use powers of 2 for performance etc. Data length should be determined by the size of the data being stored.

Kevin
+3  A: 

Why not the traditional powers of 2, minus 1 — such as 255...

Seriously, the length should match what you need and is suitable for your data.

Nothing else: how the client uses it, aligns to 32 bit word boundary, powers of 2, birthdays, Scorpio rising in Uranus, roll of dice...

gbn
+2  A: 

It's a good idea for the whole row to fit into a page several times over without leaving too much free space.

A row cannot span two pages, and a page has 8,096 bytes of space for data, so two rows that take 4,049 bytes each will occupy two pages.

See docs on how to calculate the space occupied by one row.

Also note that VAR in VARCHAR and VARBINARY stands for "varying", so if you put a 1-byte value into a 50-byte column, it will occupy only 1 byte (plus a small per-column length overhead).
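A quick way to see this for yourself — a minimal sketch using a hypothetical temp table name:

```sql
-- Hypothetical demo table: the column is declared VARCHAR(50),
-- but a variable-length value consumes only its actual length.
CREATE TABLE #SizeDemo (val VARCHAR(50));

INSERT INTO #SizeDemo (val) VALUES ('x');

-- DATALENGTH returns the number of bytes actually stored
-- for the value: 1, not 50.
SELECT DATALENGTH(val) FROM #SizeDemo;

DROP TABLE #SizeDemo;
```

Compare with a CHAR(50) column, where the same query would report 50 bytes because fixed-length values are padded to the declared size.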

Quassnoi
+2  A: 

You won't gain anything from using powers of 2. Make the fields as long as your business needs really require them to be - let SQL Server handle the rest.

Also, since the SQL Server page size is limited to 8K (of which 8060 bytes are available to user data), making your variable length strings as small as possible (but as long as needed, from a requirements perspective) is a plus.

That 8K limit is a fixed SQL Server system setting which cannot be changed.

Of course, SQL Server these days can handle more than 8K of data in a row, using so-called "row-overflow" pages - but it's less efficient, so trying to stay within 8K is generally a good idea.
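To illustrate the overflow behaviour — a sketch with a hypothetical table whose declared maximum exceeds the in-row limit:

```sql
-- Hypothetical table: the declared maximum (2 x 8,000 bytes)
-- exceeds the 8,060-byte in-row limit. SQL Server allows the
-- definition because the columns are variable-length; values
-- that don't fit are moved to row-overflow pages at insert time.
CREATE TABLE #OverflowDemo (
    a VARCHAR(8000),
    b VARCHAR(8000)
);

-- This row (roughly 8,000 + 4,000 bytes) cannot fit in one
-- 8 KB page, so one of the values is pushed off-row - reading
-- that column then costs an extra page lookup.
INSERT INTO #OverflowDemo (a, b)
VALUES (REPLICATE('a', 8000), REPLICATE('b', 4000));

DROP TABLE #OverflowDemo;
```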

Marc

marc_s
+3  A: 

The reason so many fields have a length of 50 is that the SQL Server Management Studio table designer defaults to 50 as the length for most data types where length is an issue.

As has been said, the length of a field should be appropriate to the data being stored there, not least because there is a limit on the length of a single record in SQL Server (8,060 bytes). It is possible to blow past that limit.
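With fixed-length types the limit bites immediately — a sketch with a hypothetical table name:

```sql
-- Hypothetical table: 3 x CHAR(3000) = 9,000 bytes of fixed-length
-- data, which exceeds the 8,060-byte row limit. Unlike the
-- variable-length case, this CREATE TABLE fails outright, since
-- fixed-length columns cannot be pushed to row-overflow pages.
CREATE TABLE RowTooWide (
    a CHAR(3000),
    b CHAR(3000),
    c CHAR(3000)
);
```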

Also, the length of your fields can be considered part of your documentation. I don't know how many times I've met lazy programmers who claim they don't need to document because the code is self-documenting, and then don't bother doing the things that would make the code self-documenting.

Jeff Hornby