views: 186
answers: 4

When it comes to column order in DB tables, are there any standards or at least best practices?

Here are a few homegrown best practices that I've adopted:

  • the primary key comes first; foreign keys come after the primary key;
  • columns holding user-generated data come after the foreign keys;
  • timestamp columns come at the end of the table (I have no strong argument for that);

These still leave many questions unanswered, though, and leave me wondering whether the "user_role" column should come before or after the "user_ip" column.
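
To make these rules concrete, here is a minimal sketch of a table laid out this way (the table and column names are invented for illustration):

    CREATE TABLE user_comments (
        comment_id   INT IDENTITY(1,1) PRIMARY KEY,           -- primary key first
        user_id      INT NOT NULL REFERENCES users (user_id), -- foreign keys next
        post_id      INT NOT NULL REFERENCES posts (post_id),
        comment_text NVARCHAR(MAX) NOT NULL,                  -- user-generated data
        user_role    VARCHAR(20) NOT NULL,                    -- placement is the open question
        user_ip      VARCHAR(45) NULL,
        created_at   DATETIME NOT NULL DEFAULT GETDATE(),     -- timestamps last
        updated_at   DATETIME NULL
    );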

+4  A: 
Patrick Karcher
I call my tables "somethings" (plural) and my primary key "something" (singular).
Tor Valamo
Yep, perfect. Customers.CustomerID. I worked with someone who prefixed standard data tables with *tbl*, and had a few other prefixes. Done well, this is okay I think, but not necessary.
Patrick Karcher
nouI verThink nouPrefixes verAre adjmuch adjWorse conThan nouNot adjNecessary. I think they are sloppy and clutter the database in a harmful way.
Emtucifor
I love how your answer evolves over time.
Emanuil
A: 

You can find various best practices all over the net.

Always save your CREATE TABLE statements, along with all other statements defining the database schema, in a secure location. Every time you make a change to a database object, be sure to script the change and check it into version-control software, such as Visual Source Safe.

With such a policy you can easily re-create the database schema on the same or a different server if necessary. Also, if you have the same database on multiple servers, it's easy to compare schemas and reconcile any differences that might have crept in over time.
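
As a sketch of what such a scripted change could look like (the file name and object names here are purely illustrative), each schema change lives in its own script that is checked into version control and can be replayed on any server:

    -- 0042_add_last_login_to_users.sql (hypothetical change script,
    -- stored next to the full CREATE TABLE scripts in version control)
    ALTER TABLE dbo.users
        ADD last_login DATETIME NULL;
    GO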

Although descriptive table names have no performance benefit, they make databases self-documenting and easier to code against. Table names should reflect their business meaning.

Create user tables on a non-primary filegroup and reserve the primary filegroup for system objects. This way system-supplied and user-defined objects do not compete for disk resources.

Create commonly accessed tables on the same filegroup. You can expect performance benefits if the data of commonly joined tables resides on the same disk.
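
A rough T-SQL sketch of both points (the database, filegroup, and table names are invented):

    -- Add a dedicated filegroup and data file for user tables,
    -- leaving PRIMARY to the system objects.
    ALTER DATABASE SalesDb ADD FILEGROUP UserData;
    ALTER DATABASE SalesDb ADD FILE
        (NAME = N'SalesDb_UserData1', FILENAME = N'D:\Data\SalesDb_UserData1.ndf')
        TO FILEGROUP UserData;

    -- Place commonly joined tables on that same filegroup.
    CREATE TABLE dbo.Customers (CustomerID INT NOT NULL, Name NVARCHAR(100) NOT NULL) ON UserData;
    CREATE TABLE dbo.Orders (OrderID INT NOT NULL, CustomerID INT NOT NULL, OrderDate DATETIME NOT NULL) ON UserData;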

Create a clustered index on every table. Each table can only have a single clustered index. If a table has a clustered index, its data is physically sorted according to the clustered index key. Clustered indexes in SQL Server have numerous benefits. For example, if you retrieve data from a table using an ORDER BY clause referencing the clustered index key, the data does not need to be sorted at query execution time.
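
A minimal sketch of that last point (the table name is invented): because the clustered index key keeps the rows physically sorted, an ORDER BY on that key needs no separate sort step.

    CREATE TABLE dbo.Invoices (
        InvoiceID  INT NOT NULL,
        CustomerID INT NOT NULL,
        Amount     MONEY NOT NULL,
        CONSTRAINT PK_Invoices PRIMARY KEY CLUSTERED (InvoiceID)
    );

    -- Rows are already stored in InvoiceID order, so no sort is needed here.
    SELECT InvoiceID, Amount
    FROM dbo.Invoices
    ORDER BY InvoiceID;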

If two tables have a common column, for example customer_id, and both tables have a clustered index on that column, joining those tables will be considerably more efficient than joining the same tables on the same column without clustered indexes.

Ensure the clustered index is built on a column that contains distinct values in each row.
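
As a sketch (assuming neither table already has a clustered index), with both sides clustered on CustomerID the optimizer can merge two already-sorted inputs instead of sorting or hashing them first:

    CREATE CLUSTERED INDEX IX_Customers_CustomerID ON dbo.Customers (CustomerID);
    CREATE CLUSTERED INDEX IX_Orders_CustomerID    ON dbo.Orders (CustomerID);

    -- Both join inputs arrive physically ordered by CustomerID.
    SELECT c.CustomerID, o.OrderID
    FROM dbo.Customers AS c
    JOIN dbo.Orders AS o ON o.CustomerID = c.CustomerID;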

Source: Creating SQL Server tables: A best practices guide

R van Rijn
Any database worth its salt will output the existing schema as text... at least Postgres, MySQL, MS SQL, and Oracle do it.
Joe Koberg
down vote for recommending Visual SourceSafe for anything, especially in 2010
fuzzy lollipop
@fuzzy lollipop: what do you recommend instead? I'm looking for something that integrates with SQL Server databases (ideally with a management studio plugin)
Emtucifor
@fuzzy lollipop: It was not my intention to recommend Visual SourceSafe. I have worked with it and I know it's crap.
R van Rijn
You don't need something that "integrates" with SQL Server. Any version control system will work; just don't use Visual Source Safe. In 15 years it is the only version control system I have actually lost data to, and that happened numerous times.
fuzzy lollipop
I meant integrates literally, not in any figurative sense. What I want is something that either updates the contents of stored procedures itself or peeks at submitted batches to catch updates to stored procedures and keep version control in sync. What I don't want is "remember, every time you change a stored procedure you ALSO have to check it into version control!" I'd much prefer "remember, the only place you're allowed to alter stored procedures is in SSMS." I would love something that also managed privileges, so that checking an object in or out grants or revokes ALTER permission.
Emtucifor
+5  A: 

In MSSQL Server, NULL columns at the end of the column list actually reduce the space required to store that row, which can increase the number of rows per page, which can reduce the number of reads required per I/O operation, which is a performance benefit. While the performance benefit may not be huge, it is something to keep in mind for any column that has a preponderance of NULL values.

Proof of trailing NULLs reducing storage space can be had at Deciphering a SQL Server data page:

... The null bitmap is slightly different (fe / 1111 1110) since it's now the second column that's null. What's interesting is that in this row, only a single variable length column is present, not two. Thus there's only a single variable length column end index identifier, 0d00 / 0x000d / 13. From that we can conclude that columns are handled in order, and thus one might want to consider the order of columns, if a specific column is usually null, it might be more efficient to have it ordered last.

Note that this applies only to variable-length columns. While that clearly includes varchar, varbinary, and so on, I'm not sure about other data types (and don't have time right now to conclusively determine this).
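
As a sketch of how one might apply this (the table is an invented example), a variable-length column that is NULL for most rows is placed last, so that when it is NULL the variable-length portion of the row simply ends earlier:

    CREATE TABLE dbo.AuditLog (
        AuditID     INT IDENTITY(1,1) PRIMARY KEY,
        EventCode   INT NOT NULL,
        EventTime   DATETIME NOT NULL,
        UserName    VARCHAR(128) NOT NULL,
        -- Mostly NULL and variable-length, so it goes last.
        ErrorDetail VARCHAR(8000) NULL
    );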

Emtucifor
+1 because I had no idea about this!
ahsteele
+1 I didn't either.
Patrick Karcher
+1  A: 

In MS SQL Server, the datatypes ntext, image, and text (all recently deprecated) should be the last columns in the row to avoid a performance penalty.
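
A minimal sketch of that layout (table and column names are invented), with the deprecated LOB type kept as the final column:

    CREATE TABLE dbo.Articles (
        ArticleID INT IDENTITY(1,1) PRIMARY KEY,
        Title     NVARCHAR(200) NOT NULL,
        Author    NVARCHAR(100) NOT NULL,
        Body      TEXT NULL   -- deprecated LOB type, placed last in the row
    );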

egrunin