Instead of using NVARCHAR(MAX) for my fields, I am limiting their lengths arbitrarily. When does limiting these fields really make a significant difference in terms of performance? Every 100? 200? 1,000 characters?

+1  A: 

SQL Server stores rows on 8 KB pages, which caps in-row data at roughly 8,000 bytes: about 8,000 characters for VARCHAR, or 4,000 for NVARCHAR, since each NVARCHAR character takes 2 bytes. You may see significant performance issues with TEXT, VARCHAR(MAX), or NVARCHAR(MAX) once the data exceeds that limit, because the database then has to page the value out of the row internally.
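
A quick sketch of the difference (table and column names are made up):

    -- Hypothetical demo table: one bounded column, one MAX column
    CREATE TABLE dbo.LengthDemo (
        Id        INT IDENTITY(1,1) PRIMARY KEY,
        Bounded   NVARCHAR(2000) NULL,  -- at most 4,000 bytes; fits in-row on an 8 KB page
        Unbounded NVARCHAR(MAX)  NULL   -- pushed off-row once it outgrows the page
    );

    -- A 5,000-character value is 10,000 bytes: too big to stay in-row,
    -- so SQL Server stores it off the page and leaves a pointer behind.
    -- (The CAST keeps REPLICATE from truncating at 4,000 characters.)
    INSERT INTO dbo.LengthDemo (Unbounded)
    VALUES (REPLICATE(CAST(N'x' AS NVARCHAR(MAX)), 5000));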

Daisy Moon
+2  A: 

Your question implies that you are using SQL Server 2005/2008, so I'll take a crack at it. With the current architecture, what matters is the storage itself. When you store more than 8,060 bytes in a row, the excess goes into a separate internal allocation unit, either for row-overflow data or for large object (text/LOB) data.

Depending on your settings, SQL Server leaves a pointer in the row to that other allocation unit of nvarchar data. To retrieve the data, SQL Server has to read the row's page, then follow the pointer to another page to get the entire contents of the row.

If the table definition rules out the possibility of hitting that 8,060-byte maximum, then you can guarantee that you won't pay for extra pointer look-ups (which increase reads).
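
If you want to check whether a table already has off-row data, something along these lines (reusing the made-up table from the first answer) lists the allocation units behind it:

    -- Which allocation units back dbo.LengthDemo, and how many pages each holds.
    -- ROW_OVERFLOW_DATA or LOB_DATA pages mean reads must follow pointers
    -- off the main row pages. DETAILED mode is needed to scan those units.
    SELECT alloc_unit_type_desc, page_count
    FROM sys.dm_db_index_physical_stats(
        DB_ID(), OBJECT_ID('dbo.LengthDemo'), NULL, NULL, 'DETAILED');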

Also, keep in mind that SQL Server 2008 (Enterprise Edition) can use page- and row-level compression, so the rules change a little when using that feature.
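
For reference, here is roughly what that looks like, again with the made-up table name; sp_estimate_data_compression_savings ships with SQL Server 2008 and can preview the effect first:

    -- Estimate how much page compression would save (SQL Server 2008+)
    EXEC sp_estimate_data_compression_savings
         @schema_name = 'dbo', @object_name = 'LengthDemo',
         @index_id = NULL, @partition_number = NULL,
         @data_compression = 'PAGE';

    -- Enable page compression (Enterprise Edition only in 2008)
    ALTER TABLE dbo.LengthDemo REBUILD WITH (DATA_COMPRESSION = PAGE);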

Strommy
To add to this, you also cannot use nvarchar(max) or varchar(max) as key columns of an index. Not being able to build indexes on these columns may have a more significant impact on performance than the size of the column itself.
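
A rough illustration, with made-up table and index names:

    -- Hypothetical table with a bounded and an unbounded column
    CREATE TABLE dbo.Docs (
        Id    INT IDENTITY(1,1) PRIMARY KEY,
        Title NVARCHAR(200) NOT NULL,
        Body  NVARCHAR(MAX) NULL
    );

    -- Fails: a MAX type is invalid as an index key column
    -- CREATE INDEX IX_Docs_Body ON dbo.Docs (Body);

    -- Works: a bounded key column; the MAX column can only be INCLUDEd
    CREATE INDEX IX_Docs_Title ON dbo.Docs (Title) INCLUDE (Body);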
StrateSQL
Good point. The inability to do that can inhibit performance quite a bit...
Strommy