Instead of using NVarchChar (max) for my field I am limiting the lengths arbitrarily. WHen does limiting these fields really make a significant difference in terms of performance? Every 100? 200? 1000 characters?
SQL Server stores data in 8 KB pages, which works out to roughly 8000 characters for VARCHAR (4000 for NVARCHAR, since each character takes two bytes). You may see significant performance issues if you use TEXT or varchar(max)/nvarchar(max) and the data exceeds those limits, as the database then has to start paging the values internally.
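For illustration, here is a minimal sketch of the two designs being compared (the table and column names are hypothetical): a capped NVARCHAR column versus NVARCHAR(MAX).

    -- Hypothetical tables, for illustration only.
    CREATE TABLE dbo.FeedbackCapped
    (
        FeedbackId INT IDENTITY(1,1) PRIMARY KEY,
        Comments   NVARCHAR(2000) NULL  -- at most 4000 bytes; stays in-row as long as the whole row fits on the 8 KB page
    );

    CREATE TABLE dbo.FeedbackUnlimited
    (
        FeedbackId INT IDENTITY(1,1) PRIMARY KEY,
        Comments   NVARCHAR(MAX) NULL   -- values that do not fit on the page are pushed to off-row storage
    );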
Your question implies that you are using SQL Server 2005/2008, so I'll take a crack at it. With the current architecture, what matters is the storage itself. When you store more than 8 KB in a row, the excess goes into a separate internal allocation unit, either for row-overflow data or for large object (LOB) data.
Depending on your settings, SQL leaves a pointer to this other storage area for the nvarchar data. On retrieval, SQL has to read the page, then follow the pointer to another page to get the entire contents of the row.
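The setting being referred to is presumably the "large value types out of row" table option; a sketch of toggling it (the table name is made up):

    -- 1 forces varchar(max)/nvarchar(max)/varbinary(max)/xml values off-row,
    -- leaving only a 16-byte pointer in the row; 0 (the default) keeps them
    -- in-row whenever they fit.
    EXEC sp_tableoption N'dbo.FeedbackUnlimited', 'large value types out of row', 1;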
If the table definition rules out the possibility of hitting that 8060-byte row maximum, then you can guarantee that you never have to do those extra lookups through pointers (which increase reads).
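If you want to see whether a table is actually using off-row storage, one way (sketched here against the same hypothetical table) is to look at its allocation units; ROW_OVERFLOW_DATA and LOB_DATA pages indicate values stored outside the main data rows.

    SELECT
        au.type_desc,      -- IN_ROW_DATA, ROW_OVERFLOW_DATA, or LOB_DATA
        au.total_pages,
        au.used_pages
    FROM sys.allocation_units AS au
    JOIN sys.partitions AS p
        ON (au.type IN (1, 3) AND au.container_id = p.hobt_id)
        OR (au.type = 2       AND au.container_id = p.partition_id)
    WHERE p.object_id = OBJECT_ID(N'dbo.FeedbackUnlimited');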
Also, keep in mind that SQL 2008 (Enterprise edition) can use page and row level compression, so the rules change a little when using that feature.
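Enabling that feature looks roughly like the following (assuming an existing table and an edition that supports compression):

    -- Rebuild the table with page compression; row compression would use ROW instead of PAGE.
    ALTER TABLE dbo.FeedbackUnlimited REBUILD WITH (DATA_COMPRESSION = PAGE);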