Our data access layer uses command objects to communicate with sql server.

In most cases I've hard-coded the field size (that matches the column size in sql server) into the command param builder.

Such as:

SqlParameter param = new SqlParameter("@name", SqlDbType.NVarChar, 4000);

What's the advantage to specifying a value here (4000 in this example) versus just leaving it as 0? It's a pain when you have to recompile when a column size changes.

+1  A: 

There is no performance or execution-time advantage - size is inferred if it is not explicitly passed:

The Size is inferred from the value of the dbType parameter if it is not explicitly set in the size parameter.

I guess you could say that by explicitly declaring the size of the parameter to match the size of the field in SQL Server you are better informing readers of your code as to the limits of the data model. How useful that may be is in the eye of the beholder.
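To illustrate the difference (a minimal sketch; it only constructs the parameter objects, so no database round trip is involved): when no size is given, ADO.NET infers the type and size from the value itself.

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

class SizeInferenceDemo
{
    static void Main()
    {
        // Explicit size: matches the column definition in SQL Server.
        var explicitParam = new SqlParameter("@name", SqlDbType.NVarChar, 4000)
        {
            Value = "hello"
        };

        // No size given: the provider infers SqlDbType.NVarChar from the
        // string value, and the size from the value's actual length.
        var inferredParam = new SqlParameter("@name", "hello");

        Console.WriteLine(explicitParam.Size);       // 4000 (as declared)
        Console.WriteLine(inferredParam.SqlDbType);  // NVarChar (inferred)
    }
}
```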

Andrew Hare
What about varchar parameters? Doesn't inferred size just apply to int and other fixed-length types?
Tahbaza
No, size is inferred from the _data_ not the field in the database. All data has a size.
Andrew Hare
A: 

If you specify a size that matches the width of the SQL column, then presumably you have another layer that detects and/or prevents data loss. (What happens when a user enters or an application generates more characters than can be stored in the database?)

Perhaps the problem is related to all those Microsoft Buffer Overflows?

Philip Kelley
Would you care to name some of 'all those buffer overflows' ?
Remus Rusanu
Several years ago, there was a lengthy wave of (very) publicly disclosed security flaws in Microsoft Software, many--if not most of which--were said to be based on buffer overflows. (You couldn't go a month without hearing another one had been discovered.) Eventually, Bill Gates announced that the next several months of development work at Microsoft would be focused entirely on clearing up these and similar problems.
Philip Kelley
I have never worked for Microsoft, do not have specific examples of the problems they were dealing with, and they probably did not result from the exact situation described here. I was merely using it as a reference of the kinds of problems that can happen if the *definitions* of values (strings) passed between code modules don't match.
Philip Kelley
+2  A: 

It is actually quite important. Identical requests issued with different parameter lengths end up as different queries in the procedure cache. Over time, this leads to cache pollution and over-active compilation events. This issue is one of the major design flaws in how both Linq2SQL and the EF providers were implemented; e.g., see How Data Access Code Affects Database Performance. Also see Query performance and plan cache issues when parameter length not specified correctly for a related problem.
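A sketch of the fix this answer implies (the helper name is mine, not from any library): declare one fixed size per parameter regardless of the current value's length, so identical statements map to a single plan-cache entry, and use -1 for the MAX types.

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

static class ParamHelper
{
    // Hypothetical helper: always declare the same explicit size for a given
    // parameter, regardless of the current value's length, so identical
    // statements hash to a single entry in the procedure cache.
    public static SqlParameter NVarChar(string name, string value, int size)
    {
        var p = new SqlParameter(name, SqlDbType.NVarChar, size);
        p.Value = (object)value ?? DBNull.Value;
        return p;
    }
}

class Demo
{
    static void Main()
    {
        // Same declared size for "Al" and "Alexandra" -> one cached plan.
        var p1 = ParamHelper.NVarChar("@name", "Al", 100);
        var p2 = ParamHelper.NVarChar("@name", "Alexandra", 100);

        // For nvarchar(max) / varchar(max) / varbinary(max), declare Size = -1.
        var doc = ParamHelper.NVarChar("@doc", "very long text", -1);

        Console.WriteLine(p1.Size == p2.Size); // True
    }
}
```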

Remus Rusanu
Interesting. I'm seeing two situations here - specify the length of the string param to match the column definition if using linked servers (I'm not), otherwise specify the maximum size of the particular type - 4000 for nvarchar or 8000 for varchar. The execution plan caching makes sense. So, that begs the question of what to supply for varchar(max)? 2147483647?
ScottE
For all MAX types the parameter length should be `-1`.
Remus Rusanu