I have an ADO.NET/T-SQL performance question. We have two options in our application:

1) One big database call with multiple result sets, then in code step through each result set and populate my objects. This results in one round trip to the database.

2) Multiple small database calls.

There is much more code reuse with Option 2, which is an advantage of that option. But I would like to get some input on what the performance cost is. Are two small round trips twice as slow as one big round trip to the database, or is it just a small, say 10%, performance loss? We are using C# 3.5 and SQL Server 2008 with stored procedures and ADO.NET.
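To make the two options concrete, here is a minimal C# sketch of Option 1, using `SqlDataReader.NextResult()` to step through multiple result sets from a single call. The proc name `dbo.GetScreenData`, the `Customer`/`Order` objects, and the column ordinals are all hypothetical stand-ins, not anything from the question:

```csharp
using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;

// Option 1: one stored procedure that returns multiple result sets.
// "GetScreenData", Customer, and Order are hypothetical stand-ins.
var customers = new List<Customer>();
var orders = new List<Order>();

using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("dbo.GetScreenData", conn))
{
    cmd.CommandType = CommandType.StoredProcedure;
    conn.Open();

    using (SqlDataReader reader = cmd.ExecuteReader())
    {
        // First result set.
        while (reader.Read())
            customers.Add(new Customer { Id = reader.GetInt32(0), Name = reader.GetString(1) });

        // NextResult() advances to the second result set
        // without another round trip to the server.
        reader.NextResult();
        while (reader.Read())
            orders.Add(new Order { Id = reader.GetInt32(0), CustomerId = reader.GetInt32(1) });
    }
}
```

Option 2 would instead be two single-result procs, each with its own command and `ExecuteReader` call: one extra round trip, but each populate method stands alone and can be reused independently.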

A: 

If you're that concerned with performance, try a test of both and see which performs better.
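One way to run that test is to time both paths over many iterations, since a single call proves little. This sketch assumes two hypothetical helper methods, `LoadWithOneCall()` and `LoadWithTwoCalls()`, wrapping the respective data-access code:

```csharp
using System;
using System.Diagnostics;

const int iterations = 500;

// Warm up the connection pool and plan cache so the
// first call doesn't skew either measurement.
LoadWithOneCall();
LoadWithTwoCalls();

var sw = Stopwatch.StartNew();
for (int i = 0; i < iterations; i++)
    LoadWithOneCall();
sw.Stop();
Console.WriteLine("One call:  {0} ms", sw.ElapsedMilliseconds);

sw = Stopwatch.StartNew();
for (int i = 0; i < iterations; i++)
    LoadWithTwoCalls();
sw.Stop();
Console.WriteLine("Two calls: {0} ms", sw.ElapsedMilliseconds);
```

Run it against a server at realistic network distance; on a local instance the round-trip cost you are trying to measure all but disappears.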

Personally, I prefer the second method. It makes life easier for the developers, makes code more re-usable, and modularizes things so changes down the road are easier.

Justin Niessner
A: 

Personally, I would go with one larger round trip.

This will definitely be influenced by the exact reusability of the calling code, and how it might be refactored.

But as mentioned, this will depend on your exact situation, where the trade-off between maintainability and performance is a factor.

astander
A: 

I personally like option two for the reason you stated: code reuse.

But consider this: for small requests, the network latency might cost more than the work the request actually does. You have to find the right balance.

Bryan Denny
A: 

Don't optimize for performance until a need arises to do so. This means that you should analyze your anticipated use patterns and determine what the typical frequency of use for this process will be, and what user interface latency will result from the present design. If the user receives feedback from the app in less than a few (2-3) seconds, and this process does not place an inordinate load on server capacity, then don't worry about it. If, on the other hand, the user is waiting an unacceptable amount of time for a response (subjective but definitely measurable), or if the server is being overloaded, then it's time to begin optimization. And then, which optimization techniques will make the most sense, or be the most cost-effective, depends on what your analysis of the issue tells you.

So, in the meantime, focus on maintainability. In your case, that means code reuse.

Charles Bretana
A: 

As the ADO.NET developer, your job is to make the code as correct, clear, and maintainable as possible. This means that you must separate your concerns.

It's the job of the SQL Server connection technology to make it fast.

If you implement a correct, clear, maintainable application that solves the business problems, and it turns out that database access is the major bottleneck preventing the system from operating within acceptable limits, then, and only then, should you start pursuing ways to fix the problem. This may or may not include consolidating database queries.

Jeffrey L Whitledge
+3  A: 

I would think it in part would depend on when you need the data. For instance, if you return ten datasets in one large process, and the user sees all ten on the screen at once, then go for it. But if you return ten datasets and the user may only click through the pages to see three of them, then sending the others was a waste of server and network resources. If you return ten datasets but the user really needs to see sets seven and eight only after making changes to sets five and six, then the user would see the wrong info if you returned them too soon.

If you use a separate stored proc for each data set, called from one master stored proc, there is no reason at all why you can't reuse the code elsewhere, so code reuse is not really an issue in my mind.

HLGEM
+1  A: 

It sounds a wee bit obvious, but only send what you need in one call.

For example, we have a "getStuff" stored proc for presentation. The "updateStuff" proc calls the "getStuff" proc, and the client wrapper method for "updateStuff" expects type "Thing". So: one round trip.
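A sketch of that wrapper pattern, assuming (beyond the names mentioned) a hypothetical `Thing` type, parameters, and column layout: the update proc ends by selecting the updated row, so the client gets the fresh state back in the same round trip as the update.

```csharp
using System.Data;
using System.Data.SqlClient;

// Assumes dbo.updateStuff ends with something like EXEC dbo.getStuff @id,
// so the updated row comes back in the same round trip as the update.
public Thing UpdateStuff(int id, string name)
{
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand("dbo.updateStuff", conn))
    {
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.Parameters.AddWithValue("@id", id);
        cmd.Parameters.AddWithValue("@name", name);
        conn.Open();

        using (SqlDataReader reader = cmd.ExecuteReader())
        {
            reader.Read();
            // Materialize the returned row; no second call needed.
            return new Thing { Id = reader.GetInt32(0), Name = reader.GetString(1) };
        }
    }
}
```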

Chatty servers are one thing you can prevent up front with minimal effort. Then you can tune the DB or client code as needed... but it's hard to factor out the round trips later, no matter how fast your code runs. In the extreme, what if your web server is in a different country to your DB server...?

Edit: it's interesting to note the SQL guys (HLGEM, astander, me) saying "one trip" and the client guys saying "multiple, code reuse"...

gbn