In the final stage of development I started reviewing the code to look for bad practices. I discovered that while rendering a page, I was querying the DB n times (where n is the number of rows of an HTML table) just to get the translation (for different languages) of each record. I immediately thought that was bad and tried a small optimization.

Running SQL Profiler shows that those queries took 0 ms.

Since the tables I'm querying are small (20-100 records), I thought I could fetch all the data once, cache it in the web server's RAM, and retrieve it later using LINQ to Objects. Execution time this way is also 0 ms.
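The caching idea above can be sketched roughly as follows. This is a minimal illustration, not the poster's actual code: `Translation` and `TranslationCache` are hypothetical names, and in a real ASP.NET application the load step would be backed by something like `MemoryCache` with an expiry policy.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical shape of one translated record.
public record Translation(int RecordId, string Language, string Text);

public static class TranslationCache
{
    // All rows of the small (20-100 record) table, loaded once from the DB.
    private static List<Translation> _all = new();

    // Called once at startup (or on cache expiry) with the result of a
    // single "SELECT * FROM Translations" query.
    public static void Load(IEnumerable<Translation> fromDb) =>
        _all = fromDb.ToList();

    // LINQ to Objects lookup: no round-trip to the database per row.
    public static string? Get(int recordId, string language) =>
        _all.FirstOrDefault(t => t.RecordId == recordId
                              && t.Language == language)?.Text;
}
```

For tables this small a linear scan per lookup is fine; if the tables grew, a `Dictionary` keyed by `(RecordId, Language)` would make each lookup O(1).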

The environment where I'm running these tests is a DB and web server, both with 0% load, on the same machine. I'm the only one using the application.

The question starts here: since I see no performance difference at all, should I avoid this optimization? Or should I keep it anyway, to balance the load between the DB and the web server (in production the two servers will be on different machines)?

In my opinion this optimization can't hurt performance; it can only improve things when the DB is under heavy load. But something in the back of my mind says it's wrong to optimize when there is no need...

Thanks for your opinion.

+9  A: 

I don't think you've actually shown that there's no performance difference at all.

Try running the query a million times each way, and time how long it takes in total... I think you'll see a big difference.
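Jon's suggestion can be sketched with a simple `Stopwatch` micro-benchmark. This is an illustrative harness, not code from the thread; `QueryDatabase` and `QueryCache` in the usage comment are hypothetical stand-ins for the two code paths being compared.

```csharp
using System;
using System.Diagnostics;

public static class Benchmark
{
    // Runs the given action many times and returns the total elapsed time.
    // A single run rounds to 0 ms; a million runs makes the difference visible.
    public static TimeSpan Time(Action variant, int iterations)
    {
        variant(); // warm-up: JIT compilation, connection pool, cache fill

        var sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
            variant();
        sw.Stop();
        return sw.Elapsed;
    }
}

// Usage (QueryDatabase/QueryCache are the two paths under test):
//   var dbTotal    = Benchmark.Time(() => QueryDatabase(id), 1_000_000);
//   var cacheTotal = Benchmark.Time(() => QueryCache(id),    1_000_000);
//   Console.WriteLine($"DB: {dbTotal}, cache: {cacheTotal}");
```

Timing the total over many iterations also averages out the coarse granularity of the system clock, which is exactly the problem with trusting a single 0 ms reading.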

The SQL profiler only shows you (as far as I'm aware) the time taken on the database to perform the query. It doesn't take into account:

  • The time taken to set up the connection or prepare the query
  • The time taken in network latency issuing the query
  • The time taken in network latency returning the results
  • The time taken converting the results into useful objects.

Of course, premature optimisation is a bad thing in general, but this does sound like a reasonable change to make - if you know that the contents of the tables won't change.

Jon Skeet
+1  A: 

SQL Server is a bit peculiar in that way: all query execution times between 0 and 15 ms are rounded down to 0 ms. So just by looking at that number you don't actually know whether your query takes 0 ms or 15 ms. There's a big difference between doing 1000 × 1 ms queries and 1000 × 15 ms queries.

Regarding translations, I've found that the best approach is to use Resources backed by a SQL database, and to cache the translations in the web application for a reasonable amount of time. That is pretty efficient.

Other than that, what Jon Skeet says... :)

Jonas Lincoln
A: 

Both answers are true.

I actually measured 1 million iterations as Jon suggested, and in fact... there is a huge difference! Thanks, Jon.

What Jonas says is also true: the query actually takes around 15 ms (measured by the program) even though SQL Profiler reports 0.

Thanks guys!

Davide Vosti