Here is the SQL:

SELECT tal.TrustAccountValue
FROM TrustAccountLog AS tal
INNER JOIN TrustAccount ta ON ta.TrustAccountID = tal.TrustAccountID
INNER JOIN Users usr ON usr.UserID = ta.UserID
WHERE usr.UserID = 70402 AND
ta.TrustAccountID = 117249 AND
tal.trustaccountlogid =  
(
 SELECT MAX (tal.trustaccountlogid)
 FROM  TrustAccountLog AS tal
 INNER JOIN TrustAccount ta ON ta.TrustAccountID = tal.TrustAccountID
 INNER JOIN Users usr ON usr.UserID = ta.UserID
 WHERE usr.UserID = 70402 AND
 ta.TrustAccountID = 117249 AND
 tal.TrustAccountLogDate < '3/1/2010 12:00:00 AM'
)

Basically there is a Users table, a TrustAccount table and a TrustAccountLog table.
Users: Contains users and their details.
TrustAccount: A User can have multiple TrustAccounts.
TrustAccountLog: Contains an audit of all TrustAccount "movements". A TrustAccount is associated with multiple TrustAccountLog entries.

Now this query executes in milliseconds inside SQL Server Management Studio, but for some strange reason it takes forever in my C# app and sometimes even times out (120s).

Here is the code in a nutshell. It gets called multiple times in a loop and the statement gets prepared.

cmd.CommandTimeout = Configuration.DBTimeout;
cmd.CommandText = "SELECT tal.TrustAccountValue FROM TrustAccountLog AS tal INNER JOIN TrustAccount ta ON ta.TrustAccountID = tal.TrustAccountID INNER JOIN Users usr ON usr.UserID = ta.UserID WHERE usr.UserID = @UserID1 AND ta.TrustAccountID = @TrustAccountID1 AND tal.trustaccountlogid =  (SELECT MAX (tal.trustaccountlogid) FROM  TrustAccountLog AS tal INNER JOIN TrustAccount ta ON ta.TrustAccountID = tal.TrustAccountID INNER JOIN Users usr ON usr.UserID = ta.UserID WHERE usr.UserID = @UserID2 AND ta.TrustAccountID = @TrustAccountID2 AND tal.TrustAccountLogDate < @TrustAccountLogDate2 ))";
cmd.Parameters.Add("@TrustAccountID1", SqlDbType.Int).Value = trustAccountId;
cmd.Parameters.Add("@UserID1", SqlDbType.Int).Value = userId;
cmd.Parameters.Add("@TrustAccountID2", SqlDbType.Int).Value = trustAccountId;
cmd.Parameters.Add("@UserID2", SqlDbType.Int).Value = userId;
cmd.Parameters.Add("@TrustAccountLogDate2", SqlDbType.DateTime).Value =TrustAccountLogDate;

// And then...

reader = cmd.ExecuteReader();
if (reader.Read())
{
   double value = (double)reader.GetValue(0);
   if (System.Double.IsNaN(value))
      return 0;
   else
      return value;
}
else
   return 0;
A: 

I suggest you try creating a stored procedure - which can be compiled and cached by SQL Server and thus improve performance.

J Angwenyi
If the statement is prepared up front in the .NET code, the query plan should be cached anyway. Stored procs should be used sparingly, IMHO.
Paolo
The belief that stored procedures are cached and thus always more performant than ad-hoc queries is widespread, but it is nonetheless a fallacy. "There is no precompilation of stored procedure code in SqlServer. It caches execution plans for each query, also ad-hoc queries. Even better: it will parametrize queries which don't even have parameters to keep the execution plan in the cache!" http://weblogs.asp.net/fbouma/archive/2003/11/18/38178.aspx
Michael Shimmins
I agree Paolo. We have many sprocs in our environment, but I try to keep my code sproc-independent when it makes sense to do so. Besides, this query was in a sproc at first and I had exactly the same issues. In fact, now at least some of the queries go through; before, when it was a sproc, none of them would.
n4rzul
Sprocs are good, but probably won't help with this issue. Even if plans were cached for sprocs and not for other queries, compiling a query still doesn't take any time you'd notice, at least not without rigorous testing.
erikkallen
A: 

You don't seem to be closing your data reader - this might start to add up over a number of iterations...

Paddy
No, sorry, I just didn't include that code. The statement and reader execute in a proper try/catch/finally block.
n4rzul
Are you opening and closing a connection for each hit?
Paddy
And have you tried running SQL profiler, to see what is actually happening?
Paddy
+2  A: 

In my experience the usual reason why a query runs fast in SSMS but slow from .NET is due to differences in the connection's SET-tings. When a connection is opened by either SSMS or SqlConnection, a bunch of SET commands are automatically issued to set up the execution environment. Unfortunately SSMS and SqlConnection have different SET defaults.

One common difference is SET ARITHABORT. Try issuing SET ARITHABORT ON as the first command from your .NET code.
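
A minimal sketch of doing that from the application, assuming conn is the same open SqlConnection the query command uses (the variable name is illustrative, not from the original code):

    // ARITHABORT is a per-session setting, so it must be issued on the same
    // connection the query will run on (requires using System.Data.SqlClient;)
    using (var setCmd = new SqlCommand("SET ARITHABORT ON;", conn))
    {
        setCmd.ExecuteNonQuery();
    }

With the setting matched, the app and SSMS will share the same plan-cache entry, which is usually where the timing difference comes from.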

SQL Profiler can be used to monitor which SET commands are issued by both SSMS and .NET so you can find other differences.

Daniel Renshaw
you could also run this from your application: `SELECT SESSIONPROPERTY ('ARITHABORT')`, 1=On and 0=Off, or include it in your original query: `SELECT tal.TrustAccountValue, (SELECT SESSIONPROPERTY ('ARITHABORT')) AS ARITHABORT FROM TrustAccountLog AS tal ...`
KM
+1 for this - a difference in SET-tings is almost certainly the culprit.
Rob
A: 

It is simple: there is communication delay between SQL Server and the application on every call - the query is sent over the wire, parsed and executed by SQL Server, and the value is sent back.

Writing queries inside applications is not a good solution either; it doesn't give you homogeneity. If other applications want to do the same thing, every app must define, "super equally", all the code that really belongs on the server.

ROCK HEART
That is good advice in general, but certainly not the cause of this particular problem.
Meff
what? super equally??
n4rzul
@Meff : good advice, but not quite English. Summary: Use stored procedures.
Hogan
+2  A: 

Since you only ever return a single value from one row and one column, you can use ExecuteScalar() on the command object instead, which should be more efficient:

    object value = cmd.ExecuteScalar();

    // ExecuteScalar returns null when no row matches (and DBNull for a NULL value)
    if (value == null || value == DBNull.Value)
        return 0;
    else
        return (double)value;
Dan Diplo
+1  A: 

It might be a type conversion issue. Are all the IDs really SqlDbType.Int on the data tier?

Also, why have 4 parameters where 2 will do?

cmd.Parameters.Add("@TrustAccountID1", SqlDbType.Int).Value = trustAccountId;
cmd.Parameters.Add("@UserID1", SqlDbType.Int).Value = userId;
cmd.Parameters.Add("@TrustAccountID2", SqlDbType.Int).Value = trustAccountId;
cmd.Parameters.Add("@UserID2", SqlDbType.Int).Value = userId;

Could be

cmd.Parameters.Add("@TrustAccountID", SqlDbType.Int).Value = trustAccountId;
cmd.Parameters.Add("@UserID", SqlDbType.Int).Value = userId;

Since each pair is assigned the same variable.

(This might be causing the server to build a different plan, since it expects four different variables as opposed to four constants - reducing it to two variables could make a difference for the server's optimization.)

Hogan
And is the date column really a date? (although I guess he'd notice the funky results if he got this wrong).
erikkallen
Yeah, the date column really is a date. Fixed the four-vs-two parameters, but it makes no difference. I added OPTION (RECOMPILE) to the end of my SQL and that seems to do the trick.
n4rzul
@user203882, oh you had a parameter sniffing problem.
Hogan
+1  A: 

Sounds possibly related to parameter sniffing? Have you tried capturing exactly what the client code sends to SQL Server (use Profiler to catch the exact statement) and then running that in Management Studio?

Parameter sniffing: http://stackoverflow.com/questions/1007397/sql-poor-stored-procedure-execution-plan-performance-parameter-sniffing

I haven't seen this in code before, only in procedures, but it's worth a look.

Meff
Yep, it was. Added OPTION (RECOMPILE) to the end of my SQL and it's running smoothly now in the app.
n4rzul
+2  A: 

If this is parameter sniffing, try adding OPTION (RECOMPILE) to the end of your query. I would also recommend creating a stored procedure to encapsulate the logic in a more manageable way. And agreed - why do you pass five parameters if, judging by the example, you need only three? Can you use this query instead?

select TrustAccountValue from
(
 SELECT MAX(tal.trustaccountlogid) AS MaxLogId, tal.TrustAccountValue
 FROM  TrustAccountLog AS tal
 INNER JOIN TrustAccount ta ON ta.TrustAccountID = tal.TrustAccountID
 INNER JOIN Users usr ON usr.UserID = ta.UserID
 WHERE usr.UserID = 70402 AND
 ta.TrustAccountID = 117249 AND
 tal.TrustAccountLogDate < '3/1/2010 12:00:00 AM'
 group by tal.TrustAccountValue
) q

And, for what it's worth, you are using an ambiguous date format that depends on the language settings of the user executing the query. For me, for example, this is the 3rd of January, not the 1st of March. Check this out:

set language us_english
go
select @@language --us_english
select convert(datetime, '3/1/2010 12:00:00 AM')
go
set language british
go
select @@language --british
select convert(datetime, '3/1/2010 12:00:00 AM')

The recommended approach is to use 'ISO' format yyyymmdd hh:mm:ss

select convert(datetime, '20100301 00:00:00') --midnight 00, noon 12
Piotr Rodak
+1  A: 

Most likely the problem lies in the criterion

tal.TrustAccountLogDate < @TrustAccountLogDate2

The optimal execution plan will be highly dependent on the value of the parameter, passing 1910-01-01 (which returns no rows) will most certainly cause a different plan than 2100-12-31 (which returns all rows).

When the value is specified as a literal in the query, SQL server knows which value to use during plan generation. When a parameter is used, SQL server will generate the plan only once and then reuse it, and if the value in a subsequent execution differs too much from the original one, the plan will not be optimal.

To remedy the situation, you can specify OPTION(RECOMPILE) in the query. Adding the query to a stored procedure won't help you with this particular issue, unless you create the procedure WITH RECOMPILE.
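
As a rough sketch, the only change needed in the C# from the question is the hint appended to the end of the statement (parameter names kept exactly as posted):

    // Same parameterized statement as in the question, with OPTION (RECOMPILE)
    // appended so a fresh plan is built for each execution's parameter values.
    cmd.CommandText = "SELECT tal.TrustAccountValue FROM TrustAccountLog AS tal "
        + "INNER JOIN TrustAccount ta ON ta.TrustAccountID = tal.TrustAccountID "
        + "INNER JOIN Users usr ON usr.UserID = ta.UserID "
        + "WHERE usr.UserID = @UserID1 AND ta.TrustAccountID = @TrustAccountID1 "
        + "AND tal.trustaccountlogid = "
        + "(SELECT MAX(tal.trustaccountlogid) FROM TrustAccountLog AS tal "
        + "INNER JOIN TrustAccount ta ON ta.TrustAccountID = tal.TrustAccountID "
        + "INNER JOIN Users usr ON usr.UserID = ta.UserID "
        + "WHERE usr.UserID = @UserID2 AND ta.TrustAccountID = @TrustAccountID2 "
        + "AND tal.TrustAccountLogDate < @TrustAccountLogDate2) "
        + "OPTION (RECOMPILE)";

The trade-off is a compilation on every execution, which is usually negligible next to the cost of reusing a plan built for an unrepresentative date value.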

Others have already mentioned this ("parameter sniffing"), but I thought a simple explanation of the concept won't hurt.

erikkallen
Thanks. This seemed to do the trick.
n4rzul