views: 767
answers: 13

+3  A: 

There are better ways to code it, but I doubt it's the cause of your timeouts, especially if it's only a SELECT. You should be able to determine that by looking at your query traces, though. Recoding this would be optimization by guessing, and an unlikely guess at that.

Let's start with a query plan for the query that is actually timing out. Do you know for sure which query it is?

le dorfier
+5  A: 

If you have a good index on FieldW, using that IN is perfectly right.

I have just tested this, and SQL Server 2000 does a Clustered Index Scan when using the IN.

tekBlues
Then that wouldn't necessarily be a good thing. It should be doing lookups rather than a scan, suggesting that using IN isn't "perfectly right". But the size of the table, its cardinality, and other factors are also important.
le dorfier
@tekBlues: could you please see if it does a Hash Match over the Constant Scan? Just add OPTION (HASH JOIN) to the end of the query and see the plan
Quassnoi
@Quassnoi, I added the HASH JOIN and the execution plan doesn't change.
tekBlues
@le dorfier, could you explain a little? I'm very interested in this issue...
tekBlues
Ideally you would like your query to search the index specifically for each value, rather than reading through the entire index from beginning to end (a "scan"). If there is just one key, or the keys are an ordered set, it's more likely to do that. But if the keys aren't in a table (ordered, or easily orderable), the query optimizer may do something suboptimal. Also, it's clear that you've got those values in the (single possible) clustered index, which may or may not be true for the OP, and may or may not even be significant.
le dorfier
@tekBlues: does the Clustered Table Scan have an arrow pointing to the box with a join / semijoin method used: Nested Loops, Merge or Hash?
Quassnoi
There's no likelihood that @tekBlues' table is the same size, schema, cardinality, indexing, etc. The query plans are likely entirely different.
le dorfier
@le dorfier: that's why I asked to force the join method
Quassnoi
Then you're solving @tekBlues' problem, not the OP's problem, which may be entirely different. Doesn't it seem to you that this question has gone off the rails?
le dorfier
@le dorfier: what I want to see is whether SQL Server 2000 is capable of building a hash table over a set of constants or not. It's crucial for understanding the problem, since checking 73 values in a loop is far less efficient than probing a hash table.
Quassnoi
@le dorfier: the OP asked to improve this query, and I have a strong feeling it can be improved. You're right, there may (or may not) be other issues with his system, but the query can be improved too.
Quassnoi
The actual problem is that the application is timing out, possibly on this query, maybe from slowness, maybe from locking. That's a long way from the ability to build a hash table. Wouldn't you at least like to see a query plan first? The query is only worth improving once we know it's a problem.
le dorfier
@Quassnoi and all you guys: I've posted the Estimated Execution Plan for the query in the main post.
Victor Rodrigues
+1  A: 

Typically the IN clause is harmful to performance, but what is "bad" depends on the application, data, database size, etc. You need to test your own app to see what is best.

Bryan Migliorisi
Hi Bryan, what do you mean by "harmful to performance"? The scenario is that I want to filter certain values for a field. What's the best way to do it? IMHO it is using the IN clause.
tekBlues
+2  A: 

You can try creating a temporary table, inserting your values into it, and using the table in the IN predicate instead.
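
A minimal sketch of that approach (the temp table name #Ids is an invented example; the column and values follow the question):

CREATE TABLE #Ids (FieldW INT PRIMARY KEY)

INSERT INTO #Ids (FieldW) VALUES (108)
INSERT INTO #Ids (FieldW) VALUES (109)
-- ... one INSERT per remaining value ...
INSERT INTO #Ids (FieldW) VALUES (891)

SELECT FieldX, FieldY
FROM A
WHERE FieldW IN (SELECT FieldW FROM #Ids)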

AFAIK, SQL Server 2000 cannot build a hash table over a set of constants, which deprives the optimizer of the possibility of using a HASH SEMI JOIN.

This will help only if you don't have an index on FieldW (which you should have).

You can also try to include your FieldX and FieldY columns in the index:

CREATE INDEX ix_a_wxy ON a (FieldW, FieldX, FieldY)

so that the query can be served by the index alone.

SQL Server 2000 lacks the INCLUDE option for CREATE INDEX, and the wider key may degrade DML performance a little, but it will improve the query performance.

Update:

From your execution plan I see that you need a composite index on (SettingsID, SectionID).
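
For example (the table name here is a placeholder for the table shown in your plan):

CREATE INDEX ix_settings_section ON YourTable (SettingsID, SectionID)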

SQL Server 2000 indeed can build a hash table out of a constant list (and does so), but a Hash Semi Join will most probably be less efficient than Nested Loops for your query.

And just a side note: if you need to know the count of rows satisfying the WHERE condition, don't use COUNT(column), use COUNT(*) instead.

A COUNT(column) does not count the rows for which the column value is NULL.

This means that, first, you can get results you didn't expect, and, second, the optimizer will need to do an extra Key Lookup / Bookmark Lookup if your column is not covered by the index that serves the WHERE condition.

Since ThreadId seems to be a CLUSTERED PRIMARY KEY, it's all right for this very query, but try to avoid it in general.
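
To illustrate, using the question's column names (and assuming FieldX is nullable; if it cannot be NULL, the two counts agree):

SELECT COUNT(*) FROM A WHERE FieldW = 108      -- counts every matching row
SELECT COUNT(FieldX) FROM A WHERE FieldW = 108 -- skips rows where FieldX IS NULL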

Quassnoi
I would love to see someone test this assertion, i.e. compare the performance of IN vs. creating a temporary table and joining...
tekBlues
@tekBlues: don't have a 2000 handy, sorry. 2005 builds a hash table over the IN clause values all right, using a CONSTANT SCAN method. Could you please build the execution plan for your query and post it here?
Quassnoi
@tekBlues: oh, sorry, didn't notice you're not the OP. Never mind :)
Quassnoi
Heck of a lot of work just to identify which queries are timing out. Why not just look at a trace?
le dorfier
@le dorfier: I'd like to, but I don't have a 2000 installed.
Quassnoi
@Q, it's a suggestion to the OP, since that's the problem he's trying to solve.
le dorfier
Yes, I just tested it, and it uses a Clustered Index Scan no matter the number of values in the IN, on SQL 2000.
tekBlues
@tekBlues: a Clustered Index Scan is a full table scan. Could you please see what join method it uses?
Quassnoi
@tekBlues - we have used the approach of creating a temporary table for this exact purpose. In order to be performant, you must insert rows into the temp table and then update statistics on it - otherwise the execution plan optimizer thinks the temp table is empty and will perform a full scan. This approach can indeed be faster than a query using IN or OR clauses, because it can avoid the steps of parsing the query and building an execution plan.
LBushkin
Creating this temp table, inserting N rows into it, updating its statistics, and then joining the temp table in my query - could all this be faster than the IN clause?
Victor Rodrigues
@Victor: Could you please post the current execution plan for your query?
Quassnoi
@Quassnoi: I've posted it
Victor Rodrigues
@Victor: thanks. I've updated my answer, see the update.
Quassnoi
A: 

Basically what that where clause does is "FieldW = 108 OR FieldW = 109 OR FieldW = 113...". Sometimes you can get better performance by doing multiple selects, and combining them with union. For example:

SELECT FieldX, FieldY FROM A WHERE FieldW = 108
UNION ALL
SELECT FieldX, FieldY FROM A WHERE FieldW = 109

But of course that is impractical when you're comparing to so many values.

Another option might be to insert those values into a temporary table and then join the A table to that temp table.
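
A minimal sketch of the join variant (reusing the hypothetical #Ids temp table from another answer, already filled with the values):

SELECT a.FieldX, a.FieldY
FROM A a
JOIN #Ids i ON i.FieldW = a.FieldW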

Tommi
I would be wary of using UNION statements, especially in this circumstance. Effectively, a UNION performs the equivalent of a SELECT DISTINCT on the final result set: it takes the results of two like recordsets, combines them, and then performs a SELECT DISTINCT to eliminate any duplicate rows. In other words, you'd be running an exponential number of SELECTs under the covers.
Adam McKee
UNION ALL isn't DISTINCT.
Tommi
+1  A: 

The size of your table will determine the speed when using this statement. If it's not a very large table... this statement isn't affecting your performance.

Eric
A: 

Performance can only be judged in the context of what you are trying to do. In this case you are requesting the retrieval of around 70 rows (assuming they are unique values), so you can expect something like 70 times the duration of retrieving a single value. It might be less due to caching, of course.

However, the query optimiser may need or choose to perform a full table scan in order to retrieve the values, in which case performance will be little different from retrieving a single value via the same access plan.

David Aldridge
A: 

If you can use something other than IN, do it. (I was using IN in some cases where it was not really the right way; I could easily replace it with EXISTS, and it was quicker.)
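
A minimal sketch of the EXISTS variant (it presumes the values live in a table, e.g. the hypothetical #Ids temp table suggested in another answer):

SELECT a.FieldX, a.FieldY
FROM A a
WHERE EXISTS (SELECT * FROM #Ids i WHERE i.FieldW = a.FieldW)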

In your case, it seems not so bad.

Hugues Van Landeghem
+1  A: 

IN is exactly the same thing as writing a big list of ORs. And OR often makes queries unSARGable, so your indexes may be ignored and the plan goes for a full scan.

Remus Rusanu
+3  A: 

Depending on your data distribution, additional predicates in your WHERE clause may improve performance. For example, if the set of ids is small relative to the total number in the table, and you know that the ids are relatively close together (perhaps they will usually be recent additions, and therefore clustered at the high end of the range), you could try including the predicate "AND FieldW BETWEEN 109 AND 891" (after determining the min & max id in your set in the C# code). It may be that doing a range scan on those columns (if indexed) works faster than what is currently being used.
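
A sketch of the combined predicate (the bounds would come from your C# code; the value list is abbreviated here):

SELECT FieldX, FieldY
FROM A
WHERE FieldW BETWEEN 109 AND 891
  AND FieldW IN (109, 113, /* ... */ 891)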

Steve Broberg
+9  A: 

There are several considerations when writing a query using the IN operator that can have an effect on performance.

First, IN clauses are generally rewritten internally by most databases to use the OR logical connective. So col IN ('a','b','c') is rewritten to: (COL = 'a') OR (COL = 'b') OR (COL = 'c'). The execution plans for both queries will likely be equivalent, assuming that you have an index on col.

Second, when using either IN or OR with a variable number of arguments, you force the database to re-parse the query and rebuild an execution plan each time the arguments change. Building the execution plan for a query can be an expensive step. Most databases cache execution plans for the queries they run, using the EXACT query text as a key; if you execute a similar query but with different argument values in the predicate, you will most likely cause the database to spend a significant amount of time parsing and building execution plans. This is why bind variables are strongly recommended as a way to ensure optimal query performance.
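
One way to use bind variables in T-SQL is sp_executesql; a minimal sketch with a single parameter, using the question's query shape:

EXEC sp_executesql
    N'SELECT FieldX, FieldY FROM A WHERE FieldW = @w',
    N'@w INT',
    @w = 108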

Third, many databases have a limit on the complexity of queries they can execute - one of those limits is the number of logical connectives that can be included in the predicate. In your case, a few dozen values are unlikely to reach the built-in limit of the database, but if you expect to pass hundreds or thousands of values to an IN clause, it can definitely happen - in which case the database will simply cancel the query request.

Fourth, queries that include IN and OR in the predicate cannot always be optimally rewritten in a parallel environment. There are various cases where parallel server optimizations do not get applied - MSDN has a decent introduction to optimizing queries for parallelism. Generally though, queries that use the UNION ALL operator are trivially parallelizable in most databases - and are preferred to logical connectives (like OR and IN) when possible.

LBushkin
+1  A: 

Here is your answer...

http://www.4guysfromrolla.com/webtech/031004-1.shtml

Basically, you want to create a function that will split a string and populate a temp table with the split contents. Then you can join to that temp table and manipulate your data. The above explains things pretty well. I use this technique a lot.

In your specific case, use a join to the temp table instead of an IN clause; it's much faster.
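
A rough sketch of the idea (the function name and details here are invented; the linked article's version differs):

CREATE FUNCTION dbo.fn_Split (@list VARCHAR(8000), @delim CHAR(1))
RETURNS @items TABLE (Item INT)
AS
BEGIN
    DECLARE @pos INT
    WHILE LEN(@list) > 0
    BEGIN
        SET @pos = CHARINDEX(@delim, @list)
        IF @pos = 0
        BEGIN
            -- last (or only) item in the list
            INSERT INTO @items (Item) VALUES (CAST(@list AS INT))
            SET @list = ''
        END
        ELSE
        BEGIN
            INSERT INTO @items (Item) VALUES (CAST(LEFT(@list, @pos - 1) AS INT))
            SET @list = SUBSTRING(@list, @pos + 1, 8000)
        END
    END
    RETURN
END
GO

SELECT a.FieldX, a.FieldY
FROM A a
JOIN dbo.fn_Split('108,109,113,891', ',') s ON s.Item = a.FieldW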

infocyde
Another link: http://fluppe.wordpress.com/2005/12/27/sql-split-string-into-table/
infocyde
A: 

You might try something like:

select a.FieldX, a.FieldY
from (
    select FieldW = 108 union
    select FieldW = 109 union
    select FieldW = 113 union
    -- ... one SELECT per remaining value ...
    select FieldW = 891
) _a
join A a on a.FieldW = _a.FieldW

It may be appropriate for your situation, such as when you want to generate a single SQL statement dynamically. On my machine (SQL Server 2008 Express), testing with a small number (5) of FieldW values and a large number (100,000) of rows in A, this uses an index seek on A with a nested loops join between A and _a, which is probably what you're looking for.

Justice