1) Are SQL query execution times O(n) with respect to the number of joins if indexes are not used? If not, what kind of relationship should we expect? And can indexing improve the actual big-O time complexity, or does it only reduce the overall query time by some constant factor?

This is a slightly vague question; I'm sure it varies a lot, but I'm asking in a general sense here.

2) If you have a query like:

SELECT  T1.name, T2.date
FROM    T1, T2
WHERE   T1.id=T2.id
        AND T1.color='red'
        AND T2.type='CAR'

Am I right in assuming the DB will do single-table filtering on T1.color and T2.type first, before evaluating the multi-table conditions? In such a case, could making the query more complex make it faster, because fewer rows are subjected to the join-level tests?

+6  A: 

This depends on the query plan used.

Even without indexes, modern servers can use HASH JOIN and MERGE JOIN, which are faster than O(N * M).

More specifically, the complexity of a HASH JOIN is O(N + M), where N is the size of the hashed table and M is the size of the lookup table. Hashing and hash lookups have constant complexity.

The complexity of a MERGE JOIN is O(N*Log(N) + M*Log(M)): the sum of the times to sort both tables plus the time to scan them.

SELECT  T1.name, T2.date
FROM    T1, T2
WHERE   T1.id=T2.id
        AND T1.color='red'
        AND T2.type='CAR'

If there are no indexes defined, the engine will select either a HASH JOIN or a MERGE JOIN.

The HASH JOIN works as follows:

  1. The hashed table is chosen (usually it's the table with fewer records). Say it's t1.

  2. All records from t1 are scanned. If a record holds color='red', it goes into the hash table, with id as the key and name as the value.

  3. All records from t2 are scanned. If a record holds type='CAR', its id is looked up in the hash table, and the name values from all hash hits are returned along with the current value of date.
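
Here is how those three steps might look in Python; a minimal sketch with made-up rows (the ids, names, colors, dates and types below are all hypothetical):

    # Hash join sketch. t1 rows are (id, name, color); t2 rows are (id, date, type).
    t1 = [(1, 'Ford', 'red'), (2, 'Audi', 'blue'), (3, 'BMW', 'red')]
    t2 = [(1, '2009-01-01', 'CAR'), (3, '2009-02-01', 'CAR'), (3, '2009-03-01', 'BIKE')]

    # Steps 1 and 2: scan the smaller table, apply its single-table filter,
    # and build the hash table with id as the key and name as the value.
    hashed = {}
    for id_, name, color in t1:
        if color == 'red':
            hashed.setdefault(id_, []).append(name)

    # Step 3: scan t2, apply its filter, and probe the hash table with each id.
    result = []
    for id_, date, type_ in t2:
        if type_ == 'CAR':
            for name in hashed.get(id_, []):   # all hash hits for this id
                result.append((name, date))

    print(result)   # [('Ford', '2009-01-01'), ('BMW', '2009-02-01')]

Each table is scanned exactly once, which is where the O(N + M) figure comes from.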

The MERGE JOIN works as follows:

  1. A copy of t1 (id, name) is created, sorted on id

  2. A copy of t2 (id, date) is created, sorted on id

  3. The pointers are set to the minimal values in both tables:

    >1  2<
     2  3
     2  4
     3  5
    
  4. The pointers are compared in a loop; if they match, the records are returned, and if they don't, the pointer with the smaller value is advanced:

    >1  2<  - no match, left pointer is less. Advance left pointer
     2  3
     2  4
     3  5
    
    
     1  2<  - match, return records and advance both pointers
    >2  3
     2  4
     3  5
    
    
     1  2  - match, return records and advance both pointers
     2  3< 
     2  4
    >3  5
    
    
     1  2 - the left pointer is out of range, the query is over.
     2  3
     2  4<
     3  5
    >
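
The same pointer walk in Python; a sketch of the simplified algorithm exactly as described above (like the walkthrough, it advances both pointers on a match, so it doesn't handle runs of duplicate keys on both sides the way a production merge join must):

    # Merge join sketch over the sorted id columns from the diagram above.
    left  = [1, 2, 2, 3]   # sorted copy of t1.id
    right = [2, 3, 4, 5]   # sorted copy of t2.id

    i = j = 0
    matches = []
    while i < len(left) and j < len(right):
        if left[i] == right[j]:
            matches.append((left[i], right[j]))   # match: return records
            i += 1                                # and advance both pointers
            j += 1
        elif left[i] < right[j]:
            i += 1    # no match: advance the pointer with the smaller value
        else:
            j += 1

    print(matches)   # [(2, 2), (3, 3)]

The loop ends as soon as either pointer runs out of range, just as in the last step of the diagram.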
    

In such a case, could making the query more complex make it faster, because fewer rows are subjected to the join-level tests?

Sure.

Your query without the WHERE clause:

SELECT  T1.name, T2.date
FROM    T1, T2

is simpler, but it returns more rows and runs longer.
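
A back-of-the-envelope illustration in Python (hypothetical data, and a nested-loops model purely to count comparisons): with 1,000 rows per table and each filter keeping 10% of the rows, filtering first cuts the join-level tests by a factor of 100.

    # How many row pairs reach the join-level test?  Hypothetical tables:
    # every tenth t1 row is 'red' and every tenth t2 row is a 'CAR'.
    t1 = [(i, 'name%d' % i, 'red' if i % 10 == 0 else 'blue') for i in range(1000)]
    t2 = [(i, 'date%d' % i, 'CAR' if i % 10 == 0 else 'VAN') for i in range(1000)]

    # Without the WHERE clause, every pair is subjected to the join-level test.
    print(len(t1) * len(t2))        # 1000000

    # With single-table filtering first, only the survivors reach the join.
    red  = [r for r in t1 if r[2] == 'red']
    cars = [r for r in t2 if r[2] == 'CAR']
    print(len(red) * len(cars))     # 10000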

Quassnoi
A: 

Are SQL query execution times O(n) with respect to the number of joins if indexes are not used?

Generally they're going to be O(n^m), where n is the number of records per table involved and m is the number of tables being joined.
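
To see where the exponent comes from: an unindexed nested-loops join over m tables is just m nested loops of n iterations each. A hypothetical three-table sketch in Python:

    from itertools import product

    # Three hypothetical tables of n rows each, joined on a shared key.
    n = 50
    t1 = [(i, i % 7) for i in range(n)]   # (id, key)
    t2 = [(i, i % 7) for i in range(n)]
    t3 = [(i, i % 7) for i in range(n)]

    examined = 0
    for r1, r2, r3 in product(t1, t2, t3):   # the m nested loops
        examined += 1
        if r1[1] == r2[1] == r3[1]:          # the join conditions
            pass                             # a matching combination

    print(examined, n ** 3)   # 125000 125000 -- O(n^m) with m = 3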

And can indexing improve the actual big-O time complexity, or does it only reduce the overall query time by some constant factor?

Both. Indexes allow for direct lookup when the joins are heavily filtered (i.e. with a good WHERE clause), and they allow for faster joins when they're on the right columns.

Indexes are no help when they're not on the columns being joined or filtered by.
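
As a sketch of the "right columns" case (hypothetical data): if the join column is indexed, each row of one table can be matched by a direct lookup instead of a scan, so the join examines O(n + m) rows instead of O(n * m) pairs.

    # Modeling an index on T1.id as a dict: probing it is a direct lookup.
    t1 = [(i, 'name%d' % i) for i in range(1000)]           # (id, name)
    t2 = [(i % 1000, 'date%d' % i) for i in range(5000)]    # (t1_id, date)

    index_on_t1_id = dict(t1)                 # the "index" on the join column

    joined = [(index_on_t1_id[t1_id], date)   # one O(1) probe per t2 row
              for t1_id, date in t2
              if t1_id in index_on_t1_id]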

Welbog
A: 

Check out how clustered vs. non-clustered indexes work.

That is from a purely technical point of view. For an easier explanation, my good buddy Mladen has written a simple article on understanding indexing.

Indexes definitely help, but I do recommend the reads to understand the pros and cons.

JonH
+5  A: 

Be careful not to conflate too many different things. You have a logical cost of the query based on the number of rows to be examined, a (possibly) smaller logical cost based on the number of rows actually returned, and a separate physical cost based on the number of pages that have to be examined.

The three are related, but not strongly.

The number of rows examined is the largest of these numbers and the least easy to control: the rows have to be matched through the join algorithm. It is also the least relevant of the three costs.

The number of rows returned is more costly, because that's I/O bandwidth between the client application and the database.

The number of pages read is the most costly, because it means an even larger number of physical I/Os, and because it is load inside the database itself, with an impact on all clients.

A SQL query against one table is O(n) in the number of rows. It's also O(p) in the number of pages.

With more than one table, the number of rows examined is O(n*m*...). That's the nested-loops algorithm. Depending on the cardinality of the relationships, however, the result set may be as small as O(n), because the relationships are all 1:1. But each table must still be examined for matching rows.

A Hash Join replaces O(n*log(n)) index-plus-table reads with O(n) direct hash lookups. You still have to process O(n) rows, but you bypass some index reads.

A Merge Join replaces O(n*m) nested loops with an O((n+m)*log(n+m)) sort operation.

With indexes, the physical cost may be reduced to O(log(n)*m) if a table is merely checked for existence. If rows are required, then the index speeds up access to them, but all matching rows must still be processed: O(n*m), because that's the size of the result set, irrespective of indexes.

The number of pages examined for this work may be smaller, depending on the selectivity of the index.

The point of an index isn't so much to reduce the number of rows examined. It's to reduce the physical I/O cost of fetching those rows.
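
To make the O(log(n)) part concrete, here is a toy model (hypothetical data) of an index as a sorted list of (key, row position) pairs, probed with binary search. Finding the matching positions is cheap; fetching the rows themselves is the physical I/O cost discussed above.

    import bisect

    # An index on a key column, modeled as a sorted list of (key, position).
    index = sorted((row_id % 100, pos) for pos, row_id in enumerate(range(10000)))
    keys = [k for k, _ in index]

    def lookup(key):
        # Binary search for the run of entries with this key: O(log n).
        lo = bisect.bisect_left(keys, key)
        hi = bisect.bisect_right(keys, key)
        return [pos for _, pos in index[lo:hi]]

    print(len(lookup(42)))   # 100 matching row positions, found without a scan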

S.Lott
Without indexes, the nested-loops algorithm will almost never be chosen by any decent engine. It will be either a `HASH JOIN` or a `MERGE JOIN`.
Quassnoi