views: 173
answers: 3

+2  Q: 

Query Optimization

I have 2 queries.

First:

SELECT * FROM `table` WHERE col='xyz' LIMIT 100
-- Time Taken: 0.0047s

Second:

SELECT * FROM `table` WHERE col='xyz' ORDER BY Id DESC LIMIT 100
-- Time Taken: 1.8208s

The second takes much longer. I know why that is: all the matching rows have to be selected first and then ordered, whereas the first query only returns the first 100 rows it finds.

Is there another way to do the ORDER BY, such as selecting the last 100 rows and then ordering them? Or am I writing the query wrong, and can it be made faster?

Note that Id is AUTO_INCREMENT, so selecting the last rows will still return the correct data once it is ordered.

CREATE TABLE `table`(
    `Id` BIGINT NOT NULL AUTO_INCREMENT,
    `dateReg` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,

    PRIMARY KEY (`Id`)
) ENGINE=MyISAM
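
A quick way to confirm where the time goes is to compare the two execution plans; `Using filesort` in the Extra column for the second statement would show that the sort is the expensive part (statements only, output not reproduced here):

EXPLAIN SELECT * FROM `table` WHERE col='xyz' LIMIT 100;
EXPLAIN SELECT * FROM `table` WHERE col='xyz' ORDER BY Id DESC LIMIT 100;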
A: 

Your id column should be set as the primary key (or, if you have some other primary key, you should put an index on it anyway). That should speed the query up enough.
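
For example (the index name idx_id is illustrative; per the posted CREATE TABLE, Id is already the primary key, so this only applies if it were missing):

ALTER TABLE `table` ADD PRIMARY KEY (Id);
-- or, if another column is already the primary key, index Id separately:
ALTER TABLE `table` ADD INDEX idx_id (Id);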

djc
It is indexed... same problem
Shahmir Javaid
+1  A: 

Create a composite index on (col, id) if you're using MyISAM for your table.

In InnoDB, the PRIMARY KEY is implicitly appended to every secondary index as a row pointer, since InnoDB tables are index-organized by design.

In the case of InnoDB, to get a composite index on (col, id), it is enough to create an index on col and make sure id is the PRIMARY KEY.

This index will be used to filter on col and order by id.

The index is a B-Tree structure, so it can be traversed in both ASC and DESC order with the same efficiency.
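
A minimal sketch of that setup for the MyISAM case (the index name idx_col_id is illustrative):

ALTER TABLE `table` ADD INDEX idx_col_id (col, Id);

SELECT * FROM `table` WHERE col = 'xyz' ORDER BY Id DESC LIMIT 100;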

Quassnoi
A composite index might help, but is there no way to just select the last 100 rows rather than the first 100?
Shahmir Javaid
`@Shahmir Javaid`: an index can work both ways with the same efficiency. Just try it.
Quassnoi
I will, but it will take some time as I have to repair a table of a million records.
Shahmir Javaid
Similar sort of times, I'm afraid.
Shahmir Javaid
+3  A: 

For consecutive ids:

SELECT t.*
  FROM `table` t
  JOIN (SELECT MAX(Id) AS maxid
          FROM `table`) m ON t.Id BETWEEN m.maxid - 100 AND m.maxid
 WHERE t.col = 'xyz'

For non-consecutive ids:

SELECT a.*
  FROM (SELECT t.*,
               @rownum := @rownum + 1 AS rownum
          FROM `table` t, (SELECT @rownum := 0) r
         WHERE t.col = 'xyz') a,
       (SELECT COUNT(*) AS total
          FROM `table` t
         WHERE t.col = 'xyz') m
 WHERE a.rownum BETWEEN m.total - 100 AND m.total
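
If the 100 rows also need to come back newest-first, an outer ORDER BY over the small window is cheap; a hedged variant of the consecutive-id query above:

SELECT t.*
  FROM `table` t
  JOIN (SELECT MAX(Id) AS maxid
          FROM `table`) m ON t.Id BETWEEN m.maxid - 100 AND m.maxid
 WHERE t.col = 'xyz'
 ORDER BY t.Id DESC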
OMG Ponies
This, however, would not return the same results as the original query. When you want a particular number of records in a particular order, you need to apply the ORDER BY first. It appears the original poster doesn't care about that, but someone else reading this later may not realize that there is a problem with this solution when you need the first hundred records in the order you asked for them.
HLGEM
Bit of a problem: my database is at work, so I can't test it till I get there tomorrow. However, I will post the results as soon as I get them. @HLGEM I have to order the last 100 rows. It is important to the design that I display the last 100 rows that appeared in the database.
Shahmir Javaid
It's more of an academic curiosity for me - I prefer not to use subselects if I don't have to, but there's no alternative on MySQL (or Postgres, for that matter).
OMG Ponies
I have an exact copy of what you have, and it's not selecting the last rows; it's selecting the first and then sorting those. However, if we can just select the last rows and then do the ordering, the above will work a treat.
Shahmir Javaid
@Shahmir: I updated the query - try it now.
OMG Ponies
@rexam You sexy SOB, that seems to do the trick. :D ... Just as a side note, what if the ids are like this: 200 199 198 190 189 185? Am I right in saying it's not going to return 100 records?
Shahmir Javaid
@Shahmir: You awarded me the answer before I could post my non-consecutive ID solution. I had to use non-ANSI joins to avoid needing a SET command before the query to initialize the rownum value.
OMG Ponies