
So I have a table with over 80,000 records, this one is called system. I also have another table called follows.

I need my statement to randomly select records from the system table, where that id is not already listed within the follows table under the current userid.

So here is what I have:

    SELECT system.id, 
           system.username, 
           system.password, 
           system.followed, 
           system.isvalid, 
           follows.userid, 
           follows.systemid
      FROM system
  LEFT JOIN follows ON system.id = follows.systemid
                   AND follows.userid = 2 
      WHERE system.followed = 0 
        AND system.isvalid = 1
        AND follows.systemid IS NULL
   ORDER BY RAND()
      LIMIT 200

Now it works perfectly, except that it takes about a whole minute before it can even start processing the job at hand with the records it has chosen. By this time the script usually times out and nothing happens.

Can somebody show me how to rework this so the same idea is accomplished, but without using ORDER BY RAND()? That seems to slow things down a whole bunch.

Thanks!

+4  A: 

I am not sure there is a simple drop-in replacement for your query; here is an article on correcting this type of issue:

http://www.titov.net/2005/09/21/do-not-use-order-by-rand-or-how-to-get-random-rows-from-table/

Irwin M. Fletcher
Thanks, but that's not a viable option for the way this query works.
Brandon
Why not? There are many different solutions in that article, some of which I think would work for you. Is your id field an autoincrement field? If so, the solution of selecting random ids should work.
Peter Recore
+1  A: 

You can generate some pseudo random value based on the ids and the current time:

ORDER BY 37*(UNIX_TIMESTAMP() ^ system.id) & 0xffff

This will mix bits from the id with the current timestamp, and then take only the lowest 16 bits.

David Rabinowitz
Seems to be just as slow...
Brandon
+2  A: 

There are two main reasons for the slowness:

  • MySQL must first generate a random number for each of the rows
  • The rows must then be sorted on the basis of this number to select the top 200

There is a trick to help with this situation; it requires a bit of prep work, and the way to implement it (and its relative merit) depends on your actual use case.

==> Introduce an extra column with a "random category" value to filter out most rows

The idea is to have an integer-valued column with values randomly assigned, once at prep time, with a value between say 0 and 9 (or 1 and 25... whatever). This column then needs to be added to the index used in the query. Finally, by modifying the query to include a filter on this column = a particular value (say 3), the number of rows which MySQL needs to handle is reduced by a factor of 10 (or 25, depending on the number of distinct values we have in the "random category").

Assuming this new column is called RandPreFilter, we could introduced an index like

CREATE [UNIQUE ?] INDEX ix_system_randprefilter
ON system (id, RandPreFilter)

And alter the query as follows

SELECT system.id
     , system.username
     , system.password
     , system.followed
     , system.isvalid
     , follows.userid
     , follows.systemid
FROM system
LEFT JOIN follows ON system.id = follows.systemid
   AND follows.userid = 2 
WHERE system.followed=0 AND system.isvalid=1
   AND follows.systemid IS NULL

   AND RandPreFilter = 1 -- or other numbers, or possibly 
        -- FLOOR(1 + RAND() * 25)
ORDER BY RAND()
LIMIT 200
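
The one-time prep work could be sketched as follows (the column default, the multiplier of 25, and the index name are illustrative choices, not requirements):

    -- One-time prep: add the random-category column and backfill it
    ALTER TABLE system ADD COLUMN RandPreFilter INT NOT NULL DEFAULT 0;

    -- Assign each row a random category from 1 to 25
    UPDATE system SET RandPreFilter = FLOOR(1 + RAND() * 25);

    -- Index including the new column, as described above
    CREATE INDEX ix_system_randprefilter
        ON system (id, RandPreFilter);

New rows inserted later also need a RandPreFilter value, e.g. assigned by the inserting code or a trigger.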
mjv
+2  A: 

The reason the query is slow is that the database needs to keep a representation of all the generated random values and their respective data before it can return even a single row from the database. What you can do is to limit the number of candidate rows to consider first by using WHERE RAND() < x, where you select x to be a number likely to return at least the number of samples you need. To get a true random sample you would then need to order by RAND again or do sampling on the returned dataset.
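
Applied to the original query, this might look like the sketch below; the 0.005 threshold is a guessed value, chosen so that with roughly 80,000 rows it would pass about 400 candidates, around twice the 200 needed:

    SELECT system.id, 
           system.username, 
           system.password, 
           system.followed, 
           system.isvalid
      FROM system
      LEFT JOIN follows ON system.id = follows.systemid
                       AND follows.userid = 2
     WHERE system.followed = 0
       AND system.isvalid = 1
       AND follows.systemid IS NULL
       AND RAND() < 0.005     -- pre-filter: roughly 400 candidates expected
     ORDER BY RAND()          -- now sorts only the surviving rows
     LIMIT 200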

Using this approach allows the database to process the query in a streaming fashion without having to build a large intermediate representation of all the data. The drawback is that you can never be 100% sure that you get the number of samples you need, so you might have to run the query again until you do, live with a smaller sample set, or incrementally add samples (making sure to avoid duplicates) until you have the number of samples you need.

If you don't require the query to return different results for each call you could also add a pre-generated random value column with an index and combine with the above technique. It would allow you to get any number of samples in a fair manner, even if you add or delete rows, but the same query on the same data would of course return the same result set.

Freed
A: 

Depending on how random your data needs to be, it might be worth adding an extra "last used" datetime column and updating it each time you use a row. Then select the top n rows ordered by the last-used field descending.

If you wrap this in a prepared statement you can select one (semi) random result at a time without worrying about the logic.

Alternatively give every row a sequential ID and generate the randomness in the code and pull back just the required rows. The problem is that the full recordset is being returned prior to being ordered.
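
A hedged sketch of that last suggestion: the application picks a batch of random ids between 1 and the known maximum id, then the query fetches only those rows. Ids that fall into gaps or fail the filters simply come back missing, so the application over-requests and retries until it has 200 rows (the literal ids below are placeholders for the generated batch):

    -- Application generated, say, 300 random ids in [1, MAX(id)]
    SELECT system.id, 
           system.username, 
           system.password
      FROM system
      LEFT JOIN follows ON system.id = follows.systemid
                       AND follows.userid = 2
     WHERE system.followed = 0
       AND system.isvalid = 1
       AND follows.systemid IS NULL
       AND system.id IN (17, 94, 233, 4711 /* ... */)
     LIMIT 200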

jmk2000