views: 904
answers: 3
I have a large table (TokenFrequency) with millions of rows in it. The table is structured like this:

Table - TokenFrequency

  • id - int, primary key
  • source - int, foreign key
  • token - char
  • count - int

My goal is to select all of the rows in which two sources share the same token. For example, if my table looked like this:

id | source | token | count
---+--------+-------+------
 1 |      1 | dog   |     1
 2 |      2 | cat   |     2
 3 |      3 | cat   |     2
 4 |      4 | pig   |     5
 5 |      5 | zoo   |     1
 6 |      5 | cat   |     1
 7 |      5 | pig   |     1

I would want a SQL query to give me source 1, source 2, the token, and the sum of the counts. For example:

source1 | source2 | token | count
--------+---------+-------+------
      2 |       3 | cat   |     4
      2 |       5 | cat   |     3
      3 |       5 | cat   |     3
      4 |       5 | pig   |     6

I have a query that looks like this:

SELECT  F.source AS source1, S.source AS source2, F.token, 
       (F.count + S.count) AS sum 
FROM       TokenFrequency F 
INNER JOIN TokenFrequency S ON F.token = S.token 
WHERE F.source <> S.source
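
To make the behavior concrete, the schema and query above can be reproduced with a small sketch (an in-memory SQLite database is used here as a stand-in for the real MySQL one):

```python
import sqlite3

# Rebuild the sample table in an in-memory SQLite database and run
# the self-join exactly as written in the question.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE TokenFrequency
                (id INTEGER PRIMARY KEY, source INTEGER,
                 token TEXT, count INTEGER)""")
conn.executemany("INSERT INTO TokenFrequency VALUES (?, ?, ?, ?)",
                 [(1, 1, "dog", 1), (2, 2, "cat", 2), (3, 3, "cat", 2),
                  (4, 4, "pig", 5), (5, 5, "zoo", 1),
                  (6, 5, "cat", 1), (7, 5, "pig", 1)])

rows = conn.execute("""
    SELECT F.source AS source1, S.source AS source2, F.token,
           (F.count + S.count) AS sum
    FROM TokenFrequency F
    INNER JOIN TokenFrequency S ON F.token = S.token
    WHERE F.source <> S.source""").fetchall()

# Every pair comes back twice, e.g. both (2, 3, 'cat', 4) and (3, 2, 'cat', 4).
print(rows)
```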

This query works fine, but the problems I have with it are:

  1. The TokenFrequency table has millions of rows, so I need a faster alternative to obtain this result.
  2. The current query gives duplicates. For example, it selects:
    source1=2, source2=3, token=cat, count=4
    source1=3, source2=2, token=cat, count=4
    That isn't too much of a problem, but if there is a way to eliminate those duplicates (and gain a speed increase in the process), it would be very useful.

The main issue is speed: the current query takes hours to complete. I believe the INNER JOIN of the table against itself is the problem. I'm sure there has to be a way to eliminate the self-join and get similar results using just one instance of the TokenFrequency table. Fixing the second problem I mentioned might also speed up the query.

I need a way to restructure this query to provide the same results in a faster, more efficient manner.

Thanks.

+1  A: 

I'd need a little more info to diagnose the speed issue, but to remove the dups, add this to the WHERE:

AND F.source<S.source
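
A quick check of the combined query on the sample data (a sketch; SQLite stands in for MySQL here):

```python
import sqlite3

# Sample data from the question, loaded into an in-memory SQLite table.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE TokenFrequency
                (id INTEGER PRIMARY KEY, source INTEGER,
                 token TEXT, count INTEGER)""")
conn.executemany("INSERT INTO TokenFrequency VALUES (?, ?, ?, ?)",
                 [(1, 1, "dog", 1), (2, 2, "cat", 2), (3, 3, "cat", 2),
                  (4, 4, "pig", 5), (5, 5, "zoo", 1),
                  (6, 5, "cat", 1), (7, 5, "pig", 1)])

# With F.source < S.source each unordered pair appears exactly once.
rows = conn.execute("""
    SELECT F.source AS source1, S.source AS source2, F.token,
           (F.count + S.count) AS sum
    FROM TokenFrequency F
    INNER JOIN TokenFrequency S ON F.token = S.token
    WHERE F.source < S.source""").fetchall()
print(sorted(rows))
```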
KM
Ah so simple. This worked perfectly for eliminating the duplicates. Thanks
cruzja
+2  A: 

Try this:

SELECT token, GROUP_CONCAT(source), SUM(count)
FROM TokenFrequency
GROUP BY token;

This should run a lot faster and also eliminate the duplicates. But the sources will be returned in a comma-separated list, so you'll have to explode that in your application.
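
SQLite's GROUP_CONCAT behaves like MySQL's for this purpose, so the shape of the result can be sketched on the sample data (note the order of sources inside the concatenated list is not guaranteed):

```python
import sqlite3

# Sample data from the question in an in-memory SQLite table.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE TokenFrequency
                (id INTEGER PRIMARY KEY, source INTEGER,
                 token TEXT, count INTEGER)""")
conn.executemany("INSERT INTO TokenFrequency VALUES (?, ?, ?, ?)",
                 [(1, 1, "dog", 1), (2, 2, "cat", 2), (3, 3, "cat", 2),
                  (4, 4, "pig", 5), (5, 5, "zoo", 1),
                  (6, 5, "cat", 1), (7, 5, "pig", 1)])

rows = conn.execute("""
    SELECT token, GROUP_CONCAT(source), SUM(count)
    FROM TokenFrequency
    GROUP BY token""").fetchall()

# One row per token: e.g. "cat" maps to a list like "2,3,5" and total 5.
by_token = {token: (sources, total) for token, sources, total in rows}
print(by_token)
```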

You might also try creating a compound index over the columns token, source, count (in that order) and analyze with EXPLAIN to see if MySQL is smart enough to use it as a covering index for this query.


update: I seem to have misunderstood your question. You don't want the sum of counts per token, you want the sum of counts for every pair of sources for a given token.

I believe the inner join is the best solution for this. An important guideline for SQL is that if you need to calculate an expression with respect to two different rows, then you need to do a join.

However, one optimization technique that I mentioned above is to use a covering index so that all the columns you need are included in an index data structure. The benefit is that all your lookups are O(log n), and the query doesn't need to do a second I/O to read the physical row to get other columns.

In this case, you should create the covering index over columns token, source, count as I mentioned above. Also try to allocate enough cache space so that the index can be cached in memory.
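
As a sketch of that suggestion (the index name is made up for illustration, and MySQL's EXPLAIN output differs from SQLite's, so this only shows the idea):

```python
import sqlite3

# Sample data from the question in an in-memory SQLite table.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE TokenFrequency
                (id INTEGER PRIMARY KEY, source INTEGER,
                 token TEXT, count INTEGER)""")
conn.executemany("INSERT INTO TokenFrequency VALUES (?, ?, ?, ?)",
                 [(1, 1, "dog", 1), (2, 2, "cat", 2), (3, 3, "cat", 2),
                  (4, 4, "pig", 5), (5, 5, "zoo", 1),
                  (6, 5, "cat", 1), (7, 5, "pig", 1)])

# Compound index over (token, source, count) -- all the columns the
# join touches, so the engine can answer it from the index alone.
conn.execute("""CREATE INDEX idx_token_source_count
                ON TokenFrequency (token, source, count)""")

query = """
    SELECT F.source, S.source, F.token, (F.count + S.count)
    FROM TokenFrequency F
    INNER JOIN TokenFrequency S ON F.token = S.token
    WHERE F.source < S.source"""

# Inspect the planner's report; the results themselves are unchanged.
for step in conn.execute("EXPLAIN QUERY PLAN " + query):
    print(step)
rows = conn.execute(query).fetchall()
```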

Bill Karwin
+1 for the right approach; but such an index would be almost as big as the whole record. Do you think it would be faster than just indexing on token?
Javier
Depends on the number of rows and other system-specific factors. The only way to be sure is to try it with *your* database and measure the performance.
Bill Karwin
Apologies for misunderstanding your question. See update above.
Bill Karwin
Thanks for the update and the tips. I will work on using a covering index and update my results.
cruzja
+1  A: 

If token isn't indexed, it certainly should be.
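
For example (index name hypothetical; SQLite as a stand-in):

```python
import sqlite3

# Sample data from the question in an in-memory SQLite table.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE TokenFrequency
                (id INTEGER PRIMARY KEY, source INTEGER,
                 token TEXT, count INTEGER)""")
conn.executemany("INSERT INTO TokenFrequency VALUES (?, ?, ?, ?)",
                 [(1, 1, "dog", 1), (2, 2, "cat", 2), (3, 3, "cat", 2),
                  (4, 4, "pig", 5), (5, 5, "zoo", 1),
                  (6, 5, "cat", 1), (7, 5, "pig", 1)])

# A plain single-column index on the join column.
conn.execute("CREATE INDEX idx_token ON TokenFrequency (token)")

rows = conn.execute("""
    SELECT F.source, S.source, F.token, (F.count + S.count)
    FROM TokenFrequency F
    INNER JOIN TokenFrequency S ON F.token = S.token
    WHERE F.source < S.source""").fetchall()
print(sorted(rows))
```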

Carl Manaster