OK, I got downvoted on this, so I decided to test it:
CREATE TABLE userrole (
    userid INT,
    roleid INT,
    PRIMARY KEY (userid, roleid)
);

CREATE INDEX idx_roleid ON userrole (roleid);
Run this:
<?php
ini_set('max_execution_time', 120); // takes over a minute to insert 500k+ records
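// Aside (added note, not part of the original run): each row goes in as its
// own INSERT statement, which is the main reason the load takes over a
// minute; batching rows into multi-row INSERT ... VALUES statements would
// cut that substantially.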
$start = microtime(true);
echo "<pre>\n";

mysql_connect('localhost', 'scratch', 'scratch');
if (mysql_error()) {
    echo "Connect error: " . mysql_error() . "\n";
}

mysql_select_db('scratch');
if (mysql_error()) {
    echo "Select DB error: " . mysql_error() . "\n";
}

$users = 200000;
$count = 0;
for ($i = 1; $i <= $users; $i++) {
    $roles = rand(1, 4);
    $available = range(1, 5);
    for ($j = 0; $j < $roles; $j++) {
        $extract = array_splice($available, rand(0, sizeof($available) - 1), 1);
        $id = $extract[0];
        query("INSERT INTO userrole (userid, roleid) VALUES ($i, $id)");
        $count++;
    }
}

$stop = microtime(true);
$duration = $stop - $start;
$insert = $duration / $count;

echo "$count user-role rows added.\n";
echo "Program ran for $duration seconds.\n";
echo "Insert time $insert seconds per row.\n";
echo "</pre>\n";

function query($str) {
    mysql_query($str);
    if (mysql_error()) {
        echo "$str: " . mysql_error() . "\n";
    }
}
?>
Output:
499872 user-role rows added.
Program ran for 56.5513510704 seconds.
Insert time 0.000113131663847 seconds per row.
That adds roughly 500,000 random user-role combinations, of which approximately 25,000 users hold all three of the roles (1, 2 and 3) being queried.
First query:
SELECT userid
FROM userrole
WHERE roleid IN (1, 2, 3)
GROUP BY userid
HAVING COUNT(1) = 3
Query time: 0.312s
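For reference, you can count the matching users by wrapping that first query in a derived table (a quick check I've added here; it wasn't part of the original timings). On the run above this comes out at roughly 25,000:

-- derived-table wrapper around the first query; the alias is required by MySQL
SELECT COUNT(*) AS matching_users
FROM (
    SELECT userid
    FROM userrole
    WHERE roleid IN (1, 2, 3)
    GROUP BY userid
    HAVING COUNT(1) = 3
) t;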
Second query:

SELECT t1.userid
FROM userrole t1
JOIN userrole t2 ON t1.userid = t2.userid AND t2.roleid = 2
JOIN userrole t3 ON t1.userid = t3.userid AND t3.roleid = 3
WHERE t1.roleid = 1
Query time: 0.016s
That's right: the join version I proposed is roughly twenty times faster than the aggregate version.
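If you want to see why for yourself, prefix both queries with EXPLAIN and compare the plans (a sketch; the exact output depends on your MySQL version and table statistics):

-- aggregate version: expect a pass over all index entries for roles 1, 2 and 3
EXPLAIN
SELECT userid
FROM userrole
WHERE roleid IN (1, 2, 3)
GROUP BY userid
HAVING COUNT(1) = 3;

-- join version: expect indexed (ref) lookups at each join step
EXPLAIN
SELECT t1.userid
FROM userrole t1
JOIN userrole t2 ON t1.userid = t2.userid AND t2.roleid = 2
JOIN userrole t3 ON t1.userid = t3.userid AND t3.roleid = 3
WHERE t1.roleid = 1;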
Sorry, but I do this for a living. In the real world we test SQL, and the results speak for themselves.
The reason for this should be pretty clear. The cost of the aggregate query scales with the size of the table: every row is processed, aggregated and then filtered (or not) by the HAVING clause. The join version, using the index, selects a subset of the users with the first role, then checks that subset against the second role, and finally checks the result against the third role. Each selection (in relational algebra terms) works on an increasingly small subset. From this you can conclude:
The performance of the join version gets even better as the incidence of matches goes down.
If only 500 users (out of the 500k sample above) had the three stated roles, the join version would get significantly faster. The aggregate version would not (any performance improvement there comes from transporting 500 users instead of 25k, which the join version obviously benefits from too).
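If you want to test that claim yourself, one way (hypothetical; I didn't run this as part of the test above) is to hand a new, rare role to a small slice of users and rerun both query shapes against it:

-- hypothetical follow-up: give role 6 to only 500 users, then rerun both
-- queries with roleid 6 substituted for one of the common roles
INSERT INTO userrole (userid, roleid)
SELECT userid, 6
FROM userrole
WHERE roleid = 1
ORDER BY RAND()
LIMIT 500;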
I was also curious to see how a real database (i.e. Oracle) would deal with this, so I repeated the same exercise on Oracle XE (running on the same Windows XP desktop as MySQL in the previous example) and the results were almost identical.
Joins seem to be frowned upon, but as I've demonstrated, aggregate queries can be an order of magnitude slower.
Update: After some extensive testing, the picture is more complicated and the answer will depend on your data, your database and other factors. The moral of the story is test, test, test.