I have a table with a billion rows, and I would like to determine the average and standard deviation of the execution time for several queries of the form:
select * from mytable where col1 = '36e2ae77-43fa-4efa-aece-cd7b8b669043';
select * from mytable where col1 = '4b58c002-bea4-42c9-8f31-06a499cabc51';
select * from mytable where col1 = 'b97242ae-9f6c-4f36-ad12-baee9afae194';
....
I have a thousand random values for col1 stored in another table.
Is there a way to record how long each of these queries takes (in milliseconds) in a separate table, so that I can run statistics on the timings? Something like: for each col1 in my table of random values, execute the query, measure the elapsed time, and store it in another table.
A completely different approach would be fine, as long as I can stay within PostgreSQL (i.e., I don't want to write an external program to do this).
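Here is a rough sketch of what I have in mind as a PL/pgSQL DO block. The table names random_values and query_timings are placeholders for my actual tables, and I'm assuming col1 is a uuid. I'm using clock_timestamp() rather than now(), since now() is frozen at the start of the transaction:

-- Placeholder table to collect one row per timed run
CREATE TABLE IF NOT EXISTS query_timings (
    col1       uuid,
    elapsed_ms double precision
);

DO $$
DECLARE
    v       record;
    t_start timestamptz;
BEGIN
    FOR v IN SELECT col1 FROM random_values LOOP
        t_start := clock_timestamp();  -- wall-clock time; advances within a transaction
        PERFORM * FROM mytable WHERE col1 = v.col1;  -- executes the query, discards the rows
        INSERT INTO query_timings (col1, elapsed_ms)
        VALUES (v.col1,
                1000 * EXTRACT(epoch FROM clock_timestamp() - t_start));
    END LOOP;
END;
$$;

-- The statistics would then be a simple aggregate:
SELECT avg(elapsed_ms), stddev(elapsed_ms) FROM query_timings;

But I'm not sure whether PERFORM is the right way to execute the query for timing purposes, or whether PostgreSQL has a better built-in mechanism for this.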