I have a database that stores temperature-logging data from various instruments. Data may be logged as often as once per minute. One approach to designing a log table would be to put each log entry in its own row along with the device ID, a time stamp, and a sequence number (even if the clock on a device is changed, it should still be possible to sort entries in the order the measurements were actually taken). That seems grossly inefficient, however, since every 16-bit measurement would carry roughly 16 bytes of other data, in addition to whatever the system adds for indexing. I recognize that it is often senseless to try to squeeze every last byte out of a database, but inflating the data by a factor of 9:1 or worse seems silly.
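For concreteness, a minimal sketch of that row-per-reading layout (SQLite via Python is only for illustration; the table and column names are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE reading (
        device_id INTEGER NOT NULL,
        logged_at INTEGER NOT NULL,   -- device time stamp (e.g. Unix seconds)
        seq_no    INTEGER NOT NULL,   -- monotonic counter, survives clock changes
        temp_raw  INTEGER NOT NULL,   -- the 16-bit measurement
        PRIMARY KEY (device_id, seq_no)
    )
""")

# One row per measurement: roughly 16 bytes of metadata around every 2-byte
# reading, before counting index overhead.
conn.execute("INSERT INTO reading VALUES (?, ?, ?, ?)", (7, 1700000000, 42, 2317))
```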
At present, I aggregate the readings into groups of equally-spaced measurements and store one group per record in a variable-length opaque binary format, along with the device ID, the time stamp and sequence number of the first reading, and the interval between readings (sketched below). This works nicely, and for all I know it may be the best approach, but it doesn't allow for much in the way of queries.
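Roughly what the aggregated format looks like (again only a sketch; my real binary layout differs, and the names and packing format here are made up):

```python
import sqlite3
import struct

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE reading_block (
        device_id  INTEGER NOT NULL,
        start_at   INTEGER NOT NULL,  -- time stamp of the first reading in the block
        start_seq  INTEGER NOT NULL,  -- sequence number of the first reading
        interval_s INTEGER NOT NULL,  -- spacing between readings, in seconds
        samples    BLOB    NOT NULL,  -- packed 16-bit readings
        PRIMARY KEY (device_id, start_seq)
    )
""")

readings = [2317, 2318, 2320, 2319]                   # four 16-bit measurements
blob = struct.pack(f"<{len(readings)}H", *readings)   # little-endian uint16s
conn.execute(
    "INSERT INTO reading_block VALUES (?, ?, ?, ?, ?)",
    (7, 1700000000, 42, 60, blob),
)

# The metadata cost is amortized over the whole block, but SQL can no longer
# see individual readings; range or threshold queries require unpacking the
# blob in application code.
(packed,) = conn.execute("SELECT samples FROM reading_block").fetchone()
values = struct.unpack(f"<{len(packed) // 2}H", packed)
```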
Is there any nice approach for handling such data sets without excessive redundancy?