Hi,
we're writing a scientific tool with MySQL support. The problem is that we need microsecond precision for our datetime fields, which MySQL doesn't currently support. I see at least two workarounds here (both sketched in SQL below):
- Using a DECIMAL column, with the integer part corresponding to seconds since some epoch (I doubt that the UNIX epoch will do, since we have to store measurements taken in the '50s and '60s).
- Using two integer columns, one for seconds and the other for microseconds.
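For concreteness, here's a rough sketch of both layouts (the table names, column names, and DECIMAL precision are just placeholders, not a definitive design):

    -- Option 1: one DECIMAL column holding seconds.microseconds
    -- relative to whatever epoch we choose
    CREATE TABLE measurement_a (
        id        INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
        dt_record DECIMAL(17,6) NOT NULL,  -- signed, so pre-epoch values work
        INDEX idx_dt (dt_record)
    );

    -- Option 2: separate seconds and microseconds columns
    CREATE TABLE measurement_b (
        id         INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
        dt_seconds INT NOT NULL,           -- signed, so pre-epoch values work
        dt_micros  INT UNSIGNED NOT NULL,  -- 0..999999
        INDEX idx_dt (dt_seconds, dt_micros)
    );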
The most popular query selects rows falling within a time interval (i.e. dt_record > time1 AND dt_record < time2).
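Against the hypothetical schemas above, that interval query would look roughly like this. For the two-column layout a row-constructor comparison is valid SQL, but I'm not sure the optimizer uses the composite index efficiently for it, so it may need to be expanded by hand:

    -- Option 1: a simple range scan over the indexed DECIMAL column
    SELECT * FROM measurement_a
    WHERE dt_record > 1234567.250000
      AND dt_record < 1234890.750000;

    -- Option 2: row-constructor comparison over (seconds, micros)
    SELECT * FROM measurement_b
    WHERE (dt_seconds, dt_micros) > (1234567, 250000)
      AND (dt_seconds, dt_micros) < (1234890, 750000);

    -- Equivalent expanded form, in case the optimizer won't
    -- use the index for the row-constructor version:
    SELECT * FROM measurement_b
    WHERE (dt_seconds > 1234567
           OR (dt_seconds = 1234567 AND dt_micros > 250000))
      AND (dt_seconds < 1234890
           OR (dt_seconds = 1234890 AND dt_micros < 750000));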
Which one of these methods (or perhaps another one) is likely to provide better performance in the case of large tables (millions of rows)?