We're writing a scientific tool with MySQL support. The problem is that we need microsecond precision for our datetime fields, which MySQL doesn't currently support. I see at least two workarounds here:
The most common query selects rows that fall within a time interval (i.e. dt_record > time1 AND dt_record < time2).
Which one of these methods (or perhaps another one) is likely to provide better performance in the case of large tables (millions of rows)?
If the most common queries are time-based, I would recommend going with a single column that stores the time, as in your first option.
You could pick your own epoch for the application and store each timestamp as the number of microseconds elapsed since that epoch.
This should simplify the queries that need to be written when searching over time intervals.
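A minimal sketch of what that could look like, assuming a hypothetical measurements table with a BIGINT column holding microseconds since your chosen epoch (all table, column, and literal values here are illustrative, not from the original question):

    -- Hypothetical table: dt_record_us stores microseconds elapsed since
    -- the application's chosen epoch, as a plain BIGINT.
    CREATE TABLE measurements (
        id           BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
        dt_record_us BIGINT NOT NULL,        -- microseconds since chosen epoch
        value        DOUBLE NOT NULL,
        KEY idx_dt_record_us (dt_record_us)  -- index to support interval scans
    ) ENGINE=InnoDB;

    -- The interval query becomes a range condition on one indexed integer,
    -- which the optimizer can satisfy with an index range scan.
    SELECT id, dt_record_us, value
    FROM measurements
    WHERE dt_record_us > 1300000000000000    -- time1, in microseconds
      AND dt_record_us < 1300086400000000;   -- time2, in microseconds

The conversion between human-readable datetimes and this integer happens in the application; on the MySQL side everything stays a plain integer comparison, so indexing behaves exactly as it would for any other BIGINT key.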
Also have a look at section 10.3.1, "The DATETIME, DATE, and TIMESTAMP Types", in the MySQL manual:
"However, microseconds cannot be stored into a column of any temporal data type. Any microseconds part is discarded. Conversion of TIME or DATETIME values to numeric form (for example, by adding +0) results in a double value with a microseconds part of .000000."
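If you want to confirm that behavior on a server from that era, a quick check (the results in the comments are what the documentation describes, not captured output):

    -- Any fractional seconds in the literal are discarded when the value
    -- is converted to DATETIME.
    SELECT CAST('2011-03-15 12:34:56.789123' AS DATETIME);
    -- -> 2011-03-15 12:34:56

    -- Numeric conversion of a DATETIME always carries a .000000 part.
    SELECT NOW() + 0;
    -- -> e.g. 20110315123456.000000

So storing the microseconds inside a native temporal column is not an option here, which is why the integer-since-epoch approach above is attractive.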