Are there any lossless compression methods that can be applied to floating-point time-series data, and that will significantly outperform, say, writing the data as binary into a file and running it through gzip?
Reduction of precision might be acceptable, but it must happen in a controlled way (i.e. I must be able to set a bound on how many digits are kept).
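For concreteness, the baseline I am comparing against is roughly the following (a minimal Python sketch; the `round_sig` helper and the optional `digits` bound are just illustrative, not part of any existing tool):

```python
import gzip
import math
import struct

def round_sig(x, digits):
    # Illustrative precision bound: keep only `digits` significant
    # decimal digits before compressing (lossy, but controlled).
    if x == 0.0 or not math.isfinite(x):
        return x
    return round(x, digits - 1 - int(math.floor(math.log10(abs(x)))))

def write_gzipped(values, path, digits=None):
    # Baseline: pack the series as raw little-endian doubles, then gzip.
    if digits is not None:
        values = [round_sig(v, digits) for v in values]
    raw = struct.pack("<%dd" % len(values), *values)
    with gzip.open(path, "wb") as f:
        f.write(raw)
```

Since gzip matches byte strings, it captures little of the sample-to-sample structure in the mantissa bytes even when neighbouring values are very close; that is the gap I am hoping a specialized method can close.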
I am working with some large data files which are series of correlated `double`s, describing a function of time (i.e. the values are correlated). I don't generally need the full `double` precision, but I might need more than `float`.
Since there are specialized lossless methods for images/audio, I was wondering if anything specialized exists for this situation.
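To make "specialized" concrete: such methods typically predict each sample from the previous ones and encode only the residual, for instance by XOR-ing consecutive 64-bit patterns so that smooth series produce long runs of zero bits. Here is a sketch of that idea (not any existing tool's format; real compressors such as FPC use stronger predictors and bit-level coding):

```python
import struct

def xor_delta(values):
    # XOR each double's 64-bit pattern with its predecessor's.
    # On slowly varying series the high-order bits cancel, leaving
    # mostly zero bytes, which a back-end like gzip packs well.
    prev, out = 0, []
    for v in values:
        bits = struct.unpack("<Q", struct.pack("<d", v))[0]
        out.append(bits ^ prev)
        prev = bits
    return struct.pack("<%dQ" % len(out), *out)
```

The transform is trivially invertible (a running XOR recovers the original bit patterns), so the pipeline stays lossless.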
Clarification: I am looking for existing practical tools rather than a paper describing how to implement something like this. Something comparable to gzip in speed would be excellent.
You might want to have a look at these resources:
You might also want to try LogLuv-compressed TIFF for this, though I haven't used it myself.