Timestamp precision

I’m in the habit of recording timestamps as integer seconds, i.e. inserting the result of int(time.time()) from Python into my database. It’s a habit left over from the bad old days when Unix machines didn’t have sub-second clocks. Any modern system has access to at least a millisecond timer, if not a microsecond timer or better, and there’s no reason not to use it.
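For reference, here’s a minimal sketch of the options in Python; the only assumption is Python 3.7 or later for time.time_ns():

    import time

    time.time()                    # float seconds, sub-millisecond precision
    int(time.time())               # my old habit: truncates to whole seconds
    time.time_ns()                 # integer nanoseconds (3.7+), no float rounding
    time.time_ns() // 1_000_000    # integer milliseconds, JavaScript-style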

The only drawback in my project is that I declared all my timestamp fields as “integer”, and sqlite3 does in fact round them off to whole seconds on storage. It’s harmless enough, at least if you’re not relying on exact timestamp storage, but I should probably do a schema migration at some point to fix it.
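If I do get around to it, the migration would look something like the sketch below. SQLite can’t change a column’s type in place, so the usual pattern is create-copy-rename; the table, column, and file names here are all hypothetical:

    import sqlite3

    con = sqlite3.connect("app.db")  # hypothetical database file
    with con:
        con.executescript("""
            -- SQLite has no ALTER COLUMN, so rebuild the table instead.
            CREATE TABLE events_new (
                id INTEGER PRIMARY KEY,
                created_at REAL NOT NULL  -- was INTEGER; REAL keeps fractional seconds
            );
            INSERT INTO events_new (id, created_at)
                SELECT id, created_at FROM events;
            DROP TABLE events;
            ALTER TABLE events_new RENAME TO events;
        """)
    con.close()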

It’s a bit confounding that JavaScript’s native time precision is integer milliseconds (that’s what Date.now() returns), although it seems like a reasonable practical choice.
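If you want Python timestamps that line up with JavaScript’s, the conversion is simple in either direction; a small sketch:

    import time
    from datetime import datetime, timezone

    ms = int(time.time() * 1000)      # same epoch and unit as JavaScript's Date.now()
    ms = time.time_ns() // 1_000_000  # same thing, without the float round-trip

    # And back from JS-style milliseconds to a readable datetime:
    dt = datetime.fromtimestamp(ms / 1000, tz=timezone.utc)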


One thought on “Timestamp precision”

  1. You (and JavaScript) have this right to start with – milliseconds (or finer) as an integer is the norm; it’s what everything that cares about timestamps (like DBs fancier than sqlite) uses. If you ever need to closely compare timestamps, increment timestamps, index timestamps, or do arithmetic on timestamps, floats are a gateway to hell (a quick illustration below). It’s Python’s time module that’s the weird one in this case.
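To make that concrete, here’s a small Python sketch contrasting the two; the values are only illustrative:

    import time

    # Integer milliseconds: comparisons and arithmetic are exact.
    t0_ms = time.time_ns() // 1_000_000
    t1_ms = t0_ms + 500              # exactly half a second later
    assert t1_ms - t0_ms == 500      # always holds for integers

    # Float seconds: each operation rounds to the nearest representable
    # double, so nominally equal results can drift apart by a rounding error.
    t0 = time.time()
    a = t0 + 0.1 + 0.1 + 0.1
    b = t0 + 0.3
    print(a == b)                    # may be False
    print(abs(a - b) < 1e-6)         # compare with a tolerance instead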
