Managing dates and times has long been a source of trouble for application developers. In many cases, a simple app only cares about datetime resolution at the day level. For many other applications, though, higher time resolution is critical, and a finer, more granular time-unit resolution is highly desirable. The difficulties in managing time emerge in the realm of relativity: if an application, its users, and its dependent infrastructure are spread across timezones, synchronizing a chronological history of events may prove difficult unless you have designed your system to manage time well. This discussion may be old hat for many developers, but it remains a painful reality for many apps.
It doesn't have to be, actually. The "difficult" aspects of managing time are generally a matter of designer oversight. Two common oversights that I am personally guilty of are:
Time is often captured incompletely. Application services consuming the incomplete time fill in the missing data with assumptions.
(new Date()).getTime() //=> 1435089516878. What happens if you log this time on a server in a different timezone? Most likely, the server uses its own timezone or UTC, not the user's time zone.
Time is transferred in varying formats, generating sub-system overhead (or errors!).
How do you prepare time objects for sending over the wire? Is your serialization lossy? Do your services require knowledge of each other's formats?
Before we discuss how these issues manifest themselves in an application, let's quickly discuss the general solution. We need a way to represent time reliably across:
My preferred strategy is to store, transfer, and manipulate complete timestamps only. What's a complete timestamp? It's simply an absolute time with an explicit representation of its timezone: a string or composite datatype specifying time at my application's required time-unit resolution (or finer), plus the TZ. Practically speaking, in my app I will:
transfer all times as fully defined time strings with timezones in a standardized format (e.g. ISO 8601). Know your application's time-wise resolution needs, and adhere to them throughout the app. Suppose you need second-level resolution:
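To make that concrete, here is a minimal sketch of producing an ISO 8601 string at second-level resolution that preserves the local UTC offset. The helper name `toIsoWithOffset` is my own invention for illustration, not a built-in:

```javascript
// Format a Date as ISO 8601 at second resolution, preserving the
// local UTC offset (e.g. "2015-06-23T14:38:36-06:00").
// `toIsoWithOffset` is a hypothetical helper, not part of the language.
function toIsoWithOffset(date) {
  var pad = function (n) { return (n < 10 ? '0' : '') + n; };
  var offsetMin = -date.getTimezoneOffset(); // minutes east of UTC
  var sign = offsetMin >= 0 ? '+' : '-';
  var abs = Math.abs(offsetMin);
  return date.getFullYear() +
    '-' + pad(date.getMonth() + 1) +
    '-' + pad(date.getDate()) +
    'T' + pad(date.getHours()) +
    ':' + pad(date.getMinutes()) +
    ':' + pad(date.getSeconds()) +
    sign + pad(Math.floor(abs / 60)) + ':' + pad(abs % 60);
}

console.log(toIsoWithOffset(new Date()));
```

Whatever machine runs this, the offset travels with the value, so the reader of the string never has to guess.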
EDIT, [Aug 2015]
We already discussed these above. Let's dive a bit deeper.
var myDate = new Date();
myDate.getTime(); //=> 1435089516878
The above is an easy way to get a time. Let's use it in our app, so long as that time data never leaves this client and this machine never changes timezones. Can you assert that your users don't travel? Can you assert that your times or time calculations won't be sent somewhere beyond the client? If you cannot, sending time in a basic integer format drops critical data. Specifically, you lose timezone relativity and, in rare cases, a known base-time reference value. For instance, does that integer reflect the number of seconds since the Unix epoch in UTC, or the number of seconds since the epoch, offset for your region?
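To see what's at stake, the sketch below renders the same integer instant in two timezones, using `toLocaleString`'s `timeZone` option (available in modern browsers and in Node builds with ICU data). The bare number cannot tell you which wall-clock time the user actually saw:

```javascript
// The same integer instant rendered in two timezones: the number
// alone carries no record of the user's locale.
var epochMs = 1435089516878;
var inNewYork = new Date(epochMs).toLocaleString('en-US', { timeZone: 'America/New_York' });
var inTokyo = new Date(epochMs).toLocaleString('en-US', { timeZone: 'Asia/Tokyo' });
console.log(inNewYork); // a June 23rd afternoon, US Eastern time
console.log(inTokyo);   // already June 24th in Tokyo
```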
You could, as some do, use the above integer time value in conjunction with a timezone string. However, you've now introduced one or two extra parsing steps for every service consuming your time values, plus an unstated assumption that the Unix time provided is already aligned with UTC (it generally is). These are all simple concepts, but they stack up into real complication when you have many services in different languages: JS (Node and browser), for instance, defaults to milliseconds, while PHP likes seconds.
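The milliseconds-vs-seconds mismatch alone is an easy trap; a quick sketch:

```javascript
// JS epoch times are in milliseconds; PHP's time() reports seconds.
// A consumer that forgets the factor of 1000 is off by three orders
// of magnitude.
var jsMillis = 1435089516878;                      // (new Date()).getTime()
var phpStyleSeconds = Math.floor(jsMillis / 1000); // what PHP's time() counts
console.log(phpStyleSeconds); // 1435089516
```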
Managing this complication is generally unnecessary. To convey a clear, accurate, and complete timestamp, one that you can interchange safely across services, serialize your apps' and services' timestamps as complete strings during I/O, and parse them via language natives or time helper libraries as required.
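As a sketch of that flow: a complete ISO 8601 string goes over the wire, and the consumer parses it with nothing but language natives:

```javascript
// A complete timestamp travels as a string; the consumer parses it
// natively. The offset rides along inside the value itself, so no
// side-channel "timezone" field is needed.
var wire = '2004-02-12T15:19:21+00:00'; // serialized for I/O
var parsed = new Date(wire);            // native ISO 8601 parsing (ES5+)
console.log(parsed.getTime());          // absolute instant, ms since epoch
```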
This example leads us directly to our next topic!
Look at your own applications. How have you shared times between services? Have you echoed time values directly out of your database? Have your APIs used programming-language-specific formatting functions to make time "look" standard to your liking?
Apps I have worked on have done all sorts of variants in PHP:
echo date("Ymd");
// or
echo date(DATE_RFC2822);
// or
echo date("Y-m-d H:i:s"); // very prevalent in codebases I've used
echo date("c"); // my favorite :), e.g. 2004-02-12T15:19:21+00:00
Use a standard. ISO 8601 is my personal preference. Using a standard is generally safest, as most languages have a toolset that can parse and manipulate dates/times from a standardized string. Ideally, do date/time I/O in the same string format on every transfer to make your interfaces predictable!
A consideration that must not be overlooked is whether or not the timestamp serializer normalizes to UTC. In the server example directly above, we used date("c"), which does not normalize to UTC: the local offset is preserved. In the client example, we advised against using myDate.toISOString(), precisely because .toISOString() normalizes to UTC. Again, all of the above variations are ISO 8601 compliant, but .toISOString() drops the user's +TZ data.
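A quick sketch of that loss: construct a Date from an offset-bearing string and serialize it back with .toISOString():

```javascript
// .toISOString() always normalizes to UTC (the trailing "Z"),
// so the original -06:00 offset never reaches the wire.
var d = new Date('2004-02-12T15:19:21-06:00');
console.log(d.toISOString()); // "2004-02-12T21:19:21.000Z" -- offset is gone
```

The instant is preserved, but the fact that the user was six hours behind UTC is not.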
It can be OK for servers to send outbound timestamps normalized to UTC time if:
Those are tough bullets to gamble on. You may not know how your app or ecosystem will change over time. In a distributed server model, where server activity also needs to be tracked against other servers, UTC normalization may have bad consequences! Don't normalize to UTC if you have rich TZ data to begin with and there is any possibility that you will want to maintain client locale time in some part of your app!
It's easy to drop critical time data. It's also very easy to maintain good timestamp data integrity. When possible,
These tips will help yield a healthy app and good time integrity. It's a bland topic--thanks for reading!