It's About Time


Alright, it happened. After the one thousandth time I had a conversation about this topic with other engineers, I’ve decided to write this piece. Time-related bugs are surprisingly common and can be incredibly costly, yet most developers don’t have a solid grasp of how time actually works in software systems. So here’s what everyone should know about time, and also something I can refer people to when I inevitably have this conversation again.

Time is Relative

This is the very first thing you should know about time. Time is relative. What does this mean? It means that what we call time is a construct, a system that tells us when something happened or will happen.

There are many such systems, and in plain English we call them calendars. For instance, in the Roman world, people would tell time by saying “3 days before the Ides of March”, as was proper in the Julian Calendar. Much later, in 1582, the Gregorian Calendar (the one we use today) was introduced and gradually adopted. But there are many other calendars out there. The Islamic world uses the Hijri Calendar, the Chinese use the Chinese Calendar, the Jewish people use the Hebrew Calendar, and so on.

However, all calendars have one thing in common: a specific event in time that determines where the year, month, and day count starts. For the Julian Calendar, that event was the founding of Rome. For the Gregorian Calendar, it is traditionally the birth of Jesus Christ—what we call Anno Domini or AD. For the Islamic Calendar, it is the Hijra, the migration of Muhammad from Mecca to Medina. And so on.

All calendars have a point in time they refer to, and our way of telling time is relative to that point. In this way, we make something that’s essentially relative, a bit more absolute. It’s an implicit agreement that we all start counting from this event.

All Computer Time is Represented as an Instant

The system that computers use to tell time is also the Gregorian Calendar, but with a different and more precise reference point to start counting from, called Unix Time or the Unix Epoch: the 1st of January 1970 at 00:00:00 UTC. This is the point in time from which we start counting seconds. Why seconds? For simplicity and historical reasons—when Unix was created, second-level precision was sufficient for most purposes, and it keeps the numbers manageable. Modern systems can measure much smaller units (milliseconds, nanoseconds), but the Unix timestamp remains in seconds for compatibility.

The technical term that ISO 8601 uses for how computers represent time is the Instant. An instant is a specific point in time, represented as the number of seconds that have elapsed since the Unix Epoch (negative for points before the epoch), and it is always expressed in UTC. This means that an instant is a universal point in time: it is not affected by timezone changes or daylight saving time, and it serves as an absolute reference by which we can all agree on when something happened.
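To make this concrete, here is a tiny sketch using only PHP’s built-in date/time API, showing that an instant is just a count of seconds, and that a count of 0 lands exactly on the Unix Epoch:

```php
// time() returns the current instant: the number of seconds elapsed
// since the Unix Epoch, regardless of the server's configured timezone.
$instant = time();
echo $instant . "\n";

// An instant of 0 is the Unix Epoch itself: 1970-01-01 00:00:00 UTC.
// The '@' syntax builds a date object from a raw timestamp.
$epoch = new \DateTimeImmutable('@0');
echo $epoch->format('Y-m-d H:i:s') . "\n"; // 1970-01-01 00:00:00
```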

UTC is a Stable Timezone

Why is an Instant in UTC? Because UTC is a stable timezone. This means that UTC does not change. It doesn’t have daylight saving time, it doesn’t move depending on the season or the country. It is constant, and that’s why it’s a good reference point for telling time. Other timezones are not stable.

In Chile, for example, we have 3 timezones. Our main one is America/Santiago, which observes daylight saving time. This means its offset changes between UTC-3 and UTC-4 depending on the time of year.

We couldn’t use our timezone as a reference point for telling time because it changes, and tracking those changes is complicated: DST doesn’t start on the same date every year, and the rules sometimes change from one government to the next. By using UTC as a reference point, we avoid all of this complexity.

Timezones are a Representation Concern

Timezones are a representation of the underlying instant. Think of it like this: the number 129 can be represented as 129 in decimal or 0b10000001 in binary. They’re different representations of the same value. Similarly, a timezone is just a different way of representing the same instant in time as a human-friendly encoding of the count of seconds since the Unix Epoch.

For instance, this code will virtually never fail in PHP (the only way the first assertion can fail is if the two constructor calls happen to straddle a second boundary):

$chileTime = new \DateTimeImmutable('now', new \DateTimeZone('America/Santiago'));
$ukTime = new \DateTimeImmutable('now', new \DateTimeZone('Europe/London'));
\assert($chileTime->getTimestamp() === $ukTime->getTimestamp());
\assert($chileTime->format('Y-m-d H:i:s') !== $ukTime->format('Y-m-d H:i:s'));

Even though we are creating two different date objects, one for Chile and one for the UK, they will have the same Instant.

The time might be completely different. The UK one might say it is 10:00 PM, while the Chile one might say it is 5:00 PM. But they will both have the same number of seconds since the Unix Epoch. They represent the same point in time, just encoded in different timezones. It’s the same underlying value, just represented differently for human consumption.

This is why it’s always safe to store dates in UTC. Because it is a stable reference point, and because it is the same Instant, no matter what timezone you are in. UTC is like storing the number in its canonical form—you can always convert it to whatever representation you need later.
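As a sketch of that pattern, again using only the built-in DateTimeImmutable API (the dates here are made up for illustration), storing in UTC and converting on display looks like this:

```php
// An event happens at some local time in Chile.
$event = new \DateTimeImmutable('2026-01-15 17:30:00', new \DateTimeZone('America/Santiago'));

// Store its canonical UTC form (e.g. in a database column).
// In January, Chile is on DST (UTC-3), so 17:30 local is 20:30 UTC.
$stored = $event->setTimezone(new \DateTimeZone('UTC'))->format('Y-m-d H:i:s');
echo $stored . "\n"; // 2026-01-15 20:30:00

// Later, read it back and re-encode the same instant for a user in London
// (UTC+0 in January).
$fromDb = new \DateTimeImmutable($stored, new \DateTimeZone('UTC'));
echo $fromDb->setTimezone(new \DateTimeZone('Europe/London'))->format('Y-m-d H:i:s') . "\n"; // 2026-01-15 20:30:00
```

The stored string never carries an offset, so it stays unambiguous; converting to a user’s timezone happens only at display time.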

Named Timezones are Different from Timezone Offsets

When people talk about timezones, they usually conflate timezone offsets with what are called named timezones.

For instance, America/Santiago is a named timezone that can be at offset -3 or -4 depending on the time of year. The offset just tells you the difference between UTC and a given timezone at a particular point in time. This is crucial, because most of the time when you store a timezone, you want to store the named timezone, not the offset. The offset is not a stable reference point: it changes depending on when you computed it. The named timezone is much more reliable.

For instance, if you store the offset of America/Santiago in the summer, you will get -3, but in the winter you will get -4. But if you store the named timezone, you can get the local time for that Instant and any other instant derived from it, and you can know exactly what the offset is for each of them.
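You can see this directly in PHP: asking the same named timezone for its offset at two different instants gives two different answers (the dates below are arbitrary, chosen to land in the Chilean summer and winter):

```php
$santiago = new \DateTimeZone('America/Santiago');

// Mid-January: Chilean summer, DST in effect.
$summer = new \DateTimeImmutable('2026-01-15 12:00:00', $santiago);
// Mid-June: Chilean winter, standard time.
$winter = new \DateTimeImmutable('2026-06-15 12:00:00', $santiago);

// getOffset() returns the UTC offset in seconds for that instant.
echo $summer->getOffset() / 3600 . "\n"; // -3
echo $winter->getOffset() / 3600 . "\n"; // -4
```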

This is an important consideration. For instance, if you need to send an email at 9:00 AM local time to users in different timezones, you can’t just store a single UTC instant — you need to know their timezone to calculate when 9:00 AM is for them. Similarly, recurring events need to respect DST transitions, and date arithmetic across DST boundaries can be tricky. But for storing when something happened, UTC is your friend.
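As a minimal sketch of that scheduling problem (the user list and the chosen date are hypothetical), converting each user’s 9:00 AM into a UTC instant looks like this:

```php
// Hypothetical users with their stored *named* timezones.
$users = [
    'alice' => 'America/Santiago',
    'bob'   => 'Europe/London',
];

foreach ($users as $name => $tz) {
    // Build 9:00 AM on a given day in the user's own timezone...
    $local = new \DateTimeImmutable('2026-01-15 09:00:00', new \DateTimeZone($tz));
    // ...then convert to UTC to know which instant to schedule the job at.
    $utc = $local->setTimezone(new \DateTimeZone('UTC'));
    echo $name . ': ' . $utc->format('Y-m-d H:i:s') . " UTC\n";
}
// alice: 2026-01-15 12:00:00 UTC
// bob: 2026-01-15 09:00:00 UTC
```

Because the named timezone is stored, the same calculation stays correct across DST transitions: the tz database decides what offset applies on each particular day.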

Timezones Rely on Anonymous Heroes

The reason why you can get the offset at any point in time based on named timezones is because of the work of the IANA and the tz database. This database is a collection of files that define the rules for each timezone. They define when DST starts and ends, and what is the offset for each timezone at any given point in time. You can download it and inspect it yourself.

Without this database, we would not be able to have such a rich and complex system for telling time. We would be stuck with UTC and offsets, and that would be it. However, thanks to the work of many volunteers from different countries, we have an up-to-date list of rules for when offsets move in a given timezone. When a country changes its legislation or dates regarding DST, the volunteers update the database so that we can all have the correct time for our timezone. It is a testament to the power of open source and collaboration.

For instance, thanks to the IANA database, we know that Chile will be in DST from the 6th of September 2026 to the 3rd of April 2027, and even though these are future dates (as of this writing), our programming languages’ Date and Time APIs know this fact, and we can rely on it for our calculations.

Look at what happens when I run this PHP code:

$chileTime = new \DateTimeImmutable('2026-09-05 23:00:00', new \DateTimeZone('America/Santiago'));
echo $chileTime->format('Y-m-d H:i:s') . "\n"; // 2026-09-05 23:00:00
$chileTime = $chileTime->modify('+1hour');
echo $chileTime->format('Y-m-d H:i:s') . "\n"; // 2026-09-06 01:00:00
$chileTime = $chileTime->modify('+1hour');
echo $chileTime->format('Y-m-d H:i:s') . "\n"; // 2026-09-06 02:00:00

Do you notice how the displayed time jumped from 23:00 to 01:00 on the first modify? That’s because in Chile clocks will go forward one hour at midnight on the 6th of September, so the time jumps from 23:59:59 to 01:00:00. We added one hour to the instant, but the displayed time jumped two hours because of DST. And we need to give thanks to the IANA volunteers for telling us this fact so that we can all agree on when something happened.

Telling Time is Different from Measuring Time

The last thing I want to talk about is the difference between telling time and measuring time. Telling time is what we have discussed so far. It is about agreeing on when something happened. Measuring time is about how much time has elapsed between two events, and that’s not always as simple as it seems.

To understand this, you need to know there is something called clock drift. Clock drift is the difference between the time kept by a digital clock and the true time. This can be caused by many factors, but the most common one is that the clock is not perfectly accurate. For instance, a clock that is running fast will have a positive drift, and a clock that is running slow will have a negative drift.

At home I have a programmable cat feeder, and to be programmed it needs to know the current time. However, every couple of months or so I need to reset the time because it has drifted so much that it affects my cat’s meal schedule. This is because the clock on the machine is probably super cheap and therefore not perfectly accurate.

To solve this problem and have our computers agree to some extent as to what time it is, we use a protocol called NTP. NTP is a protocol that allows computers to synchronize their clocks over the internet. It works by having a set of servers that have very accurate clocks, and then other computers can ask these servers what time it is and adjust their own clocks accordingly.

However, this introduces a very subtle problem. When you are counting time (say, measuring how long it takes to process a request in a web application), you are not really measuring time, but rather the difference between two points in time as told by the computer clock. Between the first and the second reading, though, the clock might have moved forward or backwards—for example, if the NTP daemon decides to sync the clock in between the two time syscalls you are using. If that happens, your measurement will be off by the amount the clock was adjusted between those two calls.

This can also happen when leap seconds are added at the end of June or December. The clocks might be adjusted to account for this, and your measurement will be affected. A leap second making time appear to go backwards was the cause of a bug in Cloudflare’s DNS resolution at the start of 2017.

Now, this is usually not a massive problem. The clock drift is usually very small, and the effects of it are usually negligible. However, if you are measuring very short periods of time, or if you are doing it very often, you might want to take this into account. One way to do this is to use a clock that is not affected by NTP, such as a monotonic clock.

A monotonic clock is basically a clock that just increments based on the hardware. It’s not designed to tell time: it’s designed to measure it. Unlike wall clocks, monotonic clocks are guaranteed to always move forward: they are never stepped backwards by NTP adjustments or leap seconds.

When should you use a monotonic clock? Anytime you’re measuring elapsed time rather than recording when something happened. This includes:

  • Benchmarking and performance measurements
  • Request timeouts and rate limiting
  • Measuring how long an operation takes
  • Any scenario where you need to know “how much time passed” rather than “what time is it”

In PHP you have access to monotonic time via the hrtime function (which returns nanoseconds), and it should be the preferred way to measure elapsed time in your applications. Here’s a quick example:

$start = hrtime(true); // Get monotonic time in nanoseconds
// ... do some work ...
$end = hrtime(true);
$elapsed = ($end - $start) / 1e9; // Convert to seconds
echo "Operation took {$elapsed} seconds\n";

The key thing to remember: use wall clock time (like time() or new DateTime()) to record when something happened, but use monotonic time (like hrtime()) to measure how long something took. They’re different tools for different jobs.

Wrapping Up

So there you have it. These concepts might seem abstract, but they have very real implications. Store dates in UTC. Use named timezones when you need to display or calculate local times. Use monotonic clocks for performance measurements. And when in doubt, remember that time is just a construct we all agreed upon—and sometimes that agreement is more complicated than it seems.

I honestly hope not to have to write this article again.