The Leap Second of 2005
The second following 2005-12-31T23:59:59+0000 will be a leap second, and will have the ISO 8601 designation 2005-12-31T23:59:60+0000. In the Universal Time Zone (the one whose offset from the UTC timescale is zero seconds), the final two seconds of 2005 and the first two seconds of 2006 will be designated as follows:
                          Seconds since 1970-01-01T00:00:00
Universal Time            Unix/POSIX Clock      UTC Timescale
2005-12-31T23:59:59       1,136,073,599         1,136,073,621
2005-12-31T23:59:60       1,136,073,599         1,136,073,622
2006-01-01T00:00:00       1,136,073,600         1,136,073,623
2006-01-01T00:00:01       1,136,073,601         1,136,073,624
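To make the two right-hand columns concrete, here is a minimal sketch in Python (the names and the idea of a hardcoded table are this sketch's own; it is not Chronos code). It derives the UTC-timescale count from the Unix/POSIX count using the list of leap seconds inserted through the end of 2005, where each entry is the POSIX timestamp of the first second of the day following a leap second. The leap second itself has no distinct POSIX representation, which is why 1,136,073,622 cannot be recovered from the POSIX clock alone.

    # Sketch: convert a POSIX timestamp (86400-second days, leap seconds ignored)
    # into a count of elapsed seconds on the UTC timescale since
    # 1970-01-01T00:00:00+0000. Each entry below is the POSIX timestamp of the
    # first second of the day that follows a (positive) leap second.
    LEAP_SECOND_STEPS = [
        78796800,    # after 1972-06-30
        94694400,    # after 1972-12-31
        126230400,   # after 1973-12-31
        157766400,   # after 1974-12-31
        189302400,   # after 1975-12-31
        220924800,   # after 1976-12-31
        252460800,   # after 1977-12-31
        283996800,   # after 1978-12-31
        315532800,   # after 1979-12-31
        362793600,   # after 1981-06-30
        394329600,   # after 1982-06-30
        425865600,   # after 1983-06-30
        489024000,   # after 1985-06-30
        567993600,   # after 1987-12-31
        631152000,   # after 1989-12-31
        662688000,   # after 1990-12-31
        709948800,   # after 1992-06-30
        741484800,   # after 1993-06-30
        773020800,   # after 1994-06-30
        820454400,   # after 1995-12-31
        867715200,   # after 1997-06-30
        915148800,   # after 1998-12-31
        1136073600,  # after 2005-12-31 (the leap second discussed here)
    ]

    def utc_timescale_seconds(posix_seconds):
        """Elapsed UTC-timescale seconds since 1970, counting inserted leap seconds."""
        inserted = sum(1 for step in LEAP_SECOND_STEPS if step <= posix_seconds)
        return posix_seconds + inserted

    print(utc_timescale_seconds(1136073599))  # 1136073621 (2005-12-31T23:59:59)
    print(utc_timescale_seconds(1136073600))  # 1136073623 (2006-01-01T00:00:00)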
Firstly, note that leap seconds always occur as the final second of the day in the Universal Time Zone, and therefore occur at some other time of day in any time zone whose offset from UT is not zero. The effect of a leap second is to make the minute that contains it 61 seconds long, the hour that contains it 3601 seconds long, the day that contains it 86401 seconds long, and the year that contains it 31,536,001 seconds long (31,622,401 seconds in a leap year), assuming there is only one leap second during those periods and that it is a positive one. Leap seconds can theoretically be negative, although none has yet occurred.
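As a trivial worked example of that arithmetic (plain Python, nothing Chronos-specific):

    # Length, in seconds, of a calendar unit that contains exactly one
    # positive leap second.
    NOMINAL_LENGTHS = {"minute": 60, "hour": 3600, "day": 86400,
                       "year": 365 * 86400, "leap year": 366 * 86400}
    for unit, nominal in NOMINAL_LENGTHS.items():
        print(unit, nominal + 1)
    # minute 61, hour 3601, day 86401, year 31536001, leap year 31622401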
Where I live in California, the leap second of 2005 will occur as the final second of the final minute before 4pm on Saturday, 31 December 2005 (2005-12-31T15:59:60-0800).
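A small sketch of that offset arithmetic, for whole-hour offsets only (the helper name is hypothetical; Python's datetime cannot represent second 60, so the sketch shifts 23:59:59 into the local zone and re-attaches the ":60" field by hand):

    from datetime import datetime, timedelta

    def local_leap_second_designation(offset_hours):
        # Shift the last representable UTC second of 2005 into the local zone,
        # then restore the seconds field to 60.
        shifted = datetime(2005, 12, 31, 23, 59, 59) + timedelta(hours=offset_hours)
        return shifted.strftime("%Y-%m-%dT%H:%M:60") + "{:+03d}00".format(offset_hours)

    print(local_leap_second_designation(-8))  # 2005-12-31T15:59:60-0800
    print(local_leap_second_designation(9))   # 2006-01-01T08:59:60+0900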
Secondly, note that the Unix/POSIX system clock does not agree with the count of seconds according to the UTC timescale. The same is true of the Windows and Mac OS X system clocks, of the VisualWorks system clock, and of the count of seconds returned by an RFC 868 time service (which answers with the count of seconds since 1900-01-01T00:00:00+0000). They all provide answers as though leap seconds didn't exist, although some of them will repeat 23:59:59 across a leap second, so that leap-second-naive algorithms that convert a count of seconds into a year-month-day-hour-minute-second designation still compute a correct result.
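For example, in Python (whose standard library performs exactly this kind of leap-second-naive conversion), the POSIX count 1,136,073,599 is what such a clock reports during both 23:59:59 and 23:59:60, and converting it still yields a valid designation; the ":60" second simply never appears:

    import time

    for posix_seconds in (1136073599, 1136073600):
        print(posix_seconds,
              time.strftime("%Y-%m-%dT%H:%M:%S+0000", time.gmtime(posix_seconds)))
    # 1136073599 2005-12-31T23:59:59+0000  (reported during 23:59:59 and again during 23:59:60)
    # 1136073600 2006-01-01T00:00:00+0000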
The reason most system clocks report time (as a count of seconds since an epoch) as though there were always 86400 seconds per day is that most date/time libraries make that assumption. And one reason most date/time libraries make that assumption (in addition to the complexity of the code required to handle leap seconds correctly) is that most system clocks report time that way. Specifically, the clock code in the OS kernel converts the date provided by the motherboard's real-time clock into a count of seconds using a naive 86400-seconds-per-day algorithm, and does not add in a leap second correction. It's a vicious circle.
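The actual kernel code varies from system to system, but the arithmetic amounts to what POSIX prescribes for "seconds since the Epoch": a day count times 86400 plus the time of day, with no leap-second correction anywhere. The sketch below (Python, not kernel code; the day-count helper is a standard civil-date-to-day-number calculation) shows that arithmetic:

    def days_from_civil(year, month, day):
        """Days from 1970-01-01 to the given proleptic-Gregorian date."""
        if month <= 2:          # treat Jan/Feb as the 13th/14th month of the prior year
            year -= 1
        era = year // 400
        yoe = year - era * 400                                          # [0, 399]
        doy = (153 * (month + (-3 if month > 2 else 9)) + 2) // 5 + day - 1
        doe = yoe * 365 + yoe // 4 - yoe // 100 + doy
        return era * 146097 + doe - 719468

    def naive_posix_seconds(year, month, day, hour, minute, second):
        # Every day is assumed to be exactly 86400 seconds long.
        return (days_from_civil(year, month, day) * 86400
                + hour * 3600 + minute * 60 + second)

    print(naive_posix_seconds(2005, 12, 31, 23, 59, 59))  # 1136073599
    print(naive_posix_seconds(2006, 1, 1, 0, 0, 0))       # 1136073600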
Of course, most users wouldn't know the difference, and wouldn't care if they did. That is one more reason the leap second "catch 22" persists to this day, more than 30 years after leap seconds were added to the UTC timescale. Another reason it persists is that most clocks don't keep time to anywhere near one-second-per-year accuracy, and a clock has to be even more accurate than that before leap seconds matter.
Nevertheless, clocks with sufficient accuracy for leap seconds to matter do exist, and will become rather common by the end of the next decade. And there are users and use cases where such accuracy not only matters, but is essential. It is for these reasons that I am working on adding full support for leap seconds into Chronos.
I would like to point out, however, that leap seconds are usually irrelevant in Chronos. The class that implements the "point in time" concept, called Timepoint, does not represent a point-in-time as a count of seconds since an epoch. Instead, it represents a point-in-time as a) a count of days since an epoch and/or a year/day-of-year designation (the same instance may use both), b) the number of seconds since the start of the day, and c) the number of nanoseconds since the start of the second. The day count, the seconds-within-the-day and the nanoseconds-within-the-second are all relative to Universal Time; the year/day-of-year designation is relative to the local time of the Timepoint instance. This internal representation scheme is one reason why Chronos is so performant relative to its competition.
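As a purely hypothetical illustration (Python, not Chronos source; the field names and the choice of the 1970-01-01 epoch are this sketch's assumptions, chosen to stay consistent with the table above), such a representation might look like this:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class PointInTime:
        day_number: int     # days since the (assumed) 1970-01-01 epoch, Universal Time
        second_of_day: int  # 0..86399 normally; 86400 only during a positive leap second
        nanosecond: int     # 0..999999999, since the start of the second

    # 2005-12-31 is day 13148 counting from 1970-01-01.
    leap_second_2005 = PointInTime(day_number=13148, second_of_day=86400, nanosecond=0)
    new_year_2006    = PointInTime(day_number=13149, second_of_day=0,     nanosecond=0)

Note that the representation of 2006-01-01T00:00:00 is the same whether or not a leap second preceded it, which is the sense in which leap seconds are usually irrelevant to such a scheme.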
So, given Timepoint's internal representation, leap seconds are relevant to the semantics of that representation only when a Timepoint designates a point-in-time that corresponds to the occurrence of a leap second, and not otherwise. Leap seconds also matter whenever one converts a Timepoint into a count of seconds since an epoch, or converts such a count into a Timepoint. But even then, the provider or consumer of the seconds-since-the-epoch count would have to be leap-second aware, which is usually not the case.
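Continuing the hypothetical sketch above (and reusing its PointInTime class together with the LEAP_SECOND_STEPS table from the first example; again, none of this is Chronos's actual API), the two kinds of conversion might look like this. Only the leap-second-aware one needs a leap second table at all; nanoseconds are ignored for brevity.

    def to_posix_style_seconds(p):
        # Leap-second-naive: every day is 86400 seconds, so the leap second
        # itself (second_of_day == 86400) collapses onto 23:59:59.
        return p.day_number * 86400 + min(p.second_of_day, 86399)

    def to_utc_timescale_seconds(p):
        # Leap-second-aware: add one second for every leap second that has
        # fully elapsed before the start of the designated second.
        nominal = p.day_number * 86400 + p.second_of_day
        during_leap = p.second_of_day == 86400
        elapsed = sum(1 for step in LEAP_SECOND_STEPS
                      if step < nominal or (step == nominal and not during_leap))
        return nominal + elapsed

    print(to_posix_style_seconds(leap_second_2005))    # 1136073599
    print(to_utc_timescale_seconds(leap_second_2005))  # 1136073622
    print(to_utc_timescale_seconds(new_year_2006))     # 1136073623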