On 12/23/2011 11:27 PM, Jay Daley wrote:
>> The reality is that insertion of a positive leap second involves back-stepping, stopping or slewing the clock. That means that for the duration of the adjustment, at least one "second" is not actually a second long. It's a small thing for most protocols (most of us have learned to live with time discontinuities, as they happen due to loss and subsequent regain of time synchronisation far more often than due to leap seconds ... has anybody else fixed ntpd and suddenly had their screen-saver kick in?), but the fact remains that leap seconds are handled in a way that can only be described as introducing an error.
> NTP is constantly correcting the clock. Leap seconds are treated like any other large correction. Admittedly this does take a while, but as explained above, this is only a problem if second or sub-second resolution is needed.
You're conflating two separate functions provided by NTP: regulation and synchronisation. Every externally referenced time system needs a means of regulating it to the external reference. But synchronisation is usually a one-time thing, done at system startup, or as a remedial action when regulation has failed.

The important property of a regulated clock is that a second, millisecond or microsecond (whatever the resolution of the clock) should take very close to that measure of wall time to elapse, within a small and preferably well-defined margin. Any discontinuity outside those margins represents a failure of regulation. According to the NTPd documentation, "the time is slewed if the offset is less than the step threshold, which is 128 ms by default, and stepped if above the threshold." In other words, a change of a full second, such as would occur when adding a leap second, introduces a discontinuity unless the server is configured to slew the clock instead, and at the maximum slew rate of 0.5 ms per second a 1-second adjustment takes 2,000 seconds. That is an error in clock regulation and should be treated as such. Decent stratum-1 NTP servers don't otherwise do this to their clients, and I don't see why a handful of astronomers should demand that they do.

Of course, what ought to happen is for systems to have a regulated but unsynchronised time "ticker", available for delta-time and other non-absolute time calculations, initialised at system startup and, apart from rate regulation against the reference, left alone. An offset from the ticker's base to the calendar epoch would then be provided for absolute time calculations. I've worked with systems that did exactly this. But the standard Unix APIs do not separate calendar time from system time.

-- don
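P.S. For concreteness, here's the arithmetic above as a small C sketch. The figures come from the NTPd documentation quoted earlier (128 ms step threshold, 0.5 ms/s maximum slew rate); the macro names are mine, not ntpd's.

    #include <stdio.h>

    /* Figures from the NTPd documentation quoted above; the
     * names are mine, not ntpd's. */
    #define STEP_THRESHOLD  0.128   /* s; larger offsets are stepped */
    #define MAX_SLEW_RATE   0.0005  /* s/s; i.e. 0.5 ms per second   */

    int main(void)
    {
        double offset = 1.0;        /* a one-second leap correction */

        if (offset > STEP_THRESHOLD)
            printf("%.3f s offset exceeds the %.3f s step threshold: "
                   "the clock is stepped (a discontinuity)\n",
                   offset, STEP_THRESHOLD);

        /* If slewed instead, the correction takes offset/rate: */
        printf("slewing %.3f s at %.4f s/s takes %.0f s (~%.0f min)\n",
               offset, MAX_SLEW_RATE, offset / MAX_SLEW_RATE,
               offset / MAX_SLEW_RATE / 60.0);
        return 0;
    }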
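And here's a sketch of the ticker-plus-offset split, using the closest thing POSIX now offers: clock_gettime() with CLOCK_MONOTONIC, which is never stepped (though its rate is still trimmed), alongside CLOCK_REALTIME for calendar time. This only approximates the design I described, since the traditional time()/gettimeofday() interfaces still conflate the two:

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        struct timespec t0, t1, cal;

        /* Delta times come from the monotonic "ticker", which no
         * step (leap second or otherwise) can disturb. */
        clock_gettime(CLOCK_MONOTONIC, &t0);
        /* ... work whose duration we want to measure ... */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double elapsed = (t1.tv_sec - t0.tv_sec)
                       + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        printf("elapsed: %.9f s\n", elapsed);

        /* Absolute (calendar) time comes from the settable clock,
         * effectively the ticker plus a maintained offset. */
        clock_gettime(CLOCK_REALTIME, &cal);
        printf("calendar: %ld s since the Unix epoch\n",
               (long)cal.tv_sec);
        return 0;
    }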