ram_rar
4 months ago
I’ve spent a decent chunk of my career wrestling with time sync — NTP/PTP, GPS, timezones, all that fun stuff. For real world network time infrastructure, where do we actually hit diminishing returns with clock precision? Like, at what point does making clocks more precise stop helping in practice?
Asking partly out of curiosity: I've been toying with future pet project ideas around portable atomic clocks, just to skip some of the headaches of distributed time sync altogether. Curious how folks who've worked on GPS or timing networks think about this.
nomel
4 months ago
For network stuff, high-security and test/measurement networked systems use the precision time protocol [1], which adds hardware timestamps as packets exit the interface. This can resolve down to a couple of nanoseconds for 10G [2], and can get down to picoseconds. The "Grandmaster" clock uses GPS/atomic clocks.
For test and measurement, it's used for more boring synchronization of processes and the like. For high security, with minimal-length/tight cable runs, you can detect changes in cable length and latency added by MITM equipment, and sync all the security stuff in your network.
[1] https://en.wikipedia.org/wiki/Precision_Time_Protocol
[2] https://www.arista.com/assets/data/pdf/Whitepapers/Absolute-...
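To make the timestamp exchange concrete, here's a sketch of the offset/delay math PTP does with the four hardware timestamps (Sync out/in, Delay_Req out/in). Illustrative only, not an IEEE 1588 implementation; the numbers are made up:

```python
# PTP-style offset calculation from four timestamps (nanoseconds):
#   t1: Sync sent by master (hardware timestamp on egress)
#   t2: Sync received by slave
#   t3: Delay_Req sent by slave
#   t4: Delay_Req received by master
def ptp_offset_and_delay(t1, t2, t3, t4):
    mean_path_delay = ((t2 - t1) + (t4 - t3)) / 2
    offset = ((t2 - t1) - (t4 - t3)) / 2  # slave clock minus master clock
    return offset, mean_path_delay

# Example: slave clock runs 500 ns ahead, one-way path delay is 800 ns.
t1 = 1_000_000
t2 = t1 + 800 + 500    # path delay + slave offset (slave timebase)
t3 = t2 + 10_000       # slave replies 10 us later
t4 = t3 - 500 + 800    # back to master timebase, plus path delay
offset, delay = ptp_offset_and_delay(t1, t2, t3, t4)
print(offset, delay)   # 500.0 800.0
```

Hardware timestamping matters precisely because it makes t1..t4 refer to the wire, not to whenever the OS got around to the packet.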
jcelerier
4 months ago
And that precision is really important. For instance, when working with networked audio, which usually has a packet interval somewhere between 100 µs and 10 ms (abysmally slow in computer time), non-PTP network cards are basically unusable.
pyuser583
4 months ago
My understanding is that precise measurement of time is the basis of all other measurements: space, mass, etc. They are all defined by some unit of time. So increasing time precision increases potential precision in other measurements.
Including, of course, information, which is often defined by the presence or absence of some alterable state within a specific time window.
We invent new uses for things once we have them.
A fun thought experiment would be what the world would look like if all clocks were perfectly in sync. I think I'll spend the rest of the day coming up with imaginary applications.
MengerSponge
4 months ago
> were perfectly in sync
They couldn't stay synced. There's a measurable frequency shift from a few cm of height difference after all. Making a pair of clocks that are always perfectly in sync with each other is a major step towards Le Guin's ansible!
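For scale, the weak-field gravitational shift between two clocks at slightly different heights is df/f ≈ g·Δh/c², which for a centimeter is already within reach of the best optical clocks:

```python
# Fractional gravitational frequency shift between two clocks separated
# by height dh (weak-field approximation): df/f ~ g * dh / c^2.
g = 9.81          # m/s^2
c = 299_792_458   # m/s
dh = 0.01         # 1 cm height difference

shift = g * dh / c**2
print(f"{shift:.2e}")  # ~1.1e-18, comparable to today's best optical clocks
```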
For other readers' info, clock stability is crucial for long-term precision measurements, with a "goodness" measured by a system's Allan variance: https://en.wikipedia.org/wiki/Allan_variance
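A minimal sketch of the (non-overlapping) Allan variance, for readers who want to see the formula in code. The synthetic white-frequency-noise data is just to show the characteristic 1/τ falloff:

```python
import numpy as np

def allan_variance(y, m):
    """Non-overlapping Allan variance of fractional-frequency samples y,
    for averaging factor m (tau = m * tau0)."""
    y = np.asarray(y, dtype=float)
    n = len(y) // m
    # Average the data in blocks of length m ...
    ybar = y[: n * m].reshape(n, m).mean(axis=1)
    # ... then take half the mean squared difference of adjacent averages.
    return 0.5 * np.mean(np.diff(ybar) ** 2)

# White frequency noise: Allan variance falls roughly as 1/tau.
rng = np.random.default_rng(0)
y = rng.normal(0, 1e-12, 100_000)
a1 = allan_variance(y, 1)
a10 = allan_variance(y, 10)
print(a1 / a10)  # roughly 10 for white frequency noise
```

Real clock data mixes several noise types, each with its own slope on the Allan deviation plot, which is exactly why the statistic is useful.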
fsh
4 months ago
This is true, but atomic clocks are about a million times more accurate than any other measurement device. For all practical purposes, they are never the limiting factor.
DannyBee
4 months ago
I use fairly precise time but that's because I control high speed machinery remotely. The synchronization is the important part (the actual time doesn't matter). At 1200 inches per minute, being a millisecond off will put a noticeable notch in a piece.
PTP and careful hardware configuration keep things synced to within nanoseconds.
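To put numbers on it, using the feed rate above:

```python
# How far a timing error moves the cut at a given feed rate.
feed_in_per_min = 1200
feed_in_per_sec = feed_in_per_min / 60   # 20 in/s
error_for_1ms = feed_in_per_sec * 1e-3   # 0.02 in: a visible notch
error_for_1us = feed_in_per_sec * 1e-6   # 20 millionths of an inch
print(error_for_1ms, error_for_1us)
```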
toast0
4 months ago
For most applications, precise clock synchronization isn't really necessary. Timestamps may be used to order events, but what is important is that there is a deterministic order of events, not that the timestamps represent the actual order in which the events happened.
In such systems, NTP is inexpensive and sufficient. On networks where ntpd's assumptions hold (symmetric and consistent delays), sync within a millisecond is achievable without much work.
If you need better, PTP can get much better results. A local NTP server following GPS with a PPS signal can get slightly better results (but without PPS it might well be worse).
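That symmetric-delay assumption is the crux: half of any path asymmetry shows up directly as offset error. A quick sketch with made-up numbers:

```python
# NTP's offset estimate assumes symmetric network delay. With client
# timestamps t1 (request sent), t4 (reply received) and server
# timestamps t2, t3, the estimated offset is:
def ntp_offset(t1, t2, t3, t4):
    return ((t2 - t1) + (t3 - t4)) / 2

# If the path is asymmetric, half the asymmetry becomes offset error.
true_offset = 0.0               # client and server clocks agree exactly
d_up, d_down = 0.010, 0.002     # 10 ms up, 2 ms down (seconds)
t1 = 100.0
t2 = t1 + d_up + true_offset
t3 = t2 + 0.001                 # server processing time
t4 = t3 - true_offset + d_down
err = ntp_offset(t1, t2, t3, t4) - true_offset
print(err)  # (d_up - d_down) / 2, i.e. about 4 ms of pure error
```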
KeplerBoy
4 months ago
I guess very few systems have better absolute time than a few microseconds. Those systems are probably exclusively found in HFT and experimental physics.
This past week I tried synchronizing the time of an embedded Linux board with a GPS PPS signal via GPIO. Turns out the kernel interrupt handler already delays the edge by 20 µs compared to busy-polling the state of the pin. Things then get hard to measure at sub-microsecond scales.
westurner
4 months ago
From https://news.ycombinator.com/item?id=44054783 :
> Re: ntpd-rs and higher-resolution network time protocols {WhiteRabbit (CERN), SPTP (Meta)} and NTP NTS: https://news.ycombinator.com/item?id=40785484 :
>> "RFC 8915: Network Time Security for the Network Time Protocol" (2020)
KeplerBoy
4 months ago
Yes, I'm aware of some of these developments. Impressive stuff, just not the level of precision one achieves tinkering for a few days with a basic GNSS receiver.
stephen_g
4 months ago
If your board’s SoC has a general purpose timer (GPT), then you can often have it count cycles of a hardware clock and store the value on every interrupt pulse from a GPIO. I designed a GPS-disciplined oscillator like this: we had an ADC generate a tuning voltage for a 100 MHz OCXO (which was a reference clock for microwave converters), divided it down to 10 kHz, and fed it into the GPT along with the 1PPS from a GPS module. The control loop would adjust the tuning until we got 10,000 clock cycles for every pulse. This kind of synchronisation gets very accurate over a few minutes.
Even just triggering a GPT from a GPS PPS input and counting cycles of an internal clock, you could use the GPT to work out the error in the clock, and you only need to query it once a second.
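A toy simulation of the control loop described above: count cycles of the divided-down oscillator between 1PPS edges, and nudge the tuning until exactly 10,000 cycles land in each one-second window. The tuning slope, gain, and starting error are invented for illustration; real OCXO tuning slopes and DAC scaling differ:

```python
# Frequency-lock a simulated 10 kHz (divided-down) oscillator to 1PPS.
nominal_hz = 10_000.0
freq_error_hz = 2.0   # oscillator starts 2 Hz fast (made-up number)
tune = 0.0            # tuning word; here 1 unit pulls exactly 1 Hz
gain = 1.0            # integral gain (illustrative)

actual_hz = nominal_hz
for second in range(20):
    actual_hz = nominal_hz + freq_error_hz - tune
    cycles = round(actual_hz)      # cycles counted this PPS interval
    error = cycles - 10_000
    tune += gain * error           # steer toward 10,000 counts per pulse

print(round(actual_hz))  # settles at 10000
```

With integer cycle counts the loop can only resolve frequency error down to the count granularity per window; real designs average over many seconds to do better.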
stephen_g
4 months ago
Sorry, that should be “had a DAC generate the tuning voltage”, not ADC!
themafia
4 months ago
10 MHz reference oscillators that are GPS locked are quite common. They're very useful in RF contexts where they're quite easy to find.
KeplerBoy
4 months ago
Sure, I was specifically talking about computer system clocks. Also with an oscillator _absolute_ time offset doesn't matter, unless you want to synchronize the phase of distributed oscillators and then things quickly get non-trivial again.
themafia
4 months ago
> the phase of distributed oscillators
What do you imagine the clock in your computer is made out of?
DannyBee
4 months ago
The Intel Ethernet PPS input pin works much, much better for this. See how the Open Time Card mini folks do it. It's easy to get sub-microsecond even on cheap embedded hardware. Most Intel M.2 NICs expose it as well, for example.
IAmBroom
4 months ago
Since their precision is essential to measuring relativistic effects, I'm not sure we're near that limit.
For your precise question, it may already be there.
TrueDuality
4 months ago
Another commenter mentioned that this is needed for consistently ordering events, to which I'd add:
The consistent ordering of events is important when you're working with more than one system. An un-synchronized clock can handle this fine with a single system, it only matters when you're trying to reconcile events with another system.
This is also a scale problem: when you receive one event per second, a granularity of 1 second may very well be sufficient. If you need to deterministically order 10^9 events per second across systems consistently, you'll want better than nanosecond-level precision if you're relying on timestamps alone for that ordering.
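One common way to get a deterministic total order without requiring clocks to resolve every tie is to break ties on something like a node id and a per-node sequence number. A sketch (field names are made up):

```python
# Sort on (timestamp, node_id, sequence): two events with identical
# timestamps still order the same way on every machine.
events = [
    {"ts_ns": 1_700_000_000_000_000_500, "node": "b", "seq": 7, "op": "write x"},
    {"ts_ns": 1_700_000_000_000_000_500, "node": "a", "seq": 3, "op": "write y"},
    {"ts_ns": 1_700_000_000_000_000_100, "node": "c", "seq": 1, "op": "read x"},
]
ordered = sorted(events, key=lambda e: (e["ts_ns"], e["node"], e["seq"]))
print([e["op"] for e in ordered])  # ['read x', 'write y', 'write x']
```

The clock precision then only decides how often the tiebreaker (rather than real time) determines the order, not whether the order is consistent.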
cma
4 months ago
The Google Spanner paper has interesting stuff along these lines; it relies heavily on atomic clocks.
halestock
4 months ago
I know that Google's Spanner[0] uses atomic clocks to help with consistency.
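The key idea there is TrueTime: the clock API returns an uncertainty interval rather than a single instant, and a transaction's commit waits out that uncertainty so timestamp order matches real-time order. A rough sketch, with the epsilon value and helper names invented for illustration:

```python
import time

EPSILON = 0.004  # assumed clock uncertainty bound, e.g. ~4 ms

def tt_now():
    """Return an interval [earliest, latest] guaranteed (by assumption
    here) to contain true time, in the spirit of TrueTime's TT.now()."""
    t = time.time()
    return (t - EPSILON, t + EPSILON)

def commit_wait(ts):
    # Don't acknowledge the commit until ts is definitely in the past.
    while tt_now()[0] <= ts:
        time.sleep(EPSILON / 4)

start = time.time()
ts = tt_now()[1]      # assign the latest possible current time
commit_wait(ts)
waited = time.time() - start
print(waited >= 2 * EPSILON)  # True: we waited out the uncertainty
```

The smaller the clock uncertainty (hence the atomic clocks and GPS in every datacenter), the shorter the commit wait, which is where clock quality translates directly into throughput.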
gaze
4 months ago
It hit diminishing returns for most things long, long ago, but the physics is directly relevant to work in quantum computing and the study of gravity.
amy_petrik
4 months ago
> where do we actually hit diminishing returns with clock precision?
ah yes - that would be the Planck time, which can be derived from Planck's constant, the gravitational constant, and the speed of light