ram_rar
21 hours ago
I’ve spent a decent chunk of my career wrestling with time sync — NTP/PTP, GPS, timezones, all that fun stuff. For real world network time infrastructure, where do we actually hit diminishing returns with clock precision? Like, at what point does making clocks more precise stop helping in practice?
Asking partly out of curiosity: I've been toying with ideas for a future pet project around portable atomic clocks, just to skip some of the headaches of distributed time sync altogether. Curious how folks who’ve worked on GPS or timing networks think about this.
nomel
18 hours ago
For network stuff, high-security and test/measurement networked systems use the Precision Time Protocol [1], which adds hardware timestamps as the packets exit the interface. This can resolve down to a couple of nanoseconds for 10G [2], but can get down to picoseconds. The "Grandmaster" clock uses GPS/atomic clocks.
For test and measurement, it's used for more boring synchronization of processes/whatever. For high security, with minimal-length/tight cable runs, you can detect changes in cable length and latency added by MITM equipment, and sync all the security stuff in your network.
[1] https://en.wikipedia.org/wiki/Precision_Time_Protocol
[2] https://www.arista.com/assets/data/pdf/Whitepapers/Absolute-...
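If you want to poke at this on Linux without dedicated gear, hardware timestamping is exposed through the SO_TIMESTAMPING socket option. A rough sketch of reading hardware RX timestamps on a UDP socket follows; it assumes a NIC/driver that actually supports hardware timestamping, and normally you'd also enable timestamping on the interface first (SIOCSHWTSTAMP / hwstamp_ctl), which is omitted here.

    /* Rough sketch: request hardware RX timestamps on a UDP socket.
     * Assumes a NIC/driver that supports hardware timestamping and that
     * timestamping has been enabled on the interface (SIOCSHWTSTAMP /
     * hwstamp_ctl); otherwise you only get the software timestamps. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>
    #include <netinet/in.h>
    #include <linux/net_tstamp.h>   /* SOF_TIMESTAMPING_* flags */
    #include <linux/errqueue.h>     /* struct scm_timestamping */

    #ifndef SCM_TIMESTAMPING
    #define SCM_TIMESTAMPING SO_TIMESTAMPING
    #endif

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);

        /* Ask for raw hardware timestamps, with software ones as a fallback. */
        int flags = SOF_TIMESTAMPING_RX_HARDWARE | SOF_TIMESTAMPING_RAW_HARDWARE |
                    SOF_TIMESTAMPING_RX_SOFTWARE | SOF_TIMESTAMPING_SOFTWARE;
        setsockopt(fd, SOL_SOCKET, SO_TIMESTAMPING, &flags, sizeof(flags));

        struct sockaddr_in addr = { .sin_family = AF_INET,
                                    .sin_port = htons(319),  /* PTP event port; binding here needs privileges */
                                    .sin_addr.s_addr = INADDR_ANY };
        bind(fd, (struct sockaddr *)&addr, sizeof(addr));

        char pkt[1500], ctrl[512];
        struct iovec iov = { pkt, sizeof(pkt) };
        struct msghdr msg = { .msg_iov = &iov, .msg_iovlen = 1,
                              .msg_control = ctrl, .msg_controllen = sizeof(ctrl) };
        if (recvmsg(fd, &msg, 0) < 0)
            return 1;

        /* Timestamps arrive as ancillary data: ts[0] is software,
         * ts[2] is raw hardware (see Documentation/networking/timestamping). */
        for (struct cmsghdr *c = CMSG_FIRSTHDR(&msg); c; c = CMSG_NXTHDR(&msg, c)) {
            if (c->cmsg_level == SOL_SOCKET && c->cmsg_type == SCM_TIMESTAMPING) {
                struct scm_timestamping ts;
                memcpy(&ts, CMSG_DATA(c), sizeof(ts));
                printf("hw %lld.%09ld  sw %lld.%09ld\n",
                       (long long)ts.ts[2].tv_sec, ts.ts[2].tv_nsec,
                       (long long)ts.ts[0].tv_sec, ts.ts[0].tv_nsec);
            }
        }
        return 0;
    }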
jcelerier
14 hours ago
And that precision is really important. For instance, when working with networked audio, where the packet interval is typically between 100 us and 10 ms (so abysmally slow in computer time), non-PTP network cards are basically unusable.
DannyBee
10 hours ago
I use fairly precise time, but that's because I control high-speed machinery remotely. The synchronization is the important part (the actual time doesn't matter). At 1200 inches per minute (20 inches per second), being a millisecond off is about 0.02 inches of travel, roughly half a millimeter, which puts a noticeable notch in a piece.
PTP and careful hardware configuration keep things synced to within nanoseconds.
pyuser583
18 hours ago
My understanding is that precise measurement of time is the basis of all other measurements: length, mass, etc. are all defined in terms of the second (plus fixed physical constants), so increasing time precision increases potential precision in other measurements.
Including, of course, information, which is often defined by the presence or absence of some alterable state within a specific window of time.
We invent new uses for things once we have them.
A fun thought experiment would be what the world would look like if all clocks were perfectly in sync. I think I'll spend the rest of the day coming up with imaginary applications.
fsh
6 hours ago
This is true, but atomic clocks are about a million times more accurate than any other measurement device. For all practical purposes, they are never the limiting factor.
MengerSponge
16 hours ago
> were perfectly in sync
They couldn't stay synced. There's a measurable frequency shift from a few cm of height difference after all. Making a pair of clocks that are always perfectly in sync with each other is a major step towards Le Guin's ansible!
For other readers' info, clock stability is crucial for long-term precision measurements, with a "goodness" measured by a system's Allan variance: https://en.wikipedia.org/wiki/Allan_variance
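(For scale, the gravitational shift is df/f = g*dh/c^2, which works out to roughly 1 part in 10^18 per centimeter of height near the Earth's surface.)

Roughly, the (non-overlapping) Allan variance at averaging time tau is half the mean squared difference of consecutive fractional-frequency averages over tau. A toy sketch of the calculation, not a metrology-grade implementation:

    /* Toy non-overlapping Allan deviation from fractional-frequency samples
     * y[0..n-1], each averaged over tau0 seconds. Not metrology-grade. */
    #include <math.h>
    #include <stdio.h>

    /* sigma_y(m * tau0): average m adjacent samples, take half the mean
     * squared difference of successive averages, then a square root. */
    double allan_dev(const double *y, int n, int m)
    {
        int groups = n / m;
        if (groups < 2)
            return 0.0;

        double sum = 0.0, prev = 0.0;
        for (int g = 0; g < groups; g++) {
            double avg = 0.0;
            for (int i = 0; i < m; i++)
                avg += y[g * m + i];
            avg /= m;
            if (g > 0) {
                double d = avg - prev;
                sum += d * d;
            }
            prev = avg;
        }
        return sqrt(sum / (2.0 * (groups - 1)));
    }

    int main(void)
    {
        /* Example: white frequency noise, where sigma_y should fall ~1/sqrt(tau). */
        double y[1024];
        unsigned s = 1;
        for (int i = 0; i < 1024; i++) {
            s = s * 1103515245u + 12345u;                    /* crude LCG */
            y[i] = (((s >> 16) & 0x7fff) / 32768.0 - 0.5) * 1e-9;
        }
        for (int m = 1; m <= 256; m *= 4)
            printf("tau = %3d * tau0   sigma_y = %.3e\n", m, allan_dev(y, 1024, m));
        return 0;
    }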
toast0
19 hours ago
For most applications, precise clock synchronization isn't really necessary. Timestamps may be used to order events, but what is important is that there is a deterministic order of events, not that the timestamps represent the actual order in which the events happened.
In such systems, NTP is inexpensive and sufficient. On networks where ntpd's assumptions hold (symmetric and consistent delays), sync within a millisecond is achievable without much work.
If you need better, PTP can get much better results. A local NTP server disciplined by GPS with a PPS signal can get slightly better results (but without PPS it might well be worse).
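To make "deterministic order, not true order" concrete: a common trick is to break timestamp ties with a stable node ID and a per-node sequence number, so every machine sorts events the same way even when clocks disagree slightly. A minimal sketch (the field names are made up):

    /* Sketch: a total order over events that is deterministic across machines
     * even when timestamps collide or clocks are slightly off. Field names
     * are made up for illustration. */
    #include <stdint.h>
    #include <stdlib.h>

    struct event {
        uint64_t ts_ns;    /* wall-clock timestamp from an NTP-synced clock */
        uint32_t node_id;  /* stable ID of the machine that produced it     */
        uint64_t seq;      /* per-node monotonically increasing counter     */
    };

    /* Compare by (timestamp, node, sequence). The result is the same on every
     * machine, even if the timestamps don't reflect true physical order. */
    static int event_cmp(const void *a, const void *b)
    {
        const struct event *x = a, *y = b;
        if (x->ts_ns   != y->ts_ns)   return x->ts_ns   < y->ts_ns   ? -1 : 1;
        if (x->node_id != y->node_id) return x->node_id < y->node_id ? -1 : 1;
        if (x->seq     != y->seq)     return x->seq     < y->seq     ? -1 : 1;
        return 0;
    }

    void order_events(struct event *ev, size_t n)
    {
        qsort(ev, n, sizeof(*ev), event_cmp);
    }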
KeplerBoy
20 hours ago
I guess very few systems have better absolute time than a few microseconds. Those systems are probably exclusively found in HFT and experimental physics.
This past week I tried synchronizing the time of an embedded Linux board with a GPS PPS signal via GPIO. It turns out the kernel interrupt handler already delays the edge by 20 us compared to busy-polling the state of the pin. Things then get hard to measure at sub-microsecond scales.
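Roughly the kind of busy-poll loop I mean (libgpiod v1 sketch; the chip path and line offset are placeholders, and it pins a CPU core at 100%):

    /* Busy-poll a GPIO line and timestamp the rising edge in user space.
     * Sketch only (libgpiod v1): chip path and line offset are placeholders. */
    #include <stdio.h>
    #include <time.h>
    #include <gpiod.h>

    int main(void)
    {
        struct gpiod_chip *chip = gpiod_chip_open("/dev/gpiochip0");  /* placeholder */
        if (!chip)
            return 1;
        struct gpiod_line *line = gpiod_chip_get_line(chip, 17);      /* placeholder */
        if (!line || gpiod_line_request_input(line, "pps-poll") < 0)
            return 1;

        int prev = gpiod_line_get_value(line);
        for (;;) {
            int cur = gpiod_line_get_value(line);
            if (cur == 1 && prev == 0) {
                /* Rising edge: grab the raw monotonic clock immediately. */
                struct timespec ts;
                clock_gettime(CLOCK_MONOTONIC_RAW, &ts);
                printf("PPS edge at %lld.%09ld\n", (long long)ts.tv_sec, ts.tv_nsec);
            }
            prev = cur;
        }
    }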
stephen_g
6 hours ago
If your board’s SoC has a general-purpose timer (GPT), then you can often have it count cycles of a hardware clock and latch the value on every pulse from a GPIO. I designed a GPS-disciplined oscillator like this: we had an ADC generate a tuning voltage for a 100 MHz OCXO (which was a reference clock for microwave converters), divided the OCXO down to 10 kHz and fed that into the GPT along with the 1PPS from a GPS module, and the control loop would adjust the voltage until we got exactly 10K clock cycles for every pulse. This kind of synchronisation gets very accurate over a few minutes.
Even just triggering a GPT capture from a GPS PPS input while counting cycles of an internal clock, you can work out the error in that clock, and you only need to query it once a second.
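The control loop itself doesn't need to be fancy. A skeleton of the idea, where the counter-read and DAC-write functions are placeholders for whatever the SoC timer and DAC actually provide, and the gains are made up:

    /* Skeleton of the kind of GPSDO control loop described above. Each 1PPS
     * capture gives the number of divided-down OCXO cycles since the previous
     * pulse; the target is exactly 10000 (10 kHz * 1 s). The two hardware
     * accessors are placeholders. */
    #include <stdint.h>

    extern uint32_t gpt_read_capture(void);   /* placeholder: cycles between PPS edges */
    extern void     dac_write(uint16_t code); /* placeholder: sets OCXO tuning voltage */

    #define TARGET_CYCLES 10000  /* 10 kHz reference over one second */

    void gpsdo_loop(void)
    {
        uint16_t dac = 0x8000;   /* start mid-scale */
        int32_t  integral = 0;

        for (;;) {
            /* Blocks (or polls) until the next 1PPS capture is available. */
            int32_t err = (int32_t)gpt_read_capture() - TARGET_CYCLES;

            /* Simple PI controller; gains are made up and would need tuning for
             * the real OCXO's sensitivity (Hz per volt / per DAC code). Assumes
             * raising the tuning voltage raises the frequency; flip the sign
             * otherwise. */
            integral += err;
            int32_t adj = 64 * err + integral;

            int32_t next = (int32_t)dac - adj;
            if (next < 0)      next = 0;
            if (next > 0xFFFF) next = 0xFFFF;
            dac = (uint16_t)next;

            dac_write(dac);
        }
    }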
stephen_g
2 hours ago
Sorry, that should be “had a DAC generate the tuning voltage”, not ADC!
DannyBee
7 hours ago
The Intel Ethernet PPS input pin works much, much better for this. See how the Open Time Card mini folks do it. It's easy to get sub-microsecond even on cheap embedded hardware. Many Intel M.2 NICs expose it as well, for example.
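On Linux those NIC PPS inputs show up as external timestamps on the card's PTP hardware clock, so no GPIO interrupt is involved at all. A rough sketch of reading them, similar to what the kernel's testptp example does (the /dev/ptp0 path and channel index depend on the card):

    /* Sketch: read external (PPS-input) timestamps from a NIC's PTP hardware
     * clock. Device path and channel index are placeholders that depend on
     * the card and how its SDP pins are wired. */
    #include <stdio.h>
    #include <fcntl.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/ptp_clock.h>

    int main(void)
    {
        int fd = open("/dev/ptp0", O_RDWR);   /* PHC of the NIC (placeholder) */
        if (fd < 0)
            return 1;

        /* Enable external timestamping on channel 0, rising edges. */
        struct ptp_extts_request req;
        memset(&req, 0, sizeof(req));
        req.index = 0;
        req.flags = PTP_ENABLE_FEATURE | PTP_RISING_EDGE;
        if (ioctl(fd, PTP_EXTTS_REQUEST, &req) < 0)
            return 1;

        /* Each PPS edge is timestamped by the NIC against its own PHC. */
        for (;;) {
            struct ptp_extts_event ev;
            if (read(fd, &ev, sizeof(ev)) != sizeof(ev))
                break;
            printf("PPS on channel %u at %lld.%09u (PHC time)\n",
                   ev.index, (long long)ev.t.sec, ev.t.nsec);
        }
        return 0;
    }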
themafia
16 hours ago
10 MHz reference oscillators that are GPS-locked are quite common. They're very useful in RF contexts, where they're also quite easy to find.
KeplerBoy
6 hours ago
Sure, I was specifically talking about computer system clocks. Also, with an oscillator the _absolute_ time offset doesn't matter, unless you want to synchronize the phase of distributed oscillators, and then things quickly get non-trivial again.
westurner
20 hours ago
From https://news.ycombinator.com/item?id=44054783 :
> Re: ntpd-rs and higher-resolution network time protocols {WhiteRabbit (CERN), SPTP (Meta)} and NTP NTS: https://news.ycombinator.com/item?id=40785484 :
>> "RFC 8915: Network Time Security for the Network Time Protocol" (2020)
KeplerBoy
19 hours ago
Yes, I'm aware of some of these developments. Impressive stuff, just not the level of precision one achieves tinkering for a few days with a basic GNSS receiver.
TrueDuality
18 hours ago
Another commenter mentioned that this is needed for consistently ordering events, to which I'd add:
The consistent ordering of events is important when you're working with more than one system. An unsynchronized clock can handle this fine on a single system; it only matters when you're trying to reconcile events with another system.
This is also a scale problem: when you receive one event per second, a granularity of 1 second may well be sufficient. If you need to deterministically order 10^9 events per second across systems, you'll want better than nanosecond-level precision if you're relying on timestamps alone for that ordering.
cma
15 hours ago
The Google Spanner paper has interesting stuff along these lines; it relied heavily on atomic clocks.
IAmBroom
21 hours ago
Since their precision is essential to measuring relativistic effects, I'm not sure we're near that limit in general.
For the specific question you asked about network time, we may already be there.
halestock
20 hours ago
I know that Google's Spanner[0] uses atomic clocks to help with consistency.
gaze
20 hours ago
It hit diminishing returns for most things long, long ago, but the physics is directly related to work in quantum computing and in measuring gravity.
amy_petrik
2 hours ago
> where do we actually hit diminishing returns with clock precision?
ah yes - that would be the Planck time, which can be derived from Planck's constant, the gravitational constant, and the speed of light