A Brief History of Timekeeping: Part 2
Atomic beam, fountain, and optical clocks; the quest for clocks that lose less than a second in 30 billion years; and the everyday uses of atomic clocks.
For hundreds of years, finding longitude was such an intractable problem that a prize of £20,000 was offered in 1714 (equivalent to £3.7 million today) to anyone who could determine longitude to within 0.5°, or to an accuracy of 30 nautical miles (~55 kilometers).
The main problem that plagued the determination of longitude was keeping accurate time, especially at sea. The first timepiece accurate enough for maritime navigation was built in the mid-1700s by John Harrison, a master clockmaker who spent some 40 years of his life in pursuit of the Longitude prize.
Today, we can determine position anywhere on the globe to an accuracy of a few meters, which requires clocks with nanosecond-range accuracy or better. The driving force behind GPS is the atomic clock, which was invented only a lifetime ago. In this post, we will continue the story of timekeeping we started in Part 1, and focus on why atoms make great oscillators, how atomic clocks work, and what the future of timekeeping looks like.
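To see why meter-level positioning implies nanosecond-level timing, here's a quick back-of-the-envelope conversion (purely illustrative; real GPS receivers also solve for their own clock error along with position):

```python
# Ranging error caused by clock error: distance = speed of light x timing error.
C = 299_792_458  # speed of light in m/s

for ns in (1, 10, 100):
    print(f"{ns:>3} ns of timing error -> ~{C * ns * 1e-9:.1f} m of ranging error")
# 1 ns -> ~0.3 m, 10 ns -> ~3.0 m, 100 ns -> ~30.0 m
```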
We will look at modern ways to keep extremely accurate time:
The Atom as an Oscillator
Atomic Beam Clocks
Atomic Fountain Clocks
Optical Lattice Clocks
Practical Applications of Atomic Clocks
Read time: 11 mins
The Perfect Oscillator
Any source of oscillation, such as a pendulum or piezoelectric resonance in a quartz crystal, serves as the heart of a clock. A perfect oscillator is unaffected by temperature, pressure, or location, and is highly repeatable. In the previous post, we saw how the Shortt clock handled pendulum variation using a master-and-slave arrangement, and how piezoelectric resonance in quartz was not entirely consistent across different samples.
In the late nineteenth century, James Clerk Maxwell proposed that atoms like hydrogen or sodium would act as good oscillators since their essential atomic properties would remain constant as long as the element existed anywhere in the universe. Like most breakthrough ideas, this one went overlooked for decades.
Here's the core idea behind atomic clocks: according to quantum physics, an atom has discrete energy states, and an electromagnetic field can drive it from one state to another. When an atom drops from a higher to a lower energy state, it emits the energy difference as light. The resonant frequency of an atomic oscillator is the difference between the energy levels EH and EL divided by Planck's constant, h: f = (EH − EL) / h.
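To make the numbers concrete, here's a quick check of f = (EH − EL) / h for cesium (just back-of-the-envelope arithmetic using the exact SI constants):

```python
# Energy gap implied by f = (E_H - E_L) / h for the cesium-133 hyperfine transition.
h = 6.62607015e-34      # Planck's constant, J*s (exact SI value)
f_cs = 9_192_631_770    # cesium-133 hyperfine transition frequency, Hz (exact by definition)

delta_E = h * f_cs      # energy difference E_H - E_L, in joules
print(f"Energy gap: {delta_E:.3e} J")                     # ~6.09e-24 J
print(f"          = {delta_E / 1.602176634e-19:.1e} eV")  # ~3.8e-05 eV, a tiny gap
```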
One of the widely used atoms in atomic clocks is Cesium-133, and since 1967, it has been used to define the duration of a second as
1 second = the duration of 9,192,631,770 periods of radiation corresponding to the transition between two hyperfine levels of the ground state of the cesium 133 atom.
What this means is that if you drive a cesium atom with microwave radiation at the right frequency, a single electron in its outermost shell will flip back and forth between two energy levels 9,192,631,770 times per second. There are good reasons to use cesium:
The transition is highly likely only under external excitation; it is not spontaneous or random in nature.
This energy transition is relatively insensitive to electric fields.
Being a heavy atom, it moves relatively slowly through an applied field, which allows for longer observation periods.
Building an Atomic Beam Clock
The majority of the practical ideas that led to the development of atomic clocks were pioneered by Isidor Isaac Rabi and his Columbia University colleagues in the 1930s. Rabi received the Nobel Prize in 1944 for his work on nuclear magnetic resonance. While Rabi expected the first atomic clock to use cesium-133, the first operational clock, demonstrated in 1949, used the ammonia molecule's 23.8 GHz transition, yielding an accuracy of around 2 milliseconds per day (2 x 10E-8). Quartz clocks were two orders of magnitude more accurate at the time, so the ammonia clock was never used as a timekeeper.
In the early 1950s, researchers at the National Bureau of Standards (NBS), which became the National Institute of Standards and Technology (NIST) in 1988, constructed a cesium atomic clock based on Rabi's magnetic resonance principles. It consists of a cesium oven that heats the atoms into a gas. The heated atoms then pass through a magnetic field, which sorts them by energy state. A getter absorbs the high-energy atoms, while the low-energy atoms are sent into an interrogation cavity (also known as a Ramsey cavity), where they are excited by a microwave pulse at a precise frequency of 9,192,631,770 Hz.
As they leave the cavity, the excited, high-energy atoms are separated from the low-energy ones by a second magnetic field. When the high-energy atoms strike a detector, they produce a current proportional to their number. A feedback loop uses this detector current to steer the quartz oscillator's frequency, keeping it locked to the cesium resonance. The quartz oscillator, disciplined by the atom in this way, provides an extremely accurate time reference. If you prefer a visual explanation, this YouTube video covers it well.
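Here's a deliberately oversimplified toy model of that feedback loop (all numbers, the Gaussian line shape, and the servo scheme are illustrative assumptions, not NIST's actual implementation): probe slightly above and below the current frequency guess, compare how many atoms each probe excites, and nudge the quartz-derived microwave synthesizer toward the side that excites more.

```python
import math, random

F_CS = 9_192_631_770.0     # true cesium resonance in Hz
LINEWIDTH = 100.0          # assumed width of the resonance feature in Hz (illustrative)
GAIN = 0.1                 # servo gain (illustrative)

def detector_counts(f_probe):
    """Toy detector response: peaks when the microwave probe hits the resonance."""
    detuning = (f_probe - F_CS) / LINEWIDTH
    return math.exp(-detuning**2) + random.gauss(0, 0.001)  # small measurement noise

f_quartz = F_CS + 40.0     # start the quartz-derived synthesizer off-resonance
for _ in range(200):
    high = detector_counts(f_quartz + LINEWIDTH / 2)
    low = detector_counts(f_quartz - LINEWIDTH / 2)
    # Error signal: if the upper probe excites more atoms, steer the frequency up.
    f_quartz += GAIN * LINEWIDTH * (high - low)

print(f"Locked at {f_quartz:.1f} Hz (offset {f_quartz - F_CS:+.3f} Hz from resonance)")
```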
Because of budgetary setbacks in the NBS program, and because the clock had to be dismantled in Washington, DC and reassembled in Boulder, CO, the National Physical Laboratory (NPL) in England technically won the atomic clock race in 1955. The first American cesium standard, NBS-1, did not come online until 1958.
The NPL cesium clock achieved an accuracy of 0.1 milliseconds per day (1 x 10E-9) and was ultimately used to redefine the second in 1967. In the decades after NBS-1 went operational, each subsequent generation of atomic clock got more accurate: while NBS-1 had an accuracy of 1 x 10E-11, the NIST-7, introduced 35 years later, had an accuracy of 5 x 10E-15 (about 0.5 nanoseconds per day).
UTC, Astronomical, and Ephemeris Time
Older definitions of time based on astronomical observations were eventually superseded by atomic time, which now underpins Coordinated Universal Time (UTC). Mean solar time, often known as Greenwich Mean Time (GMT) or UT0, is the average time derived from the sun's apparent motion across the Greenwich meridian (longitude 0°). Once it was discovered that the earth's wobble skews this computation, a corrected version, UT1, was established.
The problem was that UTC and UT1 kept drifting apart, so the concept of leap seconds was introduced in 1972 to keep the two consistent: every now and then, an extra second is inserted into UTC to let UT1 catch up. In the last 50 years, 27 leap seconds have been added. As of August 2024, the difference between UTC and UT1 is about 46 milliseconds. Recently, however, the decision has been made to stop inserting leap seconds from 2035, and not to revisit the question for at least another 100 years after that.
Weirdly though, when atomic clocks came online, they were not compared to solar time. Instead they were compared to Ephemeris Time, whose second was defined in 1956 as 1/31,556,925.9747 of the tropical year 1900. This number was so awkward, and its choice based on the turn of the century so arbitrary, that it was never really useful to scientists; it served mainly as a benchmark for atomic clocks. It is strange that such a seemingly arbitrary standard was adopted after atomic clocks had already been invented.
Atomic Fountain Clocks
The interaction period in the Ramsey cavity, where the microwave pulses are applied, eventually became the limiting factor in atomic beam clocks like the ones built at NIST and NPL. Ideas to increase the interaction time had been circulating since the 1950s: Jerrold Zacharias, a physicist at MIT, proposed using a vertical cavity called an atomic fountain.
In theory, the concept was straightforward: gather a number of atoms, allow them to cool, and then launch them up a long cylinder. A microwave pulse is applied to the atoms as they ascend. About a second later, the atoms fall back down under gravity, like a ball thrown vertically into the air, and are exposed to a microwave pulse once more. In this manner, the observation time is significantly extended, which improves the clock's precision.
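A quick ballistic estimate (illustrative numbers only) shows why the fountain geometry buys so much observation time: tossing the atoms about a meter upward keeps them in free flight for roughly a second.

```python
# Time of flight for atoms launched to height h and falling back:
# t = 2 * sqrt(2 * h / g)   (simple projectile motion, no air resistance)
import math

g = 9.81  # m/s^2
for h in (0.25, 0.5, 1.0):  # launch heights in meters (illustrative)
    t = 2 * math.sqrt(2 * h / g)
    print(f"Launch height {h:.2f} m -> round-trip time ~{t:.2f} s")
# 0.25 m -> ~0.45 s, 0.5 m -> ~0.64 s, 1.0 m -> ~0.90 s
```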
The real secret to making cesium fountain clocks work was the cooling produced at the intersection of six laser beams. In just a few hundred milliseconds, the cesium atoms cool to fractions of a kelvin, giving up kinetic energy to laser beams tuned slightly below the atomic resonance frequency. In what is known as "optical molasses," around 10^8 cesium atoms gather together and are pushed upward into a tube. Compared to conventional atomic beam designs, the atoms move roughly 100 times more slowly, allowing for much longer observation periods.
NIST-F1, NIST's first atomic fountain clock, became the standard for atomic time in 1998, with an accuracy of 4 x 10E-16 — an order of magnitude better than NIST-7. The next version of the NIST fountain clock has an accuracy below 10E-16. The figure below shows the NIST-F4 cesium fountain clock, currently under evaluation by NIST as the next primary time standard.
Optical Lattice Clocks
The future of accurate timekeeping undoubtedly belongs to optical lattice clocks. Instead of relying on the microwave transition of cesium atoms, improved accuracy comes from moving to much higher, optical frequencies. The working concept of an optical lattice clock is to trap atoms in a web of light tuned to a very specific wavelength — aptly called the "magic wavelength" — that holds the atoms in place for extended interrogation without shifting their optical resonance.
Today, the most popular atoms for optical clocks are strontium and ytterbium. In a joint research effort between NIST and the University of Colorado, Boulder, researchers have demonstrated an extraordinary level of accuracy with strontium optical clocks — down to 8 x 10E-19 — meaning such a clock loses only about one second every 30 billion years, roughly twice the age of our universe. These optical clocks are 100 times more accurate than the best fountain clocks in existence today. If the second were to be redefined, it would undoubtedly be done on the basis of optical clocks. Redefining the time reference is not something researchers take lightly, and it could be quite some time before the second is redefined again.
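To get a feel for what a fractional accuracy like 8 x 10E-19 means, here is the simple conversion from fractional frequency error to "years per second of drift" (back-of-the-envelope arithmetic only; the example clock figures are illustrative, not formal uncertainty statements):

```python
# How long a clock with fractional frequency error `frac` takes to drift by one second.
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # ~3.16e7 s

for name, frac in [("fountain clock (~1e-16)", 1e-16),
                   ("optical lattice clock (~8e-19)", 8e-19)]:
    years = 1 / frac / SECONDS_PER_YEAR
    print(f"{name}: ~1 second of drift every {years:.0e} years")
# ~1e-16 -> a few hundred million years; ~8e-19 -> tens of billions of years,
# consistent with the "one second in 30 billion years" figure quoted above.
```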
But why do we need such accuracy? It’s true that most applications do not require such exceptional accuracy, and there are much cheaper, less accurate, commercially available atomic clocks that work just fine for a whole class of practical applications, from GPS to communications. Cutting-edge scientific research, however, can always use ever more accurate clocks — for detecting dark matter, measuring general relativistic effects at millimeter scales, or simply calibrating and benchmarking the less accurate but more practical, commercially available atomic clocks.
Atomic Clocks for Everybody
The path to accurate timekeeping has been a long and arduous one, spanning thousands of years. The unimaginable accuracy levels we have reached matter mostly at the fringes of scientific exploration; for the most part, humanity has “solved” the problem of timekeeping within a span of just a few hundred years. What is of practical importance now is not so much the quest for ever greater accuracy, but the availability of exceptionally precise clocks in small form factors that can be used practically anywhere.
Take, for example, the Chip Scale Atomic Clock (CSAC) from Microchip: a 2”x2” package that weighs 35 g, consumes 120 mW of power, and provides accuracy in the range of 10E-11 for an average selling price (ASP) of $5,500. These kinds of products are useful where atomic clock time from GPS satellites cannot be used, such as in deep-sea industrial sensors or in military applications.
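For a sense of what 10E-11 means in everyday terms (rough arithmetic, not a figure from the datasheet):

```python
# Daily and yearly drift of a clock with ~1e-11 fractional frequency accuracy.
frac = 1e-11
per_day = frac * 86_400          # ~0.86 microseconds per day
per_year = per_day * 365.25      # ~0.3 milliseconds per year
print(f"~{per_day * 1e6:.2f} us/day, ~{per_year * 1e3:.2f} ms/year")
```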
If you put this atomic clock on a PCIe card, you basically get a “Time Appliance,” an open-source project by Facebook to improve synchronization and throughput in its datacenters. In Facebook’s experiments, distributing precise time via the Precision Time Protocol (PTP) showed a 100X throughput improvement over the decades-old Network Time Protocol (NTP). Improved synchronization also has far-reaching consequences in information security (if a packet takes too long, it’s probably a man-in-the-middle attack), real-time networked gaming (precision timing to establish who got the frag), undersea fiber optic cables (detecting imminent failures of repeaters), and even the detection of gravitational waves. This video by Linus Tech Tips is worth a watch.
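As a flavor of what these protocols actually compute, here is the classic four-timestamp offset-and-delay estimate underlying NTP-style synchronization (a minimal textbook sketch with made-up numbers; real PTP adds hardware timestamping and on-path support, which is where the big accuracy gains come from):

```python
# Two-way time transfer arithmetic (the same idea underlies NTP and PTP):
# t1: client sends request, t2: server receives it,
# t3: server sends reply,   t4: client receives it.
def offset_and_delay(t1, t2, t3, t4):
    """Estimate the client's clock offset from the server and the round-trip delay."""
    offset = ((t2 - t1) + (t3 - t4)) / 2   # assumes the two network paths are symmetric
    delay = (t4 - t1) - (t3 - t2)          # total time spent on the wire
    return offset, delay

# Illustrative numbers: client clock 5 ms behind the server, 10 ms each way on the wire.
offset, delay = offset_and_delay(t1=100.000, t2=100.015, t3=100.016, t4=100.021)
print(f"offset ~{offset * 1e3:.1f} ms, round-trip delay ~{delay * 1e3:.1f} ms")
```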
If you like this post, please click ❤️ on Substack, subscribe to the publication, and tell someone if you like it. 🙏🏽
If you enjoyed this issue, reply to the email and let me know your thoughts, or leave a comment on this post.
We have a community of RF professionals, enthusiasts and students in our Discord server where we chat all things RF. Join us!
The views, thoughts, and opinions expressed in this newsletter are solely mine; they do not reflect the views or positions of my employer or any entities I am affiliated with. The content provided is for informational purposes only and does not constitute professional or investment advice.
and then there's this. GPS spoofing / interference is not new, but the magnitude of it is. https://www.fierce-network.com/wireless/communications-gps-spoofing-comes-fore
Like the last post in this sequence, this was a lot of fun. Thanks.
- I recall that IEEE 1588 (PTP) took up some amount of network overhead and am wondering if that's still an issue today.
- The Time Appliance was fascinating - thank you - but sounds like it too would need line of sight to the sky to work? (meaning, a cable run up to the roof of the data center where a GPS antenna could go?)