Thursday, July 19, 2018

When the Silicon Meets the Road

Test what you fly; fly what you test.
-- NASA aphorism

When you are responsible for maintaining a library of reusable software that contains components that deal with honest-to-goodness hardware peripherals, most or all of which don't exist on your build server, how do you unit test it? As an embedded developer with decades of experience under my ever-lengthening belt, I deal with this quandary often. I've worked on projects that had a vast infrastructure to simulate the underlying hardware platform; maintaining that infrastructure was its own product development effort with its own unit testing issues. I've worked on projects that had part of a sizable test organization dedicated just to testing the software on development lab bench mules that were originally hardware prototypes. I've worked on projects that had no good long-term approach. You probably have too.

One of the problems with any approach is the constant economic pressure to stop maintaining whatever test infrastructure exists once the project gets past one or two product releases and management turns its sights onto the next shiny new revenue-producing thing. When it comes time to fix some bugs and do a new release of the legacy product, the new engineer on the project - almost inevitably it is the new engineer on the project, sometimes a junior engineer tasked with proving themselves, in some group that might be euphemistically called something like "sustaining engineering" - is now faced with having to figure out all over again how to test their changes, on software they aren't familiar with.

I had to deal with this issue writ small when working on Diminuto, an open source (LGPL) library of C functions that I have been developing over the span of more than a decade. It is my go-to systems programming toolkit. These days it’s hosted on GitHub as com-diag-diminuto, but it's been around long enough to have been maintained using several different version control systems. It started out in 2008 as a collection of supporting code for a tiny (hence the name) ARMv4 embedded project using Buildroot, with a stripped down Linux kernel, uClibc, BusyBox, and not much else. The project pre-dates the existence of the Raspberry Pi, and even the BeagleBoard.

Over time I kept expanding Diminuto as my needs grew, and porting it to new - and generally less expensive yet more powerful - platforms as they became available, as well as to new Linux releases as they arose. Eventually, all or parts of Diminuto ended up inside a handful of shipping products that I helped my clients develop. While I’ve done time in Python, wrote hundreds of thousands of lines of embedded C++ using the STL, fielded loads of Bash scripts, and even hacked JavaScript as the need arose, most of my paying gigs continue to be lower level systems work in C, writing device drivers, daemons, utilities, and glue code that holds all the higher level stuff together that's written by application and user interface developers. Diminuto also provides the infrastructure for a bunch of my other personal projects, most of which are also hosted on GitHub.

(Added 2018-07-20) An issue with maintaining software libraries of reusable code intended for systems programming and embedded applications across multiple products is that the library is architecturally divorced from any one specific product. So while you may maintain a legacy product in a test lab for unit and functional testing, it may not be adequate to test all the features in a library because the product doesn't necessarily use all those features. So to really test a new library release, you might have to corral several otherwise unrelated test mules, if that's even possible. And even that might not be sufficient for acceptable test coverage. One reason this doesn't crop up more often - in my experience anyway - is that companies don't seem to see the value in such libraries, unless they come from somewhere else (e.g. open source). I suppose that's because, again, of the cost of maintaining them. That's another reason for Diminuto: I got tired of writing and debugging and testing and documenting the same proprietary closed source code over and over again (although, you know, paid by the hour) for different clients. Or, remarkably, sometimes for the same client, but that's a story for another time. And so, some or all of Diminuto now ships in several different products produced by completely unrelated organizations.

Diminuto has a bunch of modules, what I call features, that are designed to work together.

Some features are not much more than preprocessor macros, for example:
  • Critical Section (scoped POSIX thread mutex serialization; see the sketch following this feature list);
  • Coherent Section (scoped acquire/release memory barriers);
  • Serialized Section (scoped spin locks); and
  • Uninterruptible Section (scoped blocked signals).
Some provide simpler interfaces to inherently complicated stuff:
  • IPC4 and
  • IPC6 (IPv4 and IPv6 sockets); and
  • Mux (multiplexing using the select(2) system call).
Some provide convenience interfaces that combine several operations into a single function call, sometimes atomically to make them thread safe:
  • I2C (I2C bus);
  • Log (logging API that works in kernel space or in user space, and which automatically directs log messages to the system log for daemons, or to standard error for applications);
  • Pin (general purpose I/O using the /sys file system GPIO interface); and
  • Serial (serial port configuration).
Some implement interfaces that are more consistent and mutually interoperable than the native POSIX and Linux capabilities:
  • Frequency,
  • Time,
  • Delay, and
  • Timer (the usual time-related stuff but all using the same units of time).
Some implement functionality that is a bit complex in and of itself:
  • Modulator (software pulse width modulation or PWM using POSIX interval timers);
  • Controller (proportional/integral/derivative or PID controller);
  • Shaper (traffic shaping using a virtual scheduler); and
  • Tree (red-black tree).
Some are just useful to have:
  • Phex, pronounced "fex" (print non-printable characters); and
  • Dump (print a formatted dump of selected memory).
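
To make the "scoped" sections above concrete: the general technique, sketched below with a plain POSIX mutex, is a pair of macros that acquire the lock when control enters the bracketed block and release it when control leaves. The names here are hypothetical, for illustration only; they are not Diminuto's actual Critical Section macros, which handle additional details.

    #include <pthread.h>

    /* Hypothetical scoped-serialization macros in the general spirit of the
     * Critical Section feature: lock on entry to the block, unlock on exit.
     * (Sketch only; the real macros handle more than this.) */
    #define CRITICAL_SECTION_BEGIN(mutexp) \
        do { \
            pthread_mutex_t * critical_section_mutexp = (mutexp); \
            pthread_mutex_lock(critical_section_mutexp); \
            do { \
                (void)0

    #define CRITICAL_SECTION_END \
            } while (0); \
            pthread_mutex_unlock(critical_section_mutexp); \
        } while (0)

    static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
    static int counter = 0;

    void increment(void)
    {
        CRITICAL_SECTION_BEGIN(&mutex);
            counter += 1; /* serialized against other threads using the same mutex */
        CRITICAL_SECTION_END;
    }
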
Having this collection of documented, pre-tested, reliable functions that handle the heavy lifting of the POSIX and Linux APIs that I am routinely called upon to use means I can rapidly toss together a usable working piece of C code without having to worry about whether I did the conversion between microseconds and nanoseconds for two different POSIX calls correctly.
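
For example, gettimeofday(2) hands back microseconds in a struct timeval while clock_nanosleep(2) wants nanoseconds in a struct timespec; a conversion like the one sketched below (plain POSIX, not Diminuto's Frequency or Time API) is trivial, but it's exactly the kind of thing that's easy to botch in one call site out of dozens.

    #include <sys/time.h>
    #include <time.h>

    /* Convert microseconds (e.g. from gettimeofday(2)) into the nanosecond
     * resolution struct timespec that clock_nanosleep(2) and friends expect.
     * Getting the factor of one thousand wrong is a classic systems bug. */
    struct timespec timeval2timespec(const struct timeval * tv)
    {
        struct timespec ts;
        ts.tv_sec = tv->tv_sec;
        ts.tv_nsec = tv->tv_usec * 1000L; /* 1 microsecond == 1000 nanoseconds */
        return ts;
    }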

Virtually all of the Diminuto features have unit tests that verify the correctness of the implementation. These unit tests are part of the Diminuto repository, and like the library itself, I would expect them to build and run on any modern Linux distribution on any processor architecture.

I'm a big believer in unit tests. But some of these features, like Pin, Serial, Modulator, and Controller, can only really be adequately tested using functional tests on actual peripheral hardware. Over the years, to test these features, I've built a series of hardware test fixtures. These range from simple custom-wired connectors to breadboards with ICs on them. When I work on these features, I pull the appropriate fixture out of a box, hook it up to the system under test (typically a Raspberry Pi), and run the functional test.

Untitled

Why is this important? Because I want to know my generically-useful but hardware-related features work before I (or anyone using my library) deploy them in an actual product development effort. And if for some reason they don’t work in the project, I want to be able to back up and use my functional tests to verify basic sanity, and to help me see where the problem might actually be. Even if the bug turns out to be in my code, the functional tests at least help me determine what does work and indict what doesn’t work. The functional tests also serve as a living example of how I intend the Diminuto features to be used.

Here are a couple of USB-to-serial adapters, one with a hand-wired loopback, the other with a commercial loopback adapter.

Untitled

I use these to test the Serial feature and its ability to set baud rate, data bits, and stop bits using the lbktest functional test.
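
Under the hood, setting those parameters on Linux comes down to the POSIX termios(3) API. A minimal sketch of the kind of configuration a loopback test exercises (generic termios calls and an example device path, not Diminuto's actual Serial implementation) looks something like this:

    #include <fcntl.h>
    #include <termios.h>
    #include <unistd.h>

    /* Open a serial device and configure it for 115200 baud, 8 data bits,
     * no parity, 1 stop bit, raw mode. Returns an open file descriptor on
     * success, -1 on error. Example: configure_8n1("/dev/ttyUSB0"). */
    int configure_8n1(const char * device)
    {
        struct termios tio;
        int fd;

        fd = open(device, O_RDWR | O_NOCTTY);
        if (fd < 0) { return -1; }

        if (tcgetattr(fd, &tio) < 0) { close(fd); return -1; }

        cfmakeraw(&tio);                     /* no line discipline processing */
        cfsetispeed(&tio, B115200);          /* input baud rate */
        cfsetospeed(&tio, B115200);          /* output baud rate */
        tio.c_cflag &= ~(CSIZE | PARENB | CSTOPB);
        tio.c_cflag |= CS8 | CLOCAL | CREAD; /* 8 data bits, enable receiver */

        if (tcsetattr(fd, TCSANOW, &tio) < 0) { close(fd); return -1; }

        return fd;
    }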

Here is a USB GPS dongle that supports one pulse per second (1PPS) using the data carrier detect (DCD) modem control line.

NaviSys Technology GR-701W

I use it to test the Serial feature DCD support by reading NMEA sentences from the GPS receiver using the dcdtest functional test. (I have several other projects that depend specifically on this feature.)
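
On Linux, the 1PPS strobe on DCD can be detected by blocking in an ioctl(2) until the modem control lines change. A minimal sketch of that mechanism (generic Linux ioctls on an already-open serial port, not the actual dcdtest code):

    #include <sys/ioctl.h>
    #include <termios.h>

    /* Block until the Data Carrier Detect line changes state, then report
     * whether it is now asserted: 1 if DCD (the 1PPS strobe) is high, 0 if
     * it is low, -1 on error. */
    int wait_for_dcd(int fd)
    {
        int status = 0;

        /* Sleep in the driver until DCD changes. */
        if (ioctl(fd, TIOCMIWAIT, TIOCM_CD) < 0) { return -1; }

        /* Read back the current modem control bits. */
        if (ioctl(fd, TIOCMGET, &status) < 0) { return -1; }

        return (status & TIOCM_CD) != 0;
    }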

Here are two USB-to-serial adapters connected back to back with a null modem adapter in between.

Untitled

I also use these to test the Serial feature, either between computers or even on the same computer using two USB ports, using Diminuto's serialtool utility.

Here is what my latest functional test breadboard looks like.

Diminuto Hardware Test Fixture

It consists of a Uctronics expansion board that connects the breadboard to the Raspberry Pi via a ribbon cable; four LEDs with built-in resistors; two momentary-contact buttons, one active high, the other active low with a pull-up resistor; and an Avago APDS9301 ambient light sensor with an I2C interface. I also bring out the pins to the hardware console port on the Raspberry Pi.

The pintest functional test, based on Diminuto's pintool utility, uses Mux and Pin to read the buttons and to write patterns to the LEDs. Pin, Mux, and the underlying Raspberry Pi GPIO controller support multiplexing GPIO pins using select(2).
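
The underlying idiom: once a pin's sysfs edge attribute is set (for example to "both"), its value file can be handed to select(2) in the exception set, and select(2) returns when the pin changes state. A rough sketch of that mechanism (raw /sys/class/gpio calls, not the actual pintest or Mux code) is below.

    #include <sys/select.h>
    #include <unistd.h>

    /* Wait for an edge on a GPIO value file descriptor (e.g. one opened from
     * /sys/class/gpio/gpio20/value) whose "edge" attribute has already been
     * set. Returns the new value (0 or 1) or -1 on error. */
    int wait_for_edge(int fd)
    {
        char buffer[4];
        fd_set exceptions;

        /* Dummy read to clear any edge already pending before we wait. */
        (void)lseek(fd, 0, SEEK_SET);
        (void)read(fd, buffer, sizeof(buffer));

        FD_ZERO(&exceptions);
        FD_SET(fd, &exceptions);

        /* A state change shows up as an exceptional condition on the fd. */
        if (select(fd + 1, NULL, NULL, &exceptions, NULL) < 0) { return -1; }

        (void)lseek(fd, 0, SEEK_SET);
        if (read(fd, buffer, sizeof(buffer)) < 1) { return -1; }

        return (buffer[0] == '1');
    }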

The pwmrheostat functional test uses Pin and Modulator to control as many as four LEDs concurrently, all at different brightness levels.
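
Software PWM boils down to slicing each period into on-time and off-time in proportion to the duty cycle. The sketch below shows the idea with a simple sleep loop (a hypothetical set_led() stands in for the GPIO write); Modulator itself is driven by a POSIX interval timer rather than a loop like this, which is part of what lets it modulate several pins concurrently.

    #include <time.h>

    extern void set_led(int on); /* hypothetical GPIO write */

    /* Drive one PWM cycle of the given period (assumed to be less than one
     * second) with the LED on for duty_percent of the time. */
    void pwm_cycle(long period_ns, int duty_percent)
    {
        struct timespec on_time = { 0, 0 };
        struct timespec off_time = { 0, 0 };

        on_time.tv_nsec = (period_ns * duty_percent) / 100;
        off_time.tv_nsec = period_ns - on_time.tv_nsec;

        if (on_time.tv_nsec > 0) {
            set_led(1);
            (void)clock_nanosleep(CLOCK_MONOTONIC, 0, &on_time, NULL);
        }
        if (off_time.tv_nsec > 0) {
            set_led(0);
            (void)clock_nanosleep(CLOCK_MONOTONIC, 0, &off_time, NULL);
        }
    }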

The luxrheostat functional test uses Pin, Modulator, and I2C to control a single LED pointed at the ambient light sensor, turning the LED brightness up and down, while reading the light level.

The pidtest functional test uses Modulator and Pin to control the brightness of an LED, I2C to read its light level from the ambient light sensor, and Controller to maintain a specified light level as the background illumination changes.
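
Closing that loop is a textbook discrete PID update: compute the error between the light level you want and the level the sensor reports, and adjust the PWM duty cycle accordingly. A minimal floating point sketch (generic PID arithmetic with placeholder clamping, not the Controller feature's actual implementation) follows.

    /* One step of a discrete PID controller: given a setpoint and a measured
     * value, produce a new output (here, a PWM duty cycle from 0 to 100). */
    typedef struct {
        double kp, ki, kd;   /* proportional, integral, derivative gains */
        double integral;     /* accumulated error */
        double previous;     /* error from the prior step */
    } pid_state;

    double pid_step(pid_state * pid, double setpoint, double measured)
    {
        double error = setpoint - measured;
        double derivative = error - pid->previous;
        double output;

        pid->integral += error;
        pid->previous = error;

        output = (pid->kp * error)
               + (pid->ki * pid->integral)
               + (pid->kd * derivative);

        if (output < 0.0)   { output = 0.0; }   /* clamp to a valid duty cycle */
        if (output > 100.0) { output = 100.0; }

        return output;
    }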

(I am in the process of adding a TI ADS1115 analog-to-digital converter or ADC, also with an I2C interface, to this breadboard to extend the testing of the Modulator software PWM feature. I already have a first cut at an adcrheostat functional test.)

Here is a USB-to-serial adapter that works with logic-level signals, with logic clips at the end of the signal wires.

Untitled

I use this to further test the Serial feature using the Raspberry Pi hardware console port that is accessible via the breadboard.

These test fixtures are pretty simple stuff. But they allow me to exercise the hardware-related features of Diminuto such that I have high confidence that I haven't done something boneheaded. When I'm not using them, these fixtures go back into the box and go up on the shelf.

When I'm thinking of adding another hardware-centric feature to Diminuto, one of the first things I decide is how I am going to test it: whether I can do so with an existing fixture; whether I need to add to an existing fixture like the breadboard; whether I need to build a new fixture; and how much that's going to cost. Sometimes I decide the cost isn't worth it, and forgo developing the feature in Diminuto at all, leaving it to a future client to pay for and maintain if it's needed. Or maybe I just wait for technology to catch up.

If it can't be tested, I'm not going to waste my time developing it.

Test what you ship; ship what you test.
-- Chip Overclock aphorism

Tuesday, June 19, 2018

Clock Club

The first rule of Clock Club is: you cannot stop talking about Clock Club.

Last week I was privileged to spend four days attending the 43rd Annual Time & Frequency Metrology Seminar at the National Institute of Standards and Technology at its laboratories in Boulder, Colorado. NIST is one of the premier scientific and technology standards organizations in the world (it was formerly the National Bureau of Standards), and continually produces bleeding-edge breakthroughs, many of which end up becoming commercialized into mainstream technology that changes our lives for the better.

Untitled

Here are some Fun Facts To Know And Tell based on the material presented by NIST scientists, and some photos of stuff I saw. Any boneheaded remarks are strictly my fault, and are probably the result of my being just a humble software developer instead of a research physicist or an electrical engineer, or maybe just not taking good notes.

My electrical engineer and physicist friends would certainly have gotten even more out of this seminar than I did. Most of the people there were EEs or physicists, from other national labs, other time standards organizations, from commercial atomic clock manufacturers, etc. For example: one guy from the National Physical Laboratory (NPL), the U.K. equivalent to NIST; a woman who was a lieutenant in the Spanish navy (had served on a frigate) now working at Spain's Naval Observatory (which has the same role as the U.S. Naval Observatory, which provides the master time reference for the U.S. military, and is a reminder that our definition of time is central to navigation at sea and is still in part in the province of astronomers); a guy who worked for the Canadian Department of National Defence (equivalent to the civil service portion of the U.S. Department of Defense) in the organization that is responsible for instrument testing and quality, etc.

Oh, and some guys whose badges listed no organization, but wrote "DOD" on a sign-up sheet, and told me they were from "Maryland". Past experience tells me they work for an intelligence agency. This is not an unusual encounter in my line of work; it happened all the time when I attended conferences while employed at the National Center for Atmospheric Research due to our common interests in high performance computing and mass storage systems. The pink I. M. Pei-designed NCAR Mesa Laboratory where I worked is barely visible in the photograph below in the distance above and to the right of the NIST Building 1 where the seminar was held.

Untitled

The phrase I learned that I will be most likely to drop into conversations in the future: "clock trip". I knew this was done, but I didn't know this is what it was called: you synchronize/syntonize/phase-lock a rack-mounted commercial atomic clock, running on a big battery pack, to an atomic clock at one laboratory, then you put it on a cart/truck/plane/ship/etc. and take it to another laboratory, maybe far away, where you compare it to another atomic clock. This is old school, but it is still done from time to time today, since it yields the best results. Mostly, though, "time transfer", as this procedure is generically known, is done over dark fiber, geosynchronous satellites, or, most typically today, GPS. Everyone now compares their atomic clocks to GPS Time (GPST), which is synced ultimately to International Atomic Time (TAI).

NIST generates UTC(NIST) time. About once a month, the International Bureau of Weights and Measures (BIPM) in Paris publishes an announcement of how far UTC(NIST), and every other "national" time standard, diverges from standard Coordinated Universal Time (UTC). But this is all done after the fact. UTC is ultimately based on the variable rotation of the Earth (hence: leap seconds), the orbit of the Earth around the Sun, and our planet's relationship to distant stars and quasars; it's determined by astronomers over a long span of time. So UTC isn't really a "real-time"... time. In the U.S., when we synchronize our clocks, it isn't really to UTC. It's probably to UTC(NIST). Or, maybe, UTC(USNO), produced by the U. S. Naval Observatory (USNO), particularly if we're in the military. Or, more realistically, GPST. Most of the clocks/NTP servers I've built are "disciplined" to GPST, adjusted to approximately UTC by applying the accumulated leap seconds; this is what all the cool kids do these days.
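
The bookkeeping is small but easy to get backwards: GPST is not adjusted for leap seconds, so it runs ahead of UTC by however many leap seconds have accumulated since the GPS epoch in 1980 (18 of them as of this writing). A trivial sketch of the adjustment (the constant is just a snapshot; real code gets the current offset from a table or from the GPS almanac itself):

    #include <time.h>

    /* Leap seconds accumulated since the GPS epoch (1980-01-06); this value
     * was 18 as of 2018 and changes whenever a new leap second is declared. */
    static const time_t LEAP_SECONDS = 18;

    /* Approximate UTC, expressed in seconds, from GPS time in seconds. */
    time_t gps2utc(time_t gps_seconds)
    {
        return gps_seconds - LEAP_SECONDS;
    }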

Cesium atomic clocks are noisy in the short term (jitter) - so they are imprecise - but averaged over long durations they are extremely stable - so they are accurate. Hydrogen maser atomic clocks are extremely precise (very low short term noise) but are inaccurate - one scientist said using a hydrogen maser was a "black art" because you would never be exactly sure what extremely stable frequency you would actually get until you turned it on (weird). So NIST, USNO, and others use an ensemble of multiple commercial cesium (Cs) clocks, the long term average of which is used to servo the output of an oscillator that is locked to a hydrogen maser clock. Below is a photograph of a broken commercial hydrogen maser clock sitting in a hallway; new, they go for around a quarter million dollars U.S.

Untitled

The only way to measure the quality of a clock is with another clock. This is just as problematic as it sounds, particularly since really good clocks are measurably affected by not just environmental noise like vibration, thermal noise, and even quantum noise, but also by special and general relativistic effects. We spent almost an entire day on measuring and characterizing phase noise in clocks, which is really all about measuring phase differences between a clock you trust, and one you don't, and then processing the raw data by computing an Allan Deviation (ADEV, or AVAR for the Allan Variance). Because the measurement of phase differences is a time series and not a population, the normal statistical methods don't apply. Below is a photograph of the equipment we used for a hands-on demonstration of measuring and characterizing phase differences. One of the carts contained a high-precision oven-controlled quartz oscillator that is kept running on an uninterruptible power supply so that it stays in its "sweet spot". One of them also contained an atomic clock in the form of a rubidium (Rb) oscillator.

Untitled
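
The Allan deviation itself is straightforward to compute from a time series of phase difference measurements like the ones we collected. A sketch of the simplest estimator, at the basic sampling interval (my own rendering of the textbook formula, not the software we used in the lab):

    #include <math.h>
    #include <stddef.h>

    /* Allan deviation at the basic sampling interval tau (seconds), from n
     * phase difference measurements x[] (seconds) taken tau apart:
     *
     *   AVAR(tau) = sum of (x[i+2] - 2*x[i+1] + x[i])^2 / (2 * tau^2 * (n - 2))
     *
     * ADEV is the square root of AVAR. Returns NAN if there are too few samples. */
    double allan_deviation(const double x[], size_t n, double tau)
    {
        double sum = 0.0;
        size_t i;

        if (n < 3) { return NAN; }

        for (i = 0; (i + 2) < n; ++i) {
            double d2 = x[i + 2] - (2.0 * x[i + 1]) + x[i];
            sum += d2 * d2;
        }

        return sqrt(sum / (2.0 * tau * tau * (double)(n - 2)));
    }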

The latest "optical" atomic clocks - so called because their resonant frequency is in the visible light range instead of the microwave range like cesium or rubidium atomic clocks - have frequencies so high they can't be measured directly because electronics can't run that fast. So they have to have "optical frequency combs" that produce a lower, measurable, frequency - 10MHz or even a low as 1Hz a.k.a. 1PPS - that is locked to the higher optical frequency. Optical atomic clocks, still in the experimental stage today, are likely to eventually replace cesium atomic clocks as the definition of the second in the International System of Units (SI). That's one reason why labs at NIST may contain awesome.

Untitled

One of the optical clock labs was officially surveyed to determine its position to within a millimeter or so. Because an optical atomic clock is so accurate, general relativistic effects are measurable if it's moved just a few centimeters vertically relative to the Earth's mass. The clock depends on measurements at scales smaller than the width of a hydrogen atom. There's an official Geodetic Survey marker embedded in the concrete lab floor.

Untitled

A form of quantum noise is from virtual particles popping into existence and then disappearing due to fluctuations in the vacuum energy. This actually happens and is detectable in optical atomic clocks. Some of these clocks work by trapping *two* ions - one quantum entangled to another - in a magnetic trap and stimulating one and measuring the atomic transition of the other. Sometimes another atom of the same type will wink into existence for a very short time and then disappear. I'M NOT MAKING THIS UP. Below is a photograph of a running ytterbium (Yb) lattice optical atomic clock. Note that the lab bench of one of the most advanced experimental optical atomic clocks in existence looks more or less like my lab bench, BUT WITH LASERS.

Untitled

(I observe that this idea of entangling two atoms, "exciting" one and "interrogating" the other, is pretty much the same reason that historically the most accurate mechanical pendulum clocks had two pendulums, one slaved to the other.)

The latest optical clocks are so accurate and precise that they are being used to probe the "constant-ness" of certain values in physics - like the dimensionless fine-structure constant - that we think are constant, but have no theory that says why they must be constant. As we continue to make things smaller and smaller, and faster and faster, the measurement of such values will matter more and more. One of the NIST experiments involves comparing two different implementations of optical clock-type technologies and making very fine measurements of any differences in their results that may be affected by variation in electric field strengths. In the photographs below, you can see the green versus the blue LASERs, denoting different atomic resonant frequencies in use.

Untitled

Untitled

Because we can measure time far more accurately - several orders of magnitude better - than any other physical unit of measure, we can expect other SI units to eventually be defined in terms of time whenever possible. So these technologies have been used by NIST to develop measurement standards for other physical units. Below is a photograph of a ten volt (10V) reference standard made up of an array of superconducting Josephson junctions on a semiconductor wafer (left) placed in a carrier (right) that goes in a cryogenic chamber. It's accurate to maybe twelve significant digits. One wafer is worth about US$50,000.

Untitled

This is used in the instrument shown below to calibrate voltmeters. Well, maybe not your voltmeter. NIST has built quite a few of these instruments and sold them to various labs here and abroad. You can vaguely see the small cryogenically cooled chamber behind the screen in the lower left of the rack. The noise of the compressor for the cooler was a constant thumping in the room.

Untitled

I wore a mechanical wristwatch each of the four days of this conference. It just seemed appropriate. I wasn't the only one. Another guy and I compared oscillator frequencies. Mine was twice as fast as his. I have watches with 4, 6, and 8 Hz oscillators, and even a vintage "fast beat" watch with a 10 Hz oscillator. The one I was wearing that day, shown below, has an 8 Hz mechanical oscillator. Compare that to your everyday quartz wristwatch... that typically has a 32,768 Hz quartz oscillator. Conventional atomic clocks oscillate in the gigahertz (microwave) range, while optical atomic clocks oscillate in the terahertz (visible) range. Future clocks will be "nuclear clocks", oscillating in the petahertz to exahertz (x-ray) range, "ticking" via transitions in the nucleus of an atom. I think maybe NIST wins the oscillator frequency challenge.

Rolex Oyster Perpetual Milgauss Automatic Chronometer

A big Thank You to NIST and its scientists who participated in the seminar and were so generous with their time and expertise. Because, you know, ultimately, it's all about time and frequency.

Monday, May 28, 2018

Time Is Precious

Earlier this month I was fortunate to be included on a tour of the time and frequency facilities at the Boulder, Colorado laboratories of the National Institute of Standards and Technology (NIST). I got to walk right up to the F-2 cesium fountain atomic clock, which, along with the earlier F-1, is the standard frequency reference for the United States.

NIST F-2 Cesium Fountain Clock

I always thought the F-2 ran all the time. Fortunately that's not true, or else we might not have been able to see it. NIST fires it up periodically to calibrate an ensemble of commercial atomic clocks, which do run all the time, and which we didn't get to see.

I also got to see a couple of the prototypes of the chip-scale atomic clock (CSAC) developed as a result of NIST and DARPA research projects.

Prototype Chip Scale Atomic Clock

Untitled

I used a commercialized version of this same device in the stratum-0 NTP server that I built.

Like me, you may have read the book From Sundials to Atomic Clocks: Understanding Time and Frequency by NIST physicist James Jespersen and Jane Fitz-Randolph. But did you know Jespersen was part of a team that won an Emmy award for the invention of closed captioning?

Untitled

Also: a four hundred pound quartz crystal. Just a little bit bigger than the one in your quartz wristwatch.

Untitled

On display in a hallway was an IBM radio clock: it's a high precision electro-mechanical pendulum clock with a radio that allows it to set itself according to the NIST time code transmissions.

Untitled

Speaking of which, it only seemed appropriate this morning that I take a little jaunt up to see the WWV (short-wave) and WWVB (long-wave) transmitter towers just north of Fort Collins, Colorado.

Untitled

This is from where a time code is transmitted to an estimated fifty million radio clocks, including the WWVB radio clock that I built, in the continental United States. The time code is synchronized to a cesium atomic clock on site, which in turn is synchronized to the NIST master atomic clocks in Boulder Colorado.