Lightning strikes the houses at either end of your street. As your neighbor at one end relates it, lightning struck his end of the street first, then the other end. Your other neighbor who lives at the other end says just the opposite: lightning struck his end first, then the other end. You live in the middle, and you saw it strike both ends simultaneously. Who's right?
You all are. That's what Einstein was getting at in the Special Theory of Relativity: there is no preferred frame of reference, so none of you is any more correct than the others. Under no circumstances can information travel faster than the speed of light, so our perception of the order of events that occur in the world around us depends on where we are in that world and how fast we are moving through it. There is no right answer. Or at least, none that is any more right than another.
As anthro-astronomer Anthony Aveni points out in his book Empires of Time [University Press of Colorado, 2002], about timekeeping across cultures and throughout history, even if you were unfortunate enough to have your eyeball pressed right up against the place on the roof where the bolt of lightning struck, there is still a non-zero latency for the light to reach your eyeball, for it to be converted into nerve impulses, for those impulses to travel to your brain, for them to be interpreted by your conscious mind, and for your mind to order your mouth to say "HOLY CRAP!" Any sense of simultaneity is completely subjective.
And so it is with the real-time systems we build. I've worked on all sorts of stuff over my long career: ATM-based optical networking for internationally distributed telecommunication systems; GSM and CDMA cellular base stations; Iridium satellite communication systems for business aircraft; Zigbee wireless sensor networks; RS-485-connected industrial lighting control systems, to name just a few. Plus a lot of the more typical Ethernet, WiFi, and LTE communications systems designed to tie together a variety of computers. All of these systems suffer from the same lack of a preferred frame of reference that we do.
Each of these systems can be thought of as a state machine that receives stimuli from many different sources and through many different channels. A vastly complex state machine, for sure. But ultimately we can think of its entire memory as one gigantic number - a Gödel number - perhaps billions of bits in length, each unique value encoding a specific state. Each real-time stimulus causes this enormous number to change, transitioning the state machine to a new state. Because there is a finite amount of memory in the system, there is a finite (albeit really, really big) number of unique states that the system can assume.
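Here is a minimal sketch of that idea in C. Everything in it - the stimulus names, the transition arithmetic - is invented for illustration; a real system's state is vastly larger and its transitions vastly more complicated. The point is just that the entire state is a single number, and each stimulus maps the current state to the next one through one deterministic transition function.

    #include <stdint.h>
    #include <stdio.h>

    /* Stand-in for the "Goedel number": a real system has billions of bits of state. */
    typedef uint64_t state_t;

    typedef enum { STIMULUS_SENSOR, STIMULUS_MESSAGE, STIMULUS_TIMER } stimulus_t;

    /* Deterministic transition function: the same state and the same stimulus
     * always yield the same next state. The arithmetic here is arbitrary. */
    static state_t transition(state_t state, stimulus_t stimulus)
    {
        return (state * 31u) + (state_t)stimulus + 1u;
    }

    int main(void)
    {
        state_t state = 0;
        stimulus_t inputs[] = { STIMULUS_SENSOR, STIMULUS_TIMER, STIMULUS_MESSAGE };

        for (size_t ii = 0; ii < (sizeof(inputs) / sizeof(inputs[0])); ++ii) {
            state = transition(state, inputs[ii]);
            printf("stimulus %d -> state %llu\n", (int)inputs[ii], (unsigned long long)state);
        }

        return 0;
    }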
The stimuli to this state machine may arrive via a variety of mechanisms. Network messages from other nodes might arrive via Ethernet, WiFi, and cellular networks. Sensor values might be read via various hardware schemes like SPI, I2C, GPIO, ADC, and UART, each of which has its own bandwidth and, more importantly, its own latency. The hardware latency of these technologies varies widely; some devices, like the analog-to-digital converter, have latencies that are quite long relative to others. And on top of that comes the software processing latency of whatever protocol stack is used.
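As a back-of-the-envelope illustration (the channel names and latency figures below are entirely made up), software often compensates for these differing latencies by backing a per-channel estimate out of the timestamp at which the stimulus was observed. Two stimuli can easily be observed in the opposite order from the one in which the underlying physical events actually occurred.

    #include <stdint.h>
    #include <stdio.h>

    typedef enum { CHANNEL_GPIO = 0, CHANNEL_ADC = 1, CHANNEL_UART = 2 } channel_t;

    /* Assumed (invented) nominal latencies, in microseconds, from physical
     * event to software observation, one per input channel. */
    static const uint64_t LATENCY_US[] = {
        [CHANNEL_GPIO] =    5,
        [CHANNEL_ADC]  =  250,
        [CHANNEL_UART] = 1000,
    };

    /* Estimate when the event actually occurred by backing the channel's
     * nominal latency out of the observation timestamp. */
    static uint64_t estimate_event_time(uint64_t observed_us, channel_t channel)
    {
        return observed_us - LATENCY_US[channel];
    }

    int main(void)
    {
        /* The GPIO stimulus was observed first (at 10050 microseconds)... */
        uint64_t gpio_event_us = estimate_event_time(10050, CHANNEL_GPIO);
        /* ...but the UART event, observed second, probably happened earlier. */
        uint64_t uart_event_us = estimate_event_time(10100, CHANNEL_UART);

        printf("GPIO event at about %llu us, UART event at about %llu us\n",
            (unsigned long long)gpio_event_us, (unsigned long long)uart_event_us);

        return 0;
    }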
In addition to all of these stimuli that are the results of events occurring in the outside world, internally generated stimuli are arriving all the time as well. Peruse any network standard and you'll find a slew of timers defined: timers that are set when certain conditions are met, cancelled when others are met, and that, when they fire, inject their own stimuli into the state machine.
Once received, each stimulus has to be serialized - placed in some discrete order of events relative to all of the other stimuli - to be injected into the state machine so that the system acts deterministically.
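A common way to do this - and what follows is only a minimal sketch, with every name invented for illustration - is to funnel every stimulus, whether it came from the outside world or from an internal timer, through a single FIFO queue. The order in the queue becomes the system's perception of the order of events.

    #include <stdio.h>

    #define QUEUE_CAPACITY 16

    typedef enum { EVENT_PACKET, EVENT_SENSOR, EVENT_TIMER_EXPIRED } event_t;

    /* A single ring buffer through which every stimulus is serialized. */
    static event_t queue[QUEUE_CAPACITY];
    static int head = 0;
    static int tail = 0;

    /* Called by drivers, interrupt handlers, and timer callbacks alike. */
    static int enqueue(event_t event)
    {
        int next = (tail + 1) % QUEUE_CAPACITY;
        if (next == head) { return -1; }    /* Full: stimulus lost. */
        queue[tail] = event;
        tail = next;
        return 0;
    }

    /* Called by the state machine's main loop. */
    static int dequeue(event_t * eventp)
    {
        if (head == tail) { return -1; }    /* Empty: nothing to process. */
        *eventp = queue[head];
        head = (head + 1) % QUEUE_CAPACITY;
        return 0;
    }

    int main(void)
    {
        event_t event;

        /* External and internal stimuli interleave in arrival order... */
        (void)enqueue(EVENT_SENSOR);
        (void)enqueue(EVENT_TIMER_EXPIRED);
        (void)enqueue(EVENT_PACKET);

        /* ...and the state machine consumes them in exactly that order. */
        while (dequeue(&event) == 0) {
            printf("processing event %d\n", (int)event);
        }

        return 0;
    }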
If two companies develop competing products to solve the same problem, they will likely make different hardware design choices and write different software. Their products will implement different hardware and software paths, even if they use the same networks and sensors. The real-time stimuli that get injected into their state machines will, at least occasionally, be serialized in a different order. Their systems will transition to different states. Because of that, they may make different decisions, each based on its own unique but different perception of reality. Which system is right?
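Here's a toy way to see how that divergence happens, reusing the invented transition function from the earlier sketch: feed the same two stimuli to the same deterministic state machine in two different orders and the final states differ, because the transitions don't commute.

    #include <stdint.h>
    #include <stdio.h>

    typedef uint64_t state_t;

    /* The same toy transition function: deterministic, but order-sensitive. */
    static state_t transition(state_t state, int stimulus)
    {
        return (state * 31u) + (state_t)stimulus + 1u;
    }

    int main(void)
    {
        state_t first  = transition(transition(0, 1), 2);  /* Stimulus 1, then 2. */
        state_t second = transition(transition(0, 2), 1);  /* Stimulus 2, then 1. */

        printf("order 1,2 -> state %llu; order 2,1 -> state %llu\n",
            (unsigned long long)first, (unsigned long long)second);

        return 0;
    }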
Maybe both systems are right. Even when a system responds in a way we think is incorrect, we may still be basing that assessment on our own perception of what happened in the real world, which may or may not be any more or less correct than what the hardware and software perceived. We may believe that our own reference frame is the preferred one, but both we and our cyber counterparts are subject to the same laws of Special Relativity as applied to biological and silicon systems.
I think about this a lot when I read articles on driverless vehicles. Vehicle automation doesn't have to be perfect. It just has to be better than the human behind the wheel. Or the joystick. That may be a low bar to hurdle. There are still likely to be cases where the vehicle automation system makes a decision different from the one we would have made ourselves. That doesn't make it wrong. But it might be hard to understand why it did what it did without a lot of forensic log analysis.
My own Subaru WRX has an EyeSight automation system that includes features like collision avoidance, lane assist, and adaptive cruise control. EyeSight gets concerned when I drive through the car wash. It sometimes loses track of the road completely in heavy rain or snow. But for the most part, it works remarkably well. Even so, I don't depend on it.
Just this past weekend, Mrs. Overclock and I celebrated our thirty-fourth wedding anniversary in Las Vegas, a short plane ride from our home near Denver. We witnessed the eight-passenger driverless shuttle, part of a pilot program, cruising around the downtown area near Fremont Street. We saw it hesitate while pulling away from a stop because of oncoming traffic. Its full load of passengers weren't screaming in terror.
We also saw an empty and unmanned Las Vegas monorail train blow through the station where we were waiting, with a very loud bang and a big shower of sparks. This resulted in someone with a walkie-talkie walking the track, and eventually an announcement of a service disruption with no estimated time of repair. On the cab ride - plan B - we noticed another train stuck at an earlier station, waiting for service to resume.
The Denver-area light/commuter rail was due to be extended to our neighborhood nearly a year and a half ago. The rail line is complete, and all the stations are ready, with their brightly lit platforms and scrolling electronic signs. But the developers can't get the positive train control to work as required by federal regulations; the automated crossing gates apparently stay down about twenty seconds too long. That sounds minor, but this defect - which has to be a lot more complex than it sounds, or it would have been solved long ago - has cost millions of dollars. It has also kept Mrs. Overclock and me from taking the train from the station at the end of this line, which is within walking distance of our suburban home, to downtown Denver for the theatre and concerts we regularly attend, or even to Denver International Airport by changing lines downtown.
Do any of these automated systems suffer from a disparity in reference frames with their users? Dunno. But in the future, when I work on product development projects with a lot of real-time components (which, for me, is pretty much all of them), I'm going to be pondering even more the implications of our hardware and software design decisions, how they affect the way the system responds to events in the real world, and how I'm going to troubleshoot the system when it inevitably makes a decision that its user finds inexplicable.
Wednesday, March 14, 2018