If I have any readers left, they are surely tired of hearing me talk lately about the importance of traffic management and rate control. I thought a good way to wrap up the topic (for a while anyway) would be to provide a metaphor and an actual example, in the spirit of going from the abstract to the concrete.
The "Big Dig" was hardly Boston's first controversial road construction project. That historic city's transportation and urban planners have for decades had to deal with the fact that many of the downtown streets are barely wide enough for two lanes. This makes building exit ramps from interstates and other high volume highways into the oldest portions of Boston really problematic. How to you handle the traffic volume of Interstate 95 funnelling down to a cobblestone street that is two hundred years old? How do you widen that street when the real estate on either side of it is going for seven figures? It's not just a traffic management problem, it's an economic issue.
Yet we take exactly the same approach when designing large distributed systems that have longish lifespans. We try hard to make our systems modular so that they can be easily upgraded. We try to make them scalable so that they can be easily expanded. We try to make it possible to incorporate new technology as it becomes available. This is particularly true of systems that have high capital costs and for which customers expect long term amortization. Traditional telephone systems come to mind, but this applies equally well to things like power plants, air traffic control systems, and just about anything built for the U.S. Department of Defense, like aircraft carriers.
Traditional telephone systems are among the most complex distributed systems in the world. A typical design (where "typical" here means "one that I've worked on") has call control centralized in a single processor, a network of communication links going to remote cabinets, and each cabinet containing tens of boards with embedded processors that control everything from digital trunks to the telephone on your desk. The individual embedded processors not only talk to the central processor, but to other embedded processors in the same cabinet, to embedded processors in different cabinets, and to embedded processors in other systems owned by other people. There is a whole lotta messaging going on. The mere act of picking up your telephone handset results in dozens of messages being exchanged among at least a handful of processors across several communications links. As my friend and former colleague Ron Phillips says, "Every time I pick up a phone and get dial tone, I know a little miracle has occurred."
So sometime in the 1980s you start out with central processors capable of executing a few million instructions per second. You have communication links capable of bandwidths of tens of kilobits per second. You have embedded processors at the ends of those links that execute a few tens of thousands of instructions per second. And there is a strong motivation to use cheap processors in the embedded portions of the system, because that's what the customers have to buy in quantity. There are at least tens of such processors in each cabinet, and at least tens of such cabinets in a system. One really big traditional telephone system may support tens of thousands of telephones, have cabinets spread across state lines, and effectively have hundreds, if not thousands, of processing elements.
Things work pretty well, because you architect the system initially with a good balance of processor speed and bandwidth. Because the central processor is so slow, there is a built-in natural limit to how many messages it can pump out. Because the communications links are so slow, there is a natural limit to how many messages from various sources can be funnelled down to a single embedded board.
Time passes. You upgrade the central processor with one that is a hundred times faster than its predecessor. You upgrade some of the communication links (but not all of them) to technologies like Ethernet that have a thousand times more bandwidth. You install some (but not all) new embedded boards that have faster processors on them.
A few more years go by. You now have a central processor that can execute a billion instructions per second, thousands of times faster than what you had twenty years ago, in the same system. You replace some (but not all) of the 10 Mb/s Ethernet with 100 Mb/s Ethernet. Maybe you didn't even want or need to replace the network technology, but that's what everyone is using now, including your customers. You upgrade a few of those embedded processors (but not all of them) with faster boards, but it is simply not cost effective to re-engineer all of those legacy boards with newer technology, and even if you did, your customers would rebel at the thought of a forklift upgrade.
It's today. Your central processor runs several billion instructions per second on each of several cores. No one even makes your original processor anymore; the closest thing to it is in your wristwatch. Your customers want to use gigabit Ethernet, because that's all they run anymore. You start using VoIP, which requires a volume of control messaging like no one has ever seen in the history of telephony. You've expanded your maximum available configuration to many, many more cabinets, and customers have systems that span national boundaries.
And you've still got a lot of those original embedded boards, still struggling to deliver dial tone, while getting fire-hosed with messages at a rate they never imagined they would ever have to deal with. Even if they don't receive more messages, they receive the same messages, just a lot faster, in a burst. The natural throttles on the rate of messaging (the limited speed of the central processor, the limited speed of their peers, the limited speed of the communication channels) are all gone.
And even if those processors can handle the higher rates, they start to behave in funny ways. Before, they could do all the necessary message processing in their spare time, between doing actual work like making phone calls. Now, when a burst of messages comes through, they have to drop what they are doing and deal with the burst, or else risk losing a crucial control message when the next burst comes in.
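Here's a back-of-the-envelope sketch in C of why bursts hurt. The queue depth and message counts are numbers I made up purely for illustration; the point is that the same number of messages a slow board could once service between paced arrivals now lands all at once and overruns its bounded inbound queue.

#include <stdio.h>

#define QUEUE_DEPTH 8   /* hypothetical inbound message queue on the board */

/* Offer a burst of messages to the board, draining a fixed number of
 * queued messages between successive arrivals. Returns how many
 * messages were lost because the queue was full when they arrived. */
static int simulate(int burst, int drain_per_arrival)
{
    int queued = 0;
    int lost = 0;
    int i;
    for (i = 0; i < burst; ++i) {
        if (queued < QUEUE_DEPTH) {
            ++queued;   /* message accepted into the queue */
        } else {
            ++lost;     /* queue full: a control message is dropped */
        }
        /* the board services what it can before the next arrival */
        queued -= (drain_per_arrival < queued) ? drain_per_arrival : queued;
    }
    return lost;
}

int main(void)
{
    /* Slow links paced the arrivals, so the board kept up; fast links
     * deliver the same thirty-two messages faster than it can drain. */
    printf("paced arrivals: %d messages lost\n", simulate(32, 1));
    printf("back-to-back burst: %d messages lost\n", simulate(32, 0));
    return 0;
}

Same messages, same board, same queue; only the arrival rate changed.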
In many ways, your system is a victim of its own success: of its longevity in the marketplace, its architecture that allowed customers to expand it incrementally, and the fact that, gosh darn it, people just like it.
Back in May, I talked about how different technologies were on very different growth curves. Microprocessor speeds double about every two years. Memory density doubles about every year and a half. Network bandwidth increases by a factor of ten every decade. Compound those rates over the twenty-year life of a system and the divergence is dramatic: processor speed grows roughly a thousandfold, memory density roughly ten-thousandfold, but network bandwidth only a hundredfold.
When you upgrade your system, you use what's commercially available, but what's commercially available isn't all growing at the same rate. And some parts of your system, like those embedded processors, aren't growing at all. What started out as a well behaved system with a balanced architecture becomes unbalanced and starts to, well, get weird. The pattern of behavior of the real-time system as a whole is an emergent behavior, not one that you deliberately architected. And worse, your management starts to wonder what the heck is going on, because, well, how could those old embedded boards be the problem when you haven't changed anything about them?
No, Bucky, you've changed everything.
This is not a niche issue. Back in May, I talked about the total cost of code ownership. In that article I cited studies showing that 67%, a full two thirds, of the total life-cycle cost of software development is code maintenance, and that half of that maintenance is either perfective (improving correctly working code, not fixing bugs) or adaptive (changing to meet new requirements). A huge portion of the cost of software development is the modification and testing necessary to adapt to a changing environment. This has been referred to as software aging: software gets old and cranky and quits working, even though it hasn't been changed. But the context in which it runs has changed.
Traffic management and rate control are tools in the quest for a balanced, scalable, deterministic system. By controlling the rate at which you throw events around, you maintain control over the engineering of your system. Not your customer, by deciding what kind of Ethernet to use. Not your server vendor, by deciding what kind of Pentium processor to ship you. Not your service provider, by moving your communications link to a faster technology. Traffic management and rate control give you the capability to decide how your system is going to behave, by letting you determine when you turn up the dial, and by how much. They allow you to take control over the emergent behavior of your real-time system.
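To make that concrete, here is a minimal sketch in C of the kind of throttle I have in mind, loosely modeled on the virtual scheduler form of the Generic Cell Rate Algorithm from the articles cited below. The names, the microsecond time base, and the contract parameters are all illustrative assumptions, not code from any real system.

#include <stdio.h>

typedef unsigned long long ticks_t;     /* time stamps in microseconds */

typedef struct {
    ticks_t increment;  /* I: nominal interval between conforming events */
    ticks_t limit;      /* L: how far ahead of schedule a burst may run */
    ticks_t expected;   /* theoretical arrival time of the next event */
} throttle_t;

/* Ask whether an event arriving now conforms to the traffic contract;
 * if it does, commit it and advance the expected arrival time. */
static int throttle_admit(throttle_t * tp, ticks_t now)
{
    if ((tp->expected > tp->limit) && (now < (tp->expected - tp->limit))) {
        return 0;   /* non-conforming: delay or drop this event */
    }
    tp->expected = ((now > tp->expected) ? now : tp->expected) + tp->increment;
    return 1;       /* conforming */
}

int main(void)
{
    /* Contract: one event per millisecond sustained, with enough slack
     * for a burst of about five events to run ahead of schedule. */
    throttle_t throttle = { 1000, 5000, 0 };
    int i;

    /* Offer ten events back to back at the same instant: the first few
     * conform thanks to the limit, the rest get held to the sustained
     * rate no matter how fast the hardware underneath has become. */
    for (i = 0; i < 10; ++i) {
        printf("event %d: %s\n", i,
               throttle_admit(&throttle, 0) ? "conforms" : "throttled");
    }
    return 0;
}

The particular algorithm matters less than who owns the dial: the increment and the limit are engineering decisions you make, not side effects of whatever processor or network your vendors ship next.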
I've used large scale telephone systems as an example, but that's just an obvious one. I've also worked on distributed systems that controlled real-time devices to shoot a half million frames of photographic film a month. On mass storage systems that delivered gigabytes of data on a daily basis to a room full of supercomputers. On embedded systems where a single board has thirty different processing elements and dozens of communicating real-time tasks. On service oriented architectures that placed a dozen interacting components on a single enterprise service bus. All of these systems face these kinds of issues.
I'm betting yours do too.
Sources
D. Parnas, "Software Aging", Proceedings of the 16th International Conference on Software Engineering, IEEE, May 1994
Chip Overclock, "Traffic Contracts", January 2007
Chip Overclock, "The Generic Cell Rate Algorithm", January 2007
Chip Overclock, "Rate Control Using Throttles", January 2007
Chip Overclock, "Traffic Management", December 2006
Chip Overclock, "It's Not Just About Moore's Law", May 2006
Chip Overclock, "The Total Cost of Code Ownership", May 2006
2 comments:
Well, Chip, while I generally agree with your points, I need to quibble on one comment in this article.
The conventional "Moore's Law" statement of doubling chip functions every 18 months can be compared to the rate of increase of the product of speed and distance obtainable with digital transmission systems. Since the early 1980's this means fiber-optic systems. I haven't checked recently, but for a long time the figure of merit for transmission systems doubled every 9 months. This is one reason I always questioned applications (such as voice compression) that use processing to save tranmission resources. Why spend a resource whose value drops relatively slowly to save another resource that gets cheaper faster ?
Ken
I dimly recall that my figure of network bandwidth increasing by a factor of ten every ten years came from one of the NSF labs back when I was working at a National Lab. But your different number doesn't invalidate my point (and in fact your comment reinforces it): different technologies grow at very different rates. As always, Dr. Howard, thanks for contributing. Maybe together we can find some better metrics for growth in network bandwidth.