Wednesday, February 14, 2007

Escape Analysis for Java Apparently Delayed

Escape analysis is a form of static analysis in which the compiler determines if a new object reference (or pointer if you’re not talking about Java) can possibly be visible outside the current thread or beyond the current scope block. If it cannot be, the object can be safely allocated on the stack instead of the heap. This is potentially a very significant optimization, since it eliminates both the overhead of later garbage collecting the object and the heap fragmentation that may occur from the allocation and deallocation of many small objects.
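Here is the kind of code I mean, a minimal sketch of my own (the class and method names are purely illustrative):

    class Point {
        final double x;
        final double y;
        Point(double x, double y) { this.x = x; this.y = y; }
    }

    class Example {

        // p never escapes: its reference is not stored in a field, not
        // passed to another method (Math.sqrt receives only doubles), and
        // not returned. An escape-analyzing compiler could allocate it on
        // the stack, or dissolve it into two local doubles.
        static double distanceFromOrigin(double x, double y) {
            Point p = new Point(x, y);
            return Math.sqrt(p.x * p.x + p.y * p.y);
        }

        // Here the new object escapes by being returned, so it must be
        // allocated on the heap and eventually garbage collected.
        static Point make(double x, double y) {
            return new Point(x, y);
        }
    }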

Java 6 (“Mustang”) was originally slated to include escape analysis in the HotSpot virtual machine (it is a JIT optimization; javac itself does almost no optimizing). But if I am reading the release notes correctly, this optimization has been delayed at least until Java 7 (“Dolphin”) or maybe later. Java 7 may be released sometime in 2008.

I’m bummed. Escape analysis has been talked about at least since around 1999. I was all hot for escape analysis as one more reason why Java could be the next big embedded programming language. Obviously Java has made inroads in the embedded world, becoming pretty standard in a lot of handheld consumer devices. I helped develop an embedded telecommunications product in Java as early as 1999. But escape analysis would have eliminated one of the concerns many embedded developers have rightly had regarding Java on hard real-time or tightly memory-constrained platforms.

On the plus side, I’ve had zero problems porting the tiny little Buckaroo Java code base to Java 6, either using the Sun javac with Ant, or the Eclipse 3.2 incremental compiler.

My jonesing for escape analysis in Java remains unrequited.

Sources

J. Choi et al., “Escape analysis for Java”, Proc. 14th SIGPLAN OOPSLA, 1999

M. Love, “Optimized Java: Escape Analysis”, Dr. Dobb’s Portal, 2006-06-01

Sun Developer Network, Java SE 6 Release Notes: Features and Enhancements

Sun Developer Network, “Eliminate locking of unescaped objects”, Bug ID 6339956, 2005-10-21

Buckaroo, Digital Aggregates Corp., 2007

Small is Beautiful, but Many is Scary II

Space Cowboy Steve Tarr reminds me why I miss working with him (as if I really needed any reminder). In a comment on my prior article on this topic, in which I reviewed the white paper The Landscape of Parallel Computing Research: A View from Berkeley, he points out quite rightly that the future of many-cores in the embedded world is not a thousand general purpose processing elements. Even today, many microprocessors for the embedded market have a single general purpose core and many special purpose cores targeted for functions like communications or digital signal processing. Steve's example was robotics, something he knows a bit about from working on firmware for unmanned space missions.

There are several economic drivers of this, not the least of which is the fact stated by the Berkeley researchers that multiple cores are actually easier to manufacture and functionally test. They produce higher effective yields, since redundant cores which are defective can be deactivated and the resulting microprocessor sold at a lower price. Embedded guru Jack Ganssle points out in his seminar Better Firmware Faster (review) that partitioning large development projects onto different host processors can actually reduce time-to-market. And many real-time designs are simplified by the ability to dedicate a processor to a task: polling no longer seems like a bad thing, and jitter in hard real-time scheduling is reduced or eliminated. For sure, there are lots of reasons to like the many-core approach.

If embedded microprocessors have a thousand identical cores, embedded products will likely devote most of those cores to specialized tasks: processing packets from demultiplexed communications channels, performing digital signal processing for VoIP calls, controlling a particular hardware device. Even today the trend in telecommunications products seems to be handling functions like codecs, companding, and conference summing in software, and dedicating a core to a single channel, or at most a handful of channels, is very attractive. A thousand-core microprocessor places a thousand universal machines on the chip, making a lot of the specialized hardware used today obsolete.

To be fair to the white paper and its authors, one of the things that I really liked about it is that it acknowledged the common themes running through the high performance computing and the embedded camps when it comes to many-core architectures. Having lived in both camps at various times in my career, as well as the multi-core enterprise server camp, it has always troubled me that the various groups don't seem to know about each other, much less recognize the issues they have in common. The white paper specifically mentions that an application for a many-core microprocessor may have a lot of specialized tasks communicating by message passing. I just neglected to mention it in my article. My bad.

And, of course, as the number of daemons and other background processes running on the typical Linux server continues to grow, throwing a many-core microprocessor at the problem sounds pretty good too. And although I've never lived in the database/transaction camp, or in the visualization camp, I would expect their members to have an interesting opinion on this topic as well. As will the virtualization folks, who may offer service providers many virtual servers hosted on a single many-core microprocessor chip, an attractive prospect for the software-as-a-service market.

A thousand cores? I think it's going to be fun.

Monday, February 12, 2007

Small is Beautiful, but Many is Scary

There is a danger in succumbing to the seduction of choosing reading material that agrees with your point of view. Sure, you enjoy reading it in a "look how smart I am" kind of way. But in the end, it isn't clear you've actually learned anything. So it is with some trepidation that I recommend The Landscape of Parallel Computing Research: A View from Berkeley, a recent white paper by a multidisciplinary group of researchers at the University of California at Berkeley. Although maybe I like it because it supports some of my recent assertions, I prefer to think it is because I learned quite a bit from it. It includes a list of old conventional wisdom versus new conventional wisdom about microprocessor design that gave me pause. I'm tempted to just cut and paste the list here, but in the spirit of fair use, I'll just list some of the issues in my own words, in the hope that you'll be inspired to read the white paper.

  • In a reversal of what has been true in the past, power is expensive, but transistors are free; we can put more transistors on a chip than we have the power to turn on.
  • For desktops and servers, static power consumed by leakage can be forty percent of the total power.
  • Chip masks for feature sizes below sixty-five nanometers are so expensive to produce that researchers can no longer afford to build physical prototypes of new designs.
  • For many technologies, bandwidth improves by at least the square of the improvement in latency.
  • Modern microprocessors may be able to do a floating-point multiply in four clock cycles, but may take as many as 200 cycles to access DRAM, leading to a reversal in which loads and stores are slow, but floating-point operations are fast.
  • There are diminishing returns in achieving higher degrees of instruction level parallelism using tricks like branch prediction, out of order execution, speculative execution, and the like. (And, I would add, these tricks have already broken the memory models of many popular programming languages).
  • The doubling of sequential microprocessor performance has slowed; continuing the growth in performance as per Moore's Law will require increasing degrees of parallelism.

The authors cite a chip used by Cisco in a router that incorporates 188 RISC cores using a 130 nanometer process. They go on to predict that as many as 1000 cores may be possible on a single chip using a thirty nanometer feature size. They mention the many advantages of building microprocessors from multiple cores.

  • Achieving greater performance by parallelism is energy efficient.
  • Multiple processors can be exploited in fault-tolerant or redundant designs.
  • For parallel codes, many small cores yield the highest performance per unit area.
  • Smaller processing elements are easier to design and verify.

Thankfully, they go into some detail about the difficulties one might encounter when writing software for a 1000 core microprocessor. This gives me some hope that hardware designers might have a clue about how challenging it is to write such fine-grained parallel code. Some of the issues they mention are: inefficiencies in cache-coherence protocols; the difficulty in writing reliable code using synchronization locks; the immaturity of techniques, like transactional memory, which provide higher levels of abstraction for parallelism; and the cognitive effort required to use programming models like MPI, used by my friends in the high performance computing (HPC) community, which requires that the developer explicitly map computational tasks onto specific processors.

It's not like it's a new problem. I remember back in my HPC days how researchers much preferred the eight-processor Cray Research supercomputer to the 64,000-processor Connection Machine MPP that sat right next to it. I explained it to Mrs. Overclock this way: if you want to build a deck on the back of your house, which is easier: doing it with eight master carpenters, or with 64,000 junior high schoolers taking shop? If you could figure out some way to keep all of the students productive, you might just get it done faster. But the effort involved might also lead you to conclude that you would have been better off just building the deck by yourself.

And it's also not like we aren't already facing this issue. Today's microprocessors have already broken the memory models of many programming languages; witness the fixes that had to go into Java 1.5 so that common multi-threaded design patterns like double-checked locking would work reliably. There are legacy C/C++ code bases using multiple threads that cannot run reliably on a multi-core processor, or even on a hyper-threaded processor that merely pretends to have multiple cores, without a significant porting effort. Until these issues are resolved, such code bases have stepped off the Moore's Law curve. Just let me off at the corner, thanks, I'll walk from here.
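For the record, here is the idiom and the Java 1.5 repair, sketched from memory (not production code): without the volatile keyword, the old memory model permitted one thread to observe a non-null reference to an object whose constructor had not yet finished running in another thread.

    class Singleton {

        // volatile is the Java 1.5 fix: it guarantees that a thread which
        // sees a non-null reference also sees the fully constructed object.
        private static volatile Singleton instance;

        private Singleton() { }

        static Singleton getInstance() {
            Singleton result = instance;       // first check, no lock taken
            if (result == null) {
                synchronized (Singleton.class) {
                    result = instance;         // second check, under the lock
                    if (result == null) {
                        instance = result = new Singleton();
                    }
                }
            }
            return result;
        }
    }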

I find it unlikely that developers will be able to effectively utilize a thousand-core processor using the kind of coarse-grained parallelism offered by the POSIX thread library or the Java thread model. It's more likely that languages will have to evolve, in much the way that the standard language of supercomputing, FORTRAN, has: to incorporate fine-grained parallelism in language constructs like the do-loop, as part of its syntax. Or even to implicitly define parallelism as part of the language semantics, in a way similar to the research projects I remember from graduate school when the Japanese Fifth Generation project was looming on the Eastern horizon.
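For contrast, consider what even an embarrassingly parallel loop costs in today's coarse-grained Java model. This is just a sketch of mine (the names and chunking strategy are illustrative), but every line of pool, chunk, and join bookkeeping is something a parallel do-loop would express as syntax:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    class ParallelLoop {

        // Scale every element of a[] by factor using nThreads workers.
        static void scale(final double[] a, final double factor, int nThreads)
                throws InterruptedException {
            ExecutorService pool = Executors.newFixedThreadPool(nThreads);
            final int chunk = (a.length + nThreads - 1) / nThreads;
            for (int t = 0; t < nThreads; t++) {
                final int lo = t * chunk;
                final int hi = Math.min(lo + chunk, a.length);
                pool.execute(new Runnable() {
                    public void run() {
                        for (int i = lo; i < hi; i++) {
                            a[i] *= factor;    // the only line that does work
                        }
                    }
                });
            }
            pool.shutdown();                   // no new work accepted
            pool.awaitTermination(1L, TimeUnit.DAYS);
        }
    }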

Parallelism has been a running theme during my thirty-plus-year career, from writing interrupt-driven real-time code, to designing distributed systems, to supporting supercomputers, to working on embedded projects with as many as thirty separate processing units on a single board, to developing enterprise applications suitable for multi-core servers. I look forward to seeing where things will go.

Sources

K. Asanovic et al., The Landscape of Parallel Computing Research: A View from Berkeley, UCB/EECS-2006-183, Electrical Engineering and Computer Science, U. C. Berkeley, December 18, 2006

Saturday, February 10, 2007

Thirteen Things In Which I Believe

National Public Radio has been doing this series called This I Believe, audio essays by listeners about their core values and fundamental beliefs. The series was actually started back in 1951 (before my time, as hard as that is to believe) by Edward R. Murrow. I like the series a lot, but I find it hard not to think about that old Steve Martin bit (“I believe that Ronald Reagan can make this country what it once was: an arctic region covered in ice.”). Some of my core values, at least those pertinent to this blog, are a little too esoteric for NPR, but perhaps not for my readers.

1. I believe that developers should get out more.

Specifically, that developers should interact directly and frequently with customers. And not just on the telephone, with their manager hovering in the background. Or at the customer briefing center, with nervous account executives looking on. But at the customer site, watching to see how the customer uses the products that they develop. Unfettered and uncensored. I learned a lot in my various travels to customer sites, even though it is frequently an awkward and painful experience for all involved. Yes, a few companies will probably go out of business because of this. And some others will take over the world.

2. I believe in controlling the long-term cost of code maintenance.

Some studies have shown that maintenance of software constitutes fully two-thirds of the cost of the entire software development life cycle. Other studies have suggested it is even more. Design, coding, unit testing, and integration make up only about one-quarter of the cost. I believe that most companies developing software products completely miss the boat on this. Time-to-market is vitally important, no doubt, but quickly churning out code that is difficult to maintain over the life span of the software product only saves money in the short run. In order to be economically maintainable, software must be designed to be easy to modify.

3. I believe in managing the emergent behavior of large systems.

I have written far more words than anyone cares to read on the need for rate control in real-time systems. But I believe that rate control and other mechanisms for managing emergent behavior are necessary for building large systems that are both scalable and robust, and which can be reliably evolved over time. All things change, but all things do not change at the same rate.
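A token bucket is the simplest example of what I mean by rate control. Here is a minimal sketch of my own (the names and parameters are illustrative): tokens accumulate at a fixed rate up to a burst limit, each unit of work must first acquire a token, and callers that outrun the configured rate are throttled instead of being allowed to flood the rest of the system.

    class TokenBucket {

        private final double tokensPerMs;  // sustained rate
        private final double capacity;     // maximum burst
        private double tokens;
        private long lastMs;

        TokenBucket(double tokensPerSecond, double burst) {
            this.tokensPerMs = tokensPerSecond / 1000.0;
            this.capacity = burst;
            this.tokens = burst;
            this.lastMs = System.currentTimeMillis();
        }

        // Returns true if the caller may proceed, false if it must wait.
        synchronized boolean tryAcquire() {
            long now = System.currentTimeMillis();
            tokens = Math.min(capacity, tokens + (now - lastMs) * tokensPerMs);
            lastMs = now;
            if (tokens >= 1.0) {
                tokens -= 1.0;
                return true;
            }
            return false;
        }
    }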

4. I believe in designing systems to expedite field troubleshooting.

I don’t just mean some log files. I mean mechanisms like SNMP MIBs or JMX MBeans, or at least some counters available via a debug console, that allow developers to peer directly into the heart of a running system and see what it is doing or has done. Log files too often turn out to be a fire hose of information, useful for post-hoc analysis but useless while the system is running. Some carefully chosen counters or controls are crucial to gathering needed forensic data in a timely manner so that a misbehaving system can be quickly returned to a running state. This may also mean embedding some simple (or not so simple) tests in the production system that can be run at the customer site to facilitate the rapid indictment of specific components.
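In Java, exposing such counters costs almost nothing. A minimal sketch (two source files; the names, including the com.diag domain in the ObjectName, are mine and purely illustrative):

    import java.lang.management.ManagementFactory;
    import java.util.concurrent.atomic.AtomicLong;
    import javax.management.ObjectName;

    // RequestStatsMBean.java: the management interface a JMX console sees.
    public interface RequestStatsMBean {
        long getRequestsHandled();
        long getRequestsFailed();
    }

    // RequestStats.java: the counters themselves, updated by the application.
    public class RequestStats implements RequestStatsMBean {

        private final AtomicLong handled = new AtomicLong();
        private final AtomicLong failed = new AtomicLong();

        public long getRequestsHandled() { return handled.get(); }
        public long getRequestsFailed() { return failed.get(); }

        public void recordSuccess() { handled.incrementAndGet(); }
        public void recordFailure() { failed.incrementAndGet(); }

        // Publish the counters so jconsole or any other JMX client can
        // peer into the running system without stopping it.
        public void register() throws Exception {
            ManagementFactory.getPlatformMBeanServer().registerMBean(
                this, new ObjectName("com.diag:type=RequestStats"));
        }
    }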

5. I believe that all development is by nature fractally iterative.

There are loops within loops within loops, all feeding back into one another. It’s not called the development cycle for nothing. At the very highest level, products need to be given time to mature and improve to be successful. Otherwise a company ends up shipping nothing but prototypes to increasingly disgruntled customers. At the lowest level, tools are needed to make code easy to refactor. Somewhere in the middle, expect designs to change over time. No design ever survives its implementation. The development process - any development process - is by its nature fractally iterative, whether you want it to be or not.

6. I believe that my best ideas do not occur while at work.

I get my best ideas when I first wake up in the morning and haven’t even gotten out of bed yet. Or while I’m at the gym, sweating and pumping a lot of highly oxygenated blood around. Or while I’m reading something totally unrelated to whatever I’m working on. Or while I’m talking with my wife. Or while I’m petting the cat. Or while I’m discussing the latest episode of Battlestar Galactica with my friends at lunch. If you’re the kind of manager who thinks people should “give it their all” by working a sixty-hour week, then you have set the stage to make it impossible for your people to give it their all.

7. I believe in the Golden Rule.

Except that I may phrase it as “Others will treat you as you have treated them.” For example, if you consider all of your people temporary employees, they will treat your company as a temporary employer. If you ask your people “What have you done for me lately?” they in turn will ask themselves “What has the company done for me lately?” In engineer speak: for every action there is an equal and opposite reaction. Please don’t act so surprised when this occurs. It is a sign of immaturity on your part.

8. I believe in a diversity of culture and opinion.

Study after study has shown that organizations with greater diversity of culture and opinion make better decisions. Look around and see if all of your employees are about the same age. Or the same nationality. Or all of the same mindset (which probably really means: all agree with you). If so, you have created a culture which will produce no innovation, which cannot think outside the box, and which will be risk averse. Maybe you preferentially hired clones of yourself. Or drove out all dissent. Or implemented hiring policies that were extremely selective as to skill set. In any case, you have crippled your organization. As a friend of mine says: “If you and I always agree, one of us is redundant.”

9. I believe that no one has ever shipped a perfect product, and you won’t be the first.

“The perfect is the enemy of the good.” (Voltaire) You need to ship a product so that you can book revenue. You need to book revenue so that you can continue to improve the product. All good things ultimately come from booking revenue, and you cannot book revenue unless you ship a product. This is not an excuse to ship a crappy product. But it is a rationale to ship a good enough product. Besides, if you ship a perfect product, what does your employer need you for any more?

10. I believe that most high technology has a half-life of about five years.

This means that if you write those kinds of job ads that list two dozen specific technologies, and if by some miracle you find someone with those exact qualifications, that person may not be the person you need a couple of years from now. It may be tempting to say “Okay, when the time comes, I’ll lay them off and hire someone else.” This ignores the rule of thumb that replacing an employee costs an organization anywhere from two months’ to two years’ fully loaded salary. Only the very largest of organizations can survive this, and even they will end up spending money that could have been more effectively used elsewhere. Hire people that are adaptable and fearless, so that they can tackle the next big thing, whatever that may be.

11. I believe that it is a very good thing indeed to work hard with smart people to achieve a common goal.

More than a decade ago I worked on a large team to produce the most complex single functional unit of integrated hardware, firmware, and software ever produced, before or since, by that particular development organization. The product is still in routine use today by dozens of Fortune 100 enterprises. The experience of that project affected the engineering team so much that to this day we still keep in touch and have annual reunions, even though we have mostly all gone our separate ways. There is simply no better thing in life than to work hard with a bunch of smart, funny, highly motivated, diverse, occasionally weird people to successfully achieve a common, challenging goal.

12. I believe that people are complicated, and that's a good thing.

I am a member of both the National Rifle Association and the American Civil Liberties Union. Of the American Motorcyclist Association and the American Association of Retired Persons. I like activities that include the possibility of lethality (and I have the scars and the titanium in my body to prove it), and ballroom dancing. I am currently reading the Harvard Business Review, a book on Java Micro Edition, and the latest novel by Jimmy Buffett. People are complicated, and that's a good thing. It is so tempting, for both engineers and managers, to try to plug people into stereotypical categories in an attempt to make dealing with them easier. But all it does is leave us constantly surprised when people insist on stepping out of the box in which we placed them. The complexity of people is in fact their greatest strength. It is why Artificial Intelligence has been ten years out for the past forty years.

13. And, like Steve Martin, I believe in eight of The Ten Commandments.

Friday, February 09, 2007

A Breakdown of Some Breakthrough Ideas

The February 2007 issue of Harvard Business Review (HBR) had a cover article that caught my eye: “Breakthrough Ideas for 2007”. This is their annual survey of twenty emerging ideas that are considered important, or at least provocative, by the editors, the World Economic Forum, and in one case, the HBR readership. Each idea was presented in a short essay by a subject-matter expert.

The least interesting ideas to me were the most technical ones. Most high technologies fail in the marketplace, and even the successful ones have a half-life of maybe five years. I’m more interested in the ideas that cause me to doubt my whole world view, question my basic assumptions, and, in the best cases, cause me to have a crisis of faith. Of course I viewed all of them through a lens polished with thirty years of engineering experience. Here’s my take on some of the HBR’s breakthrough ideas. These are all strictly filtered through my perspective; I encourage you to read the article for yourself and see if any of the ideas rock your world. (Any opinions or analyses expressed are strictly my own and not those of the authors.)

The Accidental Influentials
Duncan J. Watts

In his book The Tipping Point, Malcolm Gladwell applies the epidemic theory of disease to the spread of ideas. These self-replicating ideas are called memes. One of his pivotal distribution mechanisms is the connector: someone who knows a lot of people, to whom a lot of attention is paid, and who makes it their business to disseminate memes, sort of the cognitive equivalent of Typhoid Mary. Popular bloggers (for example, not me) frequently serve as connectors.

Watts argues against this model, and instead says, based on studies of actual meme distribution, that memes are spread most effectively by a critical mass of people who are willing to be easily influenced. That is, the network model of meme distribution depends not on those willing to influence, but on those willing to be influenced. If a meme encounters resistance just a couple of degrees away from the connector that is spreading it, its propagation slows or stops altogether. The mechanism of meme distribution depends not on Typhoid Mary but on having a lot of victims with depressed immune systems, a susceptibility to what has been called viral marketing. The special ability of connectors is “mostly an accident of location and timing”.

For folks (for example, like me) trying to become known through their blogs, this does not bode well. It means if it happens at all, it is because a critical number of my readers are willing to believe whatever crap I write, not because of the quality or insightfulness of anything I write.

Well, now that I put it that way, it sounds like a good thing.

Brand Magic: Harry Potter Marketing
Frederic Dalsace, Coralie Damay, David Dubois

The idea here is simple. You don’t design a product for a particular market segment. You design it for a particular demographic cohort. And you evolve the product over time so that it consistently appeals to this same cohort.

The example in the article is cosmetics. You start out designing and marketing a line of cosmetics for women in their 40s. Over a decade you evolve the same brand so that it fulfills the needs of women in their 50s. Then, over another decade, for women in their 60s. Presumably over the span of a couple more decades, the same brand is altered and marketed to funereal cosmetologists.

The economics behind this has been stated before: it is a whole lot cheaper to keep existing customers than to acquire new customers. So if the needs of the demographic cohort to which you market your product changes as it grows older, your product, sold under the same brand, changes too. Meanwhile you introduce the same formulation as before under a different brand name in an effort to attract new customers from the subsequent demographic cohort.

My telecommunications equivalent of this would be to sell the same phones decade after decade but with volume controls that go higher and a typeface that is larger.

Algorithms in the Attic
Michael Schrage

I have no idea if this is true, but I love the concept: Schrage says that the mathematics behind Google’s PageRank algorithm, the Perron-Frobenius theorem, was actually worked out about a century ago. And that many other algorithms that have similarly revolutionized high technology languished for years or even decades because at the time they were invented, they were computationally intractable.
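The connection is easy to sketch: the ranking is the dominant eigenvector of a (damped) stochastic link matrix, which the Perron-Frobenius theorem guarantees exists and is unique, and which plain power iteration finds. Here is a toy version of my own devising, not Google's or Schrage's:

    class ToyPageRank {

        // m[i][j] is the probability of following a link from page j to
        // page i; each column of m must sum to one. Power iteration
        // converges on the dominant eigenvector that the Perron-Frobenius
        // theorem guarantees: the page ranks.
        static double[] rank(double[][] m, double damping, int iterations) {
            int n = m.length;
            double[] r = new double[n];
            java.util.Arrays.fill(r, 1.0 / n);   // start from a uniform rank
            for (int k = 0; k < iterations; k++) {
                double[] next = new double[n];
                for (int i = 0; i < n; i++) {
                    double sum = 0.0;
                    for (int j = 0; j < n; j++) {
                        sum += m[i][j] * r[j];
                    }
                    next[i] = (1.0 - damping) / n + damping * sum;
                }
                r = next;
            }
            return r;
        }
    }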

In a past article (“It’s Not Just About Moore’s Law”) I talked about how the power of different portions of an architecture grows at different rates, making designing a scalable system with a long life-span challenging. This is the flip side of that: those same growth curves suddenly make possible what was once thought impossible. This opens up a whole new field of algorithmic archeology, in which scientists and engineers try to find modern applications for old algorithms. What was once old is now new again.

When something amiss occurs in an embedded system, most such systems don’t have the luxury of just logging an error message or throwing an exception. They have to find a way to soldier on, perhaps in a reduced capacity. I’ve designed and implemented error recovery sub-systems for two commercial telecommunications products to do just that. I’ve often thought that borrowing something from the Game Theory playbook to implement a more general solution would be an interesting idea. Maybe I should revisit that intuition. Embedded systems may now have the available horsepower to exploit more complex algorithms. I’ve made the same argument about the evolution of embedded programming languages, which have transitioned from assembly, to C, to C++, and now (as I argued in “If Java is the new COBOL, is C++ the new assembly?”) to Java.

The Leader from Hope
Harry Hutson, Barbara Perry

One of my favorite quotes is from Napoleon Bonaparte: “A leader is a dealer in hope.” I have found this to be true on the battlefield of product development, and it makes me think that this idea, while important, is not new, and shouldn’t be provocative.

An Emerging Hotbed of User-Centered Innovation
Eric von Hippel

This article talks about how, in many industries, innovation is increasingly customer driven, in the sense that it is the end user doing the innovating, not the producer of the product.

This is a very open-source or hacker kind of model, where innovation is the result of a grassroots effort and not of corporate or government initiatives. It is also not a terribly new idea even in the manufacturing arena. Harley-Davidson routinely sends representatives to motorcycle rallies to examine how customers have customized, modified, and improved their products. The best ideas show up on subsequent models.

Certainly every time I have ever visited a customer site and seen a product I’ve helped develop in use, I learn something new. Most of the time it is “Boy that’s a lot harder to use than I anticipated.” But sometimes it is “Wow, I never thought of using it that way!” Just one more reason why developers should get out more (whether they like it or not).

As corporate policies like forced ranking, sixty-hour work weeks, and always-on Blackberries continue to squelch corporate innovation (my opinion), user-centered innovation is going to become increasingly critical for global competitiveness as R&D organizations can no longer be counted upon to meet the innovation needs of their customers. (See my comments below on Innovation and Growth: Size Matters and In Defense of Ready, Fire, Aim for other takes on this topic.)

Living with Continuous Partial Attention
Linda Stone

Stone talks about the backlash against the tyranny of always-on Blackberries, and how continuous partial attention differs from multitasking, where the tasks generally have low cognitive requirements. See my comments above on An Emerging Hotbed of User-Centered Innovation to see where I think this is going.

Innovation and Growth: Size Matters
Geoffrey B. West

This one really caused me to think. The author looked at scalability issues as they relate to population size. Civilizations exist because of economies of scale: not everyone has to raise crops, hunt game, or rear children. The cumulative effort of these tasks scales sub-linearly, making labor available for other things, like blogging. What was unexpected, both to me and apparently to the author, was that innovation scales super-linearly. The larger the population, the disproportionately larger the amount of innovation that occurs.
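To put arithmetic behind the jargon: if some quantity Y grows with population N as Y ∝ N^b, then sub-linear means b < 1 and super-linear means b > 1. Picking b = 1.2 purely for illustration (my number, not West's), doubling the population multiplies the output by 2^1.2 ≈ 2.3, so innovation per capita actually rises as the population grows. Picking b = 0.8, doubling the population multiplies the infrastructure cost by only 2^0.8 ≈ 1.7, which is the economy of scale.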

This sounds like a page from James Surowiecki’s book The Wisdom of Crowds, which talked about how studies of juries revealed that a diversity of opinion yielded more subtle and nuanced decisions, or how scientists who were more widely collaborative were ultimately more successful in their research. West conjectures that this may be why organizations like Bell Labs “in its heyday” were so creative. (Speaking as someone who worked at Bell Labs during its decline, “ouch”.) This runs counter to the stereotype of the lonely genius in the garage that permeates American culture as the high-tech equivalent of the American cowboy, but it rings true.

On a more ominous note, his model also predicts that “in the absence of continual major innovations, organizations will stop growing and may even contract, leading to either stagnation, or ultimate collapse.”

Conflicted Consumers
Karen Fraser

Although Fraser doesn’t specifically cite the “Green Movement”, where consumers preferentially choose products from environmentally friendly companies, it is a great example of what she is talking about. Fraser describes stealth consumers who buy your product but would rather not, and only do so because they have no choice. While sales may continue to be strong, there is a growing resentment towards the brand held by consumers who are just waiting for an alternative that is more palatable.

The gist of this is that companies can get by with violating the values of their consumers as long as they have a captive market. This may seem kind of obvious. But the real message is that such companies cannot afford to become complacent, assuming that their current customer base will always be there. Things could suddenly turn ugly. Not only might consumers find a replacement product from a more acceptable source, but in some cases they may simply decide to do without.

So I imagine a web site in which disgruntled Baby Boomers post their feelings about subtle age discrimination in hiring by companies whose products they consume. These are the same Baby Boomers who are poised to retire in droves and whose retirement funds will ultimately control most of the wealth in the United States. Is a day of reckoning upon some of us? Our memories may be bad (and getting worse), but they’re not that bad.

Why U.S. Health Care Costs Aren’t Too High
Charles R. Morris

Mrs. Overclock, also known as Dr. Overclock, Medicine Woman, remarked to me the other day about an unintended side effect of the mandatory motorcycle helmet law in California: it resulted in a shortage of organs for transplantation. Folks that had never been on a motorcycle in their lives died because some biker had to wear a helmet. This is like something right out of the book Freakonomics.

I was reminded of this by Morris’ observation that the costs of individual medical procedures in the U.S. are not increasing. If anything, they are decreasing. It’s just that we’re living longer, and hence require more of them. “The people who used to die of heart attacks now live on to consume expensive medications, visit specialists, and contract cancer or Alzheimer’s. Does that mean we should stop saving heart attack victims?”

This makes me wonder if mandatory helmet laws actually drive health-care costs up. Instead of leaving a good looking and mostly intact corpse, motorcycle accidents may now create victims that require expensive medical care.

Morris notes that health care is the single largest industry in the United States, now 16% of the GDP. He projects it will rise to 25% to 30% over the next two decades based on shifting demographics alone. He likens this to how, over 150 years, agriculture went from 50% of the GDP to a tiny 3%, and to how, over the last fifty years, the share of the workforce employed in manufacturing went from one-third to 10%.

Wake up and smell the disinfectant: things change! Health care becoming a major industry may not be a problem, and even if it is, what is to be done about it? Morris says that paying for health care is an issue of financing, not affordability, and that there are no quick or easy fixes.

I’m reminded that we really have no frackin’ clue as to all the impacts of the Baby Boomer retirement wave.

In Defense of “Ready, Fire, Aim”
Clay Shirky

“The cost of trying to prevent bloggers from saying stupid or silly things… would be high, whereas the cost of allowing anyone to publish anything is low.” Thank goodness. This is another article right from the pages of The Wisdom of Crowds, and from the open source movement. Let a hundred flowers bloom. Let a hundred schools of thought contend.

Except Mao didn’t really mean it. And Shirky points out that out of 100,000 open source projects on SourceForge, most are inactive. While we know about the high profile successes, like Linux and the Apache web server, the vast majority of open source projects are stillborn. Linux and Apache are notable because they are the highly publicized exceptions to the rule.

Nevertheless, Shirky promotes open source as a path to innovation because the cost of failure is low. Large companies insist on embarking on huge boil-the-ocean projects where the economic cost of failure is counted in six figures or more. Open source allows thousands of tiny projects to be planted in the hopes that a few of them might sprout and fewer still survive the first winter. What we need is a good way to filter the successful projects from the failures as quickly as possible.

As two-time Nobel Prize winner Linus Pauling said: “You aren’t going to have good ideas, unless you have lots of ideas and some principle of selection.”

The Folly of Accountabalism
David Weinberger

Holding people accountable for their actions is one of those things that sounds (to me anyway) like a no-brainer. For one thing, you only learn from your mistakes (this is never truer than it is in software development), so if you are not held accountable, you never learn anything.

Weinberger argues that accountability assumes that there is a right and wrong answer to every question, that performance can be measured exactly, that systems go wrong because of individual actions, and that if we only knew the appropriate set of controls to put in place, the system would be self correcting. It assumes perfection can be achieved.

In an era of Six Sigma process controls, this is a very seductive idea. The problem is that in everything but the most mechanized of processes, accountabalism, as Weinberger calls it, is “blind to human nature”. It reflects a very engineering mindset (which may be why it sounds so appealing on the surface to people like me): it tries to treat people like machines. Although, as Weinberger points out, this doesn't even work all that well with machines, thanks to entropy: even machines wear out and misbehave. People are just a lot more clever and subtle about it.

This is a recurring theme. People that hang around me are tired of hearing me trot out Robert Austin’s book Measuring and Managing Performance in Organizations, but it really did change my world view. Austin presents a model based on Agency Theory (an offshoot of Game Theory that underlies much of contract and labor law) showing that no incentive plan can be perfect unless all metrics of success can be accurately measured. Then he argues that this is impossible in any information-based industry.

Weinberger is making the same argument from a different perspective. Punishing people for taking risks and failing (offering negative incentives, in Austin-speak) means that people will cease to take risks. Just like in the stock market, the higher the risk, the higher the potential payoff (and the greater the potential loss). Becoming completely risk averse brings a halt to innovation, because people will only apply what is guaranteed to work, meaning only that which has been done before.

But Weinberger is arguing something more than that: that complete accountability is impossible, in the same sense that Austin argues that perfect incentives are impossible. Weinberger’s accountabalism is another form of Austin’s measurement dysfunction.

Sources

“Breakthrough Ideas for 2007”, Harvard Business Review, February 2007, pp. 20-54

Robert D. Austin, Measuring and Managing Performance in Organizations, Dorset House, 1996

Malcolm Gladwell, The Tipping Point, Little, Brown and Co., 2002

Steven D. Levitt and Stephen J. Dubner, Freakonomics: A Rogue Economist Explores the Hidden Side of Everything, William Morrow, 2005

James Surowiecki, The Wisdom of Crowds, Doubleday, 2004

Monday, February 05, 2007

Outsourcing for Small Businesses

Digital Aggregates Corporation is a tiny little subchapter-S corporation that started out as a hobby and ended up being how I earn a pretty good living.

Subchapter-S (as opposed to subchapter-C) is a section of the Federal income tax code that determines how a corporation is taxed. I am occasionally asked why Digital Aggregates chose to be an S-corp instead of a Limited Liability Company (LLC). When Digital Aggregates was founded in 1995, LLCs existed, but were so new that there was almost no legal precedent for how much protection they actually provided, and not all fifty states in the U.S. recognized them (and would not do so until the next year). So Digital Aggregates chose to be an S-corp. If I had to do it all over again, Digital Aggregates would probably be an LLC.

Update (June 2018): Some years later I would do exactly that for my second company, in One Prototype Is Worth a Dozen Meetings.

I am also asked from time to time what services I outsource, to whom, and why. There is nothing like running a tiny business to convince you that not only is outsourcing a good idea, it is the only way you will ever have any free time to do anything other than work. If I didn’t outsource a lot of stuff, I would be missing a lot of the new Battlestar Galactica. One must have one’s priorities in life.

So here is a list.

PSTN: Qwest


Qwest provides both of our home analog telephone lines, the second one of which is a dedicated business line. That second line goes into one of the analog trunk ports of my Asterisk PBX. I have an analog phone in my home office on the second phone line to use as a backup in case my Asterisk server goes down, although that's never happened. I also have call-forwarding on my business line so if necessary I can forward it to my mobile phone.

I don’t have anything to say about Qwest, good or bad. I pay the bill and the line always works. I remember having an interesting conversation with the installer when he came out to hook the second line up. I started talking to him about the household wiring using terms like “Christmas pair” and “Halloween pair” and finally he asked me what the heck I did for a living. (These are old telephony terms for the standard color coding of the four-wire residential phone lines.)

Mobile Phone: Verizon Wireless


Verizon Wireless is my cellular service provider. They’re okay. I have not yet forgiven them for disabling some of the Bluetooth capabilities on my Motorola RAZR. (Yes, I know I could hack it.) I desperately want a Treo Palm-based smartphone, but not until Verizon offers one with WiFi. It is pretty much worthless to me until I can use an internet connection over my own WiFi network behind my firewall. Wise up, Verizon.

Update (February 2010): I eventually switched to a Blackberry Tour from Verizon. The Tour supports both the CDMA and the GSM/UMTS standards, which means it pretty much works anywhere in the world (as I discovered when my phone rang late at night during a business trip to England recently). This little smartphone lets me keep up with voice and email no matter where I go. Of course, it's also an address book, a GPS, a web browser, an MP3 player, and I can use it as a cellular modem with my laptop when WiFi isn't available. Alas, it still doesn't have WiFi, something I hope Blackberry and Verizon will rectify on a later model.

Update (November 2010): And they did: I now carry a Blackberry Bold that supports CDMA and GSM/UMTS and offers WiFi as well.

Update (December 2012): On a recent trip my Blackberry Bold, on which I rely for just about everything while traveling, failed me. The alarm clock application quit working and, jetlagged, I overslept twice. The second time I had to scramble in a mad rush to make a breakfast meeting. Later while waiting on a plane I did a series of tests that convinced me this was not user error, or a hardware failure, but some weird software bug. I am quite unforgiving about some things. So a few days later back home I marched into the Apple store and left thirty minutes later with a Verizon Apple iPhone 5 that also supports both CDMA and GSM. I already use both an iPad and a Samsung Android tablet, so I'm familiar with both environments. But since I use a MacBook Air and a Mac Mini with a Cinema display on a nearly daily basis, and I typically travel with the iPad (or with the Air if I think I'm going to have to do a lot of content generation), I decided to stick with the Apple ecosystem. I'm quite happy with the iPhone. I bought a really nice alarm clock app that I tested at home before relying on it on a trip.

Broadband Internet: Comcast


Comcast is our broadband internet service provider. Of all the services Comcast offers, we mostly just use the physical internet connection. We sometimes take advantage of Comcast’s email service just to create temporary email accounts that we can later delete, or “disavow” in Mission: Impossible speak.

My spousal unit and I both love Comcast broadband. You will have to pry it from our cold dead fingers. As I mentioned in my article “Important Safety Tip: Enable Ping With Comcast”, since I enabled responding to ping on my LinkSys router, our Comcast broadband connection has worked completely reliably. Apparently they eventually revoke your DHCP lease if the endpoint doesn’t respond to ping.

Domain Registrar: Network Solutions


Back in 1995, when I first registered the domain diag.com, Network Solutions was pretty much the only domain name registrar. Now there are a lot of them, but I’ve stuck with Network Solutions for diag.com, as well as for digitalaggregates.com and digital-aggregates.com, not to mention chipoverclock.com and a few others. Keeping all the domains with one registrar lets me manage them all through a single web interface.

IMAP, SMTP, and DNS: Indra’s Net


With your own domain comes great responsibility, like maintaining your DNS MX records, as well as IMAP and SMTP email servers. For email and all general DNS service I use a local company, Indra’s Net (those of you into Eastern mythology will get the reference), an ISP based in Boulder, Colorado.

I can’t say enough good things about Indra. When I call them, a human who can actually help me answers the phone. When I email them, I get a prompt response from a human who quickly and efficiently handles whatever I need done. And I don’t have to do these things very often because their service just works. They have a configurable spam filter, and a web-based email interface that I can use while traveling.

Update (June 2018): For my second company, I would instead use the Gmail for business feature of Google's G Suite; for a small fee, I use my new company's domain name but keep all my email in the Google cloud.

Web Hosting: Verio (formerly Webcom)

I have two web sites, one of which is hosted by the web hosting service Webcom (bought by Verio, which is part of NTT), and another hosted on an Apache server that is part of the powerful Digital Aggregates computer center. The Webcom web site is our production web site. The internal one is our R&D web site where we try all sorts of goofy stuff, as well as store big files that we don’t see a need to pay Webcom to host.

I recommend this as a strategy. Webcom provides a reliable 24x7 web service for not much money, all under the www.diag.com (or www.digital-aggregates.com or www.digitalaggregates.com) domain. Meanwhile, we have our own Apache web server to play with. Depending on where you navigate on the Digital Aggregates web site, you seamlessly move between the Webcom servers (somewhere in California I think) and our R&D server (a PC in the basement).

Update (June 2018): I no longer need the internal Apache server. I now use Dropbox for storing large files that I need to access over the web.

Update (January 2020): I now manage my web site using WordPress with the Divi theme from Elegant Themes. I use web forwarding from Network Solutions to map www.diag.com and the other domains to a DNS aliased domain (see below), which in turn is mapped to the IP address of a Raspberry Pi that I maintain and that runs an Apache web server that now hosts my web site.

Dynamic DNS: DynDNS


DynDNS has a clever service, supported directly by my LinkSys router, for almost no money: if Comcast DHCP assigns a different IP address to my router, the router automatically notifies DynDNS, and they update the DNS tables for the domain name of the Digital Aggregates R&D web server to point to the new IP address. Pretty slick and completely transparent.

Update (June 2018): For a variety of reasons, I later switched to using No-ip.com for my Dynamic DNS service.

Web Site: Fog Creek CityDesk and Telepark Forssa

I've done enough programming in pure HTML to convince me that life is too short. Sure, it's a useful skill, just like sometimes you need to know a little assembler even if you're programming in C. But I use a commercial tool, CityDesk from Fog Creek Software, to edit and manage the Digital Aggregates web site. When I chose it, there were a few other commercial alternatives, and no open source option. If I were starting fresh, maybe I could do something different. But then again, maybe not. I love CityDesk. I can easily edit and preview the Digital Aggregates web site on my laptop, then upload it via FTP to both the production server and the internal R&D server with just a button press. And for those rare cases when CityDesk's fairly simple web editor isn't sufficient, I can easily use another tool and just import the HTML into CityDesk.

CityDesk supports the use of web site templates: pre-formatted forms that make creating a web site mostly a fill-in-the-blank proposition. The German company Telepark sells several CityDesk templates for next to nothing. Although I've departed somewhat from the original Forssa template I started with, it still enabled me to get a simple but usable web site up and running with about a day's worth of work.

I know developers that still create their web pages with raw HTML through sheer force of will. And I once knew a guy that refused to write in anything except assembler. C'mon, really, you need to have some concept of what your time is worth.

Update (June 2018): I still use CityDesk and Forssa, but CityDesk is no longer supported by Fog Creek. It is only due to the kindness of strangers that it still works on my Windows laptop.

Update (January 2020): The solid state disk in the big hybrid SSD/disk in my desktop Mac died catastrophically, taking with it the VMware virtual machine image on which I ran the ancient version of Windows 8 just to run CityDesk. I had the Apple Store replace the fusion drive, and restored from a perfectly good backup. But I decided to take this opportunity to revamp my company web site by moving from CityDesk to WordPress, using the Divi theme from Elegant Themes, and hosting the site on a tiny Raspberry Pi 4B single board computer running Apache under Raspbian 10.

Blog: Blogspot (now Blogger) and Flickr

I use Google's Blogger for the text portions of my blog, coverclock.blogspot.com (a.k.a. www.chipoverclock.com), and Yahoo's Flickr for the images, including both photographs and diagrams. I learned this mash-up technique from Demian Neidetcher, although for sure he didn't invent it either. Blogger is free. Flickr can be free too, but I signed up for a low cost Flickr account to get the ability to upload more data faster.

I think the new Blogger needs some work in the composer tool department. It's a pain to do detailed articles which involve embedded source code. And embedded XML is, somewhat understandably, especially painful. I frequently have to drop down into HTML editing, which I liken to programming in assembly, and then can't go back to the composer without losing my changes. But for the most part this combination works well for me.

This is a trivial example of the kind of web service and software-as-a-service mash-ups that are becoming increasingly common. If this is the direction that things are going, I'm all for it. Services like these which are, if not free, at least very inexpensive, are tremendously enabling technologies for small businesses. I know folks that run web-based retail businesses through sites like Amazon.com and EBay, letting those sites handle their web presence. I have used all of these services as a consumer.

Payroll: Paychex

Do payroll? Figure out withholdings? Report all the various corporate taxes quarterly to the appropriate state and federal authorities? I have people to do that. I just tell Donna of Paychex how much I want to pay myself every month, and she does the rest, all by electronic funds transfer from a Digital Aggregates checking account to my personal checking account.

I suppose there are other equivalent payroll services, but I really like Paychex. They were recommended by my tax accountant, and boy was he right on the money (as usual). They are a national company, but when I deal with someone, I deal with Bruce, my payroll specialist. Bruce answers all my questions, and so far has never made a mistake. I have no idea where Bruce is located, but I dial a local number, and he speaks English. Bruce is my go-to guy for payroll.

Paychex sends me a complete payroll report every payroll period (for me, that’s monthly), as well as a pay stub and end-of-year W-2 form that says “Digital Aggregates Corporation”. That is very cool.

Registered Agent: Corporation Services Company (CSC)

Until recently, Colorado required every corporation incorporated in the state to either have an office staffed during normal business hours to receive official paperwork from the state, or to have a registered agent that does this and handles the forwarding of the paperwork. CSC is not cheap, especially given that they add almost no value to the process. Colorado has changed the law to allow tiny corporations, just like Digital Aggregates, to receive official paperwork by registered mail, eliminating any real need for me to have a registered agent. 2007 will be the last year I’ll be using CSC or any other registered agent. I recommend avoiding using a registered agent unless you absolutely must.

Update (November 2008): I did exactly this, dropped my registered agent, starting in 2008.

Accounting: Intuit Quickbooks

So far I have not had a need for an accountant just to keep the books. I use Intuit’s QuickBooks, on the recommendation of my tax accountant. Right as usual. Being a techie, grasping basic arithmetic, and being able to balance my own checkbook every month, I’ve found what little accounting I need to do quite feasible with QuickBooks. Note however that I am not doing payroll or withholdings; Paychex is handling that. So my accounting needs come down to generating invoices to clients and reconciling the corporate account every month (or so). QuickBooks works just fine for that.

Taxes: Your Own Tax Accountant

I have used the same tax CPA for both my corporate and personal taxes since 1989. The man is absolutely invaluable. Not only does he handle all my tax stuff, but he provides a wealth of valuable advice about the day to day details of running a small business, partly because he is a tax accountant, and partly because he is a small business himself and has to do all this stuff. Even though you could certainly do your corporate taxes yourself, I recommend finding a CPA you know and trust and establishing a business relationship with him or her, and let them keep you out of jail. I have found it to be worth every penny.

Important safety tip: unlike personal tax returns, corporate tax returns are due March 15th.

Pension: SEP IRA

Here is where you really need your own tax expert. After discussing my options for a pension plan with my tax accountant, I set up a Simplified Employee Pension plan, or SEP. A SEP is a special type of Individual Retirement Account (IRA). Another alternative would be a 401(k) retirement account. I felt that the 401(k) got complicated, particularly if I ever had additional employees. The SEP was simpler and met my particular needs. I can't really give you any advice here except to discuss this issue with your own tax accountant and choose the financial instrument that makes the most sense for you. The take-away here is that you can be self-employed and still have a tax-deferred pension plan.

Insurance: Your Independent Insurance Agent

Update (March 2010): Several of my larger clients have asked me to carry Commercial Liability (CL) insurance and Professional Liability a.k.a. Errors & Omissions (E&O) insurance. I found this to be a reasonable requirement, and probably a good idea in any case. Back in 2006, I found CL and E&O policies nearly impossible to find for my one-man corporation. Policies could be had through the professional organization IEEE, of which I am a member. But they were aimed at professional licensed engineers, not megalomaniacal supervillains such as myself.

Since then, things have changed. I'm guessing the increase in the number of self-employed has created a significant market for just the kind of thing I needed. My home and auto insurance agent referred me to an independent agent who helped me get CL and E&O policies with just a little paperwork. The policies aren't necessarily cheap: CL is about $500 annually and E&O is about $1500 annually. But both I and my clients rest easier.

Stuff: Amazon.com

I routinely buy used technical books at a fraction of their new price through Amazon's used book dealers. (Mrs. Overclock -- a.k.a. Dr. Overclock, Medicine Woman -- expresses concern when I don't get my daily Amazon shipment.) I recommend this approach for all technologists. Most high technologies have a half-life of about five years, but we're all on different points on the innovation adoption curve. There's no reason to spend sixty bucks for a book that you may be able to get (barely) used for twenty.

But I have also found Amazon to be a great source for small -- and not so small -- parts and equipment. Remarkably, I even bought a PC-based oscilloscope through Amazon. I have yet to purchase something as expensive as a laptop from them, but I can see the day coming where they will be my preferred supplier, or at least the sales room for all my suppliers. Amazon has effectively become my purchasing organization.

Mailing Address: The UPS Store (formerly Mailboxes Etc.)

Update (March 2010): When I'm slaving over a hot laptop, I'm most typically at my client's site, but occasionally in my home office, and sometimes at the local coffee shop. But on my business card, I want to give the impression that I have a palatial suite of offices that is the hub of my vast business empire. Not to mention I don't necessarily want to broadcast my home address. That's why I rent a mailbox at my local UPS Store. I get an actual street address which can accept both postal mail and package deliveries, and will notify me by phone or email if something interesting arrives. My UPS Store also provides notary services, copying, shipping, custom printing, both incoming and outgoing faxing, and any number of other handy business services.

Best of all is its location: it's in a strip mall just a few steps from the Starbucks that Mrs. Overclock and I frequent, and in the same parking lot as the local grocery, liquor store, and even my bank. I'm there two or three times a week running one errand or another, so checking my mailbox is a no brainer. It's also a good excuse to drop in for an afternoon latte and to get some reading done.

Company Credit Card and Checking Account

Update (March 2010): Having a company credit card vastly simplifies bookkeeping. Many goods and services purchased for the company can be directly charged to the credit card, eliminating any reimbursement of expenses. Just be sure you itemize each class of purchase on your credit card statement in your accounting system for eventual use by your tax accountant. This is not as hard as it sounds. My company card pays for things like office supplies, the monthly charge for the outsourcing of the company web site, reference materials from Amazon.com, the company mailbox, and business related travel.

Preventing the commingling of personal and company money is pretty much a legal requirement for a corporation or an LLC. Even apart from that, a company checking account is a good idea and simplifies bookkeeping.

If you're going to be self-employed, you better know what your time is worth, especially if you bill by the hour. Outsourcing is one way to help you spend your time where it delivers the most value, even if that value is your own leisure time.

Sunday, February 04, 2007

The Secret Life of Chip Overclock Revealed at Last!

My article “Asterisk, WiFi, HomePlug, and an Avaya SIP Phone” inspired me to reveal some of the technical infrastructure of what I like to think of as the Palatial Overclock Estate, but what is more typically described by the media as the Heavily-Armed Overclock Compound. Besides, Demian Neidetcher tells me I should blog about more mundane stuff, like my “cat in a sink”. So, Demian, there’s a photograph of one of our cats here. (Click on any photograph to see a larger version.)

The Nexus of the Vast Overclock Empire

The Nexus of the Overclock Empire

Here I am busily accomplishing vitally important strategic corporate goals in my home office. Visible here (clockwise from top left) are my backup analog business phone, my Avaya 4610SW SIP phone, my IBM Thinkpad in its docking station, and my flat panel display with keyboard and wireless mouse. The Thinkpad connects via WiFi, the SIP phone via HomePlug. Also visible are other critical business tools like a magic eightball, a photograph of my favorite musical group Swing Out Sister posing with a Triumph motorcycle, the cat Jiji from the Japanese anime movie Kiki’s Delivery Service, a red Swingline stapler just like in the movie Office Space, an illuminated Welcome to Fabulous Las Vegas sign, and a bamboo plant which I haven’t managed to kill yet. A Netgear HomePlug bridge is plugged into a wall outlet behind all of this.

The Multi-Purpose Spousal Unit

My Spousal Unit

When Mrs. Overclock (a.k.a. Dr. Overclock, Medicine Woman) isn't heroically saving lives, or enriching the cultural heritage of the world via filk music, she spends her time cruising the web looking for new catalog web-order sites. Here she can be seen with her WiFi-connected Mac Powerbook, which you will have to pry from her cold dead fingers.

The Network Equipment Stack

The Network Stack

Here you can see Bastet, one of our technocats, working on some network cabling behind the A/V cabinets. We have had some success outsourcing the more mundane infrastructure work to other species. The blue boxes on top of the rightmost cabinet, just to the right of the Bender robot from the TV series Futurama, are the LinkSys network equipment stack. The TiVo is on a shelf directly underneath the television. Below that is a VCR which we have never used since buying the TiVo, and probably never will. Another Netgear HomePlug bridge is plugged into a wall outlet behind all of this.

The Computer Center

The Computer Center

Here is the powerful Digital Aggregates computer center situated in its secure underground facility. On top of the table is an H-P full duplex color laser printer, which dwarfs everything else. Underneath the table (left to right) is a Dell server running Fedora Core Linux with Asterisk and Apache, an old Compaq laptop running Fedora Core Linux that serves as a build server and Subversion repository, a UPS, and the all important document shredder. The Dell server is networked via HomePlug, the printer and the build server via WiFi. Also visible on the wall is a Bullwinkle clock that runs counterclockwise and from which all other network timekeeping is synchronized. Yet another Netgear HomePlug bridge is plugged into a wall outlet behind all of this.

In a future article I'll describe the stuff you don't see in these photographs: not only the software involved, but what services we outsource and to whom and why.

Saturday, February 03, 2007

Asterisk, WiFi, HomePlug, and an Avaya SIP Phone

The Digital Aggregates corporate data center, such as it is, is a mash-up of bits and pieces of hardware spread all over the palatial Overclock estate. The hardware was selected by the tried and true method of mostly whatever works and sometimes whatever was handy at the time.

The central network stack is a pile of several LinkSys boxes on top of one of the A/V cabinets in the family room. This is simply because our broadband internet provider is also our cable television provider, Comcast. Proximity to the cable was necessary for the enablement of two critical activities in the Overclock household: watching the new Battlestar Galactica and cruising the web. One of the LinkSys boxes is a cable modem, and another, predictably, is a router/WiFi access point.

There are a lot of devices on our network, including an H-P full-duplex color laser printer with integrated print server (running VxWorks, as it turns out), several Linux servers, Mrs. Overclock's Mac Powerbook, my IBM Thinkpad, my Palm PDA, our TiVo, an Avaya 4610SW SIP (VOIP) phone, and probably some other stuff that I've forgotten about.

Although the router has a four-port Ethernet switch, until now only one device was plugged into that switch, our TiVo digital video recorder, the machine without which life as we know it could not exist. As Mrs. Overclock has been known to say, "TiVo loves us and wants us to be happy!" The only reason the TiVo warranted such special treatment was its physical proximity to the router. Otherwise, it would have been a citizen of the WiFi network, just like all the other devices on the heavily-armed Overclock compound.

The SIP phone is my principal telephone in my home office. It is serviced by one of the Linux servers, which runs Asterisk, an open source PBX. The Asterisk server has a four-port Digium card that connects it to an analog phone (used only for troubleshooting), and to the PSTN via our second phone line. I chose the Avaya SIP phone because it had both of what are for me absolutely killer features: a speakerphone with great sound quality, and a jack for a high quality two-ear Plantronics headset. The former is necessary for the way I tend to work during boring phone calls (present company excepted of course). The latter is for my less than perfect hearing, which is the result of a misspent youth (no, not rock music, but firearms, motorcycles, and mainframe computer rooms). The Avaya 4610SW is a professional quality phone. It is not cheap, but its sound quality and headphone capabilities make it worth every single penny. It looks kinda cool too.

Note that this is a completely unsupported configuration: an Avaya 4610SW connected to a WiFi network serviced by a SIP proxy and registrar running inside Asterisk. Once upon a time when I was doing product development for Avaya, someone from their CTO organization actually called me up and warned me not to ever admit that I was using Asterisk, nor that I was using an Avaya SIP phone with it, nor to ever help anyone ever set up such a configuration, nor to even admit that such a configuration was even possible. So here is my disclaimer: if you have an Avaya SIP phone and are thinking of running it with Asterisk, quit reading this article right now. You have been warned. And if you are an Avaya executive, please tell your CTO people to pull their heads out of their asses.

This configuration worked pretty well for a couple of years. In the past few months though I have been having a disconcerting problem of very occasionally having phone calls from the PSTN to my SIP desk phone drop. Picking up an analog phone connected to the same phone line revealed that the other party was still on the line, and in fact had no idea that the call had been dropped from my point of view. Several hours here and there spent testing, perusing Asterisk logs, running Ethereal traces, and just about anything I could think of, turned up nothing.

Then one day while cruising the web, I started to notice articles complaining about how when the density of WiFi networks in a neighborhood reaches a certain point, connections start to drop unexpectedly. When I first brought WiFi into the Overclock household, there was exactly one other WiFi network visible in our neighborhood. Now there are at least a half dozen, and maybe more which, like my network, do not publish their SSIDs. I began to realize that not only had I seen SIP calls drop occasionally, but once in a while I had problems just downloading web pages too, and had to tell my browser to stop and reload, finding that it worked fine the second time.

So began a quest to find another LAN solution, and after some research, HomePlug presented itself. HomePlug is an industry standard that allows you to run Ethernet over your household electrical wiring at rates somewhere around thirty megabits a second, depending on whose propaganda you believe. After reading a lot of reviews, I departed from my usual network vendor, LinkSys, and settled on NetGear.

Why the change? You may recall that in my article on the book Five Myths of Consumer Behavior, I mentioned how the authors preached simplicity and ease of use as the keys to rapid consumer adoption. The LinkSys manuals for their HomePlug gear had long, multi-step setup procedures. The reviews of the NetGear boxes had users saying "I just plugged it in and it worked!" Yeah, baby, that's what I'm talking about, give me some of that.

So I ordered three NetGear XE104NA "PowerLine" four-port HomePlug Ethernet bridges from what has become my favorite computer vendor, Amazon.com. One for the Asterisk server in the basement, one for the router in the family room, and one for the SIP phone in my office.

Installation was almost that simple. The Digital Aggregates network became a combination of 100BaseT, WiFi, and HomePlug. I could telnet from my WiFi laptop to the HomePlug server. I could ping from one of the WiFi servers to the SIP phone. And when I booted the SIP phone, it could download its firmware and configuration from the HomePlug server. But the damn phone would not register with the SIP registrar on the very same HomePlug server.

I'll spare you the drama of the Ethereal traces, the traceroutes, the pings, the firewall experiments that took up a couple of hours on a Saturday morning. None of them contributed in any way to the solution except to convince me that the problem was something a lot less obvious. And so it was. To me anyway. All solutions are obvious once you know the answer.

The 4610SW implements the 802.1Q standard, a mechanism that allows multiple bridged virtual LANs to share the same physical LAN while keeping their data packets segregated. This makes a lot of sense for corporate environments. The default value for 802.1Q on the 4610SW was "auto", which also makes a lot of sense. Unfortunately, it doesn't seem to work on my new network configuration. Even though the SIP phone and the Asterisk server are on the same HomePlug network, the Netgear bridges apparently make them appear to be on different VLANs. I disabled this feature on the phone, and five seconds later all was well.

That's all it took. Otherwise, it was just a matter of freeing up some wall outlets for the Netgear boxes, which are each about the size of a largish power brick, moving the Ethernet cables from the WiFi boxes to the Netgear boxes, and standing back. It was that simple. It is too soon to tell if this will solve my problem of dropped VOIP calls, but it has had an unexpected side benefit. The HomePlug network apparently has lowered the network latency to where the echo canceller in the 4610SW can handle it. Prior to this, on the WiFi network, I had some echo back of my own voice, probably from the hybrid analog interface to the PSTN, which, fortunately, only I could hear. That seems to be gone. Huzzah!

Time will tell.

Update

By way of an update, I've had several longish calls recently, in addition to my own test calls, using this new configuration, and have yet to have a VOIP disconnect. This proves nothing, but it is at least encouraging. On the flip side, if WiFi does have a problem when densities get high, then unfortunately as a technology it has nowhere to go but down.

Update

So here's the latest on Asterisk, HomePlug, and my Avaya SIP phone. As of yesterday, I had a couple of calls drop in the middle. This is the first time this has happened since switching from WiFi to HomePlug to connect my Asterisk server to my Avaya 4610SW. Investigation continues.

Meanwhile, today I upgraded to Asterisk 1.2.16 (I'm embarrassed to admit how old my prior version was, but I installed it almost exactly one year ago today), and also to the Zaptel 1.2.15 drivers. I did this primarily to take advantage of Digium's proprietary host-based High-Performance Echo Canceller (HPEC), which they offered free to folks with a Digium card still under warranty (barely, in my case). So far initial testing is all good: with no echo cancellation, the hybrid echo was terrible. With the open source echo canceller, it was barely bearable. With HPEC, it's gone completely. (I try not to think how many cycles this is consuming, but it's ONE phone and a server dedicated to Asterisk and some internal Apache use.)

I initially tried this with Zaptel 1.2.14 (the latest rev at that time) and could not get the zaphpec_enable tool to recognize that I had HPEC enabled ("it appears that this driver was not built with HPEC enabled", which was patently untrue as any number of tools verified). But Zaptel 1.2.15 seems to work fine.

Update

I used a LinkSys 100Mb/s WiFi bridge, a little box that converts from wired Ethernet to WiFi. The phone has a power-over-ethernet brick that sits between it and the WiFi bridge (now between it and the HomePlug adaptor). I converted both the Avaya 4610SW SIP phone and my Asterisk server from WiFi to HomePlug.

I'm still having problems with very occasional disconnects while using HomePlug. I recently upgraded the Avaya phone to the latest firmware (from 46xxSIP_101005 to 46xxSIP_032207) and am waiting to see if that helps.

I've never been able to reproduce this disconnect problem at will, so testing this is a pain. I now wonder if it's something like the old problem where the message waiting update from Asterisk would crash the phone. This was a known problem with the SIP firmware caused, according to Avaya, by a malformed or otherwise invalid SIP message from Asterisk. You can disable the feature in Asterisk by commenting out the "mailbox=" line in your sip.conf file.
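For what it's worth, here is a minimal sketch of that workaround. The peer name 4610 and the voicemail context default are hypothetical; edit whatever section in sip.conf corresponds to your phone:

[4610]
type=friend
host=dynamic
; mailbox=4610@default ; commented out to disable message waiting updates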

I recently installed the proprietary host-based echo cancellation software from Digium. I recommend it if you're using Asterisk and a Digium analog card to connect to the PSTN. It definitely made a big difference in sound quality.

Update

Just a couple of days ago I had another rash of call droppings. The call came in over the analog PSTN line to Asterisk, then went via SIP over HomePlug to my Avaya 4610. It dropped twice on the same caller. (Sorry about that.) So it's back to debugging.

Update

I'm admitting defeat. After a couple of years of trying to get an Avaya 4610SW SIP phone to work reliably with Asterisk, I've just ordered a LinkSys (Cisco) SPA942 to replace it. My two must-have applications for a desk phone are [1] a decent speaker phone, and [2] a Plantronics headphone jack. We'll see how the SPA942 fares in this regard. Just when I think I've got the 4610 working reliably, it drops a call (something the two soft phones I've used have never done). I'm calling it quits.

Update

I installed the LinkSys (Cisco) SPA942 on my Asterisk server this AM. The hardest part was sorting through the admin web pages (the phone has a built-in web server, like all the LinkSys gear I've used) to find where the IP address of the SIP proxy (my Asterisk server) was administered. Other than that, everything else was just gravy. I did some machinations in my Asterisk dial plan to attach a second phone number to the phone, just because I could, as sketched below. The voice quality of the handset, speakerphone, and headset all seem quite acceptable. I had to buy an inexpensive 2.5mm adaptor for my Plantronics headset, but other than that it all pretty much worked right out of the box.
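Those machinations amounted to just a couple of lines of dial plan. Here is a minimal sketch, not my actual extensions.conf; the peer name spa942, the context name, and the extension numbers are all hypothetical:

[internal]
; the phone's primary number
exten => 100,1,Dial(SIP/spa942,20)
; a second number attached to the same phone, just because I could
exten => 101,1,Dial(SIP/spa942,20)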

Update

My ancient Digium line card finally gave up the ghost. I had to install the latest hardware from Digium which, happily, is available with a hardware echo canceler. I also had to upgrade to Asterisk 1.8.3.2, which was a moderate bit of effort. So far, so good.

Friday, February 02, 2007

Five Myths of Software Developer Behavior

I’ve been known to read a book or two on management or marketing now and then. I know it’s a personality flaw on my part. I should be reading books on agile development processes or the latest new language like Ruby on Rails. But the fact is most high technologies have a half-life of about five years. When you get to be a certain age, and have survived a thirty-year career in high tech, you have seen so many technologies come and go it’s hard to get excited about something that in the long view seems like just another fad. Plus, you figure maybe it’s a good idea to practice just-in-time learning instead of expending bandwidth with stuff that you might not live long enough to use. Time is short, and you want to make the most of what you have left.

Even while reading a book on marketing, it is hard not to see it through a lens polished by decades of technical experience. And so it is that when I read Paul Smethers’ and Alastair France’s book Five Myths of Consumer Behavior: Create Technology Products Consumers Will Love, I chose to see it not as a book on how to design and market consumer-friendly products, but as a book on API design, and to see the consumers described in the book as other software developers using my code.

Is that sick, or what?

I’m sure if I specialized in user-interface design, such as GUIs or web pages, this short book would have even more resonance, since it talks specifically about the design of human interfaces for cell phones and web pages, based on work by the authors. But I found much of value just from my own experience developing real-time and embedded software for use by other developers.

Myth 1: Consumers behave the same in all markets

Five Myths describes the Consumer Adoption S-Curve, a sigmoid curve that passes through the phases stalled adoption, rapid adoption, and mass adoption over time. The first consumers who use your product will do so because they are curious, seek innovation, or have a problem they think your product can solve. As we will see later, only a fraction of those initial users become repeat users, only a fraction of those become regular users, and only a fraction of those become power users who may come to know more about using your product than its developers. If the barriers to entry to using your product are too high, you will weed all of your users out at the first stage, and your product will become stalled in the first (low) part of the sigmoid curve. The key to widespread adoption is lowering the barriers to entry. Barriers to entry include poor usability, poor performance, and value that is hard to find.

I'm sure many readers have recognized this as being very similar to the cumulative Innovation Adoption Curve described by Everett Rogers in his book Diffusion of Innovations. Rogers separates his sigmoid curve into the phases innovators, early adopters, early majority, late majority, and laggards. While Smethers and France concentrate on talking about mass-market consumer products like web pages and cell phones, Rogers talks about how innovations ranging from new agricultural technology to rap music spread through a culture. The similarity of these two concepts suggests to me that both books are talking about the same thing, and that it is something fundamental.

Five Myths offers examples where consumers in different markets, for example Europe versus Japan, behave very differently in terms of what they find interesting or acceptable, making the same product or feature move through the sigmoid curve much more quickly in one market than in another. Success in Japan does not guarantee success in Europe when marketing the same product or feature.

A good example of this is to compare developers of enterprise software to developers of embedded software. The success of Java in the enterprise was possible because that domain’s developers found the overhead of the JVM and its garbage collector vastly outweighed by its many advantages. Embedded developers just smiled and shook their heads. Historically the same was true of C++, and lately of its more exotic features like generic programming.

I have argued in recent articles that it is time for Java (“If Java is the new COBOL, is C++ the new assembly?”) and generic programming (“Generic Embedded Programming Using Templates”) to take their place on the embedded developer’s tool belt. But it is wise to remember that as technologies become mainstream, they filter through problem domains ranging from innovative to conservative, just as Five Myths describes.

Want to get a clue what embedded developers will be doing and using a few years from now? Look at what enterprise developers are doing and using today.

Myth 2: The more consumers see it, the more successful it will be

The biggest marketing effort in the world can’t make up for an ugly product. The secret isn’t attracting users. It’s keeping users. Marketing might get potential users to become initial users, but they won’t stick with an ugly product long enough to become regular users. Products can be ugly (my term, Five Myths would probably say "unattractive") for lots of reasons, like poor performance, poor usability, or value that is too hard to find.

Five Myths has a pretty interesting discussion on how products abandon their core consumers. Marketing managers insist on adding more and more features in an effort to broaden the product’s appeal. It evolves over time to the point where much of its installed user base can no longer find value in the product.

(A good metaphor for this effect can be found in popular entertainment. I was a fan of the television series Twin Peaks until I figured out that David Lynch was just making it all up as he went along and there was no hidden context. The series was cancelled shortly afterwards. The X Files came perilously close to this with its meandering mythology based on a government/extraterrestrial conspiracy. But it sustained enough great episodes that it remained worth watching. And of course there was the UNblonde character.)

I remember installing one of the popular Linux distributions, Ubuntu, on one of Digital Aggregates’ servers, only to discover that it did not include a C compiler. Apparently the folks who packaged this particular Ubuntu, which is marketed as “Linux for human beings”, were not marketing it to developers. (From this we can infer that developers are not human beings, something my spousal unit has wondered aloud about from time to time.) It was easy enough to download the GNU package and install it, but it was definitely a WTF moment for Mr. Overclock.

While I know folks that are successfully using Linux as their desktops (and I still run it experimentally on an old laptop), all of those folks are hard core technologists, definitely not part of the mainstream. My laptop runs Windows, not because I am a Microsoft fan, but because I simply don’t have the time (see first paragraph) to figure out how to do with Linux things that are relatively simple with the applications available for Windows. And before you start commenting on how Mac OS is UNIX-based, recall that it is not Linux-based (and you will have to pry the Powerbook running Mac OS X from my non-technologist spousal unit’s cold dead fingers).

I have seen the same happen to products in the embedded tool chain, such as real-time operating systems. Victims of their own success, they become so feature-rich that one of two things happens. They can no longer be used by their core constituency for their traditionally memory-constrained applications, or they require so much memory that alternatives that were formerly unthinkable, like Linux, suddenly become competitive.

While I can appreciate marketers trying to broaden the appeal of a product successful in a narrow market segment, this can be the death knell if the product loses sight of its core value: the problem it was originally trying to solve.

Myth 3: If I’ll use it, my users will

On one hand, the points that Five Myths makes seem almost too simple and obvious. On the other, they are contrary to many truths that my technologist brain holds to be self-evident. This is one of the points of the book: engineers are power users of their own products, and are therefore the least qualified to have an opinion on how to make a product attractive to a new user.

It is difficult, for me anyway, to view the APIs that I design from the point of view of someone approaching them for the first time. Yet that is exactly what I think you need to do: try to see them with new eyes. And be willing, eager even, to evolve them once you have some real developers using them and giving you feedback. Don’t be afraid to say “this was a stupid idea” and move on. If it really is a stupid idea, why would you want to keep it around? Maybe it’s just stupid in the current context. So keep it in your hip pocket for when you find a context in which it is a brilliant idea. But don’t force it on your customers.

I’ve become skeptical over the years of design reviews that do not incorporate some kind of actual use of the API as part of its evaluation. The most useful technique I have found that did not require running code was to at least code up the interfaces or header files defining the API, with no executable code behind them, and code up some example programs using them to see how it feels. Even without any executable code, I was surprised how many problems this technique shook out. Plus, if you comment the interfaces with JavaDoc or Doxygen, the result is the beginnings of a usable (and current) design document.
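To make this concrete, here is a minimal sketch of the technique in Java. The LogSink interface and its methods are hypothetical, invented purely for illustration:

/**
 * A sink that accepts log messages from an application. There is
 * deliberately no executable code behind this interface yet; it exists
 * only so the API can be reviewed by writing example callers against it.
 */
public interface LogSink {

    /** Severity levels understood by the sink. */
    enum Severity { DEBUG, INFO, WARNING, ERROR }

    /** Logs a message at the given severity. */
    void log(Severity severity, String message);
}

// A throwaway example program written solely to see how the API feels.
class LogSinkExample {
    static void demonstrate(LogSink sink) {
        sink.log(LogSink.Severity.INFO, "starting up");
        sink.log(LogSink.Severity.ERROR, "something went wrong");
    }
}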

Of course this sounds an awful lot like some of the precepts of agile development, like working closely with the customer, and test driven development, and so it is. I find a lot of API issues while writing the unit tests. These problems range from “this isn’t pretty”, to “this is awkward” to “this doesn’t work at all”.

Myth 4: Consumers will find a product’s value

Five Myths says you have to make your product's value obvious right up front, and make it easy for consumers to find it. Using web pages for mobile devices as an example, you lose customers with every single mouse click and menu drop-down they have to go through to reach the feature they really want to use. With every step consumers have to take, a proportion of them give up and spend their attention budget elsewhere. Place enough barriers to entry in their way, and they all give up, weeded out before they ever figure out why they should care.

So it is with APIs. On a Java project I remember comparing JAXB with XMLBeans as tools to convert between Java objects and XML. I didn’t particularly like either one of them, but for sure the one I liked least was the one that seemed to take more work to get something usable.

Five Myths is the latest among books I’ve been reading that push an evidence-based approach: evidence-based medicine, evidence-based management, and now evidence-based marketing. It’s easy to be lulled into thinking that evidence-based marketing is the norm, what with all the talk about focus groups and consumer surveys. But Five Myths tells us that most such approaches are deeply flawed, and frequently serve only to reinforce beliefs mistakenly held by the product developers and their marketing managers.

Five Myths suggests an approach I recall reading as having been used by the early Intuit developers: give your product to real customers, follow them home, and watch them while they use it. The most useful feedback I’ve ever gotten on how difficult most systems are to use is when I watched over the shoulder of an actual customer as they tried to use it. The second most useful is when I tried to do the same while an irate customer was standing over my shoulder. This is why I strongly recommend that developers interact directly with customers. And not just in the customer briefing center either, which is a restrained and carefully controlled environment, but at the customer site, where the products are being used. I guarantee that it will be a valuable learning experience, albeit perhaps a painful one for all involved. (You might even make a friend or two.)

I’ve come to think of this as evidence-based design. Design based on facts and measurements, not on what you think is right. Agile proponents will recognize this as a form of don’t build it until you need it and incorporate the customer into the development process.

Myth 5: Consumers want more features

Consumers do not want more features. More features only confuse them and serve to hide the value (if any) in your product. When it comes to features, less is more. It is better to have a few (maybe as few as one) key features that work well, than a lot of features that all work marginally. This contrasts sharply with the boil the ocean approach of many modern software products.

Five Myths classifies features as to their attractiveness, their stickiness, and their usage. Attraction is the ability of the feature to attract consumers to use it. Stickiness is the ability of the feature to attract repeat users. Usage is the ability of the feature to engage a customer over a period of time. The most successful features are those that have high measures of all three metrics. But a feature that is just attractive, like a built-in MP3 player, may encourage potential users to become initial users, where they discover a sticky feature, like text messaging, that keeps them coming back. A high usage feature, like cellular phone calls billed by the minute, increases revenue.

This plea for simplicity describes well the problems I have with many software tools, frameworks, and APIs. They are just too damn complicated. It takes too long to come up to speed on them (see first paragraph) and to ferret out the best practices for using them, when all I really want to do is make my dates, ship a product, and book revenue. (All good things come from booking revenue.)

And many software products seem to be designed by a committee, having every single imaginable bell or whistle, with a cumbersome API that tries to be all things to all people. The POSIX Threads API comes to mind. Of course, like many public standards, it was designed by a committee, and does try to be all things to all people. It is no accident that the original, simple UNIX API was the product of just a couple of guys sitting around in the lab. Popular new tools, like Ruby on Rails, are frequently the product of just one or two people trying to address their own issues and coming up with a particularly effective solution.

My own experience has caused me to re-think how I design APIs for the software that I write, and Five Myths just reinforces it. In particular it has changed how I design something as mundane as constructors. In recent work in both the Desperado C++ library and the Buckaroo Java library, I have started putting only the absolutely essential arguments in the constructors, and setting all other options via dependency injection using settors which return an object reference.

For example, Buckaroo features “Chip’s Instant Managed Beans”, classes which vastly simplify exposing useful information as MBeans to remote JMX browsers like JConsole. (For those of you in the C++ world, this is akin to exposing MIBs to remote SNMP browsers, which can be a very good thing indeed for system monitoring and field troubleshooting.) The constructor of the Buckaroo Counters class takes one mandatory argument, an enumeration; creates an array of counters, indexed by that enumeration, for use by the application; and exposes that array as a Dynamic MBean via JMX. It makes exposing debugging counters and even simple controls absolutely trivial.

Of course, there are some optional parameters for this capability, like the name of the managed bean, the location of the mbean server, a logger to use for error messages, and a callback that tells you when a counter has been changed remotely. But there are usable defaults for all that stuff. Instead of cluttering up the class with a bunch of polymorphic constructors, Counters has settors for those options. And since each settor returns a reference to the Counters object, you can write your application code as simply as this


enum Fault {
    FAULT_INVALID_CONTEXT,
    FAULT_OUT_OF_SPEC,
    FAULT_INVALID_CHECKSUM,
    FAULT_SEQUENCE_NUMBER
}

Counters fault = new Counters(Fault.class);


or as complex as this


enum Fault {
    FAULT_INVALID_CONTEXT,
    FAULT_OUT_OF_SPEC,
    FAULT_INVALID_CHECKSUM,
    FAULT_SEQUENCE_NUMBER
}

Counters fault = new Counters(Fault.class)
    .setCallBack(myCallBack)
    .setMBeanServer(myMBeanServer)
    .setMBeanName(myMBeanName)
    .setLogger(myLogger);


I know this sounds like a mundane example, but I think that not only does this simple constructor design lower the cognitive cost of entry for using this class, but the fact that you don’t really have to know what an MBean server or an MBean name is in order to make effective use of it also makes its value more obvious.

This isn’t a new pattern, and I certainly didn’t invent it. But I’ve recently warmed to it, and Five Myths just convinces me that this is the right way to go. Five Myths preaches simple usage, and if you absolutely must have options for the power user, hide them and provide usable defaults.
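For the curious, here is a minimal sketch of the shape of such a class. The option types are stand-ins (plain Object) and the bodies are illustrative; this is not Buckaroo's actual implementation:

public class Counters {

    // The one mandatory argument: the enumeration that indexes the counters.
    private final Class<? extends Enum<?>> type;

    // Optional collaborators, each with a usable default.
    private String mBeanName;
    private Object mBeanServer;
    private Object logger;
    private Object callBack;

    public Counters(Class<? extends Enum<?>> type) {
        this.type = type;
        this.mBeanName = type.getName(); // a usable default name
    }

    // Each settor returns this so that calls can be chained.
    public Counters setMBeanName(String name) { this.mBeanName = name; return this; }
    public Counters setMBeanServer(Object server) { this.mBeanServer = server; return this; }
    public Counters setLogger(Object logger) { this.logger = logger; return this; }
    public Counters setCallBack(Object callBack) { this.callBack = callBack; return this; }
}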

The Rapid Consumer Adoption Process

Five Myths defines the Rapid Consumer Adoption Process. It goes like this: Design Product, Implement Design, then iterate in the cycle Deploy Product, Analyze Product, Improve Product, until golden, brown and delicious. Developers will recognize this as just a form of iterative development. But this cyclic process is more controversial than you might think, and getting more so all the time.

As competition gets tougher, budgets get tighter, and companies come under more pressure to improve the bottom line while shortening time to market, the fact that all product development is fractally iterative sometimes gets lost. I remember ten years ago working on one product for as many as six release cycles, or about three years. In the past few years, the norm is more like two or even one release cycle before I’m pulled off and put on the next crash priority project.

This is not good. It's not that products were shipped too soon, but that after they were shipped there was no budget to keep developers around to improve them. I felt like I was shipping nothing but prototypes. Customers felt it too: many complaints were heard about slippage in quality. But the first releases of those later products weren't any worse than those of the earlier ones. In fact, thanks to improved tools and better processes like test driven development, they were often better. But they never got much better than that first release, and it was really this lack of maturing, I think, that the customers were complaining about.

Every time I complain about quality, I feel especially old and cranky. I remember when I had to hike three miles to the computer center, carrying a box of punch cards, through a foot of snow, uphill both ways. The fact is programming languages, development tool chains, and processes have vastly improved since I started my career in the 1970s. There is no way I would want to go back. But Five Myths is just another voice telling me there is still plenty of room to move forward.

Sources

Paul Allen Smethers, Alastair France, Five Myths of Consumer Behavior: Create Technology Products Consumers Will Love, ConsumerEase Publishing, 2007

Everett Rogers, Diffusion of Innovations, Simon & Schuster, 1962

Chip Overclock, “Generic Embedded Programming Using Templates”, January 2007

Chip Overclock, “If Java is the new COBOL, is C++ the new assembly?”, January 2007

Chip Overclock, “Uncle Chip's Instant Managed Beans”, December 2006

Buckaroo, Digital Aggregates Corp., 2007

Desperado, Digital Aggregates Corp., 2007