Wednesday, February 14, 2007

Small is Beautiful, but Many is Scary II

Space Cowboy Steve Tarr reminds me why I miss working with him (as if I really needed any reminder). In a comment on my prior article on this topic, in which I reviewed the white paper The Landscape of Parallel Computing Research: The View from Berkeley, he points out quite rightly that the future of many-cores in the embedded world is not a thousand general-purpose processing elements. Even today, many microprocessors for the embedded market have a single general-purpose core and many special-purpose cores targeted at functions like communications or digital signal processing. Steve's example was robotics, something he knows a bit about from working on firmware for unmanned space missions.

There are several economic drivers of this, not the least of which is the fact stated by the Berkeley researchers that multiple cores are actually easier to manufacture and functionally test. They produce higher effective yields, since redundant cores that are defective can be deactivated and the resulting microprocessor sold at a lower price. Embedded guru Jack Ganssle points out in his seminar Better Firmware Faster (review) that separating large development projects onto different host processors can actually reduce time-to-market. And many real-time designs are simplified by the ability to dedicate a processor to a task; polling no longer seems like a bad thing, and jitter in hard real-time scheduling is reduced or eliminated. For sure, there are lots of reasons to like the many-core approach.
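Here's a minimal sketch of what "dedicating a processor to a task" buys you: with a whole core to spare, a device can be serviced by a tight polling loop instead of an interrupt handler, so the worst-case latency is bounded by the loop itself. The register addresses, bit definitions, and the enqueue_sample() hook below are all hypothetical, just to make the idea concrete.

```c
#include <stdint.h>

/* Hypothetical memory-mapped device registers and status bit. */
#define STATUS_REG  ((volatile uint32_t *)0x40001000u)
#define DATA_REG    ((volatile uint32_t *)0x40001004u)
#define DATA_READY  (1u << 0)

/* Hypothetical hand-off to whatever consumes the data. */
extern void enqueue_sample(uint32_t sample);

/* Entry point for the core dedicated to this one device. */
void device_core_main(void)
{
    for (;;) {
        /* Busy-waiting is wasteful on a shared core but harmless on a
         * dedicated one, and it avoids interrupt latency and jitter. */
        while ((*STATUS_REG & DATA_READY) == 0) {
            /* spin */
        }
        enqueue_sample(*DATA_REG);
    }
}
```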

If embedded microprocessors have a thousand identical cores, embedded products will likely dedicate most of those cores to specialized tasks: processing packets from demultiplexed communications channels, digital signal processing of VoIP calls, or controlling a particular hardware device. Even today the trend in telecommunications products seems to be handling functions like codecs, companding, and conference summing in software, and dedicating a core to a single channel, or at most a handful of channels, is very attractive. A thousand-core microprocessor places a thousand universal machines on the chip, which makes a lot of the specialized hardware used today obsolete.
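To make the channel-per-core idea concrete, here is roughly what one channel's workload looks like when companding is done in software: encoding 16-bit linear PCM samples to G.711 mu-law using the standard bias-and-segment approach. The channel_read_pcm() and channel_write_ulaw() hooks are invented for illustration; they stand in for whatever the real product uses to move samples on and off the chip.

```c
#include <stdint.h>

#define ULAW_BIAS 0x84
#define ULAW_CLIP 32635

/* Classic G.711 mu-law compression of one 16-bit linear PCM sample. */
static uint8_t linear_to_ulaw(int16_t pcm)
{
    int s = pcm;
    int sign = 0;

    if (s < 0) { s = -s; sign = 0x80; }
    if (s > ULAW_CLIP) s = ULAW_CLIP;
    s += ULAW_BIAS;

    /* Find the segment (exponent) from the highest set bit. */
    int exponent = 7;
    for (int mask = 0x4000; (s & mask) == 0 && exponent > 0; mask >>= 1) {
        exponent--;
    }

    int mantissa = (s >> (exponent + 3)) & 0x0F;
    return (uint8_t)~(sign | (exponent << 4) | mantissa);
}

/* Hypothetical per-channel I/O hooks. */
extern int16_t channel_read_pcm(int channel);
extern void channel_write_ulaw(int channel, uint8_t byte);

/* The loop a dedicated core might run for a single voice channel. */
void channel_core_main(int channel)
{
    for (;;) {
        channel_write_ulaw(channel, linear_to_ulaw(channel_read_pcm(channel)));
    }
}
```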

To be fair to the white paper and its authors, one of the things that I really liked about it is that it acknowledged the common themes running through the high performance computing and the embedded camps when it comes to many-core architectures. Having lived in both camps at various times in my career, as well as the multi-core enterprise server camp, it has always troubled me that the various groups don't seem to know about each other, much less recognize the issues they have in common. The white paper specifically mentions that an application for a many-core microprocessor may have a lot of specialized tasks communicating by message passing. I just neglected to mention it in my article. My bad.
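That picture of specialized tasks communicating by message passing can be as simple as a pair of cores sharing a mailbox. Below is one way it might look: a single-producer, single-consumer ring of messages in shared memory. The names are invented for illustration, and a real implementation would need the memory barriers appropriate to its architecture, which I've only noted in the comments.

```c
#include <stdint.h>

#define MAILBOX_SLOTS 16  /* must be a power of two */

typedef struct {
    uint32_t payload[4];
} message_t;

typedef struct {
    message_t slot[MAILBOX_SLOTS];
    volatile uint32_t head;  /* written only by the producer core */
    volatile uint32_t tail;  /* written only by the consumer core */
} mailbox_t;

/* Producer core: returns 0 if the mailbox is full. */
int mailbox_send(mailbox_t *mb, const message_t *msg)
{
    uint32_t head = mb->head;
    if ((head - mb->tail) >= MAILBOX_SLOTS) {
        return 0;
    }
    mb->slot[head % MAILBOX_SLOTS] = *msg;
    mb->head = head + 1;  /* a real design needs a write barrier before this */
    return 1;
}

/* Consumer core: returns 0 if the mailbox is empty. */
int mailbox_receive(mailbox_t *mb, message_t *msg)
{
    uint32_t tail = mb->tail;
    if (tail == mb->head) {
        return 0;
    }
    *msg = mb->slot[tail % MAILBOX_SLOTS];
    mb->tail = tail + 1;  /* likewise, a barrier belongs before this store */
    return 1;
}
```

Because only one core ever writes each index, no locks are needed; each side just checks the other's index before it commits its own.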

And, of course, as the number of daemons and other background processes running on the typical Linux server continues to grow, throwing a many-core microprocessor at the problem sounds pretty good too. And although I've never lived in the database/transaction camp, or in the visualization camp, I would expect their members to have an interesting opinion on this topic as well. As will the virtualization folks, who may offer service providers many virtual servers hosted on a single many-core microprocessor chip, an attractive prospect for the software-as-a-service market.

A thousand cores? I think it's going to be fun.
