Saturday, November 08, 2014

Unintended Consequences of the Information Economy

William Lynn, former Deputy Secretary of Defense in the Obama administration, writes in the journal Foreign Affairs, in "The End of the Military-Industrial Complex", about how the transition from a manufacturing economy to an information economy has affected the U.S. Department of Defense.
For more than a decade, U.S. defense companies have been lagging further and further behind large commercial companies in technology investment. Although the Pentagon historically exported many technologies to the commercial sector, it is now a net importer. Indeed, next-generation commercial technology has leapt far ahead of what the defense industry can produce in areas spanning 3-D printing, cloud computing, cybersecurity, nanotechnology, robotics, and more. In addition, commercial information technology dominates national security today as much as it does the private sector. Soldiers now use smartphones to gather real-time surveillance from drones and send messages to fellow soldiers. 
Keeping up with commercial innovations will be difficult, if not impossible. The combined R & D budgets of five of the largest U.S. defense contractors (about $4 billion, according to the research firm Capital Alpha Partners) amount to less than half of what companies such as Microsoft or Toyota spend on R & D in a single year. Taken together, these five U.S. defense titans do not even rank among the top 20 individual industrial investors worldwide. Instead of funding R & D, defense companies have been returning the overwhelming majority of their available cash to shareholders in the form of dividends and stock buybacks. As a result, from 2000 to 2012, company-funded R & D spending at the top U.S. defense firms dropped from 3.5 percent to roughly two percent of sales, according to Capital Alpha Partners. The leading commercial companies, by contrast, invest an average of eight percent of their revenue in R & D.
Lynn opens with the example of Google's purchase of Boston Dynamics, the robotics firm that designed, among other devices, BigDog, the four-legged load-carrying robot. BigDog was originally funded by the U.S. DoD. Google announced that while it would honor Boston Dynamics' existing military commitments, it would not be seeking further work from the DoD. Google basically reached into its deep pockets and pulled the advanced robotics rug right out from under the Defense Department.

Lynn identifies several trends that may be ending the Military-Industrial Complex as we have come to know it. High-tech companies are reluctant to reveal what may be valuable intellectual property to the U.S. government. They see no reason to deal with the vast government procurement and contracting bureaucracy when more money can be made more easily in the commercial space. Commercial companies increasingly exploit globalization, manufacturing goods or building research facilities overseas where it makes economic sense, something the U.S. military is understandably reluctant to do for both national security and political reasons. Lynn argues that the Defense Department is going to have to come to terms with these trends unless it wants to lose its technological advantage.

What Lynn doesn't talk about (but probably knows): in the twenty-first-century information economy, the key component of growth isn't enormous capital investment in manufacturing capacity, something the military establishment used to good effect during World War II, but enormous investment in people, in innovation capacity.

Just yesterday National Public Radio ran a story on this very topic: "Future U.S. Manufacturing Jobs Will Require More Brain Than Brawn". Planet Money's Adam Davidson remarks on how this shift is affecting the world of work.
If you want to succeed for the coming decades, you don't just need to be trained and then a few years later retrained. You need a continuous improvement in your education. The main skill you need is the skill to learn more skills. The one certainty we have is manufacturing is going to look more and more like computer programming and engineering. It's going to involve a lot more brain work and a lot less brawn work. And that means probably a smaller number of people can benefit, but those who can benefit will probably benefit quite a bit.
The DoD can't just dial up more innovation capacity by throwing money at the problem, as it did in WWII. Nor, in a free country, can the U.S. government mandate for whom companies work. Innovation capacity requires not only brilliant engineers, who are hard enough to come by and who cannot be easily identified in the job market, but also a willingness to accept a lot of risk: to try, and perhaps to fail, over and over. To old-school twentieth-century managers this looks a lot like waste, but in fact it's a necessary part of the innovation process. The economics of conflict is changing just like the economics of everything else. It's as if one smart guy with a laptop, some open-source software stacks, and a 3-D printer could now manufacture hydrogen bombs.

The DoD has to start thinking not just about leveraging globalization (by, as Lynn suggests, buying German-made artillery), but also about leveraging consumer technologies, where enormous economies of scale make products relatively cheap compared to specialized, low-volume goods, and where the commercial market is more profitable for contractors than anything the DoD could offer. The days of the DoD calling the shots in high tech are over. Global market forces are going to decide who makes what, and for whom.

My career, so far spanning four decades, has been distributed among academia and big science, defense contracting, and commercial high-tech product development. While the transition to the information economy has been very, very good to me, I've been thinking about Lynn's article a lot lately, and about what it means for my life, my colleagues, my clients, and my country.

Monday, September 29, 2014

What You Don't Know Can Hurt You

Below is a little snippet of C code from [Krebbers 2014]. Peruse it and see if you can predict what two values it will print. It's only a handful of lines long. Go ahead, take your time. I'll wait.

#include <stdio.h>
int main(void) {
    int x;
    int y;
    y = (x = 3) + (x = 4);
    printf("%d %d\n", x, y);
    return 0;
}

So let's compile it on my build server that's sitting a few feet away from me. It's a Dell x86_64 system with four 2.4GHz Intel cores running Ubuntu 14.04 with the 3.13 Linux kernel and the GNU 4.8.2 C compiler. It's old but still mighty.

coverclock@ubuntu:~/src/misc$ gcc -o foo foo.c
coverclock@ubuntu:~/src/misc$ 

Good; no warnings, no errors.

coverclock@ubuntu:~/src/misc$ ./foo
4 8
coverclock@ubuntu:~/src/misc$ 

Huh.

This code isn't multi-threaded. It's barely single threaded. In fact, the code snippet is so simple, it hardly qualifies as anything beyond the classic "Hello World!" program.

Here's the thing: you may have gotten completely different results if you used a different compiler. Or a different version of the same compiler. Or maybe even different compiler options for optimization or debugging levels. As Mister Krebbers points out in [Krebbers 2014]:
By considering all possible execution orders, one would naively expect this program to print 4 7 or 3 7, depending on whether the assignment x = 3 or x = 4 is executed first. However, the sequence point restriction does not allow an object to be modified more than once (or being read after being modified) between two sequence points [ISO C, 6.5 p. 2]. A sequence point occurs for example at the end ; of a full expression, before a function call, and after the first operand of the conditional ? : operator [ISO C, Annex C]. Hence, both execution orders lead to a sequence point violation, and are thus illegal. As a result, the execution of this program exhibits undefined behavior, meaning it may do literally anything.
Okay, so maybe this is not a huge surprise to folks who have memorized the ISO C standard, or to those tasked with debugging problematic code who occasionally resort to reading the assembler. Using a symbolic JTAG debugger that monitors the program at the hardware level, I've watched the program counter single-step backwards through a sequential piece of C code, as the debugger traced the execution path the processor took through the optimized machine code and tried to correlate it back to the original source.
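You don't always need a JTAG pod to catch this particular blunder, though. GCC has a -Wsequence-point check, enabled by -Wall, that flags exactly this kind of expression at compile time. I haven't rerun the build with the warnings turned up, but I'd expect something along these lines (the exact line and column numbers are my guess):

coverclock@ubuntu:~/src/misc$ gcc -Wall -o foo foo.c
foo.c: In function 'main':
foo.c:5:15: warning: operation on 'x' may be undefined [-Wsequence-point]
coverclock@ubuntu:~/src/misc$ 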

This is why you don't write tricky C code, playing games like cramming as much stuff into a single statement as you can. Because it can defy any kind of rational analysis. Because it becomes a debugging nightmare for the developer tasked with current engineering who comes after you. Because its behavior may change with your next compiler update, or when it's ported to a project using a different compiler suite altogether.

Because it can bite you in the ass.
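If the intent really was to add 3 and 4 (and with undefined behavior, even the intent is a guess on my part), here's a minimal rewrite in which every modification of x is separated from every read by a sequence point, so any conforming compiler must produce the same result:

#include <stdio.h>
int main(void) {
    int x;
    int y;
    x = 3;     /* sequence point at the ';' */
    y = x;     /* read x only after the modification completes */
    x = 4;     /* sequence point again */
    y = y + x; /* y is 3 + 4 = 7, x is 4, on every conforming compiler */
    printf("%d %d\n", x, y);
    return 0;
}

Boring? Absolutely. But boring code is code that the next developer, and the next compiler, will read exactly the way you meant it.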

References

R. Krebbers, "An Operational and Axiomatic Semantics for Non-determinism and Sequence Points in C", 41st ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL 2014), January 2014

International Organization for Standardization, ISO/IEC 9899:2011: Programming Languages - C, ISO Working Group 14, 2012

Lambda The Ultimate, "An Operational and Axiomatic Semantics for Non-determinism and Sequence Points in C", September 2014

Saturday, September 20, 2014

The Very Big and the Very Small

In
K. Asanovic et al., The Landscape of Parallel Computing Research: A View from Berkeley, EECS Department, U.C. Berkeley, UCB/EECS-2006-183, December 2006
(a paper I've cited here before), the authors, who include David Patterson (as in Patterson and Hennessy), remark
Note that there is a tension between embedded and high performance computing, which surfaced in many of our discussions. We argue that these two ends of the computing spectrum have more in common looking forward than they did in the past. [Page 4]
That's been my experience too, although perhaps for different reasons than the authors cite. I've made an excellent living flipping back and forth between the high-performance and embedded domains. It turns out the skill sets are mostly the same. In particular, developers in both domains are constantly concerned with very low-level details in the realm where software runs close to bare metal, and are always dealing with issues of real-time, asynchronicity, parallelism, and concurrency. These are relatively rare skills that are hard to come by, for both the employer and the employee.
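Here's a tiny illustration, a sketch of my own rather than anything from the Berkeley paper. This C11 idiom counts events exactly the same way whether the producer is an interrupt service routine on a microcontroller or a worker thread on a many-core server; the reasoning about memory ordering is identical in both worlds. (C11 atomics want a reasonably recent toolchain; <stdatomic.h> arrived in GCC 4.9.)

#include <stdatomic.h>
#include <stdio.h>

/* One counter, two worlds: the producer might be an interrupt service
   routine on a Cortex-M, or one of dozens of worker threads on a big
   server. The memory-ordering reasoning is the same either way. */
static atomic_uint events = ATOMIC_VAR_INIT(0);

void record_event(void) {
    /* Safe to call from an ISR or from any thread. */
    atomic_fetch_add_explicit(&events, 1u, memory_order_relaxed);
}

unsigned int poll_events(void) {
    /* Safe to call from the main loop or from a monitoring thread. */
    return atomic_load_explicit(&events, memory_order_relaxed);
}

int main(void) {
    record_event();
    record_event();
    printf("%u\n", poll_events()); /* prints 2 */
    return 0;
}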

I was reminded of this as my tiny one-man company, Digital Aggregates Corporation, buys its fourth Android tablet to use as a development system. These tablets contain powerful multi-core ARM-based processors as well as other embedded microcontrollers and devices. And increasingly I am seeing the embedded and mobile device domains adopt technologies originally developed for large-scale systems, such as Security-Enhanced Linux (SELinux) and OS-level containerization in the form of Linux Containers (LXC).
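For a taste of what that looks like in practice, here is roughly how the LXC 1.0 userspace tools that ship with Ubuntu 14.04 stand up a lightweight container. I'm reciting from memory, so treat the template options as a sketch rather than gospel:

coverclock@ubuntu:~$ sudo lxc-create -t download -n sandbox -- -d ubuntu -r trusty -a amd64
coverclock@ubuntu:~$ sudo lxc-start -n sandbox -d
coverclock@ubuntu:~$ sudo lxc-attach -n sandbox

The namespace and cgroup machinery underneath those commands is the same kernel plumbing that makes containers attractive on embedded Linux targets.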

I've seen large development organizations axe their firmware developers when the company decided to get out of the hardware business to focus on large multi-core server-side software applications. What a remarkable lack of insight into the nature of the technologies on which their businesses depend.