Monday, September 29, 2014

What You Don't Know Can Hurt You

Below is a little snippet of C code from [Krebbers 2014]. Peruse it and see if you can predict what two values it will print. It's only a handful of lines long. Go ahead, take your time. I'll wait.

#include <stdio.h>
void main() {
    int x;
    int y;
    y = (x = 3) + (x = 4); 
    printf("%d %d\n", x, y); 
}

So let's compile it on my build server that's sitting a few feet away from me. It's a Dell x86_64 system with four 2.4GHz Intel cores running Ubuntu 14.04 with the 3.13 Linux kernel and the GNU 4.8.2 C compiler. It's old but still mighty.

coverclock@ubuntu:~/src/misc$ gcc -o foo foo.c

Good; no warnings, no errors.

coverclock@ubuntu:~/src/misc$ ./foo
4 8


This code isn't multi-threaded. It's barely single threaded. In fact, the code snippet is so simple, it hardly qualifies as anything beyond the classic "Hello World!" program.

Here's the thing: you may have gotten completely different results, if you used a different compiler. Or a different version of the same compiler. Or maybe even different compiler options for optimization or debugging levels. As Mister Krebbers points out in [Krebbers 2014]:
By considering all possible execution orders, one would naively expect this program to print 4 7 or 3 7, depending on whether the assignment x = 3 or x = 4 is executed first. However, the sequence point restriction does not allow an object to be modified more than once (or being read after being modified) between two sequence points [ISO C, 6.5 p. 2]. A sequence point occurs for example at the end ; of a full expression, before a function call, and after the first operand of the conditional ? : operator [ISO C, Annex C]. Hence, both execution orders lead to a sequence point violation, and are thus illegal. As a result, the execution of this program exhibits undefined behavior, meaning it may do literally anything.
Okay, so maybe not a huge surprise to folks who have memorized the ISO C standard, or who are tasked with debugging problematic code and occasionally have to resort to reading the generated assembler. Using a symbolic JTAG debugger that monitors the program at the hardware level, I've seen the program counter single step backwards through a sequential piece of C code, as the debugger traced the execution path the processor took through the optimized machine code and tried to correlate it to the original source.

This is why you don't write tricky C code, playing games like trying to smash as much stuff into a single statement as you can. Because it can defy any kind of rational analysis. Because it becomes a debugging nightmare for the developer tasked with current engineering who comes after you. Because its behavior may change with your next compiler update. Or when it's ported to a project using a different compiler suite altogether.

Because it can bite you in the ass.


R. Krebbers, "An Operational and Axiomatic Semantics for Non-determinism and Sequence Points in C", 41st ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL), January 2014

International Organization for Standardization, ISO/IEC 9899:2011, Programming Languages - C, ISO Working Group 14, 2012

Lambda The Ultimate, "An Operational and Axiomatic Semantics for Non-determinism and Sequence Points in C", September 2014

Saturday, September 20, 2014

The Very Big and the Very Small

In K. Asanovic et al., The Landscape of Parallel Computing Research: A View from Berkeley, EECS Department, U. C. Berkeley, UCB/EECS-2006-183, December 2006
(a paper I've cited here before), the authors, who include David Patterson (as in Patterson and Hennessy), remark
Note that there is a tension between embedded and high performance computing, which surfaced in many of our discussions. We argue that these two ends of the computing spectrum have more in common looking forward than they did in the past. [Page 4]
That's been my experience too, although perhaps for different reasons than the authors cite. I’ve made an excellent living flipping back and forth between the high performance and embedded domains. It turns out the skill sets are mostly the same. In particular, developers in both domains are constantly concerned about very low level details in the realm where software runs close to bare metal, and are always dealing with issues of real-time, asynchronicity, parallelism, and concurrency. These are relatively rare skills that are hard to come by for both the employer and the employee.

I was reminded of this as my tiny one-man company, Digital Aggregates Corporation, buys its fourth Android tablet to use as a development system. These tablets contain powerful multi-core ARM-based processors as well as other embedded microcontrollers and devices. And increasingly I am seeing the embedded and mobile device domains adopt technologies originally developed for large-scale systems, like Security Enhanced Linux (SELinux) and OS-level containerization like Linux Containers (LXC).

I’ve seen large development organizations axe their firmware developers as the company decided to get out of the hardware business to focus on large multi-core server-side software applications. What a remarkable lack of insight into the nature of the technologies on which their businesses depend.

Thursday, September 11, 2014

I've C++11ed and I can't get up!

(Updated 2014-09-14)

C++11 is the latest iteration of the standard for the C++ programming language. This is the 2011 version of the standard that was known as C++0x in its draft form. (C++14 is forthcoming.) There were some new features of C++11 that I thought I’d play around with since I have a little bit of time between gigs. I'm a big believer in using C++ for embedded and even real-time applications whenever possible. But it's not a slam dunk. The language is complex, and growing more complex with every standards iteration.

Using C++ effectively has many benefits, even in the embedded/real-time domain. But it can place a burden on the development team; I have found it relatively easy to write C++ code that is nearly incomprehensible to anyone except the original author. Try debugging a complex problem in code that you did not write and that uses the Standard Template Library or the Boost Library to see what I mean.

My little test program that I've been futzing around with can be found here

which is useful since Blogger seems to enjoy hosing up the angle brackets in my examples below that use templates.

I like the decltype but I wish they had used typeof to be consistent with sizeof. I cheated.

#define typeof decltype

    long long int num1;
    typedef typeof(num1) MyNumType;
    MyNumType num2;

    printf("sizeof(num1)=%zu sizeof(num2)=%zu\n", sizeof(num1), sizeof(num2));

I really like the ability for one constructor to delegate to another constructor (something Java has always had). I also like the instance variable initialization (ditto).

    class Thing {
    public:
        int x;
        int y = 2;
        Thing(int xx) : x(xx) {}
        Thing() : Thing(0) {}
        operator int() { return x * y; }
    };

    Thing thing1, thing2(1);

    printf("thing1=%d thing2=%d\n", (int)thing1, (int)thing2);

An explicit nullptr is nice although a 0 still works.

    void * null1 = nullptr;
    void * null2 = 0;

    printf("null1=%p null2=%p equal=%d\n", null1, null2, null1 == null2);

The new alignas and alignof operators solve a problem every embedded developer and systems programmer has run into and has had to resort to proprietary, compiler-specific directives to solve.

    struct Framistat {
        char a;
        alignas(int) char b;
        char c;
    };

    printf("alignof(int)=%zu sizeof(Framistat)=%zu alignof(Framistat)=%zu\n", alignof(int), sizeof(Framistat), alignof(Framistat));

    Framistat fram1[2];

    printf("Framistat=%td,%zu:%td,%zu:%td,%zu\n"
        , &fram1[1].a - &fram1[1].a, sizeof(fram1[1].a)
        , &fram1[1].b - &fram1[1].a, sizeof(fram1[1].b)
        , &fram1[1].c - &fram1[1].a, sizeof(fram1[1].c)
    );


I like the auto keyword (which has been repurposed from its original definition). You can declare a variable to be a type that is inferred from its context.

    auto foo1 = 0;
    auto bar1 = 'a';

    printf("sizeof(foo1)=%zu sizeof(bar1)=%zu\n", sizeof(foo1), sizeof(bar1));

You can use {} for initialization in many contexts, pretty much anywhere you can initialize a variable. (Yes, the missing = below is correct.)

    int foo3 { 0 };
    char bar3 { 'a' };

    printf("sizeof(foo3)=%zu sizeof(bar3)=%zu\n", sizeof(foo3), sizeof(bar3));

Here’s where my head explodes.

    auto foo2 { 0 };
    auto bar2 { 'a' };

    printf("sizeof(foo2)=%zu sizeof(bar2)=%zu\n", sizeof(foo2), sizeof(bar2)); // WTF?

The sizeof(foo2) is 16. 16? 16? What type is foo2 inferred to be? I haven’t figured that one out yet.

I like the extended for statement where it can automatically iterate over a container or an initialization list. The statements

    enum class Stuff : uint8_t {
        THIS,
        THAT,
        OTHER,
    };

    for (const auto ii : { 1, 2, 4, 8, 16, 32 }) {
        printf("ii=%d\n", ii);
    }

    for (const Stuff ss : { Stuff::THIS, Stuff::THAT, Stuff::OTHER }) {
        printf("ss=%d\n", static_cast<int>(ss));
    }

    std::list<int> mylist = { 1, 2, 3, 5, 7, 11, 13, 17 };

    for (const auto ll : mylist) {
        printf("ll=%d\n", ll);
    }
do exactly what you would expect. Also, notice I can now set the base integer type of an enumeration, something embedded developers have needed forever. And I can use a conventional initialization list to initialize the STL list container. But if there's a way to iterate across all of the values in an enumeration, I haven't found it.

I’m kind of amazed that I figured out the lambda expression stuff so easily (although I have a background in functional languages going all the way back to graduate school), and even more amazed that it worked flawlessly, using GNU g++ 4.8. Lambda expressions are a way to, in effect, insert a portion of control of the calling function into a called function. This is much more powerful than just function pointers or function objects, since the inserted lambda can refer to local variables inside the calling function when it is being executed by the called function.

const char * finder(std::list<std::pair<int, std::string>> & list, const std::function<bool (std::pair<int, std::string>)> & selector) {
    const char * result = nullptr;

    for (auto ll : list) {
        if (selector(ll)) {
            result = ll.second.c_str();
            break;
        }
    }

    return result;
}

    std::list<std::pair<int, std::string>> list;

    list.push_back(std::pair<int, std::string>(0, std::string("zero")));
    list.push_back(std::pair<int, std::string>(1, std::string("one")));
    list.push_back(std::pair<int, std::string>(2, std::string("two")));
    list.push_back(std::pair<int, std::string>(3, std::string("three")));

    for (auto ll : list) {
        printf("ll[%d]=\"%s\"\n", ll.first, ll.second.c_str());
    }

    int selection;
    selection = 0;
    printf("list[%d]=\"%s\"\n", selection, finder(list, [&selection] (std::pair<int, std::string> entry) -> bool { return entry.first == selection; }));
    selection = 1;
    printf("list[%d]=\"%s\"\n", selection, finder(list, [&selection] (std::pair<int, std::string> entry) -> bool { return entry.first == selection; }));
    selection = 2;
    printf("list[%d]=\"%s\"\n", selection, finder(list, [&selection] (std::pair<int, std::string> entry) -> bool { return entry.first == selection; }));
    selection = 3;
    printf("list[%d]=\"%s\"\n", selection, finder(list, [&selection] (std::pair<int, std::string> entry) -> bool { return entry.first == selection; }));
    selection = 4;
    printf("list[%d]=\"%s\"\n", selection, finder(list, [&selection] (std::pair<int, std::string> entry) -> bool { return entry.first == selection; }));

Lambda expressions appeal to me from a computer science viewpoint (there's that graduate school thing again), but I do wonder whether they actually provide anything more than syntactic sugar over alternatives like function objects whose type inherits from a predefined interface class. Lambdas remind me of call-by-name and call-by-need argument evaluation strategies, both forms of lazy evaluation.

Where C is a portable structured assembler language, C++ is a big, complicated, high-level programming language that can be used for applications programming or for systems programming. It has a lot more knobs to turn than C, and some of those knobs are best left alone unless you really know what you are doing. In my opinion it is much easier to write poor and/or incomprehensible code in C++ than it is in C. And this is coming from someone who has written hundreds of thousands of lines of production C and C++ code for products that have shipped, and who was mentored by colleagues at Bell Labs, which had a long history of using C++ in embedded and real-time applications. One of my old office mates at the Labs had worked directly with Bjarne Stroustrup; I sucked as much knowledge from his brain as I could.

C++, and especially C++11, is not for the faint hearted. But C++ is an immensely powerful language that can actually produce code that has a smaller resource footprint than the equivalent code in C... if such code could be written at all. C++ is worth considering even if you end up using a limited subset of it; although having said that, I find even widely used subsets like MISRA C++ too restrictive.

Monday, September 08, 2014


When I decided that it would be fun to ride my BMW R1100RT motorcycle from Denver Colorado to Cheyenne Wyoming to get a personalized tour of the new NCAR Wyoming Supercomputer Center (NWSC), it was 90°F, dry, and sunny. When the day came to actually do the ride, it was 48°F, raining, and dismal. At least I got to test my cold and wet weather riding gear.

The NWSC was built to house the latest supercomputers dedicated to climate research that are operated by the National Center for Atmospheric Research (NCAR), a national laboratory based in Boulder Colorado that is sponsored by the U.S. National Science Foundation (NSF). The Boulder Mesa Laboratory, where I worked for several years, still houses its own supercomputer center. But both space and electricity in Boulder is precious. So when the time came for NCAR to expand its supercomputing resources, a new facility was constructed about two hours north of Boulder and just a few minutes west of Cheyenne Wyoming. My old friend and colleague Marla Meehl was good enough to talk Assistant Site Manager Jim Vandyke into giving me a tour of the new facility. It's nice to have friends in high places (the NCAR Mesa Lab being at an altitude of 6109 feet above sea level).

The NWSC houses Yellowstone, an IBM iDataPlex compute cluster and the latest in a lengthy series of computers managed by NCAR for the use of climate scientists. NCAR has a long history of providing supercomputers for climate science, going all the way back to a CDC 3600 in 1963, and including the first CRAY supercomputer outside of Cray Research, the CRAY-1A serial number 3.

Yellowstone represents a long evolution from those early machines.  It is an IBM iDataPlex system consisting of (currently) 72,576 2.6GHz processor cores. Each Intel Xeon chip has eight cores, each pizza box-sized blade server compute node has two chips, each column has at most (by my count) thirty-six pizza boxes, and each cabinet has at most two columns. There are one hundred cabinets, although not all cabinets are compute nodes. Each compute node uses Infiniband in a fat-tree topology (like the old Connection Machines, which NCAR also used at one time) as an interconnect fabric, Fibre Channel to reach the disk and tape storage subsystem, and ten gigabit Ethernet for more mundane purposes. Yellowstone has an aggregate memory capacity of more than 144 terabytes, eleven petabytes of disk space, and (my estimate) at least a hundred petabytes of tape storage organized into two StorageTek robotic tape libraries.

It all runs on Redhat Linux. Major software subsystems include the IBM General Parallel File System (GPFS), the High Performance Storage System (HPSS), and IBM DB2.

The NWSC computer room runs, for the most part, dark. There is a full time 24x7 staff at the NWSC of perhaps twenty people, although that includes those who man NCAR's Network Operation Center (NOC). This is cloud computing optimized for computational science and climate research. Not only Yellowstone's large user community of climate scientists and climate model developers, but also its system administrators, access the system remotely across the Internet.

This is remarkable to an old (old) hand like me who worked for years at NCAR's Mesa Lab and was accustomed to its computer room being a busy noisy place full of operators, administrators, programmers, and even managers, running to and fro. But the NWSC facility has none of that. It is clean, neat, orderly, even quiet (relatively speaking), and mostly uninhabited. This is the future of computing (until, you know, it changes).

The infrastructure around Yellowstone was, for me, the real star of the tour. The NWSC was purpose built to house massive compute clusters like Yellowstone (which currently takes up a small portion of the new facility; lots of room for expansion).

Below are a series of photographs that I was graciously allowed to take with my iPhone 5 during my tour. I apologize for the photo quality. Better photographs can be found on the NWSC web site. All of these photographs, along with the comments, can be found on Flickr. Some of the photographs were truncated by Blogger; you can click on them to see the original. For sure, any errors or omissions are strictly mine.

* * *

Compute Node Cabinet (Front Side)

Yellowstone Cabinet - Front Side - Open

I told Jim Vandyke, assistant site manager, to "point to something". I count thirty-six pizza box "compute nodes" in each column, two columns, each node with dual Intel Xeon chips, each chip with eight 2.6 GHz execution cores, in this particular cabinet. There are also cabinets of storage nodes crammed with disk drives, visualization/graphics nodes with GPUs, and even login nodes where users ponder their work and submit jobs.

Compute Node Cabinet (Back Side)

Yellowstone Cabinet - Back Side - Open

Each pair of pizza boxes looks to have four fans. But what's interesting is the door to the cabinet on the right: that's a big water cooled radiator. You can see the yellow water lines coming up from under the floor at the lower right.


Yellowstone Cabinet - Back Side - Closed

This is what the radiator door looks like closed. Again, note the yellow water lines at the bottom.


Yellowstone Cabinet - Front Side - Interconnects

Classic separation of control and data: the nodes use an Infiniband network in a Connection Machine-like fat-tree topology for the node interconnect, Fibre Channel to access the storage farm, and ten gigabit Ethernet for more mundane purposes.

Sticky Mat

Sticky Mats

You walk over a sticky mat when you enter any portion of the computer room. The tape libraries are in an even more clean room environment (which is why I didn't get to see them); tape densities (five terabytes uncompressed) are so high that a speck of dust poses a hazard.

Bug Zapper

Bug Zapper

Here's a detail you wouldn't expect: I saw several bug zappers in the infrastructure spaces. And they were zapping every few seconds. As they built the building out in an undeveloped area west of Cheyenne, where land and energy is relatively cheap compared to NCAR's Boulder supercomputer center location, all sorts of critters set up housekeeping.

Cooling Tower

Cooling Tower

This is looking out from the loading dock. There are a number of heat exchangers in the computer room cooling system. Ultimately, the final stage goes out to a cooling tower, but not before it is used to heat the LEED Gold-certified building.

Loading Dock

Loading Dock

The loading dock, the hallway outside (big enough to assemble and stage equipment being installed), and the computer room are all at one uniform level, making it easy to roll in equipment right off the semi-trailer. I dimly recall being told that Yellowstone took 26 trailers to deliver. You can see a freight elevator on the right to the lower infrastructure space. The gray square on the floor on the lower left is a built-in scale, so you can verify that you are not going to exceed the computer room floor's load capacity.

Heat Exchanger

Heat Exchanger

There are a number of heat exchangers in the cooling system. This is the first one, for water from the computer room radiators. I had visions of the whole thing glowing red, but the volume of water used in the closed system, and the specific heat of water, is such that the temperature going into this heat exchanger is only a few degrees hotter than that of the water leaving it. It wouldn't be warm enough for bath water.

Fan Wall

Fan Wall

This is a wall o' fans inside a room that is part of the cooling system. These fans pull air from the space above the computer room through a vertical plenum and down to this space below the computer room, where it is cooled through what are effectively gigantic swamp coolers. All of the air conditioning in the building is evaporative. Each fan has a set of louvers on the front that close if the fan stops running, to prevent pressure loss.


Power and Power Conditioning

This is right below a row of Yellowstone compute node racks in the computer room above. If you lifted a floor tile in the computer room, you would be looking twelve feet down into this area.

Filter Ceiling

Ceiling of Air Filters

If you want to run a supercomputer, everything has to be super: networking, interconnect, storage, power, HVAC. This is part of the air filtration system, sitting right below the computer room floor into which the clean cool air flows.

Vertical Air Plenum

Air Plenum

This photograph was taken from a window in the visitors area, looking into a vertical air plenum big enough to drive an automobile through. The wall of fans is directly below. The computer room is visible through a second set of windows running along the left. Through them one of the Yellowstone cabinets is just visible. Air moves from the enormous space (another story of the building, effectively) above the computer room, down through this vertical plenum, through the fan wall, through the cooling system, up into the ceiling of filters, and through the computer room floor. I didn't get a photograph of it, but the enormous disk farm generates so much heat that its collection of cabinets is glassed off in the computer room with dedicated air ducts above and below.

Visitor Center

Visitors Center

This is where you walk in and explain to the security folks who you are and why you are here. Before I got three words out of my mouth the guard was handing me my name badge. It's nice to be expected. There's a quite nice interactive lobby display. The beginning of the actual computer facility is visible just to the right.

Warehouse of Compute Cycles

Computer Facility

Typical: simple and unassuming, looks like a warehouse. You'd never know that it contains what the TOP500 lists as the 29th most powerful computer in the world, according to industry standard benchmarks.

* * *

A big Thank You once again to Jim Vandyke, NWSC assistant site manager, for taking time out of his busy schedule for the tour (Jim is also the only network engineer on site), and a Hat Tip to Marla Meehl, head of NCAR's networking and telecommunications section, for setting it all up for me.

Thursday, September 04, 2014

War Stories

Nearly forty years ago I was taking a required senior/graduate level computer science course for which we had to write in assembler a multitasking system with device drivers and such for a PDP-11/05. I would go into the lab first thing in the morning before work and my software would work flawlessly. I could go into the lab in the evening after work and could not get my DMA disk device driver to work at all. This went on for days. I was nearly in tears, pulling my hair out.

I had to demonstrate my software to the professor to pass the course. So I signed up for a morning slot in his schedule. My software worked. I passed.

After the term was over I had reason to go back into that lab during the break and I ran into the hardware support guys taking the system apart.

"What's the deal?"

"We think there's a loose solder joint or something in the disk controller. It quits working when it gets warm."

I smiled and nodded and went on my way.

(I would go on to teach this class as a graduate student, the original professor would be my thesis advisor, and what I learned in that class formed the basis for my entire career since then. It would also form the basis for Mrs. Overclock's Rule: "If Mr. Overclock spends too much time debugging his software, he should start looking at the hardware.")

* * *

Decades ago I ran a systems administration and lab support group at a state university. It was the end of the academic term and I was deleting the course accounts to clean up the disk on a VAX/750 running Berkeley Unix 4.2 in one of the labs I was responsible for. This is something my student assistants normally did, but I thought I would get started on it.

The clean up actually took a long time to execute, so I was going to run it as a background process so I could do other stuff on the system console as it ran. I logged in and executed the following commands.

cd /
rm -rf home/cs123 &

I noticed that I didn't get a shell prompt back as I expected to. I waited for a moment or two more, began to get concerned, then started looking more closely at exactly what I had typed.

Have you ever noticed that the * character and the & character are right next to each other on the QWERTY keyboard?

I tried to cancel the command but it was too late. I had just started a process that would delete the entire root file system — including the operating system — from the disk.

One of my student assistants walked by and noticed me staring at the console. She asked "How is it going?"

I sighed and said "Could you please go fetch me the backup tapes?"

(I would go on to automate the process of creating and deleting student directories with shell scripts so that this would be unlikely to ever occur again.)

* * *

Back in my Bell Labs days I was in a lab struggling to troubleshoot some complex real-time traffic shaping firmware I had written for an ATM network interface card that had an OC-3 fiber optic physical interface. Using fiber optics meant the test equipment was all horrendously expensive.

I was working late one night — and truth be told a little peeved at myself for taking this long to debug my code — when it suddenly dawned on me that between the ATM broadband analyzer, the ATM protocol analyzer, the multi-cabinet telecom equipment under test, the network traffic generators, and all the fiber optic cable I had strewn all over the place, I was probably using a million dollars worth of equipment, just to debug my code. It was a major insight: I could never have done that kind of work in a smaller organization.

With all the emphasis these days on cheap computers and free open source software (which much of my current work certainly takes advantage of), that's something I think is often unappreciated: there are some problems you just can't tackle without a million dollars worth of equipment.

* * *

A long time client asked me to come in for an afternoon to one of their labs to help debug some cellular telecom equipment that had been returned from the field and for which I was one of the principal platform developers. We sat at the lab bench watching log messages scroll by on a laptop connected to the unit while a technician got the unit to go into its failure mode.

"Okay", I began, "this is likely to be a hardware problem. There is a failure with the connection between the ARM processor and the PowerPC processor. It sure looks like an intermittent solder joint failure."

"Oh, no", said the technician, "we think this is a software problem. We were thinking you could..."

As he spoke I slammed my hand against the side of the cabinet and the problem went away.

"... oh... Okay, I'll mark that one down as a hardware problem."

Of course, I had no idea what was going to happen when I hit the side of the cabinet. I was just doing that as a diagnostic step.

But it did make me look like a fraking genius.

* * *

Back in the 1970s I worked as a systems programmer for IBM mainframes. I wrote a lot of code in a lot of languages: FORTRAN, COBOL, even a smattering of PL/I. But mostly I wrote programs in IBM 360/370 assembler to do low level systems things, sometimes even I/O channel programming.

In the 1980s I left that position to work for another department in the same organization, to work with minicomputers. Now I wrote code in PDP-11 assembler for machines ranging in size from a shoebox to a refrigerator.

One day a young programmer I had never seen before showed up in my office. He worked for my old department and had been tasked with modifying some printer software in IBM assembler for new hardware. I looked at the code - it was just a couple of pages - and immediately recognized my coding style. There was no question in my mind that I had written it. Here's the thing: I didn't remember it. I don't mean I didn't remember writing the program. I didn't even remember the project for which I wrote it. It might as well have come from a parallel universe.

Thanks to his domain knowledge, and my own excellent comments, we figured out what needed to be done. And ever since then, I came to really appreciate maintenance programmers, and the need to leave good breadcrumbs for those who follow me.

(Added 2020-08-28)

Wednesday, September 03, 2014

Forward Looking Infrared

I'm futzing around with a FLIR Systems E8, a forward looking infrared (FLIR) camera. My goal is to use it to, among other things, characterize heating in semiconductor components in embedded systems. Or anything else I can get my clients to pay for.

Here's an Nvidia Jetson TK1 evaluation board. The round hot spot in the middle is the integrated cooling fan on top of the Tegra SoC, which has four ARM Cortex-A15 cores, a fifth low-power Cortex-A15 companion core, and 192 GPU cores. The fact that you can buy this much horsepower for only USD$192 is amazing to me. The Jetson is the next in a long (long) series of ARM-based evaluation boards that I've used to do reference ports of my company's software libraries, whose code finds its way into my clients' products.

Nvidia Jetson TK1 with Tegra 124 SoC

I'm an Apple user principally, and I do most of my development work on Linux or Android. But I have to keep a Windows laptop around for one technical reason or another. (I also run Windows in a VM on one of my Macs.) This is my Lenovo Thinkpad T430s that I use for field troubleshooting. What's interesting is you can really tell where the exhaust vent for the fan is by that heat signature on the left hand side. Also note the power brick on the right.

Lenovo ThinkPad T430s

This stuff is tricky. This is my ghostly reflection in a Pella double-paned low-emissivity sliding glass door at the Palatial Overclock Estate. There is clearly a bit of an art to discriminating direct IR from reflected IR. I ordered a roll of emissivity calibration tape that is made for just this purpose.

My IR Reflection In Patio Door



I chose the FLIR Systems E8 for a variety of reasons. My tiny company could afford it, although it was still an expensive piece of kit, even for a guy that's used to buying a new server, laptop, or evaluation board at the drop of a hat. I think the E series' MSX imaging technology, which incorporates both an IR camera and a conventional camera, makes it a lot easier to tell just what the heck you're really looking at, particularly when you are reviewing images taken perhaps days earlier. Ditto with the 320 x 240 pixel resolution, which I expect to come in handy when peering at components on a printed circuit board. And the unit is portable and easy to use, so I won't hesitate to schlep it around to client sites.

I have used a relatively inexpensive IR thermometer with a laser sight for similar purposes for several years now. It will be interesting to see if the E8 replaces my IR thermometer completely or is merely an additional tool in my kit.