Tuesday, December 24, 2013
Update
I haven't written much in the way of new blog articles for a long time. Work consumes. But I have been busy updating some existing articles. Today I wrote an addendum to We Shape Our Tools And Then Our Tools Shape Us covering some serious upgrades to my field support capability for air travel. If you take a dead serious approach to field support like I do, you'll like what you see. (Go all the way to the end of the article.)
Saturday, November 30, 2013
James Mickens: The Night Watch
In his article The Night Watch, computer science researcher James Mickens describes so succinctly what I have done for a living since 1976 that, reading it, I alternated between laughing out loud (really) and weeping (almost).
http://research.microsoft.com/en-us/people/mickens/thenightwatch.pdf
(Human Computer Interaction) people discover bugs by receiving a concerned email from their therapist. Systems people discover bugs by waking up and discovering that their first-born children are missing and "ETIMEDOUT" has been written in blood on the wall.
To say that this article sums up my day-to-day life for the past several decades is like saying the Universe is a middlin' large place. (And despite all that, it's been a very good life.)
Monday, May 06, 2013
In It For the Long Run
Journalist Brian Hall interviews me on what it's like to be the poster child for not letting your tech skills become obsolete in How to Thrive In The Tech Industry For Decades.
Saturday, May 04, 2013
I've become an internet meme!
I brought up LinkedIn in my browser this morning and this is what greeted me: a photograph of me taken circa 1978 has ended up in an article on obsolete technology skills. Amused friends and colleagues continue to pass this link on to me. The first to do so worked at that very same computer center at Wright State University when that photograph was taken. That's an IBM 360/65 running OS/MFT in the background, on which I worked as a systems programmer while in college. I dimly recall that it had a whopping 256KB of memory. I am totally rocking that 1970s look.
Engage the Ironic Drive, Mr. Crusher!
Thursday, April 04, 2013
Observations on Product Development: Part 4
- What product your company makes might not be as obvious as you might think.
- One way to tell what it makes is to look at where your company spends its money.
- A better way is to look at how your company makes its money.
- But the best way is to look at what your company produces that would be the most difficult for a competitor to duplicate.
- You may find that your hardware company is in fact a software company, a service provider, or even a marketing firm.
- And sometimes the greatest value is not in the product but in the minds of the people producing the product.
- Some or even most of those people may not actually be employees of your company.
Saturday, March 30, 2013
Observations on Product Development: Part 3
- Your product forms a unified hardware-firmware-software-marketing ecosystem.
- This is true even if you are only really interested in one part of it.
- Your product has a lifecycle that extends from conception through decommission.
- When your product first ships it will be, at best, one-third of the way through its lifecycle.
- Your product's ecosystem and its lifecycle are orthogonal concerns.
- You need to control costs in both; otherwise they will eat you alive.
- Architect, design, implement, and deploy with the entire ecosystem in mind.
- Architect, design, implement, and deploy with the entire lifecycle in mind.
- Successful product development organizations don't do this because they are large.
- They got large because they did this and it allowed them to scale.
Sunday, March 24, 2013
Observations on Product Development: Part 2
- Product developers need order and structure to do their jobs.
- This is true whether they admit it or not.
- Every process - waterfall, agile, you name it - makes its own assumptions.
- If those assumptions don't hold, that process may not yield the results you expect.
- Everyone wants to use a process that guarantees success.
- There is no silver bullet.
- If it were easy, anybody could do it.
- There is no substitute for smart, engaged people and face-to-face communication.
Saturday, March 16, 2013
Observations on Product Development: Part 1
- All product development is fractally iterative.
- This is true whether you want it to be or not.
- Success comes from generating revenue.
- Revenue cannot be sustainably generated without shipping a product.
- No one has ever shipped a perfect product.
- You won't be the first.
Monday, March 04, 2013
Hard Power Off Is Dead But Not Buried
In The Death of Hard Power Off I talked about how the use of persistent flash-based read-write storage -- flash file systems and solid state disks -- had led to the systems using those technologies requiring soft power off: software mechanisms that implement an orderly shutdown of the device before power is actually removed. The failure to quiesce flash-based read-write storage before cycling power will eventually lead to file system corruption. Do it enough, and you have a high likelihood of bricking the entire device.
Just the other day, my occasional colleague Paul Gross passed this very recent gem of a paper along to me, for which I am grateful.
Mai Zheng, Joseph Tucek, Feng Qin, Mark Lillibridge, "Understanding the Robustness of SSDs under Power Fault", 11th USENIX Conference on File and Storage Technologies (FAST '13), San Jose CA USA, February 12-15, 2013
It's worth a read. Here's just a brief section from the abstract.
Applying our testing framework, we test fifteen commodity SSDs from five different vendors using more than three thousand fault injection cycles in total. Our experimental results reveal that thirteen out of the fifteen tested SSD devices exhibit surprising failure behaviors under power faults, including bit corruption, shorn writes, unserializable writes, meta-data corruption, and total device failure.
These researchers from the Ohio State University and HP Labs take the same approach as unpublished (and kinda clever) work done by my occasional colleague Julie Remington, a hardware engineer who hooked up a system we were troubleshooting -- one with a surface-mount SSD -- to a computer-controlled power supply and proceeded to cycle power on the system in a controlled, scripted fashion. Each power cycle waited until the system under test was up and stable, and logged all of the results from the system's serial console. Her results: after a few iterations, the system had to use fsck to repair the EXT3 file system on the SSD (file system corruption, not unexpected under the circumstances); after a few more, the SSD began reporting a bogus device type and serial number to hdparm (internal meta-data corruption); and after just a few more, the device quit responding to I/O commands altogether. It could only be recovered by removing the tiny flash chips from the top of the SSD chip itself and replacing them with uncorrupted chips from an identical SSD. Which one of Julie's colleagues did. Which is kinda, you know, hard core.
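If you'd like to try something similar yourself, here is a minimal sketch, in C, of the kind of scripted power-cycle test I'm describing. Everything hardware-specific -- the supply control, the boot detection, the console capture -- is a hypothetical placeholder stub; a real rig would talk to whatever programmable power supply and console server you actually have.

/*
 * A minimal sketch of a scripted power-cycle test. The hardware-specific
 * functions below are hypothetical placeholder stubs; a real rig would
 * command a programmable power supply and read the target's serial console.
 */
#include <stdio.h>
#include <stdlib.h>

static void supply_on(void)  { /* placeholder: enable the supply output */ }
static void supply_off(void) { /* placeholder: disable the supply output */ }
static int  wait_until_up(int seconds) { (void)seconds; return 0; /* placeholder: 0 means the target booted */ }
static void capture_console(FILE * log) { fputs("(console output would be logged here)\n", log); /* placeholder */ }

int main(void)
{
    FILE * log = fopen("powercycle.log", "a");
    if (log == NULL) { return EXIT_FAILURE; }

    for (int cycle = 0; cycle < 3000; ++cycle) {
        supply_on();
        if (wait_until_up(120) != 0) {
            fprintf(log, "cycle %d: target failed to come up\n", cycle);
            break;                  /* possibly bricked; stop and investigate */
        }
        capture_console(log);       /* record fsck repairs, hdparm weirdness, etc. */
        supply_off();               /* hard power off: no orderly shutdown */
    }

    (void)fclose(log);
    return EXIT_SUCCESS;
}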
But it doesn't have to be an SSD. I've seen similar failure modes in embedded systems using JFFS2 (Journaling Flash File System version 2) under Linux. JFFS2 does all the same kinds of things behind the scenes that an SSD does, except its controller is implemented in software instead of hardware, on top of commodity NAND flash. Just as the controller inside an SSD is rewriting its flash behind the scenes, the JFFS2 garbage collector kernel thread (which will look something like [jffs2_gc_mtd2], interpreted as "JFFS2 Garbage Collector for Memory Technology Device partition 2", when you do a ps command) is rolling along its merry way, erasing and rewriting flash blocks with little or no regard to the fact that your finger is on the power switch.
But with JFFS2, at least you have some prayer of coming to an orderly stop if you do something like a shutdown -h now before turning off the power. Not that systems in the field with hard power off will have an opportunity to do so, of course. But at least you might save a system or two in the development lab.
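For the record, here is a minimal sketch, assuming embedded Linux and root privilege, of the quiescing that a soft power off performs before the power actually goes away. The mount point /mnt/flash is hypothetical, and a real shutdown would also stop applications and daemons first, which I've omitted.

/*
 * A minimal soft power off sketch (embedded Linux, run as root). The mount
 * point "/mnt/flash" is hypothetical; stopping applications and daemons,
 * which a real shutdown does first, is omitted here.
 */
#include <stdio.h>
#include <unistd.h>
#include <sys/mount.h>
#include <sys/reboot.h>

int main(void)
{
    sync();                                 /* flush dirty buffers toward the media */

    if (umount("/mnt/flash") != 0) {        /* let JFFS2 (or any file system) quiesce */
        perror("umount /mnt/flash");
        /* Better to remount read-only than to power off with a dirty file system. */
        (void)mount(NULL, "/mnt/flash", NULL, MS_REMOUNT | MS_RDONLY, NULL);
    }

    sync();

    if (reboot(RB_POWER_OFF) != 0) {        /* ask the kernel to remove power */
        perror("reboot");
        return 1;
    }

    return 0;                               /* not reached if the power off succeeds */
}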
The problem with SSDs is that they are not only asynchronous -- doing stuff behind the scenes -- but also autonomous -- doing stuff that you don't even know about and have no control over. The SSD controller continues to erase and rewrite flash blocks even if you unmount the file system. Even if you shut down the operating system. Even if you hold the damn processor in reset.
It's like the honey badger. Honey badger SSD don't care.
Monday, February 18, 2013
Imperfect People Build Imperfect Systems
I've done a lot of work over the decades on systems that were expected to be fault-tolerant and highly reliable. These projects ranged from enormous geographically-distributed enterprise communications systems that spanned national boundaries, to tiny sensor networks that are part of an environmental control system. I've done a lot of reading on best practices for developing fault-tolerant, high-reliability systems at all scales, and have done my best to incorporate those learnings into my own product development work.
But because imperfect people build imperfect systems, I also did a lot of reading on how humans muck things up more or less inevitably. Here are some of the books I recommend, in the order that I read them, and what I took away from them.
James R. Chiles, Inviting Disaster: Lessons from the Edge of Technology, HarperBusiness, 2001
In 1982, during a storm off the coast of Newfoundland, the drilling rig Ocean Ranger capsized and sank with its crew of eighty-four men. There were no survivors. The British rigid airship R101 was the largest flying machine ever built, and would remain so until Germany built the Hindenburg some years later. When the R101 made its maiden voyage in 1929, it was unable to keep its own weight airborne, shearing off the roofs of cottages in the English countryside, until it finally crashed, killing forty-eight of the fifty-four people on board. In 1979, World War III was narrowly averted when a training tape was inadvertently loaded and its simulated attack appeared on the screens of four U.S. command centers with no indication that it was a simulation.
These are just three of the dozens of case studies that Chiles presents and dissects in his book. Some of them are more familiar: Apollo 13, the Challenger space shuttle, Three Mile Island, and Chernobyl. Chiles' book manages to be both fascinating and horrifying.
Many of these disasters were the result of ordinary mistakes made in extraordinary circumstances. Sometimes the people involved didn't realize a disaster was in the offing. Sometimes they chose to ignore the evidence right in front of them. Often they depended on their own unreliable intuition, a cognitive failure that Dörner revisits in the book below. Transitions are when mistakes are most likely to be made and errors most likely to occur. Sometimes people did not appreciate the scale of the risk even when the probability of failure was low. Sometimes they did not appreciate the hidden technology buried deep in the systems they were using. Valiant heroism often occurs in the midst of disasters but, unlike in the movies, seldom succeeds.
The leaders of an organization have tremendous influence, sometimes unwittingly and unknowingly, on safety, or the lack of it, through their own attitudes and prioritization, which affect the organization's culture. And a common theme in these case studies is how leaders seldom insist on hearing bad news; in fact, subordinates are punished for bringing bad news upstairs. Chiles also talks about how important it is to "fill in the cracks": make minor course corrections before small mistakes expand into systemic breakdowns.
For me, one of the biggest lessons from this book is that we frequently do not design systems to recover from multiple failures that occur simultaneously. It's just too hard to think about, and it seems so unlikely. Yet it was exactly that kind of seemingly impossible chain of independent failures, lining up in time, that led to the deaths of the crew of the Ocean Ranger.
Dietrich Dörner, The Logic of Failure: Recognizing and Avoiding Error in Complex Situations, Basic Books, 1996
Dörner is a psychologist whose book describes a series of computer simulations in which a wide variety of volunteers each had to manage a dynamic system of interrelated components. His goal was to discover the common cognitive errors humans make when trying to manage such systems.
For example: an African tribe whose livelihood depends on herds of cattle and sheep, which depend on the grassland, which depends on the water table. Well meaning participants in the simulation would bring Western health care to the tribe. The mortality rate would go down. The population would go up. The number of animals necessary to support the tribe would rise. The amount of grass necessary to feed the herds would outstrip the water supply. Famine would result. In fact, pretty much anything the simulation participants tried to do to improve the lot of the tribe would ultimately, sooner or later, result in famine.
The game wasn't rigged. The system was just beyond the capability of most people, even trained economists and ecologists, to understand all the interdependencies when they had to discover them in real time and make iterative decisions. The irony is that if the participants had just stood back and done nothing, the tribe would have been fine. Natural forces had, over many years, optimized all the components in the system for the available resources. Another simulation, involving a watch factory in a small European community, had similar results.
It was hard not to think about capitalism and market forces versus communism and central management when reading this book, even though I don't recall the author making that comparison explicitly. Dörner compares the results of these simulations with real-life mistakes that often had disastrous consequences, such as Chernobyl.
Humans suck at understanding and managing dynamic systems. Systems may have limited resources, with only a certain amount of buffering between interrelated components. Variables in the system may have relationships exhibiting both positive and negative feedback. We are too quick to apply our own conditioned responses to situations that differ from those we have experienced before. We ignore what others before us have successfully done -- frequently for very good reasons -- in the same circumstances. We make abstractions that can hide or discard vitally important detail.
The author remarks that many of these cognitive mistakes are probably the result of economizing the scarce and slow resource that is human thought. He talks about the necessity of "redundancy of potential command", by which he means the delegation of decision making to empowered subordinates, a point that Gawande reiterates in his book below. His studies also suggest that we are highly motivated by a desire to preserve a positive view of our own competence, sometimes with deadly consequences. One of the big takeaways from this book is the need for communication in organizations, both vertically and horizontally, and the absolute requirement for empowered delegation, bringing a kind of distributed and parallel processing to the human domain.
Two great quotes:
"Advocates of progress often have too low an opinion of what already exists."
-- Bertolt Brecht
"One jumps into the fray, then figures out what to do next."
-- Napoleon
Atul Gawande, The Checklist Manifesto: How to Get Things Right, Picador, 2009
Mrs. Overclock, a.k.a. Dr. Overclock, Medicine Woman, read this book and passed it along to me. I'm glad she did. Gawande is a surgeon in Boston, a MacArthur fellow, and an author who participated in a World Health Organization project to improve surgical outcomes and reduce complications. Not just in the kinds of places you might expect WHO to operate, but in places like the United States, Britain, and New Zealand.
The result was the evolution of a series of checklists. I mean literally: a poster on the wall or a page on a clipboard with just a few brief bullet items on it, listing simple but important tasks that must have been completed, to be read, reviewed, and verified at pre-defined pause points during the surgery: before anesthetic is administered, before the incision is made, before the patient leaves the operating room.
The use of surgical checklists has led to a drastic reduction in post-operative complications like infection, and has saved tens of millions of dollars, not to mention many, many lives.
This will all seem quite familiar to the aviators in my audience. Gawande devotes much of the book to describing how checklists evolved from the early days of aviation, when it seemed that multi-engine aircraft might be too complicated for humans to fly, and how checklists, both in paper notebooks and computerized, are now a routine part of every flight, commercial or private. He also talks about how checklists, in the form of project management charts, have enabled construction firms to build skyscrapers and nuclear submarines, keeping track of thousands of details and tasks for hundreds of skilled craftspeople. And he talks about the role of checklists in the safe ditching of US Airways flight 1549 in the Hudson River in 2009.
Like the authors mentioned above, Gawande talks about the cognitive weaknesses of humans, in this case our inability to deal with fine levels of detail. He talks about the difference between the simple, the complicated (lots of moving parts, but once an algorithm is discovered, success is easily repeatable), and the complex (every situation is unique). Humans are not wired for the complicated, but we may be the only ones that can tackle the complex. My friend, occasional colleague, and embedded wonk John Lowe likes to say, in a tone of voice that is a combination of wistful and ironic: "If only there were a machine that could somehow take mundane, repetitive tasks, and automate them...". For many complicated jobs, the checklist is that machine.
The author discusses the difference between a do-confirm checklist and a read-do checklist. He also talks about the need for both task actions and coordination actions, the latter being a pre-programmed time to touch base with those involved to make sure everyone is on the same page. In various contexts, this may be called a meeting, a briefing, a huddle, or a scrum.
A good checklist -- and a lot of effort goes into creating and refining a good checklist -- has enormous leverage. While a single instance of a failure may affect only one person, that class of failure can affect many; eliminating it has broad consequences. A good checklist is one that gets used. For that to happen, checklists should have only five to nine items, about the capacity of short-term memory, and should take only sixty to ninety seconds to go through.
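Being a software person, I can't help noticing that such a checklist maps onto a trivially small data structure. Here is a hedged little sketch of a do-confirm checklist for a hypothetical pause point in my own bench work; the items are mine, not anything out of Gawande's book.

/*
 * A tiny sketch of a do-confirm checklist as a data structure. The pause
 * point and its items are hypothetical examples, not taken from the book.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct item {
    const char * prompt;    /* read aloud at the pause point */
    bool confirmed;         /* acknowledged by the team */
};

int main(void)
{
    struct item before_power_on[] = {       /* five to nine items, no more */
        { "Serial console connected and logging",  false },
        { "Correct firmware image loaded",         false },
        { "Current limit set on the bench supply", false },
        { "Logic analyzer armed",                  false },
    };
    size_t count = sizeof(before_power_on) / sizeof(before_power_on[0]);
    size_t ii;
    int ch;

    for (ii = 0; ii < count; ++ii) {
        printf("CONFIRM: %s [y/n] ", before_power_on[ii].prompt);
        fflush(stdout);
        ch = getchar();
        before_power_on[ii].confirmed = ((ch == 'y') || (ch == 'Y'));
        while ((ch != '\n') && (ch != EOF)) { ch = getchar(); }   /* eat the rest of the line */
    }

    for (ii = 0; ii < count; ++ii) {
        if (!before_power_on[ii].confirmed) {
            puts("HOLD: do not proceed past this pause point.");
            return 1;
        }
    }

    puts("Pause point cleared.");
    return 0;
}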
One of the things that makes checklists difficult to adopt is ego: some professionals feel the checklist usurps their authority, particularly when it is a subordinate reading the checklist. But those professionals suck at the fine detail, as much as they would like to think they do not. Surgeons and pilots use checklists because the checklist helps them succeed.
I will continue to read about failures -- in both technology and people -- and about how to prevent them. We could use a dose of fault-tolerance and high-reliability in both our systems and ourselves.
Saturday, February 02, 2013
When You Are Compelled To Create
Linds Redding was a New Zealand-based graphic designer, art director, and animator who spent decades in the advertising industry until his death from esophageal cancer in October. In March he wrote an article for his blog, since very widely disseminated among folks in that industry, in which he looked back at his long career. He was writing from the point of view of a creative who knew he was going to die soon. What he says seems to me to be remarkably applicable to other fields of endeavor, maybe even universally so. I would like to write a blog article of my own on this topic, but I could not improve on what Redding has to say. You're better off reading the original article.
A Short Lesson in Perspective
Should Redding's blog disappear, his article can also be found here in its entirety.
The fact is: we are all creatives who are going to die soon.
Many thanks to my friend Mike Kiss for passing this along.
Thursday, January 31, 2013
We Shape Our Tools And Then Our Tools Shape Us
Or so said (maybe) Marshall McLuhan, a philosopher and communication theorist best known for the phrase "The medium is the message." And because our tools shape us, it pays to have the best tools you can afford. When you say "tools" to a software developer, the first thing that pops into their head is software development tools like Eclipse, an integrated development environment with a graphical user interface. And for sure, I use Eclipse on an almost daily basis. But for the kind of work I do, down close to bare metal, "tools" can also mean artifacts ranging from hand tools like wrenches and screwdrivers to high-technology tools like logic analyzers and digital storage oscilloscopes.
I have three tool kits that I put together over the past few years that have served me well. I didn't just go out to the hardware store and buy a bunch of tools that I thought might be useful. I carefully collected the tools I found indispensable and organized them into cases and bags that I could cart around in the trunk of my car from client to client as the need arose. I learned to do this from working closely with really excellent field support engineers who had to wrench and troubleshoot some of the products that I produced during my long career.
Field-Service Tool Kit
This is a big professional field-service bag about the size of a small suitcase. It is shown with my iPhone for comparison. It is stuffed with twenty-four pounds of hand tools. That may not sound like much until you cart them up several flights of stairs.
The field-service tool kit has two enormous zip-open compartments. The first one looks about like what you would expect: wire cutters and strippers, pliers, wrenches, screwdrivers, EMT scissors and other sharp objects capable of various degrees of mayhem, as well as more esoteric stuff like ESD-safe tweezers, a barb-wire-fence tool (invaluable), some pipecleaners, and a toothbrush. If you buy quality hand tools for your own use, you will recognize a lot of this stuff.
The other side of the bag contains a variety of specialized equipment like my Radio Shack digital multimeter and a variety of probes for it, a big zip-lock bag of neon-colored nylon ties, and electrical and duct tape.
Not shown are the outside velcro-closing pockets with safety glasses, flashlights, spools of wire of various gauges, and a big zip-lock bag of miscellaneous crimp-on connectors.
Console and Signals Tool Kit
This is a laptop bag, with my iPhone for comparison, stuffed with the things I need to instrument a microprocessor or microcontroller to see what the heck my software or firmware, or my client's hardware, is doing.
The console and signals tool kit includes a Velleman PCSU1000 PC-based digital storage oscilloscope (left), a Saleae Logic-16 PC-based logic analyzer (right), a powered USB 2.0 hub (bottom), and an old IBM ThinkPad (center). The laptop runs Windows 7 and is loaded with the Velleman and Saleae software, as well as packages like Wireshark, the Ethernet protocol analyzer formerly known as Ethereal, and PuTTY, my favorite Windows-based terminal emulator. The tool kit also includes a wide selection of USB adapters for RS232 and logic-level serial ports and other serial busses, as well as a bunch of USB, Ethernet, and serial cables.
All of this, and more, gets packed into the laptop bag so that a short trip to my car can get me started quickly figuring out what's going on.
The console and signals tool kit is the one I use the most often.
Soldering Station Tool Kit
Occasionally I have to do more violence, in the form of hardware mods or repairs, than you might expect for someone whose degrees are all in Computer Science. That's when this backpack moves into the trunk of the car.
It contains a complete Weller digitally-controlled soldering station and all the paraphernalia that goes with it. Plus: safety glasses with magnifying lenses, alligator-clip "helping hands" with a magnifying glass, a folding super-bright LED task light (you may be detecting a pattern here), and even a Weller ESD-safe heat gun and a collection of shrink wrap tubing.
All of this, plus a flux pen and other useful stuff, fits comfortably in the backpack, the soldering station and heat gun inside their original boxes.
I've used this tool kit to do quite professional-looking (considering the practitioner) work, building special adapters and cables and test fixtures and what-not. And sometimes, not so professional work like scraping surface mount resistors off a board and replacing them with conventional resistors to which I could attach a logic analyzer.
I have had clients that knew that they needed an embedded developer, but had no real clue what an embedded developer did or what tools he needed to fulfill his role in their product development organization. These tool kits allow me to become a self-sufficient one-man traveling R&D laboratory. When they hire me, they're getting a bunch of necessary infrastructure too. I've already figured out what I need, so they don't have to.
Update (2013-02-09)
Network Tool Kit
While having breakfast with three friends this morning, conversation inevitably turned toward using Wireshark to debug TLS packet streams. Well, maybe not inevitably, but that's the kind of friends I have. I realized then that I had forgotten about my network tool kit.
This bag contains four smaller bags or sub-kits. The sub-kits contain (clockwise from top left) a LinkSys WRT54GL wireless broadband router (this is the Linux-based model), an inexpensive Dynex IP router, a Netgear FS108 fast Ethernet switch, and a Netgear EN104 Ethernet hub. Plus, all the necessary AC adapters and cables to deploy them.
I've used the LinkSys access point to debug and test WiFi chips in embedded systems. The Dynex router is useful for setting up a temporary IP subnet in a laboratory and isolating weirdness from the client network. The Netgear switch is handy for expanding the single Ethernet cable the client gives me into a usable eight-port network. And if you've ever done Wireshark debugging, you know that, as obsolete as it may be, the Ethernet hub is invaluable: unlike a switch, it repeats every frame to every port, so a laptop running Wireshark can peek at the packet stream between two other devices.
All four sub-kits, plus an array of Ethernet patch cables, fit in this small bag.
I don't use my network kit that often, but when I need it, I really need it.
Update (2013-12-24)
Air Travel
I was recently called upon to travel cross-country on behalf of one of my clients to do some product integration testing with one of their big customers. I used this as an excuse to try out my recently upgraded field support capability. Considering that past clients have shipped me off to Europe and Asia, I'd like to think my business card could say: "Have Laptop - Will Travel".
I replaced my laptop bag with a Pelican U100 Elite backpack. If the U.S. Navy SEALs had to do field support, this is what they would use. The U100 has a built-in Pelican case for my field service laptop. The case has the usual dust and moisture seals and pressure equalization valve that have made Pelican the standard by which all other such field cases are measured.
Some time ago I replaced my ancient IBM ThinkPad mentioned earlier in this article with a Lenovo ThinkPad T430s laptop. This is Lenovo's standard rugged business ThinkPad equipped with a solid state disk (SSD). I find eliminating moving parts as much as possible makes it more likely my equipment will arrive at its destination fully functional. Since the U100 backpack with the T430s resides in the trunk of my all-wheel-drive Subaru, I worry about it a lot less. The U100 carries not only the T430s, but also my Saleae Logic PC-based logic analyzer, my Velleman PCSU1000 PC-based oscilloscope, a small multimeter, USB adapters, cables, connectors, and other tools of the trade.
The U100 is small enough to carry on, but some of the tools I carry in it are not TSA-friendly. When I have to travel with it as checked baggage, I stow it in a North Face Base Camp duffel bag. The mouth of the North Face duffel is big enough to swallow the U100 backpack whole.
I've sat on flights that have just landed, waiting my turn to deplane, and watched my luggage sit on the tarmac in the pouring rain. The water-resistant North Face duffel not only protects my equipment from the elements but keeps the straps of the U100 out of the workings of hungry automated baggage systems. Although the duffel comes with detachable shoulder straps, I find the built-in handles completely adequate for temporarily turning the duffel into a backpack while schlepping it through airports.
When I have to travel with additional equipment, I once again turn to Pelican. The relatively small Pelican 1510 case meets the carry-on requirements of most airlines and is designed like carry-on rolling luggage.
It doesn't hold as much as you might think, thanks to all the internal padding and the thick plastic shell. But the padding and shell are exactly why I bought the case in the first place. It was sufficient on my recent trip to carry an engineering prototype of a satellite communications product, a power supply, a tangle of cables, and a quarter-terabyte USB 3.0 drive (!!!).
When I reach my destination, I extend the handle of the 1510, set it up on its built-in wheels, stack my other paraphernalia on top of it, and I'm good to go.
When I am traveling on the business of high technology, it doesn't pay to scrimp. My mission may be critical enough that there really is no Plan B. And when I am dealing with my clients, or even more importantly the clients of my clients, I want to project an image of competence, quality, and professionalism. I want to keep my current business; and I never know from where my future business is going to come.
Wednesday, January 09, 2013
Hidden Variables: Arduino Due and the Cortex-M3
Once again, I'm curious about what preprocessor symbols are predefined by the GNU C compiler. This time it's for the ARM Cortex-M3 core in the Atmel SAM3X8E chip on my Arduino Due. Here is what the command
arm-none-eabi-g++ -dM -E -mcpu=cortex-m3 -mthumb - < /dev/null
for GCC 4.7.2 from the CodeSourcery CodeBench Lite 2012.09-63 toolchain yields.
#define __DBL_MIN_EXP__ (-1021)
#define __HQ_FBIT__ 15
#define __UINT_LEAST16_MAX__ 65535
#define __ATOMIC_ACQUIRE 2
#define __SFRACT_IBIT__ 0
#define __FLT_MIN__ 1.1754943508222875e-38F
#define __UFRACT_MAX__ 0XFFFFP-16UR
#define __UINT_LEAST8_TYPE__ unsigned char
#define __DQ_FBIT__ 63
#define __INTMAX_C(c) c ## LL
#define __CS_SOURCERYGXX_REV__ 63
#define __ULFRACT_FBIT__ 32
#define __SACCUM_EPSILON__ 0x1P-7HK
#define __CHAR_BIT__ 8
#define __USQ_IBIT__ 0
#define __UINT8_MAX__ 255
#define __ACCUM_FBIT__ 15
#define __WINT_MAX__ 4294967295U
#define __USFRACT_FBIT__ 8
#define __ORDER_LITTLE_ENDIAN__ 1234
#define __SIZE_MAX__ 4294967295U
#define __WCHAR_MAX__ 4294967295U
#define __LACCUM_IBIT__ 32
#define __GCC_HAVE_SYNC_COMPARE_AND_SWAP_1 1
#define __GCC_HAVE_SYNC_COMPARE_AND_SWAP_2 1
#define __GCC_HAVE_SYNC_COMPARE_AND_SWAP_4 1
#define __DBL_DENORM_MIN__ ((double)4.9406564584124654e-324L)
#define __GCC_ATOMIC_CHAR_LOCK_FREE 2
#define __FLT_EVAL_METHOD__ 0
#define __LLACCUM_MAX__ 0X7FFFFFFFFFFFFFFFP-31LLK
#define __GCC_ATOMIC_CHAR32_T_LOCK_FREE 2
#define __FRACT_FBIT__ 15
#define __UINT_FAST64_MAX__ 18446744073709551615ULL
#define __SIG_ATOMIC_TYPE__ int
#define __UACCUM_FBIT__ 16
#define __DBL_MIN_10_EXP__ (-307)
#define __FINITE_MATH_ONLY__ 0
#define __ARMEL__ 1
#define __ARM_FEATURE_UNALIGNED 1
#define __ARM_ARCH_7M__ 1
#define __LFRACT_IBIT__ 0
#define __GNUC_PATCHLEVEL__ 2
#define __LFRACT_MAX__ 0X7FFFFFFFP-31LR
#define __UINT_FAST8_MAX__ 4294967295U
#define __DEC64_MAX_EXP__ 385
#define __INT8_C(c) c
#define __UINT_LEAST64_MAX__ 18446744073709551615ULL
#define __SA_FBIT__ 15
#define __SHRT_MAX__ 32767
#define __LDBL_MAX__ 1.7976931348623157e+308L
#define __FRACT_MAX__ 0X7FFFP-15R
#define __thumb2__ 1
#define __UFRACT_FBIT__ 16
#define __UFRACT_MIN__ 0.0UR
#define __UINT_LEAST8_MAX__ 255
#define __GCC_ATOMIC_BOOL_LOCK_FREE 2
#define __UINTMAX_TYPE__ long long unsigned int
#define __LLFRACT_EPSILON__ 0x1P-63LLR
#define __DEC32_EPSILON__ 1E-6DF
#define __CHAR_UNSIGNED__ 1
#define __UINT32_MAX__ 4294967295UL
#define __ULFRACT_MAX__ 0XFFFFFFFFP-32ULR
#define __TA_IBIT__ 64
#define __LDBL_MAX_EXP__ 1024
#define __WINT_MIN__ 0U
#define __ULLFRACT_MIN__ 0.0ULLR
#define __SCHAR_MAX__ 127
#define __WCHAR_MIN__ 0U
#define __INT64_C(c) c ## LL
#define __DBL_DIG__ 15
#define __GCC_ATOMIC_POINTER_LOCK_FREE 2
#define __LLACCUM_MIN__ (-0X1P31LLK-0X1P31LLK)
#define __SIZEOF_INT__ 4
#define __SIZEOF_POINTER__ 4
#define __USACCUM_IBIT__ 8
#define __USER_LABEL_PREFIX__
#define __STDC_HOSTED__ 1
#define __LDBL_HAS_INFINITY__ 1
#define __LFRACT_MIN__ (-0.5LR-0.5LR)
#define __HA_IBIT__ 8
#define __TQ_IBIT__ 0
#define __FLT_EPSILON__ 1.1920928955078125e-7F
#define __APCS_32__ 1
#define __USFRACT_IBIT__ 0
#define __LDBL_MIN__ 2.2250738585072014e-308L
#define __FRACT_MIN__ (-0.5R-0.5R)
#define __DEC32_MAX__ 9.999999E96DF
#define __DA_IBIT__ 32
#define __INT32_MAX__ 2147483647L
#define __UQQ_FBIT__ 8
#define __SIZEOF_LONG__ 4
#define __UACCUM_MAX__ 0XFFFFFFFFP-16UK
#define __UINT16_C(c) c
#define __DECIMAL_DIG__ 17
#define __LFRACT_EPSILON__ 0x1P-31LR
#define __ULFRACT_MIN__ 0.0ULR
#define __LDBL_HAS_QUIET_NAN__ 1
#define __ULACCUM_IBIT__ 32
#define __UACCUM_EPSILON__ 0x1P-16UK
#define __GNUC__ 4
#define __ULLACCUM_MAX__ 0XFFFFFFFFFFFFFFFFP-32ULLK
#define __HQ_IBIT__ 0
#define __FLT_HAS_DENORM__ 1
#define __SIZEOF_LONG_DOUBLE__ 8
#define __BIGGEST_ALIGNMENT__ 8
#define __DQ_IBIT__ 0
#define __DBL_MAX__ ((double)1.7976931348623157e+308L)
#define __ULFRACT_IBIT__ 0
#define __INT_FAST32_MAX__ 2147483647
#define __DBL_HAS_INFINITY__ 1
#define __ACCUM_IBIT__ 16
#define __DEC32_MIN_EXP__ (-94)
#define __THUMB_INTERWORK__ 1
#define __LACCUM_MAX__ 0X7FFFFFFFFFFFFFFFP-31LK
#define __INT_FAST16_TYPE__ int
#define __LDBL_HAS_DENORM__ 1
#define __DEC128_MAX__ 9.999999999999999999999999999999999E6144DL
#define __INT_LEAST32_MAX__ 2147483647L
#define __ARM_PCS 1
#define __DEC32_MIN__ 1E-95DF
#define __ACCUM_MAX__ 0X7FFFFFFFP-15K
#define __DBL_MAX_EXP__ 1024
#define __USACCUM_EPSILON__ 0x1P-8UHK
#define __DEC128_EPSILON__ 1E-33DL
#define __SFRACT_MAX__ 0X7FP-7HR
#define __FRACT_IBIT__ 0
#define __PTRDIFF_MAX__ 2147483647
#define __UACCUM_MIN__ 0.0UK
#define __UACCUM_IBIT__ 16
#define __LONG_LONG_MAX__ 9223372036854775807LL
#define __SIZEOF_SIZE_T__ 4
#define __ULACCUM_MAX__ 0XFFFFFFFFFFFFFFFFP-32ULK
#define __SIZEOF_WINT_T__ 4
#define __SA_IBIT__ 16
#define __ULLACCUM_MIN__ 0.0ULLK
#define __GXX_ABI_VERSION 1002
#define __UTA_FBIT__ 64
#define __SOFTFP__ 1
#define __FLT_MIN_EXP__ (-125)
#define __USFRACT_MAX__ 0XFFP-8UHR
#define __UFRACT_IBIT__ 0
#define __INT_FAST64_TYPE__ long long int
#define __DBL_MIN__ ((double)2.2250738585072014e-308L)
#define __LACCUM_MIN__ (-0X1P31LK-0X1P31LK)
#define __ULLACCUM_FBIT__ 32
#define __GXX_TYPEINFO_EQUALITY_INLINE 0
#define __ULLFRACT_EPSILON__ 0x1P-64ULLR
#define __USES_INITFINI__ 1
#define __DEC128_MIN__ 1E-6143DL
#define __REGISTER_PREFIX__
#define __UINT16_MAX__ 65535
#define __DBL_HAS_DENORM__ 1
#define __ACCUM_MIN__ (-0X1P15K-0X1P15K)
#define __SQ_IBIT__ 0
#define __UINT8_TYPE__ unsigned char
#define __UHA_FBIT__ 8
#define __NO_INLINE__ 1
#define __SFRACT_MIN__ (-0.5HR-0.5HR)
#define __UTQ_FBIT__ 128
#define __FLT_MANT_DIG__ 24
#define __VERSION__ "4.7.2"
#define __UINT64_C(c) c ## ULL
#define __ULLFRACT_FBIT__ 64
#define __FRACT_EPSILON__ 0x1P-15R
#define __ULACCUM_MIN__ 0.0ULK
#define __UDA_FBIT__ 32
#define __LLACCUM_EPSILON__ 0x1P-31LLK
#define __GCC_ATOMIC_INT_LOCK_FREE 2
#define __FLOAT_WORD_ORDER__ __ORDER_LITTLE_ENDIAN__
#define __USFRACT_MIN__ 0.0UHR
#define __UQQ_IBIT__ 0
#define __INT32_C(c) c ## L
#define __DEC64_EPSILON__ 1E-15DD
#define __ORDER_PDP_ENDIAN__ 3412
#define __DEC128_MIN_EXP__ (-6142)
#define __UHQ_FBIT__ 16
#define __LLACCUM_FBIT__ 31
#define __INT_FAST32_TYPE__ int
#define __UINT_LEAST16_TYPE__ short unsigned int
#define __INT16_MAX__ 32767
#define __SIZE_TYPE__ unsigned int
#define __UINT64_MAX__ 18446744073709551615ULL
#define __UDQ_FBIT__ 64
#define __INT8_TYPE__ signed char
#define __thumb__ 1
#define __ELF__ 1
#define __ULFRACT_EPSILON__ 0x1P-32ULR
#define __LLFRACT_FBIT__ 63
#define __FLT_RADIX__ 2
#define __INT_LEAST16_TYPE__ short int
#define __LDBL_EPSILON__ 2.2204460492503131e-16L
#define __UINTMAX_C(c) c ## ULL
#define __SACCUM_MAX__ 0X7FFFP-7HK
#define __SIG_ATOMIC_MAX__ 2147483647
#define __GCC_ATOMIC_WCHAR_T_LOCK_FREE 2
#define __VFP_FP__ 1
#define __SIZEOF_PTRDIFF_T__ 4
#define __CS_SOURCERYGXX_MIN__ 9
#define __LACCUM_EPSILON__ 0x1P-31LK
#define __DEC32_SUBNORMAL_MIN__ 0.000001E-95DF
#define __INT_FAST16_MAX__ 2147483647
#define __UINT_FAST32_MAX__ 4294967295U
#define __UINT_LEAST64_TYPE__ long long unsigned int
#define __USACCUM_MAX__ 0XFFFFP-8UHK
#define __SFRACT_EPSILON__ 0x1P-7HR
#define __FLT_HAS_QUIET_NAN__ 1
#define __FLT_MAX_10_EXP__ 38
#define __LONG_MAX__ 2147483647L
#define __DEC128_SUBNORMAL_MIN__ 0.000000000000000000000000000000001E-6143DL
#define __FLT_HAS_INFINITY__ 1
#define __USA_FBIT__ 16
#define __UINT_FAST16_TYPE__ unsigned int
#define __DEC64_MAX__ 9.999999999999999E384DD
#define __CHAR16_TYPE__ short unsigned int
#define __PRAGMA_REDEFINE_EXTNAME 1
#define __CS_SOURCERYGXX_MAJ__ 2012
#define __INT_LEAST16_MAX__ 32767
#define __DEC64_MANT_DIG__ 16
#define __INT64_MAX__ 9223372036854775807LL
#define __UINT_LEAST32_MAX__ 4294967295UL
#define __SACCUM_FBIT__ 7
#define __GCC_ATOMIC_LONG_LOCK_FREE 2
#define __INT_LEAST64_TYPE__ long long int
#define __INT16_TYPE__ short int
#define __INT_LEAST8_TYPE__ signed char
#define __SQ_FBIT__ 31
#define __DEC32_MAX_EXP__ 97
#define __INT_FAST8_MAX__ 2147483647
#define __INTPTR_MAX__ 2147483647
#define __QQ_FBIT__ 7
#define __UTA_IBIT__ 64
#define __LDBL_MANT_DIG__ 53
#define __SFRACT_FBIT__ 7
#define __SACCUM_MIN__ (-0X1P7HK-0X1P7HK)
#define __DBL_HAS_QUIET_NAN__ 1
#define __SIG_ATOMIC_MIN__ (-__SIG_ATOMIC_MAX__ - 1)
#define __INTPTR_TYPE__ int
#define __UINT16_TYPE__ short unsigned int
#define __WCHAR_TYPE__ unsigned int
#define __SIZEOF_FLOAT__ 4
#define __THUMBEL__ 1
#define __USQ_FBIT__ 32
#define __UINTPTR_MAX__ 4294967295U
#define __DEC64_MIN_EXP__ (-382)
#define __ULLACCUM_IBIT__ 32
#define __INT_FAST64_MAX__ 9223372036854775807LL
#define __GCC_ATOMIC_TEST_AND_SET_TRUEVAL 1
#define __FLT_DIG__ 6
#define __UINT_FAST64_TYPE__ long long unsigned int
#define __INT_MAX__ 2147483647
#define __LACCUM_FBIT__ 31
#define __USACCUM_MIN__ 0.0UHK
#define __UHA_IBIT__ 8
#define __INT64_TYPE__ long long int
#define __FLT_MAX_EXP__ 128
#define __UTQ_IBIT__ 0
#define __DBL_MANT_DIG__ 53
#define __INT_LEAST64_MAX__ 9223372036854775807LL
#define __GCC_ATOMIC_CHAR16_T_LOCK_FREE 2
#define __DEC64_MIN__ 1E-383DD
#define __WINT_TYPE__ unsigned int
#define __UINT_LEAST32_TYPE__ long unsigned int
#define __SIZEOF_SHORT__ 2
#define __ULLFRACT_IBIT__ 0
#define __LDBL_MIN_EXP__ (-1021)
#define __arm__ 1
#define __UDA_IBIT__ 32
#define __INT_LEAST8_MAX__ 127
#define __LFRACT_FBIT__ 31
#define __LDBL_MAX_10_EXP__ 308
#define __ATOMIC_RELAXED 0
#define __DBL_EPSILON__ ((double)2.2204460492503131e-16L)
#define __UINT8_C(c) c
#define __INT_LEAST32_TYPE__ long int
#define __SIZEOF_WCHAR_T__ 4
#define __UINT64_TYPE__ long long unsigned int
#define __LLFRACT_MAX__ 0X7FFFFFFFFFFFFFFFP-63LLR
#define __TQ_FBIT__ 127
#define __INT_FAST8_TYPE__ int
#define __ULLACCUM_EPSILON__ 0x1P-32ULLK
#define __UHQ_IBIT__ 0
#define __LLACCUM_IBIT__ 32
#define __DBL_DECIMAL_DIG__ 17
#define __DEC_EVAL_METHOD__ 2
#define __TA_FBIT__ 63
#define __UDQ_IBIT__ 0
#define __ORDER_BIG_ENDIAN__ 4321
#define __ACCUM_EPSILON__ 0x1P-15K
#define __UINT32_C(c) c ## UL
#define __INTMAX_MAX__ 9223372036854775807LL
#define __BYTE_ORDER__ __ORDER_LITTLE_ENDIAN__
#define __FLT_DENORM_MIN__ 1.4012984643248171e-45F
#define __LLFRACT_IBIT__ 0
#define __INT8_MAX__ 127
#define __UINT_FAST32_TYPE__ unsigned int
#define __CHAR32_TYPE__ long unsigned int
#define __FLT_MAX__ 3.4028234663852886e+38F
#define __USACCUM_FBIT__ 8
#define __INT32_TYPE__ long int
#define __SIZEOF_DOUBLE__ 8
#define __FLT_MIN_10_EXP__ (-37)
#define __UFRACT_EPSILON__ 0x1P-16UR
#define __INTMAX_TYPE__ long long int
#define __DEC128_MAX_EXP__ 6145
#define __ATOMIC_CONSUME 1
#define __GNUC_MINOR__ 7
#define __UINTMAX_MAX__ 18446744073709551615ULL
#define __DEC32_MANT_DIG__ 7
#define __HA_FBIT__ 7
#define __DBL_MAX_10_EXP__ 308
#define __LDBL_DENORM_MIN__ 4.9406564584124654e-324L
#define __INT16_C(c) c
#define __STDC__ 1
#define __PTRDIFF_TYPE__ int
#define __LLFRACT_MIN__ (-0.5LLR-0.5LLR)
#define __ATOMIC_SEQ_CST 5
#define __DA_FBIT__ 31
#define __UINT32_TYPE__ long unsigned int
#define __ARM_ARCH_EXT_IDIV__ 1
#define __UINTPTR_TYPE__ unsigned int
#define __USA_IBIT__ 16
#define __DEC64_SUBNORMAL_MIN__ 0.000000000000001E-383DD
#define __ARM_EABI__ 1
#define __DEC128_MANT_DIG__ 34
#define __LDBL_MIN_10_EXP__ (-307)
#define __SIZEOF_LONG_LONG__ 8
#define __ULACCUM_EPSILON__ 0x1P-32ULK
#define __SACCUM_IBIT__ 8
#define __GCC_ATOMIC_LLONG_LOCK_FREE 1
#define __LDBL_DIG__ 15
#define __FLT_DECIMAL_DIG__ 9
#define __UINT_FAST16_MAX__ 4294967295U
#define __GNUC_GNU_INLINE__ 1
#define __GCC_ATOMIC_SHORT_LOCK_FREE 2
#define __ULLFRACT_MAX__ 0XFFFFFFFFFFFFFFFFP-64ULLR
#define __UINT_FAST8_TYPE__ unsigned int
#define __USFRACT_EPSILON__ 0x1P-8UHR
#define __ULACCUM_FBIT__ 32
#define __QQ_IBIT__ 0
#define __ATOMIC_ACQ_REL 4
#define __ATOMIC_RELEASE 3
I find this kind of stuff fascinating. And necessary, for the kind of work that I do down close to bare metal where the underlying details are often important. But I'm careful about writing code that depends on these symbols unless it's really necessary. For example, I much prefer to use sizeof(size_t) instead of __SIZEOF_SIZE_T__ if possible (sometimes it isn't).
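Here is a small sketch of what I mean. The preprocessor cannot evaluate sizeof(), so a decision that has to be made at preprocessing time has to lean on a predefined symbol like __SIZEOF_SIZE_T__; anywhere a constant expression will do, sizeof() is the more portable choice. The offset_t typedef is just a hypothetical example.

/*
 * A sketch of when the predefined symbol is necessary and when sizeof()
 * will do. The offset_t typedef is a hypothetical example.
 */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* The preprocessor cannot evaluate sizeof(), so a choice made during
 * preprocessing has to use a predefined symbol. */
#if defined(__SIZEOF_SIZE_T__) && (__SIZEOF_SIZE_T__ < 8)
typedef uint32_t offset_t;      /* e.g. a 32-bit target like the Cortex-M3 */
#else
typedef uint64_t offset_t;
#endif

/* Anywhere a constant expression is sufficient, sizeof() is portable even
 * to toolchains that don't predefine the symbol at all. */
static const size_t SIZE_T_BYTES = sizeof(size_t);

int main(void)
{
    printf("sizeof(size_t)=%zu sizeof(offset_t)=%zu\n", SIZE_T_BYTES, sizeof(offset_t));
    return 0;
}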
It's been interesting, having spent the past several years developing low-level code for embedded systems whose processors range from powerful ARM microprocessors like the Cortex-A8 -- a 32-bit von Neumann architecture with a memory management unit, on which I run Linux -- to Atmel AVR and Microchip PIC16 microcontrollers -- 8-bit Harvard architectures, sometimes with as little as ninety-six bytes of RAM, on which I may run just a task loop -- to see the ARM Cortex-M3 straddle both worlds: a 32-bit ARM core, yet with separate program and data memories.
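For those who haven't lived at the small end of that range, here is a minimal sketch of the kind of bare-metal task loop I mean. The three task functions are hypothetical placeholders; on a real microcontroller each would poll hardware registers and run quickly to completion.

/*
 * A minimal bare-metal task loop of the kind run on a tiny microcontroller
 * with no operating system. The task functions are hypothetical placeholders.
 */
static void poll_sensor(void)      { /* placeholder: read an ADC channel */ }
static void update_output(void)    { /* placeholder: adjust a PWM duty cycle */ }
static void service_watchdog(void) { /* placeholder: kick the watchdog timer */ }

int main(void)
{
    /* Each task runs to completion and returns quickly; the loop itself is
     * the only "scheduler" there is. */
    for (;;) {
        poll_sensor();
        update_output();
        service_watchdog();
    }
    return 0;   /* never reached */
}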
My friends who are Java developers may safely go back to downloading an additional terabyte of framework to add yet another layer of abstraction to whatever they're trying to accomplish.