Saturday, December 29, 2012

Scalability, Economics, and the Fermi Paradox

For some time now I've been wondering if my professional interests in technological scalability and my dilettante interests in economics and the Fermi Paradox might all be connected.

In "Is US Economic Growth Over?" economist Robert Gordon recently argues that the U.S. and the rest of the developed world is at an end of a third, and smaller, industrial revolution.
The analysis in my paper links periods of slow and rapid growth to the timing of the three industrial revolutions: 
  • IR #1 (steam, railroads) from 1750 to 1830;  
  • IR #2 (electricity, internal combustion engine, running water, indoor toilets, communications, entertainment, chemicals, petroleum) from 1870 to 1900; and
  • IR #3 (computers, the web, mobile phones) from 1960 to present. 
It provides evidence that IR #2 was more important than the others and was largely responsible for 80 years of relatively rapid productivity growth between 1890 and 1972.
Once the spin-off inventions from IR #2 (airplanes, air conditioning, interstate highways) had run their course, productivity growth during 1972-96 was much slower than before. In contrast, IR #3 created only a short-lived growth revival between 1996 and 2004. Many of the original and spin-off inventions of IR #2 could happen only once – urbanisation, transportation speed, the freedom of women from the drudgery of carrying tons of water per year, and the role of central heating and air conditioning in achieving a year-round constant temperature.
In "Is Growth Over?" economist Paul Krugman ponders this in light of the kind of non-scarcity economy known to anyone who is familiar with the idea of the technological singularity, in which robots replace most manual laborers and artificial intelligences (strong or weak) replace most information workers. As he points out, if all labor done by machines, you can raise the production per capita to any level you want, providing the robots and AIs are not part of what gets counted in the capita. (I should mention that Krugman is a science fiction fan himself and is certainly familiar with the writings of folks like Vernor Vinge and Charles Stross. I saw Stross interview Krugman in person at the World Science Fiction Convention in Montreal in 2009. Where do you go to hear Nobel Laureates speak?)

Krugman doesn't explain, however, where the raw materials for this production will come from. In his book The Great Stagnation, economist Tyler Cowen suggests that economic growth, particularly in the United States, was due to our having taken advantage of low-hanging fruit in the form of things like natural resources, cheap energy, and education. Now that this low-hanging fruit has been more or less consumed, growth may become much more difficult.

Also recently, on the science fiction blog io9, George Dvorsky has written about The Great Filter, one of the possible explanations for the Fermi Paradox. Long-time readers of this blog, and anyone who knows me well, will recall that I find the Fermi Paradox troubling. The Fermi Paradox is this: given the vast size of space, the vast span of time, and the vast numbers of galaxies, stars, and planets, at least some percentage of which must be habitable, why haven't we heard any radio signals from extraterrestrial sources? It hasn't been for lack of trying. The Great Filter hypothesis surmises that there is some fundamental and insurmountable barrier to development that all civilizations come up against.

Another possible explanation for what has been called The Great Silence is that mankind indeed holds a privileged position in the Universe. This can be seen as a pro-religion argument, but it needn't be. It is possible that life is much, much rarer than we have believed. There is actually some evidence to suggest that the Earth itself may occupy a unique region of space in which the physical constants permit life to thrive. (I've written about this in The Fine Structure Constant Isn't.)

Unfortunately, the explanation I find the most compelling (and like the least) is this: the Prisoner's Dilemma in game theory suggests that the dominant strategy is for space-faring civilizations to wipe one another out before they themselves are wiped out by their competition. I call this the "Borg Strategy" (although rather than assimilation, I find a more credible mechanism to be weaponized von Neumann machines). Compare this to the mutually optimal strategy of cooperation, which I call the "United Federation of Planets Strategy". (I've written about this in The Prisoner's Dilemma, The Fermi Paradox, and War Games.)
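To make the game-theoretic point concrete, here is a minimal sketch in C++ of the one-shot game, using made-up payoff values in the classic ordering (temptation > reward > punishment > sucker's payoff). All it does is check that, whatever the other civilization does, defecting pays at least as much as cooperating, which is exactly what makes defection the dominant strategy.

#include <cstdio>

// One-shot Prisoner's Dilemma with illustrative (made-up) payoffs for the
// row player; higher is better. The classic ordering is T > R > P > S.
enum Move { COOPERATE = 0, DEFECT = 1 };

static const int PAYOFF[2][2] = {
  // other cooperates, other defects
  {  3,  0 },  // I cooperate: R=3, S=0
  {  5,  1 }   // I defect:    T=5, P=1
};

int main(void) {
  for (int other = COOPERATE; other <= DEFECT; ++other) {
    int ifCooperate = PAYOFF[COOPERATE][other];
    int ifDefect = PAYOFF[DEFECT][other];
    printf("other %s: cooperate=%d defect=%d -> %s\n",
      (other == COOPERATE) ? "cooperates" : "defects",
      ifCooperate, ifDefect,
      (ifDefect >= ifCooperate) ? "defect is at least as good" : "cooperate is better");
  }
  return 0;
}

The tragedy, of course, is that mutual cooperation (3 and 3 in this toy payoff matrix) beats mutual defection (1 and 1), but neither player can get there unilaterally.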

In my professional work, particularly with large distributed systems and supercomputing, I have frequently seen issues with scalability. Often it becomes difficult to scale up performance with problem size. Cloud providers like Google and Amazon.com have addressed many problems that we once thought were intractable, as has the application of massively parallel processing to many traditional supercomputer applications. But the ugly truth is that cloud/MPP really only solves problems that are "embarrassingly parallel", that is, problems that naturally break up into many small and mostly independent parts. (I've written about this in Post-Modern Deck Construction.)
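Here is a toy C++ sketch of what "embarrassingly parallel" means: each worker thread sums its own chunk of the data without talking to any other worker, and the only coordination is a trivial reduction at the end. The worker count and the data are, of course, made up purely for illustration.

#include <cstdio>
#include <thread>
#include <vector>

int main(void) {
  const int WORKERS = 4;
  const int N = 1000000;
  std::vector<int> data(N, 1);           // a million trivially independent items
  std::vector<long> partial(WORKERS, 0); // one partial sum per worker
  std::vector<std::thread> threads;
  for (int w = 0; w < WORKERS; ++w) {
    threads.emplace_back([&, w]() {
      // Each worker handles its own slice; no communication with the others.
      int begin = w * (N / WORKERS);
      int end = (w == (WORKERS - 1)) ? N : (begin + (N / WORKERS));
      for (int i = begin; i < end; ++i) {
        partial[w] += data[i];
      }
    });
  }
  for (auto & t : threads) { t.join(); }
  long total = 0;
  for (long p : partial) { total += p; } // the only "coordination" is this tiny reduction
  printf("total=%ld\n", total);
  return 0;
}

Problems that can't be carved up this cleanly, where the workers have to exchange intermediate results constantly, are exactly the ones that cloud/MPP doesn't magically fix.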

Many problems will remain intractable because they fall into the NP category: they are only known to be efficiently solvable in "Non-Deterministic Polynomial" time (thanks to my old friend David Hemmendinger for that correction), which is to say that the best known deterministic algorithms for them scale, for example, exponentially with problem size. There are lots of problems in the NP category. Lucky for all of us, encryption and decryption are in the P category, while cryptographic code breaking is (so far as we know) not. True, encryption becomes easier to break as processing power increases, but adding a few more bits to the key increases the work necessary to crack codes exponentially.
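A back-of-the-envelope C++ sketch, assuming (purely for illustration) an adversary who can try a trillion keys per second, shows how each additional key bit doubles the brute-force work:

#include <cstdio>
#include <cmath>

int main(void) {
  const double keysPerSecond = 1.0e12;             // assumed brute-force trial rate
  const double secondsPerYear = 365.25 * 24.0 * 3600.0;
  for (int bits = 56; bits <= 256; bits += 8) {
    double keys = pow(2.0, (double)bits);          // about 2^bits trial decryptions
    double years = (keys / keysPerSecond) / secondsPerYear;
    printf("%3d-bit key: ~%.3g years of work\n", bits, years);
  }
  return 0;
}

Even at that generous rate, a 128-bit key works out to something like 10^19 years of work, vastly longer than the age of the universe, which is why the defenders can stay ahead of the code breakers just by adding bits.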

What problems in economics are in fact computationally NP? For example, it could be that strategies necessary to more or less optimally manage an economy are fundamentally NP. This is one of the reasons that pro-free-market people give for free markets, where market forces encourage people to "do the right thing" independent of any central management. It really is a kind of crowd-sourced economic management system.

But suppose that there's a limit - in terms of computation or time or both - to how well an economy can work as a function of the number of actors (people, companies) in the economy relative to its available resources. Maybe there's some fundamental limit by which if a civilization hasn't achieved interstellar travel, it becomes impossible for them to do so. This can be compared to the history of Pacific islanders who became trapped on Easter Island when they cut down the last tree: no more big ocean-going canoes, and, as a side effect of the deforestation, ecological collapse.

This doesn't really need to be an NP class problem. It may just be that if a civilization doesn't make getting off of its planet a global priority, the time comes when it no longer has the resources necessary for an interplanetary diaspora or even for interstellar communication.

Henry Kissinger once famously said "Every civilization that has ever existed has ultimately collapsed." Is it possible that this is the result of fundamentally non-scalable economic principles, and is in part the explanation for the Fermi Paradox?

Update (2013-01-13)

I just finished Stephen Webb's If the Universe Is Teeming with Aliens... WHERE IS EVERYBODY? Fifty Solutions to the Fermi Paradox and the Problem of Extraterrestrial Life (Springer-Verlag, 2002). Webb presents forty-nine explanations, plus one of his own, at about five pages apiece, that have been put forth by scientists, philosophers, and others. Besides being a good survey of this topic, it's also a good layman's introduction to a number of related science topics like plate tectonics, planetary formation, and the neurological support for language acquisition and processing. I recommend it.

Wednesday, December 26, 2012

Leaving versus Staying

Let us suppose that people in the workforce can be categorized into those that need a compelling reason to leave a job (Group L) and those that need a compelling reason to stay in a job (Group S).

What would the benefits and retention strategies of a company predominantly employing people in Group L look like? Economics suggests that the company employing Group L employees would reach for the lowest common denominator in terms of benefits. The company would cite "best practices" in its industry to justify expending the least amount of money possible: the most basic health insurance, minimal vacation, the least flexible work hours, and so on. There would be no financial incentive to do any more than the least it had to do to keep its Group L employees. In fact, if it wanted to get rid of Group L employees, it would have to provide the compelling reason for them to leave, for example, layoffs. The company wouldn't have to worry about retaining Group S employees, nor would it have to worry about getting rid of them. They would likely have already left.

Companies employing folks from Group L have it easy: they just have to make sure they don't give them a reason to leave. Except when they want them to leave. And my guess is that most people in Group L have similar reasons they would find compelling to leave.

What would the benefits and retention strategies of a company predominantly employing people in Group S look like? It would have to go out of its way to offer crazy generous benefits, benefits so obviously better than those of any of its competitors, in order to retain Group S employees. Like free day care. Free food. Subsidized mass transit. Time to work on personal projects. Or it would have to put so much money on the table, maybe in the form of bonuses, that it might seem outrageous to those on the outside, and a Group S employee would be foolish to leave. Provided, of course, they didn't think they could get that kind of crazy money through other means. In which case, the outrageous bonuses aren't really a useful retention tool either.

Companies employing folks from Group S have it a lot harder. My guess is most people in Group S have different compelling reasons for staying. So the Group S company has to really scramble to keep their Group S employees.

Is it possible for a Group S employee to work at a Group L company? Sure, although the company probably has no idea why the Group S employee stays. The compelling reason for the Group S employee to stay may be something quite personal, even private. The management of the Group L company will be surprised when the Group S employee leaves, because from the Group L company's point of view, nothing has changed that could motivate the employee to leave.

Is it possible for a Group L employee to work at a Group S company? Maybe, and the Group L employee is probably amazed at how good they've got it. But the Group S company probably tries hard not to hire Group L employees. This could be done by placing all sorts of barriers in the interview and hiring process. Group S employees have to have a really good reason to apply for a particular job, even if it is with a Group S company.

I haven't said anything about the kinds of people that may be in Group L versus Group S. I have merely proposed a way in which you can decide which kind of company you work for. This probably says more about how your employer sees you than it says about you.

Thursday, December 20, 2012

Dead Man Walking

This article is about the event that was the greatest disaster of my career. It is also about the event that was the greatest stroke of luck of my career. It was the same event.

In 2005 I was working for a large Telecommunications Equipment Manufacturer. TEM had an impressive product line ranging from pizza-box-sized VOIP gateways to enormous global communications systems. At the time I was a software developer for TEM's Enterprise Communications Product, software with a code base of millions of lines, the oldest sections of which stretched back to the invention of the C language. ECP was the crown jewel that, directly or indirectly, paid all the bills. Although I was an experienced product developer, I was fairly new to this area of the company, having spent the past several years writing software and firmware mostly in C++ closer to the hardware. But I had been working on a team that had developed a major new feature for ECP that was going to be in the upcoming release.

TEM was eager to win a big contract with one of their largest customers, a large, well-known Wealth Management Firm. It is likely that some or all of your retirement funds were under their management. WMF wanted a unified communications system for both their big office buildings, of which they had several, and for their smaller satellite offices scattered all over the country, of which they had many.

TEM was so eager to win this big contract, and the timing for WMF's acquisition was such, that my employer decided to preview the latest release of ECP by sending a team to one of WMF's data centers to install it on one of their servers so that WMF could see just how awesome it was, especially with this new feature that I had helped develop. But this new release of ECP not only wasn't in beta with any other customers yet, it hadn't even passed through TEM's own Quality Assurance organization. It was, at best, a development load, and not a terribly mature one at that. But millions of dollars were riding on TEM convincing WMF that ECP was the way to go.

When they asked me to get on the plane, being the fearless sort, I said yes.

Even given my relative inexperience with this code base, I was probably the logical choice. I had been one of the developers of one of the features in which WMF was interested. And getting this new release of ECP on WMF's server was not the usual process easily handled by one of TEM's technical support people. Because of the immaturity of the release, it wasn't a simple update, but a disk swap that required that I open up WMF's server and do some surgery. I had to back up the configuration and parameter database from the prior ECP release, swap the disks, and restore it to the new release. I was traveling with two disk drives in a hard case and a tool kit.

The conditions under which my work was done at the WMF data center were not optimal. I was in a tightly packed equipment room taking up most of a floor of an office building on a large WMF campus. All the work had to be done in the dead of night, outside of normal business hours. I was locked in the equipment room without any means of communicating with anyone. If I walked out, even to go to the euphemism, I couldn't get back in without finding a phone outside the room and calling someone. I had a mobile phone, but it couldn't get a signal inside the equipment room. For security reasons, there was no internet access. I had to get the install done quickly so that the other two TEM developers who had come on site could administer the new features and we could demo them before our narrow maintenance window expired. Security was tight, and time was short. I spent almost all my time at WMF sitting on the floor in a very narrow aisle with tools and parts strewn all around me, and a laptop in my lap connected to the server's maintenance Ethernet port. I got the DB backed up, the disks swapped, and the DB restored.

ECP would not come up.

It core dumped during initialization. I didn't even get to a maintenance screen. The system log file told me nothing that the stack trace didn't already. The ECP log was useless. I swapped the disks again and verified that the prior system came up just fine with the same DB, as expected. I tried the spare disk that I had brought with me, to no avail. I desperately needed a clue. The catastrophic failure was in some part of the enormous application that I knew nothing about. Even if I did, I didn't have access to the code base or any of the usual diagnostic tools while sitting on the floor with my laptop. I had saved the DB, the stack trace, and the core dump on my laptop, but had no way to diagnose this level of failure on site, and no way to reach anyone who could. I knew that I was going to have to declare failure and cart everything back home for analysis.

Later, back at TEM, there were lots of meetings and post-mortems, but, remarkably, not a lot of finger pointing. We all knew it was a very immature release. I engaged other, more experienced, ECP developers to diagnose the failure and they set upon fixing it in the development base. Once that was done, I set up an ECP server, sans any actual telecommunications hardware, in my office, and installed WMF's DB on it to verify that it did indeed now come up. In the meantime, TEM's QA organization began testing this new ECP release on their own servers, which did have actual hardware. Just a week or two passed before the powers that be decided that the new release had percolated enough that another attempt would be made. WMF would give TEM and ECP another chance.

I said yes, again. In hindsight, I'm a little surprised they asked.

This time I had a copy of the entire ECP code base on my laptop, although I still had no access to any of the complex diagnostic tools used to troubleshoot the system. The circumstances were identical: the same cast of characters, the same cramped cold equipment room, the same DB, the exact same server. Once again, I backed up the DB, swapped the disk, and restored the DB.

ECP came up. But it refused to do a firmware load onto the boards that allowed the server to communicate with any of the distributed equipment cabinets. ECP was up, but it was effectively useless.

We hadn't seen anything like this in our own QA testing of the new release, even though it used the same boards. My intuition told me that it probably had something to do with WMF's specific DB. We weren't able to test with that DB in QA because the data in the DB is quite specific to the exact hardware configuration of the system, which involved hundreds if not thousands of individual components that we were unable to replicate exactly. The error didn't appear to be in the ECP software base itself, but in the firmware for the communications board, the source code of which I didn't have. And in any case I was not familiar with the hundreds of thousands of lines of C++ that made up that firmware. I personally knew folks at TEM who were, but even though they were standing by in the middle of the night back at the R&D facility, I had no way to contact them while connected to the server in front of me. After some consulting with the other TEM folks on site, and as our narrow maintenance window was closing, I once again declared failure.

As I got on the plane back east to return home, I knew that this was the end of my career at TEM. I took it as a compliment that they didn't fire me. They didn't even ax me in the inevitable next wave of layoffs. There were lots more meetings and post-mortems, some perhaps harsh but in my opinion well deserved words from TEM management to me, and a lot of discussion about a possible Plan C. But WMF's acquisition timetable had closed. And I knew that I would never be trusted with anything truly important at TEM ever again.

This is not the end of the story.

If you've never worked in a really big product development organization, it may help to know how these things operate.

ECP wasn't a single software product. It was a broad and deep product line incorporating several different types of servers, several possible configurations for some of the servers, many different hardware communications interface boards, and a huge number of features and options, some of which targeted very specific industries or market verticals. Just the ECP software that ran on a server alone was around eight million lines of code, mostly C. The code bases for all of the firmware that ran on the dozens of individual interface and feature boards manufactured by TEM, incorporating many different microprocessors, microcontrollers, FPGAs, and ASICs, and written in many different languages ranging from assembler to C++ to VHDL, added another few million lines of code. As ECP features were added and modified and new hardware introduced, all of this had to be kept in sync by a big globally distributed development organization of hundreds of developers and other experts.

The speed at which new ECP releases were generated by TEM was such that dozens of developers were kept busy fixing bugs in perhaps two or more prior releases, while another team of developers was writing new features for the next release. It was this bleeding-edge development release that I had hand carried to WMF. So it was not at all unusual to have at least three branches or forks of the ECP base in play at any one time. As bugs were found in the two prior forks, the fixes had to be ported forward to the latest fork. This was not always simple, since the code containing the bug fix in the older fork may have been refactored, that is, modified, replaced, or even eliminated, in the course of new feature development in the latest fork. While a single code base might have been desirable, it simply wasn't practical given the demands of TEM's large installed user base all over the world, where customers just wanted their bugs fixed so that they could get on with their work and weren't at all interested in new and exciting bugs.

Once I got back home, and got some breathing space between meetings with understandably angry and disappointed managers, I started researching both of the WMF failures. Here is what I discovered: both of the issues I encountered trying to get ECP to run at WMF were known problems. They were documented in the bug reporting system for the prior release, not the development release that I had. The two bug reports were written as a result of TEM's own testing of the prior release. At WMF. At the same data center. On the same server. Those two known bugs had been fixed in the prior release, the very release of ECP that was already running on WMF's test server, but the fixes had not yet been ported forward to the development release that I was using for either of my two site visits. I hadn't known about these issues before; I was new enough to this particular part of the organization that I hadn't been completely conversant with the fu required to search its bug reporting system.

Both of the times I had gotten on the plane to fly to WMF, I was carefully hand carrying disk drives containing software for which it was absolutely known that it could not be made to work. In hindsight, my chances of success were guaranteed to be zero. It had always been a suicide mission.

Here's what keeps me awake some nights. There was a set of folks at TEM who knew we were taking this development release to WMF. There was a set of folks at TEM who knew this development release would not work at WMF. Was the intersection of those two sets empty? Surely it was. What motivation could anyone have had to allow such a fiasco to occur?

But sometimes, in my darker moments, I remember that at the time TEM had an HR policy that included that enlightened system of forced ranking. And someone has to occupy those lower rating boxes. Would you pass up the opportunity to eliminate the competition for the rankings at the more rarefied altitudes?

Nevertheless, I have always preferred to believe that the WMF fiasco was simply the result of the right hand not knowing what the left hand was doing. One of the lessons I carried away from this experience is that socializing high-risk efforts widely through an organization might be a really good idea.

Ironically, WMF decided to go ahead and purchase TEM's ECP solution, the very product I had failed to get working, twice, for their main campuses, but to go with TEM's major competitor for the small satellite offices. Technically, it was actually a good solution for WMF, since it played to the strengths of both vendors. Sometimes I wonder what my life would have become if WMF had simply gone with that solution in the first place and we could have avoided both of my ill-fated site visits.

WMF itself, once firmly in the Fortune 100, ceased to exist, having immolated under the tinder of bad debt in the crucible of the financial crisis.

Many of my former colleagues are still at TEM, albeit fewer with each wave of layoffs, still working in that creaky old huge C code base that features things like a four thousand line switch statement. It's probably significantly bigger by now.

As for me, a chance to transfer from the ECP development organization to another project came along. The new project was developing a distributed Java-based communications platform using SOA/EDA with an enterprise service bus. I moved to the new project, and worked there happily for over a year, learning all sorts of new stuff, some of which I've written about in this blog. ECP was probably relieved to see me go.

But knowing that I had made a career-limiting mistake, I eventually chose to leave TEM to become self-employed. My decision surprised a lot of people, most of whom knew nothing, or only a small part, of the WMF story. It was one of the best career decisions I've ever made. I'm happier and less stressed, and I've learned more and made more money than I would have had I stayed at TEM.

Funny how these things work out. Would I ever have followed one of my life-long ambitions had the WMF fiasco not occurred? Or do we sometimes need a little creative destruction in our lives to set us on the right path?

Monday, December 17, 2012

Passion Practice Proficiency Profession

Back in the early 1980s I was a humble graduate student sitting in my office grading papers when I overheard one of the academic advisors for the computer science department talking to one of the undergraduates in the hallway. The student was saying "I really don't like programming but I'm majoring in computer science because I want to make a lot of money". This is a guy who was going to spend many years and a lot of money getting a degree so that he could be miserable in his job for the rest of his life. I'm also pretty sure he was never going to make a lot of money. I didn't understand it then, and I don't understand it now.

* * *

A few years ago I found myself at a banquet at that same university at which I was unexpectedly called upon to speak. I had to come up with something extemporaneously. Those who know me well will understand that this wasn't a big problem for me. This is more or less what I said.

"Most high technologies have a half life of about five years. Some technologies have done better than that: C, TCP/IP. Most haven't. This means that no matter what technologies you are teaching when a freshman enters the university, they will almost certainly not be what you are teaching when that senior graduates. And whatever technologies that student learns will not be what he ends up needing expertise in when he enters the workforce. Every six months or so I am expected in my job to become the world's greatest living expert in some technology that I may have never heard of beforehand. The most valuable thing I was taught during my time at this university was how to learn. Continuous, life-long learning isn't a buzzword, it's a requirement. Core skills, and learning how to learn, is what your students need. Not the latest fad. People who grasp specific technologies but can't quickly learn new ones on their own are the ones who are going to be laid off or whose jobs are going to be outsourced."

* * *

One of my favorite movies is the 1948 British film The Red Shoes. The film tells the story of the career of a ballerina and features a beautiful dance sequence based on the Hans Christian Andersen story of the same name. But the film is about ballet in the same way that the book Moby Dick is about the whaling industry in New England in the mid-1800s.

One of my favorite scenes in the movie has the aspiring ballerina chatting up a famous ballet company impresario at a dinner party, something that probably happens to him several times a day. Finally he snaps at her: "Why do you dance?" She coolly replies: "Why do you breathe?" "Why... why I don't know. I just know that I must." "That's my answer too."

* * *

Not so many years ago I found myself on a chairlift with my niece, who was getting ready to graduate from high school and was planning to go to college to major in the performing arts. To their credit, her father, a professor of engineering, and her mother, at one time a technical writer, were, to my knowledge, never anything but supportive of her career choice. But given the professions of her parents, and the fact that her older brother was graduating with a degree in mechanical engineering, she was a little nervous. While her mother and my wife were getting caught up with the sister thing in the next chair back, this is more or less what I told her.

"To be happy in any profession, you have to be successful at it. To be successful, you have to be proficient at it. To be proficient at it, you have to have spent thousands of hours practicing at it, no matter what your natural skill at it may be. You have to be passionate about it, otherwise you'll never spend enough time at practice. You have to love it so much, you can't imagine not doing it. So much, you'd do it anyway even if you didn't get paid to do it. There is no point in choosing a career in anything that you don't love to do that much. No point in anything that you aren't compelled to be better at than anyone."

* * *

The past six years or so have had some challenges. I lost both my mom and dad, although that wasn't a big surprise: mom was 86 when she died, dad was 94. I hope I do as well. I lost an old friend about my age to a stroke. Another to cancer. Two colleagues to separate vehicular accidents. Three former colleagues to suicide. One friend to murder. It was after the sudden and unexpected death of one of those people that I went home and told Mrs. Overclock: "If I go to work tomorrow and don't come back, I just want you to know, it's all been good."

I don't know how I was lucky enough to end up in a profession that I can't imagine not doing. That I love so much I practice it even when I'm not being paid to do so. That I managed to make a decent living from. And that gave me an opportunity to routinely work with people smarter than myself and from whom I could learn.

But always with a work-life balance better than that of a certain aspiring ballerina.

Saturday, December 08, 2012

Arduino Due Data Types

Just the other day my Arduino Due arrived from one of my favorite suppliers, nearby SparkFun Electronics, based in Boulder, Colorado. Unlike the Arduino Uno, which uses an 8-bit Atmel AVR ATmega328 microcontroller, the Due uses an Atmel AT91SAM3X8E microcontroller with a 32-bit ARM Cortex-M3 core. But like those AVR-based Arduinos, the Due's processor is a Harvard architecture, different from many other ARM-based processors, which are von Neumann architectures. The Due has a whopping 512KB of flash for instructions and 96KB of SRAM for data.

The first order of business was, of course, to run my little Arduino sketch that prints the sizes of all the data types. This is my version of the classic "Hello, World!" program. I like it because it not only verifies that the tool chain and platform software all work, and serves as a basic sanity test for the board, but also tells me something useful about the underlying hardware target.

#include <stdint.h>
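// <stdint.h> supplies the fixed-width integer types (int8_t, uint32_t, and so on) whose sizes are printed below.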

void setup() {
  Serial.begin(115200);
}

void loop() {
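  // Report the size in bytes of each built-in and fixed-width type, then pause and repeat.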
  Serial.print("sizeof(byte)="); Serial.println(sizeof(byte));
  Serial.print("sizeof(char)="); Serial.println(sizeof(char));
  Serial.print("sizeof(short)="); Serial.println(sizeof(short));
  Serial.print("sizeof(int)="); Serial.println(sizeof(int));
  Serial.print("sizeof(long)="); Serial.println(sizeof(long));
  Serial.print("sizeof(long long)="); Serial.println(sizeof(long long));
  Serial.print("sizeof(bool)="); Serial.println(sizeof(bool));
  Serial.print("sizeof(boolean)="); Serial.println(sizeof(boolean));
  Serial.print("sizeof(float)="); Serial.println(sizeof(float));
  Serial.print("sizeof(double)="); Serial.println(sizeof(double));
  Serial.print("sizeof(int8_t)="); Serial.println(sizeof(int8_t));
  Serial.print("sizeof(int16_t)="); Serial.println(sizeof(int16_t));
  Serial.print("sizeof(int32_t)="); Serial.println(sizeof(int32_t));
  Serial.print("sizeof(int64_t)="); Serial.println(sizeof(int64_t));
  Serial.print("sizeof(uint8_t)="); Serial.println(sizeof(uint8_t));
  Serial.print("sizeof(uint16_t)="); Serial.println(sizeof(uint16_t));
  Serial.print("sizeof(uint32_t)="); Serial.println(sizeof(uint32_t));
  Serial.print("sizeof(uint64_t)="); Serial.println(sizeof(uint64_t));
  Serial.print("sizeof(char*)="); Serial.println(sizeof(char*));
  Serial.print("sizeof(int*)="); Serial.println(sizeof(int*));
  Serial.print("sizeof(long*)="); Serial.println(sizeof(long*));
  Serial.print("sizeof(float*)="); Serial.println(sizeof(float*));
  Serial.print("sizeof(double*)="); Serial.println(sizeof(double*));
  Serial.print("sizeof(void*)="); Serial.println(sizeof(void*));
  Serial.println();
  delay(5000);
}

Here are the results. You can compare these to those of the Arduino Uno when I ran a similar program on it.

sizeof(byte)=1
sizeof(char)=1
sizeof(short)=2
sizeof(int)=4
sizeof(long)=4
sizeof(long long)=8
sizeof(bool)=1
sizeof(boolean)=1
sizeof(float)=4
sizeof(double)=8
sizeof(int8_t)=1
sizeof(int16_t)=2
sizeof(int32_t)=4
sizeof(int64_t)=8
sizeof(uint8_t)=1
sizeof(uint16_t)=2
sizeof(uint32_t)=4
sizeof(uint64_t)=8
sizeof(char*)=4
sizeof(int*)=4
sizeof(long*)=4
sizeof(float*)=4
sizeof(double*)=4
sizeof(void*)=4