In "Is US Economic Growth Over?" economist Robert Gordon recently argues that the U.S. and the rest of the developed world is at an end of a third, and smaller, industrial revolution.
The analysis in his paper links periods of slow and rapid growth to the timing of the three industrial revolutions:
- IR #1 (steam, railroads) from 1750 to 1830;
- IR #2 (electricity, internal combustion engine, running water, indoor toilets, communications, entertainment, chemicals, petroleum) from 1870 to 1900; and
- IR #3 (computers, the web, mobile phones) from 1960 to present.
It provides evidence that IR #2 was more important than the others and was largely responsible for 80 years of relatively rapid productivity growth between 1890 and 1972.
Once the spin-off inventions from IR #2 (airplanes, air conditioning, interstate highways) had run their course, productivity growth during 1972-96 was much slower than before. In contrast, IR #3 created only a short-lived growth revival between 1996 and 2004. Many of the original and spin-off inventions of IR #2 could happen only once: urbanization, transportation speed, the freedom of women from the drudgery of carrying tons of water per year, and the role of central heating and air conditioning in achieving a year-round constant temperature.

In "Is Growth Over?" economist Paul Krugman ponders this in light of the kind of non-scarcity economy familiar to anyone who knows the idea of the technological singularity, in which robots replace most manual laborers and artificial intelligences (strong or weak) replace most information workers. As he points out, if all labor is done by machines, you can raise production per capita to any level you want, provided the robots and AIs are not part of what gets counted in the capita. (I should mention that Krugman is a science fiction fan himself and is certainly familiar with the writings of folks like Vernor Vinge and Charles Stross. I saw Stross interview Krugman in person at the World Science Fiction Convention in Montreal in 2009. Where else do you go to hear Nobel laureates speak?)
Krugman doesn't explain, however, where the raw materials for this production will come from. In his book The Great Stagnation, economist Tyler Cowen has suggested that economic growth, particularly in the United States, was due to our having taken advantage of low-hanging fruit in the form of things like natural resources, cheap energy, and education. Now that those resources have been more or less consumed, growth may become much more difficult.
Also recently, on the science fiction blog io9, George Dvorsky has written about The Great Filter, one of the possible explanations for the Fermi Paradox. Long-time readers of this blog, and anyone who knows me well, will recall that I find the Fermi Paradox troubling. The paradox is this: given the vast size of space, the vast span of time, and the vast numbers of galaxies, stars, and planets, at least some fraction of which must be habitable, why haven't we heard any radio signals from extraterrestrial sources? It hasn't been for lack of trying. The Great Filter is the hypothesis that there is some fundamental and insurmountable barrier to development that all civilizations come up against.
Another possible explanation for what has been called The Great Silence is that mankind really does hold a privileged position in the Universe. This can be read as a pro-religion argument, but it needn't be. It is possible that life is much, much rarer than we have believed. There is actually some evidence to suggest that the Earth may occupy a special region of space in which the physical constants permit life to thrive. (I've written about this in The Fine Structure Constant Isn't.)
Unfortunately, the explanation I find the most compelling (and like the least) is this: the Prisoner's Dilemma in game theory suggests that the dominant strategy for spacefaring civilizations is to wipe one another out before they themselves are wiped out by the competition. I call this the "Borg Strategy" (although rather than assimilation, I find weaponized von Neumann machines a more credible mechanism). Compare this to the cooperative strategy that is optimal when the game is played repeatedly, which I call the "United Federation of Planets Strategy". (I've written about this in The Prisoner's Dilemma, The Fermi Paradox, and War Games.)
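To make the game theory a little more concrete, here is a minimal sketch in Python. The payoff numbers are purely illustrative assumptions of mine, but any payoffs with the classic Prisoner's Dilemma ordering lead to the same conclusion: defection is the best response no matter what the other player does.

```python
# A minimal one-shot Prisoner's Dilemma sketch. The payoff values are
# illustrative assumptions; any payoffs with T > R > P > S produce the
# same dominant-strategy result.
PAYOFF = {
    # (my_move, their_move): my_payoff
    ("cooperate", "cooperate"): 3,   # R: reward for mutual cooperation
    ("cooperate", "defect"):    0,   # S: sucker's payoff
    ("defect",    "cooperate"): 5,   # T: temptation to defect
    ("defect",    "defect"):    1,   # P: punishment for mutual defection
}

def best_response(their_move):
    """Return the move that maximizes my payoff against a fixed opponent move."""
    return max(("cooperate", "defect"), key=lambda mine: PAYOFF[(mine, their_move)])

# Defection is the best response whatever the other player does, which is
# what makes it the dominant strategy in the one-shot game.
for their_move in ("cooperate", "defect"):
    print("if they", their_move, "my best response is", best_response(their_move))
```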
In my professional work, particularly with large distributed systems and supercomputing, I have frequently seen issues with scalability: it is often difficult to scale up performance with problem size. Cloud providers like Google and Amazon.com have addressed many problems we once thought intractable, as has the application of massively parallel processing to many traditional supercomputer applications. But the ugly truth is that cloud/MPP really only solves problems that are "embarrassingly parallel", that is, problems that naturally break up into many small and mostly independent parts. (I've written about this in Post-Modern Deck Construction.)
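For readers who haven't run into the term, here is a tiny sketch of what "embarrassingly parallel" looks like in practice. It's a made-up toy workload, not anything from my professional work, but it shows the defining property: each piece of work is completely independent, so it can be farmed out to a pool of workers with no coordination at all.

```python
# A toy embarrassingly parallel workload: each input can be processed
# completely independently of the others, so the work divides cleanly
# across a pool of worker processes with no inter-task communication.
from multiprocessing import Pool

def work(n):
    # Stand-in for an independent unit of work (no shared state needed).
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    inputs = range(1, 1001)
    with Pool() as pool:
        results = pool.map(work, inputs)   # scatter the independent tasks
    print(len(results), "independent results computed")
```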
Many problems will remain intractable because they fall into the NP category: the only algorithms known to solve them run in "Non-Deterministic Polynomial" time (thanks to my old friend David Hemmendinger for that correction), which in practice means that the best algorithms we actually have for them scale, for example, exponentially with problem size. There are lots of problems in the NP category. Lucky for all of us, encryption and decryption are in the P category, while no polynomial-time algorithm for cryptographic code breaking is (so far) known. True, encryption becomes easier to break as processing power increases, but adding a few more bits to the key increases the work necessary to crack codes exponentially.
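The arithmetic behind that last point is worth a quick sketch: a brute-force attacker has to try, on average, half the key space, and every additional key bit doubles that space. The key sizes below are just illustrative.

```python
# Brute-force key search scales exponentially with key length: each
# additional bit doubles the key space, and on average an attacker must
# try half of it. The key sizes below are purely illustrative.
for bits in (40, 56, 64, 128, 256):
    expected_trials = 2 ** (bits - 1)      # half the key space, on average
    print(f"{bits:3d}-bit key: about {float(expected_trials):.2e} expected trials")
```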
What problems in economics are in fact computationally NP? For example, it could be that the strategies necessary to manage an economy more or less optimally are fundamentally NP. This is one of the arguments that pro-free-market people give for free markets, in which market forces encourage people to "do the right thing" independent of any central management. It really is a kind of crowd-sourced economic management system.
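For what it's worth, here is a toy sketch of the kind of combinatorial blow-up I have in mind; the "projects" and numbers are entirely made up. Even a trivially small resource-allocation problem, solved by brute force, has to consider every subset of the candidates, and the number of subsets doubles with every candidate added.

```python
# A toy resource-allocation problem solved by brute force: choose the
# subset of candidate projects with the greatest total value that fits
# within a fixed budget (the 0/1 knapsack problem, which is NP-hard).
# The brute-force search examines all 2**n subsets, so its cost doubles
# with every project added. All names and numbers here are made up.
from itertools import combinations

projects = [("roads", 4, 7), ("schools", 3, 6), ("ports", 5, 9), ("grid", 2, 4)]
budget = 8

best_value, best_choice = 0, ()
for r in range(len(projects) + 1):
    for choice in combinations(projects, r):          # every subset: 2**n of them
        cost = sum(c for _, c, _ in choice)
        value = sum(v for _, _, v in choice)
        if cost <= budget and value > best_value:
            best_value, best_choice = value, choice

print("best value", best_value, "using", [name for name, _, _ in best_choice])
```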
But suppose that there's a limit - in terms of computation or time or both - to how well an economy can work as a function of the number of actors (people, companies) in the economy relative to its available resources. Maybe there's some fundamental threshold past which, if a civilization hasn't achieved interstellar travel, it becomes impossible for it ever to do so. Compare this to the history of the Pacific islanders who became trapped on Easter Island when they cut down the last tree: no more big ocean-going canoes, and, as a side effect of the deforestation, ecological collapse.
This doesn't really need to be an NP-class problem. It may just be that if a civilization doesn't make getting off of its planet a global priority, the time comes when it no longer has the resources necessary for an interplanetary diaspora or even for interstellar communication.
Henry Kissinger once famously said "Every civilization that has ever existed has ultimately collapsed." Is it possible that this is the result of fundamentally non-scalable economic principles, and is in part the explanation for the Fermi Paradox?
Update (2013-01-13)
I just finished Stephen Webb's If the Universe Is Teeming with Aliens... WHERE IS EVERYBODY? Fifty Solutions to the Fermi Paradox and the Problem of Extraterrestrial Life (Springer-Verlag, 2002). Webb presents forty-nine explanations that have been put forth by scientists, philosophers, and others, plus one of his own, at about five pages apiece. Besides being a good survey of this topic, it's also a good layman's introduction to a number of related science topics like plate tectonics, planetary formation, and the neurological support for language acquisition and processing. I recommend it.
3 comments:
Well Chip, this posting is certainly rife with interesting tidbits that encourage responses.
Some random ones:
1. Like many arguments, the free market vs. centrally controlled economy discussion is not really a two-sided debate. Each side argues against the extreme example of the other type. There are many cases where you cannot control everything well, but a few well-chosen curbs and interventions can make a serious improvement. Consider herding sheep with a bellwether ewe.
2. It has been a while since I reviewed the Fermi Paradox, but how does it address the fact that as we look further away we look further back in time? Since advanced civilizations take time to develop, there is a minimum time before they would communicate. Perhaps there is a time when they stop communicating, either for safety or for economy? Our RF noise has only gone 150 light-years or so from us.
Ken Howard
1. I'm not proposing a position on either side of the free market vs. centrally managed economy debate. My personal opinion is that economies of any size cannot be centrally managed, so a combination of free markets plus regulation to deal with externalities is necessary (which is basically what we do today). I've written about the issue of externalities and their relation to measurement dysfunction elsewhere in this blog.
2. I chose not to go into this much detail, but for sure you have to take the light cone geometry into account. But still, the Great Silence suggests that there is no one out there transmitting at the right distance and time intervals such that we can detect them. Safety: that's basically my game theory argument; we should shut the hell up before we attract the wrong kind of attention. Economy: Nicholas Negroponte has observed what others have called the "Negroponte Switch", in which information we have historically broadcast we now transmit over wire (television), whereas information we have historically transmitted over wire we now broadcast (telephone). The latter case tends to use much lower power. This may indeed also account for the Great Silence. I much prefer this latter explanation.
Long-time friend, former colleague, and computer scientist David Hemmendinger kindly gave me permission to post the comments he made by email:
QUOTE
Your Dec 29 blog entry refers to NP problems as "non-polynomial", which isn't quite right. "NP" stands for "non-deterministic polynomial" [time] -- the class of problems that can be solved in polynomial time by a non-deterministic Turing machine, which turns out to be equivalent to the class of problems that may not have a polynomial-time algorithm (P) but whose solutions can be checked in polynomial time. Thus P is clearly a subset of NP; anything soluble in P-time can certainly be checked in P-time. The big question is whether the converse is true, which would make P=NP. But until that is answered, we don't know whether problems for which we have only exponentially-complex algorithms so far actually can't have P-time ones (you probably know all this).
UNQUOTE
Always good to hear from you, David.
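To illustrate David's point about solving versus checking, here is a small sketch of my own using the subset-sum problem (the numbers are arbitrary): the brute-force search for a solution examines an exponential number of subsets, but verifying a claimed solution takes just a single pass over it.

```python
# An illustration of solving versus checking, using the subset-sum
# problem (which is in NP): finding a subset of numbers that adds up to
# a target appears to require searching an exponential number of
# subsets, but checking a proposed answer takes only a quick addition.
# The numbers below are made up.
from itertools import combinations

numbers = [13, 7, 42, 5, 23, 8, 19, 31]
target = 50

def solve(numbers, target):
    """Brute-force search: examines up to 2**len(numbers) subsets."""
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return subset
    return None

def verify(subset, target):
    """Checking a claimed solution is trivially fast (polynomial time)."""
    return sum(subset) == target

candidate = solve(numbers, target)
print("found:", candidate, "verified:", verify(candidate, target))
```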