Monday, September 16, 2024

When The Minimum Viable Product Is Too Minimal And Not Viable

All technology product development is fractally iterative, whether you want it to be or not. The agile development process at least recognizes this. But agile, and its idea of a Minimum Viable Product (MVP), replaces the waterfall process's up-front requirements phase - which consumes a lot of thought, research, and consensus ahead of time - with a competent product manager and close proximity to the customer. My long professional experience working in both waterfall and agile processes suggests that this can work. Except when it doesn't.

2024 BMW R1250GS Adventure: I-25 and CO-60 near Johnstown Colorado

This past spring I was the victim of a Minimum Viable Product strategy when I bought BMW Motorrad's latest GPS device, the Connected Ride Navigator (CRN-1), for my 2024 BMW R1250GS Adventure, my fourth BMW motorcycle. I spent about US$800 on the CRN-1, and it was a disaster. Prior BMW Motorrad navigators were built by Garmin, and to be fair, had their own hardware issues. But this one was a BMW product, reportedly with TomTom maps. The hardware seemed pretty solid, but it was as if the software had been designed and written by someone who had never used a navigator (BMW's or otherwise), and had never ridden a motorcycle.


Besides having lots of professional experience writing code to use GPS devices and OpenStreetMap, I had used an old Garmin standalone unit on many car trips, and my Subaru WRX has an in-dash TomTom. My basic navigation needs are simple. I want to know what road I'm on. I want to know what the next cross street is. I want to know what direction I'm going. Basic stuff like that. The CRN-1 couldn't do any of that. On a recent trip through northern New Mexico, the screen typically was all gray with a single green line - presumably indicating the road - on it; no labels, no other information. And when there were labels, the font was so tiny as to be unreadable to my old eyes, even with my progressive spectacles.

Here's the MVP thing: since I bought the CRN-1, there have been two software updates, and with each one the device has gotten a little better. But after the New Mexico debacle I had already bought a Garmin Zūmo XT2 navigator, a motorcycle-specific model from BMW's now-competitor, for about US$500. Since I had to modify the navigator cradle on the motorcycle for the XT2, I am unlikely to ever go back.

Sure wish I hadn't spent the money on the CRN-1. You'd think I'd know better than to buy the first release of any tech product. After all, that's why I bought the R1250GS instead of its R1300GS replacement. I'm used to BMW's motorcycle products being well designed and overpriced; the new BMW navigator got one of those right. The MVP CRN-1 was too little and too late.

Thursday, September 12, 2024

Large Language Models and the Fermi Paradox

(I originally wrote this as a comment on LinkedIn, then turned the comment into a post on LinkedIn, then into a post for my techie and science fictional friends on Facebook, and finally turned it into a blog article here. I've said all this before, but it bears repeating.)

The destruction of the talent pipeline by the use of AI for work normally done by interns and entry-level employees not only threatens how humans fundamentally learn, but leads to AI "eating its own seed corn". As senior experts leave the work force, there will be no one left to generate the enormous amount - terabytes - of content necessary to train the Large Language Models.

Because human-generated content will generally be perceived to be more valuable than machine-generated content, humans using AI to generate content will be highly incentivized to not identify AI-generated content as such. More and more AI-generated content will be swept up along with the gradually diminishing pool of human content to use as training data, in a kind of feedback loop leading to "model collapse", in which the AI produces nonsense.

A former boss of mine, back in my own U.S. national lab days, once wisely remarked that this is why the U.S. Department of Energy maintains national labs: experienced Ph.D. physicists take a really long time to make. And when you need them, you need them right now. Not having them when you need them can result in an existential crisis. So you have to maintain a talent pipeline that keeps churning them out.

It takes generations in human-time to refill the talent pipeline and start making more senior experts, no matter what the domain of expertise. Once we go down this path, there is no quick and easy fix.

The lobe of my brain that goes active at science fiction conventions suggests that this anti-pattern is one possible explanation for the Fermi Paradox.

Tuesday, August 20, 2024

A Science Fictional Idea: Noise and the Fermi Paradox

Today, Sabine Hossenfelder, "my favorite quantum physicist", had an article about an academic paper in which the author proposed that aliens using really advanced technologies might be able to modulate the quantum properties of photons in order to carry information. (Note: this is not the SFal idea of using entangled particles to communicate faster than light, which - sorry - is not possible; there is no way to modulate the effect to carry information.) Such a modulation scheme could carry a lot more information than our current schemes that modulate properties like amplitude, frequency, phase, etc. A communication beam with quantum modulation would have to be extremely narrowly focused, lest it run into some other matter which would cause the quantum properties to decohere, losing all the information content.

The author wrote that a possible approach would be to use frequencies in the infrared, which would require antennas on each end about one hundred kilometers across, which is, actually, not a completely crazy idea. Dr. Hossenfelder also mentioned that because of the decoherence issue, the aliens would be careful NOT to aim the beam near any planet, like, you know, ours. But even if we did receive it, we would have no clue how to demodulate it. I got to thinking about this. (That's why I follow Dr. Hossenfelder, and even support her work on Patreon.)

I recently spent three days at the NIST Time & Frequency Seminar held at the National Institute of Standards and Technology (NIST) Boulder laboratories. A big part of that seminar was demonstrations of how to measure and characterize noise in precision frequency sources - sources like the NIST-F2 cesium fountain clock, shown below, which is the principal frequency reference for UTC(NIST), the United States' contribution to the international definition of Coordinated Universal Time or UTC. This noise measurement and characterization is not completely removed from measuring noise in communications systems, which, by the way, depend on precision frequency references to work. (A big Thank You to Dr. Jeffrey Sherman, below, for the tour!)
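The workhorse statistic for characterizing that kind of noise - I'm assuming it featured in the demonstrations, since it's the canonical tool of time-and-frequency metrology - is the Allan deviation, which, unlike an ordinary standard deviation, converges even when an oscillator's frequency slowly drifts. A minimal non-overlapping sketch:

```python
import math

def allan_deviation(y: list[float]) -> float:
    """Non-overlapping Allan deviation of fractional-frequency samples y[i],
    each averaged over the same interval tau:
    sqrt( mean of (y[i+1] - y[i])^2, divided by 2 )."""
    diffs = [(b - a) ** 2 for a, b in zip(y, y[1:])]
    return math.sqrt(sum(diffs) / (2 * len(diffs)))

# A perfectly stable oscillator has zero Allan deviation...
print(allan_deviation([1e-12] * 8))  # 0.0
# ...while an oscillator whose frequency jumps around does not.
print(allan_deviation([0.0, 1e-9, 0.0, 1e-9]))
```

Plotting this statistic against the averaging interval tau is what lets metrologists tell white noise, flicker noise, and random-walk noise apart.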

[Photo: the NIST-F2 cesium fountain clock, with Dr. Jeffrey Sherman]

Along the commuter train line from our neighborhood to downtown Denver there is an old AT&T Long Lines tower with the giant microwave horn antennas that used to be the backbone of the long distance telephone system. This was before fiber optic cables were run along every railroad track - because the railroads owned the rights of way (an effort which gave the telecommunications company SPRINT its name: "Southern Pacific Railroad Internal Networking Telephony").

The Spousal Unit is so very tired of me telling the story - which I do virtually every time we ride the train (sorry) - of the two Bell Labs engineers who were tasked with figuring out and eliminating the source of the noise in the early models of these microwave antennas (which were big enough that you could easily stand up inside them). Alas, they ultimately weren't able to eliminate it: they determined the noise was the Cosmic Microwave Background radiation that was the result of the Big Bang. They were picking up the noise from the birth of the Universe. As a consolation, however, they did win a Nobel Prize in physics. And gave a huge boost to the field of radio astronomy (which, fittingly, had been founded by another Bell Labs engineer, Karl Jansky, also chasing down mysterious noise). (I eventually took a little motorcycle ride and found that Long Lines tower, shown below.)

[Photo: the AT&T Long Lines microwave tower]

Noise exists in every communication link, whether it's radio, wire, optical, etc. You can't get rid of it completely. Eventually maybe you give up and just declare it's "cosmic background", or "thermal noise", or "electrical noise from other equipment in the room". But noise in communication systems is no small problem; given enough of it, it can jam your GPS, your WiFi, your mobile phone, etc., or just make your vintage vinyl albums sound bad.
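Noise doesn't just degrade a link, it puts a hard ceiling on it: the Shannon-Hartley theorem says the maximum error-free bit rate is C = B log2(1 + S/N). A minimal sketch (the 20 MHz bandwidth and 20 dB signal-to-noise ratio are just illustrative numbers, roughly a WiFi channel):

```python
import math

def shannon_capacity(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon-Hartley limit: the maximum error-free bit rate over a noisy
    channel of the given bandwidth and (linear, not dB) signal-to-noise ratio."""
    return bandwidth_hz * math.log2(1.0 + snr_linear)

# A 20 MHz channel at 20 dB SNR (a factor of 100 in linear terms).
c = shannon_capacity(20e6, 100.0)
print(f"{c / 1e6:.1f} Mbit/s")  # about 133 Mbit/s

# Ten times the noise (10 dB SNR) costs roughly half the capacity.
print(f"{shannon_capacity(20e6, 10.0) / 1e6:.1f} Mbit/s")
```

The logarithm is why jamming works: a jammer doesn't have to obliterate your signal, only raise the noise floor enough to squeeze the capacity below what your protocol needs.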

I very dimly recall a result from Information Theory that says something like: the output of a theoretically optimal data compression algorithm is indistinguishable from noise. That is: there is no statistical test that can tell you whether the data stream you're looking at is just noise, or is optimally compressed data. (That's not quite the same as saying, however, that it is random.)
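This is easy to see empirically: the byte-level Shannon entropy of well-compressed data approaches that of pure noise, while redundant human text sits far below it. A minimal sketch using Python's standard library (the repeated pangram is just a stand-in for redundant human-generated text):

```python
import lzma
import math
import os
from collections import Counter

def byte_entropy(data: bytes) -> float:
    """Shannon entropy of the byte-value distribution, in bits per byte.
    Uniformly random bytes approach the maximum of 8.0 bits per byte."""
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in Counter(data).values())

# A highly redundant "message": English-like text repeated many times.
plain = b"the quick brown fox jumps over the lazy dog " * 2000
compressed = lzma.compress(plain)
noise = os.urandom(100_000)

print(f"plain text: {byte_entropy(plain):.2f} bits/byte")  # well under 8
print(f"compressed: {byte_entropy(compressed):.2f} bits/byte")
print(f"raw noise:  {byte_entropy(noise):.2f} bits/byte")  # very close to 8
```

A simple entropy measure like this can still distinguish LZMA output from noise by its file headers and finite length; the information-theoretic claim is about the limit, where no statistical test works at all.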

What if the aliens have a nearly optimal compression algorithm? (A perfectly optimal one is impossible.) Sending data from one star system to another is bound to be really expensive, not to mention take a long time. So they would be highly incentivized to use such an algorithm. What if part of the noise we see and hear and receive every day in our own radio communications systems is really alien data transmissions?

We could be awash in extraterrestrial data communications and not even know it.