Monday, April 08, 2024

Ancient History

I bought a four terabyte (4TB) SSD the other day at the local big box store. A Samsung T7 Shield (which I think just means it comes with a rubberlike case around it). It was substantially discounted, probably because the new T9 model is out. Easily fits in my shirt pocket. Hooked it up via the included USB cable to our network-attached storage box, and I'm now using it to automatically back up two Mac laptops and a Mac desktop at the Palatial Overclock Estate.

Mind blown.

Because I am ancient almost beyond belief - it's a miracle I'm still alive, especially considering my hobbies - I remember thirty years ago helping to write a proposal to DARPA to build a one terabyte (1TB) hierarchical storage system that would have included rotating disks and a robotic tape library. It would have taken up an entire room. Can't blame them for not funding it. Someone smarter than me (which could have been just about anyone) probably saw this all coming.

That same organization, the National Center for Atmospheric Research (NCAR) in Boulder, Colorado, had the only CRAY-3 supercomputer outside of Seymour Cray's Cray Computer Corporation. Today, your Raspberry Pi Single Board Computer (SBC) - and not even the latest Pi model 5 - has more horsepower than that CRAY-3. And the SBC would fit in your shirt pocket as well.

Don't bet against Moore's Law.

Although, as I am always quick to point out, what exactly Moore's Law implies has changed over the past few years. Which is why I was tickled when someone who is using my Diminuto C systems programming library passed along a command line that builds the software by running make across sixteen parallel threads of execution - taking advantage of the trend towards multicore processors now that it has become difficult to make individual processors faster. Between much of the build process being I/O bound, and the Raspberry Pi 5 having four processor cores, this approach really speeds up the build.

CRAY-3

That's me, about thirty years ago, leaning against the world's most expensive aquarium; the CRAY-3 logic modules were visible under the transparent top, fully immersed in Fluorinert.

Wednesday, March 20, 2024

Converting GPIO from the legacy sysfs ABI to the ioctl ABI in Diminuto and Hazer

It could be that no one but me is using my "Hazer" GNSS library and toolkit, or the "Diminuto" C-based systems programming library on which it depends. But just in case: I'm close to finishing testing of the develop branch of both repos - both of which have some major changes to how General Purpose Input/Output (GPIO), the generic term for controlling digital input and output pins in software, is handled - and to merging develop back into the master branch.

This was all motivated by my being one of the lucky few to get a backordered Raspberry Pi 5, and putting the latest version of Raspberry Pi OS, based on Debian "bookworm" and Linux 6.1, on it, only to find when unit and functional testing my code that the deprecated sysfs-based GPIO ABI no longer worked. This wasn't a big surprise - I had read at least two years ago that the old ABI was being phased out in favor of a new ioctl-based ABI. My code makes heavy use of GPIO for a lot of my stuff, e.g. interrupts from real-time sensors, the One Pulse Per Second (1PPS) signal from GNSS receivers, status LEDs, etc. So it was finally time to bite the bullet and replace all the places where I used the sysfs-based Diminuto Pin feature (diminuto_pin) with a new feature built on the ioctl ABI. Hence the Diminuto Line feature (diminuto_line), borrowing a term from the new ABI.
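For the curious, here is roughly the shape of the new ABI with all of the Diminuto wrapping stripped away: you open the GPIO character device, request one or more lines via an ioctl, and then do everything else through the file descriptor that the request hands back. This is just a minimal sketch of the raw kernel interface, not the diminuto_line API, and the device path and line offset below are placeholders rather than anything from my actual test fixture.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/gpio.h>

int main(void) {
    /* Open the GPIO character device (placeholder path). */
    int chip = open("/dev/gpiochip0", O_RDONLY);
    if (chip < 0) { perror("open"); return 1; }

    /* Request a single line (placeholder offset) as an input with
     * rising edge detection, the way a 1PPS pin might be configured. */
    struct gpio_v2_line_request request;
    memset(&request, 0, sizeof(request));
    request.offsets[0] = 18;
    request.num_lines = 1;
    request.config.flags = GPIO_V2_LINE_FLAG_INPUT | GPIO_V2_LINE_FLAG_EDGE_RISING;
    strncpy(request.consumer, "sketch", sizeof(request.consumer) - 1);
    if (ioctl(chip, GPIO_V2_GET_LINE_IOCTL, &request) < 0) { perror("ioctl"); return 2; }

    /* Read the current level of the line via the fd the request returned. */
    struct gpio_v2_line_values values;
    memset(&values, 0, sizeof(values));
    values.mask = 1; /* bit 0 selects offsets[0] */
    if (ioctl(request.fd, GPIO_V2_LINE_GET_VALUES_IOCTL, &values) < 0) { perror("ioctl"); return 3; }
    printf("level %d\n", (int)(values.bits & 1));

    /* Block until the next rising edge; the kernel delivers a fixed-size,
     * already-timestamped event record for every edge. */
    struct gpio_v2_line_event event;
    if (read(request.fd, &event, sizeof(event)) == sizeof(event)) {
        printf("rising edge at %llu ns\n", (unsigned long long)event.timestamp_ns);
    }

    (void)close(request.fd);
    (void)close(chip);
    return 0;
}

Everything after the request happens on that returned file descriptor, instead of by reading and writing magic files under /sys/class/gpio, which among other things means the kernel releases the line when the descriptor is closed.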

Line is now used in place of Pin in all of the Diminuto functional tests that run on hardware test fixtures I wired up many years ago just for this purpose, and all of those tests work. The Hazer gpstool utility has similarly been converted to use Line instead of Pin, and has been tested with an Ardusimple board using a u-blox UBX-NEO-F10T GNSS receiver.

IMG_5717

(That's a Pi 4 on the left connected to my test fixture, and a Pi 5 on the right connected to the GNSS receiver.)

Two complaints.

[1] The new ABI is woefully underdocumented. However, I found some code examples in the Linux kernel source repo under tools/gpio that were essential to my understanding. (I chose not to use the new-ish libgpiod for my work, for reasons, but that is a story for another time. I have no reason to believe that it's not perfectly fine to use.)

[2] The way the ioctl GPIO driver is implemented on older versus newer Raspberry Pi OS versions makes it difficult - I am tempted to say impossible, but maybe I'm just not that smart - to write code that easily runs on either platform using the new ABI. Specifically, the GPIO device drivers in the two OS versions use different symbolic naming schemes, making it impossible for application code to portably select the correct GPIO device and line offset on both platforms. But maybe I'm just missing something. (I hope so.)
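For what it's worth, the new ABI does let you enumerate the gpiochip devices and query each line's symbolic name, which is how I would expect portable code to try to cope with this; the catch, per the complaint above, is that the names themselves differ between OS releases, so you still have to know which name each release uses. Here is a sketch of that kind of lookup; the limit of eight gpiochip devices is just an assumption for illustration.

#include <fcntl.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/gpio.h>

/* Scan /dev/gpiochip0 through /dev/gpiochip7 for a line with the given
 * symbolic name; on success fill in the device path and line offset and
 * return 0, otherwise return -1. */
int find_line_by_name(const char * name, char * path, size_t size, unsigned int * offset) {
    int chip;
    for (chip = 0; chip < 8; ++chip) {
        snprintf(path, size, "/dev/gpiochip%d", chip);
        int fd = open(path, O_RDONLY);
        if (fd < 0) { continue; }
        struct gpiochip_info info;
        memset(&info, 0, sizeof(info));
        if (ioctl(fd, GPIO_GET_CHIPINFO_IOCTL, &info) == 0) {
            unsigned int line;
            for (line = 0; line < info.lines; ++line) {
                struct gpio_v2_line_info linfo;
                memset(&linfo, 0, sizeof(linfo));
                linfo.offset = line;
                if ((ioctl(fd, GPIO_V2_GET_LINEINFO_IOCTL, &linfo) == 0) && (strcmp(linfo.name, name) == 0)) {
                    *offset = line;
                    (void)close(fd);
                    return 0;
                }
            }
        }
        (void)close(fd);
    }
    return -1;
}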

I like the new ioctl ABI, and expect to use it exclusively moving forward, even though this will orphan Pis I have that might run older versions of the OS. (I think I have at least one example of every Pi version ever made sitting around the Palatial Overclock Estate. Ten of them run 24x7 in roles such as a web server, an Open Street Map server, a Differential GNSS base station, and an NTP server with a cesium atomic clock for holdover.) I have tagged the last version of both repos that still use the sysfs ABI.

That's it.

Update (2024-03-23)

I merged the develop branch into the master branch this morning. Both the Diminuto build and the Hazer build passed sanity and functional testing (and I'm currently running the long-running "geologic" unit test suite against Diminuto). I had tagged the old master branch in both repos with the name sysfsgpio in case I ever need to build them, but I don't anticipate any further development of the old code.

Thursday, March 07, 2024

AI on the Battlefield

The name, "Tactical Intelligence Targeting Access Node" (TITAN), is pretty clever. Peter Thiel's Denver-Based Palantir Technologies, a software-driven data analytics company in the defense and intelligence domain, just won a US$178M contract to build an AI-driven mobile battlefield sensor fusion platform. From Palantir's home page: "AI-Powered Operations, For Every Decision". In this context, TITAN consumes a huge amount of data from remote sensors and tells soldiers what to destroy.

Cool. And absolutely necessary. Soldiers on the battlefield are inundated with information, more than humans can assimilate in the time they have. And even if we didn't build it, our peer adversaries surely will (or more likely, are).

This is the kind of neural network-based AI that's going to mistake a commercial airliner for an enemy bomber and recommend that it be shot down, even if its own cyber-finger isn't on the trigger. Because time is short, and if no other information is forthcoming, someone will pull that trigger.

In the inevitable Congressional investigation that follows, military officers, company executives, and AI scientists and engineers will be forced to admit that they have no idea why the AI made that mistake, and in fact that they can't know, because no one can. When an AI has over a trillion - not an exaggeration - parameters in its learning model, no one can fully explain how its Deep Learning arrives at a particular decision.

Seriously, this is a real problem in the AI field right now. AIs do things their own developers did not anticipate, and cannot explain.

Accidental commercial airliner shoot-downs are so common that they have their own Wikipedia page. And it's just a matter of time before the cyber-finger is on the trigger, because it can respond so much more quickly than its overwhelmed human operators.

The worst thing that could happen is for TITAN to be an unqualified success. Someone will get the idea that maybe such a system should have its cyber-finger on the red button for strategic ICBMs.