Jeff Wise wrote this interesting article about how commercial aircraft are getting all crossways - figuratively and literally - as nation states and other actors jam and spoof GPS/GNSS and conduct other Electromagnetic Spectrum Operations (EMSO, the broader term that has replaced Electronic Warfare), generally targeted at military activity. Like a lot of embedded systems, the boxes inside commercial aircraft were never designed with malware and malicious signals in mind.
Jeff Wise, "Air Travel Is Not Ready For Electronic Warfare", New York Magazine, 2024-01-02
I belong to the Association of Old Crows, a professional society for EMSO folks, and I get their Journal of Electromagnetic Dominance. It's mostly about RF stuff at a much lower level than my area of expertise - I'm an embedded/real-time/telecom software/firmware guy - so I can't really appreciate most of it. But the volume of ads and articles in the journal makes it obvious that this is a highly active area for both defense and offense.
Black Box AIs in Air Defense Systems
I've said many times - everyone is probably tired of hearing me say it - that I think the use of neural network AI, like that used in LLMs/GPTs, in air defense systems for target identification is inevitable. Putting the AI in control of firing to reduce response time will also happen. Accidental shootdowns of commercial aircraft due to human error are common enough to have their own Wikipedia page, so the AI will probably be more accurate than humans. But it's just a matter of time until an AI misidentifies a commercial aircraft as an enemy target. And when the resulting U.S. Congressional investigation into the loss of innocent civilian lives takes place, many are going to be surprised when the defense contractors say that not only does no one know why the aircraft was misidentified, no one can know. That's how these massive neural network algorithms work; so far, they're mostly black boxes.
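To make "black box" concrete, here is a deliberately tiny sketch in Python/NumPy. Everything in it is hypothetical - the weights are random stand-ins for trained ones, and the feature names and labels are invented - but the structure is the point: the classification is nothing but chained matrix arithmetic, with no rule or rationale attached to any step.

import numpy as np

rng = np.random.default_rng(0)

# Pretend these weights came out of training on labeled radar tracks;
# a real system would have millions of them. Toy 8 -> 16 -> 2 network.
W1, b1 = rng.normal(size=(16, 8)), rng.normal(size=16)
W2, b2 = rng.normal(size=(2, 16)), rng.normal(size=2)

def classify(track):
    """Return (label, confidence) for an 8-element track feature vector."""
    hidden = np.maximum(0.0, W1 @ track + b1)   # ReLU layer
    logits = W2 @ hidden + b2
    p = np.exp(logits - logits.max())
    p /= p.sum()                                # softmax
    labels = ("friendly", "hostile")            # hypothetical label order
    return labels[int(np.argmax(p))], float(p.max())

# A hypothetical track: speed, altitude, RCS, heading, IFF reply, ...
track = rng.normal(size=8)
label, confidence = classify(track)
print(f"classified {label} with {confidence:.0%} confidence")
# Every weight and every intermediate value is inspectable, yet none of
# them individually means anything; there is no if/then rule a board of
# inquiry could point to and say: here is why it fired.

Scale that up by six or seven orders of magnitude and you have the target identification problem: every weight is inspectable, and the decision is still unexplainable.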
Model Collapse In Air Defense System AIs
Correctly training LLM/GPT-type AIs requires an enormous volume of high quality content created and curated by human experts. Because such data sets are labor intensive, and therefore expensive, to create and assemble, there will be enormous pressure to train AIs with AI-produced data. This might even happen unknowingly - as in fact it already has - if the provenance of the original content isn't well documented, or if the people building the AI just don't care. (And there will be strong incentives not to reveal that content is AI-generated, because human-created content will be so much more highly valued.) Training AIs on AI-generated data leads to model collapse: a feedback loop in which each generation of model learns from the previous generation's output, so errors and hallucinations in the training data are reinforced while rare but real cases fade away.
This is likely to occur with the air defense AIs I described above.
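The feedback loop is easy to demonstrate with a toy simulation (my own illustration, not anything from a real training pipeline). Each "generation" here fits a trivial model - just a mean and a standard deviation - to the previous generation's output, and the tail-dropping step is a crude stand-in for the way generative models under-sample rare events:

import numpy as np

rng = np.random.default_rng(0)

# Generation 0: "human-curated" data from the true distribution N(0, 1).
data = rng.normal(0.0, 1.0, size=10_000)

for gen in range(10):
    mu, sigma = data.mean(), data.std()           # "train" a trivial model
    print(f"gen {gen}: mean={mu:+.3f}  std={sigma:.3f}")
    samples = rng.normal(mu, sigma, size=10_000)  # model generates content
    # Stand-in for how generative models under-sample rare events:
    # the improbable tails of each generation's output get dropped
    # before becoming the next generation's training data.
    data = samples[np.abs(samples - mu) < 2.0 * sigma]

Run it and the standard deviation shrinks generation after generation. The tails of the distribution - the rare cases - are the first casualty, and in an air defense context the rare case is exactly the one that matters: the off-course airliner that looks almost, but not quite, like everything the model has seen before.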
And there will be no quick way to fix this. By using those same LLMs to do the entry-level jobs, we will likely have eliminated the career paths that produce those very human experts. As the existing cohort of experts retires, dies, moves into management, or otherwise quits producing content, there will be no one to take their place. See also: "eating your own seed corn".