Monday, December 04, 2023

Lessons in Autonomous Weapons from the Iraq War

Read a good article this AM, from the Brookings Institution back in 2022, about issues in the use of automation in weapons systems: Understanding the errors introduced by military AI applications  [Kelsey Atherton, Brookings Institution, 2022-05-06]. It's in part a case study of the shoot-down of an allied aircraft by a ground-to-air missile system operating autonomously during the Iraq War. That conflict predates the development of Generative Pre-trained Transformer (GPT) algorithms, but there's a lot here that is applicable to the current discussion about the application of GPTs to autonomous weapons systems. I found three things of special note.

First, it's an example of the "Swiss cheese" model of system failures, in that multiple mechanisms that could have prevented this friendly-fire accident all failed or were not present.
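A rough way to see why that layering matters is a back-of-the-envelope sketch (the failure rates below are made up for illustration; they are not from the article): an accident requires every safeguard to fail at once, so removing even one layer raises the odds sharply.

```python
# Sketch: "Swiss cheese" model with hypothetical, illustrative failure rates.
# Each defensive layer is assumed to fail independently; an accident occurs
# only when every layer fails at the same time.

from math import prod

def accident_probability(layer_failure_rates):
    """Probability that all layers fail simultaneously (independence assumed)."""
    return prod(layer_failure_rates)

# Hypothetical layers: IFF check, operator review, engagement procedure.
layers = [0.05, 0.10, 0.20]

print(accident_probability(layers))      # all three layers present: 0.001 (0.1%)
print(accident_probability(layers[:2]))  # one layer missing:        0.005 (0.5%)
print(accident_probability(layers[:1]))  # two layers missing:       0.05  (5%)
```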

Second, the article cites the lack of realistic and accurate training data, not in this case for a GPT, but for testing and verification of the missile system during its development.

Third, it cites a study that found that even when there is a human in the loop, humans aren't very good at choosing to override an autonomous system.

I consider the use of full automation in weapons systems to be - unfortunately - inevitable. Part of that is a game-theory argument: if your adversary uses autonomous weapons, you must do so as well, or you stand to be at a serious disadvantage on the battlefield. But in the specific case of incoming short-range ballistic missiles, the time intervals involved may be too short for humans to evaluate the data, make a decision, and execute it. Also, when the ballistic missile is targeted at the ground-to-air missile system itself and the interceptor misses, the stakes of that failure are lower if the ground-to-air system is itself unmanned.
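To make the game-theory point concrete, here is a minimal sketch with invented payoff numbers (purely illustrative, not from the article): deploying autonomous weapons comes out as the better move for each side no matter what the other does, which is what drives the "inevitability" conclusion even though mutual restraint would leave both sides better off.

```python
# Sketch of the game-theory argument with invented payoffs.
# Keys: (our choice, adversary's choice); values: payoff to us (higher is better).

payoffs = {
    ("deploy",  "deploy"):  -1,  # both automate: risky equilibrium
    ("deploy",  "abstain"):  2,  # we automate, they don't: battlefield edge
    ("abstain", "deploy"):  -3,  # they automate, we don't: serious disadvantage
    ("abstain", "abstain"):  1,  # neither automates: safest outcome
}

for adversary in ("deploy", "abstain"):
    best = max(("deploy", "abstain"), key=lambda ours: payoffs[(ours, adversary)])
    print(f"If the adversary chooses {adversary!r}, our best response is {best!r}")

# "deploy" is the best response in both cases, even though mutual abstention
# would pay both sides more than mutual deployment.
```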

It was an interesting ten pages, well worth my time.
