Monday, December 18, 2023

Bruce Schneier: AIs, Mass Spying, and Trust

If you do anything with high technology (I do), you don't have to be a cybersecurity expert (I'm not) to learn something from reading security guru and Harvard fellow Bruce Schneier. His recent weekly newsletter had two articles that gave me genuinely new and different points of view.

"Mass Spying" - Mr. Schneier makes a distinction between surveillance and spying. An example of the former is when a law enforcement agency puts a "mail cover" on your postal mail: they know everything you're receiving and its return address, but they don't know the contents. An example of the latter is when they open your mail and read what's in it. Technology made mass surveillance (of all kinds) cost effective, but spying was labor intensive: humans still had to read and summarize the contents of our communications. GPTs/LMMs have made mass spying practical, since now AIs can summarize written and even verbal communication. We know how technology made mass surveillance scalable, but AIs now open the era of scalable mass spying.

"AI and Trust" - Mr. Schneier explains the different between interpersonal trust and social trust. The former is the kind of trust I have for my spousal unit. I've known her for decades, I know what kind of person she is, and I have a wealth of experience interacting with her. The latter is the kind of trust I have for the people that made the Popeye's chicken sandwich yesterday, or for the driver of the bus or of the commuter train I rode on Saturday: I don't know any of these people, but because of laws, regulations, and social conventions, I trust they won't poison me or crash the vehicle I'm in. Interpersonal trust is the result of experience. Social trust is the result of, for the most part, the actions of government, which establishes and enforces a rule of law. Here's the thing: AIs are getting so good - in essence, passing the Turing Test - that we subconsciously mistake them for being worthy of interpersonal trust. But they aren't. Current GPTs/LMMs are tools of corporations, in particular profit-maximizing corporations, and if unregulated (that is, the corporations, not the AIs) they aren't even worthy of our social trust.

Both articles were well worth my time to read, and I encourage you to read them too.

Update (2023-12-18)

Schneier points out that you can't regulate an AI itself: AIs aren't legal entities, so they can't be held responsible for anything they do. You have to regulate the corporation that made the AI.

I'm no lawyer, but I feel pretty confident in saying that corporations that make AIs can and will be sued for their AIs' behavior and actions. AIs are merely software products. The legal entities behind them are ultimately responsible for their actions.

One of my concerns about using Large Language Model/Generative Pre-trained Transformer types of AIs in safety-critical applications - aviation, weapons systems, autonomous vehicles, etc. - is what happens when the LLM/GPT makes the inevitable mistake, e.g. shoots down a commercial airliner near an area of conflict (which happens often enough due to human error that there is a Wikipedia page devoted to it). The people holding the post-mortem inquiry are going to be surprised to find that the engineers who built the AI don't know - in fact, can't know - how it arrived at its decision. The AI is a black box trained with a tremendous amount of data, and its inner workings based on that data are more or less opaque even to the folks who built it. Insurance companies are going to have to grapple with this issue as well.

Update (2024-02-21)

Even though you can't legally hold an AI itself responsible for crimes or mistakes the way you can a person or a corporation, that doesn't keep companies that use AIs from trying to do just that. In a recent story, Air Canada tried to avoid responsibility for its experimental customer service AI giving a customer completely fictional advice about its bereavement policy. The Canadian small claims court wasn't having it, as well it shouldn't.
