Monday, December 18, 2023

Bruce Schneier: AIs, Mass Spying, and Trust

If you do anything with high technology (I do), you don't have to be a cybersecurity expert (I'm not) to learn something from reading security guru and Harvard fellow Bruce Schneier. His recent weekly newsletter had two articles that gave me genuinely new and different points of view.

"Mass Spying" - Mr. Schneier makes a distinction between surveillance and spying. An example of the former is when a law enforcement agency puts a "mail cover" on your postal mail: they know everything you're receiving and its return address, but they don't know the contents. An example of the latter is when they open your mail and read what's in it. Technology made mass surveillance (of all kinds) cost effective, but spying was labor intensive: humans still had to read and summarize the contents of our communications. GPTs/LMMs have made mass spying practical, since now AIs can summarize written and even verbal communication. We know how technology made mass surveillance scalable, but AIs now open the era of scalable mass spying.

"AI and Trust" - Mr. Schneier explains the different between interpersonal trust and social trust. The former is the kind of trust I have for my spousal unit. I've known her for decades, I know what kind of person she is, and I have a wealth of experience interacting with her. The latter is the kind of trust I have for the people that made the Popeye's chicken sandwich yesterday, or for the driver of the bus or of the commuter train I rode on Saturday: I don't know any of these people, but because of laws, regulations, and social conventions, I trust they won't poison me or crash the vehicle I'm in. Interpersonal trust is the result of experience. Social trust is the result of, for the most part, the actions of government, which establishes and enforces a rule of law. Here's the thing: AIs are getting so good - in essence, passing the Turing Test - that we subconsciously mistake them for being worthy of interpersonal trust. But they aren't. Current GPTs/LMMs are tools of corporations, in particular profit-maximizing corporations, and if unregulated (that is, the corporations, not the AIs) they aren't even worthy of our social trust.

Well worth my time to read, and I encourage you to do so too.

Update (2023-12-18)

Schneier points out that you can't regulate the AI itself; AIs aren't legal entities, so they can't be held responsible for anything they do. You must regulate the corporation that made the AI.

I'm no lawyer, but I feel pretty confident in saying that corporations that make AIs can and will be sued for their AIs' behavior and actions. AIs are merely software products. The legal entities behind them are ultimately responsible for their actions.

One of my concerns about using Large Language Model/Generative Pre-trained Transformer types of AIs in safety critical applications - aviation, weapons systems, autonomous vehicles, etc. - is what happens when the LLM/GPT makes the inevitable mistake, e.g. shoots down a commercial airliner near an area of conflict (which happens often enough due to human error that there is a Wikipedia page devoted to it). The people holding the post-mortem inquiry are going to be surprised to find that the engineers who built the AI don't know - in fact, can't know - how it arrived at its decision. The AI is a black box trained with a tremendous amount of data, and its inner workings based on that data are more or less opaque even to the folks who built it. Insurance companies are going to have to grapple with this issue as well.

Update (2024-02-21)

Even though you can't legally hold an AI itself responsible for crimes or mistakes the way you can a person or a corporation, that doesn't keep companies that are using AIs from trying to do just that. In a recent story, Air Canada tried to avoid responsibility for its experimental customer service AI giving a customer completely fictional advice about the airline's bereavement policy. The Canadian small claims court wasn't having it, as well it shouldn't.

Saturday, December 09, 2023

Unusually Well Informed Delivery

The U.S. Postal Service scans the outside, address-bearing side of all your mail. Obviously: they have to have automated mechanisms to sort the mail by address, most especially by zip code. So they have some pretty good character recognition technology, for both printed and handwritten addresses.

But did you know they keep a scanned image of your mail? U.S. law enforcement agencies - and sometimes intelligence agencies - can and do get access to these images through a process referred to as a mail cover, which, unlike a search warrant, doesn't require a judge's approval.

You can see these images too. The U.S.P.S. has a service called "Informed Delivery", part of their "Innovative Business Technology" program. You can sign up online to get an email every day, seven days a week (yes, even on Sunday), showing the mail that has been scanned with your address on it. It's free. I've used it for some time.

Every morning I get an email with black and white digital images of my mail that had been scanned, probably the night before. Most of it is junk mail. The email also contains color digital images of catalogs I'll be receiving, which I'm sure the catalog merchandisers pay to have included. This is probably another revenue stream for the U.S.P.S. (and may be what pays for Informed Delivery).

The other day I had something extra in my Informed Delivery email. I had scanned images of the outsides of three other people's mail. These people weren't even on my street; two weren't even in my zip code.

Obviously some kind of glitch. But it wasn't a security hole I was expecting to find. That was naive on my part: if the FBI and the NSA find this information useful, someone who gets it by accident may find it useful as well.

Update 2023-12-13: This AM I got another Informed Delivery email from the USPS with an image of someone else's mail in it, again not in my zip code. So this glitch isn't a one-off.

Update 2023-12-13: And for our friends in the Great White North: "Canada Post breaking law by gathering info from envelopes, parcels: watchdog".

Update 2023-12-13: Note that the images of mail in your Informed Delivery email, whether yours or someone else's, are remote content: they're downloaded from a remote server and displayed when you view the HTML email. This means they can be removed or altered without anyone accessing your copy of the email on your personal device. If you need to save these images for any reason, you need to save the email in such a way that captures the remote images as well. Printing a hardcopy might be the simplest solution.
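If you'd rather automate that, here's a minimal sketch in Python that pulls down the remote images referenced by an email you've exported to disk. The filename informed_delivery.eml and the output directory are just examples, and I haven't verified whether the USPS serves these images without the cookies from a logged-in session, so treat this as a starting point rather than a turnkey tool.

#!/usr/bin/env python3
# Minimal sketch: save local copies of the remote images referenced by an
# HTML email that has been exported to disk as an .eml file. The file and
# directory names are examples; whether the image URLs are fetchable
# without an authenticated USPS session is something I haven't verified.

import email
import email.policy
import os
import urllib.request
from html.parser import HTMLParser

class ImageSrcParser(HTMLParser):
    """Collect the src attribute of every <img> tag in an HTML document."""
    def __init__(self):
        super().__init__()
        self.sources = []
    def handle_starttag(self, tag, attrs):
        if tag == "img":
            for name, value in attrs:
                if name == "src" and value and value.startswith("http"):
                    self.sources.append(value)

def save_remote_images(eml_path, out_dir):
    os.makedirs(out_dir, exist_ok=True)
    with open(eml_path, "rb") as fp:
        message = email.message_from_binary_file(fp, policy=email.policy.default)
    for part in message.walk():  # find the HTML body part(s)
        if part.get_content_type() != "text/html":
            continue
        parser = ImageSrcParser()
        parser.feed(part.get_content())
        for index, url in enumerate(parser.sources):
            destination = os.path.join(out_dir, f"image_{index:03d}")
            urllib.request.urlretrieve(url, destination)
            print(f"saved {url} -> {destination}")

if __name__ == "__main__":
    save_remote_images("informed_delivery.eml", "saved_images")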

Update 2023-12-13: A friend, colleague, and former law enforcement officer asked me whether the routing bar code printed by the USPS - visible in the images in the Informed Delivery email - on the other people's mail was the same as the one on my mail. It's been a few years since I've had to eyeball bar codes of any kind, but I'm going to say "no".

Update 2023-12-13: Maybe this is obvious, but I thought I'd better say it: not subscribing to Informed Delivery will not prevent the USPS from scanning your mail, keeping the digital images, and showing them (deliberately or not) to other folks. At least by subscribing, you can see what other people might see.

Monday, December 04, 2023

Lessons in Autonomous Weapons from the Iraq War

Read a good article this AM from the Brookings Institution, from back in 2022, about issues in the use of automation in weapons systems: "Understanding the errors introduced by military AI applications" [Kelsey Atherton, Brookings Institution, 2022-05-06]. It's in part a case study of the shootdown of an allied aircraft by a ground-to-air missile system operating autonomously during the Iraq War. That conflict predates the development of Generative Pre-trained Transformer (GPT) algorithms, but there's a lot here that is applicable to the current discussion about applying GPTs to autonomous weapons systems. I found three things of special note.

First, it's an example of the "Swiss cheese" model of system failures, in that multiple mechanisms that could have prevented this friendly fire accident all failed or were not present.

Second, the article cites the lack of realistic and accurate training data - not, in this case, for a GPT, but for the testing and verification of the missile system during its development.

Third, it cites a study that found that even when there is a human-in-the-loop, humans aren't very good at choosing to override an autonomous system.

I consider the use of full automation in weapons systems to be - unfortunately - inevitable. Part of that is a game theory argument: if your adversary uses autonomous weapons, you must do so as well, or you stand to be at a serious disadvantage on the battlefield. But in the specific case of incoming short-range ballistic missiles, the time intervals involved may be too short to permit humans to evaluate the data and make and execute a decision; see the back-of-the-envelope sketch below. Also, in the case in which the ballistic missile is targeted at the ground-to-air missile system itself, the stakes of a missed intercept are lower if that system is unmanned.
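To put rough numbers on "too short": the figures below are assumed round values for illustration (a terminal speed of about 2 km/s and a detection range of 60 km), not data from the Brookings article or from any particular system.

# Back-of-the-envelope only: how much decision time does a defending crew
# have against a short-range ballistic missile? The numbers are assumed
# round figures for illustration, not data from the article.

def seconds_to_impact(detection_range_km, closing_speed_km_s):
    return detection_range_km / closing_speed_km_s

if __name__ == "__main__":
    detection_range_km = 60.0  # assumed radar detection range
    closing_speed_km_s = 2.0   # assumed terminal speed of the incoming missile
    window = seconds_to_impact(detection_range_km, closing_speed_km_s)
    # Track confirmation, classification, and interceptor flight time all eat
    # into this window, leaving even less time for a human to decide.
    print(f"Roughly {window:.0f} seconds from detection to impact.")  # about 30

Thirty-ish seconds, minus everything else the system has to do, doesn't leave much room for a human-in-the-loop to second-guess the machine.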

It was an interesting ten pages that were well worth my time.

Saturday, December 02, 2023

Time, Gravity, and the God Dial

Disclaimer: my knowledge of physics is at best at the dilettante level, even with more than a year of the subject in college - one elective course of which earned me one of the only two B letter grades across both of my degrees. (Statistics similarly defeated me.)

I've read that there is no variable for time in the equations used in quantum physics, no t, because (apparently) time doesn't play a role. That's why quantum effects that are visible at the macroscopic level - even something as simple as stuff that absorbs light to glow in the dark - are random processes time-wise.

Yet time t plays a crucial role at the macroscopic level, in classical, or "Newtonian", mechanics.

Not only that, time is malleable, in the sense that it is affected by velocity and acceleration (special relativity) and by gravity (general relativity) - effects that are not only measurable, but that stuff we depend on every day (like GPS) has to make adjustments for.

So suppose God has a dial that controls the scale of their point of view, all the way from the smallest sub-atomic scale we know of, the Planck length, to the largest cosmological scale we know of, the observable Universe. At some point as God turns this dial on their heavenly tele/micro/scope, zooming out, out, far out, time goes from not being a factor at all to being an intrinsic factor for whatever they’re looking at.

Does this transition happen all at once? Does it happen gradually - somehow - in some kind of jittery change? What the heck is going on in this transition? What other things similarly change at this transition point? Is this the point at which particle-wave duality breaks down? Where Schrödinger's Cat definitively becomes alive or dead? Where gravity starts to matter?

Gravity? Yeah, gravity. Because we currently have no theory of quantum gravity. Yet it seems necessary that at the quantum level gravity ought to play a role for a wave/particle. If a particle is in a superposition of states, what does that say about the gravitational attraction associated with the particle's mass? At what point on the dial does gravity make a difference? There's a Nobel Prize for sure for the first person to make significant progress on this question.

This is the kind of thing I think about while eating breakfast.

Will Optical Atomic Clocks Be Too Good?

Read a terrific popsci article this morning in Physics Today on timekeeping: "Time Too Good To Be True" [Daniel Kleppner, Physics Today, 59(3), 2006-03-01]. (Disclaimer: it's from 2006, so it's likely to be out of date.)

The gist of the article is that as we make more and more precise atomic clocks by using higher and higher frequency resonators (like transitioning from cesium atomic clocks, which resonate in the microwave range, to elements that resonate in the optical range), in some ways they become less and less useful. Eventually we will create (or perhaps by now have created) clocks whose frequencies are so high that they are affected by extremely small perturbations in gravity, like tidal effects from the Sun and the Moon. Or perhaps, I wonder, as clocks get more sensitive, even smaller gravitational effects, like a black hole and a neutron star colliding 900 million light years away (which has in fact been detected).

Even today, the cesium and rubidium atomic clocks in GPS satellites have to be adjusted for special relativistic effects (due to their orbital velocity) and general relativistic effects (due to their altitude above the center of mass of the Earth), where, in round numbers, an error of one nanosecond throws the ranging measurement for a single satellite off by about a foot.
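Those corrections can be sanity-checked with the standard first-order textbook formulas. The constants below (Earth's gravitational parameter, a mean Earth radius, and a GPS orbital radius of roughly 26,560 km) are published round numbers I'm supplying, not figures from the post, and the sketch ignores orbital eccentricity and the rotation of the ground observer.

import math

# Rough sanity check of the relativistic corrections for a GPS satellite
# clock, using standard published constants and the first-order formulas.
# Neglects orbital eccentricity and the motion of the ground observer.

C   = 299_792_458.0     # speed of light, m/s
GM  = 3.986004418e14    # Earth's gravitational parameter, m^3/s^2
R_E = 6.371e6           # mean Earth radius, m
R_S = 2.656e7           # GPS orbital radius (about 20,200 km altitude), m
DAY = 86400.0           # seconds per day

# General relativity: the satellite sits higher in Earth's gravity well,
# so its clock runs fast relative to one on the ground.
general = GM * (1.0 / R_E - 1.0 / R_S) / C**2

# Special relativity: the satellite's orbital velocity slows its clock.
v_orbit = math.sqrt(GM / R_S)
special = -v_orbit**2 / (2.0 * C**2)

print(f"general relativity: {general * DAY * 1e6:+.1f} microseconds/day")  # about +45.7
print(f"special relativity: {special * DAY * 1e6:+.1f} microseconds/day")  # about  -7.2
print(f"net if uncorrected: {(general + special) * DAY * 1e6:+.1f} microseconds/day")

# The ranging rule of thumb: light travels about a foot in a nanosecond.
print(f"one nanosecond of clock error = {C * 1e-9:.2f} meters, roughly a foot")

A net drift of a few tens of microseconds per day, left uncorrected, would translate into kilometers of accumulated ranging error, which is why the corrections are baked into the system.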

(Photo: "Sophia." This is an NTP server I built for my home network that incorporates a chip-scale cesium atomic clock disciplined to GPS; everyone needs a stratum-0 clock of their own. Also shown: my lab assistant.)

As atomic clocks become far more accurate and precise, we may no longer be able to meaningfully compare them across locations. Note that relativistic effects aren't just jitter issues: they affect the fundamental nature of time itself, so this isn't merely a measurement or equipment problem.

(Photo: part of an experimental ytterbium lattice optical atomic clock at the NIST laboratories in Boulder, Colorado, which I photographed in 2018.)

One of the problems with optical atomic clocks is that to compare two of them in two locations we have to account for differences in altitude as small as one centimeter; that's how precise these clocks are, and how sensitive they are to general relativistic effects. We simply don't have, and probably can't have, a way to measure altitudes from the center of mass of the Earth that accurately. And one of the ways we measure the shape of the Earth's "geoid" is to (you knew this was coming) compare synchronized/syntonized atomic clocks. So there's definitely a chicken-and-egg problem.
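That one-centimeter sensitivity falls right out of the gravitational redshift: near the surface, two clocks separated in height by some small distance differ fractionally in rate by about g times that distance divided by c squared. Here's a quick sketch; the comparison to a fractional frequency uncertainty of roughly 1e-18 for the best optical lattice clocks is my own round number, not something from the Kleppner article.

# How much does one centimeter of altitude matter to a clock? Near the
# surface, the fractional frequency shift between two clocks separated in
# height by delta_h is approximately g * delta_h / c^2.

G_SURFACE = 9.81           # m/s^2, surface gravitational acceleration
C = 299_792_458.0          # m/s, speed of light

def fractional_shift(delta_h_meters):
    return G_SURFACE * delta_h_meters / C**2

if __name__ == "__main__":
    shift = fractional_shift(0.01)  # one centimeter
    # About 1.1e-18 - the same order as the fractional frequency uncertainty
    # quoted for the best optical lattice clocks, which is why a centimeter
    # of unknown altitude matters when comparing two of them.
    print(f"1 cm of altitude corresponds to a fractional shift of {shift:.2e}")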