Tuesday, March 24, 2026

Some Thoughts On Quantum Computing

Some remarks by a speaker at a recent professional event inspired me to rant a bit on the topic of quantum computing. For sure, it's not my area of expertise, but I do have a dilettante interest in quantum physics, plus I took a short course at the University of Denver a few months ago that was a kind of "Quantum Computing for Dummies", and it is the area of expertise of the Ph.D. physicist who taught it.

[1] Quantum computers will not replace conventional computers. There is a lot of interest in QCs because it is believed they can do calculations that are not feasible for today's computers. But those calculations are only possible for problems for which quantum algorithms are known, or believed, to exist. You will never be running a web browser or a spreadsheet on a QC.

[2] One example is Peter Shor's quantum algorithm for factoring large numbers, which could be used to break encryption schemes that rely on the difficulty of such factorization. Shor is a theoretical computer scientist who developed Shor's Algorithm while at Bell Labs. This is why governments are interested in, and concerned about, QC. A lot of encrypted secrets have been stolen by hackers (theirs and ours), but they are just useless bits until they can be decrypted.
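The quantum speedup in Shor's algorithm is entirely in the order-finding step; the rest of the reduction from factoring to order-finding is classical number theory. Here's a minimal sketch in Python, with the order found by brute force - which is exactly the part a quantum computer would do exponentially faster. (The function names are mine, for illustration only.)

```python
from math import gcd

def order(a, n):
    """Find the multiplicative order of a mod n by brute force.
    This is the step Shor's algorithm performs exponentially
    faster on a quantum computer."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_reduction(n, a):
    """Given n and a base a coprime to n, try to recover a
    nontrivial factor of n from the order of a mod n."""
    assert gcd(a, n) == 1
    r = order(a, n)
    if r % 2 != 0:
        return None           # unlucky choice of a; try another
    y = pow(a, r // 2, n)     # a^(r/2) mod n
    if y == n - 1:
        return None           # unlucky again; try another
    return gcd(y - 1, n)      # a nontrivial factor of n

print(shor_reduction(15, 7))  # order of 7 mod 15 is 4, so gcd(7^2 - 1, 15) = 3
```

The classical brute-force loop takes time exponential in the number of digits of n, which is why factoring is hard today; the quantum order-finding subroutine is what changes that.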

[3] Not all encryption is based on large number factorization. Maybe there are as yet undiscovered QC algorithms for the "trap door functions" that those schemes use instead of large number multiplication and factorization. Maybe not. Until then - if ever - such schemes are described as quantum resistant. Switching to such schemes is probably a good idea for sensitive data, just in case QCs eventually work.

[4] QC may never work. Although scaling up quantum computers is talked about as if it were an engineering issue, the people trying to do it are - in my opinion, and whether they realize it or not - trying to solve what physicists call the measurement problem. What constitutes a "measurement" in a quantum system - an action that causes the wave function describing a superposition of states to collapse into one state - is unknown. QCs work by causing the superposition created by the quantum algorithm to collapse into a state representing the answer, possibly solving a problem that would take a conventional computer years, or centuries, or ... Engineers working on QCs are trying to prevent a measurement - whatever that is - from occurring, and the system from decohering, until they want it to. Defining what constitutes a quantum measurement is Nobel Prize territory, a problem that reaches into the very definition of reality in the transition from the realm of the very small to the realm we perceive. It is a problem that may never be solved, despite what investors are told.

I hope quantum computers do come to fruition. Not just for the practical reasons of solving some very difficult optimization problems and such, but because of the light they would shed on the measurement problem in physics. But I remain cautiously pessimistic.

Friday, December 19, 2025

The Negative Feedback Loop of AI Summaries

I've already ranted about how using Large Language Models (LLMs) - what passes for "AI" these days - to replace entry-level employees will disrupt the very talent pipeline that produces experienced senior employees, the people who generate and curate the enormous (terabytes to petabytes) quantities of data used to train the LLMs in the first place. LLMs are merely gigantic "autocomplete" programs making statistical guesses based on their training data. That's why I say this effort isn't sustainable.

But there's another negative feedback loop in the use of LLMs that I just became aware of. Various web and social media tools are starting to provide "AI summaries" in response to user queries. You've probably already experienced these, and have seen that these LLM-generated summaries range from usably good to laughably bad.

What Are AI Summaries

Here's the problem: studies have shown that between 80% and 90% of humans making queries for which there are AI summaries never go past the summary. They never click on the web links leading to the data on which the summary is based (if such links are even made available). This is in stark contrast to conventional web searches, in which the web links are the result of the search, and the user almost invariably clicks on a link to get the answer for which they were searching.

Because the user never visits the source web page, they never see the advertising used to pay for the generation of the web page. The web site is visited perhaps once and only once, by the LLM web crawler, and never by a human being. This destroys the business model used to pay for the web site in the first place. So the use of AI summaries will eventually result in the loss of the very data used to create the summary.

Suing AI Companies For Copyright

The only solution I see to this is to paywall all of the news and data sources being used by the AI summary algorithms. Lawsuits are already in progress against the AI companies extracting copyrighted data from advertising-supported web sites. Clearly copyrighting the web site alone isn't sufficient to keep its value from being extracted without payment.

Sources

Edd Gent, "AI coding is now everywhere. But not everyone is convinced.", MIT Technology Review, 2025-12-15, https://www.technologyreview.com/2025/12/15/1128352/rise-of-ai-coding-developers-2026/

Sabine Hossenfelder, "AI Is Breaking The Internet As We Know It", Backreaction, 2025-12-14, http://backreaction.blogspot.com/2025/12/ai-is-breaking-internet-as-we-know-it.html

Wednesday, September 03, 2025

Real-Time versus Real Time

Interesting article from IEEE Spectrum: "How AI’s Sense of Time Will Differ From Ours" [Popovski, 2025-08-13].

Human cognition integrates events from different senses - especially seeing and hearing - using a temporal window of integration (TWI). Among other things, it's the ability that lets us see continuous motion with synchronized sound in old-school films at 24 frames per second. But under the hood, everything is asynchronous, with different sensing and processing latencies. Which is why we don't automatically integrate seeing distant lightning strikes with the thunderclap, even though intellectually we may know they're the same event.
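The lightning example is easy to quantify: the flash crosses even miles of air essentially instantly, while the thunder plods along at roughly 343 meters per second, so the gap between the two grows by about three seconds per kilometer. A quick back-of-the-envelope sketch (the speeds are textbook values; the function is mine):

```python
SPEED_OF_SOUND = 343.0           # m/s in air at roughly 20 C
SPEED_OF_LIGHT = 299_792_458.0   # m/s in vacuum, close enough for air

def thunder_lag(distance_m):
    """Seconds between seeing a lightning flash and hearing the
    thunder from a strike distance_m away."""
    return distance_m / SPEED_OF_SOUND - distance_m / SPEED_OF_LIGHT

# A strike one kilometer away: the flash arrives in microseconds,
# the thunder almost three seconds later.
print(round(thunder_lag(1000.0), 2))
```

That ratio of latencies - about six orders of magnitude - is far outside any temporal window of integration, which is why the two percepts never fuse.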

Machines have to deal with this as well, especially AI in applications like self-driving vehicles. It's non-trivial. "Computers put timestamps, nature does not", as the author remarks. Anyone who develops real-time software - or has spent time analyzing log files - has already had to think about this. I talked about this issue in a prior blog article: "Frames of Reference".

I've also pointed out in a prior article, "Frames of Reference III", that our human sense of simultaneity continuously gives us a false view of reality. If I look towards the back of my kitchen, I see the breakfast table and chairs a few feet away. Since light travels about a foot per nanosecond, I'm actually seeing events that occurred a few nanoseconds ago (plus the communication and processing latency inside me). The back yard that I can see through the window: a few tens of nanoseconds ago. The house across the street: a hundred nanoseconds ago. The mountains to the west: microseconds ago. If I can see the moon on a clear evening: over a second ago. I see all of these things as existing in the same instant of time, but nothing could be further from the truth; my perception is at best an ensemble of many instants in the past, and the present is just an illusion.
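Using the foot-per-nanosecond rule of thumb, the lookback times in that paragraph are easy to tabulate. A small sketch - the distances are my own rough guesses for illustration, not measurements:

```python
LIGHT_FT_PER_NS = 1.0   # light travels roughly one foot per nanosecond

# Rough illustrative distances, in feet.
scenes_ft = {
    "breakfast table":         5,
    "back yard":               40,
    "house across the street": 100,
    "mountains (~10 miles)":   10 * 5280,
    "moon (~239,000 miles)":   239_000 * 5280,
}

for name, feet in scenes_ft.items():
    lag_ns = feet / LIGHT_FT_PER_NS
    print(f"{name}: light left it {lag_ns:,.0f} ns ({lag_ns / 1e9:.9f} s) ago")
```

Run it and the moon entry comes out to over a second, the mountains to tens of microseconds, and everything indoors to single-digit nanoseconds: every object in the visual field belongs to a different past instant.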

AI perception of the real world will have similar complications.