(I originally wrote this as a comment on LinkedIn, then turned the comment into a post on LinkedIn, then into a post for my techie and science fictional friends on Facebook, and finally turned it into a blog article here. I've said all this before, but it bears repeating.)
The destruction of the talent pipeline by using AI for work normally done by interns and entry-level employees not only threatens how humans fundamentally learn, but also leads to AI "eating its own seed corn". As senior experts leave the workforce, there will be no one left to generate the enormous amount of content (terabytes of it) necessary to train the Large Language Models.
Because human-generated content will generally be perceived as more valuable than machine-generated content, humans using AI to generate content will be highly incentivized not to identify AI-generated content as such. More and more AI-generated content will be swept up along with the gradually diminishing pool of human content for use as training data, in a kind of feedback loop leading to "model collapse", in which the AI produces nonsense.
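To make that feedback loop concrete, here is a minimal toy sketch in Python (the collapse_demo function and its parameters are invented for this post; this is not how real LLM training works). Each "generation" of model is simply a Gaussian fitted to samples drawn from the previous generation's model. Because each fit sees only a finite sample of the last model's output, estimation error compounds, and the fitted variance, standing in for the diversity of the training data, drifts toward zero.

```python
import random
import statistics

def collapse_demo(generations=50, sample_size=20, seed=42):
    """Toy model of 'model collapse': each generation's 'model' is just
    a Gaussian fitted to samples drawn from the previous generation's
    model. Training on your own output compounds finite-sample
    estimation error, and the variance, a stand-in for the diversity
    of the data, drifts toward zero."""
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0  # generation 0: the original human-generated data
    for gen in range(1, generations + 1):
        # Train the next model purely on the previous model's output.
        samples = [rng.gauss(mu, sigma) for _ in range(sample_size)]
        mu = statistics.fmean(samples)
        sigma = statistics.pstdev(samples)  # MLE estimate, biased low
        if gen % 10 == 0:
            print(f"generation {gen:2d}: mu = {mu:+.3f}, sigma = {sigma:.3f}")

if __name__ == "__main__":
    collapse_demo()
```

Run it and the printed sigma shrinks generation over generation: the tails of the original distribution, the rare and interesting content, are the first thing to disappear.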
A former boss of mine, back in my own U.S. national lab days, once wisely remarked that this is why the U.S. Department of Energy maintains national labs: experienced Ph.D. physicists take a really long time to make. And when you need them, you need them right now. Not having them when you need them can result in an existential crisis. So you have to maintain a talent pipeline that keeps churning them out.
It takes generations in human time to refill the talent pipeline and start producing senior experts again, whatever the domain of expertise. Once we go down this path, there is no quick and easy fix.
The lobe of my brain that goes active at science fiction conventions suggests that this anti-pattern is one possible explanation for the Fermi Paradox.