Regarding the current state of Artificial Intelligence (AI) applications: my concern is that by the time we realize we need an enormous volume of high-quality content, created and curated by human experts, to correctly train Large Language Models (LLMs) like ChatGPT, we will have used those same LLMs to eliminate the entry-level career paths that produce those experts. As the existing cohort of experts retires, dies, moves into management, or otherwise stops producing content, there will be no one to take their place. We will have destroyed the very talent pipeline that creates the necessary training data.
We will have "eaten our own seed corn".
Because human-created and -curated content will be more expensive to produce, organizations will be strongly incentivized to use LLM-created content to train other LLMs - or perhaps even the same LLM. Training a model on another model's output tends to amplify errors in the training data, eventually leading to model collapse, where the LLM produces nonsense. (This is less likely to happen with human-created content: different humans rarely make exactly the same mistakes, whereas a model repeats its own systematic errors.)
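This amplification is easy to see in miniature. The sketch below is a toy illustration, not a claim about any real LLM training pipeline; the Gaussian "model", the sample size, and the number of generations are all assumptions chosen for brevity. It repeatedly fits a simple distribution to data and then trains the next generation solely on the previous generation's synthetic output.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: "human-created" data, drawn from a standard normal distribution.
data = rng.normal(loc=0.0, scale=1.0, size=30)

for generation in range(201):
    # "Train" this generation's model: fit a Gaussian by estimating mean and std.
    mu, sigma = data.mean(), data.std()
    if generation % 20 == 0:
        print(f"generation {generation:3d}: mean={mu:+.3f}  std={sigma:.3f}")
    # Discard the original data and train the next generation entirely on
    # synthetic samples drawn from the current model.
    data = rng.normal(loc=mu, scale=sigma, size=30)
```

Each generation inherits the previous generation's sampling error, so the estimated spread tends to shrink and the mean wanders; by the later generations the synthetic data covers only a narrow sliver of the original distribution. Human-authored data keeps being drawn from the real distribution, which is why it does not feed back on itself in this way.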
Because human-created and -curated content will be deemed higher quality, organizations will be strongly incentivized not to label LLM-created content as such. This will be a problem for LLM developers, who need enormous amounts of high-quality data to train their models.
We will have "salted our own fields".
The seeds of the destruction of LLMs lie in the economics of creating and using them.
I believe that LLMs have a future as tools for experienced users, in the same way such users today rely on Wikipedia, Google Search, and StackOverflow - and with much of the same risk. But even if we retain people in entry-level positions, encouraging them to use LLMs in their work may deprive them of the very experience that entry-level work provides and that they need to move on to more advanced tasks.