Generative AI and LLMs (large language models) are in serious trouble
There is growing and very tangible evidence that Generative AI and the large language models (LLMs) that underpin it are in serious trouble. We are talking here about OpenAI’s ChatGPT and the models behind it, Google’s Gemini, and other similar LLM tools.
Not only are they built on the theft of the intellectual property on which they were trained, but it is increasingly clear that they are generating vast numbers of errors, ‘hallucinations’ and, frankly, bullshit.
The evidence is now clear that the latest, more ‘advanced’ Generative AI models are increasingly prone to errors and bullshit. OpenAI’s newest model has a reported error rate of 40-60% on some tests, a huge jump from the 14% of the first version. No one, least of all their designers, actually understands why, but a leading theory is that the models’ ‘reasoning’ steps are compounding made-up facts recursively, moving towards ‘model collapse’.
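The mechanism behind ‘model collapse’ can be seen in miniature. The toy sketch below says nothing about how real LLMs work internally; it simply shows what happens when any statistical model is repeatedly retrained on its own synthetic output. Here the ‘model’ is just a Gaussian fitted to a small sample: generation after generation, the fitted spread shrinks and the model forgets the tails of the original data.

```python
import random
import statistics

# Toy illustration of model collapse: fit a model to data, generate
# synthetic data from the fitted model, refit on that synthetic data,
# and repeat. This is NOT an LLM; it is a deliberately simple analogue.

random.seed(0)

def fit(samples):
    """Fit a Gaussian 'model' (mean, standard deviation) to the data."""
    return statistics.mean(samples), statistics.stdev(samples)

def generate(mean, stdev, n):
    """Draw n synthetic data points from the fitted model."""
    return [random.gauss(mean, stdev) for _ in range(n)]

SAMPLE_SIZE = 5      # tiny samples exaggerate the effect
GENERATIONS = 100

# Generation 0: 'real' data from a standard normal distribution.
data = generate(0.0, 1.0, SAMPLE_SIZE)

stdevs = []
for _ in range(GENERATIONS):
    mean, stdev = fit(data)
    stdevs.append(stdev)
    data = generate(mean, stdev, SAMPLE_SIZE)  # retrain on synthetic data only

# The fitted spread collapses over generations: the model's picture of
# the world narrows as it feeds on its own output.
print(f"initial stdev: {stdevs[0]:.3f}, final stdev: {stdevs[-1]:.6f}")
```

With each refit the model loses a little of the original distribution’s variety and never gets it back, which is the same dynamic the articles below describe for general-purpose models trained on an internet increasingly full of AI-generated text.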
It’s definitely time for anyone interested in the truth, in actual work and in delivering value to hit the hard pause on any use of Generative AI and LLMs until it becomes clear what is going on here.
To be clear, this isn’t to suggest that all forms of machine learning are prone to these kinds of errors. Small, targeted, domain-specific models can be genuinely useful tools. But the rush to create ‘general’ models with current approaches and tools does seem to be heading for a train wreck.
References and links:
Academic Article - ChatGPT is Bullshit (July 2024) https://link.springer.com/article/10.1007/s10676-024-09775-5
AI Hallucinations are getting worse - NewScientist (May 2025) https://www.newscientist.com/article/2479545-ai-hallucinations-are-getting-worse-and-theyre-here-to-stay/
AI Hallucinations worse than ever - Forbes (May 2025) https://www.forbes.com/sites/conormurray/2025/05/06/why-ai-hallucinations-are-worse-than-ever/
AI is getting ‘more powerful’ but its hallucinations are getting worse - NYT (May 2025) https://www.nytimes.com/2025/05/05/technology/ai-hallucinations-chatgpt-google.html
AI model collapse - The Register (May 2025) https://www.theregister.com/2025/05/27/opinion_column_ai_model_collapse/
AI model collapse - BGR (May 2025) bgr.com/tech/ai-m…
What is Model Collapse - Charlotte Content Marketing Podcast (Jan 2025) www.charlottecontentmarketing.com/knowledge…