Large language models have an awkward history with telling the truth, especially when they can’t provide a real answer. Hallucinations have been a hazard for AI chatbots since the technology debuted a few years ago. But GPT-5 seems to be taking a new, more humble approach to not knowing answers: admitting it.
Though most AI chatbot responses are accurate, you can’t interact with an AI chatbot for long before it offers a partial or complete fabrication as an answer. The AI displays the same confidence in its answers regardless of their accuracy. Hallucinations have plagued users and even led to embarrassing moments for developers during live demonstrations.
OpenAI had hinted that the new version of ChatGPT would be willing to plead ignorance rather than make up an answer, and a viral X post by Kol Tregaskes has drawn attention to the groundbreaking concept of ChatGPT saying, “I don’t know – and I can’t reliably find out.”
“GPT-5 says ‘I don’t know’. Love this, thank you.” (Kol Tregaskes via X, August 18, 2025; pic.twitter.com/k6SNFKqZbg)
Technically, hallucinations are baked into how these models work. They’re not retrieving facts from a database, even if it looks that way; they’re predicting the next most likely word based on patterns in language. When you ask about something obscure or complicated, the AI is guessing the right words to answer it, not doing a classic search engine hunt. Hence, the appearance of entirely made-up sources, statistics, or quotes.
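To make that concrete, here is a minimal, purely illustrative Python sketch (not OpenAI’s code, and the word probabilities are invented) of why a model always produces *an* answer: it picks the next word from a probability distribution, and the output looks equally assertive whether one option dominates or none does.

```python
# Toy illustration of next-word prediction. The distributions below are
# hypothetical and for illustration only; a real model scores tens of
# thousands of tokens with a neural network.
import random

def next_token(distribution):
    """Pick the next word from a {word: probability} mapping."""
    words = list(distribution)
    weights = [distribution[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

# A well-supported answer: one option is overwhelmingly likely.
confident = {"Paris": 0.97, "Lyon": 0.02, "Nice": 0.01}

# An obscure question: no option is well supported, but the model
# still has to emit something, and it states it just as plainly.
uncertain = {"1842": 0.26, "1871": 0.25, "1893": 0.25, "1907": 0.24}

print(next_token(confident))   # almost always "Paris"
print(next_token(uncertain))   # still prints *a* year, with no hint of doubt
```

The second case is the hallucination problem in miniature: the sampling step happily returns a year even though, under these assumed probabilities, the model has essentially no idea.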
But GPT-5’s willingness to stop and say, “I don’t know,” reflects an evolution in how AI models handle their limitations, at least where their responses are concerned. A candid admission of ignorance replaces fictional filler. It may seem anticlimactic, but it’s significant because it makes the AI seem more trustworthy.
Clarity over hallucinations
Trust is crucial for AI chatbots. Why would you use them if you don’t trust the answers? ChatGPT and other AI chatbots carry built-in warnings against relying too heavily on their answers because of hallucinations, but there are always stories of people ignoring that warning and getting into hot water. If the AI simply says it can’t answer a question, people may be more inclined to trust the answers it does provide.
Of course, there’s still a risk that users will interpret the model’s self-doubt as failure. The phrase “I don’t know” might come off as a bug, not a feature, if you don’t realize the alternative is a hallucination, not the correct answer. Admitting uncertainty isn’t how the all-knowing AI that some imagine ChatGPT to be would behave.
But it’s arguably the most human thing ChatGPT could do in this instance. OpenAI’s proclaimed goal is artificial general intelligence, AI that can perform any intellectual task a human can. One of the ironies of AGI, though, is that mimicking human thinking means mimicking its uncertainties as well as its capabilities.
Sometimes, the smartest thing you can do is admit you don’t know something. You can’t learn if you refuse to acknowledge that there are things you don’t know. And, at the very least, it avoids the spectacle of an AI telling you to eat rocks for your health.