
What actually causes hallucinations in LLMs?

Learn why Large Language Models (LLMs) hallucinate and how calibration, abstention, incentives, tools, and evaluation improve reliability.
14 min read
Sep 15, 2025

Last week, I asked a well-known LLM a simple, checkable question. The generated reply was crisp and confident — but spectacularly wrong.

I was surprised that a state-of-the-art model could miss so clearly while sounding so sure. That moment stayed with me. If a system cannot reliably tell fact from fiction, what does that mean for how we write, teach, code, and support customers with AI in the loop?

Then I read OpenAI’s new study, “Why Language Models Hallucinate,” and it clicked: the paper explains why these confident mistakes happen and how to think about them. In this newsletter, we'll explore what hallucinations actually are, what causes them, and how to evaluate models more honestly.


Written By: Fahim