AI hallucination occurs when a language model produces text that sounds plausible but is factually wrong. It happens because LLMs are trained to predict likely token sequences, not to verify claims against a source of truth, so a fluent but false continuation can score as well as an accurate one. Techniques that reduce hallucination include retrieval-augmented generation (RAG), which grounds responses in retrieved documents; fine-tuning on curated domain data; chain-of-thought prompting, which makes intermediate reasoning explicit; and citation-based responses, which require the model to attribute claims to sources.
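The grounding idea behind RAG can be sketched in a few lines: retrieve the documents most relevant to a query, then prepend them to the prompt so the model answers from evidence rather than from its parametric memory. The corpus, the word-overlap retriever, and the helper names below are all illustrative assumptions; a production system would use embedding-based retrieval and an actual LLM call.

```python
import re

# Hypothetical toy corpus standing in for a real document store.
CORPUS = [
    "The Eiffel Tower is 330 metres tall.",
    "Mount Everest is 8,849 metres above sea level.",
    "The Great Wall of China is over 21,000 km long.",
]

def tokenize(text: str) -> set[str]:
    """Lowercase word set, ignoring punctuation."""
    return set(re.findall(r"\w+", text.lower()))

def retrieve(query: str, corpus: list[str], k: int = 1) -> list[str]:
    """Rank documents by word overlap with the query (a stand-in
    for embedding similarity) and return the top k."""
    q = tokenize(query)
    ranked = sorted(corpus, key=lambda d: len(q & tokenize(d)), reverse=True)
    return ranked[:k]

def build_grounded_prompt(query: str) -> str:
    """Prepend retrieved context so the model answers from evidence."""
    context = "\n".join(retrieve(query, CORPUS))
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {query}"
    )

print(build_grounded_prompt("How tall is the Eiffel Tower?"))
```

The key design point is that the model never has to "remember" the height: the fact is placed directly in the prompt, and the instruction restricts the answer to that context.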








