The article highlights growing concern over AI "hallucination," the tendency of language models to output incorrect information, citing recent errors by systems such as Google's Gemini, Microsoft's Copilot, and OpenAI's ChatGPT. Because the problem is inherent to how these models work, it continues to undermine their dependability despite ongoing mitigation efforts.