Simon Willison discusses a common developer frustration with large language models (LLMs) for code: 'hallucinations,' where an LLM invents methods or libraries that do not exist. He argues that these errors are comparatively minor because they surface the moment the code is run, unlike subtler LLM mistakes that compile and execute but produce incorrect results. The article stresses that manually running and testing generated code is crucial, and offers tips for reducing hallucinations, such as trying a different model, making good use of context, and choosing well-established technologies.
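A minimal, hypothetical sketch of the distinction the article draws (the function and bug below are illustrative, not taken from Willison's post): a hallucinated method fails loudly with an error the first time the code runs, whereas a subtle logic error runs cleanly and returns a plausible but wrong answer, which only executing and testing the code will catch.

```python
def chunk(items, size):
    """Split items into consecutive chunks of length `size`.

    Subtle, plausible-looking bug: integer division drops the final
    partial chunk, so chunk([1, 2, 3, 4, 5], 2) silently loses the 5.
    """
    return [items[i * size:(i + 1) * size] for i in range(len(items) // size)]


if __name__ == "__main__":
    # By contrast, a hallucinated call such as `items.split_into(2)` would raise
    # AttributeError immediately -- the easy-to-catch kind of mistake.

    # A quick manual test exposes the quieter error:
    result = chunk([1, 2, 3, 4, 5], 2)
    print(result)  # [[1, 2], [3, 4]] -- the trailing 5 is missing
    assert result == [[1, 2], [3, 4], [5]], "partial chunk was dropped"
```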