A discussion on instructing Large Language Models (LLMs) not to 'hallucinate' is raised in a Gist by user yoavg. The idea initially sounds 'silly' but gains legitimacy on reflection. The post argues that LLMs, if properly trained, could indeed reduce hallucinations, that is, false or misleading information generated by the model. The proposed method is to fine-tune models on instructions like 'don't hallucinate'; evidence suggests that different internal mechanisms can be leveraged for remembering versus improvising responses, so in principle such an instruction could steer the model toward the former. The Gist also acknowledges that there may be undesired consequences when LLMs always try to avoid hallucinations.
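To make the fine-tuning idea concrete, below is a minimal sketch of what instruction-tuning data for this behaviour might look like: answerable questions paired with factual answers, and unanswerable ones paired with an explicit abstention, both carrying a 'don't hallucinate' instruction. The record format, the instruction wording, and the abstention phrasing are illustrative assumptions, not taken from the original post.

```python
import json

# Assumed instruction wording; the post only gives the gist "don't hallucinate".
INSTRUCTION = (
    "Answer the question. Do not hallucinate: if you are not sure, say so."
)

examples = [
    {   # A fact the model should be able to "remember".
        "instruction": INSTRUCTION,
        "input": "What is the capital of France?",
        "output": "The capital of France is Paris.",
    },
    {   # A question the model should not improvise an answer to.
        "instruction": INSTRUCTION,
        "input": "What did Marie Curie eat for breakfast on 3 March 1901?",
        "output": "I don't know; I cannot reliably recall that detail.",
    },
]

# Write the examples as JSONL, a common format for instruction fine-tuning datasets.
with open("dont_hallucinate_examples.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```

The design choice here mirrors the post's argument: if remembering and improvising rely on different internal mechanisms, then training on pairs like these could teach the model to associate the instruction with abstaining rather than improvising.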