Unsloth AI has addressed a critical issue in gradient accumulation that affected the training and finetuning of large language models (LLMs). The bug caused training losses to climb as the number of gradient accumulation steps increased, so accumulated training no longer matched full batch training. The Unsloth AI team formulated and implemented a fix that brings gradient accumulation back into agreement with full batch training. Updating Unsloth and using the fixed trainer, as the team advises, can significantly reduce the resulting error.
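The root cause, per Unsloth's writeup, lies in how the cross-entropy loss is normalized: a naive accumulation loop averages each mini-batch's mean loss, weighting every mini-batch equally regardless of how many tokens it contains, whereas full batch training divides the summed loss by the total token count. The sketch below illustrates the discrepancy with synthetic per-token losses; the tensors and variable names are illustrative assumptions, not Unsloth's actual code.

```python
import torch

# Hypothetical mini-batches: per-token cross-entropy losses with
# different numbers of (non-padding) tokens in each mini-batch.
per_token_losses = [torch.randn(7).abs(), torch.randn(3).abs(),
                    torch.randn(9).abs(), torch.randn(5).abs()]

# Full-batch reference: mean over ALL tokens at once.
full_batch_loss = torch.cat(per_token_losses).mean()

# Naive accumulation (the bug): average each mini-batch's mean loss.
# Every mini-batch gets equal weight regardless of token count, so
# this diverges from the full-batch loss when sequence lengths vary.
naive = torch.stack([l.mean() for l in per_token_losses]).mean()

# Corrected accumulation: divide the summed loss by the total token
# count across all accumulated mini-batches, matching full batch training.
total_tokens = sum(l.numel() for l in per_token_losses)
fixed = torch.stack([l.sum() for l in per_token_losses]).sum() / total_tokens

print(f"full batch:            {full_batch_loss:.4f}")
print(f"naive accumulation:    {naive:.4f}")   # biased
print(f"corrected accumulation: {fixed:.4f}")  # equals full batch
```

When every mini-batch contains the same number of tokens, the two normalizations coincide, which is why the bug only surfaces when sequence lengths vary across accumulated mini-batches.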