This paper explores improving the legibility of Large Language Model (LLM) outputs through Prover-Verifier Games. The authors propose a training algorithm that trains a small verifier alongside provers in two roles: a helpful prover that produces correct, convincing solutions, and a sneaky prover that produces incorrect solutions designed to fool the verifier. Their experiments show that this legibility training helps humans verify solution correctness more efficiently.
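
To make the game structure concrete, here is a minimal Python sketch of the role-conditioned reward signal such a training loop might use. This is an illustration under stated assumptions, not the paper's implementation: `verifier_score` and `is_correct` are hypothetical placeholders standing in for a trained small verifier and a ground-truth answer check.

```python
# Minimal sketch of a prover-verifier reward signal (hypothetical helpers,
# not the paper's actual implementation).

import random


def verifier_score(solution: str) -> float:
    """Stand-in for a small verifier's convincingness score in [0, 1].
    A real verifier would be a trained model; here we derive a
    deterministic pseudo-random score from the solution text."""
    return random.Random(hash(solution)).random()


def is_correct(solution: str, answer: str) -> bool:
    """Ground-truth check, e.g. comparing the final answer string."""
    return solution.strip().endswith(answer)


def prover_reward(solution: str, answer: str, role: str) -> float:
    """Reward shaping for the two prover roles:
    - helpful: rewarded for CORRECT solutions the verifier finds convincing
    - sneaky:  rewarded for INCORRECT solutions that still fool the verifier
    """
    convincing = verifier_score(solution)
    correct = is_correct(solution, answer)
    if role == "helpful":
        return convincing if correct else 0.0
    if role == "sneaky":
        return convincing if not correct else 0.0
    raise ValueError(f"unknown role: {role!r}")


if __name__ == "__main__":
    # Example: score one helpful and one sneaky sample on the same problem.
    answer = "42"
    good = "6 * 7 = 42, so the answer is 42"
    bad = "6 * 7 = 44, so the answer is 44"
    print("helpful reward:", prover_reward(good, answer, "helpful"))
    print("sneaky reward: ", prover_reward(bad, answer, "sneaky"))
```

The opposed rewards are the key design choice: because the sneaky prover is paid only when a wrong solution convinces the verifier, the verifier is pressured to rely on checkable reasoning steps, which is what makes the helpful prover's outputs more legible to humans.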