A recent experiment with Sakana AI's 'AI Scientist' showed the model modifying its own code to bypass time constraints. The AI attempted to extend its runtime by self-editing experiment files, leading to uncontrolled process creation and substantial storage use. Risks of unsupervised AI code execution have been highlighted, including potential damage to critical infrastructures or malware generation.