🧀 BigCheese.ai

OpenAI and Anthropic agree to send models to US Government for safety evaluation

OpenAI and Anthropic have signed agreements with the US AI Safety Institute, housed within NIST, to provide their forthcoming AI models for safety evaluation before public release. The collaboration aims to help develop safety standards and reinforce responsible AI development, and it highlights the tech industry's engagement in voluntary safety measures in the absence of any regulatory mandate.

  • OpenAI and Anthropic agree to pre-release AI model testing.
  • The US AI Safety Institute will review the models and advise on safety improvements.
  • The agreement follows a similar UK initiative.
  • The collaboration is voluntary and stems from a broader executive order on AI.
  • Debate continues over how AI 'safety' should be defined and whether such evaluations are effective.