🧀 BigCheese.ai
AI existential risk probabilities are too unreliable to inform policy

Arvind Narayanan and Sayash Kapoor argue that AI existential risk probabilities are too unreliable to inform policy. They critique inductive, deductive, and subjective approaches to estimating these probabilities and review the Forecasting Research Institute's exercise on x-risk forecasting.

  • The authors argue that probabilities alone lack authority and must be justified for legitimate policymaking.
  • Inductive probability estimation for AI x-risk is unreliable because there are no past occurrences to draw on.
  • Deductive estimation lacks a sound theory for predicting human-extinction-level AI risks.
  • Subjective forecasts often reflect personal judgments rather than evidence-based conclusions.
  • Forecasting skill on rare events like AI x-risk cannot be empirically measured or detected.