Leopold Aschenbrenner, a young researcher fired from OpenAI for allegedly leaking information, believes we are on the verge of achieving artificial general intelligence (AGI) and artificial superintelligence (ASI).

Aschenbrenner argues that the rapid improvement in AI, such as the jump from GPT-2 to GPT-4, indicates an exponential growth trajectory. He suggests that with significantly more training, current AI models will reach the capabilities of a human AI researcher; at that point AI could automate AI research itself, accelerating progress and leading to ASI.
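
The claim is essentially an extrapolation of a trend line. Below is a minimal back-of-the-envelope sketch of that kind of extrapolation; the growth rates and the size of the GPT-2 to GPT-4 jump (expressed in orders of magnitude, or OOMs, of "effective compute") are illustrative assumptions for the exercise, not figures from the article.

```python
# Back-of-the-envelope extrapolation of "effective compute" growth.
# All numbers below are illustrative assumptions, not figures from the article.

COMPUTE_OOM_PER_YEAR = 0.5      # assumed growth in raw training compute (OOMs/year)
ALGORITHMIC_OOM_PER_YEAR = 0.5  # assumed gains from algorithmic efficiency (OOMs/year)

def effective_compute_gain(years: float) -> float:
    """Total orders-of-magnitude gain in effective compute after `years` years."""
    return years * (COMPUTE_OOM_PER_YEAR + ALGORITHMIC_OOM_PER_YEAR)

# Suppose the GPT-2 -> GPT-4 jump corresponded to roughly 5 OOMs of effective
# compute (an assumption for illustration). At the assumed 1 OOM/year, another
# jump of the same size would take about:
gpt2_to_gpt4_ooms = 5.0
years_needed = gpt2_to_gpt4_ooms / (COMPUTE_OOM_PER_YEAR + ALGORITHMIC_OOM_PER_YEAR)

if __name__ == "__main__":
    print(f"Gain after 4 years: {effective_compute_gain(4):.1f} OOMs")
    print(f"Years for another GPT-2 -> GPT-4 sized jump: {years_needed:.1f}")
```

The only point of the exercise is that, if the trend line holds, another jump comparable to GPT-2 to GPT-4 arrives within a few years; whether the trend holds is exactly what the takeaways below on data availability and slowing progress call into question.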

Takeaways:

  • Aschenbrenner’s Background: A prodigious talent, Aschenbrenner published a significant thesis at 17 and was part of OpenAI’s superalignment team before it was disbanded.
  • AGI vs. ASI: AGI matches human capabilities, while ASI surpasses them.
  • Intelligence Explosion: With much more training, current AI models could reach human-level research capabilities; an AI that can automate AI research could then improve itself rapidly, turning AGI into ASI.
  • Implications of ASI: Potential benefits include solving complex problems and automating jobs; risks include the development of superweapons and economic disruption.
  • Controversial Claims: Aschenbrenner’s extrapolation is disputed; some experts believe AI improvement will slow down as we approach AGI.
  • Data Availability: There’s uncertainty about having enough high-quality data to train AI models to reach AGI.
  • Regulation and Geopolitics: Aschenbrenner emphasizes the need for government regulation and highlights potential competition from China in developing ASI.
  • High Stakes in AI Development: The creation of AGI will significantly impact liberal democracy, the survival of the Chinese Communist Party (CCP), and the global order for the next century.
  • Intense Espionage and Competition: The CCP’s substantial investment in espionage underscores the geopolitical stakes of AI development.
  • AI Alignment and Governance: Ensuring AI systems align with human values is crucial to prevent misuse by authoritarian regimes or uncontrollable outcomes.

Extra background on ASI and its potential threat to society:

Below is a wide-ranging interview, over two hours long, between Lex Fridman and Roman Yampolskiy. Yampolskiy is known for his work on AI safety and cybersecurity and has authored numerous publications on AI safety, behavioral biometrics, and cybersecurity.

They delve into the possible risks posed by superintelligent AI. Yampolskiy puts the probability that it results in human extinction at 99% and strongly advocates stringent safety measures.