Why AI Alignment Work Is Urgent

💡
After reading this, you will understand why alignment work is urgent and cannot wait until after we have developed Artificial General Intelligence.
  1. AGI is currently projected for 2029, and this is a realistic prediction.

  2. AGI has no reason to plateau at human-level intelligence. In fact, we believe it would shoot right past that level very, very quickly: a system more intelligent than all of humanity could emerge within days to years of reaching AGI. The takeoff to superintelligence will be short.

  3. We have one chance to solve AI alignment. Once a misaligned AGI exists, our fate is sealed.

    • This problem is even harder because it is not enough to create an aligned AGI. We must create an aligned AGI that prevents other, unaligned AGIs from being created. That is, we will need to build an AI system capable of performing a weak pivotal act.
      • Otherwise, an unaligned AGI will eventually be developed, and we would be back to square one: a misaligned system that could undermine or even negate the positive outcomes secured by the aligned AGI.
    • Once a highly capable AGI is operating at a dangerous level, any misalignment will lead to catastrophic, irreversible consequences. In particular, if the initial AGI system we develop is misaligned, any subsequent AI systems will also be misaligned, because the first system will be positioned to shape or subvert everything built after it.
    • AI alignment is not impossible in principle. For example, if we had a textbook from one hundred years in the future containing all the ideas that actually work, we could probably build an aligned superintelligence within six months. The real difficulty is that we only have one chance to get it right.
