Introduction
Here's what we believe:
We believe AGI will be the most important technology humanity ever creates (and likely the last one too), so it is critical that its development proceeds safely and correctly.
What's this all about?
Misaligned AI ≠ Terminator movies. Superintelligence doesn't need to be 'evil' to pose a threat. By default, intelligent agents, humans included, tend to preserve themselves, acquire resources, and improve their own capabilities, a pattern researchers call instrumental convergence. An AI smarter than us could therefore endanger humanity without ever intending to.
Think of the Manhattan Project: before anyone had to worry about misuse, physicists confronted the very real possibility that the first detonation might ignite the atmosphere, quite literally setting the world on fire.
We're on the path to building something far harder to contain than nuclear weapons: superintelligent AI. Alignment is the work of ensuring such systems pursue the goals we actually intend, so that we don't accidentally end the world.