I wrote this and then realized that the framing basically copies Vitalik Buterin’s.
If AI progresses as far as it seems it might, it will likely reshape much of how the world works [expand]. So the post-AI world will look very different.
It matters that we build AI tools that can help us navigate this transition wisely. And we may need to build them soon if the transition happens very quickly.
We want to differentially build AI systems that A) have a high benefit-to-cost ratio and B) help us handle the challenges of the AI transition.
This means building systems with the following features sooner:
- Truthfulness
- Cooperativeness
- Facilitating coordination and robust distribution of power
- Intent-alignment
- Systems focused on analysis, control and verification of other AI systems
- [possibly] Privacy-sensitive policing tools
- Philosophical/wise/meaning-making
- Etc.
[Many of the above will be AI systems, but some might not be - e.g. some of the privacy-sensitive policing might rely on cryptography, and coordination tools could likewise be more cryptographic than AI-based.]
And systems with the following features later:
- Agency
- Executive capacity
- Science/engineering ability (maybe particularly biosciences)
- [others]
Note that this is also a more positive framing than “doom”/“AI is going to kill us all”. E.g. I can imagine it being a framing that Patrick Collison gets behind.
Things I could expand here:
- What the transition might look like
- Why those things are good
- What specific/concrete versions of them seem best
- Mechanisms for getting the specific things to happen
Navigating the transition to a post-AGI world
This is a big area that a lot of effort should be going into.
This framing captures what’s needed more accurately than “AI alignment” does.