If you want the long-term future to go well by the lights of a certain value function, you might be tempted to try to align AGI(s) to your own values (broadly construed, including your deliberative values and intellectual temperaments).[1]
Suppose that you're not going to do that, for one of three reasons:
- You can't. People more powerful than you are going to build AGIs and you don't have a say over that.
- You object to aligning AGI(s) to your own values for principled reasons. It would be highly uncooperative, undemocratic, coercive, and basically cartoon-supervillain evil.
- You recognize that this behaviour, if pursued by many people, would lead to a race to the bottom in which everyone fights to build AGI aligned to their own values as fast as possible and destroys a great deal of value in the process, so you want to firmly reject this kind of norm.
[...]
---
First published: August 20th, 2025
Source: https://forum.effectivealtruism.org/posts/TQFHWm4vq6aaEipHm/deep-democracy-as-a-promising-target-for-positive-ai-futures
---
Narrated by TYPE III AUDIO.
---