“Permanent Disempowerment is the Baseline” by Vladimir_Nesov

Author: LessWrong ([email protected])
Published: Mon 04 Aug 2025
Episode Link: https://www.lesswrong.com/posts/AkMdkz2i3wFtpAN6R/permanent-disempowerment-is-the-baseline

Permanent disempowerment, without restrictions on the quality of life achievable with relatively meager resources (and without extinction), seems a likely outcome for the future of humanity if the current trajectory of frontier AI development continues and leads to AGI[1] shortly. This might happen as a result of AIs at least slightly endorsing humanity's welfare, in a context where the costs for AIs are about matter or compute rather than technological advancement and quality of infrastructure.


The remaining risks (initial catastrophic harm or total extinction) and opportunities (capturing more than a tiny fraction of the cosmic endowment for the future of humanity) concern what happens in the transitional period when AIs don't yet have an overwhelming advantage, a period that might last longer than is usually expected.


Animal Extinction and Suffering


In recent times, with preservation becoming a salient concern, species facing pressure [...]

---

Outline:

(00:56) Animal Extinction and Suffering

(02:50) AGI-Driven Pause on Superintelligence Development

(04:59) Tradeoffs in a Superintelligent World

(07:06) Giving Away the Cosmic Endowment

The original text contained 5 footnotes which were omitted from this narration.

---


First published: August 4th, 2025

Source: https://www.lesswrong.com/posts/AkMdkz2i3wFtpAN6R/permanent-disempowerment-is-the-baseline


---


Narrated by TYPE III AUDIO.
