
“The stakes of AI moral status” by Joe Carlsmith

Author: LessWrong ([email protected])
Published: Wed, 21 May 2025
Episode Link: https://www.lesswrong.com/posts/tr6hxia3T8kYqrKm5/the-stakes-of-ai-moral-status

Podcast version (read by the author) here, or search for "Joe Carlsmith Audio" on your podcast app.

1. Introduction

Currently, most people treat AIs like tools. We act like AIs don’t matter in themselves. We use them however we please.

For certain sorts of beings, though, we shouldn’t act like this. Call such beings “moral patients.” Humans are the paradigm example. But many of us accept that some non-human animals are probably moral patients as well. You shouldn’t kick a stray dog just for fun.[1]

Can AIs be moral patients? If so, what sorts of AIs? Will some near-term AIs be moral patients? Are some AIs moral patients now?

If so, it matters a lot. We’re on track to build and run huge numbers of AIs. Indeed: if hardware and deployment scale fast in a world transformed by AI, AIs could quickly account for most of our civilization's cognition.[2] [...]

---

Outline:

(00:17) 1. Introduction

(02:06) 2. Pain

(05:17) 2.1 That

(06:35) 3. Soul-seeing

(08:56) 4. The flesh fair

(12:30) 5. Historical wrongs

(16:12) 6. A few numbers

(20:33) 7. Over-attribution

(22:55) 8. Good manners

(25:26) 9. Is moral patienthood the crux?

(28:03) 10. The measure of a man

(32:50) 11. Next up: consciousness

The original text contained 21 footnotes which were omitted from this narration.

---


First published: May 21st, 2025

Source: https://www.lesswrong.com/posts/tr6hxia3T8kYqrKm5/the-stakes-of-ai-moral-status
---


Narrated by TYPE III AUDIO.


---

Images from the article:

Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
