EachPod

LessWrong (30+ Karma)

Audio narrations of LessWrong posts.

Technology, Philosophy, Society & Culture
Update frequency: every day
Average duration: 18 minutes
Episodes: 577
Years Active: 2025
“Before LLM Psychosis, There Was Yes-Man Psychosis” by johnswentworth

A studio executive has no beliefs

That's the way of a studio system

We've bowed to every rear of all the studio chiefs

And you can bet your ass we've kissed 'em

Even the birds in the Hollywood hil…

00:05:27  |   Mon 25 Aug 2025
“Arguments About AI Consciousness Seem Highly Motivated And At Best Overconfident” by Zvi

I happily admit I am deeply confused about consciousness.

I don’t feel confident I understand what it is, what causes it, which entities have it, what future entities might have it, to what extent …

00:47:45  |   Mon 25 Aug 2025
“The Best Materials To Build Any Intuition” by Algon

Many textbooks, tutorials or ... tapes leave out the ways people actually think about a subject, and leave you to fumble your way to your own picture. They don't even attempt to help you build intui…

00:05:31  |   Mon 25 Aug 2025
“Kids and Cleaning” by jefftk

Before having kids I thought teaching them to clean up would be similar to the rest of parenting: once they're physically able to do it you start practicing with them, and after a while they're …

00:05:20  |   Mon 25 Aug 2025
“Futility Illusions” by silentbob

…or the “it doesn’t make a difference anyway” fallacy.

Improving Productivity is Futile

I once had a coaching call on some generic productivity topic along the lines of “I’m not getting done as m…

00:08:36  |   Sun 24 Aug 2025
“Notes on cooperating with unaligned AI” by Lukas Finnveden

These are some research notes on whether we could reduce AI takeover risk by cooperating with unaligned AIs. I think the best and most readable public writing on this topic is “Making deals with ear…

01:01:04  |   Sun 24 Aug 2025
“Shorter Tokens Are More Likely” by Brendan Long

I was thinking about LLM tokenization (as one does) and had a thought: We select the next output token for an LLM based on its likelihood, but shorter tokens are more likely.

Why? Shorter common tok…

00:09:16  |   Sun 24 Aug 2025
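Relating to the “Shorter Tokens Are More Likely” entry above, here is a minimal toy sketch (not from the episode, and not necessarily the post's exact argument) of one sense in which shorter tokens carry more probability mass: over any distribution on upcoming text, a shorter prefix covers at least as many continuations as any longer prefix that extends it, so its total probability can only be equal or larger. The strings and numbers below are made up purely for illustration.

```python
# Toy illustration with a hypothetical next-text distribution.
# The mass covered by a prefix can only shrink as the prefix gets longer,
# so a short token that is a prefix of a longer one covers at least as much.

text_probs = {
    "apple pie": 0.30,
    "apples":    0.20,
    "apply":     0.15,
    "banana":    0.35,
}

def prefix_mass(prefix: str) -> float:
    """Total probability of all texts that start with `prefix`."""
    return sum(p for text, p in text_probs.items() if text.startswith(prefix))

for token in ["a", "app", "apple"]:
    print(f"P(text starts with {token!r}) = {prefix_mass(token):.2f}")
# Prints 0.65, 0.65, 0.50 -- never increasing with prefix length.
```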
“DeepSeek v3.1 Is Not Having a Moment” by Zvi

What if DeepSeek released a model claiming 66 on SWE and almost no one tried using it? Would it be any good? Would you be able to tell? Or would we get the shortest post of the year?

Why We Haven’…

00:08:06  |   Sun 24 Aug 2025
“Yudkowsky on ‘Don’t use p(doom)’” by Raemon

For a while, I kinda assumed Eliezer had basically coined the concept of p(Doom). Then I was surprised one day to hear him complaining about it being an antipattern he specifically thought was unhelp…

00:08:50  |   Sat 23 Aug 2025
“Banning Said Achmiz (and broader thoughts on moderation)” by habryka

It's been roughly 7 years since the LessWrong user-base voted on whether it's time to close down shop and become an archive, or to move towards the LessWrong 2.0 platform, with me as head-admin. For…

00:51:48  |   Fri 22 Aug 2025
“(∃ Stochastic Natural Latent) Implies (∃ Deterministic Natural Latent)” by johnswentworth, David Lorell

Audio note: this article contains 212 uses of latex notation, so the narration may be difficult to follow. There's a link to the original text in the episode description.

Our posts on natural late…

00:19:50  |   Fri 22 Aug 2025
“Mustafa Suleyman’s ‘Seemingly Conscious AI’ Post” by Stephen Martin

Recently, the CEO of Microsoft AI posted an article on his blog called "Seemingly Conscious AI is Coming".

Suleyman's post involves both "is" and "ought" claims, first describing the reality of the …

00:16:24  |   Fri 22 Aug 2025
“An Introduction to Credal Sets and Infra-Bayes Learnability” by Brittany Gelb

Audio note: this article contains 247 uses of latex notation, so the narration may be difficult to follow. There's a link to the original text in the episode description.

Introduction

Credal sets,…

00:26:02  |   Fri 22 Aug 2025
“When Money Becomes Power” by Gabriel Alfour

I know many people in the libertarian right quadrant of the political compass: progress studies people, economists, techno-optimists, anarcho-capitalists, proper libertarians, etc.

They usually igno…

00:14:44  |   Fri 22 Aug 2025
“Could one country outgrow the rest of the world?” by Tom Davidson

This post is speculative and tentative. I’m exploring new ideas and giving my best guess; the conclusions are lightly held.

Summary

Bostrom (2014) says that an actor has a “decisive strategic advan…

00:29:37  |   Fri 22 Aug 2025
“Resampling Conserves Redundancy (Approximately)” by johnswentworth, David Lorell

Audio note: this article contains 182 uses of latex notation, so the narration may be difficult to follow. There's a link to the original text in the episode description.

Suppose random variables …

00:12:56  |   Fri 22 Aug 2025
“Being honest with AIs” by Lukas Finnveden

In the future, we might accidentally create AIs with ambitious goals that are misaligned with ours. But just because we don’t have the same goals doesn’t mean we need to be in conflict. We could als…

00:36:24  |   Thu 21 Aug 2025
“Four ways Econ makes people dumber re: future AI” by Steven Byrnes

(Cross-posted from X, intended for a general audience.)

There's a funny thing where economics education paradoxically makes people DUMBER at thinking about future AI. Econ textbooks teach concepts &…

00:14:02  |   Thu 21 Aug 2025
“French Non-Profit Law: Associations are as cool as American churches” by Lucie Philippon

In the latest Bayesian Conspiracy episode (How and Why to Form a Church, with Andrew Wilson), Andrew describes all the great perks churches get compared to normal 501c3. Coming from France myself, i…

00:05:09  |   Thu 21 Aug 2025
“come work on dangerous capability mitigations at Anthropic” by Dave Orr

Hi everyone,

TL;DR the Safeguards team at Anthropic is hiring ML experts to focus on dangerous capability mitigations - this is a very high-impact ML Engineer / Research Scientist opportunity related…

00:02:53  |   Wed 20 Aug 2025