EachPod

LessWrong (30+ Karma) - Podcast

Audio narrations of LessWrong posts.

Categories: Philosophy, Society & Culture, Technology

Update frequency: every day
Average duration: 18 minutes
Episodes: 583
Years Active: 2025
“Monthly Roundup #32: July 2025” by Zvi

Welcome to the monthly roundup of things that don’t fit into other categories and don’t rise to the level of their own posts.

Bad News

When people tell you who they are, believe them (with obv…

01:14:54  |   Mon 21 Jul 2025
“If Anyone Builds It, Everyone Dies: Call for Translators (for Supplementary Materials)” by yams

If Anyone Builds It, Everyone Dies is a book Eliezer and Nate have coming out this September. In our other posts talking about the book, some kind souls have volunteered their services as translator…

00:02:43  |   Mon 21 Jul 2025
“Detecting High-Stakes Interactions with Activation Probes” by Arrrlex, williambankes, Urja Pawar, Phil Bland, David Scott Krueger (formerly: capybaralet), Dmitrii Krasheninnikov

This research was completed for LASR Labs 2025 by Alex McKenzie, Urja Pawar, Phil Blandfort and William Bankes. The team was supervised by Dmitrii Krasheninnikov, with additional guidance from Ekdee…

00:11:11  |   Mon 21 Jul 2025
“HRT in Menopause: A candidate for a case study of epistemology in epidemiology, statistics & medicine” by foodforthought

I recently came across a 2024 update on a 2018 book making the still-controversial case that hormone replacement therapy (HRT) after menopause is highly beneficial and that rumors of its risks are u…

00:07:17  |   Mon 21 Jul 2025
[Linkpost] “GDM also claims IMO gold medal” by Yair Halberstadt

This is a link post.

Google DeepMind announces that they've also achieved a gold medal in the IMO.

They've exactly matched OpenAI, getting perfect scores for the first 5 questions and flunking the 6…

00:00:57  |   Mon 21 Jul 2025
“[Fiction] Our Trial” by Nina Panickssery

As I was making my morning coffee, the words SIMULATION 1099 flashed across my vision. I immediately felt exactly as I'd always imagined I would in philosophical thought experiments. There was a lot…

00:06:48  |   Mon 21 Jul 2025
“LLMs Can’t See Pixels or Characters” by Brendan Long

This might be beating a dead horse, but there are several "mysterious" problems LLMs are bad at that all seem to have the same cause. I wanted an article I could reference when this comes up, so I w…

00:08:37  |   Mon 21 Jul 2025
“Plato’s Trolley” by dr_s

Epistemic status: I'm sure most of my arguments have already been made and argued to death by philosophers that I'm not knowledgeable enough to name. I'm only writing this to outline what I genuinel…

00:12:19  |   Mon 21 Jul 2025
“Your AI Safety org could get EU funding up to €9.08M. Here’s how (+ free personalized support)” by SamuelK

Thanks to @Manuel Allgaier of AI Safety Berlin for his suggestion to write this post and his helpful feedback. And thanks to LW/AI Alignment Moderator Oliver for looking over the post.

LessWrong an…

00:06:05  |   Sun 20 Jul 2025
“Shallow Water is Dangerous Too” by jefftk

Content warning: risk to children

Julia and I know drowning is the biggest risk to US children under 5, and we try to take this seriously. But yesterday our 4yo came very close to drowning…

00:03:26  |   Sun 20 Jul 2025
“Make More Grayspaces” by Duncan Sabien (Inactive)

Author's note: These days, my thoughts go onto my substack by default, instead of onto LessWrong. Everything I write becomes free after a week or so, but it's only paid subscriptions that make it po…

00:23:26  |   Sun 20 Jul 2025
[Linkpost] “AI Gets IMO Gold Medal: via general-purpose RL, not via narrow, task specific methodology” by Mikhail Samin

This is a link post.

I’m excited to share that our latest @OpenAI experimental reasoning LLM has achieved a longstanding grand challenge in AI: gold medal-level performance on the world's most presti…

00:03:43  |   Sat 19 Jul 2025
“A night-watchman ASI as a first step toward a great future” by Eric Neyman

I took a week off from my day job of aligning AI to visit Forethought and think about the question: if we can align AI, what should we do with it? This post summarizes the state of my thinking at th…

00:20:47  |   Sat 19 Jul 2025
“Love stays loved (formerly ‘Skin’)” by Swimmer963 (Miranda Dixon-Luinenburg)

This is a short story I wrote in mid-2022. Genre: cosmic horror as a metaphor for living with a high p-doom.

One

The last time I saw my mom, we met in a coffee shop, like strangers on a first dat…

00:51:28  |   Fri 18 Jul 2025
“Why it’s hard to make settings for high-stakes control research” by Buck

One of our main activities at Redwood is writing follow-ups to previous papers on control like the original and Ctrl-Z, where we construct a setting with a bunch of tasks (e.g. APPS problems) and a …

00:06:58  |   Fri 18 Jul 2025
“On METR’s AI Coding RCT” by Zvi

METR ran a proper RCT experiment seeing how much access to Cursor (using Sonnet 3.7) would accelerate coders working on their own open source repos.

Everyone surveyed expected a substantial speedup…

00:21:07  |   Fri 18 Jul 2025
“Trying the Obvious Thing” by PranavG, Gabriel Alfour

I am quite harsh in impersonal settings, such as on the Internet or at work. Attention is a scarce resource, and I am stingy with it. The world is Out To Get Us. In many ways, nice or not.

Social Me…

00:06:37  |   Fri 18 Jul 2025
“Video and transcript of talk on ‘Can goodness compete?’” by Joe Carlsmith

(This is the video and transcript of a public talk I gave at Mox in San Francisco in July 2025, on long-term equilibria post-AGI. It's a longer version of the talk I gave at this workshop. The slide…

01:06:59  |   Thu 17 Jul 2025
“On being sort of back and sort of new here” by Loki zen

So I'm "back" on Less Wrong, which is to say that I was surprised to find that I already had an account and had even, apparently, commented on some things. 11 years ago. Which feels like half a life…

00:04:35  |   Thu 17 Jul 2025
“Comment on ‘Four Layers of Intellectual Conversation’” by Zack_M_Davis

One of the most underrated essays in the post-Sequences era of Eliezer Yudkowsky's corpus is "Four Layers of Intellectual Conversation". The degree to which this piece of wisdom has fallen into trag…

00:10:07  |   Thu 17 Jul 2025