EachPod

LessWrong (30+ Karma) - Podcast

Audio narrations of LessWrong posts.

Philosophy · Society & Culture · Technology
Update frequency: every day
Average duration: 18 minutes
Episodes: 583
Years Active: 2025
“against that one rationalist mashal about japanese fifth-columnists” by Fraser

The following is a nitpick on an 18-year-old blog post.

This fable is retold a lot. Its progenitor as a rationalist mashal is probably Yudkowsky's classic sequence article. To adversarially su…

00:05:36  |   Sun 13 Jul 2025
“Surprises and learnings from almost two months of Leo Panickssery” by Nina Panickssery

Leo was born at 5am on the 20th of May, at home (this was an accident, but the experience has made me extremely homebirth-pilled). Before that, I was on the minimally-neurotic side when it came to expec…

00:11:56  |   Sun 13 Jul 2025
“Vitalik’s Response to AI 2027” by Daniel Kokotajlo

Daniel notes: This is a linkpost for Vitalik's post. I've copied the text below so that I can mark it up with comments.

...

Special thanks to Balvi volunteers for feedback and review

In April this…

00:24:29  |   Sat 12 Jul 2025
“the jackpot age” by thiccythot

This essay is about the shift in risk-taking toward the worship of jackpots and its broader societal implications. Imagine you are presented with this coin flip game.

How many times do you flip it?

At…

00:12:40  |   Sat 12 Jul 2025
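The excerpt cuts off before the rules of the game are stated, so the numbers below are assumptions rather than the essay's own: a minimal Python sketch of the kind of multiplicative coin-flip bet the essay gestures at, where a flip with positive expected value still ruins the typical player.

```python
import random

# Hypothetical multiplicative coin-flip game (illustrative parameters only,
# not taken from the essay): heads multiplies your wealth by 1.5, tails by 0.6.
# Expected value per flip:  0.5 * 1.5 + 0.5 * 0.6 = 1.05          (positive)
# Expected log-growth:      0.5*ln(1.5) + 0.5*ln(0.6) ≈ -0.053    (negative)

def play(flips: int, wealth: float = 1.0) -> float:
    """Flip the coin `flips` times, compounding wealth multiplicatively."""
    for _ in range(flips):
        wealth *= 1.5 if random.random() < 0.5 else 0.6
    return wealth

random.seed(0)
results = sorted(play(100) for _ in range(10_000))
print(f"mean final wealth:   {sum(results) / len(results):,.2f}")  # large: a few jackpot runs dominate
print(f"median final wealth: {results[len(results) // 2]:.6f}")    # near zero: the typical outcome
```

The mean is dragged upward by a handful of jackpot trajectories while the median player ends up near zero, which is the dynamic the essay's title points at.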
“Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity” by habryka

METR released a new paper with very interesting results on how AI affects developer productivity. I have copied their blog post here in full.

We conduct a randomized controlled trial (RCT) to unde…

00:12:24  |   Fri 11 Jul 2025
[Linkpost] “Guide to Redwood’s writing” by Julian Stastny

This is a link post.

I wrote a guide to Redwood's writing:

Section 1 is a quick guide to the key ideas in AI control, aimed at someone who wants to get up to speed as quickly as possible.

Section 2 …

00:01:19  |   Fri 11 Jul 2025
“So You Think You’ve Awoken ChatGPT” by JustisMills

Written in an attempt to fulfill @Raemon's request.

AI is fascinating stuff, and modern chatbots are nothing short of miraculous. If you've been exposed to them and have a curious mind, it's likely …

00:17:59  |   Fri 11 Jul 2025
[Linkpost] “Open Global Investment as a Governance Model for AGI” by Nick Bostrom

This is a link post.

I've seen many prescriptive contributions to AGI governance take the form of proposals for some radically new structure. Some call for a Manhattan project, others for the creatio…

00:02:14  |   Thu 10 Jul 2025
“what makes Claude 3 Opus misaligned” by janus

This is the unedited text of a post I made on X in response to a question asked by @cube_flipper: "you say opus 3 is close to aligned – what's the negative space here, what makes it misaligned?". I …

00:09:18  |   Thu 10 Jul 2025
“Lessons from the Iraq War about AI policy” by Buck

I think the 2003 invasion of Iraq has some interesting lessons for the future of AI policy.

(Epistemic status: I’ve read a bit about this, talked to AIs about it, and talked to one natsec profession…

00:07:59  |   Thu 10 Jul 2025
“Generalized Hangriness: A Standard Rationalist Stance Toward Emotions” by johnswentworth

People have an annoying tendency to hear the word “rationalism” and think “Spock”, despite direct exhortation against that exact interpretation. But I don’t know of any source directly describing a …

00:12:27  |   Thu 10 Jul 2025
“Evaluating and monitoring for AI scheming” by Vika, Scott Emmons, Erik Jenner, Mary Phuong, Lewis Ho, Rohin Shah

As AI models become more sophisticated, a key concern is the potential for “deceptive alignment” or “scheming”. This is the risk of an AI system becoming aware that its goals do not align with human…

00:11:08  |   Thu 10 Jul 2025
“White Box Control at UK AISI - Update on Sandbagging Investigations” by Joseph Bloom, Jordan Taylor, Connor Kissane, Sid Black, merizian, alexdzm, jacoba, Ben Millwood, Alan Cooney

Jordan Taylor*, Connor Kissane*, Sid Black*, Jacob Merizian*, Alex Zelenka-Marin, Jacob Arbeid, Ben Millwood, Alan Cooney, Joseph Bloom

Introduction

Joseph Bloom, Alan Cooney

This is a research updat…

00:41:18  |   Thu 10 Jul 2025
“80,000 Hours is producing AI in Context — a new YouTube channel. Our first video, about the AI 2027 scenario, is up!” by chanamessinger

About the program

Hi! We’re Chana and Aric, from the new 80,000 Hours video program.

For over a decade, 80,000 Hours has been talking about the world's most pressing problems in newsletters, articles…

00:05:42  |   Thu 10 Jul 2025
[Linkpost] “No, We’re Not Getting Meaningful Oversight of AI” by Davidmanheim

This is a link post.

One of the most common (and comfortable) assumptions in AI safety discussions—especially outside of technical alignment circles—is that oversight will save us. Whether it's a hum…

00:02:20  |   Thu 10 Jul 2025
“What’s worse, spies or schemers?” by Buck, Julian Stastny

Here are two potential problems you’ll face if you’re an AI lab deploying powerful AI:

  • Spies: Some of your employees might be colluding to do something problematic with your AI, such as trying to s…

00:10:03  |   Wed 09 Jul 2025
“No, Grok, No” by Zvi

It was the July 4 weekend. Grok on Twitter got some sort of upgrade.

Elon Musk: We have improved @Grok significantly.

You should notice a difference when you ask Grok questions.

Indeed we did not…

00:36:18  |   Wed 09 Jul 2025
“Applying right-wing frames to AGI (geo)politics” by Richard_Ngo

I've increasingly found right-wing political frameworks to be valuable for thinking about how to navigate the development of AGI. In this post I've copied over a twitter thread I wrote about three r…

00:06:25  |   Wed 09 Jul 2025
“A deep critique of AI 2027’s bad timeline models” by titotal

Thank you to Arepo and Eli Lifland for looking over this article for errors.

I am sorry that this article is so long. Every time I thought I was done with it I ran into more issues with the model, …

01:12:33  |   Wed 09 Jul 2025