EachPod

LessWrong (30+ Karma)

Audio narrations of LessWrong posts.

Categories: Philosophy, Society & Culture, Technology

Update frequency: every day
Average duration: 18 minutes
Episodes: 583
Years Active: 2025
“Melatonin Self-Experiment Results” by silentbob

Throughout the first half of 2025 I did a blinded experiment to see how low-to-moderate melatonin intake affects me. In this post I summarize my findings.

Tl;dr: In my blinded self-experiment on mel…

00:19:35  |   Wed 25 Jun 2025
[Linkpost] “New Paper: Ambiguous Online Learning” by Vanessa Kosoy

This is a link post.

Abstract: We propose a new variant of online learning that we call "ambiguous online learning". In this setting, the learner is allowed to produce multiple predicted labels. Such…

00:03:08  |   Wed 25 Jun 2025
“What does 10x-ing effective compute get you?” by ryan_greenblatt

This is more speculative and confusing than my typical posts and I also think the content of this post could be substantially improved with more effort. But it's been sitting around in my drafts for…

00:21:19  |   Wed 25 Jun 2025
“A regime-change power-vacuum conjecture about group belief” by TsviBT

Crosspost from my blog.

Regime change

Conjecture: when there is regime change, the default outcome is for a faction to take over—whichever faction is best prepared to seize power by force.

One ex…

00:06:21  |   Wed 25 Jun 2025
“Analyzing A Critique Of The AI 2027 Timeline Forecasts” by Zvi

There was what everyone agrees was a high-quality critique of the timelines component of AI 2027, by the LessWrong user and Substack writer Titotal.

It is great to have thoughtful critiques like th…

00:54:30  |   Tue 24 Jun 2025
“Why ‘training against scheming’ is hard” by Marius Hobbhahn

TLDR: I think that AI developers will and should attempt to reduce scheming by explicitly training the model to scheme less. However, I think “training against scheming” is a harder problem than oth…

00:24:03  |   Tue 24 Jun 2025
“My pitch for the AI Village” by Daniel Kokotajlo

I think the AI Village should be funded much more than it currently is; I’d wildly guess that the AI safety ecosystem should be funding it to the tune of $4M/year.[1] I have decided to donate $100k. …

00:13:28  |   Tue 24 Jun 2025
“Situational Awareness: A One-Year Retrospective” by Nathan Delisle

Nathan Delisle, University of Chicago

tl;dr: Many critiques of *Situational Awareness* have been purely qualitative; one year later we can finally check the numbers. I did my best to verify his clai…

00:32:40  |   Tue 24 Jun 2025
“Compressed Computation is (probably) not Computation in Superposition” by Jai Bhagat, Sara Molas Medina, Giorgi Giglemiani, StefanHex

Audio note: this article contains 113 uses of LaTeX notation, so the narration may be difficult to follow. There's a link to the original text in the episode description.

This research was complet…

00:22:22  |   Mon 23 Jun 2025
“‘It isn’t magic’” by Ben (Berlin)

People keep saying "AI isn't magic, it's just maths" like this is some kind of gotcha.

Turning lead into gold isn't the magic of alchemy, it's just nucleosynthesis.

Taking a living human's heart out…

00:04:26  |   Mon 23 Jun 2025
“Foom & Doom 1: ‘Brain in a box in a basement’” by Steven Byrnes

1.1 Series summary and Table of Contents

This is a two-post series on AI “foom” (this post) and “doom” (next post).

A decade or two ago, it was pretty common to discuss “foom & doom” scenarios, as ad…

00:58:47  |   Mon 23 Jun 2025
“Foom & Doom 2: Technical alignment is hard” by Steven Byrnes

2.1 Summary & Table of contents

This is the second of a two-post series on foom (previous post) and doom (this post).

The last post talked about how I expect future AI to be different from present AI…

00:56:39  |   Mon 23 Jun 2025
“Comparing risk from internally-deployed AI to insider and outsider threats from humans” by Buck

I’ve been thinking a lot recently about the relationship between AI control and traditional computer security. Here's one point that I think is important.

My understanding is that there's a big qual…

00:05:20  |   Mon 23 Jun 2025
“Clarifying ‘wisdom’: Foundational topics for aligned AIs to prioritize before irreversible decisions” by Anthony DiGiovanni

As many folks in AI safety have observed, even if well-intentioned actors succeed at intent-aligning highly capable AIs, they’ll still face some high-stakes challenges.[1] Some of these challenges ar…

00:27:04  |   Mon 23 Jun 2025
“Racial Dating Preferences and Sexual Racism” by koreindian

Note: This is a linkpost from my personal substack. This is on a culture war topic, which is not normally the focus of my blogging. Rationalist friends suggested that this post might be interesting …

01:04:25  |   Mon 23 Jun 2025
“The Sixteen Kinds of Intimacy” by Ruby

John Wentworth recently posted a discussion-inducing piece about how the willingness to be vulnerable is core to romantic relationships. This is not obvious to me as the core thing. I think romantic…

00:09:45  |   Sun 22 Jun 2025
“Consider chilling out in 2028” by Valentine

I'll explain my reasoning in a second, but I'll start with the conclusion:

I think it'd be healthy and good to pause and seriously reconsider the focus on doom if we get to 2028 and the situation fe…

00:23:46  |   Sat 21 Jun 2025
“the sillk pajamas effect” by thiccythot

00:09:44  |   Sat 21 Jun 2025
“Genomic emancipation” by TsviBT

PDF version. berkeleygenomics.org. x.com. bluesky. YouTube.

Introduction

We, the free world, as a society, should decide to invest in the development of technology for human germline genomic engin…

01:52:36  |   Sat 21 Jun 2025
“Making deals with early schemers” by Julian Stastny, Olli Järviniemi, Buck

Consider the following vignette:

It is March 2028. With their new CoCo-Q neuralese reasoning model, a frontier AI lab has managed to fully automate the process of software engineering. In AI R&D, mo…

00:28:12  |   Sat 21 Jun 2025
Disclaimer: The podcast and artwork embedded on this page are the property of LessWrong ([email protected]). This content is not affiliated with or endorsed by eachpod.com.