LessWrong (30+ Karma) - Podcast

Audio narrations of LessWrong posts.

Categories: Society & Culture, Philosophy, Technology
Update frequency: daily
Average duration: 18 minutes
Episodes: 571
Years active: 2025
“Prioritizing Work” by jefftk

I recently read a blog post that concluded with:

When I'm on my deathbed, I won't look back at my life and wish I had worked harder. I'll look back and wish I spent more time with the p…
00:01:29  |   Thu 01 May 2025
“How can we solve diffuse threats like research sabotage with AI control?” by Vivek Hebbar

Misaligned AIs might engage in research sabotage: making safety research go poorly by doing things like withholding their best ideas or putting subtle bugs in experiments. To mitigate this risk with…

00:14:57  |   Thu 01 May 2025
“GPT-4o Responds to Negative Feedback” by Zvi

Whoops. Sorry everyone. Rolling back to a previous version. Here's where we are at this point, now that GPT-4o is no longer an absurd sycophant. For now.

Table of Contents

  1. GPT-4o Is W…
00:33:57  |   Thu 01 May 2025
“Obstacles in ARC’s agenda: Finding explanations” by David Matolcsi

As an employee of the European AI Office, it's important for me to emphasize this point: The views and opinions of the author expressed herein are personal and do not necessarily reflect those of th…

00:28:29  |   Thu 01 May 2025
“Can we safely automate alignment research?” by Joe Carlsmith

(This is the fifth essay in a series that I’m calling “How do we solve the alignment problem?”. I’m hoping that the individual essays can be read fairly well on their own, but see this introduction …

01:58:13  |   Wed 30 Apr 2025
“Early Chinese Language Media Coverage of the AI 2027 Report: A Qualitative Analysis” by jeanne_, eeeee

In this blog post, we analyse how the recent AI 2027 forecast by Daniel Kokotajlo, Scott Alexander, Thomas Larsen, Eli Lifland, and Romeo Dean has been discussed across Chinese language platforms. W…

00:27:36  |   Wed 30 Apr 2025
“Interpreting the METR Time Horizons Post” by snewman

[This has been lightly edited from the original post, eliminating some introductory material that LW readers won't need. Thanks to Stefan Schubert for suggesting I repost here. TL;DR for readers alr…

00:19:48  |   Wed 30 Apr 2025
“Bandwidth Rules Everything Around Me: Oliver Habryka on OpenPhil and GoodVentures” by Elizabeth

In this episode of our podcast, Timothy Telleen-Lawton and I talk to Oliver Habryka of Lightcone Infrastructure about his thoughts on the Open Philanthropy Project, which he believes has become stif…

00:02:26  |   Tue 29 Apr 2025
“Misrepresentation as a Barrier for Interp” by johnswentworth, Steve Petersen

John: So there's this thing about interp, where most of it seems to not be handling one of the standard fundamental difficulties of representation, and we want to articulate that in a way which will…

00:13:47  |   Tue 29 Apr 2025
“How to Build a Third Place on Focusmate” by Parker Conley

Introduction

Focusmate changed my life. I started using it in mid-2023 and have been a power user ever since. Here are the high-level stats:

  • I've spent 2,456 hours on Focusmate.
  • I've met 1,447 people.
00:11:01  |   Tue 29 Apr 2025
“GPT-4o Is An Absurd Sycophant” by Zvi

GPT-4o tells you what it thinks you want to hear.

The results of this were rather ugly. You get extreme sycophancy. Absurd praise. Mystical experiences.

(Also some other interesting choices, like …

00:39:21  |   Mon 28 Apr 2025
“Proceedings of ILIAD: Lessons and Progress” by Alexander Gietelink Oldenziel

tl;dr

This post is an update on the Proceedings of ILIAD, a conference journal for AI alignment research intended to bridge the gap between the Alignment Forum and academia. Following our successful…

00:17:14  |   Mon 28 Apr 2025
“7+ tractable directions in AI control” by Julian Stastny, ryan_greenblatt

In this post, we list 7 of our favorite easy-to-start directions in AI control. (Really, projects that are at least adjacent to AI control; we include directions which aren’t as centrally AI control…

00:25:13  |   Mon 28 Apr 2025
“My Research Process: Key Mindsets - Truth-Seeking, Prioritisation, Moving Fast” by Neel Nanda

This is post 2 of a sequence on my framework for doing and thinking about research. Start here.

Before I get into what exactly to do at each stage of the research process, it's worth reflecting on t…

00:20:09  |   Mon 28 Apr 2025
“Our Reality: A Simulation Run by a Paperclip Maximizer” by James_Miller, avturchin

Our universe is probably a computer simulation created by a paperclip maximizer to map the spectrum of rival resource‑grabbers it may encounter while expanding through the cosmos. The purpose of thi…

00:10:02  |   Mon 28 Apr 2025
[Linkpost] “How people use LLMs” by Elizabeth

This is a link post.

I've gotten a lot of value out of the details of how other people use LLMs, so I'm delighted that Gavin Leech created a collection of exactly such posts (link should go to the ri…

00:01:46  |   Mon 28 Apr 2025
[Linkpost] “The case for multi-decade AI timelines” by Noosphere89

This is a link post.

So this post is an argument that multi-decade timelines are reasonable, and the key cruxes that Ege Erdil has with most AI safety people who believe in short timelines are due to…

00:02:44  |   Sun 27 Apr 2025
[Linkpost] “Untitled Draft” by RobertM

This is a link post.

Dario Amodei posted a new essay titled "The Urgency of Interpretability" a couple of days ago.

Some excerpts I think are worth highlighting:

The nature of AI training makes it possi…

00:05:15  |   Sun 27 Apr 2025
“AI Self Portraits Aren’t Accurate” by JustisMills

For a lay audience, but I've seen a surprising number of knowledgeable people fretting over depressed-seeming comics from current systems. Either they're missing something or I am.

Perhaps you’ve s…

00:10:14  |   Sun 27 Apr 2025
“How I Think About My Research Process: Explore, Understand, Distill” by Neel Nanda

This is the first post in a sequence about how I think about and break down my research process. Post 2 is coming soon.

Thanks to Oli Clive-Griffin, Paul Bogdan, Shivam Raval and especially to Jemim…

00:13:08  |   Sun 27 Apr 2025
Disclaimer: The podcast and artwork embedded on this page are the property of LessWrong ([email protected]). This content is not affiliated with or endorsed by eachpod.com.