Audio narrations of LessWrong posts.
Welcome to the monthly roundup of things that don’t fit into other categories and don’t rise to the level of their own posts.
Bad News
When people tell you who they are, believe them (with obv…
If Anyone Builds It, Everyone Dies is a book Eliezer and Nate have coming out this September. In our other posts talking about the book, some kind souls have volunteered their services as translator…
This research was completed for LASR Labs 2025 by Alex McKenzie, Urja Pawar, Phil Blandfort and William Bankes. The team was supervised by Dmitrii Krasheninnikov, with additional guidance from Ekdee…
I recently came across a 2024 update on a 2018 book making the still-controversial case that hormone replacement therapy (HRT) after menopause is highly beneficial and that rumors of its risks are u…
Google DeepMind announces that they've also achieved a gold medal in the IMO.
They've exactly matched OpenAI, getting perfect scores for the first 5 questions and flunking the 6…
As I was making my morning coffee, the words SIMULATION 1099 flashed across my vision. I immediately felt exactly as I'd always imagined I would in philosophical thought experiments. There was a lot…
This might be beating a dead horse, but there are several "mysterious" problems LLMs are bad at that all seem to have the same cause. I wanted an article I could reference when this comes up, so I w…
Epistemic status: I'm sure most of my arguments have already been made and argued to death by philosophers that I'm not knowledgeable enough to name. I'm only writing this to outline what I genuinel…
Thanks to @Manuel Allgaier of AI Safety Berlin for his suggestion to write this post and his helpful feedback. And thanks to LW/AI Alignment Moderator Oliver for looking over the post.
LessWrong an…
Content warning: risk to children
Julia and I know drowning is the biggest risk to US children under 5, and we try to take this seriously. But yesterday our 4yo came very close to drowning…
Author's note: These days, my thoughts go onto my substack by default, instead of onto LessWrong. Everything I write becomes free after a week or so, but it's only paid subscriptions that make it po…
I’m excited to share that our latest @OpenAI experimental reasoning LLM has achieved a longstanding grand challenge in AI: gold medal-level performance on the world's most presti…
I took a week off from my day job of aligning AI to visit Forethought and think about the question: if we can align AI, what should we do with it? This post summarizes the state of my thinking at th…
This is a short story I wrote in mid-2022. Genre: cosmic horror as a metaphor for living with a high p-doom.
One
The last time I saw my mom, we met in a coffee shop, like strangers on a first dat…
One of our main activities at Redwood is writing follow-ups to previous papers on control like the original and Ctrl-Z, where we construct a setting with a bunch of tasks (e.g. APPS problems) and a …
METR ran a proper RCT to measure how much access to Cursor (using Sonnet 3.7) would accelerate coders working on their own open-source repos.
Everyone surveyed expected a substantial speedup…
I am quite harsh in impersonal settings, such as on the Internet or at work. Attention is a scarce resource, and I am stingy with it. The world is Out To Get Us. In many ways, nice or not.
Social Me…
(This is the video and transcript of a public talk I gave at Mox in San Francisco in July 2025, on long-term equilibria post-AGI. It's a longer version of the talk I gave at this workshop. The slide…
So I'm "back" on Less Wrong, which is to say that I was surprised to find that I already had an account and had even, apparently, commented on some things. 11 years ago. Which feels like half a life…
One of the most underrated essays in the post-Sequences era of Eliezer Yudkowsky's corpus is "Four Layers of Intellectual Conversation". The degree to which this piece of wisdom has fallen into trag…