Audio narrations of LessWrong posts.
Crossposted from my blog.
There are two ways we miscalibrate risk.
I learned these lessons in…
Audio note: this article contains 323 uses of LaTeX notation, so the narration may be difficult to follow. There's a link to the original text in the episode description.
Summary & Motivation
This…
A year or so ago, I thought that 10x speedups in safety research would require AIs so capable that takeover risks would be very high. 2-3x gains seemed plausible, 10x seemed unlikely. I no longer th…
---
First published: June 29th, 2025
Source: https://www.lesswrong.com/posts/Xo5upyd6x53qgLZqL/conciseness-manifes…
By 'bedrock liberal principles', I mean things like: respect for individual liberties and property rights, respect for the rule of law and equal treatment under the law, and a widespread / consensus…
This is a first-person account from a psychiatry resident (me) enrolling in a clinical trial of psilocybin. Somewhere between a trip report, an overview of the pharmacology of ps…
Abstract
A key step in reverse engineering neural networks is to decompose them into simpler parts that can be studied in relative isolation.
Linear parameter decomposition — a f…
Acknowledgments: The core scheme here was suggested by Prof. Gabriel Weil.
There has been growing interest in the deal-making agenda: humans make deals with AIs (misaligned but lacking decisive stra…
Our director explains Epoch AI's mission and how we decide our priorities. In short, we work on projects to understand the trajectory of AI, share this knowledge publicly, and inform important decis…
I originally wrote this just for myself but thought I'd share it in case others find it useful. Sharing rough notes in the spirit of not letting perfect be the enemy of the good. This was written in…
When discussing AI control, we often talk about levels of AI capabilities where we think control can probably greatly lower risks and where we can probably estimate risks. However, I think it's plau…
The AI Futures Project, the team behind AI 2027, has developed a tabletop exercise (TTX) to simulate AI takeoff where we facilitate people going through an AI 2027-like scenario with different peopl…
It's hard to imagine doing web research without using LLMs. Chatbots may be the first thing you turn to for questions like: What are the companies currently working on nuclear fusion and who investe…
Hillel... would say: "If I am not for me, who will be for me? And if I am just for myself, what am I? And if not now, when?"
Pirkei Avot 1:14
That last question has usually been interpreted as a rhe…
I think more people should say what they actually believe about AI dangers, loudly and often. Even if you work in AI policy.
I’ve been beating this drum for a few years now. I have a whole spiel abo…
Summary
To quickly transform the world, it's not enough for AI to become super smart (the "intelligence explosion").
AI will also have to turbocharge the physical world (the "industrial explosion")…
1 Introduction
(Crossposted from my blog; formatting is better there).
Very large numbers of people seem to think that climate change is likely to end the world. Biden and Harris both called it an …
My post on Neutrality was kind o…
The Dead Biologist
Suppose you're a biologist working with a bacterium. It's a nasty little bug, so you want to make sure your disinfecting protocol is effective. You have a rather good model of how…