Audio narrations of LessWrong posts.
A studio executive has no beliefs
That's the way of a studio system
We've bowed to every rear of all the studio chiefs
And you can bet your ass we've kissed 'em
Even the birds in the Hollywood hil…
I happily admit I am deeply confused about consciousness.
I don’t feel confident I understand what it is, what causes it, which entities have it, what future entities might have it, to what extent …
Many textbooks, tutorials or ... tapes leave out the ways people actually think about a subject, and leave you to fumble your way to your own picture. They don't even attempt to help you build intui…
Before having kids I thought teaching them to clean up would be similar to the rest of parenting: once they're physically able to do it you start practicing with them, and after a while they're …
…or the “it doesn’t make a difference anyway” fallacy.
Improving Productivity is Futile
I once had a coaching call on some generic productivity topic along the lines of “I’m not getting done as m…
These are some research notes on whether we could reduce AI takeover risk by cooperating with unaligned AIs. I think the best and most readable public writing on this topic is “Making deals with ear…
I was thinking about LLM tokenization (as one does) and had a thought: We select the next output token for an LLM based on its likelihood, but shorter tokens are more likely.
Why? Shorter common tok…
Why We Haven’…
For a while, I kinda assumed Eliezer had basically coined the concept of p(Doom). Then I was surprised one day to hear him complaining about it being an antipattern he specifically thought was unhelp…
It's been roughly 7 years since the LessWrong user-base voted on whether it's time to close down shop and become an archive, or to move towards the LessWrong 2.0 platform, with me as head-admin. For…
Audio note: this article contains 212 uses of LaTeX notation, so the narration may be difficult to follow. There's a link to the original text in the episode description.
Our posts on natural late…
Recently, the CEO of Microsoft AI posted an article on his blog called "Seemingly Conscious AI is Coming".
Suleyman's post involves both "is" and "ought" claims, first describing the reality of the …
Audio note: this article contains 247 uses of LaTeX notation, so the narration may be difficult to follow. There's a link to the original text in the episode description.
Introduction
Credal sets,…
I know many people in the libertarian right quadrant of the political compass: progress studies people, economists, techno-optimists, anarcho-capitalists, proper libertarians, etc.
They usually igno…
This post is speculative and tentative. I’m exploring new ideas and giving my best guess; the conclusions are lightly held.
Summary
Bostrom (2014) says that an actor has a “decisive strategic advan…
Audio note: this article contains 182 uses of LaTeX notation, so the narration may be difficult to follow. There's a link to the original text in the episode description.
Suppose random variables …
In the future, we might accidentally create AIs with ambitious goals that are misaligned with ours. But just because we don’t have the same goals doesn’t mean we need to be in conflict. We could als…
(Cross-posted from X, intended for a general audience.)
There's a funny thing where economics education paradoxically makes people DUMBER at thinking about future AI. Econ textbooks teach concepts &…
In the latest Bayesian Conspiracy episode (How and Why to Form a Church, with Andrew Wilson), Andrew describes all the great perks churches get compared to a normal 501(c)(3). Coming from France myself, i…
Hi everyone,
TL;DR the Safeguards team at Anthropic is hiring ML experts to focus on dangerous capability mitigations - this is a very high impact ML Engineer / Research Scientist opportunity related…