
45 - Samuel Albanie on DeepMind's AGI Safety Approach

Author: Daniel Filan

Published: Sun 06 Jul 2025

Episode Link: https://axrpodcast.libsyn.com/45-samuel-albanie-on-deepminds-agi-safety-approach

In this episode, I chat with Samuel Albanie about the Google DeepMind paper he co-authored, "An Approach to Technical AGI Safety and Security". We cover the assumptions the approach makes, as well as the types of mitigations it outlines.

Patreon: https://www.patreon.com/axrpodcast

Ko-fi: https://ko-fi.com/axrpodcast

Transcript: https://axrp.net/episode/2025/07/06/episode-45-samuel-albanie-deepminds-agi-safety-approach.html

Topics we discuss, and timestamps:

0:00:37 DeepMind's Approach to Technical AGI Safety and Security

0:04:29 Current paradigm continuation

0:19:13 No human ceiling

0:21:22 Uncertain timelines

0:23:36 Approximate continuity and the potential for accelerating capability improvement

0:34:29 Misuse and misalignment

0:39:34 Societal readiness

0:43:58 Misuse mitigations

0:52:57 Misalignment mitigations

1:05:20 Samuel's thinking about technical AGI safety

1:14:02 Following Samuel's work

Samuel on Twitter/X: https://x.com/samuelalbanie

Research we discuss:

An Approach to Technical AGI Safety and Security: https://arxiv.org/abs/2504.01849

Levels of AGI for Operationalizing Progress on the Path to AGI: https://arxiv.org/abs/2311.02462

The Checklist: What Succeeding at AI Safety Will Involve: https://sleepinyourhat.github.io/checklist/

Measuring AI Ability to Complete Long Tasks: https://arxiv.org/abs/2503.14499

Episode art by Hamish Doodles: hamishdoodles.com
