EachPod

“How can we solve diffuse threats like research sabotage with AI control?” by Vivek Hebbar

Author
LessWrong ([email protected])
Published
Thu 01 May 2025
Episode Link
https://www.lesswrong.com/posts/Mf5Hnpi2KcqZdmFDq/how-can-we-solve-diffuse-threats-like-research-sabotage-with

Misaligned AIs might engage in research sabotage: making safety research go poorly by doing things like withholding their best ideas or putting subtle bugs in experiments. To mitigate this risk with AI control, we need very different techniques than we used in the original control paper or Ctrl-Z, because research sabotage differs from those settings in two crucial ways:

  • Firstly, the AI can’t cause a catastrophe by behaving badly on just one occasion. To derail the entire project of solving alignment, the AI probably has to take a large number of malign actions. This contrasts with some other threat models (like rogue internal deployments), where the AI might succeed by acting badly on just a few occasions (a toy calculation after this list illustrates the difference).
  • Secondly and relatedly, individual bad actions in research sabotage aren’t very incriminating. If a model puts a bug in experiment code, or doesn’t think of an obvious idea, we won’t know [...]
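
The contrast above can be made concrete with a toy calculation. The numbers below are purely illustrative assumptions (the per-action detection probability and the count of malign actions are not from the article); the point is only that a diffuse saboteur must survive many weak detection opportunities, whereas a concentrated threat needs to survive only a few strong ones.

```python
# Minimal illustrative sketch with hypothetical numbers (not from the article):
# how the chance of catching at least one malign action accumulates.

def prob_catch_at_least_one(p_detect: float, n_actions: int) -> float:
    """Chance of catching at least one malign action, assuming each of
    n_actions is detected independently with probability p_detect."""
    return 1.0 - (1.0 - p_detect) ** n_actions

# Concentrated threat: a few actions, each fairly incriminating if examined.
print(prob_catch_at_least_one(p_detect=0.3, n_actions=3))     # ~0.66

# Diffuse threat (research sabotage): many subtle actions (bugs, withheld
# ideas), each only weakly incriminating on its own.
print(prob_catch_at_least_one(p_detect=0.02, n_actions=200))  # ~0.98
```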

---

Outline:

(02:53) Threat model and basic countermeasures

(03:31) Deployer strategies: Training and incrimination

(07:51) Sabotage strategies

(08:11) Targeting sabotage at the most effective points

(08:48) Withholding good content vs. producing bad content

(10:57) Code vs. ideas

(11:33) What terms should we use for the concentrated/diffuse spectrum?

(14:08) Canary string

(14:23) Acknowledgements

---


First published:
April 30th, 2025

Source:
https://www.lesswrong.com/posts/Mf5Hnpi2KcqZdmFDq/how-can-we-solve-diffuse-threats-like-research-sabotage-with


---


Narrated by TYPE III AUDIO.


---
