
“Towards Alignment Auditing as a Numbers-Go-Up Science” by Sam Marks

Author
LessWrong ([email protected])
Published
Mon 04 Aug 2025
Episode Link
https://www.lesswrong.com/posts/bGYQgBPEyHidnZCdE/towards-alignment-auditing-as-a-numbers-go-up-science

Thanks to Rowan Wang and Buck Shlegeris for feedback on a draft.

What is the job of an alignment auditing researcher? In this post, I propose the following answer: to build tools that increase auditing agent performance, as measured on a suite of standard auditing testbeds.

A major challenge in alignment research is that we don’t know what we’re doing or how to tell if it's working.

Contrast this with other fields of ML research, like architecture design, performance optimization, or RL algorithms. When I talk to researchers in these areas, there's no confusion. They know exactly what they’re trying to do and how to tell if it's working:[1]

  • Architecture design: Build architectures that decrease perplexity when doing compute-optimal pre-training.
  • Performance optimization: Make AI systems run as efficiently as possible, as measured by throughput, latency, etc.
  • RL algorithms: Develop RL algorithms that train models to attain higher mean reward [...]

The original text contained 5 footnotes, which were omitted from this narration.

---

---


Narrated by TYPE III AUDIO.

