
“come work on dangerous capability mitigations at Anthropic” by Dave Orr

Author: LessWrong ([email protected])
Published: Wed 20 Aug 2025
Episode Link: https://www.lesswrong.com/posts/qBbnXtt9zFWXMrP4M/come-work-on-dangerous-capability-mitigations-at-anthropic

Hi everyone,

TL;DR: the Safeguards team at Anthropic is hiring ML experts to focus on dangerous capability mitigations. This is a very high-impact ML Engineer / Research Scientist opportunity aimed at preventing serious harm from advanced AI capabilities. We're hiring for multiple roles at varying levels of seniority, including both IC and management positions.

The Safeguards team focuses on present-day and near-future harms. Since we activated our ASL-3 safety standard for Claude Opus 4, in accordance with our Responsible Scaling Policy, we need to do a great job of protecting the world in case Claude can provide uplift for dangerous capabilities. Safeguards builds constitutional classifiers and probes that run over every Opus conversational turn to detect and suppress dangerous content and to defend against jailbreaks.
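To make the per-turn screening idea concrete, here is a minimal Python sketch of the control flow described above. This is not Anthropic's implementation: the classifier is a stub, and every name and threshold here is hypothetical.

```python
from dataclasses import dataclass


@dataclass
class TurnVerdict:
    flagged: bool
    score: float


# Hypothetical operating point; a real deployment tunes this threshold.
DANGER_THRESHOLD = 0.8


def classify_turn(text: str) -> TurnVerdict:
    # Placeholder for a learned constitutional classifier: a real system
    # would score `text` with a trained model; here the score is stubbed.
    score = 0.0
    return TurnVerdict(flagged=score >= DANGER_THRESHOLD, score=score)


def guarded_turn(model_reply: str) -> str:
    # Screen a single conversational turn and suppress it if flagged.
    verdict = classify_turn(model_reply)
    if verdict.flagged:
        return "[response withheld: flagged by safety classifier]"
    return model_reply


if __name__ == "__main__":
    print(guarded_turn("How do I bake bread?"))
```

In a real pipeline the classifier runs alongside the model on every turn, so the interesting engineering is in the classifier's accuracy and latency, not in this wrapper logic.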

This is a hard modeling task! The boundary between beneficial and risky content is thin and complicated. In some [...]

---




Narrated by TYPE III AUDIO.
