Two weeks ago, xAI finally published its Risk Management Framework and first model card. Unfortunately, the RMF does very little to reduce risk and suggests that xAI isn't thinking seriously about catastrophic risks. (The model card and strategy for preventing misuse are disappointing but much less important, because they're relevant mostly to a fraction of misuse risks.)
On misalignment: "Our risk acceptance criteria for system deployment is maintaining a dishonesty rate of less than 1 out of 2 on MASK. We plan to add additional thresholds tied to other benchmarks." MASK has almost nothing to do with catastrophic misalignment risk, and upfront benchmarking is not a good approach to misalignment risk.

On security: "xAI has implemented appropriate information security standards sufficient to prevent its critical model information from being stolen by a motivated non-state actor." This is not credible, xAI doesn't justify it, and xAI doesn't mention future security [...]
---
Outline:
(01:24) Misalignment
(04:33) Security
(05:17) Misc/context
(06:02) Conclusion
(07:01) Appendix: Misuse (via API) [less important]
(08:07) Mitigations
(10:41) Evals
The original text contained 12 footnotes which were omitted from this narration.
---
First published:
September 2nd, 2025
Source:
https://www.lesswrong.com/posts/hQyrTDuTXpqkxrnoH/xai-s-new-safety-framework-is-dreadful
---
Narrated by TYPE III AUDIO.
---