
“Attaching requirements to model releases has serious downsides (relative to a different deadline for these requirements)” by ryan_greenblatt

Author: LessWrong ([email protected])
Published: Wed 27 Aug 2025
Episode Link: https://www.lesswrong.com/posts/Eh7WdKTrpLch5Kvkz/attaching-requirements-to-model-releases-has-serious

Here's a relatively important question regarding transparency requirements for AI companies: at which points in time should AI companies be required to disclose information? (While I focus on transparency, this question also applies to other safety-relevant requirements, and to norms around voluntary actions rather than requirements.)


A natural option would be to attach transparency requirements to the existing processes of pre-deployment testing and releasing a model card: companies would be required to include the relevant information (likely in the model card) whenever they release a new model. This is convenient because pre-deployment testing and model cards are already established norms in the AI industry, so it is easier to attach something new to these existing processes than to create a new process.


However, I think attaching requirements to model releases (such that the requirement must happen [...]

The original text contained 5 footnotes which were omitted from this narration.

---




Narrated by TYPE III AUDIO.
