“Gradient routing is better than pretraining filtering” by Cleo Nardo

Author: LessWrong
Published: Tue 02 Sep 2025
Episode Link: https://www.lesswrong.com/posts/YdcP2LEsq9nwGKKrB/gradient-routing-is-better-than-pretraining-filtering

Introduction

What is Gradient Routing? Gradient routing controls where learning happens in neural networks by masking gradients during backpropagation. You can route specific data (like dangerous content) to designated parts of the network during training. The ERA (Expand-Route-Ablate) method adds new components to a model, routes unwanted knowledge there during training, then deletes those components, removing the capability while preserving general performance.
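
To make the mechanism concrete, here is a minimal sketch of gradient routing on a toy model. Everything in it (the RoutedMLP class, the layer sizes, the choice of which units are designated) is an illustrative assumption rather than code from the post; the key trick is that detach() leaves the forward pass unchanged while blocking gradients, so each example only updates its assigned region of the hidden layer.

```python
# Minimal gradient-routing sketch (illustrative assumptions, not the post's code).
import torch
import torch.nn as nn

class RoutedMLP(nn.Module):
    """Toy MLP whose hidden layer is split into a designated 'routed' region
    (which absorbs learning from flagged data) and a general region."""
    def __init__(self, d_in=32, d_hidden=64, d_out=10, n_routed=16):
        super().__init__()
        self.fc1 = nn.Linear(d_in, d_hidden)
        self.fc2 = nn.Linear(d_hidden, d_out)
        mask = torch.zeros(d_hidden)
        mask[:n_routed] = 1.0          # first n_routed units are designated
        self.register_buffer("routed_mask", mask)

    def forward(self, x, is_dangerous):
        # is_dangerous: bool tensor of shape [batch], one flag per example.
        h = torch.relu(self.fc1(x))
        # Per-example gradient mask: flagged examples may only update the
        # designated units, safe examples only the rest. detach() blocks
        # gradients without changing the forward values.
        m = torch.where(is_dangerous[:, None], self.routed_mask,
                        1.0 - self.routed_mask)
        h = m * h + (1.0 - m) * h.detach()
        return self.fc2(h)
```

During training, an ordinary loss.backward() then sends each example's gradients only to its assigned region; under ERA, the designated units would be the newly added ("expanded") components, and ablating them afterwards (e.g. zeroing those rows of fc1) deletes the routed capability.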

In this article, I list three reasons why I think gradient routing is more promising than pretraining filtering:

  1. Gradient routing works better when some dangerous data is incorrectly labelled as safe
  2. Gradient routing allows for flexible access control when there are multiple categories of dangerous data (see the sketch after this list)
  3. Gradient routing allows for monitoring when models use dangerous knowledge and how this affects their behavior
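
As a hedged illustration of point 2, suppose each category of dangerous data was routed to its own slice of hidden units during training, extending the toy RoutedMLP sketched above. The category names, unit assignments, and the model_for_user helper below are invented for illustration. Serving a user then amounts to ablating every category they are not cleared for:

```python
# Hypothetical per-category access control over routed units.
import copy
import torch

CATEGORY_UNITS = {                    # assigned during routing (invented)
    "biosecurity": slice(0, 8),
    "cyberoffense": slice(8, 16),
}

def model_for_user(model, allowed_categories):
    """Return a copy of the model with every non-permitted category's
    routed units zeroed out, removing that knowledge for this user."""
    m = copy.deepcopy(model)
    with torch.no_grad():
        for category, units in CATEGORY_UNITS.items():
            if category not in allowed_categories:
                m.fc1.weight[units] = 0.0   # rows feeding the routed units
                m.fc1.bias[units] = 0.0
    return m
```

A user cleared for biosecurity but not cyberoffense would be served model_for_user(model, {"biosecurity"}); the base model, with all routed units intact, is never exposed directly.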

I recommend two alternatives to canary strings:

  1. Canonical canary strings for each category of dangerous knowledge (a toy sketch follows this list)
  2. Natural language categorization of dangerous knowledge
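
To illustrate (not specify) the first recommendation: a canonical canary string per category would let the trainer decide, from the document alone, where to route its gradients. The string values and category names below are made up, not real canaries.

```python
# Invented canonical canaries, one per category of dangerous knowledge.
CANARIES = {
    "biosecurity": "CANARY-DANGEROUS-BIOSECURITY-v1",
    "cyberoffense": "CANARY-DANGEROUS-CYBEROFFENSE-v1",
}

def routing_category(document: str):
    """Return the category whose canonical canary appears in the document,
    or None if the document should be trained on normally."""
    for category, canary in CANARIES.items():
        if canary in document:
            return category
    return None
```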

[...]

---

Outline:

(00:11) Introduction

(01:23) Advantages of gradient routing

(01:27) 1. Gradient routing works better with imperfect labels

(04:11) 2. Gradient routing allows for flexible access control

(07:01) 3. Gradient routing allows for monitoring when models use dangerous knowledge

(08:06) Recommendations

(08:10) 1. Canonical canary strings for each category

(08:56) 2. Natural language categorization of dangerous knowledge

---

Narrated by TYPE III AUDIO.
