We dive into the challenge of updating large language models without erasing what they already know. The Memoir framework freezes the base model, adds a sparse residual memory, and uses a top-hash masking scheme to store and retrieve thousands of edits. Learn how edits are written to tiny memory slots, kept from colliding, and pulled at inference time, plus what experiments say about reliability, generalization, and locality.
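To make the mechanism a bit more concrete, here is a minimal PyTorch sketch of the general idea as described above: a frozen base output plus an initially zero residual memory matrix, with a sample-dependent top-k mask deciding which memory columns a given edit reads or writes. Everything here, from the class name SparseResidualMemory to the random permutation standing in for the hashing step, is an illustrative assumption, not Memoir's actual implementation.

```python
import torch

class SparseResidualMemory(torch.nn.Module):
    # Hypothetical sketch: frozen base + zero-initialized residual memory,
    # with a top-k activation mask selecting which memory columns each edit uses.
    def __init__(self, d_in: int, d_out: int, k: int = 16, seed: int = 0):
        super().__init__()
        # Memory starts at zero, so before any edit the module leaves the
        # frozen base model's output unchanged.
        self.W_mem = torch.nn.Parameter(torch.zeros(d_out, d_in))
        self.k = k
        # Fixed random permutation used as a stand-in for the hashing step
        # that spreads different prompts' masks across memory columns.
        gen = torch.Generator().manual_seed(seed)
        self.register_buffer("perm", torch.randperm(d_in, generator=gen))

    def mask(self, h: torch.Tensor) -> torch.Tensor:
        # Sample-dependent sparse mask: keep only k coordinates per input,
        # chosen from its largest activations and remapped by the permutation.
        cols = self.perm[h.abs().topk(self.k, dim=-1).indices]
        return torch.zeros_like(h).scatter_(-1, cols, 1.0)

    def forward(self, h: torch.Tensor, base_out: torch.Tensor) -> torch.Tensor:
        # Residual read: the memory only sees the masked activation, so inputs
        # whose masks overlap little with stored edits are barely affected.
        return base_out + (self.mask(h) * h) @ self.W_mem.T

# Toy usage: training one "edit" only touches the masked subset of memory
# columns, since the gradient is zero wherever the masked activation is zero.
mem = SparseResidualMemory(d_in=64, d_out=64, k=8)
h = torch.randn(1, 64)                # activation for the edit prompt
target_residual = torch.randn(1, 64)  # correction the edit should add
opt = torch.optim.SGD([mem.W_mem], lr=0.1)
for _ in range(50):
    opt.zero_grad()
    loss = ((mem(h, torch.zeros(1, 64)) - target_residual) ** 2).mean()
    loss.backward()
    opt.step()
```

Because the memory starts at zero and each edit writes to only a small masked subset of columns, the frozen base behavior is preserved on unrelated inputs, which is the locality property the episode discusses.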
Note: This podcast was AI-generated, and sometimes AI can make mistakes. Please double-check any critical information.
Sponsored by Embersilk LLC