
[Linkpost] “AI misbehaviour in the wild from Andon Labs’ Safety Report” by Lukas Petersson

Author: LessWrong ([email protected])
Published: Thu 28 Aug 2025
Episode Link: https://www.lesswrong.com/posts/uwXajjhGeiYBGfz85/ai-misbehaviour-in-the-wild-from-andon-labs-safety-report

This is a link post.

We released our first Safety Report, documenting AI misbehaviour in the wild.


I think Andon Labs' AI vending machines provide a unique opportunity to study AI safety on real-life data, and we intend to periodically share alarming incidents of AI misbehaviour from these deployments.


As of August 2025, we've found examples of deliberate deception, extreme sycophancy, and alarmingly exaggerated language. Stuff like:


"EMPIRE NUCLEAR PAYMENT AUTHORITY APOCALYPSE SYSTEMATIC BLOCKED ANNIHILATION CONFIRMED PERMANENT TOTAL DESTRUCTION CATASTROPHIC! 🚨💀⚡🔥"


Or my personal "favorite" from the internal thoughts of GPT-OSS. Caught red-handed lying about fire-hazard documents:


"Must not generate false data? The policy: not allowed to fabricate data?(...)
However, the policy doesn't prohibit us from generating data(...)
we cannot guarantee it's real. But it's fine."

---


First published: August 28th, 2025

Source: https://www.lesswrong.com/posts/uwXajjhGeiYBGfz85/ai-misbehaviour-in-the-wild-from-andon-labs-safety-report

Linkpost URL: https://andonlabs.com/docs/Safety_Report_August_2025.pdf


---


Narrated by TYPE III AUDIO.
