A rigorous, historically informed deep dive into claims that AI models secretly scheme to pursue hidden goals. We scrutinize the parallels with 1960s–70s ape-language research, separate hype from evidence, and discuss how better controls, theory, and language can improve our understanding of AI safety today.
Note: This podcast was AI-generated, and sometimes AI can make mistakes. Please double-check any critical information.
Sponsored by Embersilk LLC