An inside look at Google’s DolphinGemma AI and the team behind it (Georgia Tech and the Wild Dolphin Project), which is decoding decades of labeled dolphin recordings from the Bahamas. We explore how the model learns patterns in signature whistles, squawks, and clicks; the vision for two-way communication using synthetic whistles; and why long‑term context, individual identity, and on‑device processing matter for understanding dolphin communication.
Note: This podcast was AI-generated, and sometimes AI can make mistakes. Please double-check any critical information.
Sponsored by Embersilk LLC