Google DeepMind has announced DolphinGemma, a new AI model designed to help researchers decode dolphin communication. The tool marks a notable step toward understanding how dolphins vocalize and interact.
DolphinGemma was trained on data provided by the Wild Dolphin Project (WDP), a nonprofit organization that studies the behavior of Atlantic spotted dolphins. Built on Google’s open-source Gemma model series, DolphinGemma is capable of generating “dolphin-like” sound sequences and is lightweight enough to run directly on mobile devices, according to Google.
This summer, WDP plans to use Google's upcoming Pixel 9 smartphone to power a research platform that can generate synthetic dolphin sounds and listen for potential vocal "responses." The organization had previously relied on the Pixel 6 for this work, but the upgraded Pixel 9 will let researchers run AI models and sound-matching algorithms simultaneously, enabling more complex, real-time analysis of dolphin vocalizations.