Google DeepMind, Google’s AI research lab, has developed an AI model that can interpret dolphin vocalizations, supporting research efforts to better understand how dolphins communicate.

The model, named DolphinGemma, was trained on data from the Wild Dolphin Project (WDP), a nonprofit that studies Atlantic spotted dolphins and their behaviors.

Built on Google’s open Gemma family of models, DolphinGemma can generate “dolphin-like” sound sequences and is efficient enough to run on phones, Google says.

This summer, WDP plans to use Google’s Pixel 9 smartphone to power a platform that can create synthetic dolphin vocalizations and listen to dolphin sounds for a matching “reply.”

WDP previously used the Pixel 6 for this work; according to Google, upgrading to the Pixel 9 will let the organization’s researchers run AI models and template-matching algorithms at the same time.