Google has announced a new artificial intelligence (AI) model that will help scientists study how dolphins communicate.

Dolphin conversations may be decoded through Google's new AI model DolphinGemma

The tech giant collaborated with Georgia Tech and the field research of the Wild Dolphin Project (WDP) to build the new machine-learning tool DolphinGemma, which is "trained to learn the structure of dolphin vocalizations and generate novel dolphin-like sound sequences".

Google hopes DolphinGemma will help researchers "understand the structure and potential meaning within these natural sound sequences" used by the animals. The model will analyse sounds such as whistles, squawks and clicks recorded from dolphins in different contexts.

The AI model uses Google’s SoundStream tokeniser to represent dolphin sounds as discrete tokens, which are then processed by a model architecture suited to complex sequences.

Google says DolphinGemma can run directly on Google Pixel phones used by the WDP, and is based on the same technology that is used in the company’s AI model Gemini.

As well as DolphinGemma, the WDP and Georgia Tech have also developed the Cetacean Hearing Augmentation Telemetry (CHAT) system, which will be used to “establish a simpler, shared vocabulary” between researchers and dolphins.

The concept begins by linking unique, synthetic whistles - developed by CHAT and different from natural dolphin sounds - to specific objects the dolphins enjoy, such as sargassum, seagrass, or scarves used by the researchers.

By first demonstrating the system through human interaction, the team hopes the dolphins’ natural curiosity will lead them to imitate the whistles to request these items.

Over time, as researchers gain a better understanding of the dolphins’ natural vocalisations, those sounds can also be incorporated into the system.