Our new Perch model helps conservationists analyze audio faster to protect endangered species, from Hawaiian honeycreepers to coral reefs.
One way scientists protect the health of Earth's wild ecosystems is by using microphones (or underwater hydrophones) to collect vast amounts of audio packed with the sounds of birds, frogs, insects, whales, fish, and more. These recordings, along with other clues about the health of an ecosystem, can reveal a lot about the animals present in a particular area. However, making sense of such large volumes of data remains a massive undertaking.
Today, we're releasing an update to Perch, an AI model designed to help conservationists analyze bioacoustic data. The new model delivers state-of-the-art, off-the-shelf bird species predictions that outperform previous versions, and it adapts well to new environments, especially underwater ones such as coral reefs. It is trained on a wider range of animals, including mammals, amphibians, and anthropogenic noise, drawing on nearly twice as much data in total from public sources such as Xeno-Canto and iNaturalist. The model can untangle complex acoustic scenes spanning thousands or even millions of hours of audio. It's also versatile, helping answer many different types of questions, from "How many babies were born?" to "How many animals live in a particular area?"
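Workflows like this often boil down to searching a model's audio embeddings for clips that sound like a labeled example, so a scientist can find every occurrence of a call in a huge archive. The following is a minimal sketch of that idea using plain NumPy and hypothetical stand-in embeddings (the array shapes and the `top_matches` helper are illustrative assumptions, not the Perch API):

```python
import numpy as np

def top_matches(query, library, k=3):
    """Rank library clips by cosine similarity to a query embedding."""
    q = query / np.linalg.norm(query)
    lib = library / np.linalg.norm(library, axis=1, keepdims=True)
    scores = lib @ q                          # cosine similarity per clip
    order = np.argsort(scores)[::-1][:k]      # indices of the k best matches
    return order, scores[order]

# Hypothetical stand-ins for embeddings a model like Perch might produce.
rng = np.random.default_rng(0)
library = rng.normal(size=(1000, 128))        # 1,000 clips, 128-dim embeddings
query = library[42] + 0.01 * rng.normal(size=128)  # a near-duplicate call

idx, scores = top_matches(query, library)     # idx[0] should be clip 42
```

In practice the library embeddings would come from running the model over the recordings once, after which searches like this are cheap enough to repeat for every new example call of interest.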
To help scientists protect Earth’s ecosystems, we’re releasing this new version of Perch as an open model and making it available on Kaggle.

