IEEE Spectrum

AI Trained on Birdsong Can Recognize Whale Calls

Read the full article on IEEE Spectrum.

What Happened

Birds’ chirps, trills, and warbles echo through the air, while whales’ boings, “biotwangs,” and whistles vibrate underwater. Despite the differences in the sounds and the medium through which they travel, both birdsong and whale vocalizations can be classified by Perch 2.0, an AI audio model from Google.

Fordel's Take

Look, training an AI on whale calls is cool, but it doesn't fundamentally change how signal processing works. It just shows how sophisticated audio models like Perch 2.0 can handle complex, variable acoustic data. It proves the model's ability to abstract features from noisy, complex input, which is standard ML stuff. It's impressive acoustic pattern recognition, not a revolution in general AI.

What To Do

Treat this as a benchmark for cross-domain audio classification models.

Builder's Brief

Who

Bioacoustics and wildlife conservation ML engineers

What changes

Suggests pre-training on one acoustic domain may substitute for scarce labeled data in another, reducing annotation burden
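The transfer recipe implied here can be sketched in a few lines: freeze a pretrained audio embedder, then fit only a tiny classifier on the scarce labeled examples from the new domain. This is a minimal illustration, not Perch 2.0's actual interface; the "embedder" below is a stand-in frozen random projection, and the "whale call" data is synthetic, so only the structure of the approach is real.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen pretrained embedder (hypothetical; Perch 2.0's real
# API is not used here): a fixed projection from 128-dim "spectrogram"
# features to a compact embedding space. Its weights are never updated.
EMBED_DIM = 32
PROJECTION = rng.normal(size=(128, EMBED_DIM))

def embed(features: np.ndarray) -> np.ndarray:
    """Map raw feature vectors to frozen embeddings."""
    return np.tanh(features @ PROJECTION)

# Synthetic "whale call" classes: two cluster centers in feature space,
# standing in for two call types in a new acoustic domain.
centers = rng.normal(size=(2, 128))

def sample(label: int, n: int) -> np.ndarray:
    return centers[label] + 0.3 * rng.normal(size=(n, 128))

# The scarce-annotation regime: only 5 labeled examples per class.
train_x = np.vstack([sample(0, 5), sample(1, 5)])
train_y = np.array([0] * 5 + [1] * 5)

# Nearest-centroid probe in embedding space: the only "trained" component.
emb = embed(train_x)
centroids = np.stack([emb[train_y == c].mean(axis=0) for c in (0, 1)])

def classify(features: np.ndarray) -> int:
    e = embed(features)
    return int(np.argmin(np.linalg.norm(centroids - e, axis=1)))

# Held-out evaluation on fresh samples from the same synthetic classes.
test_x = np.vstack([sample(0, 20), sample(1, 20)])
test_y = np.array([0] * 20 + [1] * 20)
preds = np.array([classify(x) for x in test_x])
accuracy = (preds == test_y).mean()
```

The point of the sketch: because the embedder is frozen, the annotation cost scales with the tiny probe, not the big model, which is exactly the economy the finding suggests for field bioacoustics.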

When

months

Watch for

Replication study in a third species domain with field-collected noisy audio

What Skeptics Say

Cross-domain audio transfer learning demos reliably in controlled lab settings but degrades sharply in field deployment where real-world noise, equipment variance, and species diversity compound; conservation applications require sustained accuracy researchers rarely test for.
