Tell Me Who’s Singing | Outside My Window
Have you ever wished for a tool that could accurately identify a single bird’s voice among dozens of singers? You aren’t alone. Ornithologists are eager for a way to census birds using field recordings, but the sheer volume of data and the complexity of bird song make this a daunting task. A free tool that can identify songs in huge volumes of recordings doesn’t exist yet, but the Kitzes Lab at the University of Pittsburgh is creating one.
In December 2018 Assistant Professor Justin Kitzes of the Department of Biological Sciences won an AI for Earth Innovation Grant, awarded by Microsoft and National Geographic, to develop the first free open source model for identifying bird songs in acoustic field recordings. Its name is OpenSoundscape.
OpenSoundscape uses machine learning, a subset of artificial intelligence (AI), to scan recorded birdsong and make algorithmic “hunches” about each song’s identity. To do this the Kitzes Lab starts with real-life recordings.
The team places small AudioMoth recorders in an array in the forest, much the same way human observers do point counts, except that the AudioMoths are all recording at the same time.
The team brings the recorders back to the lab and downloads the sound files to a database. (Someday the software will be able to combine recordings from several GPS-located AudioMoths and triangulate a single songbird’s location!)
Here’s one recording of at least six individual birds. OpenSoundscape is learning how to identify them.
OpenSoundscape makes a spectrogram of the sound file (below), then picks out each song pattern and uses its algorithms and classifier library to identify the individual songs.
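For the curious, the spectrogram step can be sketched in a few lines of Python. This is a minimal illustration using SciPy, not OpenSoundscape’s actual code; the synthetic 4 kHz tone stands in for a bird song.

```python
# A minimal sketch of the spectrogram step, using SciPy rather than
# OpenSoundscape itself. The one-second 4 kHz tone is a stand-in
# for a recorded bird song.
import numpy as np
from scipy.signal import spectrogram

sample_rate = 22050
t = np.linspace(0, 1.0, sample_rate, endpoint=False)
audio = np.sin(2 * np.pi * 4000 * t)

# Convert the waveform into a spectrogram: frequency bins x time windows.
freqs, times, spec = spectrogram(audio, fs=sample_rate, nperseg=512)

# The loudest frequency bin should sit near the 4 kHz tone.
peak_freq = freqs[spec.mean(axis=1).argmax()]
print(f"{spec.shape[0]} frequency bins x {spec.shape[1]} time windows")
print(f"Peak energy near {peak_freq:.0f} Hz")
```

A classifier then looks for species-specific patterns in this grid of frequencies over time, much as a birder recognizes shapes in a printed sonogram.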
The more labeled songs it is trained on, the better its models become.
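Why does more training data help? A toy example makes the point. The sketch below is not OpenSoundscape’s model; it uses a simple nearest-centroid classifier in NumPy, with a single made-up number standing in for each song’s features, to show that accuracy tends to rise as the number of labeled songs per species grows.

```python
# Toy illustration: a nearest-centroid classifier trained on few vs.
# many labeled "songs" per species. Each song is one noisy number.
import numpy as np

rng = np.random.default_rng(42)

def make_songs(n, mean):
    """Draw n noisy one-number 'song features' for a species."""
    return rng.normal(mean, 1.5, n)

def train_and_score(n_train):
    """Fit class centroids on n_train songs per species; score on a test set."""
    a_train, b_train = make_songs(n_train, -1.0), make_songs(n_train, 1.0)
    centroid_a, centroid_b = a_train.mean(), b_train.mean()

    a_test, b_test = make_songs(1000, -1.0), make_songs(1000, 1.0)
    # Assign each test song to the nearer centroid and count correct calls.
    correct = (np.abs(a_test - centroid_a) < np.abs(a_test - centroid_b)).sum()
    correct += (np.abs(b_test - centroid_b) < np.abs(b_test - centroid_a)).sum()
    return correct / 2000

acc_small = train_and_score(5)    # few labeled songs per species
acc_big = train_and_score(500)    # many labeled songs per species
print(f"5 songs/species: {acc_small:.2f}, 500 songs/species: {acc_big:.2f}")
```

With only five examples per species the estimated “typical song” is unreliable; with hundreds, it settles close to the truth and identifications become more consistent.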
By the end of 2019 the OpenSoundscape models, software, and classifier library of birdsong will be ready for researchers on a laptop, cloud service or supercomputer. Ornithologists will be able to gather tons of data in the field and find out who was singing.
p.s. WESA featured this project in their Tech Report on 26 Feb 2019. Click here to listen.