Bioacoustic Classification of Antillean Manatee Vocalization Spectrograms Using Deep Convolutional Neural Networks

Fernando Merchan 1,2, Ariel Guerra 1, Héctor Poveda 1,2, Héctor M. Guzmán 3 and Javier E. Sanchez-Galan 2,4,*


Received: 2 April 2020; Accepted: 30 April 2020; Published: 8 May 2020


Abstract: We evaluated the potential of convolutional neural networks for classifying spectrograms of Antillean manatee (Trichechus manatus manatus) vocalizations. Spectrograms with binary, linear and logarithmic amplitude formats were considered. Two deep convolutional neural network (DCNN) architectures were tested: linear (fixed filter size) and pyramidal (incremental filter size). Six experiments were devised to test the accuracy obtained for each combination of spectrogram representation and architecture. Results show that binary spectrograms with either the linear or the pyramidal architecture with dropout provide classification rates of 94–99% on the training set and 92–98% on the testing set, respectively. The pyramidal network also requires shorter training and inference times. The convolutional neural network (CNN) results are substantially better, in terms of accuracy and F1 score, than those of a signal-processing approach based on a fast Fourier transform (FFT) harmonic search. Taken together, these results demonstrate the validity of using spectrogram representations with DCNNs for manatee vocalization classification. These results can be used to improve future software and hardware implementations for estimating the manatee population in Panama.


Keywords: convolutional neural network; bioacoustic classification; deep neural networks; vocalizations; Antillean manatee; Panama
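As a rough illustration of the binary spectrogram representation named in the abstract, the sketch below shows one way a thresholded (binary) spectrogram could be computed from an audio clip in Python. The window length, overlap, and the -60 dB threshold are illustrative assumptions, not the parameters used in this work.

```python
# Illustrative sketch only: the paper's actual spectrogram parameters are not
# stated here; window length, overlap and the -60 dB threshold are assumptions.
import numpy as np
from scipy import signal

def binary_spectrogram(x, fs, nperseg=1024, noverlap=512, threshold_db=-60.0):
    """Return a thresholded (binary) spectrogram of a 1-D audio signal x."""
    f, t, Sxx = signal.spectrogram(x, fs=fs, nperseg=nperseg, noverlap=noverlap)
    log_S = 10.0 * np.log10(Sxx + 1e-12)   # logarithmic amplitude in dB
    log_S -= log_S.max()                    # normalize so the peak is 0 dB
    return f, t, (log_S > threshold_db).astype(np.float32)

# Usage with a synthetic 1-second clip standing in for a manatee vocalization
fs = 10_000
x = np.random.randn(fs)
f, t, B = binary_spectrogram(x, fs)  # B is a 0/1 image that a DCNN could take as input
```

Under the same assumptions, the linear and logarithmic amplitude formats mentioned in the abstract would correspond to keeping Sxx directly or keeping log_S without the final thresholding step.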

 
