Robust Audio Event Recognition with 1-Max Pooling Convolutional Neural Networks

Disclaimer: The code links provided for this paper are external links. Science Nest takes no responsibility for the accuracy, legality, or content of these links. By downloading the code, you agree to comply with the terms of use set out by its author(s).

Please contact us here to report a broken link.

Authors Huy Phan, Lars Hertel, Marco Maass, Alfred Mertins
Journal/Conference Name Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
Paper Category
Paper Abstract We present in this paper a simple yet efficient convolutional neural network (CNN) architecture for robust audio event recognition. In contrast to deep CNN architectures with multiple convolutional and pooling layers followed by multiple fully connected layers, the proposed network consists of only three layers: a convolutional layer, a pooling layer, and a softmax layer. Two further features distinguish it from the deep architectures proposed for this task: varying-size convolutional filters at the convolutional layer and a 1-max pooling scheme at the pooling layer. Intuitively, the network selects the most discriminative features from the whole audio signal for recognition. Our proposed CNN not only shows state-of-the-art performance on the standard robust audio event recognition task but also outperforms other deep architectures by up to 4.5% in recognition accuracy, which is equivalent to a 76.3% relative error reduction (an illustrative sketch of the architecture is given below).
Date of publication 2016
Code Programming Language Jupyter Notebook
Comment
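The abstract above describes a three-layer network: parallel convolutional filters of varying temporal widths, 1-max pooling over time for each filter, and a softmax output layer. The sketch below is one possible reading of that architecture in Python/Keras; the framework choice, feature dimension, filter widths, filter counts, and number of classes are illustrative assumptions and are not taken from the authors' notebook.

```python
# Hypothetical sketch of a 1-max pooling CNN with varying-size temporal filters
# (Keras/TensorFlow). All hyperparameters below are illustrative assumptions,
# not values taken from the paper or its accompanying code.
from tensorflow.keras import layers, models

N_FRAMES = 100     # assumed number of time frames per audio event
N_FEATURES = 64    # assumed per-frame feature dimension (e.g. filterbank bins)
N_CLASSES = 50     # assumed number of audio event classes
FILTER_WIDTHS = (3, 5, 7, 9)   # assumed set of varying temporal filter widths
N_FILTERS = 100                # assumed number of filters per width

inputs = layers.Input(shape=(N_FRAMES, N_FEATURES))

pooled = []
for width in FILTER_WIDTHS:
    # Convolve over time; each filter spans the full per-frame feature vector.
    conv = layers.Conv1D(N_FILTERS, kernel_size=width, activation="relu")(inputs)
    # 1-max pooling: keep only the single largest activation of each filter
    # across the whole time axis.
    pooled.append(layers.GlobalMaxPooling1D()(conv))

# Concatenate the 1-max features from all filter widths, then classify
# directly with a softmax layer (no intermediate fully connected layers).
features = layers.Concatenate()(pooled)
outputs = layers.Dense(N_CLASSES, activation="softmax")(features)

model = models.Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

Here GlobalMaxPooling1D stands in for the 1-max pooling idea: each filter contributes only its single strongest response over the whole signal, so the softmax classifier sees where each filter matched best rather than every frame.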

Copyright Researcher 2022