Convolutional Recurrent Neural Networks for Small-Footprint Keyword Spotting


Disclaimer: The code links provided for this paper are external. Science Nest takes no responsibility for the accuracy, legality, or content of these links. By downloading the code, you agree to comply with the terms of use set out by its author(s).


Authors: Andrew Gibiansky, Chris Fougner, Rewon Child, Joel Hestness, Markus Kliegl, Adam Coates, Ryan Prenger, Sercan O. Arik
Journal/Conference Name: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
Paper Category:
Paper Abstract: Keyword spotting (KWS) constitutes a major component of human-technology interfaces. Maximizing the detection accuracy at a low false alarm (FA) rate, while minimizing the footprint size, latency, and complexity, are the goals for KWS. Towards achieving them, we study Convolutional Recurrent Neural Networks (CRNNs). Inspired by large-scale state-of-the-art speech recognition systems, we combine the strengths of convolutional layers and recurrent layers to exploit local structure and long-range context. We analyze the effect of architecture parameters, and propose training strategies to improve performance. With only ~230k parameters, our CRNN model yields acceptably low latency, and achieves 97.71% accuracy at 0.5 FA/hour for 5 dB signal-to-noise ratio.
Date of publication: 2017
Code Programming Language: Unspecified
Comment:
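The abstract emphasizes that the CRNN fits in roughly ~230k parameters. As a rough illustration of how a convolutional front end followed by recurrent layers stays that small, the sketch below counts parameters for a hypothetical CRNN configuration. All layer sizes here are assumptions chosen for illustration, not the paper's actual architecture, but they land in the same order of magnitude.

```python
def conv2d_params(in_ch, out_ch, kh, kw):
    # weight tensor (out_ch x in_ch x kh x kw) plus one bias per filter
    return out_ch * in_ch * kh * kw + out_ch

def gru_params(input_size, hidden_size):
    # 3 gates, each with input weights, recurrent weights, and a bias
    return 3 * (hidden_size * input_size + hidden_size * hidden_size + hidden_size)

def dense_params(in_features, out_features):
    return in_features * out_features + out_features

# Hypothetical small-footprint CRNN (illustrative numbers only):
conv = conv2d_params(1, 32, 20, 5)   # 32 time-freq filters on a spectrogram
rnn1 = gru_params(32 * 8, 128)       # assume conv output reduced to 8 freq bins
rnn2 = gru_params(128, 128)          # second recurrent layer
fc   = dense_params(128, 64)         # small fully connected layer
out  = dense_params(64, 2)           # keyword vs. non-keyword

total = conv + rnn1 + rnn2 + fc + out
print(total)  # 258146 -- a few hundred k, same ballpark as the paper's ~230k
```

Note that the recurrent layers dominate the budget: the GRU terms grow quadratically in the hidden size, which is why keeping the hidden dimension small is the main lever for footprint.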

Copyright Researcher 2022