A Comparative Study on Transformer vs RNN in Speech Applications




Authors: Nanxin Chen, Nelson Enrique Yalta Soplin, Xiaofei Wang, Hirofumi Inaguma, Ziyan Jiang, Ryuichi Yamamoto, Shigeki Karita, Takaaki Hori, Tomoki Hayashi, Masao Someki, Takenori Yoshimura, Shinji Watanabe, Wangyou Zhang
Journal/Conference Name: 2019 IEEE Automatic Speech Recognition and Understanding Workshop, ASRU 2019 - Proceedings
Paper Abstract: Sequence-to-sequence models have been widely used in end-to-end speech processing, for example, automatic speech recognition (ASR), speech translation (ST), and text-to-speech (TTS). This paper focuses on an emergent sequence-to-sequence model called Transformer, which achieves state-of-the-art performance in neural machine translation and other natural language processing applications. We undertook intensive studies in which we experimentally compared and analyzed Transformer and conventional recurrent neural networks (RNN) in a total of 15 ASR, one multilingual ASR, one ST, and two TTS benchmarks. Our experiments revealed various training tips and significant performance benefits obtained with Transformer for each task, including the surprising superiority of Transformer over RNN in 13 of the 15 ASR benchmarks. We are preparing to release Kaldi-style reproducible recipes, using open source and publicly available datasets, for all the ASR, ST, and TTS tasks so that the community can reproduce our results.
Date of Publication: 2019
Code Programming Language: Python
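As a brief illustration of the mechanism that distinguishes the Transformer from RNNs in the abstract above, here is a minimal pure-Python sketch of scaled dot-product attention, the Transformer's core operation. This is an illustrative sketch only, not code from the paper's released recipes; all function names are our own.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of floats.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def scaled_dot_product_attention(Q, K, V):
    """Q, K, V: lists of row vectors (lists of floats).

    Returns one output row per query:
        softmax(Q K^T / sqrt(d)) V
    Unlike an RNN, every query attends to all keys at once,
    with no sequential recurrence over time steps.
    """
    d = len(K[0])  # key dimension, used for the 1/sqrt(d) scaling
    out = []
    for q in Q:
        # Similarity of this query to every key, scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in K]
        weights = softmax(scores)
        # Weighted average of the value rows.
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

# Usage: one query strongly aligned with the first key picks out
# (approximately) the first value row.
Q = [[10.0, 0.0]]
K = [[10.0, 0.0], [0.0, 10.0]]
V = [[1.0, 0.0], [0.0, 1.0]]
result = scaled_dot_product_attention(Q, K, V)
```

In a real Transformer this operation is applied in parallel across multiple heads and batched as matrix multiplications, which is what allows the full-sequence parallelism the paper's experiments exploit.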
