
Knowledge Distillation Using Output Errors for Self-Attention ASR Models

Citation Author(s):
Hwidong Na, Hoshik Lee, Jihyun Lee, Tae Gyoon Kang, Min-Joong Lee, Young Sang Choi
Submitted by:
Ho-Gyeong Kim
Last updated:
8 May 2019 - 10:02pm
Document Type:
Poster
Document Year:
2019
Event:
Presenters:
Hwidong Na
 

Most automatic speech recognition (ASR) neural network models are too large to deploy on mobile devices, so the model size must be reduced to fit limited hardware resources. In this study, we investigate sequence-level knowledge distillation techniques for compressing self-attention ASR models. To overcome the performance degradation of the compressed models, the proposed method adds an exponential weight to the sequence-level knowledge distillation loss function; this weight reflects the word error rate of the teacher model's output with respect to the ground-truth word sequences. Evaluated on the LibriSpeech dataset, the proposed knowledge distillation method achieves significant improvements over the student baseline model.
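To illustrate the idea of weighting the sequence-level distillation term by the teacher's output error, here is a minimal sketch. It assumes the weight takes the form exp(-alpha * WER) and a simple interpolation with the ground-truth loss; the exact weighting and combination used in the paper may differ, and the names `student_nll_teacher_hyp`, `student_nll_reference`, `alpha`, and `lam` are hypothetical.

```python
import math

def wer(hyp, ref):
    """Word error rate of hypothesis `hyp` against reference `ref`
    (both lists of words), computed via edit distance."""
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

def weighted_seq_kd_loss(student_nll_teacher_hyp, student_nll_reference,
                         teacher_hyp, reference, alpha=1.0, lam=0.5):
    """Combine the ground-truth loss with a sequence-level KD term whose
    weight decays exponentially with the teacher's WER, so that poor
    teacher hypotheses contribute less to the student's training signal."""
    w = math.exp(-alpha * wer(teacher_hyp, reference))
    return (1.0 - lam) * student_nll_reference + lam * w * student_nll_teacher_hyp
```

In this sketch, a teacher hypothesis identical to the reference receives weight 1, while hypotheses with higher word error rates are exponentially down-weighted in the distillation term.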
