Lip Reading Sentences Using Deep Learning with Only Visual Cues

Souheil Fenghour, Daqing Chen, Perry Xiao

Research output: Contribution to journal › Article › peer-review

37 Citations (Scopus)

Abstract

In this paper, a neural network-based lip reading system is proposed. The system is lexicon-free and uses purely visual cues. With only a limited number of visemes as classes to recognise, the system is designed to lip read sentences covering a wide range of vocabulary and to recognise words that may not be included in system training. The system has been tested on the challenging BBC Lip Reading Sentences 2 (LRS2) benchmark dataset. Experiments with videos of varying illumination have shown that the proposed model is robust to varying levels of lighting. Compared with the state-of-the-art works in lip reading sentences, the system has achieved a significantly improved performance, with a 15% lower word error rate. The main contributions of this paper are: 1) the classification of visemes in continuous speech using a specially designed transformer with a unique topology; 2) the use of visemes as a classification schema for lip reading sentences; and 3) the conversion of visemes to words using perplexity analysis. All of these contributions serve to enhance the accuracy of lip reading sentences. The paper also provides an essential survey of the research area.
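The third contribution, converting viseme sequences to words by perplexity analysis, can be illustrated with a minimal sketch. The code below is not the paper's implementation: the viseme-to-word candidate table and the bigram probabilities are made-up toy values, and the paper's actual language model and LRS2-derived vocabulary are not reproduced here. The underlying idea is that several words can share the same viseme sequence (homophenes), so a language model scores each candidate word sequence and the sequence with the lowest perplexity is kept.

```python
import itertools
import math

# Hypothetical viseme-cluster -> candidate-word lookup. Words that share a
# viseme sequence are visually indistinguishable, so the visual classifier
# alone cannot separate them; these entries are illustrative only.
CANDIDATES = {
    "viseme_1": ["pat", "bat", "mat"],
    "viseme_2": ["is", "his"],
    "viseme_3": ["good", "could"],
}

# Toy bigram probabilities standing in for a trained language model.
# Unseen bigrams fall back to a small floor probability.
BIGRAMS = {
    ("<s>", "pat"): 0.02, ("pat", "is"): 0.30, ("is", "good"): 0.40,
    ("<s>", "bat"): 0.01, ("bat", "is"): 0.10, ("is", "could"): 0.05,
    ("<s>", "mat"): 0.01, ("mat", "is"): 0.05, ("his", "good"): 0.02,
}
FLOOR = 1e-4


def perplexity(words):
    """Bigram perplexity of a word sequence (lower = more plausible)."""
    tokens = ["<s>"] + list(words)
    log_sum = sum(math.log(BIGRAMS.get((a, b), FLOOR))
                  for a, b in zip(tokens, tokens[1:]))
    return math.exp(-log_sum / len(words))


def decode(viseme_sequence):
    """Pick the candidate word sequence with the lowest perplexity."""
    candidate_lists = [CANDIDATES[v] for v in viseme_sequence]
    return min(itertools.product(*candidate_lists), key=perplexity)


if __name__ == "__main__":
    print(decode(["viseme_1", "viseme_2", "viseme_3"]))
    # -> ('pat', 'is', 'good') under the toy probabilities above
```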
Original language: English
Journal: IEEE Access
DOIs
Publication status: Published - 26 Nov 2020

Keywords

  • Speech recognition
  • Deep learning
  • Perplexity analysis
  • Lip reading
  • Neural networks
