Recurrent Neural Networks for Decoding Lip Read Speech

Souheil Fenghour, Daqing Chen, Perry Xiao

Research output: Contribution to conference › Paper › peer-review

Abstract

The success of automated lip reading has been constrained by the inability to distinguish between homophemes: words that are intrinsically different and spelled with different characters, yet produce identical lip movements (e.g. "time" and "some"). This happens because different phonemes (units of sound) can map to exactly the same viseme, the visual equivalent of a phoneme. Through the use of a Long Short-Term Memory (LSTM) network with word embeddings, we can distinguish between homopheme words, i.e. words that produce identical lip movements. The neural network architecture achieved a character accuracy rate of 77.1% and a word accuracy rate of 72.2%.
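
To make the idea concrete, the sketch below (not the authors' implementation; the viseme alphabet size, vocabulary size, and layer dimensions are all assumed for illustration) shows how an LSTM over embedded viseme tokens can use sequence context to predict the intended word, which is how homophemes that share identical lip movements can be told apart.

# Illustrative sketch only: an LSTM over embedded viseme tokens that predicts
# a word, so that surrounding context can separate homophemes.
# All sizes and names below are hypothetical assumptions, not the paper's values.
import torch
import torch.nn as nn

NUM_VISEMES = 14      # assumed size of the viseme alphabet
VOCAB_SIZE = 500      # assumed word vocabulary size
EMBED_DIM = 64
HIDDEN_DIM = 128

class VisemeToWordLSTM(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(NUM_VISEMES, EMBED_DIM)          # viseme embeddings
        self.lstm = nn.LSTM(EMBED_DIM, HIDDEN_DIM, batch_first=True)
        self.out = nn.Linear(HIDDEN_DIM, VOCAB_SIZE)                # word logits

    def forward(self, viseme_ids):
        # viseme_ids: (batch, seq_len) integer viseme indices
        x = self.embed(viseme_ids)
        _, (h_n, _) = self.lstm(x)       # final hidden state summarises the sequence
        return self.out(h_n[-1])         # scores over the word vocabulary

# Example: score a batch of two viseme sequences of length 6
model = VisemeToWordLSTM()
dummy = torch.randint(0, NUM_VISEMES, (2, 6))
logits = model(dummy)                    # shape: (2, VOCAB_SIZE)
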
Original language: English
Publication status: Published - 9 Apr 2019
Event: 2019 8th International Conference on Software and Information Engineering (ICSIE 2019)
Duration: 4 Sept 2019 → …

Conference

Conference: 2019 8th International Conference on Software and Information Engineering (ICSIE 2019)
Period: 4/09/19 → …
