Decoder-Encoder LSTM for Lip Reading

Souheil Fenghour, Daqing Chen, Perry Xiao

Research output: Contribution to journal › Article › peer-review

7 Citations (Scopus)

Abstract

The success of automated lip reading has been constrained by the inability to distinguish between homopheme words: words that are intrinsically different and have different characters, yet produce the same lip movements (e.g. "time" and "some"). Different phonemes (units of sound) can often produce exactly the same viseme, the visual equivalent of a phoneme. Through the use of a Long Short-Term Memory network with word embeddings, we can distinguish between homopheme words, i.e. words that produce identical lip movements. The neural network architecture achieved a character accuracy rate of 77.1% and a word accuracy rate of 72.2%.
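The building block of the architecture described above is the LSTM cell, which carries a hidden state and a cell state across time steps so that context can disambiguate visually identical words. The sketch below is a minimal numpy illustration of one LSTM step applied to a word embedding; all dimensions, parameter names, and the random embedding table are hypothetical and are not taken from the paper.

```python
import numpy as np

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step (illustrative; dimensions are hypothetical).

    x: (d_in,) input embedding; h_prev, c_prev: (d_h,) previous states.
    W: (4*d_h, d_in), U: (4*d_h, d_h), b: (4*d_h,) stacked gate
    parameters in the order [input, forget, cell, output].
    """
    d_h = h_prev.shape[0]
    z = W @ x + U @ h_prev + b
    i = 1 / (1 + np.exp(-z[0*d_h:1*d_h]))   # input gate
    f = 1 / (1 + np.exp(-z[1*d_h:2*d_h]))   # forget gate
    g = np.tanh(z[2*d_h:3*d_h])             # candidate cell state
    o = 1 / (1 + np.exp(-z[3*d_h:4*d_h]))   # output gate
    c = f * c_prev + i * g                  # updated cell state
    h = o * np.tanh(c)                      # updated hidden state
    return h, c

# Toy usage: look up a word embedding and run a single step.
rng = np.random.default_rng(0)
d_in, d_h, vocab = 8, 16, 50
embeddings = rng.normal(size=(vocab, d_in))  # random stand-in embedding table
W = rng.normal(size=(4*d_h, d_in))
U = rng.normal(size=(4*d_h, d_h))
b = np.zeros(4*d_h)
h, c = lstm_step(embeddings[3], np.zeros(d_h), np.zeros(d_h), W, U, b)
```

In a full decoder-encoder model, steps like this would be unrolled over the input sequence, with the embedding layer trained jointly rather than fixed at random.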
Original language: English
Journal: Proceedings of the 2019 8th International Conference on Software and Information Engineering
Publication status: Published - 9 Apr 2019
