Automatic visual speech recognition
Lee, Kean Hin
Date of Issue: 2016-08-23
School of Electrical and Electronic Engineering
One of the most challenging tasks in automatic visual speech recognition is the extraction of feature parameters from image sequences of the lips. There are two primary approaches to extracting visual speech information from image sequences: the model-based approach and the pixel-based approach. The advantage of the model-based approach is that the parameters of the lip contour model are less affected by variations in lighting conditions, lip location and rotation; however, constructing an efficient yet robust contour model capable of tracking the lips makes this approach difficult. The pixel-based approach, on the other hand, must explicitly account for variations in lighting conditions, lip rotation and location. Despite extensive research, lip tracking remains a challenging task owing to the wide variation among face images. The pixel-based approach was adopted in this project. Raw data for visual speech recognition were captured with a digital camcorder. The video recordings were converted to image sequences, and the speaker's lips were extracted from each frame. After the lips were located in each frame, the lip boundaries were obtained, and the lip contour was drawn by fitting least-squares polynomials to these boundaries. Ten visual speech features were then extracted from every frame and vector-quantised. The resulting vector sequences were used to train hidden Markov models (HMMs), and the trained models were used to recognise unknown vector sequences.
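The abstract does not give implementation details, but the contour-fitting step it describes can be sketched as an ordinary least-squares polynomial fit. The following Python/NumPy example is illustrative only: the boundary points and the quadratic degree are assumptions, standing in for points detected along a lip edge in one frame.

```python
import numpy as np

# Hypothetical (x, y) points sampled along the upper lip boundary of one
# frame; in the project these would come from the located lip region.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.0, 1.2, 0.6, 0.5, 0.7, 1.3, 2.1])

# Fit a quadratic in the least-squares sense, then evaluate it to obtain
# a smooth contour through the noisy boundary points.
coeffs = np.polyfit(x, y, deg=2)
contour = np.polyval(coeffs, x)

# Sum of squared residuals between the fitted contour and the raw points.
residual = float(np.sum((contour - y) ** 2))
```

A low-degree fit keeps the contour smooth and robust to pixel noise; higher degrees would follow boundary jitter too closely.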
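The quantisation step, feature vectors mapped to discrete symbols for HMM training, can likewise be sketched as nearest-codeword vector quantisation. The codebook size, feature dimension and random data below are assumptions for illustration; only the ten-features-per-frame figure comes from the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical codebook: 4 codewords for 10-dimensional feature vectors
# (the project extracted ten visual speech features per frame).
codebook = rng.standard_normal((4, 10))

def quantise(frames, codebook):
    """Map each frame's feature vector to the index of its nearest codeword."""
    # Squared Euclidean distance from every frame to every codeword,
    # via broadcasting: shape (n_frames, n_codewords).
    d = ((frames[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

frames = rng.standard_normal((20, 10))   # 20 frames of feature vectors
symbols = quantise(frames, codebook)     # discrete observation sequence
```

In practice the codebook would be trained (e.g. with k-means) on feature vectors from the training recordings, so that the symbol sequences preserve as much of the visual speech information as possible.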
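Finally, recognition with trained HMMs amounts to scoring an unknown observation sequence against each word model and picking the most likely one. Below is a minimal sketch of the forward algorithm for discrete HMMs; the two-state models, three-symbol alphabet and all probability values are invented for illustration, not taken from the project.

```python
import numpy as np

def log_likelihood(obs, pi, A, B):
    """Forward algorithm for a discrete HMM.
    obs: symbol sequence; pi: initial state probabilities;
    A: state transition matrix; B: emission probability matrix."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return float(np.log(alpha.sum()))

# Two hypothetical left-to-right word models sharing pi and A.
pi = np.array([1.0, 0.0])
A = np.array([[0.7, 0.3],
              [0.0, 1.0]])
B1 = np.array([[0.8, 0.1, 0.1],
               [0.1, 0.8, 0.1]])   # emissions for "word one"
B2 = np.array([[0.1, 0.1, 0.8],
               [0.1, 0.8, 0.1]])   # emissions for "word two"

obs = [0, 0, 1, 1]                 # quantised observation sequence
scores = [log_likelihood(obs, pi, A, B1),
          log_likelihood(obs, pi, A, B2)]
best = int(np.argmax(scores))      # index of the most likely word model
```

Real vocabularies need one trained model per word and a numerically stable (scaled or log-domain) forward pass for long sequences; the plain-probability version here is kept short for clarity.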
DRNTU::Engineering::Electrical and electronic engineering
Final Year Project (FYP)
Nanyang Technological University