![Luc Moore](/uploads/images/students/2124415.jpg)
Machine Learning Model for Lip-Reading
Project Abstract
In the UK alone, around 11 million people have hearing impairments, and creating tools to support them is one of the main aims of this project. Lip-reading translates visemes into the spoken words they represent. Neither humans nor machines come close to reading lips word-for-word, particularly in sentences without rigid structure and with large dictionaries. However, on sentences with rigid structure, smaller size, and a limited dictionary, computers can perform to a high accuracy. In my dissertation I have created a lip-reading machine learning model that performs phoneme recognition on visemes to a reasonable accuracy. It is built from a database of footage I created of a single speaking subject, adhering to the sentence structure and dictionary described above. From this I trained a machine learning model using a neural network on the database without sound, interpreting the movement of the lips as phonemes.
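The pipeline described above (silent lip-region frames in, phoneme labels out) can be sketched in miniature. The sketch below is purely illustrative: the phoneme set, the feature extraction, and the single linear layer with softmax are simplified stand-ins assumed for the example, not the dissertation's actual network or label set.

```python
import numpy as np

# Example phoneme labels only -- the real model's label set is not specified here.
PHONEMES = ["p", "b", "m", "f", "v", "th"]

rng = np.random.default_rng(0)

def frame_features(frames):
    """Flatten each HxW lip-region frame, then average over time."""
    flat = frames.reshape(frames.shape[0], -1)   # (T, H*W)
    return flat.mean(axis=0)                     # (H*W,)

def predict_phoneme(frames, weights, bias):
    """Return the most likely phoneme label and the class probabilities."""
    x = frame_features(frames)
    logits = weights @ x + bias                  # (n_phonemes,)
    probs = np.exp(logits - logits.max())        # numerically stable softmax
    probs /= probs.sum()
    return PHONEMES[int(np.argmax(probs))], probs

# Toy usage: 10 frames of a 16x16 lip crop with randomly initialised weights.
frames = rng.random((10, 16, 16))
W = rng.standard_normal((len(PHONEMES), 16 * 16)) * 0.01
b = np.zeros(len(PHONEMES))
label, probs = predict_phoneme(frames, W, b)
```

In practice a lip-reading network would replace the mean-pooled features and linear layer with convolutional and recurrent layers so that the temporal order of the lip movements, not just their average appearance, drives the prediction.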
Keywords: Machine Learning, Neural Networks
Conference Details
Session: Poster Session A at Poster Stand 124
Location: Sir Stanley Clarke Auditorium, Tuesday 7th, 13:30 – 17:00
Markers: Betsy Dayana Marcela Chaparro Rico, Joe Macinnes
Course: BSc Computer Science, 3rd Year
Future Plans: I’m looking for work