This project will advance the state of the art in cross-disciplinary areas including motion signal processing, machine learning (ML), sign language modeling, and real-time ML with dynamic device/edge partitioning. The goal is new technology for automatic Sign Language Recognition (SLR) and translation to spoken language that enables more seamless communication between deaf and hearing people. The technology will incorporate increasingly popular wearable devices (such as a smartwatch, smart ring, and earphones) and will have broad impact through its introduction in deaf communities, along with a sign language equivalent of voice assistants such as Amazon Alexa. The project will establish a pipeline of collaboration with deaf students, as well as courses based on SLR technology that will be disseminated through MOOC platforms such as Coursera. Additional impact will derive from workshops on wearable computing conducted at the K-12 level and from a 'sign-to-speech' library that will be publicly released so the new technology can be extended to multiple sign languages.
To achieve these goals, the research will include three thrusts: (1) development of ML models with efficient training that perform accurate SLR by fusing multimodal input data from wearable devices that capture body motion and facial expressions; (2) implementation of efficient ML models through optimal partitioning between end-device and edge resources, achieving the best tradeoff between real-time performance and SLR accuracy; and (3) design of systematic user studies with fluent sign language users, both to generate training data for the ML models and to validate the accuracy, usability, and acceptability of the technology within the deaf community.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
Effective start/end date: 3/1/21 → 2/28/26
- National Science Foundation: $188,434.00