The goal of this product is to help people who want to learn American Sign Language (ASL). Learning a sign language can be challenging, especially without immediate feedback; that is where our app comes in. It compares the sign a user performs against a standard example, drawing on a large dataset of images and videos from fluent ASL signers.
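One simple way to score such a comparison (a sketch only: the function name and the cosine-similarity choice are assumptions, not necessarily the app's actual method) is to compare the user's extracted hand-landmark feature vector against a reference vector:

```python
import numpy as np

def sign_similarity(user_vec: np.ndarray, reference_vec: np.ndarray) -> float:
    """Cosine similarity between two landmark feature vectors.

    Returns 1.0 when the vectors point the same way (a close match)
    and lower values as the pose diverges.
    """
    denom = float(np.linalg.norm(user_vec) * np.linalg.norm(reference_vec))
    if denom == 0.0:
        return 0.0
    return float(np.dot(user_vec, reference_vec)) / denom

# A sign compared with itself scores a perfect 1.0.
reference = np.array([0.1, 0.5, 0.2, 0.9])
print(round(sign_similarity(reference, reference), 3))  # → 1.0
```

A real pipeline would compute these vectors per frame and aggregate scores over a video clip rather than a single pose.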
The app was created using two deep learning models trained on large datasets of images and videos. To process these, we used hand-tracking tools to extract hand landmarks.
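MediaPipe's hand tracker reports 21 (x, y, z) landmarks per detected hand. A minimal sketch of turning those landmarks into a flat, model-ready feature vector (the wrist-centering and scale normalization here are assumptions for illustration, not necessarily the app's exact preprocessing):

```python
import numpy as np

def landmarks_to_features(landmarks: np.ndarray) -> np.ndarray:
    """Flatten a (21, 3) array of hand landmarks into a length-63 vector.

    Subtracting the wrist landmark (index 0) and dividing by the hand's
    largest coordinate span makes the features roughly invariant to where
    the hand sits in the frame and how close it is to the camera.
    """
    assert landmarks.shape == (21, 3)
    centered = landmarks - landmarks[0]   # wrist-relative coordinates
    span = np.abs(centered).max()         # apparent hand size
    if span > 0:
        centered = centered / span
    return centered.flatten()

# Example with 21 synthetic landmark positions:
rng = np.random.default_rng(0)
features = landmarks_to_features(rng.random((21, 3)))
print(features.shape)  # (63,)
```

Vectors like this are what a Keras classifier can consume directly, one per frame.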
Next steps for this app:
- Increased complexity: more words and phrases
- Support for additional sign languages
- Improved accuracy
- Incorporation of contextual understanding, facial expressions, and body language
Demo day video
Tech stack
Python
FastAPI
Streamlit
TensorFlow
Keras
OpenCV
MediaPipe