This project focuses on detecting sign language gestures with YOLOv5s, the small variant of the YOLOv5 object detection model, trained on a custom dataset of gesture images.
Sign language detection is crucial for creating inclusive technology that can aid communication for people with hearing impairments. This project aims to develop a robust sign language detection system using computer vision techniques.
The dataset used in this project consists of images of various sign language gestures captured with OpenCV. Each image is annotated with bounding boxes using the LabelImg tool, producing the labels needed to train the YOLOv5s model.
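The capture script itself is not included in this README, so the sketch below only illustrates how gesture images are typically collected with OpenCV; the gesture label, sample count, and output directory are placeholder assumptions, not the project's actual values.

```python
# Minimal sketch of collecting gesture images with OpenCV.
# The label, sample count, and output directory are placeholders.
import os
import cv2

label = "hello"      # hypothetical gesture name
num_images = 20      # hypothetical number of samples per gesture
out_dir = os.path.join("data", "images", label)
os.makedirs(out_dir, exist_ok=True)

cap = cv2.VideoCapture(0)                 # open the default webcam
count = 0
while count < num_images:
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("capture", frame)
    key = cv2.waitKey(1) & 0xFF
    if key == ord("c"):                   # press 'c' to save the current frame
        cv2.imwrite(os.path.join(out_dir, f"{label}_{count}.jpg"), frame)
        count += 1
    elif key == ord("q"):                 # press 'q' to stop early
        break

cap.release()
cv2.destroyAllWindows()
```

Each saved image is then opened in LabelImg to draw the bounding box and assign the gesture class.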
The YOLOv5s model is trained using the custom dataset for 300 epochs. During training, the model learns to detect different sign language gestures represented in the dataset.
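The exact training configuration beyond the epoch count is not listed here. With the standard YOLOv5 repository, a 300-epoch run on a custom dataset is typically launched with a command like the one below; the image size, batch size, and dataset YAML name are assumptions.

```bash
# Assumed YOLOv5 training invocation; only --epochs 300 comes from this project.
# The image size, batch size, and dataset YAML are illustrative.
python train.py --img 640 --batch 16 --epochs 300 --data sign_language.yaml --weights yolov5s.pt
```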
After training, the model detects the sign language gestures in the dataset with satisfactory results. Accuracy and other performance metrics are evaluated to assess the effectiveness of the model.
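The concrete metric values are not reproduced in this README. With the standard YOLOv5 tooling, precision, recall, and mAP can be computed with `val.py` roughly as follows; the weights path and dataset YAML are assumptions.

```bash
# Assumed evaluation call; the weights path and dataset YAML are placeholders.
python val.py --weights runs/train/exp/weights/best.pt --data sign_language.yaml --img 640
```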
A sample prediction made by the trained model on an image from the test dataset is included in the repository.

The project depends on the following:
- Python 3.x
- PyTorch
- OpenCV
- LabelImg
- YOLOv5s
To use the sign language detection model:
- Clone this repository: `git clone https://github.com/Sainarendra21/SignLanguageDetection.git`
- Install dependencies: `pip install -r requirements.txt`
- Run inference: `python run.py`
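As an alternative to `run.py` (whose internals are not described in this README), a trained YOLOv5 checkpoint can also be loaded programmatically through `torch.hub`; the weights path and test image path below are placeholder assumptions.

```python
# Sketch of programmatic inference with a custom YOLOv5 checkpoint.
# The weights path and the test image path are placeholders.
import torch

model = torch.hub.load("ultralytics/yolov5", "custom", path="runs/train/exp/weights/best.pt")
results = model("data/images/test_gesture.jpg")  # detect gestures in one image
results.print()                                  # print a summary of detections
print(results.pandas().xyxy[0])                  # bounding boxes, confidences, class names
```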
Future improvements:
- Improve model performance by collecting more diverse data and fine-tuning the model architecture.
- Deploy the sign language detection model as a web or mobile application for real-time usage.
This project is licensed under the MIT License - see the LICENSE file for details.