DIY Drowsiness Detection System for Vehicles Using Raspberry Pi 4
by techfreak09 in Circuits > Raspberry Pi
Driver drowsiness is a critical concern in the realm of road safety, posing a significant threat to both drivers and pedestrians. Fatigue-related accidents are responsible for a substantial portion of road fatalities, making it imperative to develop effective tools for detecting and mitigating drowsy driving. Drowsy driving is characterized by lapses in attention, slow reaction times, and impaired decision-making, which can lead to accidents. The primary objective of this system is to monitor a driver's state in real time, analyzing image frames captured from within the vehicle to determine whether the driver is exhibiting signs of drowsiness. By harnessing the power of machine learning and computer vision, this system aims to provide a proactive and potentially life-saving solution. The project utilizes a Haar-Cascade Classifier to detect key facial features, specifically the state of the eyes (open or closed) and the mouth (open or closed).
Supplies
Bill of Materials (Hardware):
- Raspberry Pi 3B or 4B
- 7-inch Raspberry Pi display (optional)
- USB camera or Raspberry Pi camera
- 5 V / 3 A power adapter
- Female jumper cables for connecting the Raspberry Pi display to the Raspberry Pi board
Software:
1) Python
Used to run the project code, which is written in the Python programming language.
https://raspberrytips.com/install-latest-python-raspberry-pi/ (Python installation guide for the Raspberry Pi)
2) OpenCV
OpenCV stands for Open Source Computer Vision. It is a library for image processing and computer vision tasks, and it handles frame capture and image analysis in this project.
https://robu.in/installing-opencv-using-cmake-in-raspberry-pi/ (OpenCV installation guide for the Raspberry Pi)
3) Visual Studio Code
Visual Studio Code is a code editor for writing and testing code in virtually any programming language (Python, in this project).
https://code.visualstudio.com/download (Visual Studio Code download for PC/laptop)
4) Dlib
Dlib is a popular toolkit for machine learning that is used primarily for computer vision and image processing tasks, such as face recognition, facial landmark detection, object detection, and more. It is written in C++ but has Python bindings, making it easily accessible from Python code.
https://www.pyimagesearch.com/2017/05/01/install-dlib-raspberry-pi/ (Dlib installation guide for the Raspberry Pi)
Block Diagram and Working
The face recognition process begins with the Haar cascade algorithm detecting faces in an image, followed by Dlib pinpointing key facial landmarks like eye corners and mouth edges. Dlib's pre-trained ResNet-based model extracts high-dimensional features from aligned face images, capturing unique facial characteristics. Optionally, custom face recognition models can be trained by fine-tuning the pre-trained model with labeled face data for improved accuracy. During operation, features from new images are extracted and compared against stored representations in a database using similarity scores. Post-processing steps, like thresholding, help mitigate false positives and seamlessly integrate results into larger applications, ensuring accurate and robust face recognition across various scenarios.
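Dlib's landmark detector returns all 68 points in a fixed order, so the eye and mouth regions are just index slices. A minimal sketch, assuming the common 68-point annotation scheme used with shape_predictor_68_face_landmarks.dat (the function and dictionary names here are our own, not Dlib's API):

```python
# Landmark index ranges for the standard 68-point facial landmark model.
# These slices are an assumption based on the common 68-point scheme;
# verify them against the predictor file you actually use.
FACIAL_LANDMARK_IDXS = {
    "mouth":     (48, 68),
    "right_eye": (36, 42),
    "left_eye":  (42, 48),
}

def landmarks_to_points(shape, region):
    """Extract the (x, y) points of one facial region from the full list.

    `shape` is expected to be a sequence of 68 (x, y) pairs, e.g. built
    from a dlib full_object_detection via [(p.x, p.y) for p in shape.parts()].
    """
    start, end = FACIAL_LANDMARK_IDXS[region]
    return list(shape[start:end])

# Example with dummy landmark data (68 points along a diagonal):
dummy = [(i, i) for i in range(68)]
left_eye = landmarks_to_points(dummy, "left_eye")
print(len(left_eye))  # 6 points per eye
```

In the real pipeline, `shape` would come from running Dlib's shape predictor on each face rectangle that the Haar cascade detector returns.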
The Haar Cascade object detection method relies on a series of steps to efficiently identify objects in images or video streams. It begins by defining Haar-like features, which are simple rectangular filters capable of detecting variations in intensity, such as edges or corners. These features serve as the foundation for training a machine learning model, where positive and negative image samples are used to create a Haar Cascade classifier. This classifier comprises cascading stages, each containing multiple weak classifiers trained to recognize specific patterns. During object detection, a sliding window traverses the image, computing Haar-like features at various positions and scales. The classifier evaluates these features, determining whether they match learned patterns to identify the object. Additionally, thresholding is employed to minimize false positives, enhancing detection accuracy. Overall, the Haar Cascade approach combines speed and accuracy, making it well-suited for real-time object detection tasks.
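The speed of Haar-like features comes from the integral image (summed-area table), which lets the sum over any rectangle be computed with just four lookups regardless of its size. A minimal sketch of that idea, assuming a plain 2-D list as the grayscale image (the function names are illustrative, not OpenCV's API):

```python
def integral_image(img):
    """Summed-area table: ii[y][x] = sum of img[0..y][0..x]."""
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y][x] = row_sum + (ii[y - 1][x] if y > 0 else 0)
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of the rectangle with top-left (x, y) and size w x h,
    using four lookups into the integral image."""
    a = ii[y - 1][x - 1] if x > 0 and y > 0 else 0
    b = ii[y - 1][x + w - 1] if y > 0 else 0
    c = ii[y + h - 1][x - 1] if x > 0 else 0
    d = ii[y + h - 1][x + w - 1]
    return d - b - c + a

def two_rect_haar_feature(ii, x, y, w, h):
    """A simple horizontal edge feature: left half minus right half."""
    half = w // 2
    return rect_sum(ii, x, y, half, h) - rect_sum(ii, x + half, y, half, h)

# A tiny 4x4 test image: bright left half, dark right half.
img = [[1, 1, 0, 0] for _ in range(4)]
ii = integral_image(img)
print(rect_sum(ii, 0, 0, 4, 4))               # 8: total intensity
print(two_rect_haar_feature(ii, 0, 0, 4, 4))  # 8: strong left-vs-right edge
```

A trained cascade evaluates thousands of such features per window, but each one still costs only a handful of lookups, which is what makes real-time detection on a Raspberry Pi feasible.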
The Eye Aspect Ratio (EAR) plays a crucial role in driver drowsiness detection, serving as the key metric for quantifying fatigue from the eye landmarks. For each eye it is computed from six landmarks as the ratio of the vertical distances between the upper and lower eyelids to the horizontal width of the eye. While the eyes are open the EAR stays roughly constant; as a driver becomes drowsy the eyelids droop and the ratio falls toward zero. By continuously monitoring the EAR in real time using a camera installed in the vehicle, the system can detect patterns indicative of drowsiness: a sustained drop in EAR suggests the driver's eyes are closing or nearly closed, signaling an increased risk of falling asleep at the wheel.
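The widely used formulation computes EAR from the six eye landmarks p1..p6 as EAR = (‖p2−p6‖ + ‖p3−p5‖) / (2·‖p1−p4‖), where p1 and p4 are the horizontal eye corners. A minimal sketch with synthetic landmark points (the example coordinates are made up for illustration):

```python
from math import dist

def eye_aspect_ratio(eye):
    """Compute the Eye Aspect Ratio (EAR) from six eye landmarks.

    `eye` is a list of six (x, y) points in the usual 68-point ordering:
    p1 and p4 are the eye corners, p2/p6 and p3/p5 the eyelid pairs.
    """
    vertical_1 = dist(eye[1], eye[5])  # |p2 - p6|
    vertical_2 = dist(eye[2], eye[4])  # |p3 - p5|
    horizontal = dist(eye[0], eye[3])  # |p1 - p4|
    return (vertical_1 + vertical_2) / (2.0 * horizontal)

# A roughly open eye: tall vertical gaps relative to its width.
open_eye = [(0, 0), (1, 2), (3, 2), (4, 0), (3, -2), (1, -2)]
# A nearly closed eye: the eyelid points collapse toward the center line.
closed_eye = [(0, 0), (1, 0.2), (3, 0.2), (4, 0), (3, -0.2), (1, -0.2)]
print(eye_aspect_ratio(open_eye))    # larger value (eyes open)
print(eye_aspect_ratio(closed_eye))  # much smaller value (eyes closing)
```

In practice the EARs of the left and right eyes are averaged per frame before being compared against the drowsiness threshold.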
Circuit Diagram and Code
For the connections between the display and the Raspberry Pi, please see the following page from the Raspberry Pi Foundation:
https://www.raspberrypi.com/documentation/accessories/display.html
Plug the USB camera into any USB port of the Raspberry Pi board. (If you are using a Raspberry Pi camera on the CSI connector, make sure to enable the camera interface first.)
All the required files for the project can be accessed via the link below:
https://drive.google.com/drive/folders/18JBHIukuy94GFZHltv2YQyrXBIT4b-yZ?usp=sharing
Copy the required files (except the "Driver Drowsiness Detection.mp4" video file) to the Raspberry Pi board.
Setup and Testing
Step 1: Copy the source code onto the Raspberry Pi.
Step 2: Open a terminal and navigate to the file directory using the 'cd' command (e.g., cd Desktop).
Step 3: Run the Python file with the command "python3 drowsiness_yawn.py" in the terminal.
Note: Make sure your Raspberry Pi is connected to the webcam before running the project. For audio output, connect either a Bluetooth speaker or a wired speaker via the AUX jack to the Raspberry Pi board.
Project Demonstration:
The system was tested for drowsiness detection on several occasions; its performance was found to be consistent, and the results were recorded.
The following table shows the drowsiness detection results based on the amount of time the eyes were open or closed:
Table 1 indicates that if the driver's eyes close for less than 3 seconds, the alert will not activate, as this is considered a normal blink. However, if the eyes stay closed for more than 3 seconds, the alert is triggered to warn the driver that they may be drowsy and need to take a break or rest. This feature is designed to enhance road safety and prevent accidents caused by driver fatigue.
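The 3-second rule can be sketched as a small timer that resets whenever the eyes open and fires only once they have stayed closed past the threshold. The class and method names below are our own, not from the project's source:

```python
class ClosureTimer:
    """Track how long the eyes have been continuously closed and decide
    whether to raise a drowsiness alert. Timestamps are in seconds."""

    def __init__(self, alert_after=3.0):
        self.alert_after = alert_after
        self.closed_since = None  # timestamp when the eyes first closed

    def update(self, eyes_closed, now):
        """Feed one frame's eye state; return True if the alert should fire."""
        if not eyes_closed:
            self.closed_since = None  # eyes open or a normal blink: reset
            return False
        if self.closed_since is None:
            self.closed_since = now
        return (now - self.closed_since) >= self.alert_after

timer = ClosureTimer(alert_after=3.0)
print(timer.update(True, 0.0))   # just closed -> False
print(timer.update(True, 2.0))   # closed 2 s  -> False (normal blink range)
print(timer.update(True, 3.1))   # closed 3.1 s -> True (drowsiness alert)
```

In the running system, `eyes_closed` would be the per-frame result of comparing the averaged EAR against its threshold.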
Table 2 indicates that, in the case of yawning, there is no time threshold, only a threshold on the mouth-opening value. This helps to alert the driver quickly, before drowsiness sets in.
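The yawn check can therefore be sketched as an instantaneous threshold on the mouth opening, with no time component. The exact metric and threshold in the project's script are not reproduced here; this version uses the average vertical lip gap over an illustrative subset of the mouth landmarks, and the threshold value is an assumption:

```python
def lip_distance(mouth):
    """Average vertical gap between the upper and lower lip.

    `mouth` is the list of 20 mouth points (indices 48-67 of the 68-point
    model); the point subsets below are an illustrative choice.
    """
    top = sum(p[1] for p in mouth[2:5]) / 3.0      # a few upper-lip points
    bottom = sum(p[1] for p in mouth[8:11]) / 3.0  # a few lower-lip points
    return abs(top - bottom)

def is_yawning(mouth, threshold=20.0):
    """Fire the yawn alert as soon as the opening exceeds the threshold;
    no time component, matching Table 2. The threshold is illustrative."""
    return lip_distance(mouth) > threshold

# Synthetic examples: a flat (closed) mouth and a wide-open one.
closed_mouth = [(i, 0) for i in range(20)]
open_mouth = [(i, 30) if 8 <= i <= 10 else (i, 0) for i in range(20)]
print(is_yawning(closed_mouth))  # False
print(is_yawning(open_mouth))    # True
```

A suitable threshold depends on camera distance and resolution, so it would normally be tuned against recorded footage of the actual setup.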
Conclusion
The main purpose of the Driver Drowsiness Detection System that we developed is to ensure the safety of drivers and passengers traveling in vehicles. Although a lot of research has been done in this area, our system stands out for its cost-effectiveness, the ease of implementing the IoT system, and the reliability of its sound-based alerts. The system is precise enough to differentiate between lip movements to determine whether the driver is simply talking or is yawning and tired. It can also reliably differentiate between normal blinking and drowsiness, even when the driver is wearing spectacles, thus helping reduce the number of accidents and fatalities caused by drowsy driving.
Future Scope
In the next phase, we plan to refine the system in three ways. First, add emergency contacts so the system can send alerts in an emergency. Second, improve the quality of the training dataset to increase the system's accuracy. Third, improve the camera setup for better performance in night conditions.
Happy Tinkering!!!