Emotion Recognition Using BrainyPi
by PIAI23JAN1020
Emotion recognition using deep learning involves training a neural network to identify emotions in human faces from visual cues such as facial expressions; related systems also draw on body posture and tone of voice. The network is trained on large datasets of images and videos labelled with the corresponding emotions, such as happiness, sadness, and anger. The model learns to recognize patterns in the data that correspond to different emotions and can then use this knowledge to predict the emotions in new, unseen data.
The process typically involves using convolutional neural networks (CNNs) or recurrent neural networks (RNNs) to extract features from the input data and make predictions from those features. These networks are optimized with gradient-based algorithms, and their performance is evaluated using metrics such as accuracy, precision, and recall.
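The metrics themselves are straightforward to compute. The snippet below is a minimal illustration using scikit-learn (not one of this project's listed libraries, so treat it purely as an example) on made-up labels.

# Illustrative only: accuracy, precision, and recall on made-up labels.
from sklearn.metrics import accuracy_score, precision_score, recall_score

y_true = [0, 1, 2, 2, 1, 0, 1, 2]  # hypothetical ground-truth classes
y_pred = [0, 1, 2, 1, 1, 0, 0, 2]  # hypothetical model predictions

print('accuracy :', accuracy_score(y_true, y_pred))
# 'macro' averages the per-class scores, which suits a multi-class problem
print('precision:', precision_score(y_true, y_pred, average='macro'))
print('recall   :', recall_score(y_true, y_pred, average='macro'))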
Overall, deep learning has shown promising results for emotion recognition, with some state-of-the-art models achieving high accuracy on benchmark datasets. However, the field remains an active area of research, with open challenges such as handling variation in lighting, facial expression, and other factors.
Emotion recognition can be implemented on a Raspberry Pi using machine learning techniques such as deep learning. The process involves training a neural network on a dataset of facial expressions mapped to specific emotions; the board can then run the trained model on incoming video data from a camera to detect emotions in real time. However, emotion recognition on a Raspberry Pi can be computationally intensive, so it is important to optimize the model for efficient inference on the Pi's hardware; the same consideration applies to the BrainyPi used here, and one common approach is sketched below.
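One common way to make inference cheaper on small boards is to convert the trained Keras model to TensorFlow Lite. The snippet below is a minimal sketch of that idea, not the method used in this project; the file names are hypothetical.

# Hedged sketch: convert a saved Keras model to TensorFlow Lite.
# File names are hypothetical placeholders.
import tensorflow as tf

model = tf.keras.models.load_model('emotion_model.h5')

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enables quantization
tflite_model = converter.convert()

with open('emotion_model.tflite', 'wb') as f:
    f.write(tflite_model)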
Supplies
Libraries used: NumPy, Pandas, Keras, TensorFlow, OpenCV, os
Implementation on: BrainyPi
https://brainypi.com/docs/13-opencv-examples/
About the Dataset
The dataset has around 9,000 images, with 1,200+ images per class, and separate train, validation, and test splits (a loading sketch is given after the class list).
Total number of classes - 7
Classes - Angry, Surprise, Neutral, Happy, Sad, Fear and Disgust.
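Assuming the images are organised into one folder per class (a common layout; the paths below are hypothetical, not the project's actual directory names), the splits can be loaded with Keras' ImageDataGenerator:

# Hedged sketch: load 48x48 grayscale face images from per-class folders.
# Directory paths are assumptions, not the project's actual layout.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(rescale=1.0 / 255)  # scale pixels to [0, 1]

train_gen = datagen.flow_from_directory(
    'dataset/train',             # hypothetical path
    target_size=(48, 48),
    color_mode='grayscale',
    class_mode='categorical',    # one-hot labels for the 7 classes
    batch_size=64)

val_gen = datagen.flow_from_directory(
    'dataset/validation',        # hypothetical path
    target_size=(48, 48),
    color_mode='grayscale',
    class_mode='categorical',
    batch_size=64)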
Model Architecture
# Build the CNN with the Keras Sequential API
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (Conv2D, BatchNormalization, Activation,
                                     MaxPooling2D, Dropout, Flatten, Dense)
from tensorflow.keras.optimizers import Adam

no_of_classes = 7

model = Sequential()

# 1st CNN layer
model.add(Conv2D(64, (3, 3), padding='same', input_shape=(48, 48, 1)))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))

# 2nd CNN layer
model.add(Conv2D(128, (5, 5), padding='same'))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))

# 3rd CNN layer
model.add(Conv2D(512, (3, 3), padding='same'))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))

# 4th CNN layer
model.add(Conv2D(512, (3, 3), padding='same'))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))

model.add(Flatten())

# Fully connected 1st layer
model.add(Dense(256))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Dropout(0.25))

# Fully connected 2nd layer
model.add(Dense(512))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Dropout(0.25))

# Output layer
model.add(Dense(no_of_classes))
model.add(Activation('softmax'))

model.compile(optimizer=Adam(learning_rate=0.0001),
              loss='categorical_crossentropy',
              metrics=['accuracy'])

# Total epochs used for training: 48 (see the training sketch below)
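Putting the pieces together, the training call might look like the sketch below; it reuses the model defined above and the hypothetical train_gen / val_gen generators from the dataset section, so it is not standalone.

# Hedged sketch: train for the stated 48 epochs and save the weights.
# Assumes `model` (above) and `train_gen` / `val_gen` (dataset sketch).
history = model.fit(train_gen,
                    validation_data=val_gen,
                    epochs=48)
model.save('emotion_model.h5')  # hypothetical file name, reused later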
Accessing BrainyPi
Step 1: Remotely access the BrainyPi using a terminal.
Step 2: Git clone the repository to access all the project files.
Step 3: Run the Python code and detect emotions using the trained model.
A sketch of these steps is shown below.
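As a rough sketch, the three steps might look like the following in a terminal; the address, user name, repository URL, and script name are placeholders, not the project's actual values.

# Step 1: remote access to the BrainyPi over SSH (placeholder address/user)
ssh pi@<brainypi-ip-address>

# Step 2: clone the project repository (placeholder URL)
git clone https://github.com/<user>/<emotion-recognition-repo>.git
cd <emotion-recognition-repo>

# Step 3: run the detection script (placeholder file name)
python3 emotion_detection.py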
Results of the Trained Model
The model was trained with the same architecture on a local computer and uploaded to the Git repository. The maximum accuracy achieved after tuning the parameters was 72.84%. The saved model was then run on the BrainyPi.
Running the Model on BrainyPi
The model was tested on the BrainyPi using OpenCV. Given a series of images, the trained model recognised almost every one accurately. A minimal inference sketch follows.
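The exact test script lives in the project repository; as a minimal sketch, single-image inference with OpenCV and the trained model could look like this. The model file name, test image, and label order are assumptions.

# Hedged sketch: detect a face with OpenCV's bundled Haar cascade and
# classify it with the trained model. File names and label order assumed.
import cv2
import numpy as np
from tensorflow.keras.models import load_model

labels = ['Angry', 'Disgust', 'Fear', 'Happy', 'Neutral', 'Sad', 'Surprise']
model = load_model('emotion_model.h5')  # hypothetical file name

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

img = cv2.imread('test_image.jpg')      # hypothetical test image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
    face = cv2.resize(gray[y:y + h, x:x + w], (48, 48))
    face = face.astype('float32') / 255.0   # same scaling as training
    face = face.reshape(1, 48, 48, 1)       # batch, height, width, channels
    pred = model.predict(face)
    print('Predicted emotion:', labels[int(np.argmax(pred))])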
Applications
Disney uses emotion-detection technology to gauge audience opinion of a completed project, while other brands have used it to directly inform advertising and digital marketing. Kellogg's is one high-profile example, having used Affectiva's software to test audience reaction to ads for its cereal. Unilever uses HireVue's AI-powered technology to screen prospective candidates based on factors like body language and mood. In the same vein, this project lends itself to applications such as measuring reactions to products and movie teasers, analysing natural interview expressions, processing recordings, and so on.
References
https://www.ijert.org/audience-feedback-analysis-using-emotion-recognition
https://medium.com/analytics-vidhya/feedback-system-using-facial-emotion-recognition-e4554157a060