Advanced Configurations for Real-Time Emotion Recognition

Facial Emotion Recognition with OpenCV

Introduction

This guide provides comprehensive instructions on using the facial emotion recognition code that combines WebRTC and OpenCV for real-time communication and advanced facial analysis. The code, hosted on AWS EC2, aims to create an immersive telemedicine experience with features such as emotion detection and face recognition.

Getting Started

Prerequisites

Before using the provided code, ensure that you have the following:

  • Python installed on your system (recommended version: 3.7 or later).

  • OpenCV library (cv2) installed. You can install it using the following command:

    pip install opencv-python
    

Code Setup

  1. Clone the repository containing the facial emotion recognition code:

    git clone https://github.uconn.edu/mrd19007/CSESDP32.git
  2. Navigate to the opencv directory inside the cloned repository:

    cd opencv
  3. Open the script in your preferred Python editor or IDE.

Running the Code

Once the setup and configuration are complete, you can run the facial emotion recognition code. Follow these steps:

  1. Execute the script in your Python environment:

    python expression_ssd_detect.py
  2. The script opens a real-time video stream with facial emotion recognition; the detected emotions are drawn on the video frames (a minimal sketch of this pattern follows).
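
The exact inference code lives in expression_ssd_detect.py; what follows is only a minimal sketch of the general capture-and-display pattern, with detect_emotions() as a hypothetical placeholder for the script's SSD face detection and emotion classification.

    import cv2

    def detect_emotions(frame):
        # Hypothetical placeholder: the real script runs an SSD face
        # detector plus an emotion classifier and returns boxes + labels.
        return []  # list of (x, y, w, h, label) tuples

    cap = cv2.VideoCapture(0)  # webcam input; see Advanced Configurations
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        for (x, y, w, h, label) in detect_emotions(frame):
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
            cv2.putText(frame, label, (x, y - 10),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.9, (0, 255, 0), 2)
        cv2.imshow('Facial Emotion Recognition', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):  # press 'q' to quit
            break
    cap.release()
    cv2.destroyAllWindows()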

Advanced Configurations

(Tentative) Customizing Input Sources for Emotion Detection

Within the FER_live_cam() function, you can tailor the input source for emotion detection to your needs.

Using a Pre-Recorded Video

To analyze emotions in a pre-recorded video, place the video in the "opencv" directory and replace the placeholder 'video_name_here.mp4' with the actual name of your video in the following line:

    cap = cv2.VideoCapture('video_name_here.mp4')

This lets you apply emotion detection to any recorded footage, which is useful for analyzing emotions in previously captured scenarios.

Real-time Emotion Detection via Webcam

Alternatively, if you prefer real-time analysis through your webcam, uncomment the webcam line and comment out the video-file line:

    # Uncomment the line below for webcam input
    cap = cv2.VideoCapture(0)

    # Comment out the line below when using the webcam
    # cap = cv2.VideoCapture('video_name_here.mp4')

Ensure that your system has an active camera connected. This option lets you perform emotion analysis on live video feeds, making it suitable for interactive applications or immediate-response scenarios.

Supporting both input sources keeps the emotion detection system adaptable, from offline analysis of recorded data to real-time interaction with a live audience. Experiment with both options to find the approach that best fits your application; the sketch below shows one way to make the switch a single-line change.
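
If you switch between the two sources often, a small flag can replace the comment-and-uncomment routine. This is a sketch only; USE_WEBCAM is a hypothetical variable, not something defined in FER_live_cam():

    import cv2

    USE_WEBCAM = True  # hypothetical flag; flip to False for file input

    if USE_WEBCAM:
        cap = cv2.VideoCapture(0)                      # live webcam feed
    else:
        cap = cv2.VideoCapture('video_name_here.mp4')  # pre-recorded video

    if not cap.isOpened():
        raise RuntimeError('Could not open the selected video source')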

Modifying Facial Analysis Parameters

The facial analysis parameters in the script provide a means of fine-tuning the behavior of the facial emotion recognition system. Understanding and adjusting these parameters can significantly impact the accuracy and performance of the emotion detection process.

1. image_mean

The image_mean parameter represents the mean values used for image normalization across the RGB channels. Image normalization is a common preprocessing step in deep learning models. Adjusting image_mean involves changing the baseline values used to center the pixel values, influencing the model's perception of color and intensity.

2. image_std

The image_std parameter is the standard deviation used for image normalization. Similar to image_mean, adjusting image_std alters the scale of pixel values, affecting the normalization process. Proper normalization contributes to stable model training and improved performance.
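
To make the role of these two parameters concrete, the snippet below shows the typical normalization step applied to a frame before inference. The values are illustrative, not necessarily the script's defaults; check the script for the values the model was actually trained with.

    import numpy as np

    # Illustrative values only, not necessarily the script's defaults.
    image_mean = np.array([127.0, 127.0, 127.0], dtype=np.float32)
    image_std = 128.0

    # Stand-in for a captured BGR frame.
    frame = np.random.randint(0, 256, (240, 320, 3), dtype=np.uint8)

    # Center the pixel values with image_mean, then scale with image_std,
    # so the network sees inputs roughly in the range [-1, 1].
    normalized = (frame.astype(np.float32) - image_mean) / image_std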

3. iou_threshold

The iou_threshold (Intersection over Union) is a critical parameter in object detection tasks, determining the threshold for bounding box matches. A higher threshold may lead to more precise bounding boxes but may miss some detections, while a lower threshold could result in more detections with potential overlaps. Finding the right balance is essential for accurate and reliable facial emotion recognition.
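
For reference, IoU is the overlap area of two boxes divided by the area of their union. A minimal computation, assuming boxes given as (x1, y1, x2, y2) corners:

    def iou(box_a, box_b):
        # Boxes as (x1, y1, x2, y2); returns intersection area / union area.
        ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
        ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
        area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
        union = area_a + area_b - inter
        return inter / union if union > 0 else 0.0

    # Two strongly overlapping detections of the same face:
    print(iou((10, 10, 110, 110), (30, 30, 130, 130)))  # ~0.47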

How These Parameters Affect the Output

  • Accuracy: Tweaking image_mean and image_std can impact the model's ability to detect facial features accurately. Experimenting with different values allows you to find the optimal normalization settings for your specific dataset.

  • Bounding Box Precision: Adjusting iou_threshold influences the precision of the bounding boxes around detected faces. Higher values may yield more conservative bounding boxes, while lower values may result in larger and potentially overlapping bounding boxes.

Example Adjustments

  • If the detected emotions seem inaccurate or inconsistent, try experimenting with different values for image_mean and image_std to achieve better color normalization.

  • If the bounding boxes appear too conservative or too large, adjust the iou_threshold to find a balance that suits your specific use case; a starting point for such an experiment is sketched below.
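
As a concrete starting point, one such experiment might look like the following; the values are illustrative, not recommendations:

    import numpy as np

    # Hypothetical experiment: adjust one parameter at a time, re-run the
    # script, and compare the detections side by side.
    image_mean = np.array([127.0, 127.0, 127.0])  # baseline color centering
    image_std = 128.0                             # pixel scaling factor
    iou_threshold = 0.3                           # try values in [0.2, 0.6]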

Exploring AWS Rekognition Integration

We are currently exploring the integration of Amazon Rekognition for facial and emotion analysis. The provided code offers a solid foundation, and transitioning to Amazon Rekognition could simplify the implementation and provide more detailed emotion analysis, potentially serving as an effective replacement for the current pipeline.
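
As a taste of what that transition could look like, here is a minimal sketch using boto3 and the Rekognition detect_faces API. It assumes AWS credentials are already configured, and 'frame.jpg' is a placeholder file name:

    import boto3

    client = boto3.client('rekognition')

    # Send one captured frame to Rekognition; requesting 'ALL' attributes
    # includes the per-face Emotions list with confidence scores.
    with open('frame.jpg', 'rb') as f:
        response = client.detect_faces(
            Image={'Bytes': f.read()},
            Attributes=['ALL'],
        )

    for face in response['FaceDetails']:
        # Report the highest-confidence emotion for each detected face.
        top = max(face['Emotions'], key=lambda e: e['Confidence'])
        print(f"{top['Type']}: {top['Confidence']:.1f}%")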

The End

Congratulations! You've successfully set up and run the facial emotion recognition code using OpenCV. Feel free to experiment with advanced configurations and explore the potential integration of Amazon Rekognition for enhanced facial analysis.

If you encounter any issues or have specific questions, refer to the script comments and documentation for guidance. Happy coding!