Face Emoji Week 6

By AI Club on 3/24/2025

    Week 6 of Face Emoji - Testing Your Model in Real-Time

    Welcome to Week 6 of our Face-Emoji project! Last week you worked on gathering your dataset and training your PyTorch model using one of the two paths we outlined. Now it's time to put your trained model to the test in a real-time environment.

    Whether you chose Path 1 (training on images) or Path 2 (training on landmarks), this week you'll be integrating your model with the webcam feed to detect emotions in real-time. Let's see how your model performs with live data!

    Testing Your Model in Real-Time

    Regardless of which path you chose, the overall structure of what you need to do this week is similar:

    1. Load your trained model

    2. Set up the webcam feed with MediaPipe

    3. Process frames and run inference with your model

    4. Display the detected emotion on screen

    However, the specific implementation details will differ depending on your chosen path.
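Whichever path you chose, the outer loop looks roughly the same. Below is a minimal sketch of that loop using OpenCV for the webcam feed. The `predict_emotion` function is a hypothetical placeholder that stands in for your path-specific preprocessing and inference, and `EMOTIONS` is an assumed label list; swap in whatever you actually trained on.

```python
import cv2

# Hypothetical label list -- replace with the classes you actually trained on.
EMOTIONS = ["angry", "happy", "neutral", "sad", "surprised"]

def predict_emotion(frame_rgb):
    """Placeholder: run your path-specific preprocessing + model here."""
    return "neutral"

cap = cv2.VideoCapture(0)              # open the default webcam
while cap.isOpened():
    ok, frame = cap.read()             # frame comes back in BGR (OpenCV convention)
    if not ok:
        break

    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # MediaPipe expects RGB
    emotion = predict_emotion(rgb)

    # Draw the predicted label in the top-left corner of the frame
    cv2.putText(frame, str(emotion), (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
    cv2.imshow("Face Emoji", frame)

    if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to quit
        break

cap.release()
cv2.destroyAllWindows()
```

The two sections below fill in what `predict_emotion` looks like for each path.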

    For Path 1: Testing Image-Based Models

If you trained your model on facial expression images, you'll need to do the following (a rough code sketch follows this list):

    1. Load your trained PyTorch model from the saved file

    2. Use MediaPipe to detect faces in the webcam feed

    3. For each detected face:

      • Crop the face region from the frame

      • Resize the cropped image to match your training data dimensions

      • Preprocess the image (normalization, etc.) to match your training process

      • Convert the processed image to a PyTorch tensor

      • Run inference with your model to get the predicted emotion

      • Display the emotion label near the detected face
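Here is one possible sketch of those steps using MediaPipe Face Detection and a saved PyTorch model. The `EmotionCNN` class, the `emotion_model.pt` file name, and the 48x48 grayscale input size are assumptions; use the class, checkpoint, and preprocessing from your own training script, and the `EMOTIONS` list from the loop sketch above.

```python
import cv2
import mediapipe as mp
import numpy as np
import torch

# Assumed: EmotionCNN is the model class you defined during training,
# and emotion_model.pt holds its saved state_dict.
model = EmotionCNN()
model.load_state_dict(torch.load("emotion_model.pt", map_location="cpu"))
model.eval()

mp_face_detection = mp.solutions.face_detection

def predict_emotion(frame_rgb, detector):
    results = detector.process(frame_rgb)
    if not results.detections:
        return None                      # no face in this frame

    # Convert the first detection's relative bounding box to pixel coordinates
    h, w, _ = frame_rgb.shape
    box = results.detections[0].location_data.relative_bounding_box
    x, y = max(int(box.xmin * w), 0), max(int(box.ymin * h), 0)
    bw, bh = int(box.width * w), int(box.height * h)
    face = frame_rgb[y:y + bh, x:x + bw]
    if face.size == 0:
        return None

    # Preprocess exactly as in training (assumed here: 48x48 grayscale, scaled to [0, 1])
    gray = cv2.cvtColor(face, cv2.COLOR_RGB2GRAY)
    gray = cv2.resize(gray, (48, 48)).astype(np.float32) / 255.0
    tensor = torch.from_numpy(gray).unsqueeze(0).unsqueeze(0)  # shape (1, 1, 48, 48)

    with torch.no_grad():
        logits = model(tensor)
    return EMOTIONS[int(torch.argmax(logits, dim=1))]
```

Create the detector once outside the webcam loop, e.g. `with mp_face_detection.FaceDetection(min_detection_confidence=0.5) as detector:`, and pass it in each frame rather than constructing it per frame.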

    Important Considerations for Path 1:

    • Make sure your image preprocessing matches exactly what you did during training

    • Watch out for different lighting conditions affecting your model's performance

• Consider adding a confidence threshold so an emotion is only displayed when the model is reasonably confident (see the snippet below)
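For the confidence threshold, one common approach is to softmax the logits and only show a label when the top probability clears a cutoff. A minimal sketch, reusing `model`, `tensor`, and `EMOTIONS` from the Path 1 sketch above (the 0.6 value is just an example to tune):

```python
import torch
import torch.nn.functional as F

CONFIDENCE_THRESHOLD = 0.6  # example value -- tune it against your own model

with torch.no_grad():
    probs = F.softmax(model(tensor), dim=1)
conf, idx = torch.max(probs, dim=1)

if conf.item() >= CONFIDENCE_THRESHOLD:
    label = EMOTIONS[int(idx)]
else:
    label = None  # not confident enough, so don't display an emotion this frame
```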

    For Path 2: Testing Landmark-Based Models

If you trained your model on MediaPipe facial landmarks, your workflow will be as follows (again, a rough sketch follows the list):

    1. Load your trained PyTorch model from the saved file

    2. Set up the webcam feed with MediaPipe Face Mesh (not just Face Detection)

    3. For each frame:

      • Extract facial landmarks using MediaPipe

      • Format the landmarks to match your training data structure

      • Convert the landmarks to a PyTorch tensor

      • Run inference with your model to get the predicted emotion

      • Display the emotion label near the detected face
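A possible sketch of those steps with MediaPipe Face Mesh is below. The `EmotionMLP` class, the `emotion_landmark_model.pt` file name, and the (x, y, z)-per-landmark flattening are assumptions; the input size and landmark ordering must match whatever you used when you built your training data. `EMOTIONS` is the same assumed label list from the earlier sketch.

```python
import mediapipe as mp
import numpy as np
import torch

# Assumed: EmotionMLP is the model class from your training script, with an input
# size matching your flattened landmark vector (e.g. 468 landmarks x 3 coords).
model = EmotionMLP()
model.load_state_dict(torch.load("emotion_landmark_model.pt", map_location="cpu"))
model.eval()

mp_face_mesh = mp.solutions.face_mesh

def predict_emotion(frame_rgb, face_mesh):
    results = face_mesh.process(frame_rgb)
    if not results.multi_face_landmarks:
        return None                      # no face in this frame

    # Flatten the first face's landmarks into one (x, y, z, x, y, z, ...) vector,
    # in the same order you used when building your training data.
    landmarks = results.multi_face_landmarks[0].landmark
    coords = np.array([[lm.x, lm.y, lm.z] for lm in landmarks],
                      dtype=np.float32).flatten()
    tensor = torch.from_numpy(coords).unsqueeze(0)  # shape (1, num_landmarks * 3)

    with torch.no_grad():
        logits = model(tensor)
    return EMOTIONS[int(torch.argmax(logits, dim=1))]
```

As with Path 1, create the mesh once outside the loop, e.g. `with mp_face_mesh.FaceMesh(max_num_faces=1) as face_mesh:`, and call `predict_emotion` on each frame.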

    Important Considerations for Path 2:

    • Ensure you're extracting and formatting landmarks consistently with your training process

    • Consider handling cases where multiple faces are detected (if your application supports this)

    • Think about how to handle frames where no face is detected
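For the multiple-face and no-face cases, one simple pattern is to label each detected face separately when there are several, and to fall back to the last prediction when no face appears for a frame so the display doesn't flicker. A rough sketch of that logic, reusing the `face_mesh`, `cap`, and webcam loop from the sketches above; `classify_landmarks` is a hypothetical helper that wraps the flatten-and-infer steps:

```python
last_emotion = None  # initialised once, before the webcam loop starts

# ... inside the webcam loop, after converting the frame to RGB ...
results = face_mesh.process(rgb)

if results.multi_face_landmarks:
    # Classify every detected face (FaceMesh caps this at max_num_faces)
    emotions = [classify_landmarks(lms) for lms in results.multi_face_landmarks]
    last_emotion = emotions[0]
else:
    # No face this frame: reuse the last prediction instead of showing nothing
    emotions = [last_emotion] if last_emotion else []
```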

    Looking Ahead: Emoji Mapping

Next week, we will focus on mapping the detected emotions to emojis, and we will come back with some suggestions on libraries or methods that can help with that. Then, in the final week (Week 8), our focus will be on setting up a simple front-end to package your application.
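If you want a head start on that mapping, a plain Python dictionary from your emotion labels to emoji characters is often enough. The labels and emoji below are just an illustration; use whatever classes your model actually predicts.

```python
# Hypothetical mapping -- substitute the labels your model actually outputs.
EMOTION_TO_EMOJI = {
    "happy": "😀",
    "sad": "😢",
    "angry": "😠",
    "surprised": "😮",
    "neutral": "😐",
}

emoji = EMOTION_TO_EMOJI.get(predicted_emotion, "❓")  # fallback for unknown labels
```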

    Evaluation Tips

    As you test your models this week, keep these evaluation points in mind:

• Speed: How many frames per second (FPS) can your system process? Is it fast enough for a smooth user experience? (A quick way to measure this is sketched after this list.)

    • Accuracy: How well does your model detect emotions in different lighting conditions and for different people?

    • Robustness: Does your system handle edge cases well (glasses, different facial orientations, etc.)?
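A quick way to measure frames per second is to time each loop iteration. Here is a minimal sketch you can drop into the webcam loop from earlier; it computes an instantaneous FPS and draws it on the frame.

```python
import time

prev_time = time.time()
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break

    # ... face detection + emotion inference for this frame goes here ...

    now = time.time()
    fps = 1.0 / (now - prev_time)   # instantaneous FPS; average over frames if noisy
    prev_time = now
    cv2.putText(frame, f"{fps:.1f} FPS", (10, 60),
                cv2.FONT_HERSHEY_SIMPLEX, 0.8, (255, 0, 0), 2)
```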

Wrapping Up

    Remember, your goal this week is a working real-time emotion detection system. Focus on getting the core functionality working before adding additional features. Good luck, and feel free to collaborate and share your experiences with your fellow club members!
