PlumX Metrics

Model Explainability for Masked Face Recognition

Lecture Notes in Networks and Systems, ISSN: 2367-3389, Vol. 755 LNNS, pp. 359-368
2023
  • Citations: 0
  • Usage: 0
  • Captures: 2
  • Mentions: 0
  • Social Media: 0

Conference Paper Description

With the continued spread of COVID-19 and its variants, regulatory agencies have emphasized the importance of face masks, especially in public areas. Face recognition systems already deployed by various organizations must therefore be recalibrated to handle subjects wearing face masks. Modern face recognizers, composed of various face detection and classification algorithms, appear as a black box to the end user. This opacity becomes a concern in highly sensitive scenarios and consequently raises trust issues toward the model deployed in the background. The behavior of an image classification model can be interpreted for this purpose using LIME (Local Interpretable Model-Agnostic Explanations), which identifies the features, or super-pixels, responsible for a particular prediction. This work uses LIME to investigate the local features of a target image that help the classifier make its prediction. A vanilla CNN model trained on 7553 face images was selected; it achieves a classification accuracy of 98.19%, and the heatmaps reveal that the model makes accurate predictions by learning the structure of a face with and without a mask.
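
For illustration, the explanation step described above can be sketched with the open-source lime package. This is a minimal example under stated assumptions, not the authors' actual code: it presumes a trained Keras classifier `model` that outputs probabilities for the two classes (mask / no-mask) and a preprocessed RGB face image `img` as a float array in [0, 1]; both names are hypothetical.

```python
# Minimal LIME sketch for a masked-face classifier. Assumes `model` and
# `img` already exist as described in the lead-in; names are illustrative.
import numpy as np
from lime import lime_image
from skimage.segmentation import mark_boundaries
import matplotlib.pyplot as plt

def predict_fn(images):
    # LIME passes a batch of perturbed images and expects an
    # (n_samples, n_classes) array of class probabilities.
    return model.predict(np.asarray(images))

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    img,               # target face image to explain
    predict_fn,        # black-box prediction function
    top_labels=2,      # explain both classes
    hide_color=0,      # perturbations black out super-pixels
    num_samples=1000,  # perturbed samples for the local surrogate model
)

# Overlay the super-pixels that most support the top predicted class,
# i.e. the regions a heatmap like the paper's would highlight.
temp, mask = explanation.get_image_and_mask(
    explanation.top_labels[0],
    positive_only=True,
    num_features=5,
    hide_rest=False,
)
plt.imshow(mark_boundaries(temp, mask))
plt.axis("off")
plt.show()
```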
