Model Explainability for Masked Face Recognition
Lecture Notes in Networks and Systems, ISSN: 2367-3389, Vol. 755 LNNS, pp. 359-368, 2023
Metrics Details
- Captures: 2
- Readers: 2
Conference Paper Description
With the continued spread of COVID-19 and its variants, regulatory agencies have emphasized the importance of face masks, especially in public areas. Face recognition systems already deployed by various organizations must therefore be recalibrated to handle subjects wearing a face mask. Modern face recognizers, composed of face detection and classification algorithms, appear as a black box to the end user, and this opacity becomes a concern in highly sensitive scenarios, raising trust issues toward the model deployed in the background. The behavior of an image classification model can be interpreted for this purpose using LIME (Local Interpretable Model-Agnostic Explanations), which identifies the features, or superpixels, responsible for a particular prediction. This work investigates, using LIME, the local features of a target image that help the classifier make a prediction. A vanilla CNN model trained on 7553 face images was selected. The model exhibits a classification accuracy of 98.19%, and the heatmaps reveal that it makes accurate predictions by learning the structure of a face with and without a mask.
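The paper does not reproduce its implementation here, but the LIME workflow it describes is straightforward to sketch with the `lime` Python package and a Keras CNN. The snippet below shows how superpixel explanations of the kind summarized in the abstract can be generated; the model file `mask_cnn.h5`, the image `face.jpg`, the 128x128 input size, and the [0, 1] input scaling are illustrative assumptions, not details taken from the paper.

```python
# Minimal LIME sketch for a binary masked/unmasked face classifier.
# All file names and preprocessing choices below are hypothetical.
import numpy as np
from tensorflow import keras
from lime import lime_image
from skimage.segmentation import mark_boundaries
import matplotlib.pyplot as plt

model = keras.models.load_model("mask_cnn.h5")  # assumed pre-trained CNN

def predict_fn(images):
    """LIME passes a batch of perturbed images; return class probabilities.
    Assumes the model was trained on inputs scaled to [0, 1]."""
    return model.predict(images.astype(np.float32) / 255.0)

# Load a target face image as an (H, W, 3) uint8 array at the model's input size.
image = keras.utils.load_img("face.jpg", target_size=(128, 128))
image = keras.utils.img_to_array(image).astype(np.uint8)

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image,
    predict_fn,
    top_labels=2,      # masked vs. unmasked
    hide_color=0,      # perturbed superpixels are blacked out
    num_samples=1000,  # perturbed samples drawn around the instance
)

# Highlight the superpixels that most support the top predicted class.
temp, mask = explanation.get_image_and_mask(
    explanation.top_labels[0],
    positive_only=True,
    num_features=5,
    hide_rest=False,
)
plt.imshow(mark_boundaries(temp / 255.0, mask))
plt.axis("off")
plt.show()
```

Setting `positive_only=True` restricts the overlay to superpixels that vote for the predicted class, which is the view most directly comparable to the heatmaps the paper uses to argue that the model has learned facial structure with and without a mask.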
Bibliographic Details
- http://www.scopus.com/inward/record.url?partnerID=HzOxMe3b&scp=85174511469&origin=inward
- http://dx.doi.org/10.1007/978-981-99-5085-0_34
- https://link.springer.com/10.1007/978-981-99-5085-0_34
- https://dx.doi.org/10.1007/978-981-99-5085-0_34
- https://link.springer.com/chapter/10.1007/978-981-99-5085-0_34

Publisher: Springer Science and Business Media LLC