Versions

Description

This paper has been submitted for publication to the 15th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2020). The work reveals a bias in scoring sensitivity across demographic subgroups when verifying the identity of a subject from facial images. In other words, a face recognition (FR) system typically applies a single global threshold (i.e., a decision boundary on similarity scores or distances that labels a pair as genuine or impostor), yet its performance varies across subgroups (e.g., male vs. female, Asian vs. Black). Using fundamental signal detection theory, we show that a single global threshold skews the measured performance across subgroups, and we demonstrate that subgroup-specific thresholds are optimal in terms of both overall performance and balance across subgroups.
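A minimal sketch of the idea follows (Python with NumPy). It is not the facebias API: the threshold_at_fmr helper, the subgroup names, and the score distributions are hypothetical stand-ins, used only to illustrate how one global threshold yields different effective false match rates per subgroup, while subgroup-specific thresholds hold the target rate for each.

import numpy as np

rng = np.random.default_rng(0)

def threshold_at_fmr(impostor_scores, target_fmr=1e-3):
    # Score value at the (1 - target_fmr) quantile of the impostor scores;
    # using it as the decision threshold keeps the false match rate (share of
    # impostor pairs scoring at or above the threshold) near target_fmr.
    return np.quantile(impostor_scores, 1.0 - target_fmr)

# Hypothetical impostor (non-matching pair) similarity scores for two
# subgroups whose score distributions differ, as observed across demographics.
impostor = {
    "subgroup_a": rng.normal(0.30, 0.10, 10_000),
    "subgroup_b": rng.normal(0.45, 0.10, 10_000),
}

# A single global threshold is set on the pooled impostor scores ...
global_thr = threshold_at_fmr(np.concatenate(list(impostor.values())))

# ... but it produces a different effective FMR per subgroup, whereas a
# subgroup-specific threshold holds the target FMR for each subgroup.
for name, scores in impostor.items():
    fmr_global = (scores >= global_thr).mean()
    local_thr = threshold_at_fmr(scores)
    print(f"{name}: global thr = {global_thr:.3f} (FMR = {fmr_global:.4f}), "
          f"subgroup thr = {local_thr:.3f}")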

Repository

https://github.com/visionjo/facerec-bias-bfw

Project Slug

facebias

Last Built

1 year, 6 months ago (build passed)

Home Page

https://github.com/visionjo/facerec-bias-bfw

Tags

computer-vision, data-bias, face-recognition, fairness-in-ml, machine-learning, machine-learning-bias

Short URLs

facebias.readthedocs.io
facebias.rtfd.io

Default Version

latest

'latest' Version

master