PlumX Metrics
SSRN

When a Small Change Makes a Big Difference: Algorithmic Fairness Among Similar Individuals

55 UC Davis Law Review 2337 (2022)
  • Citations: 0
  • Usage: 2,702
  • Captures: 3
  • Mentions: 1
  • Social Media: 0

Metrics Details

  • Usage: 2,702
    • Abstract Views: 2,292
    • Downloads: 410
  • Captures: 3
    • Readers: 3
      • SSRN: 3
  • Mentions: 1
    • Blog Mentions: 1
      • Blog: 1
  • Ratings
    • Download Rank: 144,219

Paper Description

If a machine learning algorithm treats two people very differently because of a slight difference in their attributes, the result intuitively seems unfair. Indeed, an aversion to this sort of treatment has already begun to affect regulatory practices in employment and lending. But an explanation, or even a definition, of the problem has not yet emerged. This Article explores how these situations—when a Small Change Makes a Big Difference (SCMBDs)—interact with various theories of algorithmic fairness related to accuracy, bias, strategic behavior, proportionality, and explainability. When SCMBDs are associated with an algorithm’s inaccuracy, such as overfitted models, they should be removed (and routinely are). But outside those easy cases, when SCMBDs have, or seem to have, predictive validity, the ethics are more ambiguous. Various strands of fairness (like accuracy, equity, and proportionality) will pull in different directions. Thus, while SCMBDs should be detected and probed, what to do about them will require humans to make difficult choices between social goals.
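The SCMBD pattern the abstract describes can be sketched with a hypothetical probe (not part of the Article): train a deliberately overfitted model and a smoother one on synthetic data, nudge a single attribute of one "applicant" by a small amount, and compare how far each model's score moves. Everything below, including the synthetic data, the scmbd_probe helper, and the epsilon value, is an illustrative assumption.

```python
# Hypothetical probe for "small change, big difference" (SCMBD) behavior.
# Assumptions: synthetic data and scikit-learn models; none of this is drawn from the Article.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

# Synthetic "applicant" data: 1,000 rows, 5 numeric attributes, binary outcome.
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)

# An unconstrained decision tree tends to overfit; logistic regression stays smooth.
overfit_tree = DecisionTreeClassifier(random_state=0).fit(X, y)
smooth_model = LogisticRegression(max_iter=1000).fit(X, y)

def scmbd_probe(model, x, feature=0, epsilon=0.01):
    """Return how much the predicted score moves when one attribute
    of a single individual changes by a small amount (epsilon)."""
    x_perturbed = x.copy()
    x_perturbed[feature] += epsilon
    before = model.predict_proba(x.reshape(1, -1))[0, 1]
    after = model.predict_proba(x_perturbed.reshape(1, -1))[0, 1]
    return abs(after - before)

applicant = X[0]
print("overfitted tree score shift:", scmbd_probe(overfit_tree, applicant))
print("logistic model score shift: ", scmbd_probe(smooth_model, applicant))
```

Because a decision tree's score changes only when the perturbation crosses a split threshold, its shift is either zero or a jump between leaves, while the logistic score moves gradually; that discontinuous jump is one way an overfitted model produces the kind of SCMBD the Article flags for removal.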

Bibliographic Details

Authors: Jane R. Bambauer; Tal Zarsky; Jonathan Mayer

Keywords: AI; artificial intelligence; algorithmic fairness; algorithmic bias; privacy; rules v standards
