PlumX Metrics

Detecting shortcut learning for fair medical AI using shortcut testing

Nature Communications, ISSN: 2041-1723, Vol: 14, Issue: 1, Page: 4314
2023
  • Citations: 28
  • Usage: 0
  • Captures: 61
  • Mentions: 3
  • Social Media: 0

Most Recent News

5 Ways To Embrace AI To Help You Keep (And Do) Your Job

Article Description

Machine learning (ML) holds great promise for improving healthcare, but it is critical to ensure that its use will not propagate or amplify health disparities. An important step is to characterize the (un)fairness of ML models—their tendency to perform differently across subgroups of the population—and to understand its underlying mechanisms. One potential driver of algorithmic unfairness, shortcut learning, arises when ML models base predictions on improper correlations in the training data. Diagnosing this phenomenon is difficult as sensitive attributes may be causally linked with disease. Using multitask learning, we propose a method to directly test for the presence of shortcut learning in clinical ML systems and demonstrate its application to clinical tasks in radiology and dermatology. Finally, our approach reveals instances when shortcutting is not responsible for unfairness, highlighting the need for a holistic approach to fairness mitigation in medical AI.
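
To make the multitask idea above concrete, the sketch below pairs a shared encoder with a clinical prediction head and an auxiliary sensitive-attribute head, then sweeps the auxiliary loss weight. This is an illustrative assumption of how such a probe could be wired up (in PyTorch; the class names, toy dimensions, and weight values are invented for the example), not the authors' released implementation; the published method applies this kind of sweep to radiology and dermatology models and compares fairness gaps across the resulting models.

    # Hypothetical sketch: a shared encoder feeds (1) a clinical prediction head
    # and (2) an auxiliary head for a sensitive attribute. Varying the auxiliary
    # loss weight modulates how strongly the encoder encodes the attribute;
    # comparing subgroup performance gaps across that sweep probes whether
    # shortcut learning is driving unfairness.
    import torch
    import torch.nn as nn

    class MultiTaskModel(nn.Module):
        def __init__(self, in_dim: int, hidden: int = 64):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
            self.disease_head = nn.Linear(hidden, 1)    # clinical label
            self.attribute_head = nn.Linear(hidden, 1)  # sensitive attribute

        def forward(self, x):
            z = self.encoder(x)
            return self.disease_head(z), self.attribute_head(z)

    def train_step(model, optimizer, x, y_disease, y_attr, aux_weight):
        """One update of the joint loss: disease loss + aux_weight * attribute loss."""
        bce = nn.BCEWithLogitsLoss()
        disease_logit, attr_logit = model(x)
        loss = bce(disease_logit.squeeze(-1), y_disease) \
            + aux_weight * bce(attr_logit.squeeze(-1), y_attr)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

    # Toy usage on random tensors; a real experiment would train one model per
    # aux_weight on clinical images and compare subgroup fairness metrics.
    if __name__ == "__main__":
        torch.manual_seed(0)
        x = torch.randn(128, 32)
        y_disease = torch.randint(0, 2, (128,)).float()
        y_attr = torch.randint(0, 2, (128,)).float()
        for aux_weight in (0.0, 0.5, 1.0):  # sweep attribute-encoding strength
            model = MultiTaskModel(in_dim=32)
            opt = torch.optim.Adam(model.parameters(), lr=1e-3)
            loss = train_step(model, opt, x, y_disease, y_attr, aux_weight)
            print(f"aux_weight={aux_weight:.1f} loss={loss:.3f}")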

Bibliographic Details

Authors: Brown, Alexander; Tomasev, Nenad; Freyberg, Jan; Liu, Yuan; Karthikesalingam, Alan; Schrouff, Jessica

Publisher: Springer Science and Business Media LLC

Subject areas: Chemistry; Biochemistry, Genetics and Molecular Biology; Physics and Astronomy
