PlumX Metrics

Conceptual Challenges for Interpretable Machine Learning

SSRN, ISSN: 1556-5068
2020
  • Citations: 3
  • Usage: 1,359
  • Captures: 9
  • Mentions: 0
  • Social Media: 0

Metrics Details

  • Citations: 3
    • Citation Indexes: 3
  • Usage: 1,359
    • Abstract Views: 1,157
    • Downloads: 202
  • Captures: 9
  • Ratings
    • Download Rank: 304,021

Article Description

As machine learning has gradually entered into ever more sectors of public and private life, there has been a growing demand for algorithmic explainability. How can we make the predictions of complex statistical models more intelligible to end users? A sub-discipline of computer science known as interpretable machine learning (iML) has emerged to address this urgent question. Numerous influential methods have been proposed, from local linear approximations to rule lists and counterfactuals. In this article, I highlight three conceptual challenges that are largely overlooked by authors in this area. I argue that the vast majority of iML algorithms are plagued by: (1) ambiguity with respect to their true target; (2) a disregard for error rates and severe testing; and (3) an emphasis on product over process. Each point is developed at length, drawing on relevant debates in epistemology and philosophy of science. Examples and counterexamples from iML are considered, demonstrating how failure to acknowledge these problems can result in counter-intuitive and potentially misleading explanations. Without greater care for the conceptual foundations of iML, future work in this area is doomed to repeat the same mistakes.
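The abstract names local linear approximations as one influential family of iML methods. Purely as an illustration of that idea, the following is a minimal LIME-style sketch in Python: a black-box model is probed with perturbations around a single instance, and a proximity-weighted linear surrogate is fit locally. The toy black_box function, the kernel width, and all other specifics here are illustrative assumptions, not the paper's own method.

    # Minimal sketch of a local linear approximation (LIME-style surrogate).
    # Everything below is a toy assumption for illustration only.
    import numpy as np

    rng = np.random.default_rng(0)

    def black_box(X):
        # Stand-in for a complex model: a nonlinear function of two features.
        return np.sin(X[:, 0]) + X[:, 1] ** 2

    x0 = np.array([0.5, -0.3])                 # instance to explain
    X = x0 + 0.1 * rng.normal(size=(500, 2))   # perturbations around x0
    y = black_box(X)

    # Weight samples by proximity to x0 (Gaussian kernel, assumed width).
    w = np.exp(-np.sum((X - x0) ** 2, axis=1) / 0.02)

    # Weighted least squares: intercept plus linear terms, fit locally.
    A = np.hstack([np.ones((len(X), 1)), X])
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(sw[:, None] * A, sw * y, rcond=None)

    print("local intercept:", coef[0])
    print("local feature attributions:", coef[1:])

The fitted coefficients serve as local feature attributions: they describe the black box's behavior near x0, not globally, which is one instance of the ambiguity of target that the article interrogates.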

Bibliographic Details

David S. Watson

Elsevier BV

Multidisciplinary; Artificial Intelligence; Algorithmic Explainability; Interpretable Machine Learning; Scientific Explanation; Severe Testing
