• Lipton, Zachary Chase. The Mythos of Model Interpretability. Proceedings of the ICML Workshop on Human Interpretability in Machine Learning, pp. 96–100, 2016. https://arxiv.org/pdf/1606.03490.pdf
  • Tomsett, Richard; Braines, Dave; Harborne, Dan; Preece, Alun; Chakraborty, Supriyo. Interpretable to Whom? A Role-based Model for Analyzing Interpretable Machine Learning Systems. 2018 ICML Workshop on Human Interpretability in Machine Learning (WHI 2018), Stockholm, Sweden, 2018. https://arxiv.org/ftp/arxiv/papers/1806/1806.07552.pdf
  • Rudin, Cynthia. Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead. Nature Machine Intelligence 1, pp. 206–215, 2019. doi:10.1038/s42256-019-0048-x. https://arxiv.org/pdf/1811.10154.pdf
  • Wachter, Sandra; Mittelstadt, Brent; Russell, Chris. Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR. 2017. https://arxiv.org/abs/1711.00399