iSee: Intelligent Sharing of Explanation Experience by Users for Users
A right to obtain an explanation of a decision reached by a machine learning (ML) model is now enshrined in EU regulation. Different stakeholders (e.g. managers, developers, auditors) may have different background knowledge, competencies and goals, and thus require different kinds of interpretations and explanations. Fortunately, there is a growing armoury of ways of interpreting ML models and explaining their predictions, recommendations and diagnoses. We will refer to these collectively as explanation strategies. As these strategies mature, practitioners will gain experience that helps them know which strategy to deploy in which circumstances. What is lacking, and what we propose to address, is the science and technology for capturing, sharing and re-using explanation strategies based on similar user experiences, along with a much-needed route to explainable AI (XAI) compliance. Our vision is to improve every user's experience of AI by harnessing experiences of best practice in XAI, by users, for users.
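To make the capture-and-reuse idea more concrete, the sketch below shows one plausible case-based reading of it: past explanation experiences are stored as cases keyed on user attributes, and the strategy from the most similar successful case is reused for a new user. This is a minimal illustration only; the attribute names, similarity weights and success threshold are all assumptions, not the project's actual design.

```python
from dataclasses import dataclass

@dataclass
class ExplanationCase:
    """One recorded explanation experience (all fields hypothetical)."""
    user_role: str        # e.g. "manager", "developer", "auditor"
    ai_knowledge: int     # self-rated familiarity with AI, 1-5
    goal: str             # e.g. "debugging", "compliance audit"
    strategy: str         # explanation strategy that was applied
    satisfaction: float   # how well it worked for the user, 0.0-1.0

def similarity(query: ExplanationCase, case: ExplanationCase) -> float:
    """Weighted match over user attributes (weights are illustrative)."""
    score = 0.0
    score += 0.4 * (query.user_role == case.user_role)
    score += 0.3 * (query.goal == case.goal)
    score += 0.3 * (1 - abs(query.ai_knowledge - case.ai_knowledge) / 4)
    return score

def recommend_strategy(query: ExplanationCase,
                       case_base: list[ExplanationCase]) -> str:
    """Reuse the strategy from the most similar successful past experience."""
    successful = [c for c in case_base if c.satisfaction >= 0.7]
    best = max(successful, key=lambda c: similarity(query, c))
    return best.strategy

case_base = [
    ExplanationCase("auditor", 2, "compliance audit", "counterfactual", 0.9),
    ExplanationCase("developer", 5, "debugging", "feature attribution", 0.8),
]
query = ExplanationCase("auditor", 3, "compliance audit", "", 0.0)
print(recommend_strategy(query, case_base))  # -> "counterfactual"
```

In this reading, "sharing by users for users" amounts to growing the case base: each new explanation experience, once rated, becomes retrievable by future users with similar roles and goals.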