Here are open questions that we would like to answer collaboratively with community members:

  • Do you use AI systems to support decision making in your operations?
  • In which contexts do you see that an explanation could be useful?
    For instance, to compensate for a lack of confidence in a machine-generated recommendation, or to help new joiners learn by experience?
    Assuming there is an explanation enriching the AI system's output, at what point could we say that explanations become useful in the operational process?

How do we know that an explanation has a positive impact on the human end user?
Can it be measured by the type of actions the end user decides to take after reading the explanation?

It is very likely that the utility of an explanation will depend on the profile of the end user and on that user's perception of AI systems.