With the breakthroughs in deep learning, recent years have witnessed a boom in AI applications and services, spanning from personal assistants to recommendation systems to video/audio surveillance. Nevertheless, many high-performance AI models work as “black boxes”, which limits their explainability, i.e., they are hard for humans to understand.
This is even more challenging when applying AI to management systems, because of the strong subjectivity and complexity of the human behaviour and cognition embedded in such systems.
Several XAI studies have been published that try to introduce explanations bridging AI-based customer success algorithms with customer preferences and Net Promoter Score (NPS) metrics.
Interesting article: “Understanding Consumer Preferences for Explanations Generated by XAI Algorithms” in Human-Computer Interaction, available at https://arxiv.org/abs/2107.02624
Three main attributes are proposed to describe automatically generated explanations: format, complexity, and specificity. These are coupled with context awareness as well as heterogeneity in users’ cognitive styles. Despite their popularity among academics, the authors found that counterfactual explanations are not popular among users, unless they follow a negative outcome. The authors also found that users are willing to tolerate some complexity in explanations. Finally, their results suggest that preferences for specific (vs. more abstract) explanations are related to the level at which the user construes the decision, and to the deliberateness of the user’s cognitive style.
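To make these three attributes concrete, here is a minimal Python sketch of an explanation-selection policy that encodes the reported preferences: counterfactual format mainly after a negative outcome, some tolerated complexity, and specificity matched to the deliberateness of the user’s cognitive style. Everything here (the `UserContext`, `ExplanationSpec`, and `choose_explanation` names, and the field encodings) is our own illustrative assumption, not an artifact of the paper.

```python
from dataclasses import dataclass
from enum import Enum


class Format(Enum):
    FEATURE_IMPORTANCE = "feature importance"
    COUNTERFACTUAL = "counterfactual"


class Specificity(Enum):
    ABSTRACT = "abstract"
    SPECIFIC = "specific"


@dataclass
class UserContext:
    # Hypothetical user model; field names are ours, not the paper's.
    negative_outcome: bool     # did the AI decision go against the user?
    deliberate_style: bool     # deliberate vs. intuitive cognitive style
    complexity_tolerance: int  # 0 (minimal) .. 2 (detailed)


@dataclass
class ExplanationSpec:
    format: Format
    specificity: Specificity
    complexity: int  # number of factors/conditions to surface


def choose_explanation(user: UserContext) -> ExplanationSpec:
    """Illustrative policy encoding the paper's reported preferences."""
    # Counterfactuals were appreciated mainly after a negative outcome.
    fmt = Format.COUNTERFACTUAL if user.negative_outcome else Format.FEATURE_IMPORTANCE
    # More deliberate users tolerated more specific explanations.
    spec = Specificity.SPECIFIC if user.deliberate_style else Specificity.ABSTRACT
    # Users tolerated some complexity, so cap it rather than minimize it.
    complexity = min(user.complexity_tolerance + 1, 3)
    return ExplanationSpec(fmt, spec, complexity)


if __name__ == "__main__":
    user = UserContext(negative_outcome=True,
                       deliberate_style=True,
                       complexity_tolerance=1)
    print(choose_explanation(user))
```

A policy layer like this could sit between an underlying XAI generator and the user-facing text, turning the study’s attribute-level findings into per-user explanation choices.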
Room for another use case class!