Dr Anne Liret (BT, iSee team), together with colleague researchers Dr Mercedes Arguello Casteleiro (University of Southampton) and Dr Christoph Tholen (DFKI, the German Research Center for Artificial Intelligence), is pleased to announce the Explainable AI workshop that will be held in Cambridge on 13th December 2022.
The workshop is part of the SGAI International Conference on Artificial Intelligence (bcs-sgai.org), which runs there until 15th December. We look forward to meeting members of the industry and research communities interested in XAI there.
Explainable AI (XAI) aims to enhance machine learning (ML) techniques so that they produce more explainable models, enabling human users to understand and appropriately trust them.
Reusing explanation experience – Dr Anne Liret, BT
Even with the growing list of published explanation libraries, real-world decision-makers still face the challenge of designing the right questions and measurement instruments to show that an evaluation is fit for purpose and benefits the end user. iSee (isee4xai.com) is an interactive toolbox for XAI, with Case-Based Reasoning at its heart, that focuses on evaluating explanation experiences and reusing them across different use cases. This part of the workshop will look at why it is important to model the user's end-to-end experience, showcase explanation methods, illustrate the importance of validating explanations against user intent and human perception, and present real examples (a minimal sketch of the underlying case retrieval idea follows the speaker list below).
Speakers:
- Anne Liret, BT Applied Research: “Evaluating and reusing explanation experience across use cases”
- David Corsar, Robert Gordon University: “Modelling explanation strategies and experiences with iSeeOnto”
- iSee team: “Demo of the iSee cockpit: Reusing explanation strategy in action”
- Matthew Wallwork, BT Technology: “Connected Care – supporting people in their homes”
- Mahsa Abazari Kia and Aygul Garifullina, University of Essex and BT: “Using NLP to understand complex technical notes – a telecoms case study”
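To make the idea of reusing explanation experiences concrete, here is a minimal, self-contained Python sketch of the case-based reasoning retrieve-and-reuse step. The case fields, similarity measure, and example cases are illustrative assumptions for this post, not the actual iSee cockpit's data model or API.

```python
# Minimal sketch of the CBR retrieve-and-reuse step behind explanation-
# experience reuse. Case fields, similarity weights, and example cases
# are hypothetical illustrations, not the iSee platform's API.
from dataclasses import dataclass

@dataclass
class ExplanationCase:
    domain: str              # application domain of the use case
    model_type: str          # kind of ML model being explained
    user_intent: str         # what the user wants from the explanation
    strategy: str            # explanation strategy that was applied
    evaluation_score: float  # how well users rated the experience (0..1)

def similarity(case: ExplanationCase, query: dict) -> float:
    """Fraction of matching attributes between a stored case and a new query."""
    keys = ("domain", "model_type", "user_intent")
    return sum(getattr(case, k) == query[k] for k in keys) / len(keys)

case_base = [
    ExplanationCase("telecoms", "random_forest", "debug", "feature_importance", 0.8),
    ExplanationCase("healthcare", "cnn", "trust", "saliency_map", 0.9),
]

query = {"domain": "healthcare", "model_type": "cnn", "user_intent": "trust"}

# Retrieve the most similar, best-rated past experience and reuse its strategy.
best = max(case_base, key=lambda c: (similarity(c, query), c.evaluation_score))
print(f"Reusing strategy: {best.strategy}")
```

In a full CBR cycle the reused strategy would then be revised against the new users' feedback and retained as a fresh case, which is the kind of experience loop the session discusses.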
Explainability hands-on in deep learning – Dr Mercedes Arguello Casteleiro, University of Southampton
Deep learning algorithms are considered black boxes: close examination by humans does not reveal the features used to generate a prediction. This part of the workshop will focus on explainable AI for deep learning in domains with abundant unlabelled text, such as biomedicine. It will demonstrate how to provide predictions (outcomes) with accompanying justifications (outcome explanations). The approach presented belongs to the emerging field of explainable active learning (XAL), which combines active learning (AL) with local explanations.
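As a rough illustration of how active learning and local explanations combine, the sketch below runs one step of an uncertainty-sampling loop on toy text data and attaches a per-instance attribution from a linear model. The data, model choice, and linear attribution are assumptions for illustration, not the specific XAL approach presented in the workshop.

```python
# Minimal sketch of an explainable active learning (XAL) step:
# uncertainty sampling plus a local explanation for the queried instance.
# All data and the linear-attribution choice are illustrative assumptions.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

labelled_texts = ["tumour growth observed", "no abnormality found"]
labelled_y = np.array([1, 0])
unlabelled_texts = ["possible tumour in scan", "patient discharged healthy",
                    "growth rate unclear"]

vec = TfidfVectorizer()
X_lab = vec.fit_transform(labelled_texts)
X_unlab = vec.transform(unlabelled_texts)

clf = LogisticRegression().fit(X_lab, labelled_y)

# Uncertainty sampling: query the instance whose prediction is closest to 0.5.
probs = clf.predict_proba(X_unlab)[:, 1]
query_idx = int(np.argmin(np.abs(probs - 0.5)))

# Local explanation for the queried instance: per-feature contribution
# (coefficient * feature value) of the linear model.
contrib = X_unlab[query_idx].toarray().ravel() * clf.coef_.ravel()
terms = vec.get_feature_names_out()
top = np.argsort(-np.abs(contrib))[:3]

print(f"Query: {unlabelled_texts[query_idx]!r} (p={probs[query_idx]:.2f})")
for i in top:
    print(f"  {terms[i]}: {contrib[i]:+.3f}")
# An oracle labels the queried text, it joins the labelled pool, and the
# loop repeats; each query ships with its justification.
```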
Application cases – Dr Christoph Tholen, Mattis Wolf, and Dr Frederic Stahl, DFKI (German Research Center for Artificial Intelligence)
This part of the workshop will focus on XAI applications in the maritime domain. On the one hand, safety concerns currently prevent the use of deep learning techniques in many maritime applications; XAI techniques have the potential to enable, for instance, control systems for autonomous ships. Another example is the use of convolutional neural networks (CNNs) for identifying and classifying plastic waste. In this use case, end-user acceptance depends on the stakeholders' confidence in the AI systems used, and XAI methods such as result explanations can help increase that acceptance. Both use cases, along with other possible applications of XAI in the maritime domain, will be discussed.
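As one concrete flavour of a result explanation for a CNN, the sketch below computes a simple gradient-based saliency map showing which input pixels most influenced the predicted class. The tiny network and random input are placeholders; this is not the DFKI system's actual model or explanation method.

```python
# Minimal sketch of a gradient-based "result explanation" (saliency map)
# for a CNN image classifier, in the spirit of the plastic-waste use case.
# The tiny network and random input are illustrative assumptions only.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, n_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(8),
        )
        self.head = nn.Linear(8 * 8 * 8, n_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = TinyCNN().eval()
image = torch.rand(1, 3, 64, 64, requires_grad=True)  # stand-in for a photo

logits = model(image)
pred = logits.argmax(dim=1).item()

# Backpropagate the predicted class score to the input pixels; the gradient
# magnitude indicates which pixels drove the decision.
logits[0, pred].backward()
saliency = image.grad.abs().max(dim=1).values  # (1, 64, 64) heat map

print(f"Predicted class: {pred}")
print(f"Most influential pixel (flat index): {saliency.argmax().item()}")
```

Overlaying such a heat map on the original image lets an operator check whether the network is attending to the floating debris itself rather than, say, wave patterns, which is the kind of confidence check that drives end-user acceptance.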