During Years 1 and 2 of the project, the iSee team created and implemented the following materials.

Explanation Experience Representation, Use case and demo, iSee platform build, Supporting published resources, Dissemination, Co-creation, Evaluation trials, Satellite projects

Explanation Experience Representation

iSee Ontology (complete representation of the case “problem, solution, evaluation”): iSeeOnto Documentation (isee4xai.github.io)

Ontology visualisation tool: GAIA Ontologies Visualization (ucm.es)

Use case and demo

New Use Case demo of the iSee platform (cockpit and evaluation) on radiology diagnosis: Radiology use case demo video

New (November 2022): the iSee in action video is published on our YouTube channel: iSee Demo – YouTube

New Use Case design cockpit demo: ▶ isee-dashboard (figma.com)

New Use Case end-user evaluation demo: ▶ Radiology Use Case – iSee Chat Bot (figma.com)

In March 2022, we produced a new video showing the progress of the project.

iSee platform build

We host our codebase on the iSee GitHub organisation at iSee · GitHub

New (November 2022): we built the library of reusable explainers, reused and recommended by the iSee CBR engine. The Explainers library has an API allowing researchers to upload (add) and reuse (read) explanation methods. The library is at https://github.com/isee4xai/ExplainerLibraries
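The API's concrete routes are not documented in this report, so the sketch below is purely illustrative: the base URL, endpoint path and payload fields are assumptions, not the real ExplainerLibraries interface. It only shows the shape of an "add explainer" call that the upload/read API implies.

```python
# Hypothetical sketch of preparing an "add explainer" request for an
# explainer-library API. The host, path and payload fields below are
# invented for illustration, not the documented ExplainerLibraries routes.

BASE_URL = "https://api.isee4xai.example/explainers"  # placeholder host

def build_upload_request(name, explanation_type, data_modalities):
    """Assemble the URL and JSON payload for registering a new explainer."""
    payload = {
        "name": name,
        "explanation_type": explanation_type,   # e.g. "feature attribution"
        "data_modalities": data_modalities,     # e.g. ["image", "tabular"]
    }
    return BASE_URL, payload

url, payload = build_upload_request(
    "GradCAM", "feature attribution", ["image"]
)
print(url)
print(payload["name"], payload["data_modalities"])
# Sending it would then be e.g. requests.post(url, json=payload),
# and reading the catalogue back e.g. requests.get(BASE_URL).
```

The separation between building and sending the request keeps the sketch runnable offline; swapping in the real endpoint and schema is all that an actual client would need.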

New (November 2022): we decided to represent explanation strategies and user-evaluation processes (which may be complex) using the behaviour tree concept. We then built a behaviour tree editor dedicated to the objects we represent for explanation strategies and explanation experience evaluation. The Behaviour Tree Editor can be used to tune the explanation strategy recommended by the iSee cockpit, and as a standalone creation tool for new real-world cases. It is at https://github.com/isee4xai/ExplanationExperienceEditor
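To make the behaviour tree idea concrete: control nodes compose leaf actions such as showing an explanation or asking an evaluation question. The minimal sketch below is an assumption-laden illustration of that concept (the node names, `tick` protocol and example strategy are invented, not the ExplanationExperienceEditor's internal model).

```python
# Minimal behaviour-tree sketch for an explanation strategy: control nodes
# (Sequence, Priority) compose leaf actions (show an explanation, ask an
# evaluation question). Names and semantics are illustrative assumptions.

class Leaf:
    def __init__(self, name, succeeds=True):
        self.name, self.succeeds = name, succeeds
    def tick(self, trace):
        trace.append(self.name)   # record that this action ran
        return self.succeeds

class Sequence:
    """Succeeds only if every child succeeds, in order."""
    def __init__(self, *children):
        self.children = children
    def tick(self, trace):
        return all(child.tick(trace) for child in self.children)

class Priority:
    """Tries children in order; succeeds on the first that succeeds."""
    def __init__(self, *children):
        self.children = children
    def tick(self, trace):
        return any(child.tick(trace) for child in self.children)

# A toy strategy: try a SHAP explanation, fall back to an example-based
# one, then ask the user an evaluation question.
strategy = Sequence(
    Priority(Leaf("explain_with_SHAP", succeeds=False),
             Leaf("explain_with_nearest_neighbour")),
    Leaf("ask_evaluation_question"),
)

trace = []
print(strategy.tick(trace))  # True
print(trace)
# ['explain_with_SHAP', 'explain_with_nearest_neighbour', 'ask_evaluation_question']
```

The fallback behaviour of the Priority node is what makes behaviour trees attractive here: substituting an alternative explainer is a local edit to one subtree rather than a rewrite of the whole strategy.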

Finally, the iSee cockpit is deployed on our platform at https://cockpit-dev.isee4xai.com/user/login

To make it easy for our team to reuse and test locally, we published the iSee cockpit as a Docker Hub image. The cockpit component is embedded in a Docker container and can be downloaded from our Docker Hub page for further deployment on an iSee platform server: isee4xai’s Profile | Docker Hub

iSee cockpit release 3 became a reality in October 2023. The behaviour tree editor has been enhanced to cope with: user questions coupled to explainer actions, and similarity calculation to recommend suitable alternative explainers and their usage options (drawn from successful past usage) within an explanation strategy. This recommendation depends on where in the behaviour tree you look for alternatives. The tool is supported by the iSee ontology for Explanation Experience, which has been extended to represent user evaluation outcomes (in addition to explainer requirements). Should you wish to use iSee for an explanation need, please contact the team at info@isee4xai.com
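As an illustration of similarity-based substitution, one can rank candidate explainers by how much their properties overlap with the explainer being replaced. The property sets and the Jaccard measure below are assumptions made for the sketch; the cockpit's actual similarity is computed over the iSeeOnto representation and past-usage data.

```python
# Sketch: recommend substitute explainers by similarity of their properties.
# Property sets and the Jaccard measure are illustrative assumptions; they
# stand in for the ontology-based similarity used by the iSee cockpit.

explainer_props = {
    "GradCAM": {"image", "feature attribution", "model-specific"},
    "LIME":    {"image", "tabular", "feature attribution", "model-agnostic"},
    "DiCE":    {"tabular", "counterfactual", "model-agnostic"},
}

def jaccard(a, b):
    """Overlap of two property sets: |intersection| / |union|."""
    return len(a & b) / len(a | b)

def rank_substitutes(target, candidates):
    """Rank candidate explainers by Jaccard similarity to `target`."""
    scores = {name: jaccard(explainer_props[target], props)
              for name, props in candidates.items() if name != target}
    return sorted(scores, key=scores.get, reverse=True)

print(rank_substitutes("GradCAM", explainer_props))  # ['LIME', 'DiCE']
# LIME ranks first: it shares "image" and "feature attribution" with GradCAM.
```

In the real editor this ranking would additionally depend on where in the behaviour tree the substitution happens, as the surrounding nodes constrain which alternatives make sense.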

UCM have added 70 reusable explainers to the iSee catalogue, rewritten from literature papers or from collaboration with partners. They have passed the criterion we defined for reusability, which is to be suitable for at least 2 use cases (among the seed cases in the CBR engine). The iSee catalogue is available on the iSee platform (user login controlled). For more details, please contact the team at hello@isee4xai.com
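The reusability criterion above amounts to a simple filter over the catalogue; the sketch below shows the idea with invented entries (the real catalogue schema is not part of this report).

```python
# Sketch of the reusability criterion: an explainer enters the catalogue
# only if it applies to at least 2 of the seed use cases held in the CBR
# engine. The catalogue entries below are invented for illustration.

MIN_APPLICABLE_CASES = 2

catalog = [
    {"name": "LIME",         "applicable_cases": ["radiology", "telecom", "sensors"]},
    {"name": "GradCAM",      "applicable_cases": ["radiology", "sensors"]},
    {"name": "BespokeRules", "applicable_cases": ["telecom"]},
]

def reusable(explainers, minimum=MIN_APPLICABLE_CASES):
    """Keep only explainers applicable to at least `minimum` seed cases."""
    return [e["name"] for e in explainers
            if len(e["applicable_cases"]) >= minimum]

print(reusable(catalog))  # ['LIME', 'GradCAM']
```

Raising `minimum` tightens the bar for admission, which is one way such a criterion could be tuned as the seed case base grows.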

New Functionalities of the Behaviour Tree Editor
New reuse functionalities were created and released in the BT editor. Users can edit explanation strategies as they need. Recommendation of suitable explainers for substitution is based on explainers previously used and positively evaluated.
Testing of BTs was added to the BT editor (automatic reuse). Users can check their strategies’ applicability for their use case and check the suitability of the BT structure.

Supporting published resources (in addition to papers and tutorials)

Use case onboarding path: iSee_use case guidevshort_oct23
Use case status overview (click)
Explanation Experience certification path (click)

iSee courses:
We created a new dedicated “How to” page including the replay of the online iSee tutorial and supporting materials. The supporting materials can be shared on demand via our contact form.

Dissemination

To facilitate open-access dissemination while managing access for the wider community, we store the publications and important reports of the project in the Zenodo community for iSee4XAI. This includes the Data Management Plan at the end of Year 1.

Deliverables for December 2022 (Milestone 2) are published on Zenodo, DOI 10.5281/zenodo.7472464: https://zenodo.org/record/7472464#.Y7515tXP1PY
Keywords: https://isee4xai.com/, XAI, explanations, experience reuse

(1st October 2023) Deliverables for October 2023 (Milestone 3) are published on Zenodo, DOI 10.5281/zenodo.8414582: https://zenodo.org/records/8414582
index:

  • D1.5 iSeeOnto revisions and final version SW+R, PU M21,M30, M36
  • D2.1b API for external contributions SW, PU M18 (Aug 22)
  • D2.2a QA reports R, PU M9,M21,M36
  • D2.4 Retrieval from Pre-defined Interactive Casebase (iSee v2) SW, PU M21
  • D3.3 Design of composite strategies R+SW, CO M21
  • D3.4 Composite strategies R+SW, PU M21
  • D4.1 Interaction engine SW, PU M21, M27
  • D4.2 Evaluation module SW, PU M21
  • D4.3 Intent understanding module SW, PU M18 (Aug 22)
  • D5.4a Use-Case Ethics and Compliance Framework

Co-creation in explanation experience and evaluation

BTF, RGU, UCM and UCC jointly organised co-creation activities with academics and industry representatives. The outcomes feed the design work in WP4, WP5 and WP6, focusing on modelling and specifying what an explanation of an AI model is, and its evaluation against a set of pre-listed user intents.

Co-creation activities (event, stakeholders, date, objectives):

Event: SICSA XAI 2021, Academics, 1st June 2021
Objective: identify avenues for further review of literature to describe, implement and evaluate explanations.

Event: Tommy Flowers Network conference on “Data Science, the heartbeat of AI” 2021, Industry Representatives, 13th and 14th October 2021
Objectives:
- identify suitable evaluation metrics for the evaluation cockpit
- gather feedback on evaluation cockpit UI designs
- engage attendees on a new use case

Event: XCBR Challenge and workshop at ICCBR 2022, the 30th International Conference on Case-Based Reasoning, Nancy, France (Call for XCBR 2022 – Explainable AI Challenge, loria.fr), Academics and industry, 12th to 15th September 2022
Objectives:
- build satellite projects with researchers willing to contribute to iSee
- populate more cases in the iSee CBR case base to improve the performance of iSee retrieval
- add new explainers to the iSee catalogue
- assess the reusability of existing iSee explainers on different domains

Event: XCBR Challenge and iSee workshop at ICCBR 2023 (Call for XCBR 2023 – ICCBR XCBR Challenge 2023), Academics and industry, 17th to 20th July 2023
Objectives:
- populate more cases in the iSee CBR case base
- add new explainers to the iSee catalogue
- assess the reusability of existing iSee explainers on different domains

Event: Workshop on Interactive Explanations of Neural Networks and Artificial Intelligence, Int’l XAI 2023, Academics and industry, June 2023
Objectives:
- foster connections between research communities to encourage further collaboration leading to the next generation of explanation algorithms
- identify the sources and adoption state of explainers flexible to multiple data modalities, and how the interactive nature of explanation could help serve a broad range of stakeholders and explanation needs

Event: iSee tutorial at the ICCBR 2023 conference, Aberdeen (iSee Tutorial ICCBR2023), Academics and industry, 18th July 2023
Objectives:
- show how to use the Explainers library to add a new explainer
- show how to enter a new use case in the cockpit, explaining what an intent and a persona are, and showing the recommendation of a strategy
- show how to select the questions for the evaluation and use the dialog manager

Event: iSee hosts the 6th XCBR Workshop on Explainable CBR at the ICCBR 2024 conference, Mérida, Mexico, Academics and industry, July 2024

New explainers working on time series and image datasets have been added to the iSee explainers library thanks to the collaboration with the AAAI-MX research center in Mexico (Tecnológico Nacional de México/Instituto Tecnológico de Mérida). A new explainer working on free text and NLP data has been added as a result of the Telecom use case work with BTF.

Event: Tech Talk “iSee evaluation: can you explain the outcomes of AI models for your business? Not everyone can. iSee can help you.”, Academics and industry, 19th February 2024

Event: iSee webinar, Academics and industry, May to June 2023

Event: Hackathon Integration Meeting, UCM campus, Madrid, Academics and Industry, 19th to 23rd November 2022
Objective: build the first end-to-end version of iSee, from use case creation, through explanation strategy retrieval by the CBR engine and recommendation with metrics, to the user evaluation dialog (chatbot).

Event: XAI workshop at the SGAI International Conference on Artificial Intelligence 2022 (bcs-sgai.org), Academics, 13th December 2022
Objectives:
- explain how to create a new use case in iSee
- put it into action on a real-world example (from the materials prepared for the XCBR challenge)
- identify new collaboration potential
- present new explainers in deep learning

Event: BCS Real AI, an explanation of the project aimed at the general public, Academics and industry, to be planned

In parallel, discussions with the IEEE working group on explainability usage aim at framing some initial standard criteria for evaluating the usefulness and suitability of a group of explanation methods for given user profiles and goals (sector-wise, for instance). Analytics metrics are built into iSee to evaluate the dialog outcome and highlight the popularity of certain explainers and/or user questions in certain cases (whether it is a recurring persona feature or a recurring intent selected by test users).
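As an illustration of that kind of analytics (the event log format and field names below are assumptions, not iSee's actual schema), popularity counts over dialog records reduce to simple frequency tallies:

```python
from collections import Counter

# Toy dialog log: one record per chatbot interaction. The field names are
# assumptions made for this sketch, not iSee's actual analytics schema.
dialog_log = [
    {"persona": "clinician", "intent": "transparency", "explainer": "GradCAM"},
    {"persona": "clinician", "intent": "transparency", "explainer": "GradCAM"},
    {"persona": "auditor",   "intent": "compliance",   "explainer": "LIME"},
    {"persona": "clinician", "intent": "performance",  "explainer": "GradCAM"},
]

# Which explainers and intents recur across test users?
explainer_popularity = Counter(r["explainer"] for r in dialog_log)
recurring_intents = Counter(r["intent"] for r in dialog_log)

print(explainer_popularity.most_common(1))  # [('GradCAM', 3)]
print(recurring_intents["transparency"])    # 2
```

Grouping the same counts by persona would surface the "recurring persona feature" signal mentioned above.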

Evaluation experiments:

  • In November 2023, we carried out a Content Validation Study with XAI practitioners. The study asked participants to review a curated list of statements based on their relevance and clarity. This helped us build a new scale for evaluating the quality of explanation experiences. We then augmented the metrics dashboard to show compliance based on this new scale.
  • In March 2024, we completed 3 real-world use cases with explanation requirements: a health case (Jiva.ai), a telecom case (BT), and a production line case (Bosch), using image and natural language data types. Evaluation experiments were carried out by BT France with an IT-minded community. Participants were asked to walk through the iSee chatbot to see an AI recommendation on data instances, request explanations from iSee and evaluate their meaningfulness.
  • Two use cases were included in this explanation interaction trial: health radiograph fracture and sensor anomaly (image data type). Thanks to this month of trials, we found that an explanation is perceived differently depending on when (its position in the interaction with the chatbot) it is shown by the system. Moreover, the rendering format is very important: a raw format such as an Integrated Gradients image is not always positively perceived. This has led to one satellite project in which we are studying how to use text generation (generative AI and LLMs) to improve the intelligibility of explanations and consequently improve the explanation experience with the iSee chatbot.
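One standard way to aggregate relevance judgements in a content validation study like the one above is Lawshe's content validity ratio, CVR = (n_e − N/2)/(N/2), where n_e is the number of panellists rating a statement relevant/essential out of N. The sketch below uses this well-known formula purely for illustration; it is not necessarily the exact computation behind the iSee scale.

```python
# Lawshe's content validity ratio for one statement:
#   CVR = (n_e - N/2) / (N/2)
# where n_e = number of experts judging the statement relevant/essential
# and N = total number of experts. CVR ranges from -1 to +1.
# Using it here is illustrative; the iSee scale construction may differ.

def content_validity_ratio(n_essential, n_experts):
    half = n_experts / 2
    return (n_essential - half) / half

# 9 of 10 practitioners rate a statement as relevant:
print(content_validity_ratio(9, 10))  # 0.8
# Exactly half do: CVR is 0 (no agreement either way).
print(content_validity_ratio(5, 10))  # 0.0
```

Statements whose ratio falls below a panel-size-dependent threshold are typically dropped, which is how such a study prunes a curated list down to a usable scale.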

New: the main publication presenting the latest features developed in the iSee platform will be published at PAIS 2024.

Satellite projects:

3 satellite projects started in March 2024.

  1. RGU-IIT-BTF: explaining the CBR-RAG model in LLMs for legal question answering and knowledge extraction
  2. BTF: generative AI (LLM) to build a human-intelligible explanation on top of the iSee recommended explanation, to improve the explanation experience
  3. UCM-NTNU: evaluation metrics for counterfactual and feature-based explainers, including PertCF