How to use the iSee platform

Quick-start videos:

How to use the Explainers library to add a new explainer

Entering a new use case in the Cockpit, explaining what an intent and a persona are, and showing the strategy recommendation: an example on radiology diagnosis (Radiology use case demo video)
and another example on loan approval (iSee Demo – YouTube)

Selecting the questions for the evaluation and using the dialog manager: an example on radiology diagnosis (Radiology use case demo video)

iSee Cockpit URL (you will need a user account):

Questionnaire: We would like to hear about your experience with the iSee tools. Please share your feedback by filling in this online questionnaire.

How to contribute to the Explainers catalogue:

The Explainers library has an API that allows researchers to upload (add) and reuse (read) explanation methods. The library is at

Do you need an online “Explainability using iSee” course?

Please contact us using the contact form at Contact – iSee.

How to design explanation strategies and user-evaluation processes

In iSee we use the Behaviour Tree concept. Have a look at our Behaviour Tree Editor, personalised for explanation experience design.
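Behaviour trees compose steps into sequences (run children in order) and selectors (try alternatives until one succeeds). A minimal sketch of the concept, with illustrative node and action names (not the vocabulary of the iSee editor itself), shows how an explanation strategy can fall back to an alternative explainer when the first one does not satisfy the user:

```python
# Minimal behaviour-tree sketch. Node classes and the two example
# actions are hypothetical; the iSee Behaviour Tree Editor defines
# its own node types for explanation experiences.

SUCCESS, FAILURE = "success", "failure"

class Sequence:
    """Runs children in order; fails as soon as one child fails."""
    def __init__(self, *children):
        self.children = children
    def tick(self, ctx):
        for child in self.children:
            if child.tick(ctx) == FAILURE:
                return FAILURE
        return SUCCESS

class Selector:
    """Tries children in order; succeeds as soon as one child succeeds."""
    def __init__(self, *children):
        self.children = children
    def tick(self, ctx):
        for child in self.children:
            if child.tick(ctx) == SUCCESS:
                return SUCCESS
        return FAILURE

class Action:
    """Leaf node wrapping a callable, e.g. 'show a feature-importance plot'."""
    def __init__(self, fn):
        self.fn = fn
    def tick(self, ctx):
        return self.fn(ctx)

# Example strategy: try a feature-importance explanation first and
# fall back to an example-based explanation if the user is not satisfied.
def show_feature_importance(ctx):
    ctx["shown"].append("feature_importance")
    return SUCCESS if ctx["user_satisfied"] else FAILURE

def show_counterexample(ctx):
    ctx["shown"].append("counterexample")
    return SUCCESS

strategy = Selector(Action(show_feature_importance),
                    Action(show_counterexample))

ctx = {"user_satisfied": False, "shown": []}
result = strategy.tick(ctx)
```

With `user_satisfied` set to `False`, the selector falls through to the second action, so both explanations are shown and the strategy still succeeds.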

FAQ (click here for the updated FAQ)

How to add a new explainer and validate that it has the right format?

The only way to add an explainer to the iSeeExplainerLibrary is to implement the skeleton Python class described in the README on GitHub, section “Adding New Explainers to the Catalogue”. Point 3 shows the skeleton.

Within that class you can call your PyPI library, or another GitHub repository of your own work. If you need help, you can work with the iSee team: just implement your methods in an ipynb (or similar) and we can help complete the full cycle.
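As a rough sketch of what such a class looks like, here is a toy tabular explainer. The class and method names are illustrative only; follow the actual skeleton in the iSeeExplainerLibrary README. The toy attribution (prediction change when zeroing a feature) stands in for the call to your own PyPI library:

```python
# Hypothetical sketch of an explainer class, NOT the official iSee
# skeleton. A real contribution must match the skeleton in the
# iSeeExplainerLibrary README ("Adding New Explainers to the Catalogue").

class MyTabularExplainer:
    def __init__(self, model, feature_names):
        self.model = model              # e.g. a loaded model from your library
        self.feature_names = feature_names

    def explain(self, instance):
        """Return a feature-attribution dict for one tabular instance.

        This toy version scores each feature by the prediction change
        when it is zeroed out (a crude occlusion baseline); a real
        explainer would call your explanation library here instead.
        """
        base = self.model(instance)
        scores = {}
        for i, name in enumerate(self.feature_names):
            perturbed = [float(v) for v in instance]
            perturbed[i] = 0.0
            scores[name] = base - self.model(perturbed)
        return scores

# Toy linear model, so each attribution is exactly weight * value.
weights = [2.0, -1.0, 0.5]
model = lambda x: sum(w * v for w, v in zip(weights, x))

explainer = MyTabularExplainer(model, ["age", "income", "tenure"])
attributions = explainer.explain([1.0, 3.0, 4.0])
```

Because the toy model is linear, the occlusion scores reduce to `weight * value` per feature, which makes the sketch easy to sanity-check.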

Then, please create a pull request to the iSeeExplainerLibrary with your explainer. This usually involves editing two existing .py files in the repository and adding one new .py file.

To demonstrate reusability, you will need to test your explainer on multiple datasets of the applicable data type. We already do this for the existing explainers in iSee. For example, you can check that your explainer works on the use cases provided by the iSee team on the iSee platform. On our side, we will propose a standardised (yet sufficiently complex) use case for each data type: image (BW/RGB), time series, tabular, and natural-language text (NLP).
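A reusability check of this kind can be sketched as a loop that runs the same explainer code over several datasets of one data type and confirms it produces one attribution per feature each time. The dataset names and the toy models below are illustrative, not the standardised iSee use cases:

```python
# Sketch of a reusability check: one explainer, several tabular datasets
# with different feature counts. Dataset names are hypothetical.

def occlusion_explain(model, instance):
    """Crude occlusion attribution: prediction change when zeroing each feature."""
    base = model(instance)
    return [base - model([0.0 if j == i else v for j, v in enumerate(instance)])
            for i in range(len(instance))]

datasets = {
    "loan_approval": [[1.0, 0.5], [0.2, 0.8]],       # 2 features per row
    "radiology_tabular": [[3.0, 1.0, 2.0]],          # 3 features per row
}

results = {}
for name, rows in datasets.items():
    # Toy per-dataset linear model with weights 1, 2, 3, ...
    weights = list(range(1, len(rows[0]) + 1))
    model = lambda x, w=weights: sum(wi * xi for wi, xi in zip(w, x))
    # Same explainer code, different feature counts: the reusability test.
    results[name] = [occlusion_explain(model, row) for row in rows]
```

If the explainer is truly reusable, each result vector has the same length as the dataset's feature count, with no dataset-specific code in the explainer itself.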

What is an intent?

How to link a user evaluation question to an intent?

Why is the data type needed?


What is the format of an AI data sample that allows iSee to generate an evaluation seed case?


How to select a preferred explanation strategy?


How to build an explanation strategy?

How to evaluate my explainer in a (simple) explanation strategy?