XCBR CHALLENGE 2022

At the 30th International Conference on Case-Based Reasoning (ICCBR), September 2022, in Nancy, France


RESULTS

1st Prize: Team Indiana: case PsychologyPrediction. Leveraging SHAP and CBR for Dimensionality Reduction on the Psychology Prediction Dataset. Authors: Zachary Wilkerson, David Leake, and David Crandall.

2nd Prize: Team RGU: case WeatherForecasting. Explainable Weather Forecasts Through an LSTM-CBR Twin System. Authors: Craig Pirie, Malavika Suresh, Pedram Salimi, Chamath Palihawadana and Gayani Nanayakkara.

3rd Prize: Team NTNU (NordicXCBR): case PsychologyPrediction. Explaining your Neighbourhood: A CBR Approach for Explaining Black-Box Models. Authors: Betül Bayrak, Paola Marin-Veites and Kerstin Bach.

Participation: 

  • Team ITMeridaCuevas (AAAIMX): IREX. IREX: A reusable process for the iterative refinement and explanation of classification models. Authors: Cristian E. Sosa-Espadas, Manuel Cetina-Aguilar, Jose A. Soladrero, Jesus M. Darias, Esteban E. Brito-Borges, Nora L. Cuevas-Cuevas and Mauricio G. Orozco-del-Castillo.
  • Team ITMeridaSabbagh (AAAIMX): case WeatherForecasting. CBR-foX: A generic post-hoc case-based reasoning method for the explanation of time-series forecasting. Authors: Moisés Fernando Valdez-Ávila, Gerardo Arturo Pérez-Pérez, Humberto Sarabia-Osorio, Carlos Bermejo-Sabbagh and Mauricio G. Orozco-del-Castillo.

📣 AGENDA (News!)

There are 5 submissions to the challenge:

  1. Team NTNU (NordicXCBR): case PsychologyPrediction. Explaining your Neighbourhood: A CBR Approach for Explaining Black-Box Models. Authors: Betül Bayrak, Paola Marin-Veites and Kerstin Bach.
  2. Team ITMeridaCuevas (AAAIMX): IREX. IREX: A reusable process for the iterative refinement and explanation of classification models. Authors: Cristian E. Sosa-Espadas, Manuel Cetina-Aguilar, Jose A. Soladrero, Jesus M. Darias, Esteban E. Brito-Borges, Nora L. Cuevas-Cuevas and Mauricio G. Orozco-del-Castillo.
  3. Team ITMeridaSabbagh (AAAIMX): case WeatherForecasting. CBR-foX: A generic post-hoc case-based reasoning method for the explanation of time-series forecasting. Authors: Moisés Fernando Valdez-Ávila, Gerardo Arturo Pérez-Pérez, Humberto Sarabia-Osorio, Carlos Bermejo-Sabbagh and Mauricio G. Orozco-del-Castillo.
  4. Team RGU: case WeatherForecasting. Explainable Weather Forecasts Through an LSTM-CBR Twin System. Authors: Craig Pirie, Malavika Suresh, Pedram Salimi, Chamath Palihawadana and Gayani Nanayakkara.
  5. Team Indiana: case PsychologyPrediction. Leveraging SHAP and CBR for Dimensionality Reduction on the Psychology Prediction Dataset. Authors: Zachary Wilkerson, David Leake, and David Crandall.

The next step for each team is an oral presentation and/or a video on 12th September in Nancy, France. During the day, each team can present live or use a pre-recorded video to defend their work. Each presentation/video lasts 10 minutes max, plus 2-5 minutes for quick questions.
After the presentations, voting will take place. The final score combines a review by the expert group (pre-review of the submissions) and a vote by the XCBR workshop community on the day.

Winners will be announced on the day of the challenge, 12th September, after the voting.

📣 SUBMISSION, REGISTRATION and CONTACTS (News!)

If you or your team would like to join the challenge, please register by sending an email to hello@isee4xai.com with “XCBR challenge 2022” in the subject, indicating the GitHub IDs of your team members, the names of the participants, and a contact person. You will then be added to the GitHub repositories to receive all the material required for the challenge.

Enquiries and any questions related to this challenge should also be sent to the following email address: hello@isee4xai.com

Each team can choose to work on any number of the 4 proposed use cases, or all of them.

Each use case is described in a separate GitHub repository: Telecom Recommendation using textual notes, Psychology Prediction using questionnaires, Weather Forecasting using sensor data, and Fracture Detection using radiographs. Each use case page provides a README detailing the data, the AI model, the goal of the explanation, the task to complete, and how to submit your work and results. When writing your 5-page summary, please include the citation indicated on the README page.

🔔 Submission deadline: 26th August 2022

How to submit

Once you have selected your use case, you should have created a fork of the GitHub repository hosting it. To submit:

  1. Push the source files composing your submitted program to your GitHub repository branch.
  2. Send an email to hello@isee4xai.com with the subject “[XAICBR challenge] submission <team name>”, confirming the link to the GitHub repository where you pushed your files, the folder path, and the revision details. By default, we will consider the latest (head) revision as your submission.
  3. Attach your 5-page document to the email.

Note that, in addition to the criteria on the web page, the jury will pay attention to the clarity of your code and of your analysis.

We also welcome a summary video presenting your work, though it is optional. If you have made one, please send us the link.

🎯 GOAL

The objective of this challenge is to highlight your expertise, skills and experience in applying Explainable AI (XAI) techniques to a “blackbox” AI model, so that the predictions of the model can be understood, improved or challenged by the users interacting with or impacted by the model.

Different user personas can be defined by the participants, each of them with a distinct objective or intent for explainability.

Participants will also need to describe how the different explanations can be evaluated by these personas, and to conduct an evaluation of their explanation strategies. These evaluations can be automated or assessed manually through a survey of a group of users.

Participants can either use existing explainer libraries or build their own explainers to generate explanations, and should document how these have been used. Note that an explanation strategy may combine different explainers.
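As a purely illustrative sketch (not part of the challenge material), the snippet below shows one simple example-based explanation strategy in the spirit of the challenge: a black-box classifier is paired with a retrieval step over the training data, and the nearest cases are presented as precedents for a prediction. The dataset, model, and distance measure are placeholder choices.

```python
# Sketch of a CBR-style, example-based explanation for a black-box model.
# All modelling choices here (iris data, random forest, Euclidean retrieval)
# are illustrative assumptions, not the challenge's actual use cases.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import NearestNeighbors

X, y = load_iris(return_X_y=True)
blackbox = RandomForestClassifier(random_state=0).fit(X, y)  # the "black box"

# Case base = training data; retrieval by Euclidean distance.
retriever = NearestNeighbors(n_neighbors=3).fit(X)

query = X[0:1]                                # a case to be explained
prediction = blackbox.predict(query)[0]
distances, neighbor_ids = retriever.kneighbors(query)

# Explanation: the nearest precedents and whether they agree with the prediction.
for dist, idx in zip(distances[0], neighbor_ids[0]):
    agrees = y[idx] == prediction
    print(f"case {idx}: distance={dist:.3f}, label={y[idx]}, agrees={agrees}")
```

A real submission would of course replace the toy retrieval with a domain-appropriate similarity measure, and could combine this with feature-attribution explainers.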

RULES (News about video/presentation duration)

  • This challenge is open to all ICCBR 2022 participants.
  • This challenge is available at no cost and no registration fee is required.
  • Those who are interested in participating should register for the ICCBR conference and the XCBR workshop. For a participating team, at least one member should register for the conference and workshop.
  • Participation can be either individual or in groups (up to 6 members). Members may be students or come from schools, universities, or industry.
  • The submissions must be received by the organisers by the 26th of August 2022.
  • A max 5-page report summarising the explanation technique and its evaluation, together with the source code of all programs developed, is expected.
  • Participants will also be asked to deliver a live presentation of their submission in a session during the conference. A recorded video can be used instead by participants who cannot attend the session. Presentation/video time should fit within 10 minutes.
  • The organisers reserve the right to reject submissions that do not fulfil the above-mentioned criteria.

What the organisers provide (News!)

  • A set of “black box” models trained for a variety of different tasks: image classification, natural language processing, and classification of tabular data.
  • Train and test datasets for each model.
  • Explanation requirements for each task, such as who the target users are and what aspects need to be explained.
  • These materials are shared through private GitHub repositories with those who register interest by emailing hello@isee4xai.com
  • The organisers will publish the datasets and AI black-box models on 26th May 2022.

EVALUATION CRITERIA

Each submission will be evaluated and noted by a panel of expert reviewers and by the community with a vote. The evaluation will take place during the ICCBR 2022 conference.

  • The submission must consist of the following deliverables:
    • a max 5-page report summarising the project, the explainability techniques used and their evaluation, submitted through EasyChair.
    • the source code of all software, libraries, and programs used by the participant, submitted through GitHub.
  • An oral presentation will be delivered during the XCBR workshop (or a recorded YouTube video if you cannot attend the ICCBR conference or the XCBR workshop). Duration: 10 minutes per participant.
  • The 5-page summary can optionally be published in the XCBR proceedings (via CEUR-WS.org).
  • Key criteria that will be evaluated:
    • Performance of the explanations and their evaluation
    • Novelty and innovativeness of the solution
    • Use of a CBR approach in the reasoning
    • Reusability of the solution in other contexts or domains.

PRIZES

  • 3-month funding (covering accommodation and travel costs to an iSee partner organisation: Spain, Scotland, Ireland or France) to work on a satellite project related to the iSee project. This will be a joint work with the iSee members.
  • $400 Amazon gift card
  • $100 Amazon gift card
  • Champagne from the Nancy area.

ORGANISING COMMITTEE

Belén Díaz-Agudo
Bruno Fleisch
David Leake
Anne Liret
Kyle Martin
Juan A. Recio García
Anjana Wijekoon

FAQ

When will the data be published / provided ?

We will provide and publish the datasets, AI black-box models, and explainer catalogue by 26th May 2022.

What will the winner get out of the satellite project ?

Participants from universities will be able to publish the work done and applied within iSee: this work could describe a new explainer, a new explanation strategy and how it suits a use case, and/or how reusable the explainer is across different cases.

Participants from industry will have the option to work closely with the iSee team and become an industry partner. This gives them the opportunity to evaluate their existing explainer: once their case is designed in the iSee cockpit and an evaluation of explanation utilisation has been performed, they could be granted an explainer “explainability compliance” certification at no cost, and finally (optionally) benefit from the integration of their explainer into the iSee library as a recognition.

Is the challenge open to work in progress and to young researchers?

Yes, they are very welcome! The XCBR challenge is designed to be attractive to early-career researchers and industry organisations.

We are a team participating in the challenge; who can benefit from the satellite project visit?

Anyone from the participating team can receive it, provided it is used to work and contribute with iSee. The budget is provided as a whole and can be split between 1-2 visitors, for a duration of at least 2 months. The first prize goes either to the team leader or to another team member selected by mutual agreement between the project site and the team leader.

Which country will I travel to if I win the first prize?

Regarding the country hosting the satellite project visit, we will recommend the most suitable one to the winner, according to the contribution and synergies of their work with iSee members. On the one hand, if the participating team is introducing a new explainer to the community, we would suggest going to Spain or Ireland, where iSee members are developing the CBR engine and the Explanation Experience case similarity measures. On the other hand, if the participating team is introducing a new use case or an application domain with explainability needs, we would suggest going to Scotland or France, where iSee members are focusing on evaluating explainability cases, developing the conversational evaluation layer, and representing new cases.

Can I participate without publishing my submission?

Participants will have the option to accept or decline the publication of their 5-page submission in the XCBR proceedings. Publication consent is not required to participate in the Challenge.