The RGU team will present their initial work on explanation evaluation through a simple use case: “Counterfactual Explanations for Student Outcome Prediction with Moodle Footprints”, by Anjana Wijekoon, Nirmalie Wiratunga, Ikechukwu Nkisi-Orji, Kyle Martin, Chamath Palihawadana and David Corsar.

Through the SICSA XAI workshop, we hope to exchange novel theoretical and applied research around three points at the core of iSee:

  • Design and implementation of new explainability methods for intelligent systems of all types, with particular emphasis on complex systems combining multiple AI components.
  • Evaluation of explainers or explanations using automated metrics, novel methods of user-centred evaluation, or evaluation of explainers with users in real-world settings.
  • Ethical considerations surrounding the explanation of intelligent systems, including accountability, accessibility, confidentiality and privacy.

The proceedings are published online on CEUR: http://ceur-ws.org/Vol-2894/