Summary of the project

Providing high-quality explanations for AI predictions based on machine learning requires combining several interrelated aspects, including, among others: selecting an appropriate level of generality or specificity for the explanation; accounting for how familiar the explanation's recipient is with the AI task under consideration; referring to the specific elements that contributed to the decision; making use of additional knowledge (e.g. metadata) that may not be part of the prediction process; selecting appropriate examples; providing evidence supporting negative hypotheses; and formulating the explanation in a clearly interpretable, and possibly convincing, way.
In light of these considerations, ANTIDOTE fosters an integrated vision of explainable AI, in which low-level characteristics of the deep learning process are combined with the higher-level schemas characteristic of human argumentation. This integrated vision rests on three observations:

  • There is a consensus that in neural architectures the internal states of the network (e.g. the weights of individual nodes) correlate only weakly with the network's classification outcome;
  • High-quality explanations crucially rely on argumentation mechanisms (e.g. providing supporting examples and rejected alternatives) that are, to a large extent, task-independent;
  • In real settings, providing explanations is inherently an interactive process, where an explanatory dialogue takes place between the system and the user.
Accordingly, ANTIDOTE will exploit cross-disciplinary competences in three areas, namely deep learning, argumentation, and interactivity, to support a broader, innovative view of explainable AI. Although we envision a general integrated approach to explainable AI, we will focus on a number of deep learning tasks in the medical domain, where the need for high-quality explanations, for both clinicians and patients, is perhaps more critical than in other domains.
ANTIDOTE Project

  • Call Topic: Explainable Machine Learning-based Artificial Intelligence (XAI), Call 2019
  • Start date: April 2021 (duration: 40 months)
  • Funding support: 957 478 €

Meetings

| Date | Location | Attending partners | Purpose |
| --- | --- | --- | --- |
| 23/04/2021 | Virtual | All partners | Kick-off meeting, defining first collaborations |
| 26-27/10/2021 | Sophia Antipolis (FR) | All partners | Semester scientific meeting: update of work and collaborations |
| 16-17/05/2022 | San Sebastián (ES) | All partners | Semester scientific meeting: update of work and collaborations |
| 20-21/02/2023 | Trento (IT) | All partners | Semester scientific meeting: update of work and collaborations |
| 12-13/10/2023 | Lisboa (PT) | All partners | Semester scientific meeting: update of work and collaborations |
| 27-28/10/2024 | Leuven (BE) | All partners | Semester scientific meeting: update of work and collaborations |
| 01-02/07/2024 | Sophia Antipolis (FR) | All partners | Final semester scientific meeting: update of work and collaborations |