Swiss AI Research Overview Platform

Personalized Explainable Artificial Intelligence for decentralized agents with heterogeneous knowledge

Lay summary

SUBJECT & OBJECTIVE

This project tackles the generation, aggregation, negotiation, and personalization of the explanations produced by machine learning predictors (which are typically black boxes) over heterogeneous datasets (i.e., data differing in format, scale, nature, etc.), via multi-modal (i.e., sound-, image-, and text-based) interactions.

SOCIO-SCIENTIFIC CONTEXT

This work will make it possible to understand how and why certain decisions are taken by intelligent machines, from the diverse perspectives of both humans and other machines. The envisioned impact is fourfold:

- Academic - the creation of novel models and mechanisms for dynamic explainable distributed systems;

- Technological - the inspiration of new user-centric, trustworthy intelligent products (including for safety-critical applications);

- Industrial - transparent eHealth and assistive systems that will finally be inspectable and trustworthy in highly sensitive domains such as nutrition, where individuals (e.g., children) deserve clear and unbiased support;

- Ethics - the establishment of explainable and ethical-by-design mechanisms to ensure correct behavior and prevent possible AI misconduct.

Abstract

Explainable AI (XAI) has emerged in recent years as a set of techniques and methodologies aimed at explaining machine learning (ML) models and predictors, enabling humans to understand, trust, and effectively manage the outcomes produced by artificially intelligent entities. Although these initiatives have advanced the state of the art, several challenges still need to be addressed before XAI can be adequately applied in real-life scenarios. In particular, two key aspects require attention: the personalization of XAI, and the ability to provide explanations in decentralized environments where heterogeneous knowledge is prevalent.
- Firstly, the personalization of explanations is particularly relevant given the diversity of backgrounds, contexts, and abilities of the subjects receiving the explanations generated by AI systems (e.g., human recipients such as patients and healthcare professionals, or virtual ones such as intelligent autonomous agents). Hence, the need for personalization must be reconciled with the imperative of providing trusted, transparent, interpretable, and understandable (symbolic) outcomes from (sub-symbolic) ML processing.
- Secondly, the emergence of diverse AI systems collaborating on a given set of tasks while relying on heterogeneous datasets raises the question of how explanations can be aggregated or integrated, given that they emerge from different knowledge assumptions and processing pipelines.

In this project, we aim to address these two main challenges by leveraging the multi-agent system (MAS) paradigm: decentralized AI agents will extract symbolic knowledge from, and inject it into, sub-symbolic ML predictors, and this knowledge will in turn be dynamically shared to compose personalized and ethical explanations according to the context and the recipient's knowledge and background (a minimal sketch of this wrapping idea follows below). The proposed approach combines intra-agent, inter-agent, and human-agent interactions so as to benefit both from the specialization of ML agents and from agent negotiation, argumentation, and ontological reasoning, which will integrate the heterogeneous symbolic knowledge/explanations extracted from the sub-symbolic predictors wrapped by the AI agents.
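
To make the wrapping idea concrete, below is a minimal Python sketch of an agent that wraps an opaque (sub-symbolic) predictor, extracts symbolic rules from it via a shallow decision-tree surrogate (one common extraction strategy, assumed here purely for illustration), and tailors the amount of symbolic detail to a recipient profile. All names (ExplainableAgent, RecipientProfile, explain_for) and the nutrition-style features are hypothetical, not part of the project's actual design.

# Hypothetical sketch: agent wrapping a black-box predictor and exposing
# recipient-tailored symbolic explanations. The surrogate-tree extraction
# below is one common strategy, assumed here for illustration only.

from dataclasses import dataclass

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text


@dataclass
class RecipientProfile:
    """Who receives the explanation (e.g., patient vs. clinician)."""
    expertise: str       # "layperson" or "expert"
    max_rule_depth: int  # how much symbolic detail the recipient can absorb


class ExplainableAgent:
    """Wraps a sub-symbolic predictor and exposes symbolic explanations."""

    def __init__(self, predictor, feature_names):
        self.predictor = predictor
        self.feature_names = feature_names

    def extract_symbolic_knowledge(self, X, depth):
        # Fit an interpretable surrogate that mimics the black box's
        # decisions, then read its rules off as text.
        surrogate = DecisionTreeClassifier(max_depth=depth, random_state=0)
        surrogate.fit(X, self.predictor.predict(X))
        return export_text(surrogate, feature_names=list(self.feature_names))

    def explain_for(self, X, recipient: RecipientProfile):
        # Personalize: experts receive deeper rule sets than laypeople.
        rules = self.extract_symbolic_knowledge(X, recipient.max_rule_depth)
        header = ("Decision rules (technical):"
                  if recipient.expertise == "expert"
                  else "In simple terms, the system mainly looks at:")
        return f"{header}\n{rules}"


# Toy usage: a nutrition-style tabular task with an opaque predictor.
rng = np.random.default_rng(0)
X = rng.random((200, 3))
y = (X[:, 0] + 0.5 * X[:, 2] > 0.8).astype(int)  # synthetic "risk" label

black_box = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
agent = ExplainableAgent(black_box, ["sugar_g", "fiber_g", "sat_fat_g"])

print(agent.explain_for(X, RecipientProfile("layperson", max_rule_depth=2)))
print(agent.explain_for(X, RecipientProfile("expert", max_rule_depth=4)))

The point of the sketch is that personalization happens at the agent boundary: the same extracted symbolic knowledge is rendered differently depending on who asks, which mirrors the explanation-composition step described above.
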
- The project includes the validation of the personalization and heterogeneous knowledge integration approach through a prototype application in the domain of food and nutrition monitoring and recommendation, which today, more than ever, needs explainability and user-centric personalization. Finally, the performance, agent-agent and agent-human explainability, transparency, and ethics of the employed techniques in a collaborative AI environment will be tested. The expected results will push the boundaries of XAI, ML, MAS, and nutrition recommender systems with:
(i) new theoretical foundations enriching the state of the art of each discipline,
(ii) new technological explainable-by-design methodologies and systems by bridging symbolic and sub-symbolic AI,
(iii) new means and tools at the service of eHealth and wellbeing applications (commercial and academic),
(iv) personalized, transparent, and ethical-by-design innovations.

The project's results are expected to impact:
(i) health and safety-critical domains - ensuring understandability and transparency, and promoting trust and ethics,
(ii) the academic domain - producing innovative distributed explainable techniques and methodologies with built-in ethics, applicable to several other scenarios,
(iii) real-world applications - bridging the increasing dependency on ML and human users' need for personalization.

Last updated: 14.01.2022

Prof. Michael Ignaz Schumacher