Explainable Feature Importance: Interpretable machine learning through game-theoretic analysis of influencing variables and interaction effects

Overview

Machine learning (ML) methods support the search for patterns in data and for relationships between variables, for example in complex biomedical systems. In this way, they can provide new insights and improve decisions in application areas such as medical diagnostics. Besides the quality of the models learned from the data, the trust of human experts in these models is an important prerequisite for the usefulness and applicability of ML. This requires a certain degree of model transparency, especially with respect to the importance of individual influencing variables and the interactions between them. In this project, we propose a game-theoretic approach to model and decompose higher-order dependencies, i.e., dependencies between subsets of variables. On this basis, theoretically sound measures of the importance of individual influencing variables and of the strength of the interactions between them can be derived. We not only develop the theoretical and conceptual foundations of this approach, but also work on an efficient algorithmic implementation. To improve understanding and acceptance, we furthermore develop an interactive approach for exploring dependencies in high-dimensional data spaces.
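A standard game-theoretic instantiation of such measures treats features as players and the value achieved by a subset of features (e.g., model performance) as the worth of a coalition; the Shapley value then quantifies the importance of an individual feature, and the Shapley interaction index the strength of a pairwise interaction. The following Python sketch, an illustration rather than the project's implementation, computes both by exact coalition enumeration; the value function v and the toy game are assumptions chosen to make the effect visible.

    from itertools import combinations
    from math import factorial

    def shapley_values(players, v):
        """Exact Shapley values by enumerating all coalitions.

        players: list of feature indices
        v: value function mapping a frozenset of features to a real number
        """
        n = len(players)
        phi = {}
        for i in players:
            others = [p for p in players if p != i]
            total = 0.0
            for k in range(len(others) + 1):
                # Shapley weight for coalitions of size k not containing i
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                for S in combinations(others, k):
                    S = frozenset(S)
                    total += weight * (v(S | {i}) - v(S))
            phi[i] = total
        return phi

    def shapley_interaction(players, v, i, j):
        """Pairwise Shapley interaction index for features i and j."""
        n = len(players)
        others = [p for p in players if p not in (i, j)]
        total = 0.0
        for k in range(len(others) + 1):
            weight = factorial(k) * factorial(n - k - 2) / factorial(n - 1)
            for S in combinations(others, k):
                S = frozenset(S)
                # Discrete second-order difference: joint gain minus individual gains
                total += weight * (v(S | {i, j}) - v(S | {i}) - v(S | {j}) + v(S))
        return total

    # Toy game (assumption for illustration): features 0 and 1 are only useful together.
    def v(S):
        return 1.0 if {0, 1} <= S else 0.0

    players = [0, 1, 2]
    print(shapley_values(players, v))             # {0: 0.5, 1: 0.5, 2: 0.0}
    print(shapley_interaction(players, v, 0, 1))  # 1.0

In this toy game, features 0 and 1 each receive an importance of 0.5 and an interaction strength of 1.0, while the irrelevant feature 2 receives 0. Exact enumeration requires O(2^n) evaluations of v, which is why an efficient algorithmic implementation, as pursued in this project, is essential for realistic feature counts.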

Key Facts

Project duration:
01/2021 - 12/2024
Funded by:
MKW NRW (Ministry of Culture and Science of the German State of North Rhine-Westphalia)

Principal Investigators

Prof. Dr. Reinhold Häb-Umbach

Communications Engineering / Heinz Nixdorf Institute

About the person