ML4LMS @ ICML 2024
Benedikt Kantz, Clemens Staudinger, Christoph Feilmayr, Johannes Wachlmayr, Alexander Haberl, Stefan Schuster, Franz Pernkopf

Background

This paper was part of my Master’s thesis, in cooperation with voestalpine Stahl GmbH. It was my first accepted paper.

Abstract

eXplainable Artificial Intelligence (XAI) aims to provide understandable explanations of black-box models. This paper evaluates current XAI methods by scoring them against ground-truth simulations and sensitivity analysis. To this end, we used an Electric Arc Furnace (EAF) model to better understand the limits and robustness characteristics of XAI methods such as SHapley Additive exPlanations (SHAP), Local Interpretable Model-agnostic Explanations (LIME), Accumulated Local Effects (ALE), and Smooth Gradients (SG) in a highly topical setting. These XAI methods were applied to various black-box models and then scored, using a novel evaluation methodology over a range of simulated additive noise, on their correctness with respect to the ground-truth sensitivity of the data-generating processes. The resulting evaluation shows that the ability of the Machine Learning (ML) models to capture the process accurately is indeed coupled with the correctness of the explanations of the underlying data-generating process. We further show how XAI methods differ in their ability to correctly predict the true sensitivity of the modeled industrial process.
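The core scoring idea can be sketched in a few lines. This is a minimal toy, not the paper's actual code: a known linear function stands in for the EAF simulation, a finite-difference gradient stands in for an XAI attribution method, and a cosine-similarity score compares the mean attribution to the ground-truth sensitivity as additive noise grows. All names and the specific scoring rule here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def process(x):
    # Toy stand-in for the EAF simulation: known sensitivities (3, -2).
    return 3.0 * x[:, 0] - 2.0 * x[:, 1]

true_sensitivity = np.array([3.0, -2.0])

def explain_finite_diff(f, x, eps=1e-3):
    # Crude local "explanation": central finite-difference gradient per point.
    grads = np.zeros_like(x)
    for j in range(x.shape[1]):
        xp, xm = x.copy(), x.copy()
        xp[:, j] += eps
        xm[:, j] -= eps
        grads[:, j] = (f(xp) - f(xm)) / (2 * eps)
    return grads

def robustness_score(expl, truth):
    # Cosine similarity between the mean attribution and the true sensitivity:
    # 1.0 means the explanation recovers the data-generating process exactly.
    m = expl.mean(axis=0)
    return float(m @ truth / (np.linalg.norm(m) * np.linalg.norm(truth)))

x = rng.normal(size=(200, 2))
for noise in (0.0, 0.5, 1.0):
    # Additive observation noise corrupts the "model" the explainer queries.
    noisy = lambda z, s=noise: process(z) + rng.normal(scale=s, size=len(z))
    score = robustness_score(explain_finite_diff(noisy, x), true_sensitivity)
    print(f"noise={noise:.1f} score={score:.3f}")
```

At zero noise the score is exactly 1.0 for this linear process; as the noise scale rises, the attributions degrade and the score drops, which is the qualitative behavior the paper's evaluation quantifies for SHAP, LIME, ALE, and SG on real black-box models.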