Background
This paper was the second paper of my Master’s thesis and contains its core scientific novelty. It was written in cooperation with voestalpine Stahl GmbH.
Abstract
Attributing uncertainties to the input space elevates the trustworthiness and explainability of machine learning applications. This paper proposes a novel method called Smoothness Constrained Attribution (SCA), which uses the uncertainty propagation mechanism to propagate the output uncertainty back to the input space. This input uncertainty attribution relies solely on test-time data and the trained uncertainty-aware Machine Learning (ML) model, and assumes a smooth input space, resulting in an efficient and simple system. SCA is compared to existing input Uncertainty Attribution Mechanisms (iUCAMs) based on eXplainable Artificial Intelligence (XAI) and to an oracle reference using heteroscedastic noise on different synthetic datasets. These evaluations demonstrate the robustness of SCA and its improvements over existing methods.
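To make the general idea of input uncertainty attribution concrete, the sketch below illustrates one generic gradient-style variant: the output uncertainty of an uncertainty-aware model is attributed to the inputs via its sensitivity to each input dimension, using only test-time data and the trained model under a smooth-input-space assumption. This is not the SCA method from the paper; the toy heteroscedastic model `predictive_uncertainty` and the finite-difference attribution are illustrative assumptions only.

```python
import numpy as np

def predictive_uncertainty(x):
    # Toy stand-in for an uncertainty-aware model: returns a scalar
    # output uncertainty (e.g. a predicted standard deviation) for
    # input x. Heteroscedastic by construction: the uncertainty
    # depends on the input values themselves.
    return 0.1 + 0.5 * x[0] ** 2 + 0.2 * abs(x[1])

def attribute_uncertainty(x, eps=1e-5):
    # Finite-difference gradient of the output uncertainty with
    # respect to each input dimension. Smoothness of the input space
    # is what makes this local sensitivity meaningful.
    grad = np.zeros_like(x, dtype=float)
    for i in range(x.size):
        xp, xm = x.copy(), x.copy()
        xp[i] += eps
        xm[i] -= eps
        grad[i] = (predictive_uncertainty(xp)
                   - predictive_uncertainty(xm)) / (2 * eps)
    return grad

x = np.array([1.0, -2.0])
attr = attribute_uncertainty(x)
# Here the first input dominates the attribution, since the toy
# model's uncertainty grows quadratically in x[0].
```

In this toy setup the attribution for `x[0]` is the analytic derivative `0.5 * 2 * x[0] = 1.0`, which the finite-difference estimate recovers up to numerical error.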