AG Bildverarbeitung

Felix Klein Kolloquium of the Department

Recent advances in mathematical optimization enable the efficient solution of even large-scale problems in practice. However, optimal solutions are often difficult to interpret and may not be accepted in practice. In contrast to Artificial Intelligence, explainability is still rarely incorporated into optimization.

We propose to treat explainability as an additional criterion alongside optimality. From a data-driven perspective, a solution is considered explainable if it resembles decisions observed in similar situations in the past, leading to a trade-off between solution quality and interpretability. This approach gives rise to computational challenges: we show that even simple explainable optimization models are NP-hard in general. However, we identify polynomially solvable special cases, such as the explainable shortest-path problem.
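To make the trade-off concrete, the following is a minimal toy sketch of an explainable shortest-path computation: each edge pays its usual cost plus a penalty (weighted by a parameter lambda) if it was not used in past solutions. The penalty model, the function name, and the graph encoding are illustrative assumptions for this sketch, not the formulation from the talk.

```python
import heapq

def explainable_shortest_path(graph, source, target, past_edges, lam):
    """Dijkstra on modified edge costs: an edge (u, v) costs its own
    weight, plus a penalty `lam` if it does not appear in `past_edges`
    (the set of edges used in historical solutions).

    graph: dict mapping node -> list of (neighbor, cost) pairs.
    Assumes a path from source to target exists.
    """
    dist = {source: 0.0}
    prev = {}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == target:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, c in graph.get(u, []):
            # Edges that resemble past decisions incur no penalty.
            penalty = 0.0 if (u, v) in past_edges else lam
            nd = d + c + penalty
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    # Reconstruct the path from the predecessor map.
    path, node = [target], target
    while node != source:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[target]

# Toy instance: the cheapest A-D path is A-B-D (cost 2), but the
# historically used path is A-C-D (cost 2.5).
graph = {"A": [("B", 1.0), ("C", 1.5)], "B": [("D", 1.0)], "C": [("D", 1.0)]}
past = {("A", "C"), ("C", "D")}

print(explainable_shortest_path(graph, "A", "D", past, 0.0))
print(explainable_shortest_path(graph, "A", "D", past, 1.0))
```

With lambda = 0 the purely optimal path A-B-D is returned; with lambda = 1 the penalized costs favor the familiar path A-C-D, illustrating how explainability can be bought at a small loss in optimality.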

Moreover, identifying a small and relevant subset of features to capture similarity leads to a feature selection problem whose decision version we show to be NP-complete.

To address the general cases, we propose mathematical models and develop heuristic solution approaches. Computational experiments on synthetic and real-world data for explainable shortest-path problems demonstrate that high-quality solutions can be obtained efficiently in practice, and that explainability can often be achieved at only a small loss in optimality.

This is joint work with Kevin Aigner (FAU), Marc Goerigk (Passau), Michael Hartisch (Passau), Arthur Miehlich (FAU), and Florian Rösel (FAU).