Accountability layers: explaining complex system failures by parts

Abstract

With the rise of AI in critical decision-making, many important predictions are made by complex and opaque AI algorithms. The aim of eXplainable Artificial Intelligence (XAI) is to make these opaque decision-making algorithms more transparent and trustworthy. This is often done by constructing an explainable model for a single modality or subsystem. However, this approach fails for complex systems composed of multiple interacting parts. In this paper, I discuss how to explain complex system failures. I represent a complex machine as a hierarchical model of introspective subsystems that work together towards a common goal and communicate in a common symbolic language, as sketched below. This work creates a set of explanatory accountability layers for trustworthy AI.
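The abstract describes the architecture only at a high level. What follows is a minimal, hypothetical Python sketch of the idea: one accountability layer gathers symbolic self-reports from introspective subsystems and localizes a failure to the parts whose reports are inconsistent. All names here (Explanation, Subsystem, AccountabilityLayer, SpeedMonitor) are illustrative assumptions, not the paper's actual interface.

```python
from dataclasses import dataclass
from typing import Any, List


@dataclass
class Explanation:
    """A symbolic report a subsystem emits about its own behavior."""
    source: str       # which subsystem produced the report
    claim: str        # statement in the shared symbolic vocabulary
    consistent: bool  # does the subsystem judge its output self-consistent?


class Subsystem:
    """An introspective part of the machine: it acts and explains itself."""
    def __init__(self, name: str):
        self.name = name

    def explain(self, observation: Any) -> Explanation:
        raise NotImplementedError


class SpeedMonitor(Subsystem):
    """Toy subsystem: flags its own output when it violates a sanity rule."""
    def explain(self, observation: Any) -> Explanation:
        speed = observation["speed_mps"]
        ok = 0.0 <= speed <= 70.0
        return Explanation(self.name, f"speed={speed} m/s", consistent=ok)


class AccountabilityLayer:
    """Collects subsystem explanations and localizes a failure to the
    parts whose self-reports are inconsistent."""
    def __init__(self, subsystems: List[Subsystem]):
        self.subsystems = subsystems

    def diagnose(self, observation: Any) -> List[Explanation]:
        reports = [s.explain(observation) for s in self.subsystems]
        return [r for r in reports if not r.consistent]


if __name__ == "__main__":
    layer = AccountabilityLayer([SpeedMonitor("speedometer")])
    blamed = layer.diagnose({"speed_mps": 512.0})  # implausible reading
    for report in blamed:
        print(f"{report.source} is inconsistent: {report.claim}")
```

Because every subsystem reports in the same symbolic vocabulary, layers of this kind can be stacked hierarchically, with each layer explaining failures of the parts below it rather than of the system as an undifferentiated whole.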

Publication
AAAI