"State of the art" artificially intelligent systems, such as Deep Blue, Watson, and AlphaZero, are opaque to humans. Historically, these systems were designed to play games such as chess, Jeopardy!, and Go. But as such systems approach human-level decision making, they need to be able to explain their decisions and tell a story of why they acted as they did. I am developing the underlying technology and methods to model complex systems as a layered system of communicating agents that can explain their behavior and learn from their mistakes.