Using Explanations for Robust Autonomous Decision Making

Abstract

State-of-the-art artificially intelligent systems, like Watson, Deep Blue, and AlphaZero, are opaque to humans. These systems were designed to play games such as Jeopardy!, chess, and Go. But as intelligent systems begin to approach human-level decision making, they need to be able to explain their decisions and tell a coherent story of why they acted as they did. I am developing the underlying technology and methods to model complex systems as layered systems of communicating agents that can explain their behavior and learn from their mistakes.

Date
Oct 23, 2019 1:00 PM — 2:30 PM
Event
UC San Diego - Halicioglu Data Science Institute Seminar Series
Location
San Diego Supercomputer Center
Leilani H. Gilpin
Assistant Professor

My research interests include explainable artificial intelligence, anomaly detection, and system debugging.