Leilani H. Gilpin

Assistant Professor

UC Santa Cruz

I am an Assistant Professor in Computer Science and Engineering and an affiliate of the Science & Justice Research Center at UC Santa Cruz.

Previously, I was a research scientist at Sony AI working on explainability in AI agents. I received my PhD in Electrical Engineering and Computer Science from MIT, where I worked in CSAIL and continue as a collaborating researcher. During my PhD, I developed “Anomaly Detection through Explanations” (ADE), a self-explaining, full-system monitoring architecture that detects and explains inconsistencies in autonomous vehicles. This allows machines and other complex mechanisms to interpret their actions and learn from their mistakes.

My research focuses on theories and methodologies for monitoring, designing, and augmenting complex machines that can explain themselves for diagnosis, accountability, and liability. My long-term research vision is self-explaining, intelligent machines by design.


  • Explainable AI (XAI)
  • Anomaly Detection
  • Commonsense Reasoning
  • Anticipatory Thinking for Autonomy
  • Semantic Representations of Language
  • Story-enabled intelligence
  • AI & Ethics


  • PhD in Electrical Engineering and Computer Science, 2020

    Massachusetts Institute of Technology

  • M.S. in Computational and Mathematical Engineering, 2013

    Stanford University

  • BSc in Computer Science, BSc in Mathematics, Music minor, 2011

    UC San Diego


  • February 2022: Sony AI’s GT Sophy work was published in Nature.
  • January 2022: Our paper on Anticipatory Thinking Challenges in Open Worlds: Risk Management was accepted to the AAAI spring symposium on Designing Artificial Intelligence for Open Worlds.
  • November 2021: I’m recruiting for PhD students! See this post for more information. And our new paper on “Explaining Multimodal Errors in Autonomous Vehicles” was published in the proceedings of DSAA 2021.
  • October 2021: I started my job at UC Santa Cruz.
  • September 2021: Our AAAI Fall Symposium on Anticipatory Thinking is happening remotely. I also moved to California.
  • August 2021: Our paper on “Explaining Multimodal Errors in Autonomous Vehicles” was accepted to DSAA 2021 in the Special Session on Practical Applications of Explainable Artificial Intelligence Methods.
  • July 2021: Our workshop on “eXplainable AI approaches for debugging and diagnosis” was accepted to NeurIPS 2021.
  • May 2021: My paper with co-lead Gregory Falco, “A Stress Testing Framework for Autonomous System Verification and Validation (V&V),” was accepted to ICAS 2021.
  • February 2021: I will be giving an invited talk on “Anticipatory Thinking: a Testing and Representation Challenge for Self-Driving Cars” at the 55th Annual Conference on Information Sciences and Systems.


Explaining Multimodal Errors in Autonomous Vehicles

Complex machines, such as autonomous vehicles, are unable to reconcile conflicting behaviors between their underlying subsystems, which …

A Stress Testing Framework for Autonomous System Verification and Validation (V&V)

Anticipatory Thinking: A Testing and Representation Challenge for Self-Driving Cars

A Knowledge Driven Approach to Adaptive Assistance Using Preference Reasoning and Explanation

There is a need for socially assistive robots (SARs) to provide transparency in their behavior by explaining their reasoning. …

Anomaly Detection Through Explanations

Under most conditions, complex machines are imperfect. When errors occur, as they inevitably will, these machines need to be able to …

Recent & Upcoming Talks

Featured talks are available as videos.

Explaining Errors in Complex Systems

Research overview for CSE 200.

Explaining Errors in Autonomous Driving: A Diagnosis Tool and Testing Framework for Robust Decision Making

Autonomous systems are prone to errors and failures without knowing why. In critical domains like driving, these autonomous …

Perception Challenge for Autonomous Vehicles

I'm Recruiting PhD Students

Anomaly Detection Through Explanations

FUZZ-IEEE invited talk


Lead Instructor


Teaching Assistant

  • MIT - 6.905/6.945: Large-scale Symbolic Systems
  • Stanford University - CS 348A: Geometric Modeling (PhD Level Course)
  • UC San Diego - COGS 5A (beginning Java), CSE 8A/8B (beginning Java), CSE 5A (beginning C), CSE 21 (discrete mathematics), CSE 100 (advanced data structures), CSE 101 (algorithms)


AI and ethics

The AI and ethics reading group is a student-led, campus-wide initiative.

Explanatory Games

Using internal symbolic, explanatory representations to robustly monitor agents.

Monitoring Decision Systems

An adaptable framework to supplement decision making systems with commonsense knowledge and reasonableness rules.

The Car Can Explain!

The methodologies and underlying technologies that allow self-driving cars and other AI-driven systems to explain behaviors and failures.


Academic Interests as a Bookshelf

  • Sylvain Bromberger - On What We Know We Don’t Know
  • Yuval Noah Harari - Sapiens
  • Marvin Minsky - The Emotion Machine
  • Roger Schank - Scripts, Plans, Goals and Understanding: an Inquiry into Human Knowledge Structures
  • Patrick Suppes - Introduction to Logic

Note: This is a working list. It is inspired by my colleague. Let’s pass it along.

Other Happenings

  • My father, Brian M. Gilpin, is a retired manager who has written a new book about white privilege in Hawaii. My mother is a retired recreation therapist who worked for over 30 years at Sonoma State Hospital, and my brother is an aspiring writer.
  • In fall 2018, I learned How to Make Almost Anything.
  • When I’m not working, I enjoy rowing, swimming, and hiking. I’m also a former water polo player.
  • Sometimes, I manage to take photos.
  • I am captivated by personality traits and analysis. I did a project on detecting personality traits using speech signals. I consistently score as an INTJ, but am quite in the middle in (T)hinking versus (F)eeling.
  • Currently reading: Deep Work.


lgilpin @ ucsc.edu