Complex machines, such as autonomous vehicles, can fail to reconcile conflicting behaviors among their underlying subsystems, leading to accidents and other negative consequences. Existing approaches to error and anomaly detection are not equipped to detect and mitigate such inconsistencies among parts. In this paper, we present “Anomaly Detection through Explanations” (ADE), a multimodal monitoring architecture that reconciles critical discrepancies under uncertainty. ADE uses symbolic explanations as a debugging language, examining the underlying reasons for each subsystem’s decisions. When decisions conflict, our method applies a synthesizer, guided by a priority hierarchy, that processes subsystem outputs along with their underlying reasons and transparently adjudicates the conflict. We demonstrate the accuracy and performance of ADE on autonomous vehicle scenarios and data, and discuss additional error evaluations for future work.