ASSET Seminar: “What Constitutes a Good Explanation?” (Lyle Ungar, Penn)
November 15 at 12:00 PM - 1:15 PM
Shapley values and similar methods are widely used to explain the importance of features in model predictions. The semantics of these feature importances are subtle but crucial: What do these explanations actually mean? And how are they useful? We illustrate using explanations of predictions in three domains: (a) medical outcomes, (b) image content, and (c) first impressions of people—specifically their warmth and competence—derived from video recordings and transcripts. In each scenario, the presence of intermediate-level features enhances the clarity and usefulness of the explanations.
Lyle Ungar, Ph.D., Professor of Computer and Information Science
Lyle Ungar is a Professor of Computer and Information Science at the University of Pennsylvania, where he also holds appointments in Psychology, Bioengineering, Genomics and Computational Biology, and Operations, Information and Decisions. He has published over 400 articles, supervised two dozen Ph.D. students, and is co-inventor on ten patents. His group uses natural language processing and explainable AI for psychological research, including analyzing social media and cell phone sensor data to better understand the drivers of physical and mental well-being. He is currently building socio-emotionally sensitive GPT-based tutors.