ASSET Seminar: Explainable AI via Semantic Information Pursuit (René Vidal, Johns Hopkins University)
September 21, 2022 at 12:00 PM - 1:30 PM
Presentation Abstract:
There is significant interest in developing ML algorithms whose final predictions can be explained in domain-specific terms that are understandable to a human. Providing such an “explanation” can be crucial for the adoption of ML algorithms in risk-sensitive domains such as healthcare. This has motivated a number of approaches that seek to explain existing ML algorithms in a post-hoc manner. However, many of these approaches have been widely criticized for a variety of reasons, and no clear methodology exists for developing ML algorithms whose predictions are readily understandable by humans. To address this challenge, we develop a method for constructing high-performance ML algorithms that are “explainable by design.” Specifically, our method makes its prediction by asking a sequence of domain- and task-specific yes/no queries about the data (akin to the game “20 questions”), each having a clear interpretation to the end user. We then minimize the expected number of queries needed for accurate prediction on any given input. This yields a human-interpretable prediction process by construction, since the questions that form the basis for the prediction are specified by the user as interpretable concepts about the data. Experiments on vision and NLP tasks demonstrate the efficacy of our approach and its superiority over post-hoc explanations. Joint work with Aditya Chattopadhyay, Stewart Slocum, Benjamin Haeffele, and Donald Geman.
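To make the “20 questions” idea in the abstract concrete, the sketch below is a minimal, hypothetical Python illustration (not the authors' implementation): at each step it picks the unanswered yes/no query whose answer carries the most information about the label given the answers observed so far, and it stops once the label posterior is confident enough. The function names, the empirical (table-based) posterior estimate, and the stopping threshold are all illustrative assumptions introduced here.

```python
import numpy as np


def entropy(p):
    """Shannon entropy (bits) of a discrete distribution."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))


def information_pursuit(X, y, x_test, query_names, threshold=0.99):
    """Greedily ask the most informative yes/no queries about x_test.

    X           : (n_items, n_queries) 0/1 answers of reference items to each query
    y           : (n_items,) class labels of the reference items
    x_test      : (n_queries,) 0/1 answers of the item to be classified
    query_names : human-readable description of each query (for the explanation)
    threshold   : stop once the label posterior exceeds this confidence
    """
    classes = np.unique(y)
    mask = np.ones(len(X), dtype=bool)  # reference items consistent with answers so far
    asked, history = [], []

    while True:
        # Empirical posterior over labels given the answers observed so far.
        post = np.array([(y[mask] == c).mean() for c in classes])
        if post.max() >= threshold or len(asked) == X.shape[1]:
            break

        # Choose the unasked query whose answer is most informative about the
        # label, i.e. maximizes the empirical information gain given the history.
        best_q, best_gain = None, -1.0
        h_y = entropy(post)
        for q in range(X.shape[1]):
            if q in asked:
                continue
            expected_h = 0.0
            for a in (0, 1):
                sub = mask & (X[:, q] == a)
                p_a = sub.sum() / mask.sum()
                if p_a == 0:
                    continue
                expected_h += p_a * entropy(
                    np.array([(y[sub] == c).mean() for c in classes])
                )
            gain = h_y - expected_h
            if gain > best_gain:
                best_q, best_gain = q, gain

        # "Ask" the chosen query by reading off the test item's answer.
        answer = int(x_test[best_q])
        asked.append(best_q)
        history.append((query_names[best_q], bool(answer)))
        new_mask = mask & (X[:, best_q] == answer)
        if not new_mask.any():  # no consistent reference item left; stop here
            break
        mask = new_mask

    prediction = classes[np.argmax(post)]
    return prediction, history  # the query/answer history is the explanation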
Speaker Bio:
Dr. René Vidal is the Herschel Seder Professor of Biomedical Engineering and the Director of the Mathematical Institute for Data Science (MINDS), the NSF-Simons Collaboration on the Mathematical Foundations of Deep Learning, and the NSF TRIPODS Institute on the Foundations of Graph and Deep Learning at Johns Hopkins University. He is also an Amazon Scholar, Chief Scientist at NORCE, and Associate Editor in Chief of TPAMI. His current research focuses on the foundations of deep learning and its applications in computer vision and biomedical data science. He is an AIMBE Fellow, IEEE Fellow, IAPR Fellow, and Sloan Fellow, and has received numerous awards for his work, including the IEEE Edward J. McCluskey Technical Achievement Award, the D’Alembert Faculty Award, the J.K. Aggarwal Prize, the ONR Young Investigator Award, and the NSF CAREER Award, as well as best paper awards in machine learning, computer vision, controls, and medical robotics.