CIS Seminar: “Reliable and Socially Aligned LLMs: Are We There Yet?”
October 7, 2025 at 3:30 PM - 4:30 PM
Large language models (LLMs) are powerful but not yet reliable: they hallucinate, misalign with human values, and struggle with social reasoning. In this talk, I will trace a path from diagnosing failure modes such as hallucinations, to uncovering the pitfalls of aligning models with noisy human preferences and diverse values, and finally to emerging frontiers in socially grounded reasoning. Along the way, I will highlight design principles, empirical findings, and open questions that reveal both how far we’ve come—and how far we still have to go—toward building reliable and socially aligned LLMs.
Sharon Li
University of Wisconsin-Madison
Sharon Li is an Associate Professor in the Department of Computer Sciences at the University of Wisconsin-Madison. Her broad research interests are in deep learning and machine learning. Her research focuses on the algorithmic and theoretical foundations of reliable machine learning, addressing challenges in both model development and deployment in the open, uncertain world. Previously, she was a postdoctoral researcher in the Computer Science department at Stanford University. She completed her Ph.D. at Cornell University, advised by John E. Hopcroft. She is serving as the Program Chair for ICML 2026. She is the recipient of the Alfred P. Sloan Fellowship (2025), NSF CAREER Award (2023), MIT Innovators Under 35 Award (2023), AFOSR Young Investigator Award (2022), Forbes 30 Under 30 in Science (2020), and multiple faculty research awards from Google, Meta, and Amazon. She was named Innovator of the Year 2023 by MIT Technology Review. Her work has won Outstanding Paper Awards at NeurIPS 2022 and ICLR 2022.