

ASSET Seminar: “Formal Methods for Language Model Systems”

January 14 at 12:00 PM - 1:15 PM

Formal methods are often dismissed as too rigid, complex, or unscalable for frontier language model systems (e.g., LLMs, VLMs, agentic systems). In this talk, I will challenge this assumption with both theoretical insights and empirical evidence across various domains, including chatbots, autonomous driving, mathematical reasoning, code generation, and agentic AI.
I will present a new set of efficient formal frameworks for LLMs that:
  • Specify and verify safety properties (e.g., secure code generation, catastrophic risk), yielding stronger guarantees than standard evaluation methods such as benchmarks or red teaming.
  • Guide generation with semantic guardrails, ensuring outputs respect formal constraints, substantially improving both reasoning performance and safety.
  • Train models that are more performant and safer, and synthesize agents that provably adhere to formally specified constraints (e.g., privacy, resource consumption).

Together, these advances demonstrate that formal methods provide a principled foundation for improving the utility, safety, and efficiency of frontier language model systems.


Seminar Recording

Gagandeep Singh

Assistant Professor of Computer Science

Gagandeep Singh is an Assistant Professor in the Siebel School of Computing and Data Science at the University of Illinois Urbana-Champaign (UIUC). He co-leads the Science and Technology working group at the Institute of Government and Public Affairs, University of Illinois. His research combines ideas from formal methods, machine learning, and systems research to develop systematic and theoretically principled approaches for constructing intelligent computing systems with formal guarantees about their behavior and safety.

Singh’s group at UIUC has been at the forefront of advancing trustworthy AI, pioneering state-of-the-art methods for training, verifying, and monitoring language model systems (e.g., LLMs, VLMs, agentic AI) with formal guarantees. Their work has been recognized through several awards and fellowships, including the NSF CAREER Award, a Google Research Scholar Award, multiple Amazon Research Awards, the Qualcomm Innovation Fellowship, and an Open Philanthropy research grant.

Details

Organizer

  • AI-enabled Systems: Safe, Explainable, and Trustworthy (ASSET) Center
  • Email asset-info@seas.upenn.edu
  • View Organizer Website

Venue

  • Amy Gutmann Hall, Room 414
  • 3333 Chestnut Street
    Philadelphia, PA 19104 United States