CIS Seminar: “Thinking Outside the GPU: Systems for Scalable Machine Learning Pipelines”

Wu and Chen Auditorium (Room 101), Levine Hall, 3330 Walnut Street, Philadelphia, PA, United States

Scalable and efficient machine learning (ML) systems have been instrumental in fueling recent advancements in ML capabilities. However, further scaling these systems requires more than simply increasing the number and performance of accelerators. This is because modern ML deployments rely on complex pipelines composed of many diverse and interconnected systems. In this talk, I will […]

ASSET Seminar: “Steering Machine Learning Ecosystems of Interacting Agents”

Amy Gutmann Hall, Room 414, 3333 Chestnut Street, Philadelphia, PA, United States

Abstract: Modern machine learning models—such as LLMs and recommender systems—interact with humans, companies, and other models in a broader ecosystem. However, these multi-agent interactions often induce unintended ecosystem-level outcomes such as clickbait in classical content recommendation ecosystems, and more recently, safety violations and market concentration in nascent LLM ecosystems. In this talk, I discuss my […]

IDEAS/STAT Optimization Seminar

Amy Gutmann Hall, Room 414, 3333 Chestnut Street, Philadelphia, PA, United States

Zoom link: https://upenn.zoom.us/j/98843354016

CIS Seminar: “Leveraging the Wisdom of Clouds for Internet Security”

Wu and Chen Auditorium (Room 101), Levine Hall, 3330 Walnut Street, Philadelphia, PA, United States

Over the past decade, networked systems have consolidated under just a handful of hyperscale cloud providers (e.g., AWS, Azure). While this offers logistical and economic advantages, attackers specifically target providers and their customers, a shift that has left traditional network vantage points blind to the most sophisticated adversaries. In this talk, I’ll explore how we […]

ASSET Seminar: “Beyond Scaling: Frontiers of Retrieval-Augmented Language Models”

Amy Gutmann Hall, Room 414, 3333 Chestnut Street, Philadelphia, PA, United States

Abstract: Large Language Models (LLMs) have demonstrated remarkable capabilities by scaling up training data and model sizes. However, they continue to face critical challenges, including hallucinations and outdated knowledge, which particularly limit their reliability in expert domains such as scientific research and software development. In this talk, I will argue for the necessity of moving beyond […]

IDEAS/STAT Optimization Seminar: “Foundations of Deep Learning: Optimization and Representation Learning”

Amy Gutmann Hall, Room 414, 3333 Chestnut Street, Philadelphia, PA, United States

Deep learning's success stems from the ability of neural networks to automatically discover meaningful representations from raw data. In this talk, I will describe some recent insights into how optimization enables this learning process. First, I will show how optimization algorithms exhibit surprisingly rich dynamics when training neural networks, and how these complex dynamics are […]

CIS Seminar: “Bridging Informal and Formal AI Reasoning”

Wu and Chen Auditorium (Room 101), Levine Hall, 3330 Walnut Street, Philadelphia, PA, United States

Neural language models have opened a fascinating, flexible platform for reasoning in mathematics, programming, and beyond. This talk will explore the intersection of these models and the rigor of formal reasoning. First, I discuss my work on building foundation models for mathematics and using language to guide the search for formally verified proofs. Then, I […]

CIS Seminar: “Efficient Probabilistically Checkable Proofs from High-Dimensional Expanders”

Amy Gutmann Hall, Room 414, 3333 Chestnut Street, Philadelphia, PA, United States

The PCP theorem, proved in the 1990s, shows how to encode a proof of any theorem into a format in which the theorem's correctness can be verified by making only a constant number of queries to the proof. This result is a significant milestone in computer science and has important implications for approximation algorithms, cryptography, and […]

ASSET Seminar: “Demystifying the Inner Workings of Language Models”

Amy Gutmann Hall, Room 414, 3333 Chestnut Street, Philadelphia, PA, United States

Abstract: Large language models (LLMs) power a rapidly growing and increasingly impactful suite of AI technologies. However, due to their scale and complexity, we lack a fundamental scientific understanding of much of LLMs’ behavior, even when they are open source. The “black-box” nature of LLMs not only complicates model debugging and evaluation, but also limits trust […]