IDEAS on Generative AI Symposium
Join us at the University of Pennsylvania for the IDEAS on Generative AI Symposium, a forward-looking event exploring the next wave of generative and multimodal artificial intelligence. As generative models rapidly […]
ASSET Seminar: “Distortion of AI Alignment from Human Feedback”
After pre-training, large language models are aligned with human preferences based on pairwise comparisons. State-of-the-art alignment methods (such as PPO-based RLHF and DPO) are built on the assumption of aligning […]
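The pairwise-comparison setup behind methods like DPO can be illustrated with a minimal sketch. This is an assumption-laden toy, not the speaker's method: it computes a DPO-style loss for a single preference pair from hypothetical sequence log-probabilities, using the Bradley-Terry form -log sigmoid(beta * margin).

```python
import math

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """DPO-style loss for one pairwise comparison.

    The margin compares how much the policy prefers the chosen response
    over the rejected one, relative to a frozen reference model."""
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    # -log sigmoid(margin), written to avoid overflow for large negative margins
    return math.log1p(math.exp(-margin)) if margin >= 0 else -margin + math.log1p(math.exp(margin))

# Hypothetical log-probabilities: the policy favors the chosen response
# more than the reference does, so the loss falls below -log(0.5) ~ 0.693.
loss = dpo_loss(-10.0, -12.0, -11.0, -11.0)
```

When the policy and reference agree exactly, the margin is zero and the loss is log 2, the uninformative baseline.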
FOLDS seminar: Surrogate-Model Approaches to Optimizers for LLM Training
Zoom link: https://upenn.zoom.us/j/98220304722 The recent empirical success of the Muon optimizer in training large language models has outpaced the theoretical understanding of its matrix-gradient orthogonalization design. To bridge this gap, […]
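The orthogonalization idea mentioned above can be sketched in a few lines. This illustrative version computes the polar factor of a matrix gradient exactly via SVD; Muon itself approximates this with a Newton-Schulz iteration, so this is a conceptual stand-in, not the optimizer's actual implementation.

```python
import numpy as np

def orthogonalize(grad):
    """Map a matrix gradient to its nearest semi-orthogonal matrix,
    i.e. the polar factor U @ Vt of its SVD. This is the quantity
    Muon's Newton-Schulz iteration approximates."""
    u, _, vt = np.linalg.svd(grad, full_matrices=False)
    return u @ vt

rng = np.random.default_rng(0)
g = rng.standard_normal((4, 3))   # a toy 4x3 "gradient" matrix
o = orthogonalize(g)
# The result has orthonormal columns: o.T @ o is the 3x3 identity.
```

Intuitively, the update keeps the gradient's directional information (its singular vectors) while discarding the singular values, so all directions are stepped with equal magnitude.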
FOLDS seminar: Global Convergence of Gradient EM for Over-Parameterized Gaussian Mixtures
Zoom link: https://upenn.zoom.us/j/98220304722 Learning Gaussian Mixture Models (GMMs) is a fundamental problem in machine learning, and the Expectation-Maximization (EM) algorithm and its variant gradient-EM are the most widely used algorithms […]
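A minimal sketch of the gradient-EM idea for a toy case, with all modeling choices assumed for illustration: a 1-D mixture of two unit-variance Gaussians with known equal weights, where only the means are learned by gradient steps on the expected log-likelihood rather than by a full M-step.

```python
import numpy as np

def gradient_em_step(x, mu, lr=0.5):
    """One gradient-EM step for a two-component 1-D GMM with unit
    variances and equal weights (only the means are updated)."""
    # E-step: responsibility of component 0 for each data point
    d0 = np.exp(-0.5 * (x - mu[0]) ** 2)
    d1 = np.exp(-0.5 * (x - mu[1]) ** 2)
    r0 = d0 / (d0 + d1)
    r1 = 1.0 - r0
    # Gradient of the (per-sample) expected log-likelihood w.r.t. the means,
    # instead of the closed-form M-step update
    grad = np.array([np.mean(r0 * (x - mu[0])),
                     np.mean(r1 * (x - mu[1]))])
    return mu + lr * grad

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2, 1, 500), rng.normal(2, 1, 500)])
mu = np.array([-0.5, 0.5])        # initialization between the true means
for _ in range(100):
    mu = gradient_em_step(x, mu)
# With well-separated clusters, the means move toward roughly -2 and +2.
```

The full M-step would jump straight to the responsibility-weighted sample means; the gradient variant takes damped steps toward the same fixed point.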
FOLDS seminar: Differentially Private Space-Efficient Algorithms for Counting Distinct Elements in the Turnstile Model
ATTENTION: NEW DATE AND LOCATION
Monday, March 23, 2026 (Noon – 1 pm)
Glandt Forum, Singh Center, 3205 Walnut St, Philadelphia, PA 19104
Zoom link: https://upenn.zoom.us/j/98220304722
The turnstile continual release […]

FOLDS seminar: Coherence Mechanisms for Provable Self-Improvement
Zoom link: https://upenn.zoom.us/j/98220304722 Large language models are increasingly trained to improve themselves, yet the mechanisms driving this improvement, such as self-reflection or RLAIF, rely almost entirely on empirical heuristics. Is it possible […]
FOLDS seminar & PENN AI seminar: Optimization Challenges in Physics-Informed Neural Networks
Zoom link: https://upenn.zoom.us/j/98220304722 Physics-informed neural networks (PINNs) minimize composite losses that penalize PDE residuals alongside boundary and initial conditions. While this resembles multi-task learning, the optimization landscape is fundamentally different. […]
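The composite-loss structure described above can be made concrete with a deliberately tiny example. Every specific here is an assumption for illustration: the "PDE" is the ODE u'(x) = u(x) with u(0) = 1, the "network" is a quadratic ansatz with coefficients c, and the loss sums a residual term over collocation points with a weighted boundary term.

```python
import numpy as np

def pinn_style_loss(c, xs, w_bc=1.0):
    """Composite PINN-style loss for the ODE u'(x) = u(x), u(0) = 1,
    with the quadratic ansatz u(x) = c0 + c1*x + c2*x**2."""
    u = c[0] + c[1] * xs + c[2] * xs ** 2
    du = c[1] + 2 * c[2] * xs
    residual_loss = np.mean((du - u) ** 2)   # ODE residual at collocation points
    bc_loss = (c[0] - 1.0) ** 2              # boundary-condition penalty
    return residual_loss + w_bc * bc_loss

xs = np.linspace(0.0, 1.0, 20)               # collocation points
# Truncated Taylor series of exp(x): small residual, exact boundary value
loss_good = pinn_style_loss(np.array([1.0, 1.0, 0.5]), xs)
# The zero function: zero residual but a boundary loss of 1.0
loss_bad = pinn_style_loss(np.array([0.0, 0.0, 0.0]), xs)
```

The zero-function case shows why the landscape differs from ordinary multi-task learning: the two terms can trade off against each other, and the residual term alone admits degenerate minimizers that the boundary term must rule out.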
FOLDS seminar: Multi-step Reasoning via Curriculum Learning
Zoom link: https://upenn.zoom.us/j/98220304722 Can multi-step reasoning be learned from data? We investigate this question in the context of a simple function composition task. We prove that this task is […]