
CIS Seminar: “Learning, Reasoning, and Planning with Neuro-Symbolic Concepts”
April 3, 3:30–4:30 PM
I aim to build complete intelligent agents that can continually learn, reason, and plan: answer queries, infer human intentions, and make long-horizon plans spanning hours to days. In this talk, I will describe a general learning and reasoning framework based on neuro-symbolic concepts. Drawing inspiration from theories and studies in cognitive science, neuro-symbolic concepts serve as compositional abstractions of the physical world, representing object properties, relations, and actions. These concepts can be combinatorially reused in flexible and novel ways. Technically, each neuro-symbolic concept is represented as a combination of symbolic programs, which define how concepts can be structurally combined (similar to the way words form sentences in human language), and modular neural networks, which ground concept names in sensory inputs and agent actions. I will show that systems leveraging neuro-symbolic concepts demonstrate superior data efficiency, enable agents to reason and plan more quickly, and achieve strong generalization in novel situations and for novel goals. This will be illustrated with visual reasoning on 2D, 3D, motion, and video data, as well as with diverse decision-making tasks spanning virtual agents and real-world robotic manipulation.

Jiayuan Mao
EECS, MIT
Jiayuan Mao is a Ph.D. student at MIT, advised by Professors Josh Tenenbaum and Leslie Kaelbling. Her research agenda is to build machines that can continually learn concepts (e.g., properties, relations, rules, and skills) from their experiences and apply them for reasoning and planning in the physical world. Her research topics include visual reasoning, robotic manipulation, scene and activity understanding, and language acquisition. She was named a Rising Star in EECS (2024) and in Generative AI (2024). Her research has received Best Paper Awards at CogSci 2024, SoCal NLP 2024, and the CoRL 2024 Workshop on Language and Robot Learning, as well as a Best Paper nomination at ACL 2019.