ESE PhD Seminar – “Prehistory of Continual Learning and All Else That We Forget”
November 5, 12:00 PM – 1:00 PM
I will probably “forget” what I say in this abstract, and you will, too. Translation: deep neural networks can “forget”, meaning they may perform poorly on previously learned tasks after learning a new one. A major goal of the subject now known as deep continual learning is to address this issue. Yet, in its efforts to alleviate forgetting, the subject seems to have forgotten that it has a prehistory (1960–1980). In this talk, we will recollect a few historical pieces and compare them with their modern counterparts. If time allows, I shouldn’t forget to share my recent work on continual learning with you, which I would be excited to do.
Liangzu Peng
ESE Ph.D. Candidate
Liangzu Peng is a fourth-year PhD student working with Rene Vidal. He received his master’s degree from ShanghaiTech University and his undergraduate degree from Zhejiang University. He has co-authored over 20 papers on machine learning, computer vision, optimization, and signal processing. His current research focuses on continual learning.