
GRASP on Robotics: “Event-based Neuromorphic Perception and Computation: The Future of Sensing and AI”

November 5 at 10:30 AM - 11:45 AM

*This will be a HYBRID event, with in-person attendance in Wu & Chen Auditorium and virtual attendance via Zoom Webinar.

There has been significant research over the past two decades in developing new systems for spiking neural computation. This talk presents the impact of neuromorphic concepts on recent developments in optical sensing, display, and artificial vision.

State-of-the-art image sensors suffer from severe limitations imposed by their very principle of operation. These sensors acquire visual information as a series of 'snapshots' recorded at discrete points in time, hence time-quantized at a predetermined frame rate, resulting in limited temporal resolution, low dynamic range, and a high degree of redundancy in the acquired data. Nature suggests a different approach: biological vision systems are driven and controlled by events happening within the scene in view, not, like image sensors, by artificially created timing and control signals that bear no relation to the source of the visual information. Translating the frameless paradigm of biological vision to artificial imaging systems implies that control over the acquisition of visual information is no longer imposed externally on an array of pixels; instead, the decision making is transferred to each individual pixel, which handles its own information independently.

It is demonstrated that bio-inspired vision systems have the potential to outperform conventional, frame-based vision acquisition and processing systems in many application fields, establishing new benchmarks in redundancy suppression/data compression, dynamic range, temporal resolution, and power efficiency. These capabilities enable advanced functionality such as 3D vision, object tracking, motor control, and visual feedback loops, and even allow us to rethink our current paradigm of computation. The ultimate goal is to develop brain-inspired, general-purpose computation architectures that can breach the bottleneck introduced by the von Neumann architecture.
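The per-pixel, change-driven acquisition described above can be illustrated with a minimal sketch: each pixel stores the log intensity at its last event and emits an ON or OFF event only when the scene brightness crosses a contrast threshold. This is a hypothetical model for intuition only (the class name, threshold value, and event format are illustrative assumptions, not the design of any specific sensor):

```python
import math

CONTRAST_THRESHOLD = 0.2  # illustrative log-intensity step, not a real sensor spec


class EventPixel:
    """Minimal model of an event-based pixel: it remembers the log
    intensity at its last event and fires only when the scene changes
    enough, instead of being read out at a fixed frame rate."""

    def __init__(self, initial_intensity):
        self.ref_log = math.log(initial_intensity)

    def observe(self, intensity, timestamp):
        """Return a list of (timestamp, polarity) events, where polarity
        is +1 for a brightness increase and -1 for a decrease."""
        events = []
        log_i = math.log(intensity)
        # Emit one event per threshold crossing since the last event.
        while log_i - self.ref_log >= CONTRAST_THRESHOLD:
            self.ref_log += CONTRAST_THRESHOLD
            events.append((timestamp, +1))
        while self.ref_log - log_i >= CONTRAST_THRESHOLD:
            self.ref_log -= CONTRAST_THRESHOLD
            events.append((timestamp, -1))
        return events


# A static scene produces no output; only change is transmitted.
pixel = EventPixel(initial_intensity=100.0)
print(pixel.observe(100.0, timestamp=0.001))  # unchanged scene: no events
print(pixel.observe(150.0, timestamp=0.002))  # brightening: ON events
```

The key property the sketch captures is the redundancy suppression the abstract mentions: a pixel viewing a static scene transmits nothing, while a rapidly changing pixel reports with whatever temporal resolution its timestamps allow, independently of any global frame clock.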

Ryad Benosman

University of Pittsburgh

Ryad Benosman is a Professor at the University of Pittsburgh Medical Center and an adjunct professor at the CMU Robotics Institute. He received the M.Sc. and Ph.D. degrees in applied mathematics and robotics from University Pierre and Marie Curie in 1994 and 1999, respectively. He is widely recognized as a pioneer and visionary in cognitive neuromorphic sensing and processing. His research goal is to understand the algorithms and mathematics that underlie cortical computation, with the aim of creating new mathematical models and replicating them as functional neuromorphic silicon devices. This unique translational approach combines physiological recordings, theoretical neuroscience, electrical engineering, and bioengineering, allowing him to build hardware and algorithms that emulate natural neural systems, with the goal of solving biological and non-biological problems.
Dr Benosman runs a unique research laboratory that dramatically changes the conventional approach to engineering in this field. He records brain activity of primates to derive new mathematical models, which can then be used to replicate the relevant portions of the brain in hardware. This approach enables researchers to develop new brain-like processors, such as artificial cameras and other sensors, and impacts almost all areas of society, from medicine and engineering to AI, communication, and mobility. His work has led to new bio-inspired models for novel AI techniques and sensors that are now widely used in academia and industry.
This work is seen as a new paradigm of applied neuroscience that merges several traditionally separate fields, such as mathematics, neuroscience, engineering, medicine, and hardware design, into a single thread.
Another important aspect of his research is to apply these technologies as neural interfaces. His group has developed several generations of neural implants and optogenetic stimulation devices that are currently being used in clinical trials (NCT01864486, NCT02670980, NCT03333954, NCT04676854, NCT03392324, NCT03326336). More recently, he started applying his work to decoding movement intentions from motor cortical neural recordings.
Dr Benosman has authored more than 200 peer-reviewed papers and 25 patents, which together pioneered the field of neuromorphic processing and cognition.
Dr Benosman translates his research to industry and has co-founded several successful companies, including Pixium Vision (retina prosthetics), Prophesee (the world leader in neuromorphic event-based cameras and computation), GraiMatterLabs (spike-based neuromorphic edge hardware for machine learning) and, most recently, ThinkLink (motor-cortex intention decoding to restore mobility for tetraplegic patients), spanning a large scientific and technological spectrum. He has also led and developed all optogenetic stimulation technologies for sight restoration at Gensight Biologics, aimed at restoring vision for blind patients.

Details

Date:
November 5
Time:
10:30 AM - 11:45 AM
Website:
https://www.grasp.upenn.edu/events/grasp-on-robotics-ryad-benosman/

Organizer

General Robotics, Automation, Sensing and Perception (GRASP) Lab
Email:
grasplab@seas.upenn.edu
Website:
https://www.grasp.upenn.edu/

Venue

Wu and Chen Auditorium (Room 101), Levine Hall
3330 Walnut Street
Philadelphia, PA 19104 United States
Website:
https://www.facilities.upenn.edu/maps/locations/levine-hall-melvin-and-claire-weiss-tech-house