TILOS Seminar: Neuromorphic LLMs

Speaker: Jason Eshraghian, UC Santa Cruz
Abstract: This talk will show what neuromorphic computing can do when an academic lab accidentally lands $2 million worth of GPU-hours. We will showcase a series of frontier reasoning LLMs developed in an academic lab, from data curation and pre-training to post-training and alignment. These models surpass leading ~10-billion-parameter LLMs from Meta, Google, and other heavily resourced labs, despite being 5x smaller.
We have deployed several models on neuromorphic hardware at just 2 watts, bringing state-of-the-art reasoning from the datacenter to the edge. Along the way, we dispel a series of widely-held assumptions about large-scale neuromorphic computation, revealing how it fundamentally differs from conventional deep learning, and why that difference matters.
Bio: Jason Eshraghian is an Assistant Professor in the Department of Electrical and Computer Engineering at the University of California, Santa Cruz. He received the Bachelor of Engineering and Bachelor of Laws degrees from The University of Western Australia, where he also obtained his Ph.D. degree. He currently serves as secretary-elect of the IEEE Neural Systems and Applications Technical Committee and is a consultant to several medical-tech startups. Dr. Eshraghian was awarded the 2019 IEEE VLSI Systems Best Paper Award, the 2019 IEEE AICAS Best Paper Award, and the Best Live Demonstration Award at the 2020 IEEE International Conference on Electronics, Circuits, and Systems. He is a recipient of the Fulbright, Endeavour, and Forrest Research Fellowships. His current research focuses on brain-inspired circuit design to accelerate AI algorithms and spiking neural networks.