Link: https://eu02web.zoom-x.de/j/66719911736?pwd=N5n96kFJbauLi2u79eJI0ZD15hgNsi.1
4:30pm Central European time is (usually) 7:30am Pacific time and 11:30pm Beijing time
Compute-in-Memory (CiM) has been recognized as a promising enabler of energy-efficient artificial intelligence (AI) acceleration. However, existing CiM implementations based on mature CMOS processes, such as SRAM and eDRAM, face significant limitations in storage density, necessitating frequent off-chip DRAM accesses to retrieve weight data in data-intensive applications. This constraint severely limits task-level energy efficiency. This talk introduces a hybrid SRAM-ROM multiply-accumulate (MAC) CiM macro architecture, demonstrating convolutional neural network (CNN) and Transformer acceleration chips with ultra-high macro-level memory density, using both digital and analog compute methods. The design supports flexible task expansion through transfer learning, YOLOC (an optimized YOLO framework for CiM), Hidden-ROM (a memory-hiding technique for privacy-preserving inference), and FSDB (digital kernel-level reconfigurability and cross-model scalability for deep neural networks).

Dr. Xueqing Li is currently an Associate Professor in the Department of Electronic Engineering, Tsinghua University, Beijing, China. Between 2013 and 2017, he was a postdoc at Penn State University. His research interests include mixed-signal circuits, emerging memory, and memory-oriented computing circuits and architecture. Dr. Li has published 200 papers and received several best paper awards at HPCA, ASP-DAC, DATE, TMSCS, etc. He is also the recipient of a National Early-Career Award, a National Career Award, and other research, teaching, and thesis awards. Dr. Li has served as an Associate/Guest Editor for several IEEE journals (JSSC, TVLSI, TETC, JETCAS, etc.), as a TPC member for several conferences (ISSCC, DAC, ICCAD, DATE, etc.), and as an organizing committee member for several conferences (CCMCC, AICAS, COINS, etc.).