
W7: Compute-In-Memory


09:30 - 13:00

Room 12


Antoine Frappé and Mahmut Ersin Sinangil


Compute-In-Memory (CIM) has recently emerged to complement digital accelerators and further increase energy efficiency for machine learning tasks. By performing computations within the memory itself, CIM reduces memory accesses, improving energy and latency compared to Von Neumann architectures. Given the excitement around these techniques, this workshop covers the latest advances in CIM, highlighting digital and mixed-signal architectures for SRAM-based CIM and exploring the potential of non-volatile memories and emerging devices.
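To make the workload concrete, here is an illustrative sketch (not tied to any specific chip in the program) of the multiply-and-accumulate pattern CIM targets: a matrix-vector product whose weight traffic dominates energy on a Von Neumann machine.

```python
import numpy as np

# Illustrative only: the matrix-vector multiply at the heart of neural-network
# inference. On a Von Neumann machine, every weight weights[i][j] must travel
# from memory to the ALU each time it is used; CIM instead evaluates the
# accumulations where the weights are stored, so only inputs and outputs cross
# the memory boundary.
rng = np.random.default_rng(0)
weights = rng.integers(-8, 8, size=(4, 8))   # stored inside the memory array
x = rng.integers(0, 16, size=8)              # input activations streamed in

# One multiply-and-accumulate (MAC) per weight: 4 * 8 = 32 MACs here.
y = weights @ x
```

The point of the sketch is the ratio: 32 weight reads versus only 8 inputs and 4 outputs crossing the array boundary, a gap that grows with layer size.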


Metis AIPU – a 210 TOPS SoC Powered by Digital in-Memory Computing

Stefan Cosemans (Axelera AI)

Design and Technology Optimization for Digital Compute-In-Memory in Advanced Technology Nodes

Hidehiro Fujiwara (TSMC)

Compute-in-memory (CIM) is being widely explored to minimize the power consumed by data movement and multiply-and-accumulate (MAC) operations in AI edge devices. Compared to analog-based CIMs, digital-based CIMs with small distributed SRAM banks and customized MAC units realize massively parallel computation while maintaining accuracy and achieving better power-performance-area (PPA) scaling in advanced technology nodes. On the other hand, because design-technology co-optimization (DTCO) is one of the key technology enablers in leading-edge nodes, CIM design must take it into account. In this talk, we will discuss the macro architecture, circuit implementation, and transistor-level decisions that designers must consider for CIM design in advanced technology nodes.
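A common digital CIM technique hinted at above replaces full multipliers with adder trees by streaming activations bit-serially. The following is a behavioral sketch of one such column (assumed for illustration, not any specific macro): each cycle, one input bit-plane gates which stored weights enter the adder tree, and shift-and-add recombines the partial sums.

```python
# Behavioral sketch of a digital CIM column (illustrative, not a real macro):
# signed weights sit in the SRAM bank; unsigned activations are streamed one
# bit-plane per cycle. Gating weights by input bits plus shift-and-add
# replaces a full multiplier per weight with a simple adder tree.

def cim_column(weights, activations, act_bits=4):
    """Dot product of signed weights with unsigned act_bits-bit activations."""
    acc = 0
    for b in range(act_bits):                     # one bit-plane per cycle
        bitplane = [(a >> b) & 1 for a in activations]
        partial = sum(w for w, bit in zip(weights, bitplane) if bit)
        acc += partial << b                       # shift-and-add recombination
    return acc

# Usage: the result matches an ordinary dot product.
w = [3, -2, 5, 1]
x = [7, 0, 9, 4]
assert cim_column(w, x) == sum(wi * xi for wi, xi in zip(w, x))
```

The design trade-off is latency for area and accuracy: the column takes one cycle per activation bit, but the arithmetic is exact, which is why digital CIM keeps full inference accuracy where analog summation introduces noise.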

Memristive and Memcapacitive Neuromorphic Compute-in-Memory Arrays for Distributed Parallel AI at Extreme Scale and Efficiency

Gert Cauwenberghs (UC San Diego)

We present neuromorphic, artificially intelligent cognitive computing systems-on-chip implemented in custom silicon compute-in-memory neural and memristive synaptic crossbar array architectures. These architectures combine the efficiency of local interconnects with flexibility and sparsity in global interconnects, and realize a wide class of deeply layered and recurrent neural network topologies with embedded local plasticity for on-line learning, at a fraction of the computational and energy cost of implementation on CPU and GPGPU platforms. Co-optimization across the abstraction layers of hardware and algorithms leverages the inherent stochasticity in the physics of synaptic memory devices and neural interface circuits, together with plasticity in a reconfigurable massively parallel architecture, toward high system-level accuracy, resilience, and efficiency. Adiabatic energy recycling in charge-mode crossbar arrays permits extreme scaling in energy efficiency, approaching that of synaptic transmission in the mammalian brain.
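The crossbar principle behind such synaptic arrays can be sketched in a few lines (an idealized model with illustrative sizes, not the presented chips): weights are stored as conductances, input voltages drive the rows, and Ohm's and Kirchhoff's laws deliver the matrix-vector product as summed column currents; signed weights use a differential pair of conductances.

```python
import numpy as np

# Idealized memristive crossbar model (sizes and values are illustrative).
# Weights live as conductances G >= 0; applying row voltages v yields column
# currents i = v @ G in one analog step. A differential positive/negative
# conductance pair per weight encodes sign.

rng = np.random.default_rng(1)
W = rng.uniform(-1, 1, size=(3, 5))      # target signed weight matrix
g_pos = np.maximum(W, 0)                 # conductances for positive weights
g_neg = np.maximum(-W, 0)                # conductances for negative weights
v = rng.uniform(0.0, 0.2, size=3)        # read voltages on the word lines

i_out = v @ g_pos - v @ g_neg            # differential bitline currents = v @ W
```

Real devices add conductance nonidealities and noise; the abstract's point is that hardware-algorithm co-optimization absorbs that stochasticity rather than fighting it.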

Analog In-Memory Computing: Technology and Design Techniques for a PCM-Based Solution

Marco Pasotti (STMicroelectronics)


Stefan Cosemans

Stefan Cosemans co-founded Axelera AI in 2021, where he currently leads the development of in-memory computing technology. He received his Ph.D. degree in 2009 from KU Leuven for his work on low-power SRAM. From 2010 to 2014, he worked at imec on circuit design for RRAM, STT-MRAM, and other novel memory devices. From 2015 to 2017, he led the development of single-rail near-threshold SRAM compilers at sureCore Ltd. From 2017 to 2021, he worked at imec, focusing on circuits and memory devices for machine learning accelerators.


Hidehiro Fujiwara

Hidehiro Fujiwara received the B.E., M.E., and Ph.D. degrees from Kobe University, Kobe, Japan, in 2005, 2006, and 2009, respectively. From 2009 to 2013, he worked at Renesas Electronics. He is currently a TSMC Academician and a Manager with Taiwan Semiconductor Manufacturing Company, Hsinchu, Taiwan, where he is engaged in SRAM test-vehicle development for advanced technologies and application-specific memory design, including SRAM-based digital compute-in-memory. He has published 50 technical papers in IEEE conferences and journals. Dr. Fujiwara serves as a TPC member for IEDM 2023 and 2024.

Gert Cauwenberghs

Gert Cauwenberghs is Professor of Bioengineering and Co-Director of the Institute for Neural Computation at UC San Diego. He received his Ph.D. in Electrical Engineering from Caltech and was previously Professor of Electrical and Computer Engineering at Johns Hopkins University and Visiting Professor of Brain and Cognitive Science at MIT. His research focuses on neuromorphic engineering, adaptive intelligent systems, neuron-silicon and brain-machine interfaces, and micropower biomedical instrumentation. He is a Fellow of the Institute of Electrical and Electronics Engineers (IEEE) and the American Institute for Medical and Biological Engineering (AIMBE), and a Francqui Fellow of the Belgian American Educational Foundation. He has served IEEE in a variety of roles, including as Distinguished Lecturer of the IEEE Circuits and Systems Society, as VP of Technical Activities on the Executive Committee of the IEEE Engineering in Medicine and Biology Society, on the Steering Committee of IEEE Brain, and as Editor-in-Chief of the IEEE Transactions on Biomedical Circuits and Systems.

Marco Pasotti

Marco Pasotti received the degree in electronic engineering from Università degli Studi di Pavia, Italy, in 1991. In 1994, he joined STMicroelectronics, Agrate Brianza (MI), Italy, working on the design of digital and mixed analog/digital ICs for image acquisition and processing. In 1996, he moved to the design of flash memories for analog applications, multilevel storage, and multimedia products. Since 2005, he has led the team designing cost-effective eNVM solutions and, more recently, embedded Flash and PCM memories for Smart Power applications. He is currently a senior member of the STMicroelectronics technical staff, and his present interests include high-performance embeddable NVM memories, analog in-memory computing, new technologies for non-volatile memories, and related application aspects.
