Samsung has announced a new type of HBM. Named HBM-PIM, it is the world's first High Bandwidth Memory with integrated artificial intelligence processing power. Featuring a processing-in-memory design, HBM-PIM is aimed at data centres, HPC systems, and AI-enabled mobile applications.
Although HBM-PIM has been designed for such tasks, Samsung will work with AI specialists to develop “even more advanced PIM-powered applications”. Rick Stevens, Argonne's associate laboratory director for computing, environment and life sciences, said he is glad that Samsung is tackling the “memory bandwidth/power challenges for HPC and AI computing” with HBM-PIM, which delivers noticeable performance improvements across various classes of AI applications.
Most of today's computing systems are based on the von Neumann architecture, in which the processor and the memory are separate units. In any application that requires constant movement of data between the two, performance bottlenecks become unavoidable as the volume of data grows.
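To make that bottleneck concrete, here is a minimal C sketch (our illustration, not from Samsung) of a streaming operation whose arithmetic intensity is so low that DRAM bandwidth, rather than compute throughput, dictates the runtime:

```c
#include <stdio.h>
#include <stdlib.h>

#define N (1 << 24) /* 16M elements, far larger than any CPU cache */

int main(void)
{
    float *a = malloc(N * sizeof *a);
    float *b = malloc(N * sizeof *b);
    float *c = malloc(N * sizeof *c);
    if (!a || !b || !c)
        return 1;

    for (size_t i = 0; i < N; i++) {
        a[i] = 1.0f;
        b[i] = 2.0f;
    }

    /* One multiply-add per element, but 12 bytes must cross the
     * memory bus for it (two 4-byte loads, one 4-byte store).
     * At 2 FLOPs per 12 bytes, the processor spends most of its
     * time waiting on DRAM -- the von Neumann bottleneck. */
    for (size_t i = 0; i < N; i++)
        c[i] = 2.0f * a[i] + b[i];

    double flops = 2.0 * N;  /* multiply + add per element */
    double bytes = 12.0 * N; /* traffic between CPU and DRAM */
    printf("c[0] = %.1f, arithmetic intensity = %.3f FLOP/byte\n",
           c[0], flops / bytes);

    free(a); free(b); free(c);
    return 0;
}
```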
With HBM-PIM, Samsung adds processing power to the memory itself by placing a DRAM-optimized AI engine inside each memory bank, enabling parallel processing and minimising data movement. By adding this engine to Samsung's HBM2 Aquabolt solution, HBM-PIM delivers more than twice the system performance while reducing power consumption by over 70%. Moreover, HBM-PIM is easy and fast to integrate because it doesn't require any hardware or software changes to existing systems.
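As a rough conceptual model of why per-bank engines help (our sketch, not Samsung's interface; the bank count, layout, and "engine" here are purely illustrative), the C program below contrasts the two data paths. A conventional reduction streams every element across the memory interface to the host, while a PIM-style reduction lets each bank compute a partial result locally, so only one value per bank crosses the bus:

```c
#include <stdio.h>

#define BANKS    16
#define PER_BANK 4096

/* Illustrative model of a memory stack: each bank holds local data
 * and, in the PIM case, a tiny compute engine of its own. */
static float bank_data[BANKS][PER_BANK];

/* Conventional path: the host reads every element over the bus. */
static float host_sum(long *bytes_moved)
{
    float total = 0.0f;
    for (int b = 0; b < BANKS; b++)
        for (int i = 0; i < PER_BANK; i++)
            total += bank_data[b][i];
    *bytes_moved = (long)BANKS * PER_BANK * sizeof(float);
    return total;
}

/* PIM-style path: each bank's engine reduces its own data in place;
 * only one partial sum per bank crosses the memory interface. */
static float pim_sum(long *bytes_moved)
{
    float total = 0.0f;
    for (int b = 0; b < BANKS; b++) {
        float partial = 0.0f; /* computed "inside" the bank */
        for (int i = 0; i < PER_BANK; i++)
            partial += bank_data[b][i];
        total += partial; /* host reads one float per bank */
    }
    *bytes_moved = (long)BANKS * sizeof(float);
    return total;
}

int main(void)
{
    for (int b = 0; b < BANKS; b++)
        for (int i = 0; i < PER_BANK; i++)
            bank_data[b][i] = 1.0f;

    long conv_bytes = 0, pim_bytes = 0;
    float h = host_sum(&conv_bytes);
    float p = pim_sum(&pim_bytes);
    printf("host sum = %.0f, bytes over bus = %ld\n", h, conv_bytes);
    printf("pim  sum = %.0f, bytes over bus = %ld\n", p, pim_bytes);
    return 0;
}
```

Both paths produce the same result, but the PIM-style path moves a tiny fraction of the bytes across the interface, which is where the bandwidth and power savings in such a design would come from.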
The HBM-PIM paper will be presented at the International Solid-State Circuits Conference (ISSCC), held virtually this year, on February 22nd. Validation of the technology is currently underway and should be complete within the first half of 2021.
KitGuru says: How will manufacturers make use of Samsung's HBM-PIM? Do you think we will ever see this technology on graphics cards?