New DRAM for next-generation AI GPUs

NVIDIA has commissioned Samsung Electronics, SK hynix, and Micron to develop SoCEM, a new memory module standard of NVIDIA's own design. Micron is reportedly the first to receive a license for mass production, putting the American memory maker ahead of its South Korean rivals.

According to South Korean media reports, a SoCEM module consists of stacks of 16 LPDDR5X chips, arranged in groups of four per module.

SoCEM is a new DRAM module that attaches to the CPU, and its primary task is to feed AI accelerators at peak performance. SoCEM is expected to debut in NVIDIA's next-generation Rubin AI GPUs, due to be released in 2026.

Unlike HBM (DRAM attached to the GPU), which connects its stacked dies with through-silicon vias drilled vertically through the chips, SoCEM is manufactured with a wire-bonding method that links the 16 chips in a stack with copper wires. Copper's high thermal conductivity is the main advantage here: it minimizes the heat buildup of the individual DRAM chips. According to Micron, the energy efficiency of its latest low-power DRAM is 20% higher than that of its competitors.

NVIDIA's next-generation AI servers (pairing Rubin AI GPUs with Vera CPUs) will each use four SoCEM memory modules, which translates to a total of 256 LPDDR5X chips. According to industry sources cited in the report, Micron was able to ship SoCEM modules earlier than SK hynix or Samsung because it adopted EUV lithography equipment later than its rivals.
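The per-server and per-module figures are consistent if each SoCEM module carries four stacks of 16 LPDDR5X dies; that reading is an assumption inferred from the report's description, not an official NVIDIA specification. A quick sanity check of the arithmetic:

```python
# Sanity check of the chip counts reported for a Rubin-class AI server.
# The per-module layout (four 16-die stacks) is an assumption based on the
# report's wording; none of these figures are confirmed NVIDIA specs.
modules_per_server = 4   # SoCEM modules per server, per the report
stacks_per_module = 4    # chips "arranged in groups of four"
dies_per_stack = 16      # 16 stacked LPDDR5X chips per stack

total_chips = modules_per_server * stacks_per_module * dies_per_stack
print(total_chips)  # 256, matching the total the report cites
```

Under this reading, each module alone carries 64 LPDDR5X dies, and four modules reach the 256-chip total mentioned in the report.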

In other words, while its DRAM-manufacturing competitors have leaned on EUV to raise DRAM performance, US-based Micron improved memory performance with a lower-heat process.

SoCEM is not the only example of Micron's advanced technological capabilities. Micron reportedly supplied most of the initial low-power DRAM for this year's Galaxy S25 series, ahead of Samsung Semiconductor. This made Micron the "main memory supplier" for Samsung smartphones for the first time. Micron also developed the world's first LPDDR5X in 2022, which was installed in the iPhone 15 series. 
