The last time you and I thought about memory chips might have been back in secondary or high school, during those lessons on floppy disks and how to shut down Windows XP “the right way.” Since then, we haven’t given them a second thought. They work quietly inside our laptops and phones, storing and moving data as we scroll, type, and stream.
In the last two years, however, memory chips have gone from overlooked parts to some of the most in-demand components in the world. The reason is the rapid rise of artificial intelligence. Since ChatGPT gained popularity, AI systems have been powering translation, design, and even drug development, all of which require faster, more powerful computing. That hunger for speed and capacity has put memory chips right at the heart of the action.
The Shift from DRAM to HBM Is Changing Computing
Computers handle data much like a kitchen handles coffee: the processor is the barista and the memory chip is the cup. The bigger the cup, and the faster it can be filled and emptied, the more coffee (data) the barista can serve.
For decades, DRAM (Dynamic Random-Access Memory) has been that cup — reliable, fast enough for daily tasks, and widely available. But AI is different. Large language models, like those powering today’s chatbots, don’t just need more data; they need it instantly, and in massive volumes.
That’s where HBM (High-Bandwidth Memory) comes in. Engineers designed HBM to transfer data much faster than conventional DRAM. It stacks memory dies vertically and connects them to the processor over a very wide, short data path, so GPUs (graphics processing units) can pull huge amounts of data with minimal delay. HBM was originally developed for high-end graphics cards, but AI hardware now relies on it as a critical component.
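To get a feel for the gap, here is a rough back-of-envelope sketch in Python comparing a single DDR5 memory channel with a single HBM3 stack. The bus widths and transfer rates below are commonly quoted figures, but exact numbers vary by product and generation, so treat them as illustrative assumptions rather than spec-sheet facts.

```python
# Back-of-envelope comparison of peak memory bandwidth.
# Figures below are illustrative assumptions, not exact spec-sheet values.

def peak_bandwidth_gb_s(bus_width_bits: float, transfer_rate_gt_s: float) -> float:
    """Peak bandwidth in GB/s = (bus width in bytes) * (billions of transfers per second)."""
    return (bus_width_bits / 8) * transfer_rate_gt_s

# One DDR5-4800 channel: 64-bit bus at 4.8 GT/s.
ddr5 = peak_bandwidth_gb_s(64, 4.8)       # ~38 GB/s

# One HBM3 stack: 1024-bit bus at 6.4 GT/s per pin.
hbm3 = peak_bandwidth_gb_s(1024, 6.4)     # ~819 GB/s

print(f"DDR5 channel: {ddr5:.0f} GB/s")
print(f"HBM3 stack:   {hbm3:.0f} GB/s  (~{hbm3 / ddr5:.0f}x the DDR5 channel)")
```

The wide bus does most of the work: each HBM stack moves data over sixteen times as many wires as a standard DRAM channel, which is only practical because the dies sit stacked right next to the processor.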
The ChatGPT Effect
When ChatGPT launched in late 2022, it didn’t just reveal what AI could do. It also exposed the hardware limits slowing it down. Training and running AI models require billions of calculations every second and a constant flow of enormous amounts of data in and out of memory.
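A simplified sketch shows why memory, not raw compute, so often sets the ceiling. Assume a hypothetical 70-billion-parameter model stored in 16-bit precision and served one request at a time on a single accelerator with roughly 3 TB/s of HBM bandwidth; real systems use batching, quantisation, and many GPUs, so these numbers are illustrative only.

```python
# Why memory bandwidth often caps how fast an AI model can respond.
# All numbers are illustrative assumptions for a rough estimate.

params = 70e9                 # hypothetical 70B-parameter model
bytes_per_param = 2           # 16-bit weights
weight_bytes = params * bytes_per_param   # ~140 GB of weights

# Generating one token (for a single request) requires streaming
# essentially all of the weights from memory to the processor once.
hbm_bandwidth = 3.0e12        # assume ~3 TB/s of HBM bandwidth

tokens_per_second = hbm_bandwidth / weight_bytes
print(f"Weights in memory:          {weight_bytes / 1e9:.0f} GB")
print(f"Bandwidth-limited ceiling:  ~{tokens_per_second:.0f} tokens/second")
```

Under those assumptions the chip can produce only around twenty tokens per second, no matter how fast its arithmetic units are, which is exactly why faster memory became the bottleneck everyone suddenly cared about.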
Almost overnight, demand for HBM chips skyrocketed. Industry analysts reported a more than 60% increase in HBM shipments during 2023, with prices rising alongside. For suppliers ready to meet this surge, it became a major opportunity.
SK Hynix’s Rise and Samsung’s Struggle
SK Hynix, a South Korean company that once faced serious challenges, became one of the biggest winners in the AI chip boom. Years before AI grabbed headlines, SK Hynix invested heavily in developing HBM technology, collaborating closely with chip designers like AMD. When AI demand surged, they were prepared.
By 2024, SK Hynix had become Nvidia’s primary supplier of HBM chips, powering some of the world’s most popular AI accelerators. Analysts estimate that SK Hynix’s chips account for about 20% of Nvidia’s hardware costs, fueling Nvidia’s record profits and market dominance.
In contrast, Samsung, a leader in DRAM production and a pillar of South Korea’s economy, lagged in HBM investment. As a result, Samsung secured less than 5% of Nvidia’s AI-focused HBM chip orders in 2024. This gap hit Samsung hard: its semiconductor division reported a steep decline in profits, and company leaders publicly admitted the mistake.
More Than Corporate Competition
This is about more than two companies vying for market share. Memory chips are becoming the new strategic resource — as essential to AI as oil was to industrialisation. Countries and corporations are investing billions to secure supply chains, not just for profit, but for technological independence.
HBM production is complex, with only a handful of companies capable of manufacturing at scale. That scarcity, combined with skyrocketing demand, is driving both innovation and geopolitical tension.
The Next Chapter
Samsung is racing to catch up, investing heavily in HBM4, the next generation of high-bandwidth memory. It has also secured a $16.5 billion deal with Tesla for AI chips, a sign of its determination to reclaim market share. Meanwhile, SK Hynix is expanding production and locking in long-term supply contracts with major AI companies.
The race isn’t over yet. What matters most is this: the future of AI depends not just on algorithms or software advances, but on whether hardware, especially memory, can keep up.
Closing Thought
You may never see the tiny chips inside your laptop that move your data, but they’re now at the centre of a global race shaping the next decade of technology. Moving from DRAM to HBM isn’t just a technical upgrade; it’s a change in how we build, train, and run tomorrow’s intelligence. As AI’s appetite grows, the fastest “coffee cups” will decide who leads the future.