GTC 2025: NVIDIA's Blackwell-Based Servers and DGX Station

The GTC (GPU Technology Conference), held annually since 2009, will be hosted by NVIDIA this year from March 17 to 21. The conference is designed to showcase the latest developments and to promote collaboration and further innovation across different industries. It is attended mainly by developers, researchers, and technology leaders. NVIDIA CEO Jensen Huang has been saying for some time that companies will become token factories in the future—meaning that every workflow will be supported by artificial intelligence. Currently, large servers play a major role in this process, but AI integration will increasingly extend to personal computers. In the future, computers and laptops will have hardware capable of running even large language models in the background. This is necessary because programmers, engineers, and almost everyone will work with AI assistance.

The New Blackwell GPU

However, let’s not get ahead of ourselves. According to Jensen Huang, achieving these goals requires first dramatically increasing the computing power of individual systems (scale-up) and then expanding capacity by combining those scaled-up components (scale-out). He explained that this concept was first tried in the Grace Hopper architecture and the Ranger server model presented three years ago. Although that server’s physical size was impractical, it validated the ideas. The size problem was partly due to air cooling, which has been replaced by liquid cooling in the new Blackwell GPU, allowing it to fit into standard server racks. Another problem was that the NVLink-connected CPU and GPU were housed in the same module. NVLink is NVIDIA’s high-speed interconnect between the CPU and GPU; traditional systems use a PCI Express interface for this, but NVLink offers higher bandwidth and lower latency. By disaggregating NVLink, the CPU and GPU can be placed in separate modules, so each component can be replaced independently in the server.

Despite these improvements, a third problem remains: the optical transceivers used to connect the GPUs over fiber. These modules are extremely expensive (six are needed for each GPU, adding roughly $6,000 to the GPU’s price) and they add an extra 180 watts of power consumption per GPU. To solve this, Jensen Huang presented a silicon photonics solution that lets GPUs communicate using photons directly, without separate pluggable transceivers. Incidentally, Google is already using optical switching technology in its data centers, reportedly achieving a 40% reduction in power consumption.
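The per-GPU figures above scale up quickly across a deployment. A minimal sketch of the arithmetic, assuming $1,000 and 30 W per transceiver (both derived from the quoted $6,000 and 180 W per GPU, not stated separately in the talk):

```python
# Back-of-the-envelope overhead of pluggable optical transceivers.
# Assumptions: 6 transceivers per GPU; $1,000 and 30 W each
# (implied by the $6,000 and 180 W per-GPU figures in the article).
TRANSCEIVERS_PER_GPU = 6
COST_PER_TRANSCEIVER = 1000   # USD (assumed split of $6,000/GPU)
WATTS_PER_TRANSCEIVER = 30    # watts (assumed split of 180 W/GPU)

def transceiver_overhead(num_gpus: int) -> tuple[int, int]:
    """Return (total cost in USD, total power in watts)."""
    cost = num_gpus * TRANSCEIVERS_PER_GPU * COST_PER_TRANSCEIVER
    power = num_gpus * TRANSCEIVERS_PER_GPU * WATTS_PER_TRANSCEIVER
    return cost, power

# Example: a hypothetical 72-GPU rack
cost, power = transceiver_overhead(72)
print(cost, power)  # 432000 USD, 12960 W
```

For a hypothetical 72-GPU rack, that is $432,000 and nearly 13 kW spent on transceivers alone, which is why moving the optics into silicon photonics is attractive.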

Server performance roadmap

Thanks to the reduction in size, a single server rack now reaches 1 exaflop (1,000 petaflops) of compute. The memory bandwidth is an impressive 570 TB/s. For comparison, an NVIDIA RTX 4070 (admittedly not a server part) offers 504 GB/s, roughly a thousand times less. For a more practical comparison, Jensen Huang states that an AI company consuming 1 megawatt with today’s setup of 1,400 H100-based server racks can process 300 million tokens per second when running a large language model. With the new solution, at the same 1 MW, 600 racks of Blackwell compute units replacing the old 1,400 would deliver 12 billion tokens per second. The pace of improvement is hard to keep up with, and successors have already been announced: Blackwell Ultra arrives at the end of this year, followed by the Rubin and Rubin Ultra GPUs next year and in 2027. Rubin Ultra will deliver 15 exaflops per rack instead of the current 1 exaflop.
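The keynote’s throughput claims can be sanity-checked with simple arithmetic. A sketch, using only the figures quoted above (the per-rack breakdown is a derivation, not a number from the talk):

```python
# Throughput figures as quoted: 1,400 H100 racks at 1 MW process
# 300M tokens/s; 600 Blackwell racks at the same 1 MW process 12B tokens/s.
H100_RACKS, H100_TOKENS_PER_S = 1400, 300e6
BW_RACKS, BW_TOKENS_PER_S = 600, 12e9

total_speedup = BW_TOKENS_PER_S / H100_TOKENS_PER_S            # 40x overall
per_rack_speedup = (BW_TOKENS_PER_S / BW_RACKS) / (H100_TOKENS_PER_S / H100_RACKS)

# Memory-bandwidth comparison: 570 TB/s per rack vs. 504 GB/s for an RTX 4070.
bw_ratio = 570e12 / 504e9

print(total_speedup)               # 40.0
print(round(per_rack_speedup))     # 93
print(round(bw_ratio))             # 1131
```

So the claimed gain is 40x at equal power, and since it comes from fewer than half as many racks, each rack is doing roughly 93 times the work of an H100 rack.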

DGX Station

As mentioned earlier, Jensen Huang speaks of 30 million programmers who will soon work with some form of AI assistance. Note the distinction: this is a vision of programmers being assisted, not one in which programmers become obsolete. However, for programmers to run large language models locally, they need adequate memory capacity and bandwidth. The DGX Station is NVIDIA’s answer to this market need. It offers 8 TB/s of memory bandwidth, 20,000 AI TFLOPS, and 784 GB of RAM, of which 288 GB is available to the GPU, so it can run relatively large models. Naturally, it uses the newly announced Blackwell chip, just like the GeForce RTX 5xxx series of graphics cards. The big question will obviously be its price. The lower-performing DGX Spark, also announced earlier this year, costs $4,000 while offering only 128 GB of RAM, 273 GB/s of memory bandwidth, and 1,000 AI TFLOPS: 20 times less compute in a much smaller package. The Spark’s small size has its advantages, since several units can be connected to build a powerful small server for an office, but the price is still quite steep.
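The gap between the two workstations is easiest to see as spec ratios. A quick sketch using only the numbers quoted above ("AI TFLOPS" is left as NVIDIA’s own marketing metric):

```python
# Headline specs of the two machines as quoted in the article.
specs = {
    "DGX Station": {"ram_gb": 784, "bandwidth_gb_s": 8000, "ai_tflops": 20000},
    "DGX Spark":   {"ram_gb": 128, "bandwidth_gb_s": 273,  "ai_tflops": 1000},
}

# Print how many times larger each Station spec is than the Spark's.
for key in ("ram_gb", "bandwidth_gb_s", "ai_tflops"):
    ratio = specs["DGX Station"][key] / specs["DGX Spark"][key]
    print(f"{key}: {ratio:.1f}x")
```

The compute gap is the advertised 20x, but the memory-bandwidth gap is nearly 30x, which matters more for running large models locally since token generation is typically bandwidth-bound.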
