NVidia H200, a GPU monster with 141 GB of HBM3E memory

With the democratization of artificial intelligence and the increasingly commonplace use of machine learning across a vast number of application fields, demand for high-performance GPUs has grown dramatically. NVidia's H-series GPU cards are designed and optimized for workloads that require high parallel-computing performance and significant processing capacity.

NVidia H200, the new graphics processor for high-performance computing

Presented at the Supercomputing 23 conference, the NVidia H200 is positioned as the most powerful GPU ever created by Jen-Hsun Huang’s company. Based on the existing Hopper architecture of the H100, the H200 stands out for its significant increase in memory capacity and memory bandwidth.

The H200 features a total of 141 GB of HBM3E memory with a per-pin data rate of approximately 6.25 Gbps, for a total bandwidth of 4.8 TB/s per GPU across six HBM3E stacks. This is a major leap forward from the previous-generation H100, which used 80 GB of HBM3 memory and offered 3.35 TB/s of bandwidth.
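As a sanity check, the quoted per-pin data rate reproduces the 4.8 TB/s figure when combined with the usual 1024-bit interface per HBM stack (an assumption based on the HBM3/HBM3E standard, not a figure from NVidia's announcement):

```python
# Derive the H200's aggregate memory bandwidth from the per-pin rate.
# The 1024-bit bus width per stack is the standard HBM3/HBM3E figure,
# assumed here rather than taken from NVidia's announcement.
PIN_RATE_GBPS = 6.25    # Gbit/s per pin
BUS_WIDTH_BITS = 1024   # bits per HBM3E stack (assumed)
STACKS = 6              # HBM3E stacks per H200 GPU

per_stack_gbs = PIN_RATE_GBPS * BUS_WIDTH_BITS / 8  # GB/s per stack
total_tbs = per_stack_gbs * STACKS / 1000           # TB/s per GPU

print(f"{per_stack_gbs:.0f} GB/s per stack, {total_tbs:.1f} TB/s total")
# → 800 GB/s per stack, 4.8 TB/s total
```

The same arithmetic applied to the H100's five active 80 GB HBM3 stacks yields its lower 3.35 TB/s figure.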

Raw computing performance does not appear to change significantly compared to the H100: an eight-GPU configuration delivers a total of 32 PFLOPS in FP8, or roughly 4 PFLOPS per GPU. NVidia claims that in workloads that benefit greatly from the increased memory capacity, such as large language models (LLMs) like GPT-3, the H200 can deliver up to an 18x performance increase over the original A100, compared to the 11x achieved by the H100.

The H200 GPUs will, of course, power the next generation of supercomputers, capable of exceeding 200 exaFLOPS of computing power and expected by the end of 2024.

HGX H200 and Quad GH200 platform

In addition to the H200 GPU, NVidia also presented the HGX H200 platform, designed to exploit the potential of the new GPU. The HGX H200 maintains compatibility with existing HGX H100 systems, allowing an upgrade in performance and memory capacity without having to redesign the infrastructure. This makes the transition relatively simple for server system builders.

NVidia HGX H200

The Quad GH200 board, meanwhile, houses four GH200 accelerators and offers 288 ARM cores alongside a total of 2.3 TB of high-speed memory. The solution, which combines Grace CPUs and Hopper GPUs, is intended as a building block for larger systems, leveraging a 4-way NVLink interconnect between the four accelerators.
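The board's headline totals break down cleanly per module. The per-module figures used below (72 Grace cores, 480 GB of LPDDR5X plus 96 GB of HBM3 per GH200) are assumptions drawn from NVidia's published GH200 specifications, not from this announcement:

```python
# Break the Quad GH200 totals down per module. Per-module figures are
# assumed from NVidia's published GH200 specs (72 Grace cores,
# 480 GB LPDDR5X + 96 GB HBM3), not from this announcement.
MODULES = 4
CORES_PER_GRACE = 72
LPDDR5X_GB = 480   # CPU-attached memory per Grace (assumed)
HBM_GB = 96        # GPU-attached memory per Hopper (assumed)

total_cores = MODULES * CORES_PER_GRACE
total_mem_tb = MODULES * (LPDDR5X_GB + HBM_GB) / 1000

print(f"{total_cores} ARM cores, {total_mem_tb:.1f} TB of memory")
# → 288 ARM cores, 2.3 TB of memory
```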

While the H200 is a stand-alone GPU designed as an upgrade over the previous H100, the GH200 is a more complete solution that combines the H200 GPU with the Grace CPU. The GH200 is presented as a “superchip” that balances computing power with memory-management capabilities.

NVidia Quad GH200

Jupiter supercomputer, solution for artificial intelligence and HPC applications

With 23,762 GH200 nodes, Jupiter is the largest supercomputer to date based on the Hopper architecture. Born from a collaboration between NVidia and the EuroHPC Joint Undertaking, the supercomputer represents the first large-scale implementation of the Hopper architecture aimed at supporting traditional HPC workloads in addition to low-precision AI tasks.

With 93 exaFLOPS of performance in low-precision AI applications and more than 1 exaFLOPS of high-precision (FP64) performance for HPC workloads, it is set to be a leading player in climate research, materials science, drug discovery, industrial engineering, and quantum computing. The system will be installed at the Jülich research center in Germany.
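Dividing the quoted AI performance evenly across the nodes (a simplifying assumption that ignores any heterogeneity in the system) gives a per-node figure consistent with the roughly 4 PFLOPS of FP8 throughput per GPU mentioned earlier:

```python
# Rough per-node AI throughput for Jupiter, assuming the 93 exaFLOPS
# is spread evenly across all 23,762 GH200 nodes.
NODES = 23_762
AI_EXAFLOPS = 93

per_node_pflops = AI_EXAFLOPS * 1000 / NODES  # exa -> peta
print(f"~{per_node_pflops:.1f} PFLOPS per GH200 node")
# → ~3.9 PFLOPS per GH200 node
```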
