Pictured is an Nvidia Hopper H100 with 80GB HBM3 and an impressive VRM

bottom line: Nvidia revealed its Hopper architecture at GTC 2022, announcing the H100 server accelerator but only showing renders of it. Now we finally have some photos of the SXM card variant with a staggering 700W TDP.
It’s been a little over a month since Nvidia unveiled its H100 server accelerator based on the Hopper architecture, and so far we’ve only seen renderings of it. That changes today, as ServeTheHome has shared photos of the card in the SXM5 form factor.
The GH100 compute GPU is manufactured on TSMC’s N4 process node and has a die size of 814 mm². The SXM variant features 16,896 FP32 CUDA cores, 528 Tensor cores, and 80 GB of HBM3 memory connected over a 5,120-bit bus. As you can see in the images, there are six 16 GB memory stacks around the GPU, but one of them is disabled.
Nvidia also listed a staggering 700W TDP, 75% higher than that of its predecessor, so it’s no surprise that the card comes with an extremely impressive VRM solution: 29 inductors paired with two power stages each, plus three more inductors with a single power stage. Cooling all of these densely packed components is likely to be a challenge.
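For context, here is a quick back-of-the-envelope check of the figures above: the 80 GB of usable memory from five active 16 GB stacks, the total power-stage count of the VRM, and the TDP jump. The 400W predecessor figure (the A100 SXM) is an assumption not stated in this article.

```python
# Back-of-the-envelope check of the H100 SXM5 figures quoted above.
# The 400 W predecessor TDP (A100 SXM) is an assumption, not from this article.

hbm3_stacks_total = 6        # stacks visible around the GPU in the photos
hbm3_stacks_disabled = 1     # one stack is disabled
stack_capacity_gb = 16

usable_memory_gb = (hbm3_stacks_total - hbm3_stacks_disabled) * stack_capacity_gb
print(f"Usable HBM3: {usable_memory_gb} GB")            # 80 GB

# VRM: 29 inductors with two power stages each, plus 3 with one each
power_stages = 29 * 2 + 3 * 1
print(f"Total power stages: {power_stages}")            # 61

h100_sxm_tdp_w = 700
predecessor_tdp_w = 400                                 # assumed A100 SXM TDP
increase = (h100_sxm_tdp_w - predecessor_tdp_w) / predecessor_tdp_w
print(f"TDP increase vs. predecessor: {increase:.0%}")  # 75%
```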
Another notable change is the connector layout of the SXM5 module: there are now one short and one long mezzanine connector, whereas previous generations used two longer connectors of the same size.
Nvidia will begin shipping systems with the H100 in the third quarter of this year. It’s worth noting that the PCIe version of the H100 is currently listed in Japan for ¥4,745,950 (~US$36,300) after taxes and shipping, even though it has fewer CUDA cores, slower HBM2e memory, and half the TDP of the SXM variant.