PNY EU
NVIDIA H100 NVL

  • SKU: TCSH100NVLPCIE-PB
  • Description

RoHS compliant
    Unprecedented Performance, Scalability, and Security for Every Data Center

The H100 NVL is designed to scale support for large language models in mainstream PCIe-based server systems. With increased raw performance, larger and faster HBM3 memory, and NVLink connectivity via bridges, mainstream systems configured with 8x H100 NVL outperform HGX A100 systems by up to 12x on GPT-3 175B LLM throughput.

The H100 NVL enables standard mainstream servers to deliver high-performance generative AI inference for large language models, while giving partners and solution providers the fastest time to market and easy scale-out.

Performance Highlights

    FP64: 30 TFLOPS
    FP64 Tensor Core: 60 TFLOPS
    FP32: 60 TFLOPS
    TF32 Tensor Core: 835 TFLOPS (with sparsity)
    BFLOAT16 Tensor Core: 1,671 TFLOPS (with sparsity)
    FP16 Tensor Core: 1,671 TFLOPS (with sparsity)
    FP8 Tensor Core: 3,341 TFLOPS (with sparsity)
    INT8 Tensor Core: 3,341 TOPS (with sparsity)
    GPU Memory: 94 GB HBM3
    GPU Memory Bandwidth: 3.9 TB/s
    Max Thermal Design Power (TDP): 350-400 W (configurable)
    NVIDIA AI Enterprise: Included
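The "Sparsity" qualifier on the Tensor Core figures follows NVIDIA's convention of quoting throughput with 2:4 structured sparsity enabled, which doubles the dense rate. A minimal sketch recovering the dense (non-sparse) figures, assuming the standard 2x sparsity factor:

```python
# Tensor Core rates as quoted in the table above, i.e. with
# 2:4 structured sparsity enabled (TFLOPS, TOPS for INT8).
sparsity_rates = {
    "TF32": 835,
    "BFLOAT16": 1671,
    "FP16": 1671,
    "FP8": 3341,
    "INT8": 3341,
}

# Structured sparsity doubles effective Tensor Core throughput,
# so the dense rate is half the quoted figure.
dense_rates = {fmt: rate / 2 for fmt, rate in sparsity_rates.items()}

print(dense_rates["FP8"])  # dense FP8 throughput in TFLOPS
```

For example, the quoted 3,341 TFLOPS FP8 figure corresponds to roughly 1,670 TFLOPS on dense matrices.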

Warranty

    Dedicated Field Application Engineers for NVIDIA professional products.

    Contact pnypro@pny.eu for additional information.
