The NVIDIA A100 Tensor Core GPU delivers unprecedented acceleration—at every scale—to power the world’s highest-performing elastic data centers for AI, data analytics, and high-performance computing (HPC) applications. As the engine of the NVIDIA data center platform, A100 provides up to 20x higher performance than the prior NVIDIA Volta generation. A100 can efficiently scale up or be partitioned into as many as seven isolated GPU instances, with Multi-Instance GPU (MIG) providing a unified platform that enables elastic data centers to dynamically adjust to shifting workload demands.
A100 is part of the complete NVIDIA data center solution that incorporates building blocks across hardware, networking, software, libraries, and optimized AI models and applications from NGC. Representing the most powerful end-to-end AI and HPC platform for data centers, it allows researchers to deliver real-world results and deploy solutions into production at scale, while allowing IT to optimize the utilization of every available A100 GPU.
Highlights
Feature            A100
-----------------  --------------------------------
GPU Memory         80 GB HBM2e
Memory Bandwidth   1,555 GB/s
MIG Instances      7 instances @ 10 GB each
                   3 instances @ 20 GB each
                   2 instances @ 40 GB each
                   1 instance @ 80 GB
Interconnect       PCIe Gen4 x16
NVLink Bridge      3x
Form Factor        Dual-slot FHFL (full height, full length)
Max Power          250 W
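The MIG configurations in the table above can be sanity-checked arithmetically: each layout's instance count times its per-instance memory slice must fit within the card's 80 GB total. A minimal Python sketch (the `fits` helper is illustrative, not part of any NVIDIA API):

```python
# Illustrative check of the A100 80GB MIG layouts: each configuration's
# total claimed memory must fit within the 80 GB on the card.
PROFILES = [
    # (instances, GB per instance)
    (7, 10),   # seven 10 GB instances
    (3, 20),   # three 20 GB instances
    (2, 40),   # two 40 GB instances
    (1, 80),   # one full-GPU instance
]

TOTAL_GB = 80

def fits(instances, gb_each, total=TOTAL_GB):
    """True if this MIG layout fits in the GPU's total memory."""
    return instances * gb_each <= total

results = [fits(n, gb) for n, gb in PROFILES]
# All four configurations fit: [True, True, True, True]
```

In practice, MIG instances are created and listed with `nvidia-smi`; the sketch above only mirrors the memory arithmetic of the published layouts.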
NVIDIA Ampere-Based Architecture
Third-Generation Tensor Cores
TF32 for AI: 20x Higher Performance, Zero Code Change
Double-Precision Tensor Cores: The Biggest Milestone Since FP64 for HPC
Multi-Instance GPU (MIG)
HBM2e
Next Generation NVLink
Every Deep Learning Framework, 700+ GPU-Accelerated Applications
Virtualization Capabilities
Structural Sparsity: 2X Higher Performance for AI
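Structural sparsity here means a fine-grained 2:4 pattern: in every contiguous group of four weights, two are zero, which the A100's sparse Tensor Cores can exploit for up to 2X math throughput. A minimal pure-Python sketch of producing that pattern (magnitude-based selection of which two weights to drop is a common pruning heuristic, not a hardware requirement; the hardware only requires the two-of-four zero pattern):

```python
def prune_2_4(weights):
    """Apply 2:4 structured sparsity: in every group of four consecutive
    weights, zero out the two with the smallest magnitude."""
    pruned = []
    for i in range(0, len(weights), 4):
        group = list(weights[i:i + 4])
        # Indices of this group ordered by magnitude, smallest first.
        order = sorted(range(len(group)), key=lambda j: abs(group[j]))
        for j in order[:2]:
            group[j] = 0.0
        pruned.extend(group)
    return pruned

row = [0.9, -0.1, 0.05, -0.7, 0.2, 0.3, -0.4, 0.1]
prune_2_4(row)  # -> [0.9, 0.0, 0.0, -0.7, 0.0, 0.3, -0.4, 0.0]
```

A model pruned this way keeps exactly half its weights per group, and the regular pattern is what lets the hardware skip the zeroed multiplications.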
Warranty
3-Year Limited Warranty
Dedicated Field Application Engineers for NVIDIA professional products