The NVIDIA A2 Tensor Core GPU provides entry-level inference with low power, a small footprint, and high performance for intelligent video analytics (IVA) or NVIDIA AI at the edge. Featuring a low-profile PCIe Gen4 card with a configurable 40–60 watt (W) thermal design power (TDP), the A2 brings versatile inference acceleration to any server.
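Because the A2's TDP is configurable between 40 W and 60 W, a deployment can tune the power cap to match chassis thermals and power budgets. The snippet below is a minimal sketch of how such a limit might be queried and adjusted through NVML; it assumes the nvidia-ml-py (pynvml) bindings are installed, that the driver permits power-limit changes (typically requiring administrative privileges), and that the target GPU is at device index 0.

```python
# Sketch: query and adjust a GPU's configurable power limit via NVML.
# Assumes the nvidia-ml-py package (import name: pynvml) and admin rights.
import pynvml

pynvml.nvmlInit()
try:
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # device index 0 is an assumption

    # NVML reports power limits in milliwatts.
    min_mw, max_mw = pynvml.nvmlDeviceGetPowerManagementLimitConstraints(handle)
    print(f"Configurable TDP range: {min_mw / 1000:.0f} W - {max_mw / 1000:.0f} W")

    # Example: cap the board at its minimum supported limit (e.g., 40 W on an A2).
    pynvml.nvmlDeviceSetPowerManagementLimit(handle, min_mw)
finally:
    pynvml.nvmlShutdown()
```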
A2’s versatility, compact size, and low power meet the demands of edge deployments at scale, instantly upgrading existing entry-level CPU servers to handle inference. Servers accelerated with A2 GPUs deliver up to 20X higher inference performance versus CPUs and 1.3X more efficient IVA deployments than previous GPU generations — all at an entry-level price point.
NVIDIA-Certified systems with the NVIDIA A2, A30, and A100 Tensor Core GPUs and NVIDIA AI—including the NVIDIA Triton Inference Server, open-source inference serving software—deliver breakthrough inference performance across edge, data center, and cloud. They ensure that AI-enabled applications deploy with fewer servers and less power, resulting in easier deployments and faster insights with dramatically lower costs.
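To illustrate how an application might send work to such a deployment, here is a minimal sketch of a single inference request using the Triton Python HTTP client (tritonclient). The server address, the model name ("resnet50"), and the tensor names and shapes are placeholders for illustration only, not details of this product.

```python
# Sketch: submit one inference request to a Triton Inference Server.
# Server URL, model name, and tensor names/shapes below are placeholders.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# Build the request input; shape and datatype must match the deployed model's config.
image = np.random.rand(1, 3, 224, 224).astype(np.float32)
infer_input = httpclient.InferInput("input__0", list(image.shape), "FP32")
infer_input.set_data_from_numpy(image)

requested_output = httpclient.InferRequestedOutput("output__0")

result = client.infer(
    model_name="resnet50",
    inputs=[infer_input],
    outputs=[requested_output],
)
print(result.as_numpy("output__0").shape)
```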
Highlights
Third-Generation NVIDIA Tensor Cores
Second-Generation RT Cores
Structural Sparsity
Hardened Root of Trust for Secure Deployments
Superior Hardware Transcoding Performance
Warranty
3-Year Limited Warranty
Free dedicated phone and email technical support (1-800-230-0130)
Dedicated Field Application Engineers for NVIDIA professional products
Resources
Contact gopny@pny.com for additional information.