PNY 3S

  Description

    Maximizing your DGX systems throughout your AI journey

    NOW WITH FULL A100 COMPATIBILITY AND PERFORMANCE 

    Simple connectivity, flexible design and solutions: whether deployed with a single DGX configuration or a PNY AI CLUSTER of up to 10 nodes, there is no need for multiple storage nodes or controllers; everything needed is contained and automated within a single appliance.

    Highlights

    50% FASTER TRAINING Real-life deep learning projects show a massive 50% improvement in training times when compared to other solutions. Standard synthetic storage benchmarks also show excellent performance, with bandwidth, latency and IOPS results that leave others behind.

     

    50% LOWER COST Cost and affordability are a key design focus. By removing the need for expensive storage controllers, costs are dramatically reduced, and more of your investment goes into GPU and NVMe resources, providing greater productivity and ROI.

     

    100% SCALABILITY With up to 360TB within 2U and a massive 150TB within 1U, even solutions starting at 30TB can scale in stages that suit your project.

     

    Extending NVIDIA’s DGX Resource

    NVIDIA’s DGX range has helped shape the AI landscape and changed future possibilities. However, the DGX range has limited internal space for NVMe flash storage, an essential element for performance and overall capability. The PNY AI Optimised Storage Server creates a central pool of ultra-low-latency NVMe that can be shared amongst one or multiple DGX servers, providing each DGX with the ideal level of resource without the need for upfront over-investment. Connected via NVIDIA-compatible InfiniBand or Ethernet, the RDMA protocol ensures the NVMe resource is seen, and performs, as if it were internal to the DGX.
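
    The description above does not name the exact software stack, but the behaviour it describes (remote NVMe appearing and performing as if local) matches the standard NVMe over Fabrics (NVMe-oF) approach over RDMA. As a purely illustrative sketch, the Python snippet below shows how such a remote pool is typically discovered and attached on a Linux host using the common nvme-cli tool; the IP address, port and NVMe Qualified Name are placeholders, and a real PNY 3S deployment may automate these steps differently.

    import subprocess

    # Placeholder fabric details -- replace with the values for your appliance.
    TARGET_ADDR = "192.168.100.10"                      # storage appliance fabric IP (example)
    TARGET_PORT = "4420"                                # common NVMe-oF service port
    TARGET_NQN = "nqn.2023-01.com.example:pny-3s-pool"  # hypothetical subsystem NQN

    def discover_targets() -> None:
        """List the NVMe-oF subsystems the appliance exposes over RDMA."""
        result = subprocess.run(
            ["nvme", "discover", "-t", "rdma", "-a", TARGET_ADDR, "-s", TARGET_PORT],
            capture_output=True, text=True, check=True,
        )
        print(result.stdout)

    def connect_target() -> None:
        """Attach the remote NVMe pool; it then appears as a local
        /dev/nvmeXnY block device on the DGX host."""
        subprocess.run(
            ["nvme", "connect", "-t", "rdma",
             "-n", TARGET_NQN, "-a", TARGET_ADDR, "-s", TARGET_PORT],
            check=True,
        )

    if __name__ == "__main__":
        discover_targets()
        connect_target()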
