
PNY LAB

Test your future NVIDIA AI environment, from training to inference, under real workloads before deploying in production.

PNY LAB,
YOUR NVIDIA AI TEST ENVIRONMENT

The PNY Lab is a hands-on facility where you can test your future NVIDIA AI environment, from training to inference, including multi-GPU scaling, vGPU, software stacks, and production workflows.

PNY LAB ON DEMAND

What you can test*

  • Training Performance

    • Train a small model to measure real training speed
    • Use known models such as LLaMA 7B or standard vision models
    • Measure examples per second
    • Monitor GPU utilization during training
    • Compare performance with your current hardware

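As a rough illustration of this kind of measurement, the sketch below times a few PyTorch training steps and reports examples per second; the model, batch size, and step count are placeholders rather than a prescribed PNY Lab benchmark, and GPU utilization during the run can be watched separately with nvidia-smi.

    # Minimal sketch: time a few training steps and report examples/second.
    # The model, batch size, and step count are illustrative placeholders.
    import time
    import torch
    import torch.nn as nn

    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 10)).to(device)
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()
    batch_size, steps = 256, 50

    x = torch.randn(batch_size, 1024, device=device)        # synthetic input batch
    y = torch.randint(0, 10, (batch_size,), device=device)  # synthetic labels

    for _ in range(5):                                       # warm-up steps (not timed)
        optimizer.zero_grad()
        loss_fn(model(x), y).backward()
        optimizer.step()

    if device == "cuda":
        torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(steps):
        optimizer.zero_grad()
        loss_fn(model(x), y).backward()
        optimizer.step()
    if device == "cuda":
        torch.cuda.synchronize()
    elapsed = time.perf_counter() - start

    print(f"Training throughput: {steps * batch_size / elapsed:.0f} examples/sec")
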
  • Large-Model Inference

    • Load large models such as Mixtral or LLaMA 70B
    • Measure tokens per second and latency
    • Test how many concurrent requests can be processed
    • Verify memory stability under sustained load

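For large-model inference, a tokens-per-second and concurrency check can look like the sketch below, assuming vLLM is installed; the model name, tensor-parallel size, prompt set, and token budget are placeholders rather than fixed PNY Lab settings.

    # Minimal sketch: measure offline generation throughput with vLLM.
    # Model name, tensor-parallel size, prompts, and token budget are placeholders.
    import time
    from vllm import LLM, SamplingParams

    llm = LLM(model="mistralai/Mixtral-8x7B-Instruct-v0.1", tensor_parallel_size=4)
    params = SamplingParams(max_tokens=256, temperature=0.0)
    prompts = ["Summarize the benefits of GPU inference."] * 32   # simulated concurrent requests

    start = time.perf_counter()
    outputs = llm.generate(prompts, params)
    elapsed = time.perf_counter() - start

    generated = sum(len(o.outputs[0].token_ids) for o in outputs)
    print(f"{generated / elapsed:.1f} generated tokens/sec")
    print(f"Wall time for {len(prompts)} concurrent requests: {elapsed:.1f} s")
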
  • Multi-GPU Scalability

    • Test how performance increases when adding GPUs
    • Check the quality of GPU-to-GPU communication
    • Compare the same job on 1 GPU, 2 GPUs, 4 GPUs, and more

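One simple way to look at GPU-to-GPU communication quality is an all-reduce throughput check such as the sketch below, saved for example as allreduce_check.py and launched with torchrun; the payload size and iteration count are placeholders. Running the same training job on 1, 2, and 4 GPUs and comparing examples per second completes the scaling picture.

    # Minimal sketch: all-reduce throughput check across the GPUs in one node.
    # Launch with: torchrun --nproc_per_node=<number of GPUs> allreduce_check.py
    # Payload size and iteration count are illustrative placeholders.
    import os
    import time
    import torch
    import torch.distributed as dist

    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)
    world_size = dist.get_world_size()

    payload = torch.randn(64 * 1024 * 1024, device="cuda")   # ~256 MB of float32
    iters = 20

    dist.all_reduce(payload)                                  # warm-up
    torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(iters):
        dist.all_reduce(payload)
    torch.cuda.synchronize()
    elapsed = time.perf_counter() - start

    if dist.get_rank() == 0:
        gb = iters * payload.numel() * payload.element_size() / 1e9
        print(f"{world_size} GPUs: ~{gb / elapsed:.1f} GB/s sustained all-reduce throughput")
    dist.destroy_process_group()
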
  • Data Pipeline Performance

    • Measure file loading speed from storage
    • Evaluate CPU preprocessing speed
    • Detect whether GPUs are waiting for data
    • Identify bottlenecks in the end-to-end pipeline

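A quick way to see whether the GPUs would be waiting for data is to run the input pipeline on its own and compare its images per second with what the GPUs sustain during training, as in the sketch below; the dataset path, transforms, batch size, and worker count are placeholders, and torchvision is assumed to be available.

    # Minimal sketch: measure how fast the input pipeline alone can feed data,
    # independent of the GPU. Path, transforms, and worker count are placeholders.
    import time
    from torch.utils.data import DataLoader
    from torchvision import datasets, transforms

    dataset = datasets.ImageFolder(
        "/data/imagenet/train",                      # placeholder dataset path
        transform=transforms.Compose([
            transforms.RandomResizedCrop(224),
            transforms.ToTensor(),
        ]),
    )
    loader = DataLoader(dataset, batch_size=256, num_workers=8, pin_memory=True)

    start = time.perf_counter()
    seen = 0
    for images, _ in loader:
        seen += images.size(0)
        if seen >= 25_600:                           # sample roughly 100 batches
            break
    elapsed = time.perf_counter() - start
    print(f"Input pipeline alone: {seen / elapsed:.0f} images/sec")
    # If this number is lower than the images/sec the GPUs sustain in training,
    # the GPUs are waiting for data and the pipeline is the bottleneck.
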
  • GPU Memory & Load Stress Testing

    • Run long-sequence LLMs (from 8k to 64k tokens)
    • Test very large batch sizes
    • Run heavy tensor operations
    • Observe memory stability, fragmentation, and limits

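As one example of a load and memory check, the sketch below runs repeated large matrix multiplications while logging the PyTorch allocator statistics; the tensor sizes and iteration counts are placeholders.

    # Minimal sketch: sustained heavy tensor work while watching GPU memory.
    # Tensor sizes and iteration counts are illustrative placeholders.
    import torch

    a = torch.randn(8192, 8192, device="cuda")
    b = torch.randn(8192, 8192, device="cuda")

    for step in range(1, 501):
        c = torch.nn.functional.gelu(a @ b)          # heavy tensor operation
        if step % 100 == 0:
            torch.cuda.synchronize()
            allocated = torch.cuda.memory_allocated() / 1e9
            reserved = torch.cuda.memory_reserved() / 1e9
            peak = torch.cuda.max_memory_allocated() / 1e9
            print(f"step {step}: allocated {allocated:.1f} GB, "
                  f"reserved {reserved:.1f} GB, peak {peak:.1f} GB")
    # Stable allocated/reserved numbers over time indicate the workload fits;
    # steady growth points to leaks or fragmentation.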

* Non-exhaustive list

What you can test

From standard benchmarks to full end-to-end workflows, test, measure and compare your NVIDIA AI environment with confidence.

Standard Benchmarks

  • Run lightweight versions of MLPerf-style tests
  • Get a clear benchmark score to compare with other machines
  • Ensure the system delivers expected performance
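
This is not MLPerf itself, but as an illustration of a small, repeatable score, a sustained-matmul measurement like the sketch below gives a single number that is easy to compare across machines; the matrix size and iteration count are placeholders.

    # Minimal sketch: a small, repeatable benchmark number (sustained FP16 matmul TFLOPS).
    # Not an MLPerf result; matrix size and iteration count are illustrative placeholders.
    import time
    import torch

    n, iters = 8192, 100
    a = torch.randn(n, n, device="cuda", dtype=torch.float16)
    b = torch.randn(n, n, device="cuda", dtype=torch.float16)

    a @ b                                    # warm-up
    torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(iters):
        a @ b
    torch.cuda.synchronize()
    elapsed = time.perf_counter() - start

    tflops = iters * 2 * n**3 / elapsed / 1e12
    print(f"Sustained FP16 matmul: {tflops:.1f} TFLOPS")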

Software Environment Validation

  • Check CUDA and driver stability
  • Test NVIDIA containers
  • Launch distributed training workflows
  • Test inference servers such as Triton or vLLM
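
A first-pass software-stack check can be as small as the sketch below, which confirms that PyTorch sees the expected driver, CUDA runtime, cuDNN, NCCL, and GPUs; the same idea applies inside NVIDIA containers.

    # Minimal sketch: confirm the software stack sees the GPUs as expected.
    import torch
    import torch.distributed as dist

    print("PyTorch:", torch.__version__)
    print("CUDA available:", torch.cuda.is_available())
    print("CUDA runtime:", torch.version.cuda)
    print("cuDNN:", torch.backends.cudnn.version())
    print("NCCL available:", dist.is_nccl_available())
    for i in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(i)
        print(f"GPU {i}: {props.name}, {props.total_memory / 1e9:.0f} GB")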

End-to-End Workflow Simplicity

  • Load a dataset
  • Train a small model
  • Run an evaluation
  • Deploy a simple API for inference
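
The final step of that workflow, a simple inference API, can be as small as the sketch below, assuming FastAPI, Transformers, and Uvicorn are available; the model, endpoint shape, and file name (inference_api.py) are placeholders.

    # Minimal sketch: a simple inference API around a small Transformers model.
    # The model choice and endpoint shape are illustrative placeholders.
    from fastapi import FastAPI
    from pydantic import BaseModel
    from transformers import pipeline

    app = FastAPI()
    classifier = pipeline("text-classification", device=0)   # small default model on GPU 0

    class PredictRequest(BaseModel):
        text: str

    @app.post("/predict")
    def predict(req: PredictRequest):
        return classifier(req.text)[0]                        # e.g. {"label": ..., "score": ...}

    # Run with: uvicorn inference_api:app --host 0.0.0.0 --port 8000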

The hardware behind your test

NVIDIA DGX™ Spark

NVIDIA® DGX Spark™ is a new class of AI computer for building, fine-tuning, and running large models locally, with easy deployment to data centers or the cloud.

NVIDIA DGX™ Systems

NVIDIA DGX™ systems are recognized as the world's top solutions for scaling enterprise AI infrastructure and delivering exceptional performance.

NVIDIA Ethernet and InfiniBand switches

NVIDIA Ethernet and InfiniBand switches provide high-performance, low-latency networking for scalable AI, HPC, and data center workloads.

NVIDIA RTX PRO™ 6000 Blackwell Server Edition

The NVIDIA RTX PRO™ 6000 Blackwell Server Edition is a powerful data center GPU for AI and visual computing workloads.

NVMe All-Flash Storage

NVMe all-flash storage is purpose-built for AI-driven workloads, delivering high-performance data storage to power GPU-intensive servers and accelerate your AI workflows at scale.


Request your access to the PNY Lab


