NVIDIA DGX SuperPOD
with DGX GB200 Systems
AI infrastructure with constant uptime.
NVIDIA DGX SuperPOD™ with DGX GB200 systems is purpose-built for training and inference on trillion-parameter generative AI models. Each liquid-cooled rack features 36 NVIDIA GB200 Grace Blackwell Superchips (36 NVIDIA Grace CPUs and 72 Blackwell GPUs) connected as one with NVIDIA NVLink. Multiple racks connect with NVIDIA Quantum InfiniBand to scale up to tens of thousands of GB200 Superchips.
Learn how DGX SuperPOD with DGX GB200 systems accelerates AI innovation.
Read the Datasheet

Read how the NVIDIA DGX™ platform and NVIDIA NeMo™ have empowered leading enterprises.
Download the Ebook

An intelligent control plane tracks thousands of data points across hardware, software, and data center infrastructure to ensure continuous operation and data integrity, plan for maintenance, and automatically reconfigure the cluster to avoid downtime.
Scaling up to tens of thousands of NVIDIA GB200 Superchips, DGX SuperPOD with DGX GB200 systems performs training and inference on state-of-the-art trillion-parameter generative AI models.
NVIDIA GB200 Superchips, each with one Grace CPU and two Blackwell GPUs, are connected via fifth-generation NVLink to achieve 1.8 terabytes per second (TB/s) of GPU-to-GPU bandwidth.
DGX SuperPOD with NVIDIA DGX B200 and DGX H100 systems is an ideal choice for large development teams working on enterprise AI workloads.
AI Infrastructure for Enterprise Deployments

NVIDIA Enterprise Services provide support, education, and professional services for your DGX infrastructure. With NVIDIA experts available at every step of your AI journey, Enterprise Services can help you get your projects up and running quickly and successfully.
Learn More About DGX Enterprise Services