Hewlett Packard Enterprise (HPE) has shipped its first system based on the NVIDIA Blackwell family, the NVIDIA GB200 NVL72, marking a significant advance in AI computing. This rack-scale solution is designed to help service providers and large enterprises rapidly deploy massive, complex AI clusters. Equipped with direct liquid cooling, the system is built for efficiency, performance, and scalability.
Revolutionizing AI Deployment
The NVIDIA GB200 NVL72 is built to handle the demands of modern AI service providers and large-scale enterprise model builders. Trish Damkroger, Senior Vice President and General Manager of HPC & AI Infrastructure Solutions at HPE, emphasized the importance of high-performance computing in today’s AI landscape.
“AI service providers and large enterprise model builders are under tremendous pressure to offer scalability, extreme performance, and fast time-to-deployment,” said Damkroger. “As builders of the world’s top three fastest systems with direct liquid cooling, HPE delivers lower cost per token training and industry-leading services expertise.”
Advanced Architecture for Large-Scale AI Models
The NVIDIA GB200 NVL72 features a shared-memory, low-latency architecture designed to process AI models with over a trillion parameters in a single memory space. The system seamlessly integrates NVIDIA CPUs, GPUs, compute and switch trays, networking, and software, ensuring high efficiency for parallelizable workloads such as generative AI (GenAI) model training and inferencing.
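The claim that trillion-parameter models fit in a single memory space can be sanity-checked with back-of-envelope arithmetic. The sketch below is illustrative only: it assumes BF16 weights at 2 bytes per parameter and counts weights alone, ignoring optimizer state, KV caches, and activations, which in practice add several multiples of this footprint.

```python
# Back-of-envelope check: do the weights of a trillion-parameter model fit
# in the system's total HBM3e pool? Assumptions (ours, not the article's):
# BF16 weights, 2 bytes per parameter, weights only.

PARAMS = 1_000_000_000_000   # one trillion parameters
BYTES_PER_PARAM = 2          # BF16 / FP16
POOL_TB = 13.5               # total HBM3e cited later in the article

weights_tb = PARAMS * BYTES_PER_PARAM / 1e12  # decimal terabytes
print(f"weights: {weights_tb:.1f} TB of a {POOL_TB} TB pool")
# → weights: 2.0 TB of a 13.5 TB pool
```

Under these assumptions the raw weights occupy roughly 2 TB, comfortably inside the quoted 13.5 TB, which is consistent with the article's framing of trillion-parameter inference in one memory space.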
“Engineers, scientists, and researchers need cutting-edge liquid cooling technology to keep up with increasing power and compute requirements,” said Bob Pette, Vice President of Enterprise Platforms at NVIDIA. “With HPE’s first shipment of the NVIDIA GB200 NVL72, large enterprises and service providers can efficiently build, deploy, and scale AI clusters.”
Direct Liquid Cooling: The Key to Efficiency
With escalating power requirements and increasing data center density, HPE leverages its five decades of expertise in liquid cooling to bring fast deployment and extensive infrastructure support to complex AI environments. HPE’s direct liquid cooling technology has positioned it as a leader in energy-efficient supercomputing, having built seven of the world’s top 10 fastest supercomputers and delivering eight of the top 15 systems on the Green500 list.
Key Features of the NVIDIA GB200 NVL72 by HPE
- 72 NVIDIA Blackwell GPUs and 36 NVIDIA Grace CPUs interconnected via high-speed NVIDIA NVLink
- Up to 13.5 TB of total HBM3e memory with 576 TB/sec of aggregate bandwidth
- HPE direct liquid cooling technology for optimized power efficiency and system performance
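Dividing the headline figures across the 72 GPUs gives a rough per-GPU view. This is simple arithmetic on the numbers above, using decimal units; the results are illustrative, not vendor-stated per-GPU specifications.

```python
# Per-GPU breakdown of the rack-level figures (illustrative arithmetic
# only; decimal TB/GB, not vendor-confirmed per-GPU specs).

GPUS = 72
TOTAL_HBM_TB = 13.5      # total HBM3e across the rack
TOTAL_BW_TB_S = 576      # aggregate memory bandwidth, TB/sec

hbm_per_gpu_gb = TOTAL_HBM_TB * 1000 / GPUS   # GB of HBM3e per GPU
bw_per_gpu_tb_s = TOTAL_BW_TB_S / GPUS        # TB/sec per GPU

print(f"{hbm_per_gpu_gb:.1f} GB and {bw_per_gpu_tb_s:.1f} TB/s per GPU")
# → 187.5 GB and 8.0 TB/s per GPU
```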
Industry-Leading Services and Support
HPE is equipped to deliver AI solutions on a global scale, supporting massive, custom AI clusters with industry-leading serviceability. HPE offers tailored support services that ensure rapid installation and accelerated time-to-value for customers deploying large-scale AI infrastructure.
HPE’s HPC & AI Custom Support Services provide:
- Expert on-site support and dedicated remote engineers
- Customized service levels to match customer needs
- Sustainability services to enhance energy efficiency
- Proactive incident management for uninterrupted AI operations
With the first shipment of the NVIDIA GB200 NVL72, HPE and NVIDIA are setting new benchmarks in AI computing, paving the way for next-generation AI model training, inferencing, and large-scale deployments.