Nvidia debuts new DGX H100 systems powered by Intel's 4th Gen Intel Xeon Scalable chips

Nvidia Corp. today announced a refreshed lineup of Nvidia Hopper accelerated computing systems powered by its own H100 Tensor Core graphics processing units, as well as by the 4th Gen Intel Xeon Scalable processors that were launched by Intel Corp. today.

In addition, dozens of Nvidia’s partners have announced their own server systems based on the new hardware combination, and the company says they provide up to 25 times more efficiency than previous generation machines.

Nvidia explained that Intel’s new central processing units will be combined with its GPUs in a new generation of Nvidia DGX H100 systems. Intel announced its 4th Gen Intel Xeon Scalable processors today, alongside the Intel Xeon CPU Max Series and the Intel Data Center GPU Max Series. Intel says the new chips deliver a significant leap in data center performance and efficiency, with enhanced security and new capabilities for artificial intelligence, the cloud, the network and edge, and the world’s most powerful supercomputers.

The new CPUs offer workload-first acceleration and highly optimized software tuned for specific workloads, enabling users to squeeze the right performance at the right power in order to optimize the total cost of ownership. In addition, the 4th Gen Xeon processors are said to deliver customers a range of features for managing power and performance, making the optimal use of CPU resources to help achieve their sustainability goals.

One of the key new capabilities of Intel’s 4th Gen Intel Xeon Scalable processors is support for PCIe Gen 5, which doubles data transfer rates between CPU and GPU. The additional PCIe lanes allow for a greater density of GPUs and high-speed networking within each server. That improves the performance of data-intensive workloads such as AI, while boosting network speeds to up to 400 gigabits per second per connection, meaning faster data transfer between servers and storage arrays.
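As a back-of-the-envelope illustration of that doubling claim (the per-lane rates and encoding figures below come from the PCIe specifications, not from the article), PCIe 4.0 signals at 16 gigatransfers per second per lane while PCIe 5.0 signals at 32 GT/s, so a full x16 link roughly doubles from about 32 GB/s to about 63 GB/s in each direction:

```python
def link_bandwidth_gbps(gt_per_s: float, lanes: int = 16) -> float:
    """Approximate one-direction PCIe link bandwidth in GB/s.

    Assumes 128b/130b line coding (PCIe 3.0 and later), so usable
    bytes per second per lane = GT/s * (128/130) / 8.
    """
    encoding_efficiency = 128 / 130  # 128b/130b line coding overhead
    return gt_per_s * encoding_efficiency / 8 * lanes

gen4 = link_bandwidth_gbps(16)  # PCIe 4.0: 16 GT/s per lane
gen5 = link_bandwidth_gbps(32)  # PCIe 5.0: 32 GT/s per lane
print(f"PCIe 4.0 x16: {gen4:.1f} GB/s")   # ~31.5 GB/s
print(f"PCIe 5.0 x16: {gen5:.1f} GB/s")   # ~63.0 GB/s
print(f"Speedup: {gen5 / gen4:.1f}x")     # 2.0x
```

This is a simplified sketch: real-world throughput is further reduced by packet headers and flow control, but the 2x generational ratio holds regardless.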

Intel’s CPUs will be combined with eight Nvidia H100 GPUs in the new DGX systems, Nvidia said. The Nvidia H100 GPU is the most powerful chip the company has ever made, containing more than 80 billion transistors, making it an ideal companion for Intel’s new chips. It boasts some unique features that make it ideal for high-performance computing workloads, including a built-in Transformer Engine and a highly scalable NVLink interconnect that enable it to power large artificial intelligence models, recommendation systems and more.

Patrick Moorhead of Moor Insights & Strategy said he was impressed with Nvidia’s newest DGX systems, but he pointed out that they’re not the first to support PCIe 5, as Advanced Micro Devices Inc.’s latest processors also come with that feature. “I don’t think PCIe 5 is the deciding factor,” he added. “I think it will likely come down to lower pricing, as I am hearing that Intel is providing deep discounts these days.”

The new Nvidia DGX H100 systems will be joined by more than 60 new servers featuring a combination of Nvidia’s GPUs and Intel’s CPUs, from companies including ASUSTek Computer Inc., Atos Inc., Cisco Systems Inc., Dell Technologies Inc., Fujitsu Ltd., GIGA-BYTE Technology Co. Ltd., Hewlett Packard Enterprise Co., Lenovo Group Ltd., Quanta Computer Inc. and Super Micro Computer Inc.

These forthcoming systems from Nvidia and others will leverage the latest GPU and CPU hardware to run workloads with 25 times the efficiency of traditional, CPU-only servers, Nvidia said. They offer an “incredible performance per watt” that results in far less power consumption, it claims. Further, compared with the previous-generation Nvidia DGX systems, the latest hardware boosts the efficiency of AI training and inference workloads by 3.5 times, resulting in roughly a threefold reduction in total cost of ownership.

The software powering Nvidia’s new systems comes in handy too. The new DGX H100 systems all come with a free license for Nvidia AI Enterprise. That’s a cloud-native suite of AI development tools and deployment software, providing users with a complete platform for their AI initiatives, Nvidia said.

Customers can alternatively buy multiple DGX H100 systems in the form of the Nvidia DGX SuperPod platform, which is essentially a small supercomputing platform that provides up to one exaflop of AI performance, Nvidia said.

Photo: Nvidia
