NVIDIA GTC 2022 Keynote Unveilings: New Hopper GPU Architecture Replacing Ampere, New H100 Hopper-Based GPUs Debut and Grace CPU Superchips Make a Big Splash

In Technology News by Todd R. Weiss

The News: This week at the Spring GPU Technology Conference (GTC) 2022, NVIDIA CEO Jensen Huang shared announcements about the company’s latest Hopper GPU accelerated computing architecture and its new NVIDIA H100 GPUs for AI. Check out all the Press Releases from NVIDIA’s Spring GTC 2022 in NVIDIA’s newsroom.

Analyst Take: NVIDIA never fails to hit a few massive home runs at its GTC events, but this year’s Spring GTC 2022 announcements from CEO Jensen Huang are particularly impressive.

A year ago, NVIDIA unveiled its Grace data center CPU for enterprises. This year, Huang built on that 2021 Grace news by unveiling NVIDIA Hopper, a new accelerated computing platform architecture that replaces the company's two-year-old Ampere architecture.

Huang also announced the new NVIDIA H100 Tensor Core GPU, which is the company's first GPU based on the nascent Hopper architecture. With an astounding 80 billion transistors, NVIDIA says the H100 is the world's largest and most powerful accelerator. The H100, built on TSMC's 4N process, also gains a new Transformer Engine that NVIDIA says speeds up transformer model training by as much as 6x over the prior generation, as well as a highly scalable, super-fast NVIDIA NVLink interconnect.

Topping all the Hopper news, however, is the unveiling of the new NVIDIA Grace CPU Superchip, which is built from two CPU chips connected by NVIDIA's NVLink-C2C high-speed, low-latency, chip-to-chip interconnect.

All these components are named for the late Rear Admiral Grace Hopper, a pioneering U.S. computer scientist and a legend in the industry, an honor well deserved by her legacy.

In addition to the especially creative naming of these new technologies, I am impressed by the company's technological prowess in bringing such clear innovations and impressive performance capabilities to market.

The beauty of NVIDIA’s new Grace Hopper family of products is that they serve enterprise and other computing needs by including almost all the critical components necessary in the IT infrastructure, from CPUs to GPUs to networking, interconnects and more.

The H100 GPUs support PCIe Gen5 and are the first to utilize HBM3, enabling 3TB/s of memory bandwidth, according to NVIDIA.
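To put that 3TB/s figure in perspective, here is a quick back-of-the-envelope sketch in Python. The only NVIDIA number used is the bandwidth figure above; the 20-billion-parameter model size is purely a hypothetical example I chose for illustration.

```python
# Back-of-the-envelope: how fast could H100-class memory stream a model?
# Only the 3 TB/s figure comes from NVIDIA's announcement; the model size
# below is a hypothetical example chosen for illustration.

HBM3_BANDWIDTH_TBS = 3.0   # TB/s, per NVIDIA's H100 announcement
MODEL_PARAMS = 20e9        # hypothetical 20-billion-parameter model
BYTES_PER_PARAM = 2        # assuming 16-bit weights (2 bytes each)

model_bytes = MODEL_PARAMS * BYTES_PER_PARAM                  # ~40 GB of weights
seconds_per_pass = model_bytes / (HBM3_BANDWIDTH_TBS * 1e12)  # one full read
passes_per_second = 1 / seconds_per_pass

print(f"Weights: {model_bytes / 1e9:.0f} GB")
print(f"One full read of the weights: {seconds_per_pass * 1e3:.1f} ms")
print(f"Memory-bound ceiling: ~{passes_per_second:.0f} full weight reads per second")
```

Nothing in this sketch models compute, caching or batching; it simply illustrates why memory bandwidth, not just raw FLOPS, often caps throughput for the large-model inference workloads NVIDIA is targeting.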

And NVIDIA says that 20 NVIDIA H100 GPUs are so powerful when combined that they could sustain the equivalent of the entire world’s internet traffic, making it possible for customers to deliver advanced recommender systems and large language models running inference on data in real time.

I have been watching and covering NVIDIA for several years, including Huang’s GTC keynotes and his infectious enthusiasm for his company’s technology breakthroughs and moxie. NVIDIA continues to push and celebrate a can-do spirit and technology road map that helps to set the direction and speed of enterprise technology progress every year.

A Deeper Dive into the New Hopper Architecture

With the new NVIDIA H100 GPUs, the new Hopper accelerated computing platform architecture, the new Grace Hopper Superchips, the new Transformer Engine and more, NVIDIA is again poised for success in the enterprise, HPC and scientific marketplaces. The company once again brings ever-higher levels of performance and computing capability to users of all types.

Adding to the promise and value of the new H100 GPUs is that they can be deployed in every type of data center, including on-premises, cloud, hybrid-cloud and edge. H100s are expected to be available worldwide later this year from the major cloud service providers including Alibaba Cloud, Amazon Web Services, Baidu AI Cloud, Google Cloud, Microsoft Azure, Oracle Cloud and Tencent Cloud, from computer makers, and directly through NVIDIA.

Also coming is NVIDIA’s fourth-generation DGX system, the DGX H100, which will feature eight H100 GPUs to deliver 32 petaflops of AI performance at the new FP8 precision, providing the scale to meet the massive compute requirements of large language models, recommender systems, healthcare research and climate science. The GPUs in DGX H100 systems are connected by fourth-generation NVLink, providing 900GB/s of connectivity, 1.5x the bandwidth of the prior generation, and NVSwitch allows all eight H100 GPUs in a system to connect over NVLink. An external NVLink Switch can also network up to 32 DGX H100 nodes into the next-generation NVIDIA DGX SuperPOD supercomputers. NVIDIA says H100 GPUs will be available starting in the third quarter of 2022.
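For readers who like to sanity-check the math, here is a minimal Python sketch that derives a few implied figures from the numbers quoted above. The per-GPU throughput, prior-generation NVLink bandwidth and SuperPOD totals are my own arithmetic (assuming simple linear scaling), not official NVIDIA disclosures.

```python
# Derive a few implied figures from the DGX H100 numbers quoted above.
# Inputs are NVIDIA's stated specs; everything computed from them is
# back-of-the-envelope arithmetic, not an official NVIDIA disclosure.

DGX_H100_FP8_PFLOPS = 32   # petaflops of FP8 AI performance per DGX H100
GPUS_PER_DGX = 8           # H100 GPUs per DGX H100 system
NVLINK4_GBS = 900          # GB/s, fourth-generation NVLink
NVLINK_GEN_SPEEDUP = 1.5   # 1.5x the bandwidth of the prior generation
SUPERPOD_NODES = 32        # DGX H100 nodes per external NVLink Switch network

fp8_per_gpu = DGX_H100_FP8_PFLOPS / GPUS_PER_DGX             # ~4 petaflops FP8 per H100
nvlink3_gbs = NVLINK4_GBS / NVLINK_GEN_SPEEDUP               # ~600 GB/s prior generation
superpod_gpus = SUPERPOD_NODES * GPUS_PER_DGX                # 256 GPUs per SuperPOD
superpod_fp8_pflops = SUPERPOD_NODES * DGX_H100_FP8_PFLOPS   # ~1,024 petaflops FP8

print(f"Implied FP8 per H100:     ~{fp8_per_gpu:.0f} petaflops")
print(f"Implied prior-gen NVLink: ~{nvlink3_gbs:.0f} GB/s")
print(f"SuperPOD scale:           {superpod_gpus} GPUs, "
      f"~{superpod_fp8_pflops:,.0f} petaflops FP8")
```

That roughly exaflop-class FP8 figure for a 32-node SuperPOD assumes perfect linear scaling, which real workloads rarely achieve, but it illustrates the scale NVIDIA is aiming for with these systems.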

The NVIDIA Grace CPU Superchip

Also announced this week at GTC is the NVIDIA Grace CPU Superchip, which packs an impressive 144 high-performance cores and 1TB/s of memory bandwidth into a single socket. The Grace CPU Superchip is an Arm Neoverse-based discrete data center CPU designed for AI infrastructure and high-performance computing, composed of two CPU chips connected by the high-performance NVLink-C2C chip-to-chip interconnect.

The Grace CPU Superchip complements NVIDIA’s first CPU-GPU integrated module, the Grace Hopper Superchip, announced last year, which is designed to serve giant-scale HPC and AI applications in conjunction with an NVIDIA Hopper architecture-based GPU. Both superchips share the same underlying CPU architecture, as well as the NVLink-C2C interconnect.

The new Grace CPU Superchip has an estimated performance of 740 on the SPECrate 2017_int_base benchmark. That is more than 1.5x higher than the dual-CPU configuration shipping in NVIDIA’s DGX A100 systems today, according to the company.
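Taken together with the 144-core figure above, those numbers imply a couple of useful data points. The sketch below is simple Python arithmetic on NVIDIA's stated specs; the per-chip core count and the implied DGX A100 baseline score are derived values, not numbers NVIDIA published.

```python
# Implied figures from the Grace CPU Superchip numbers quoted above.
# Inputs are the stated specs; the derived values are back-of-the-envelope
# arithmetic, not official NVIDIA figures.

SUPERCHIP_CORES = 144            # Arm Neoverse cores across the superchip
CHIPS_PER_SUPERCHIP = 2          # two CPU chips joined by NVLink-C2C
SUPERCHIP_SPECRATE_INT = 740     # estimated SPECrate 2017_int_base
SPEEDUP_VS_DGX_A100_CPUS = 1.5   # "more than 1.5x higher"

cores_per_chip = SUPERCHIP_CORES // CHIPS_PER_SUPERCHIP               # 72 cores per chip
implied_baseline = SUPERCHIP_SPECRATE_INT / SPEEDUP_VS_DGX_A100_CPUS  # ~493

print(f"Implied cores per Grace chip: {cores_per_chip}")
print(f"Implied dual-CPU baseline in today's DGX A100: "
      f"below ~{implied_baseline:.0f} SPECrate 2017_int_base")
```

Because NVIDIA describes the Grace score as an estimate and the advantage as "more than 1.5x," that ~493 baseline should be read as an upper bound on the comparison point rather than a measured result.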

The Grace CPU Superchip can run all of NVIDIA’s software, including Omniverse, NVIDIA HPC and NVIDIA AI, and is designed for AI, HPC, cloud and hyperscale applications.

Both the Grace CPU Superchip and the Grace Hopper Superchip are expected to be available in the first half of 2023.

What Does This Mean for NVIDIA?

I believe these latest innovations from NVIDIA are impressive, set laudable new performance standards, and show the continual out-of-the-box thinking that goes on in the minds of the company’s developers, engineers, and visionaries.

Think about this for a moment – NVIDIA is producing CPUs in addition to its powerful and market-leading GPUs. This is part of its plan to build CPUs that work even more closely with its own GPUs through the powerful interconnects it is also developing and building. I believe that’s a bold move in a CPU market where existing giants already dominate.

Yes, Intel Corp. is also going through its own rebirth since the early 2021 return of CEO Pat Gelsinger, with plans for new chip fabs, increased production, new chips and more, while the other big CPU maker, AMD, continues its own innovations and CPU performance gains as well. Despite the presence of Intel and AMD, however, NVIDIA is taking its fight to the larger market while constantly driving innovation and expanding its presence as a market leader.

Not bad for a company that started out by building and selling its first PCI graphics card for gaming, the NV1, back in 1995, followed by what it billed as the first GPU, the GeForce 256, in 1999. It has been quite a ride. I’m looking forward to watching this evolve as NVIDIA continues what is clearly a commitment to innovate.

Disclosure: Futurum Research is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Other insights from Futurum Research:

NVIDIA Acquires Software-Defined Storage Vendor Excelero as it Continues to Fill Out its Technology Stack 

Intel Investor Day & Tower, Alphabet-IDFA, Cisco-NVIDIA-Lattice Earnings, AMD Closes Xilinx – The Six Five Webcast

CES Roundup: NVIDIA Omniverse Now Available for Free for Millions of Creators

Image Credit: VentureBeat

The original version of this article was first published on Futurum Research.

Todd is an experienced analyst with more than 21 years as a technology journalist covering a wide variety of tech-focused areas.
