NVIDIA Turns Up the Volume on CPUs and Omniverse Software

Two industry events provided the stage for NVIDIA to share its plans for a proprietary Arm-based CPU product line that we believe will transform its business and the AI/HPC industry landscape. NVIDIA used the annual Computex and International Supercomputing Conference (ISC) events to emphasize that 1) the Arm-based Grace CPU "Superchip" (expected in 2023) represents a strategic push for the company, and 2) NVIDIA will not abandon its server partners as it transforms its business from supplying GPU chips to supplying integrated systems that include CPUs, GPUs, and DPUs.

Along the way, NVIDIA shared its view of the market opportunity. Investors should note that NVIDIA now projects a $150 billion market by 2030 for AI and HPC, $150 billion for "Digital Twins" (think Omniverse), and $100 billion for cloud-based gaming. Let that sink in. That's nearly half a trillion dollars in new business that NVIDIA and its competitors are chasing.

Let’s dive in.

Computex: NVIDIA still loves its server partners

When NVIDIA announced plans to build its own Arm-based CPUs, many didn't fully understand the strategic intent CEO Jensen Huang has in mind. Accelerated computing faces a memory problem: transferring data from storage over the network to a CPU and then to a relatively slow PCIe-attached accelerator is a bottleneck, and moving data instead of sharing it carries capital and energy costs. As a result, NVIDIA is building a future around three chips, the CPU, the GPU, and the BlueField DPU, all sharing memory access. It may sound strange, but this is an approach AMD and Intel are pursuing as well, with the supercomputers at Argonne and Oak Ridge National Labs.

So what will be the role of OEMs and ODMs in a world where NVIDIA designs and delivers complete systems, minus memory, sheet metal, fans, I/O, and power supplies? NVIDIA is extending its HGX model to ensure these important channel partners are not left out. At Computex, NVIDIA announced new Grace-Hopper reference designs to enable rapid time-to-market when Grace arrives in volume in early 2023, and the Taiwanese ODM community is poised to adopt the first Grace-based system designs in two configurations: dual Grace CPU systems and Grace-Hopper accelerated systems.

The four new Grace-based reference designs will reduce costs and accelerate time-to-market for partners looking to deliver state-of-the-art servers for HPC, AI, and cloud-based gaming and visualization. Additionally, NVIDIA announced liquid-cooled A100 and H100 GPUs that can reduce power consumption by 30% and rack space by more than 60%.

Finally, NVIDIA announced a series of NVIDIA Jetson AGX Orin edge servers at Computex, with strong adoption by Taiwanese ODMs. We note, however, that large server vendors such as Dell, HPE, and Lenovo seemed left out of the data center and edge server party, though this is likely due to their rigorous testing cycles and conservative announcement policies.

At ISC, it's all about Grace and Hopper, with a sprinkling of Omniverse in HPC

NVIDIA faces growing challenges from AMD and Intel, which have won all three US DOE exascale supercomputer projects, totaling more than $1.5 billion in US government funding. In fact, the Frontier supercomputer at Oak Ridge National Laboratory (ORNL) was announced this week at ISC as first place on the TOP500 list, with just over 1 exaflop of performance based on AMD CPUs and GPUs with HPE Cray networking. While schedule issues have delayed Intel's arrival at the exascale finish line, HPE is busy installing the Ponte Vecchio/Xeon-based exascale system at DOE's Argonne National Laboratory.

NVIDIA is clearly intent on regaining the crown it lost at ORNL with Grace-Hopper integrated systems. Having previously announced CSCS's Grace-based ALPS system with 20 exaflops of AI performance, NVIDIA announced "Venado" at ISC, a 10-exaflop (again, AI performance) system using the Grace-Hopper Superchip, to be installed at Los Alamos National Laboratory. Note that the TOP500 list does not measure "AI performance," which is based on lower-precision floating point, and NVIDIA has not yet disclosed the double-precision performance of any of its Grace wins.

NVIDIA also announced a collaboration with the University of Manchester, using Omniverse to create a digital twin that models the operation of a fusion reactor. This is a classic Omniverse use case, enabling engineers and scientists to collaborate using 3D graphics to explore the behavior of complex systems in a virtual world, accelerating development and ensuring design quality.

Conclusions

NVIDIA is on track to transform itself from a high-performance GPU provider into a designer of high-performance data centers for HPC and AI. This week's announcements should ease any customer concerns that their trusted infrastructure providers would be relegated to a lower-tech tier. We still await full large-scale performance data for Grace-Hopper systems, but we will likely get a taste of more at the annual Supercomputing conference in November.

Equally important is the monetization of NVIDIA's software arsenal in both AI and the metaverse. The company highlighted this in last week's earnings call, pointing to software as a catalyst for increasing margins and revenue and projecting $150 billion of market potential for digital twins.

Disclosures: This article expresses the views of the author and is not to be considered as advice to buy or invest in the companies mentioned. Cambrian AI Research is fortunate to have many, if not most, semiconductor companies as our customers, including Blaize, Cerebras, D-Matrix, Esperanto, Graphcore, GML, IBM, Intel, Mythic, NVIDIA, Qualcomm Technologies, SiFive, Synopsys, and Tenstorrent. We have no investment positions in any of the companies mentioned in this article and do not plan to initiate any in the near future. For more information, please visit our website at https://cambrian-AI.com.
