The News: Intel deftly leveraged the epochal Six Five Summit to debut its vision for the Infrastructure Processing Unit (IPU), a programmable networking device designed to enable cloud and communication service providers to reduce overhead and free up performance for CPUs (Central Processing Units). With an IPU, customers can use resources with a secure, programmable, stable solution that potentially enables them to better balance processing and storage. Read the Intel release here.
Intel Changing Data Center Stakes with Debut of Infrastructure Processing Unit
Analyst Take: Intel’s new IPU proposition is a programmable network device that intelligently manages systems-level infrastructure resources by securely accelerating such functions in a data center (DC). It is designed to enable cloud operators to transition to a completely virtualized storage and network architecture while maintaining high performance and predictability, along with a high degree of control.
I see Intel swiftly expanding its influence and presence across the cloud data center market by developing the IPU to accelerate emerging, bandwidth-intensive applications composed using microservices-based DC architectures. Intel cites research conducted by Google and Facebook indicating that anywhere from 22% to an astonishing 80% of CPU cycles can be consumed by microservices overhead.
Intel’s IPU solution directly addresses what can be characterized as the microservices compute tax by accelerating infrastructure functions, including storage virtualization, network virtualization, and security, with dedicated protocol accelerators, as well as by freeing up CPU cores through shifting storage and network functions that were previously performed in software on the CPU to the IPU. Additionally, Intel’s IPU targets improved DC utilization by allowing dynamic workload placement and empowers cloud service providers to customize infrastructure function deployments with software-level flexibility.
I expect Intel’s IPU competitive position to be strengthened since it builds on the company’s existing SmartNIC technology, deployed across Intel server platforms to offload and accelerate networking and storage stacks for cloud services. Intel already asserts market primacy in the nascent IPU market segment due to widespread adoption of its Xeon-D, FPGA (Field-Programmable Gate Array), and Ethernet components.
For instance, Intel’s FPGA SmartNIC C5000X platform architecture and recent product entries such as the Silicom FPGA N5010 are already widely deployed and making further inroads. Intel’s FPGA-based IPU platform is already deployed across multiple cloud service provider DC environments, and the initial ASIC IPU is undergoing testing.
Why Intel Needed to Unveil the IPU Proposition
Intel needed to launch its IPU proposition to fulfill rapidly expanding intelligent bandwidth and workload optimization demands across hyperscaler and operator DC environments. I see emerging applications, such as DC Interconnect (DCI), HPC (High Performance Computing) clusters, uncompressed video, and hosted 5G networks, as fueling the requirement for broader, more efficient CPU offload as well as related DC capabilities such as 400G/800G connectivity.
Intel also needed to counter NVIDIA’s foray into creating a new product category, the BlueField DPU (Data Processing Unit), integral to differentiating its SmartNIC products following the Mellanox acquisition. NVIDIA’s DPU is an SoC, or system-on-a-chip, that features an Arm-based multi-core processor, a high-performance network/data interface, and a series of programmable acceleration engines. This design allows the DPU to offload and isolate networking, security, and storage functions from a server’s CPU, accelerating performance and freeing up CPU cores for other server applications.
Clearly Intel’s x86-based IPU is speeding up a showdown with NVIDIA’s Arm-based DPU across cloud DC server environments. I anticipate Intel will make headway while NVIDIA is potentially distracted by its proposed $40 billion acquisition of Arm, which still must pass regulatory scrutiny, including in the UK. The deal’s demise could dampen NVIDIA’s DC server market ambitions. On the other hand, approval of the acquisition could lessen developer enthusiasm for Arm by undermining its vendor-agnostic position across the system-on-chip landscape. By becoming part of NVIDIA, Arm technology could become less appealing to rival chipset makers, particularly NVIDIA’s key competitors.
AMD agreed to shell out $35 billion to take over Xilinx, gaining Xilinx’s Alveo U25 SmartNIC technology in the process. The Alveo U25 SmartNIC uses Arm processors, and as a result AMD must keep a close eye on the NVIDIA/Arm acquisition journey. Regardless, Intel’s IPU launch potentially rivals AMD’s objectives to rapidly leverage Xilinx’s SmartNIC technology to garner mind and market share in the cloud DC space, as the IPU solution fortifies Intel’s overall DC infrastructure position. In all likelihood, Intel can eat popcorn as AMD (and other multi-core Arm users) grapple with how to handle NVIDIA’s newly arrayed DPU/SmartNIC-related competitive threat should the Arm deal clear all the regulatory hurdles.
Intel IPU Debut: Hyperscalers Matter
I believe Intel’s strategic decision to align its IPU portfolio development in accordance with direct input from hyperscalers will prove a key differentiator across the data center infrastructure realm. Microsoft Azure, Google, and Facebook are all prominently featured in the kick-off announcement. Among silicon suppliers, Intel has attained a key validation differentiator in demonstrating that its IPU solution directly aligns with meeting the highest-priority intelligent DC architecture and CPU offload needs of the hyperscalers.
In contrast, Amazon is developing its own Nitro technology, following its 2015 acquisition of Annapurna Labs, to tackle the same DC server architecture challenges. AWS positions the AWS Nitro System as the underlying platform for its EC2 instances. With the Nitro System, AWS breaks apart hypervisor functions (i.e., virtualizing the CPU, storage, and networking, as well as providing management capabilities) and offloads them to dedicated hardware and software, reducing costs by delivering more server resources to customer instances. AWS Nitro enables AWS to exercise more control over its product development and supply chain agenda, although its market influence is limited to the AWS universe.
Key Takeaways on the Intel Infrastructure Processing Unit Debut
I believe that the new IPU proposition provides Intel the competitive foundation needed to power the buildout and evolution of intelligent DC architectures across hyperscaler and operator cloud environments. Intel’s IPU provides the intelligent infrastructure acceleration, system-level security, control and isolation, common software frameworks, and programmable hardware/software key to fulfilling the topmost priorities of customers. Factoring in the tight collaboration with hyperscaler cloud partners to ensure the IPU platform aligns with their emerging, stringent CPU offload demands and intelligent DC vision, Intel is a formidable foe that will be hard to beat in the IPU realm.
Disclosure: Futurum Research is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.
Image Credit: HPCwire
The original version of this article was first published on Futurum Research.
Ron is an experienced research expert and analyst with over 20 years of experience in the digital and IT transformation markets. He is a recognized authority on tracking the evolution of, and identifying the key disruptive trends within, the service enablement ecosystem, including software and services, infrastructure, 5G/IoT, AI/analytics, security, cloud computing, revenue management, and regulatory issues.