CES is a gadgetfest par excellence. It’s also the annual event at which platform vendors launch the latest chipsets and other advanced technologies that will power gadget innovations in coming years.
At Intel’s CES 2020 Press Conference, the semiconductor powerhouse announced a range of next-generation chipsets designed to embed artificial intelligence (AI), edge computing, 5G, and streaming technologies into practically every category of consumer device.
AI was the clear strategic focus of many hardware launches discussed at the news conference, at which Bob Swan made his first CES appearance as Intel CEO. Swan discussed how Intel’s most visible acquisitions of the past few years (Nervana, Movidius, Habana, and Mobileye) address the opportunity to sell AI-optimized chips for intelligent edge applications, such as autonomous vehicles, entertainment, and athletics.
Swan predicted that 175 ZB of data will be generated per year by 2025, growing 75% per year, and that most of this new data will be generated in edge devices. He stated that barely 1% of that data is currently being processed by AI models, but that these workloads will surely expand and that Intel is optimizing its chipset architectures to address that opportunity.
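Taking the article's figures at face value (175 ZB generated per year by 2025, compounding at 75% annually), a quick back-of-the-envelope sketch shows what those numbers would imply for annual data volume in 2020. The function below is purely illustrative; the inputs are the article's claims, not independently verified forecasts.

```python
# Back-of-the-envelope check of the growth figures cited in the article.
# Inputs (175 ZB/year by 2025, 75% annual growth) are the article's
# stated numbers, assumed here for illustration only.

def volume_years_earlier(final_zb: float, annual_growth: float, years: int) -> float:
    """Implied annual data volume `years` before the final year,
    assuming constant compound growth."""
    return final_zb / (1.0 + annual_growth) ** years

# Under those figures, the implied 2020 volume is roughly 10-11 ZB/year.
zb_2020 = volume_years_earlier(175.0, 0.75, 5)
print(f"Implied 2020 volume: {zb_2020:.1f} ZB/year")
```

Reversing the compound-growth formula this way is just a consistency check on the cited projection, not a forecast of its own.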
Navin Shenoy, Executive Vice President of Intel’s Data Platforms Group, said that the addressable market for AI-optimized hardware will grow to more than $25 billion by 2024. In his remarks at the press conference, Shenoy focused on the following next-generation processor architectures that are optimized for AI:
- Tiger Lake: Due to launch this summer, this is a next-generation mobile 10nm CPU core architecture. Shenoy touted Tiger Lake as offering “double-digit” performance improvements for AI and graphics workloads compared to the previous “Ice Lake” generation.
- Xeon Scalable: Due out this summer, the third generation of this edge-to-cloud CPU architecture offers “deep learning boost” extensions. Shenoy said more AI workloads run on Xeon than on any other type of CPU. He stated that the third-generation Xeon will offer up to 60 percent greater AI training and inferencing performance than the second generation.
- Nervana NNP-I: Due out in March, this is an AI inferencing processor for edge-to-cloud applications. Intel states that the NNP-I offers up to 3.7 times the compute density and up to 1.6 times the throughput of the Nvidia T4 for some natural language processing operations.
Gregory Bryant, Executive Vice President and General Manager of Intel’s Client Computing Group, discussed system-level optimizations that have resulted in longer battery life, faster resume times, and higher performance across a wide range of partner laptops under Intel’s multi-year “Project Athena” program. He also discussed some new creative experiences enabled through AI capabilities executing on the Ice Lake processor. He brought out Jason Levin, the Principal Worldwide Evangelist for Adobe, to demonstrate AI-facilitated real-time background removal, cropping, and scaling of several different images and video streams.
Lisa Pearce, VP of Intel’s Architecture and GPU Group, discussed the Xe graphics architecture. She stated that Xe graphics performance will double with the Tiger Lake generation, compared to the Ice Lake generation, but offered no specific metrics in that regard. She demoed acceleration of AI-facilitated photo enhancements on DG1, which is Intel’s first discrete mobile Xe GPU, running on stage with the Tiger Lake processor. DG1, like Tiger Lake, will be released this summer to Intel OEMs.
It will be interesting to see how Intel’s many OEMs incorporate these next-generation AI-optimized processors into their Internet of Things, mobile, embedded, and other edge devices in coming months. Futurum Research expects to see plenty of prototypes of such products on the exhibition floor at CES 2020 this week in Las Vegas.
Futurum Research provides industry research and analysis. These columns are for educational purposes only and should not be considered in any way investment advice.
Image Credit: AnandTech
The original version of this article was first published on Futurum Research.