Intel results reveal a push to address AI shortcomings
The chipmaker has been impacted by poor PC sales, but aims to drive adoption of its AI inferencing technology
Intel has reported first-quarter revenues of $11.7bn, down 36% from last year. Its Data Center and AI (DCAI) business posted a 39% decline in revenue to $3.7bn, while the Client Computing Group (CCG) posted a 38% decline, with revenue of $5.8bn. Its Network and Edge division reported revenue of $1.6bn, a 30% decline on last year. The two areas of growth for Intel were its foundry business, which grew by 24% to $118m, and Mobileye, which reported revenue of $458m, 16% up from last year.
Intel CEO Pat Gelsinger said: “We hit key execution milestones in our datacentre roadmap and demonstrated the health of the process technology underpinning it. While we remain cautious on the macroeconomic outlook, we are focused on what we can control as we deliver on IDM 2.0: driving consistent execution across process and product roadmaps, and advancing our foundry business to best position us to capitalise on the $1tn market opportunity ahead.”
The transcript of the earnings call, posted on Seeking Alpha, shows that Intel was affected by what it described as “a conservative” PC refresh rate.
When asked about the opportunity to reverse the decline in revenue experienced in its datacentre business, which provides the Xeon processors that power servers, Gelsinger described the business as being in a strong position. “We’d say that some of that strength in the datacentre is driven by AI,” he said.
Referring to the company’s AI chip, Gaudi, which competes with Nvidia’s A100 GPU (graphics processing unit), Gelsinger said: “We’re seeing a very positive response to Gaudi 2 and seeing our pipeline growing very rapidly for that product line.”
According to Gelsinger, demand for AI processing will also drive adoption of the next generation of the company’s Xeon processor, the fourth-generation (Gen 4) Sapphire Rapids chip. The company’s official product literature states that the Gen 4 Xeon processor offers a 55% lower total cost of ownership for AI workloads than the previous generation of Xeon chips, because the same level of processing can be achieved with fewer fourth-generation Intel Xeon servers.
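That claim boils down to simple arithmetic: if the same AI throughput can be delivered by fewer servers of comparable running cost, total cost of ownership falls in proportion. Below is a minimal sketch of that arithmetic, with every figure hypothetical and chosen only so the result lands on the quoted 55% reduction; none of the numbers come from Intel’s product literature.

```python
# Hypothetical figures, purely to illustrate how "fewer servers for the same
# AI throughput" feeds into a lower total cost of ownership (TCO).
old_gen_servers = 20        # servers needed for a given AI workload
new_gen_servers = 9         # fewer fourth-generation servers for the same work
cost_per_server = 30_000    # purchase, power and cooling over the comparison period

old_tco = old_gen_servers * cost_per_server
new_tco = new_gen_servers * cost_per_server

saving = 1 - new_tco / old_tco
print(f"TCO reduction: {saving:.0%}")  # 55% with these made-up inputs
```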
Gelsinger regards AI as “critical” to the company’s success in the datacentre business. The company’s strategy is about “democratising AI”, he said, offering a lower-cost alternative to what he describes as “super high-end machines”, which are “uneconomical for most environments”.
AI workloads fall into two parts. The first is training, in which machine learning algorithms number-crunch vast datasets, typically on high-performance machines, to build a model. The second is inference, where the trained model is run to predict likely outcomes.
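The split can be seen in miniature in any machine learning framework. The sketch below uses scikit-learn purely as an illustration of the two phases and is not tied to any particular Intel product.

```python
# A minimal illustration of the two phases: training builds the model,
# inference runs it against new data. (scikit-learn used purely as an example.)
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)

# Phase 1: training -- compute-heavy, number-crunches the dataset to fit a model.
model = RandomForestClassifier(n_estimators=100)
model.fit(X, y)

# Phase 2: inference -- the trained model predicts likely outcomes for new inputs;
# each prediction is far cheaper than the training run that produced the model.
print(model.predict(X[:5]))
```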
Gelsinger sees a huge opportunity for Intel in powering inference workloads. “We have to enable the broad deployment of inferencing – being able to use AI – and that’s an area where, particularly our core Xeon product line, has particular strengths well above prior generations and competitive alternatives,” he said.
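As a rough illustration of what that looks like in practice, the sketch below runs a small, hypothetical PyTorch model in inference mode on a general-purpose CPU; it is generic code, not an Intel-specific optimisation.

```python
# A hypothetical model standing in for a trained network; the point is only
# that inference can run on a general-purpose CPU, with no GPU involved.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))
model.eval()  # inference mode: no dropout or batch-norm updates

batch = torch.randn(32, 128)  # a batch of incoming requests

with torch.no_grad():  # gradients are not needed when serving predictions
    scores = model(batch)

print(scores.argmax(dim=1))  # predicted class for each request
```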
Read more about AI inference
- Israeli startup Deci claims GPU-like performance in running computer vision and natural language processing models on Intel’s fourth-generation Xeon processors.
- Learn the basics of machine learning inference with AWS Greengrass and how it can help train models and increase data value with real-time insights.