Nvidia CEO sees ChatGPT as iPhone moment for AI
It took the iPhone to kickstart a revolution in mobile phone usage, and Jensen Huang believes datacentres will radically change to support AI workloads
Nvidia has posted first-quarter revenue of $7.19bn, down 13% from a year ago, but its datacentre business has seen significant growth thanks to artificial intelligence (AI) workloads.
The Nvidia datacentre business reported first-quarter revenue of $4.28bn, up 14% from a year ago and up 18% from the previous quarter.
The company sees a huge opportunity in transitioning the trillion dollars' worth of installed global datacentre infrastructure from general-purpose computing to what its CEO, Jensen Huang, calls “accelerated computing”. “The computer industry is going through two simultaneous transitions – accelerated computing and generative AI,” he said.
This change to datacentre infrastructure will be needed to support application areas such as generative AI, which Nvidia and much of the industry believe will be infused into every product, service and business process.
During Nvidia’s GTC 2023 financial analyst presentation in March, the company discussed the growth of AI and its accelerated computing platform. At the time, Nvidia said the number of developers using its AI and acceleration libraries had more than doubled since 2020.
According to the transcript of the company’s earnings call, posted on Seeking Alpha, large language models like ChatGPT are driving significant growth in Nvidia’s datacentre business. Chief financial officer Colette Kress said: “When we talk about the sequential growth that we’re expecting between Q1 and Q2, our generative AI large language models are driving the surge in demand, and it’s broad-based across both our consumer internet companies, our CSPs [cloud service providers], our enterprises, and our AI startups.”
Bloomberg is one of the enterprise customers building large language models with Nvidia technology. Kress said Bloomberg is building BloombergGPT, a 50 billion-parameter model, to support financial natural language processing tasks such as sentiment analysis, named entity recognition, news classification and question answering.
Another customer, CCC Intelligent Solutions, is using AI to estimate car insurance repair jobs.
When asked about the company’s datacentre growth, Huang described the emergence of ChatGPT as an “iPhone moment” for AI, analogous to the iPhone’s 2007 launch, which catalysed the smartphone revolution. He said ChatGPT has helped everybody crystallise how large language model technology can be turned into a chatbot-based product and service.
“The integration of guardrails and alignment systems were through reinforcement learning, human feedback, knowledge vector databases for proprietary knowledge, connection to search, all of that came together in a really wonderful way, and it’s the reason why I call it the iPhone moment,” he said.
Earlier this month, Dell and Nvidia unveiled an infrastructure and software partnership that provides a blueprint for on-premise generative AI, aimed at enterprises that need to use proprietary data.
Nvidia said the collaboration, called Project Helix, aims to simplify enterprise generative AI deployments by providing optimised hardware and software from Dell. It targets organisations that want to develop generative AI applications while maintaining data privacy.