AMD bets on India, doubles down on AI
The chipmaker recently inaugurated its largest global design facility in Bangalore as part of a $400m investment in India to support its expanded portfolio and build artificial intelligence capabilities
AMD recently unveiled its largest global design centre in Bangalore, underscoring India’s importance to the chipmaker’s efforts to gain a leg-up over rivals in the artificial intelligence (AI) chip race.
Mark Papermaster, chief technology officer at AMD, said the new design centre is part of a $400m investment in India, and will house some 3,000 engineers dedicated to the design and development of semiconductors, as well as AI and machine learning technologies.
In an interview with Computer Weekly, Papermaster delves into the role India plays in AMD’s global strategy, how the company stacks up against rivals such as Nvidia in the race to dominate AI chips, and developments in emerging technologies such as neuromorphic computing.
Can you tell us more about the new design centre and AMD’s investment in India?
Papermaster: Opening AMD’s largest global design centre in India is a very exciting opportunity. We have been in India for almost 20 years now. India is an integral part of AMD’s global strategy, with strong involvement in product development across our portfolio. The inauguration of the new Technostar campus in Bangalore is part of a $400m investment spanning the next five years. We plan to utilise the full investment by the end of 2028, including finishing the remaining phases of the Technostar campus and hiring 3,000 new engineers across India.
What sort of work will happen in India, and how will that support AMD’s global strategy and product development?
Papermaster: We chose India to host our largest design facility because the local leadership team has demonstrated the capability to hire great talent and deliver great results in product development across AMD’s portfolio. The centre spans an area of 500,000ft² and will house engineers dedicated to the design and development of semiconductor technology, including CPUs [central processing units], GPUs [graphics processing units], AI and machine learning, among other areas. The 2022 acquisitions of Xilinx and Pensando, which had major operations in India, also resulted in the company growing its employee base in the country. This investment will enable us to support our expanded portfolio and move quickly as we add AI capability across our products.
What is AMD’s stance on the custom silicon being developed by Amazon Web Services, Google and Microsoft for AI and cloud workloads?
Papermaster: AI workloads are rapidly evolving, and as they become more prevalent in various industries, AI will require a diverse set of hardware solutions to meet the growing demands of these workloads. One type of processor does not fit all. In most cases, a mix of CPUs, GPUs, FPGAs [field programmable gate arrays] and specialised ASICs [application-specific integrated circuits] or custom silicon will become the norm. When algorithms are stable, some workloads can be run more economically on custom silicon. When an algorithm needs more programmability, CPUs, GPUs and FPGAs are ideal.
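Purely as an illustration of that reasoning, here is a minimal sketch of the selection logic Papermaster describes; the function and its inputs are our own hypothetical names, not an AMD tool.

```python
# Toy illustration of the hardware-selection reasoning above (hypothetical,
# not an AMD tool): stable, fixed algorithms can justify custom silicon,
# while evolving algorithms favour programmable engines.

def suggest_engine(algorithm_stable: bool, needs_programmability: bool) -> str:
    if algorithm_stable and not needs_programmability:
        # A fixed workload amortises the high up-front cost of an ASIC.
        return "ASIC / custom silicon"
    # An evolving algorithm needs flexibility, so keep it programmable.
    return "CPU / GPU / FPGA"

print(suggest_engine(algorithm_stable=True, needs_programmability=False))  # ASIC / custom silicon
print(suggest_engine(algorithm_stable=False, needs_programmability=True))  # CPU / GPU / FPGA
```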
How will AMD’s new MI300 chips affect the performance war in AI, and how do they stack up against the Cerebras CS-2, the Nvidia H100 and the upcoming H200?
Papermaster: On 6 December 2023, we launched the AMD Instinct MI300 GPU, which we believe delivers leadership performance in AI. With AMD’s highly programmable GPU and the AMD ROCm AI software stack, which has been in development for years, customers now have a choice for their generative AI applications without having to modify their software approach.
The MI300 is AMD’s most advanced GPU and directly challenges Nvidia, whose chips have dominated the AI computing market. We are the only two providers of general-purpose GPUs. In generative AI inferencing, that is, using a trained model to answer questions or assist work efforts, applications are highly dependent on the amount of local memory. The MI300 has 192GB of memory, a competitive advantage that delivers superior inferencing performance.
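To see why local memory matters, consider the arithmetic: a model’s weights alone occupy roughly the parameter count multiplied by bytes per parameter. The back-of-the-envelope sketch below uses our own example model sizes, not AMD figures, to show what fits in 192GB at 16-bit precision.

```python
# Back-of-the-envelope check: do a model's weights fit in one GPU's memory?
# Illustrative only; real deployments also need room for the KV cache,
# activations and framework overhead.

def weight_memory_gb(num_params: float, bytes_per_param: int = 2) -> float:
    """Memory needed for the model weights alone, in GB (FP16/BF16 = 2 bytes)."""
    return num_params * bytes_per_param / 1e9

for params in (7e9, 70e9, 180e9):  # hypothetical model sizes
    gb = weight_memory_gb(params)
    verdict = "fits" if gb <= 192 else "does not fit"
    print(f"{params / 1e9:.0f}B parameters -> ~{gb:.0f}GB of weights; {verdict} in 192GB")
```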
At a more holistic level, AMD has invested in breakthrough technologies such as advanced packaging, 3D stacking and chiplet architectures that underpin our leadership in compute and AI across our portfolio of products. For example, we have integrated our AI engines into our Ryzen 7040 series processors to bring AI acceleration to PCs. AMD is uniquely positioned to accelerate AI applications from supercomputers to the cloud, edge and end-user devices.
Does the absence of a strong software stack affect hardware performance, especially when it comes to translating floating-point, memory and latency into good inferencing performance for large-scale models?
Papermaster: AMD has been working for years on a competitive AI software stack named ROCm, and it reached full production level and ramped up industry adoption in 2023. ROCm and Cuda are platforms used for parallel programming and accelerating GPU computing performance. While both platforms provide similar functionalities, there is one significant difference: Cuda is exclusive to Nvidia GPUs, while ROCm is open source and industry standard. It enables the community to drive advancements and does not lock in customers to a specific platform.
AMD ROCm 6.0 is a full production software stack that supports popular frameworks such as TensorFlow and PyTorch. These frameworks allow developers to leverage the power of AMD GPUs for their AI and machine learning workloads. They can develop, collaborate, test and deploy applications in a free, open source and integrated software ecosystem.
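In practice, the ROCm build of PyTorch exposes AMD GPUs through the familiar torch.cuda interface via HIP, which is why existing CUDA-style code typically runs unchanged. A minimal sketch, assuming a ROCm-enabled PyTorch installation:

```python
# Minimal sketch: on a ROCm build of PyTorch, the torch.cuda API maps to the
# AMD GPU, so CUDA-style code runs without modification.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
if device == "cuda":
    print(f"Running on: {torch.cuda.get_device_name(0)}")  # e.g. an AMD Instinct GPU

dtype = torch.float16 if device == "cuda" else torch.float32  # half precision on the GPU
x = torch.randn(4096, 4096, device=device, dtype=dtype)
y = x @ x.T  # the matmul is dispatched to the ROCm stack on an AMD GPU
print(y.shape)
```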
Are there any comparisons?
Papermaster: At our launch event in December 2023, we highlighted a 1.4-times inference performance advantage for MI300X versus the H100 using an equivalent datatype and library setup. With the latest optimisations we’ve made, this performance advantage has increased to 2.1 times. With MI300 GPUs, AMD has delivered competition to the market for generative AI training and inference computing.
Will the AI war in the industry boil down to a chip war?
Papermaster: Optimal execution of AI workloads requires innovation at all levels of the stack, including hardware, software and system-level solutions. We believe AI will be pervasively infused into all computing applications, from the endpoint to the edge to the cloud. Given the diversity of AI requirements, having the right computing solution that addresses specific needs will be key.
Does Moore’s Law apply to AI chips? Will Hoff’s Law of scalability become more relevant now?
Papermaster: AI performance will be driven much more by architectural improvements and hardware-software co-design than by the traditional Moore’s Law. Hoff’s Law, which showed how standards can enable scalable design processes, still applies in the AI chip world. Today, de facto standards are driven by the leading volume players in AI that determine the software base. Significant divergence from these de facto standards will greatly impede scalability.
How strongly would new silicon innovations reflect Koomey’s Law and Neven’s Law?
Papermaster: Koomey’s Law was another empirical observation of energy-efficiency gains correlated with Moore’s Law scaling. Since AI capability will be driven by factors beyond Moore’s Law, as mentioned previously, we would expect something closer to Neven’s Law scaling for AI.
Editor’s note: Koomey’s Law states that the energy efficiency of classical computers doubles roughly every 18 months, while Neven’s Law posits that quantum computers will gain compute power at a doubly exponential rate over classical computers.
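To make the difference concrete, the toy calculation below (illustrative only) compares the two growth curves over a few years: Koomey-style doubling every 18 months against Neven-style doubly exponential growth.

```python
# Toy comparison of the two scaling laws in the editor's note (illustrative).
# Koomey's Law: efficiency doubles every ~1.5 years -> 2 ** (t / 1.5)
# Neven's Law: doubly exponential gains             -> 2 ** (2 ** t)

for t in range(6):  # years elapsed
    koomey = 2 ** (t / 1.5)
    neven = 2 ** (2 ** t)
    print(f"year {t}: Koomey x{koomey:.1f} vs Neven x{neven:,}")
```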
Is there anything you can share about the intent and progress of Universal Chiplet Interconnect Express (UCIe)? How will it redefine interoperability and standardisation in the microprocessor industry?
Papermaster: The UCIe standard is a key factor in driving systems innovation leveraging heterogeneous compute engines and accelerators that will enable the best solutions optimised for performance, cost and power efficiency. There will be many opportunities for the industry to innovate on interconnect, memory and computing. AI is a big growth opportunity for the entire industry. As AI technologies proliferate, it’s imperative to drive costs down, and standardisation plays a key role in accelerating innovation and competition.
How much has been done in the space of neuromorphic architectures and the use of photons and memristors?
Papermaster: AMD has demonstrated neuromorphic compute approximation with LogicNets on FPGAs, and is moving mainstream GPUs and focused AI processors towards low-precision digital math and higher levels of sparsity. At this time, AMD is focused on digital architectures in our product roadmap, while our research team continues to evaluate emerging technologies.
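For readers unfamiliar with those two levers, the NumPy sketch below is a simplified stand-in, not an AMD API, showing what low-precision quantisation and weight sparsity look like in their most basic form.

```python
# Simplified illustration (not an AMD API) of low-precision digital math and
# sparsity applied to a weight matrix.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(8, 8)).astype(np.float32)

# Low-precision math: symmetric int8 quantisation of the weights.
scale = np.abs(weights).max() / 127
w_int8 = np.round(weights / scale).astype(np.int8)

# Sparsity: prune the 50% of weights with the smallest magnitude to zero.
threshold = np.quantile(np.abs(weights), 0.5)
w_sparse = np.where(np.abs(weights) >= threshold, w_int8, 0)

print(f"non-zero weights after pruning: {np.count_nonzero(w_sparse)}/{w_sparse.size}")
reconstructed = w_sparse.astype(np.float32) * scale
print(f"max reconstruction error: {np.abs(reconstructed - weights).max():.3f}")
```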
How much of the AI chip market would AMD command and why?
Papermaster: In 2024, we expect to generate $2bn in new sales of GPUs for AI. That market could grow to more than $400bn by 2027.
Read more about AI in India
- Almost two-thirds of organisations in India said their responsible AI practices and policies were mature or they had taken steps towards responsible AI adoption, according to a Nasscom study.
- Indian organisations are speeding up deployments of AI across multiple sectors, but legacy systems, siloed data and a shortage of AI-specific talent will stand in the way of greater adoption.
- Machine learning engineer Saurabh Agarwal talks about his career journey in artificial intelligence and what it takes to succeed in the field.
- Indian prime minister Narendra Modi has proposed a comprehensive framework for responsible and human-centric AI governance, emphasising the need for ethical AI deployment.