Big tech’s cloud oligopoly risks AI market concentration
The oligopoly that big tech giants have over cloud computing could translate to a similar domination in the AI market, given the financial and compute resources needed to make the technology effective at scale
Since the start of 2024, competition authorities in the US and UK have set their sights on the worryingly close links between generative artificial intelligence (AI) firms and their wealthy tech giant backers.
This includes a general probe into big tech firms' AI investments and partnerships by the US Federal Trade Commission (FTC), and a specific investigation into Microsoft’s partnership with OpenAI by the UK Competition and Markets Authority (CMA).
In April, CMA chief executive Sarah Cardell told a conference in Washington, DC of her organisation's concerns about an “interconnected web” of over 90 partnerships and strategic investments established by Google, Apple, Microsoft, Meta, Amazon and Nvidia in the market for generative AI foundation models.
While the debate around these partnerships may seem like a technical one, the tech giants’ growing oligopoly over AI doesn’t just mean countries, consumers and companies could be losing out financially – according to those Computer Weekly spoke with, it’s starting to reshape the future of this nascent industry and its role in society at large.
On the surface, the sole issue here seems to be the relationships tech giants have with individual companies, with Microsoft, Alphabet and Amazon investing billions in the top AI startups to supercharge the sector and give themselves growing influence over the frontrunners in the process.
Take Microsoft, for example. Alongside being the biggest investor in AI pioneer OpenAI and a backer of French AI firm Mistral, the firm’s investment arm M12 has poured huge sums into a slate of other AI companies that now sit in the tech giant’s orbit. OpenAI’s main rival Anthropic is heavily funded by Google (Alphabet) and Amazon – Amazon’s venture investment in the firm is its single biggest in any company – while the two have invested billions of dollars in businesses elsewhere in the AI space, including support for those just starting up.
“What it looks like from here is Microsoft using its bottomless pockets to hoover up the nascent AI industry,” explains Nicky Stewart, the non-executive director of independent AI start-up Yellow Submarine.
Beyond direct financial stakes, though, perhaps the most powerful tool these companies have to dominate the AI industry is their existing stranglehold over the cloud computing sector.
Cloud dominance
Cloud computing, or the delivery of computing services over the internet, allows users to access data and applications held on remote physical servers, databases and computers. Its ability to run platforms or software at a much larger scale than physical, localised infrastructure has made it the basis of everything from websites and email to modern AI systems.
The market is currently dominated by Amazon (via AWS) and Microsoft (via Azure) and, to a lesser extent, Google (via Google Cloud). Between them, the trio controls 66% of the entire cloud computing market. The technology is the main infrastructure underpinning today’s AI systems.
As Stewart explains, it is this position of power that allows cloud computing giants to shape which AI systems are promoted and used by their customers – usually systems they own or have investments in, at the expense of new (potentially better or cheaper) systems from competitors.
“You can’t separate AI away from the cloud industry, the two are like conjoined twins,” says Stewart, who in a past life was commercial director of UKCloud. The firm went bust in 2022 after struggling to compete with cloud giants such as Amazon and Microsoft.
This process is compounded by their deep pockets. For example, many of these firms offer huge subsidies and deals to help entice customers, which competitors seeking to avoid short-term losses simply can’t offer, while also sponsoring huge swathes of the AI industry to bring more nascent firms into their orbit.
“If you’re a buyer of these services it’s very cheap at the start. Often you can be enticed with large amounts of free credit, lots of incentives and training to get you into their ecosystem,” says Mark Boost, chief executive of cloud computing competitor Civo. “And after that it’s very, very hard to leave – you’ve bought into a lot of their proprietary technology, so even as you see escalating costs you’re locked in.”
That’s before considering the ability this gives these firms to hoover up the hardware needed to power these kinds of tools – such as GPUs, which often cost around $40,000 each – at a scale smaller firms just cannot match.
Some of the dangers here are obvious. Trapped by a lack of alternatives and interoperability, and having tied all their systems into one provider, users of big tech cloud infrastructure often end up stuck dealing with ever-increasing prices.
But there are risks that are less obvious too. As one example, the datacentres that power cloud computing and AI for tech giants such as Microsoft and Amazon chew through huge amounts of power – so much so that Elon Musk has predicted the rise of AI could leave the US facing electricity shortages as soon as 2025.
In that fight for the finite amount of energy generated by our current infrastructure, there is a real risk that the rest of society and the economy will find themselves competing with AI and cloud computing giants.
And those firms are working overtime to ensure they have enough power to match their growing ambitions. Earlier this month, AWS bought a datacentre campus drawing power directly from an adjacent nuclear power station. In 2023, Microsoft made a similar move, while Google signed a deal to power its Nevada datacentres with a geothermal energy plant.
Then there’s the risk that, with such an oligopoly, these firms become a lynchpin for the entire economy. “If something disastrous should happen to one of those providers, if that’s a prolonged outage, a cyber attack, whatever it was, it would have devastating effects for the whole UK economy,” says Boost. He compares it to what happened to European gas prices after the war in Ukraine left the continent without Russian natural gas.
The push for “data sovereignty” is another major concern for those Computer Weekly spoke to. With most tech giants based in the US, the country is home to 5,381 datacentres – about 10 times as many as its nearest national competitor – largely in hubs run by a handful of tech giants. Even just holding and processing that data gives those firms a massive edge when it comes to training their AI tools, which rely on analysing huge amounts of data to function.
“Every day, we are giving money to the hyperscalers to the detriment of our own nascent AI industry. But we’re just not giving them the opportunities,” as Simon Hansford, the former chief executive of UKCloud, explains it. “Data is the new oil. It has great value and we need to own that data ourselves – to use and mine that data for national benefit, rather than the benefit of others.”
A spokesperson for Amazon claimed the firm was “democratising access to AI, making our cost-effective technologies accessible to any organisation who wants to develop their own models, and build safe, secure generative AI applications” and that Amazon’s systems make it easy to access models “from leading AI companies, including AI21 Labs, Anthropic, Cohere, Meta, and Mistral AI”.
They highlighted the firm’s response to the CMA, in which it claims to generally work to “increase interoperability, not limit it” in its systems.
While Google did not comment in time for publication, Microsoft sent Computer Weekly a blog post outlining how it is attempting to promote competition in the AI space by, for example, advancing a broad array of partnerships throughout the technology stack; making application programming interfaces (APIs) publicly available; enabling customers to switch over to other cloud providers; and supporting both the physical and cybersecurity needs of all different types of AI models.
It also outlined that it is making AI models and development tools broadly available to software developers around the world, so that every nation has the opportunity to build its own AI economy.
Effective interventions?
Those Computer Weekly spoke to weren’t optimistic that recent interventions by the CMA, FTC or EU regulators would do enough. While part of that is practical – the huge differential in wages between the private and public sectors has sparked fears of an AI “brain drain” in Whitehall – another major concern is that regulators are unlikely to take substantive action while central government remains so glowing in its reception of the AI industry.
The UK government has hosted AI conferences to try to set global standards for the technology, and recently pumped £100m into AI research to prove the country’s “pro-innovation approach” to AI. In November 2023, the chancellor called Microsoft’s plan to invest £2.5bn in datacentre infrastructure in the UK proof the country is set to become a “science superpower”.
It’s worth bearing in mind, as one of our interviewees pointed out, that the cost of buying the GPUs needed for that infrastructure alone could account for around half of that headline figure – but nonetheless the government has made its stance clear on ensuring the UK becomes an AI powerhouse. And major interventions by the CMA would certainly pose problems for that agenda.
“But if the CMA ignore them, then no other regulatory body will pick up another investigation for many years, because they’ll say, ‘Well, the CMA looked at it recently’,” says Hansford.
And sometimes cynicism about government regulation in this space risks veering into the arguments made by parts of the sector over the years: that governments inherently move too slowly to keep pace with the tech.
“It’s a very tiring argument to say that the law is always lagging… Or that regulation stifles innovation, or the third one that regulators don’t know what they’re talking about,” says Sandra Wachter, professor of technology and regulation at the Oxford Internet Institute. “It’s the holy trinity of boring arguments, all of which are nonsense.”
She cites the fact that regulators have increasingly set out more stringent rules governing the tech space – from multibillion-pound CMA and EU cases against tech giants for anti-competitive practices in everything from their app stores to their ad policies, to the UK’s Online Safety Bill.
Some of those Computer Weekly spoke to were concerned that even if regulators impose massive fines, it may make little difference. “They’ll take the fines, because by that time, they’ve almost created a market. What’s a half a billion fine, when you’ve created a 50 billion market out of it?” says Boost.
But would making these changes actually address any of the underlying concerns critics have about the current and potential uses of AI?
“The centralisation aspect of AI is definitely a concern, but it highlights something that AI is always about, which is a redistribution of power,” says Dan McQuillan, a lecturer in creative and social computing at Goldsmiths. “AI is intrinsically about the transfer of agency from people who are closest to an area of activity or expertise, be that teaching, healthcare or anything else.”
And already, even though this technology is still in a nascent phase, it’s starting to have devastating effects on society.
Generative AI has led to a booming industry for deepfake pornography that nets millions of viewers – some 96% of all deepfake images are non-consensual pornography of women. A controversial and potentially faulty AI system is used by the UK Department for Work and Pensions to assess potential fraud among Universal Credit recipients. It’s even the subject of a lawsuit in the US, as health insurance firms roll out AI to streamline the denial of their clients’ medical claims – often unjustly.
“You can address the bloated lack of competitiveness,” says McQuillan. “But that isn’t going to address the kind of harms that I am thinking about.”
Read more about artificial intelligence
- Government insists it is acting ‘responsibly’ on military AI: The government has responded to calls from a Lords committee that it must “proceed with caution” when it comes to autonomous weapons and military artificial intelligence, arguing that caution is already embedded throughout its approach.
- Inclusive approaches to AI governance needed to engage public: Technology practitioners and experts gathered at an annual Alan Turing Institute-run conference discussed the need for more inclusive approaches to AI governance that actually engage citizens and workers.
- Lord Holmes: UK cannot ‘wait and see’ to regulate AI: Legislation is needed to seize the benefits of artificial intelligence while minimising its risks, says Lord Holmes – but the government’s ‘wait and see’ approach to regulation will fail on both fronts.