Digital Ethics Summit 2024: recognising AI’s socio-technical nature
At trade association TechUK’s eighth annual Digital Ethics Summit, public officials, industry figures and civil society groups met to discuss the ethical challenges associated with the global proliferation of artificial intelligence tools, and the direction of travel set for 2025
Artificial intelligence (AI) must be recognised and regulated as a socio-technical system that has the potential to massively deepen social inequalities and further entrench concentrations of power.
Speaking at trade association TechUK’s eighth annual Digital Ethics Summit in December, delegates heralded 2025 as the “year of diffusion” for AI, noting that while there were no major new technical advances in the technology over the past year – particularly from the generative AI platforms that kickstarted the current hype cycle by releasing their models to millions of users at the end of 2022 – organisations are now “in the nitty gritty” of practically mapping the technology into their workflows and processes.
The delegates added that, in the year ahead, the focus will be on getting AI applications into production and scaling their use, while improving the general safety of the technology.
They also highlighted the new UK Labour government’s focus on using the technology as an engine of economic growth, as well as its emphasis on achieving greater AI diffusion in both the public and private sectors through building a stronger safety and assurance ecosystem.
However, delegates argued that the increasing proliferation and penetration of AI tools throughout 2024 means there is now a pressing need to operationalise and standardise ethical approaches to AI, because while numerous frameworks and principles have been created in recent years, the majority remain too “high level and vague”.
They stressed the need for any regulation to recognise the socio-technical nature of AI (whereby the technical components of a given system are informed by social processes and vice versa), which means considering how the technology can lead to greater inequality and concentrations of power. They also highlighted how AI is diffusing at a time when trust in government and corporations is at an all-time low.
To avoid deepening these inequalities and power concentrations while also re-building trust, many delegates emphasised the need for inclusive, participatory approaches to AI’s development and governance, something that needs to be implemented locally, nationally and globally.
Conversation also touched on questions of sovereign technical capability – with delegates noting that only China and the US are in a position to act unilaterally on AI – and on whether it is better to take a closed or open approach to the technology’s further development.
During the previous summit in December 2023, conversation similarly focused on the need to translate well-intentioned ethical principles and frameworks for AI into concrete practices, emphasising that much of the discussion around how to control AI is overly dominated by corporations and governments from rich countries in the global north.
A consensus emerged that while the growing intensity of the international debate around AI is a sign of positive progress, there must also be a greater emphasis placed on AI as a socio-technical system, which means reckoning with the political economy of the technology and dealing with the practical effects of its operation in real-world settings.
Operationalising ethics, inclusively
Speaking in a panel on what the latest technical breakthroughs mean for AI ethics, Leanne Allen, the UK head of AI at consultancy firm KPMG, said that while the ethical principles around AI have stood the test of time (at least in the sense that no new principles are being added to the list), applying those principles in practice remains challenging.
Giving the example of “explainability” as one of the settled ethical principles for AI, Allen added that it’s “fundamentally difficult” to explain the outputs of generative AI models, meaning “there needs to be nuances and further guidance on” what many ethical principles look like in practice.
Melissa Heikkilä, a senior reporter for MIT Technology Review, agreed that while it’s promising to see so many companies developing ethical frameworks for AI, as well as expanding their technical capacities for red teaming and auditing of models, much of this work is still taking place at a very high level.
“I think no one can agree how to do a proper audit, or what these bias evaluations look like. It’s still very much in the Wild West, and companies each have their own definitions,” she said, adding there is a distinct lack of “meaningful transparency” that is hindering the process of standardisation in these areas.
Alice Schoenauer Sebag, a senior member of technical staff in Cohere’s AI safety team, added that finding some way of standardising ethical AI approaches will help to “kickstart innovation” while getting everyone on the same page with a shared understanding of the issues.
Highlighting the “risk and reliability” working group of AI benchmarking consortium MLCommons, she said work is already underway to build a shared taxonomy around the technology, from which a benchmarking platform can then be built to help companies evaluate their models: “I think this will really be critical in building trust, and helping diffusion by having a shared understanding of safety, accessibility and transparency.”
Sebag added that, as more companies move from AI “experimentation to production”, there is a need to start having “really grounded conversations about what it means for the product to be safe” in the context in which it is being deployed.
“I wouldn’t necessarily say that the [ethical] conversations are getting harder, I would say that they’re getting more concrete,” she said. “It means we need to be super specific, and so that’s what we’re doing with customers – we’re really discussing all the really crucial details about what it means for this application, this use case, to be deployed and to be responsible.”
Allen said that because most organisations are adopting AI rather than building their own models, they are ultimately looking for assurance that what they’re purchasing is safe and reliable.
“Most organisations…don’t have control over that aspect, they’re relying on whatever the contract is with that organisation to say that they’ve gone through ethical standards,” she said. “But there’s still uncertainty, as we all know it’s not perfect, and it won’t be perfect for a long time.”
However, the delegates stressed the importance of operationalising inclusivity in particular, noting there are cultural differences that are not accounted for in the AI industry’s focus on the English-speaking world.
“Since we’re deploying products to help businesses around the world…it’s super important that we don’t have a Western, English-centric approach to safety,” said Sebag. “We really need to make sure that the models are safe by what safe means, basically, everywhere.”
Concentrations of power
Heikkilä added that over the past 10 years, the dominance of English-speaking countries in the development of AI hasn’t really changed, but has huge implications: “With language comes a lot of power and values and culture, and the language or country that can dominate the technology can dominate our whole view, as everyone interacts with these models.
“Having a diverse set of languages and a geographical split is so important, especially if you want to scale this technology globally…It’s really important to have these representations, because we don’t want to have an AI that only really applies in one way, or only works for one set of people.”
However, Heikkilä stressed that in political, socio-technical terms, we’re likely to witness “an even further concentration of power, data, [and] compute” among a few firms and countries, particularly amid “rising tensions with the US and China”.
Speaking on a separate panel about AI regulation, governance and safety in the UK, Alex Krasodomski, director of the Digital Society Programme at Chatham House, said that Sino-American rivalry over AI means “this is the era of the accelerator, not the hand brake”, noting that some are already calling for a new AI-focused Manhattan Project to ensure the US stays ahead of China.
He also noted that, because of the geopolitical situation, it is unlikely that any international agreements will be reached on AI that go beyond technical safety measures to anything more political.
Andrew Pakes, the Labour (Co-op) MP for Peterborough, said that while geopolitical power dynamics are important to consider, there must also be a concurrent consideration of internal power imbalances within UK society, and how AI should be regulated in that context as well.
“We are in what I hope will be the dying days of essentially the neoliberal model when it comes to regulation. We talk a lot about regulation…and everyone means something different. In the public discourse, people think regulation is about the public good, but largely it’s about economic competition and protecting the marketplace,” he said, adding that any regulatory approach should focus on building a cohesive, democratic environment based on inclusion.
“We need to think about what AI means in the everyday economy for people in work, but largely in places like mine, the sad reality is that industrial change has largely meant people being told they will have better opportunities, and their children having worse opportunities. That’s the context of industrial change in this country.”
He added that while there is a clear need for the UK to figure out ways to capitalise on AI and remain competitive internationally, the benefits cannot be limited to the country’s existing economic and scientific centres.
“Places like mine in Peterborough, sitting next to Cambridge, you’ve got the tale of the UK in two cities, 40 minutes away,” he said, noting that while the latter is a hub for innovation, jobs and growth, the former has some of the highest levels of child poverty in the UK, and some of the lowest levels of people going on to university. “We have two different lives that people live just by the postcode they live in – how do we deal with that challenge?”
Pakes added that the picture is further complicated by the fact that this next wave of AI-powered industrial change is coming at a time when people simply do not trust change, especially when it is being imposed on them from above by governments and large corporations.
“We are trying to embark on one of the greatest, most rapid industrial changes we’ve known, but we’re probably at the period in our democracy where people have the least trust of change which is done to them. And we see that globally,” he said. “People love innovation, they love their gadgets, but they also see things like the Post Office [scandal] and they distrust government IT, and government’s ability to do good. They equally distrust large corporations, I’m afraid.”
Pakes concluded that it is key for people to feel that change is being done with them, rather than to them, otherwise: “We will lose the economic benefits and we may actually build more division in our society.”
Responsible diffusion
Speaking on a panel about how to responsibly diffuse AI throughout the public sector, other delegates stressed the need to co-design AI with the people it will be used on, and to get them involved early in the lifecycle of any given system.
For Jeni Tennison, founder of campaign group Connected By Data, this could take place through public deliberation or user-led research, but AI developers also need links with civil society organisations and directly with the public.
She added there needs to be a “mode of curious experimentation” around AI that recognises it will not be perfect or work straight out of the box: “Let’s together find the route that leads us to something that we all value.”
Also noting the pressing need to include ordinary people in the development of AI, Hetan Shah, chief executive at the British Academy, highlighted the public’s experience of austerity in the wake of the 2008 recession, saying that while the government at the time promoted the inclusive-sounding idea of the Big Society, in practice the public witnessed a diminishing of public services.
“There was a respectable kernel of thought behind that, but it got bound up in something completely different. It became about, ‘We don’t spend any money on libraries anymore, so we’ll have volunteers instead’,” he said. “If that’s where AI gets wrapped up, citizens won’t like it. Your agenda will end in failure if you don’t think about the citizens.”
Tennison added that, ultimately, practically including ordinary people is a question of politics and political priorities.
On the international side, Martin Tisné, CEO and thematic envoy to the AI Action Summit being held in France in early 2025, underscored the need for international collaboration to address AI’s strategic importance and geopolitical implications.
He noted that while some nations clearly view the technology as a strategic asset, the latest summit – which follows on from the inaugural AI Safety Summit held at Bletchley Park in November 2023 and the AI Seoul Summit held in May 2024 – is being organised in a way that attempts to “bend the trajectory of AI globally towards collaboration, rather than competition”.
He added that collaboration was critical because, “unless you’re the US or China”, you simply are not in a position to act nationalistically around AI.
Sovereign capabilities?
For Krasodomski, part of the solution to the clear geopolitical power imbalances around AI could lie in countries building up their own sovereign capabilities and technical expertise, something that is already underway with AI safety institutes.
“Chatham House has come out strongly in its research on this idea of public AI…the idea being it’s not good enough for governments to be regulators,” he said. “They need to be builders, they need to have capacity and a mandate to deliver AI services, rather than simply relying on enormous technology companies with whom they are unable to strike an equal and balanced bargain over the technology that their population depends on.”
However, he stressed that currently, “it is not a fair conversation” for the UK and others due to the “realities that the AI industry is currently hyper concentrated in the hands of a few really big companies” such as Microsoft and Google.
Highlighting the power imbalances that exist both within and between countries – which have hobbled the ability of many governments to invest in and roll out their own digital capabilities – Krasodomski added: “Each country will be vying about who can have the strongest relationship with the only companies who are capable of deploying the capital at the scale that AI requires.”
In terms of what the UK government in particular should do, Krasodomski said other countries such as Sweden, Switzerland and Singapore are building their own national AI models “because they recognise that this is their chance to take control”, adding that while the UK doesn’t necessarily have to build a sovereign AI capability, it should at least build up technical capacities within government that allow it to start negotiating more effectively with the big providers.
Chloe MacEwen, senior director of UK government affairs at Microsoft, added that the UK is uniquely placed in an increasingly multilateral world to reap the benefits of AI, as its scientific talent and expertise will help contribute to “standard setting” and safety research.
She further added that the UK is well placed to build new markets around third-party audits and assurance of AI models, and that the cloud-first strategy of successive governments means organisations will have access to the underlying infrastructure necessary for AI’s further proliferation.
She said that while the building blocks for AI are already in place in terms of cloud infrastructure (in December 2023, Microsoft committed to investing £2.5bn in the UK over the next three years to more than double the size of its datacentre footprint), the next step Microsoft is thinking about is how to centralise and organise disparate data so it can be used effectively.
However, Linda Griffin, vice-president of global policy at Mozilla, said that cloud represents “a major geopolitical, strategic issue”, noting “it’s really where a lot of the power of the AI stack lies at the moment, and it’s with a handful of US companies. I don’t feel like we’ve balanced the risks very well.”
Open vs closed AI
Recounting how Mozilla and others spent the 1990s fighting to keep the internet open in the face of its corporate enclosure, Griffin said that “we’re gearing up for the same type of fight with AI”, noting how most AI research is now done by companies in-house without being openly shared.
She added that while open approaches to AI are on the rise – gaining traction over the past year in particular, with governments, businesses, third sector organisations and research bodies all looking for open models to build on – openness is still seen as a threat to the current AI market and how it is shaping up.
“Markets change all the time, but some of the companies that rely on black box AI models are very threatened by open source because, you know, competing with free can be hard, and they’ve done a really good job of painting a picture of open source as ‘dangerous’,” she said, highlighting a report by the US’ National Telecommunications and Information Administration (NTIA) that researched the open versus closed model question.
It concluded: “Current evidence is not sufficient to definitively determine either that restrictions on such open weight models are warranted, or that restrictions will never be appropriate in the future.” Instead, it outlined the steps the US government should take if “heightened risks emerge” with the models down the line.
Griffin added: “There’s no evidence or data to back up [the assertion that open is dangerous]. All AI is going to be dangerous and difficult to work with, but just because something’s put in a black box that we can’t access or truly understand doesn’t mean it’s safer. They both need guardrails, open and closed.”
In line with Krasodomski, who stressed the need for governments to have greater control over the infrastructure that enables AI-powered public services, Griffin said it’s in the UK’s interests to “think ahead on how we can use open source to build”.
“Most companies won’t be able to pay for the increasing cost of these APIs, they’ve no control over how the terms and conditions will change. We know from 25 years of the Web and more that real innovation that benefits more people happens in the open, and that’s where the UK can really shine.”
Griffin said that, ultimately, it comes down to a question of trust: “We hear a lot of hype about AI and public services, how it’s going to be transformative and save us loads of taxpayers’ money, which would be wonderful. But how are we supposed to trust AI, really, at scale in healthcare, for example, unless we understand more about the data, how it’s been trained, and why it arrives at certain decisions? And the clear answer is: be more open.”
Read more about digital ethics
- UK police continue to hold millions of custody images unlawfully: Annual report from the biometrics and surveillance camera commissioner of England and Wales highlights the ongoing and unlawful police retention of millions of custody images of innocent people who were never charged with a crime.
- Digital Ethics Summit: Who benefits from new technology?: Experts at the 2022 Digital Ethics Summit say expedited development cycles and obviously over-hyped PR material, in tandem with the public’s near-total exclusion from conversations around technology, are creating distrust of the tech sector.
- Interview: Petra Molnar, author of The walls have eyes: Refugee lawyer and author Petra Molnar speaks to Computer Weekly about the extreme violence people on the move face at borders across the world, and how increasingly hostile anti-immigrant politics is being enabled and reinforced by a ‘lucrative panopticon’ of surveillance technologies.