Top 10 data and ethics stories of 2024
Here are Computer Weekly’s top 10 data and ethics stories of 2024
In 2024, Computer Weekly’s data and ethics coverage continued to focus on the various ethical issues associated with the development and deployment of data-driven systems, particularly artificial intelligence (AI).
This included reports on the copyright issues associated with generative AI (GenAI) tools, the environmental impacts of AI, the invasive tracking tools in place across the internet, and the ways in which autonomous weapons undermine human moral agency.
Other stories focused on the wider social implications of data-driven technologies, including the ways they are used to inflict violence on migrants, and how our use of technology prefigures certain political or social outcomes.
1. AI likely to worsen economic inequality, says IMF
In an analysis published on 14 January 2024, the IMF examined the potential impact of AI on the global labour market, noting that while it has the potential to “jumpstart productivity, boost global growth and raise incomes around the world”, it could just as easily “replace jobs and deepen inequality”, and will “likely worsen overall inequality” if policymakers do not proactively work to prevent the technology from stoking social tensions.
The IMF said that, unlike labour income inequality, which can decrease in certain scenarios where AI’s displacing effect lowers everyone’s incomes, capital income and wealth inequality “always increase” with greater AI adoption, both nationally and globally.
“The main reason for the increase in capital income and wealth inequality is that AI leads to labour displacement and an increase in the demand for AI capital, increasing capital returns and asset holdings’ value,” it said.
“Since in the model, as in the data, high income workers hold a large share of assets, they benefit more from the rise in capital returns. As a result, in all scenarios, independent of the impact on labour income, the total income of top earners increases because of capital income gains.”
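The mechanism behind that quote can be made concrete with a deliberately simple back-of-envelope calculation that is not drawn from the IMF’s model: if AI adoption pushes the return on capital up from r to r′, a household holding assets A gains extra capital income of

\[
\Delta Y_{\text{capital}} = (r' - r)\,A ,
\]

so the gain scales directly with asset holdings. If top earners hold, say, 100 times the assets of the median household (an assumed ratio, used only for illustration), their capital income gain is 100 times larger, which is why total income at the top can rise even in scenarios where labour incomes fall across the board.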
2. GenAI tools ‘could not exist’ if firms are made to pay copyright
In January, GenAI company Anthropic claimed to a US court that using copyrighted content in large language model (LLM) training data counts as “fair use”, and that “today’s general-purpose AI tools simply could not exist” if AI companies had to pay licences for the material.
Anthropic made the claim after a host of music publishers – including Concord, Universal Music Group and ABKCO – initiated legal action against the Amazon- and Google-backed firm in October 2023, demanding potentially millions in damages for the allegedly “systematic and widespread infringement of their copyrighted song lyrics”.
However, in a submission to the US Copyright Office on 30 October (which was completely separate from the case), Anthropic said that the training of its AI model Claude “qualifies as a quintessentially lawful use of materials”, arguing that, “to the extent copyrighted works are used in training data, it is for analysis (of statistical relationships between words and concepts) that is unrelated to any expressive purpose of the work”.
On the potential of a licensing regime for LLMs’ ingestion of copyrighted content, Anthropic argued that always requiring licences would be inappropriate, as it would lock up access to the vast majority of works and benefit “only the most highly resourced entities” that are able to pay their way into compliance.
In a 40-page document submitted to the court on 16 January 2024 (responding specifically to a “preliminary injunction request” filed by the music publishers), Anthropic took the same argument further, claiming “it would not be possible to amass sufficient content to train an LLM like Claude in arm’s-length licensing transactions, at any price”.
It added that Anthropic is not alone in using data “broadly assembled from the publicly available internet”, and that “in practice, there is no other way to amass a training corpus with the scale and diversity necessary to train a complex LLM with a broad understanding of human language and the world in general”.
Anthropic further claimed that the scale of the datasets required to train LLMs is simply too large for an effective licensing regime to operate: “One could not enter licensing transactions with enough rights owners to cover the billions of texts necessary to yield the trillions of tokens that general-purpose LLMs require for proper training. If licences were required to train LLMs on copyrighted content, today’s general-purpose AI tools simply could not exist.”
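To give a rough sense of the scale behind those figures – assuming an average of around 1,000 tokens per text, an illustrative figure rather than one from Anthropic’s filing – the quoted numbers imply:

\[
\frac{10^{12}\ \text{tokens}}{\sim 10^{3}\ \text{tokens per text}} \approx 10^{9}\ \text{texts},
\]

meaning every trillion tokens of training data corresponds to roughly a billion average-length texts, the scale at which Anthropic argues negotiating per-work licences becomes impracticable.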
3. Data sharing for immigration raids ferments hostility to migrants
Computer Weekly spoke to members of the Migrants Rights Network (MRN) and Anti-Raids Network (ARN) about how data sharing between public and private bodies for the purposes of carrying out immigration raids helps to prop up the UK’s hostile environment by instilling an atmosphere of fear and deterring migrants from accessing public services.
Published in the wake of the new Labour government announcing a “major surge in immigration enforcement and returns activity”, including increased detentions and deportations, a report by the MRN details how UK Immigration Enforcement uses data from the public, police, government departments, local authorities and others to facilitate raids.
Julia Tinsley-Kent, head of policy and communications at the MRN and one of the report’s authors, said the data sharing in place – coupled with government rhetoric about strong enforcement – essentially leads to people “self-policing because they’re so scared of all the ways that you can get tripped up” within the hostile environment.
She added this is particularly “insidious” in the context of data sharing from institutions that are supposedly there to help people, such as education or healthcare bodies.
The MRN, the ARN and others have long argued that, as part of the hostile environment policies, the function of raids goes much deeper than mere social exclusion, also working to disrupt the lives of migrants, their families, businesses and communities, and to impose a form of terror that produces heightened fear, insecurity and isolation.
4. Autonomous weapons reduce moral agency and devalue human life
At the very end of April, military technology experts gathered in Vienna for a conference on the development and use of autonomous weapons systems (AWS), where they warned about the detrimental psychological effects of AI-powered weapons.
Specific concerns raised by experts throughout the conference included the potential for dehumanisation when people on the receiving end of lethal force are reduced to data points and numbers on a screen; the risk of discrimination during target selection due to biases in the programming or criteria used; as well as the emotional and psychological detachment of operators from the human consequences of their actions.
Speakers also touched on whether there can ever be meaningful human control over AWS, due to the combination of automation bias and how such weapons increase the velocity of warfare beyond human cognition.
5. AI Seoul Summit review
The second global AI summit, held in Seoul, South Korea, saw dozens of governments and companies double down on their commitments to safely and inclusively develop the technology, but questions remained about who exactly is being included and which risks are given priority.
The attendees and experts Computer Weekly spoke with said while the summit ended with some concrete outcomes that can be taken forward before the AI Action Summit due to take place in France in early 2025, there are still a number of areas where further movement is urgently needed.
In particular, they stressed the need for mandatory AI safety commitments from companies; socio-technical evaluations of systems that take into account how they interact with people and institutions in real-world situations; and wider participation from the public, workers and others affected by AI-powered systems.
However, they also said it is “early days yet” and highlighted the importance of the AI Safety Summit events in creating open dialogue between countries and setting the foundation for catalysing future action.
Over the course of the two-day AI Seoul Summit, a number of agreements and pledges were signed by the governments and companies in attendance.
For governments, this includes the European Union (EU) and a group of 10 countries signing the Seoul Declaration, which builds on the Bletchley Declaration signed six months earlier by 28 governments and the EU at the UK’s inaugural AI Safety Summit. It also includes the Seoul Statement of Intent Toward International Cooperation on AI Safety Science, which will see publicly backed research institutes come together to ensure “complementarity and interoperability” between their technical work and general approaches to AI safety.
The Seoul Declaration in particular affirmed “the importance of active multi-stakeholder collaboration” in this area and committed the governments involved to “actively” include a wide range of stakeholders in AI-related discussions.
A larger group of more than two dozen governments also committed to developing shared risk thresholds for frontier AI models to limit their harmful impacts in the Seoul Ministerial Statement, which highlighted the need for effective safeguards and interoperable AI safety testing regimes between countries.
The agreements and pledges made by companies include 16 global AI firms signing the Frontier AI Safety Commitments, a voluntary set of measures for how they will safely develop the technology, and 14 firms signing the Seoul AI Business Pledge, a similar set of commitments from a mixture of South Korean and international tech firms to approach AI development responsibly.
One of the key voluntary commitments made by the AI companies was not to develop or deploy AI systems if the risks cannot be sufficiently mitigated. However, in the wake of the summit, a group of current and former workers from OpenAI, Anthropic and DeepMind – the first two of which signed the safety commitments in Seoul – said these firms cannot be trusted to voluntarily share information about their systems’ capabilities and risks with governments or civil society.
6. Invasive tracking ‘endemic’ on sensitive support websites
Dozens of university, charity and policing websites designed to help people get support for serious issues such as sexual abuse, addiction or mental health are inadvertently collecting and sharing site visitors’ sensitive data with advertisers.
A variety of tracking tools embedded on these sites – including Meta Pixel and Google Analytics – mean that when a person visits them seeking help, their sensitive data is collected and shared with companies like Google and Meta, which may become aware that a person is looking to use support services before those services can even offer help.
According to privacy experts attempting to raise awareness of the issue, the use of such tracking tools means people’s information is being shared with these advertisers inadvertently – in many cases as soon as they enter the sites – because analytics tags begin collecting personal data before users have interacted with the cookie banner.
Depending on the configuration of the analytics in place, the data collected could include information about the site visitor’s age, location, browser, device, operating system and behaviours online.
While even more data is shared with advertisers if users consent to cookies, experts told Computer Weekly the sites do not provide an adequate explanation of how their information will be stored and used by programmatic advertisers.
They further warned the issue is “endemic” due to a widespread lack of awareness about how tracking technologies like cookies work, as well as the potential harms associated with allowing advertisers inadvertent access to such sensitive information.
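As a rough illustration of the mechanism the experts describe, the sketch below shows why a tag embedded directly in a page can transmit data before a consent banner is answered. It is hypothetical TypeScript, not code from any of the sites in question; the function names, the “consent-accepted” event and the placeholder URL are all assumptions for the sake of the example.

```typescript
// Pattern 1: a tracking tag loaded unconditionally in the page.
// The browser fetches and runs it as soon as the page loads, so the
// vendor already receives the page URL, referrer, IP address and
// device details -- regardless of what the visitor later clicks.
function loadTracker(vendorScriptUrl: string): void {
  const script = document.createElement("script");
  script.src = vendorScriptUrl; // the request itself reveals the visit
  script.async = true;
  document.head.appendChild(script);
}

// Pattern 2: the same tag gated behind consent, so nothing is fetched
// or sent until the visitor actively opts in via the banner.
function loadTrackerAfterConsent(vendorScriptUrl: string): void {
  const banner = document.getElementById("cookie-banner");
  // "consent-accepted" is a hypothetical event name for illustration.
  banner?.addEventListener("consent-accepted", () => {
    loadTracker(vendorScriptUrl);
  });
}

// Pattern 1 is what the researchers report finding on support sites:
loadTracker("https://analytics.example/tag.js"); // placeholder URL
```

Real tags such as Meta Pixel or Google Analytics are installed via vendor-supplied snippets rather than hand-written code, but the ordering problem is the same: the script runs and transmits data on page load, before the visitor has answered the banner.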
7. AI interview: Thomas Dekeyser, researcher and film director
Computer Weekly spoke to author and documentary director Thomas Dekeyser about Clodo, a clandestine group of French IT workers who spent the early 1980s sabotaging technological infrastructure, which served as the jumping-off point for a wider conversation about the politics of techno-refusal.
Dekeyser says a major motivation for writing his upcoming book on the subject is that people refusing technology – whether that be the Luddites, Clodo or any other radical formation – are “all too often reduced to the figure of the primitivist, the romantic, or the person who wants to go back in time, and it’s seen as a kind of anti-modernist position to take”.
Noting that ‘technophobe’ or ‘Luddite’ have long been used as pejorative insults for those who oppose the use and control of technology by narrow capitalist interests, Dekeyser outlined the diverse range of historical subjects and their heterogenous motivations for refusal: “I want to push against these terms and what they imply.”
For Dekeyser, the history of technology is necessarily the history of its refusal. From the Ancient Greek inventor Archimedes – who Dekeyser says can be described as the first “machine breaker” due to his tendency to destroy his own inventions – to the early mercantilist states of Europe backing their guild members’ acts of sabotage against new labour devices, the social-technical nature of technology means it has always been a terrain of political struggle.
8. Amazon Mechanical Turk workers suspended without explanation
Hundreds of workers on Amazon’s Mechanical Turk (MTurk) platform were left unable to work after mass account suspensions caused by a suspected glitch in the e-commerce giant’s payments system.
Beginning on 16 May 2024, a number of US-based Mechanical Turk workers began receiving account suspension forms from Amazon, locking them out of their accounts and preventing them from completing more work on the crowdsourcing platform.
Owned and operated by Amazon, Mechanical Turk allows businesses, or “requesters”, to outsource various processes to a “distributed workforce”, who then complete tasks virtually from wherever they are based in the world, including data annotation, surveys, content moderation and AI training.
According to those Computer Weekly spoke with, the suspensions were purportedly tied to issues with the workers’ Amazon Payments accounts, an online payments processing service that allows them to both receive wages and make purchases from Amazon.
MTurk workers from advocacy organisation Turkopticon outlined how such situations are an ongoing issue that workers have to deal with, and detailed Amazon’s poor track record on the issue.
9. Interview: Petra Molnar, author of ‘The Walls Have Eyes’
Refugee lawyer and author Petra Molnar spoke to Computer Weekly about the extreme violence people on the move face at borders across the world, and how increasingly hostile anti-immigrant politics is being enabled and reinforced by a ‘lucrative panopticon’ of surveillance technologies.
She noted how – because of the vast array of surveillance technologies now deployed against people on the move – entire border-crossing regions have been transformed into literal graveyards, while people are resorting to burning off their fingertips to avoid invasive biometric surveillance; hiding in dangerous terrain to evade pushbacks or being placed in refugee camps with dire living conditions; and living homeless because algorithms shielded from public scrutiny are refusing them immigration status in the countries they’ve sought safety in.
Molnar described how lethal border situations are enabled by a mixture of increasingly hostile anti-immigrant politics and sophisticated surveillance technologies, which combine to create a deadly feedback loop for those simply seeking a better life.
She also discussed the “inherently racist and discriminatory” nature of borders, and how the technologies deployed in border spaces are extremely difficult, if not impossible, to divorce from the underlying logic of exclusion that defines them.
10. AI’s environmental cost could outweigh sustainability benefits
The potential of AI to help companies measure and optimise their sustainability efforts could be outweighed by the huge environmental impacts of the technology itself.
On the positive side, speakers at the AI Summit London outlined, for example, how the data analysis capabilities of AI can assist companies with decarbonisation and other environmental initiatives by capturing, connecting and mapping currently disparate data sets; automatically pinpointing harmful emissions at specific sites in supply chains; and predicting and managing the demand and supply of energy in specific areas.
They also said it could help companies better manage their Scope 3 emissions (indirect greenhouse gas emissions that occur outside a company’s own operations but are still a result of its activities) by linking up data sources and making them more legible.
However, despite the potential sustainability benefits of AI, speakers were clear that the technology itself is having huge environmental impacts around the world, and that AI will come to be a major part of many organisations’ Scope 3 emissions.
One speaker noted that if the rate of AI usage continues on its current trajectory without any form of intervention, then half of the world’s total energy supply will be used on AI by 2040; while another pointed out that, at a time when billions of people are struggling with access to water, AI-providing companies are using huge amounts of water to cool their datacentres.
They added that AI in this context could help build circularity into operations, and that it was also key for people in the tech sector to “internalise” thinking about the socio-economic and environmental impacts of AI, so these are considered from a much earlier stage in a system’s lifecycle.
Read more about data and ethics
- UN chief blasts AI companies for reckless pursuit of profit: The United Nations secretary-general has blasted technology companies and governments for pursuing their own narrow interests in artificial intelligence without any consideration of the common good, as part of a wider call to reform global governance.
- Barings Law plans to sue Microsoft and Google over AI training data: Microsoft and Google are using people’s personal data without proper consent to train artificial intelligence models, alleges Barings Law, as it prepares to launch a legal challenge against the tech giants.
- UK Bolt drivers win legal claim to be classed as workers: Employment Tribunal ruling says Bolt must classify its drivers as workers rather than self-employed, putting drivers in line to receive thousands of pounds in compensation from the ride-hailing and delivery app.