Research network for ethical AI launched in the UK
The network claims it will put social justice at the core of a multidisciplinary approach to artificial intelligence research
A humanities-led network of researchers has set out to establish a multidisciplinary base around the development of ethical artificial intelligence (AI).
The Just AI network will build on research in AI ethics, orienting it around the practical issues of social justice, distribution, governance and design.
Its aim is to connect researchers and practitioners from a range of disciplines – including philosophy, law, media and communications, human-computer interaction, ethnography, user-centred design, data science, and computer and social sciences – to identify opportunities for collaborative, interdisciplinary work.
The initiative is being led by the Ada Lovelace Institute, an independent data and AI think tank, in partnership with the Arts and Humanities Research Council (AHRC), and will also seek to inform the development of policy and best practice around the use of AI.
“The Just AI network will help ensure the development and deployment of AI and data-driven technologies serves the common good by connecting research on technical solutions with understanding of social and ethical values and impact,” said Carly Kind, director of the Ada Lovelace Institute. “We’re pleased to be working in partnership with the AHRC and with Alison Powell, whose expertise in the interrelationships between people, technology and ethics makes her the ideal candidate to lead the Just AI network.”
Powell, who works at the London School of Economics (LSE), specifically researches how people’s values influence how technology is built, as well as how it changes the way we live and work. She’s currently working on several projects related to citizenship, internet of things-enabled cities, data and ethics.
“By looking at how ethics is practised, and connecting a range of disciplinary and practical perspectives, we can cut through the noise and start to have an influence in this area,” said Powell.
It is hoped that, by establishing the network, the researchers will create a common infrastructure to underpin future collaboration, and that connecting different approaches will identify ways to translate evidence into practical guidance, regulation and design.
The network will also deliver a programme of activity, including workshops, written and creative outputs, and peer-reviewed articles, it’s claimed.
“There’s no doubt that the development and use of artificial intelligence has the potential to transform our lives, but for society to use and benefit from it, we need to be assured that AI technologies are being developed and deployed in responsible and ethical ways,” said Professor Edward Harcourt, director of research, strategy and innovation at the AHRC.
“This network is a vital step in the right direction towards achieving that, integrating expertise to understand and challenge the ethical and social risks and impacts of data and AI.”
The network will initially operate for one year, and recruitment for a postdoctoral research officer to support the network is open until 12 February.
Collaborative approach
In June 2018, the government launched the Centre for Data Ethics and Innovation (CDEI) to drive a collaborative, multi-stakeholder approach to developing frameworks that manage the proliferation of AI and other data-driven technologies.
CDEI chair Roger Taylor, who spoke to Computer Weekly shortly after the centre was established, said that when it comes to AI, there is an imbalance of power between organisations and governments on the one hand, and consumers on the other.
“Their knowledge of the customer’s behaviour far exceeds the customer’s knowledge of their behaviour,” he said at the time.
“The question is who should control that power. What we’re talking about really is trying to nuance the degree to which that power is either held by particular organisations – in which case, is it properly held accountable for the way it’s using that power – or are there mechanisms that will distribute that power more evenly across people?”
In September 2019, the CDEI published a series of “snapshot papers” that looked at various ethical issues around AI.
In the same month, the CDEI also published a report it had commissioned from the Royal United Services Institute (RUSI), which examined the use of algorithms in policing and found that stronger safeguards were needed to protect against bias.