The power of AI can be unleashed with a focus on ethics
An EY survey reveals the public’s concerns around the growing use of artificial intelligence – but also the opportunities for organisations that take the right ethical approach
Artificial intelligence (AI) is already an omnipresent feature of our daily lives and its growing use is likely to transform the world around us. But the rapid development of AI tools has been accompanied by growing ethical questions, which could ultimately limit the extent to which AI flourishes and reaches its full potential. Indeed, AI governance is one of the three pillars of the UK government's national AI strategy, published in September.
If AI is to reach its full potential, the questions around governance and ethics need to be addressed urgently. Businesses must not only embed ethics into their AI and get the basics right, but also think proactively about how to go beyond this and make the technology a force for good.
In the wake of the national AI strategy, EY UK&I has published a report, How AI can deliver an ethical future, for which we surveyed more than 2,000 consumers to understand how they engage with AI and what concerns – if any – they have about its use.
Given AI’s ubiquity, it is perhaps no surprise that nearly all consumers are aware of the technology; many also see the benefits of the fast, personalised services it can enable. But, despite familiarity with AI, consumers’ understanding of where it is used, and how it works, is extremely low.
According to the EY survey, while 96% of respondents said they were aware of AI, just 25% said they had a good understanding of what it was. This gap contributes to a feeling of anxiety and a lack of trust.
The low level of trust in AI among consumers is most pronounced when it comes to their data and privacy: just 11% of respondents said they felt their privacy was protected when companies collected personal data for use in AI. Trust is lost when consumers have no insight into how their data is being used. There are also concerns about how AI can perpetuate biases based on ethnicity and gender.
The consumers we spoke to want action to address their worries. They want control over the use of their personal data, proactive action to combat AI bias, and tangible action from regulators.
The ability to make simple and effective decisions about their personal data is overwhelmingly important for consumers too. At present, 59% of respondents to our survey agreed that companies are not doing enough to ensure the decisions made by AI are fair, transparent and free from bias, while 40% said they did not know how to report an AI issue.
Consumers’ concerns about AI should not be ignored – and they are not insurmountable. As businesses invest in AI, they need to adopt a wider perspective that is not focused purely on the technology itself. There are three strands to this approach for government, regulators and businesses to tackle, each of which can address ethical questions and improve trust and confidence.
The first of these is building trust into the foundations of AI through a human-led approach, starting with the design of AI algorithms. Input data needs to be bias-free, while the people behind AI tools need to have the right training to ensure bias-free outputs. Above all, there should be transparency and accountability in how decisions are made by AI.
Citing technological limitations as a reason for a lack of transparency, or failing to take accountability for AI's decision-making, will not be accepted by consumers – or regulators.
To bridge the trust gap, collaboration is key and AI stakeholders must work as an ecosystem to mitigate risks. Businesses must ensure AI is a board-level agenda item and a core part of their overall strategy, and there must be a proactive dialogue between government, suppliers, consumers and regulators.
Partners, vendors and suppliers need to understand and respect a company’s ethical guardrails, and it is incumbent on all parties to collaborate on a common risk framework that keeps everyone on track. Likewise, a proactive approach with regulators is imperative – it is far better to engage regulators in an ongoing ethics dialogue than tackle an issue retrospectively.
The final, crucial step is to focus on AI’s role as a force for good. Indeed, in many ways, a solution to the challenges posed by AI can be found within AI itself.
AI tools can help counter human programmers’ unconscious biases, for example, while they can also be used to protect – rather than undermine – privacy and security. Beyond that, efforts to tackle global issues such as sustainability, climate change, or health and educational inequalities could all benefit from the application of AI tools. Customer education is vital here and represents a significant opportunity to help consumers understand AI’s positive, transformative potential.
AI is here to stay. It is transforming our lives, societies and economies, while redefining industries and businesses. But to unlock its full potential, it is vital that ethics is woven into AI’s fabric. Achieve this, and AI can act as the force for good that it promises to be.
Praveen Shankar is head of technology, media and telecommunications at EY UK & Ireland.