Why AI skills, broadband and machine liability must be addressed
The House of Lords select committee on artificial intelligence has urged the government to adapt policies to bolster AI and protect the public
The House of Lords select committee on artificial intelligence (AI) has called on the government to do more to bolster the UK’s network infrastructure to support the technology.
The select committee has been hearing evidence since October 2017. Its findings, published in the AI in the UK: ready, willing and able? report, also recommended that the government should amend its skilled migrant quotas to encourage more AI experts to come to the UK.
The report said: “We are concerned that the number of workers provided for under the Tier 1 (exceptional talent) visa scheme will be insufficient and the requirements too high for the needs of UK companies and startups.”
In the report, the committee recommended that the government should add machine learning and AI to the Tier 2 skills shortage list, rather than rely on an increase of 1,000 specialists at Tier 1.
Regarding the roll-out of superfast broadband and mobile networking, the committee said: “We welcome the government’s intentions to upgrade the nation’s digital infrastructure, as far as they go. However, we are concerned that it does not have enough impetus behind it to ensure that the digital foundations of the country are in place in time to take advantage of the potential artificial intelligence offers.
“We urge the government to consider further substantial public investment to ensure that everywhere in the UK is included within the roll-out of 5G and ultrafast broadband, as this should be seen as a necessity.”
The report highlighted the accountability of AI-powered systems as among its major concerns and called on the government and Ofcom to research the impact of AI on media.
Paul Clarke, CTO at Ocado, who gave evidence to the select committee, warned: “AI definitely raises all sorts of new questions to do with accountability. Is it the person or people who provided the data who are accountable, the person who built the AI, the person who validated it, the company that operates it?
“I am sure much time will be taken up in the courts deciding on a case-by-case basis until legal precedent is established. It is not clear. In this area, this is definitely a new world, and we are going to have to come up with some new answers regarding accountability.”
Addressing how AI could be used to influence people’s opinions on social media, the select committee said: “AI makes the processing and manipulating of all forms of digital data substantially easier and, given that digital data permeates so many aspects of modern life, this presents both opportunities and unprecedented challenges.
“There is a rapidly growing need for public understanding of, and engagement with, AI to develop alongside the technology itself. The manipulation of data in particular will be a key area for public understanding and discussion in the coming months and years.
“We recommend that the government and Ofcom commission research into the possible impact of AI on conventional and social media outlets, and investigate measures that might counteract the use of AI to mislead or distort public opinion, as a matter of urgency.”
The liability of AI systems is another area the committee said needs further investigation. The report said: “In our opinion, it is possible to foresee a scenario where AI systems may malfunction, underperform or otherwise make erroneous decisions which cause harm. In particular, this might happen when an algorithm learns and evolves of its own accord.
“It was not clear to us, nor to our witnesses, whether new mechanisms for legal liability and redress in such situations are required, or whether existing mechanisms are sufficient. We recommend that the Law Commission consider the adequacy of existing legislation to address the legal liability issues of AI and, where appropriate, recommend to government appropriate remedies to ensure that the law is clear in this area.”
Commenting on the report’s findings, Louis Halpern, chairman of Active OMG, the British company behind Ami, a self-learning conversational AI, highlighted the importance of keeping personal data safe so it cannot be misused within AI algorithms.
“AI will penetrate every sector of the economy and has tremendous potential to improve people’s lives,” he said. “Consumers need to know their data is safe. We have to avoid the AI industry being tainted with Facebook/Cambridge Analytica-type scandals.”
Prevent bias in machine learning
Brandon Purcell, principal analyst at Forrester, warned: “To prevent bias in machine learning, you must understand how bias infiltrates machine learning models. And politicians are not data scientists, who will be on the front lines fighting against algorithmic bias. And data scientists are not ethicists, who will help companies decide what values to instill in artificially intelligent systems.
“At the end of the day, machine learning excels at detecting and exploiting differences between people. Companies will need to refresh their own core values to determine when differentiated treatment is helpful, and when it is harmful.”
Such biases will only be found if people can audit what an AI algorithm has learned and how it makes its decisions. A recent survey by Fortune Knowledge Group, commissioned by Genpact, found that 63% of the 300 senior decision-makers polled wanted to see more governance of AI.
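The report does not prescribe how such audits should be carried out, but one simple illustration of what an auditor might check is demographic parity: comparing a model’s rate of positive decisions across groups. The sketch below is hypothetical; the group labels, decisions and the 0.8 “four-fifths” threshold (a heuristic borrowed from US employment guidance) are illustrative assumptions, not a prescribed standard.

```python
# A minimal sketch of one kind of algorithmic bias audit: comparing a
# model's positive-decision rate across groups (demographic parity).
# All data below is made up for illustration.

def positive_rate(decisions):
    """Share of positive (1) outcomes in a list of 0/1 decisions."""
    return sum(decisions) / len(decisions)

# Hypothetical model decisions, keyed by a protected attribute.
decisions_by_group = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 6 of 8 approved
    "group_b": [1, 0, 0, 0, 1, 0, 0, 1],  # 3 of 8 approved
}

rates = {g: positive_rate(d) for g, d in decisions_by_group.items()}

# Disparate impact ratio; the "four-fifths rule" heuristic flags
# ratios below 0.8 as worth investigating.
ratio = min(rates.values()) / max(rates.values())

for group, rate in rates.items():
    print(f"{group}: {rate:.0%} positive decisions")
print(f"disparate impact ratio: {ratio:.2f}")
```

A check like this only surfaces a disparity; deciding whether the differentiated treatment is justified remains, as Purcell notes, a question of company values rather than data science.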
Sanjay Srivastava, chief digital officer at Genpact, said: “The challenge of AI isn’t just the automation of processes – it’s about the up-front process design and governance you put in to manage the automated enterprise.”
The ability to trace the reasoning path an AI system takes to reach a decision is especially important in financial services, where auditors and regulators require firms to understand the source of a machine’s decisions.
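The committee’s report does not mandate any particular technique for this, but one way to make a reasoning path auditable is to use an inherently interpretable model. The sketch below, using scikit-learn and entirely made-up loan-style data, trains a small decision tree and prints the rules it learned, so a reviewer can trace exactly which thresholds produced any given decision.

```python
# A minimal sketch of decision traceability: a small decision tree
# whose learned rules can be printed and audited. The feature names
# and training data are hypothetical.
from sklearn.tree import DecisionTreeClassifier, export_text

features = ["income_k", "years_at_address"]
X = [[20, 1], [35, 4], [50, 2], [65, 10], [28, 6], [80, 3]]
y = [0, 0, 1, 1, 0, 1]  # 1 = approved, 0 = declined (illustrative)

model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# export_text renders the tree as nested if/else rules, giving an
# auditor the full reasoning path behind each decision.
print(export_text(model, feature_names=features))
```

In practice, firms often pair more complex models with post-hoc explanation tools instead, but the principle is the same: the decision logic must be inspectable by someone other than the system’s builders.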
As Computer Weekly has reported previously, the House of Lords select committee on AI has said there is an urgent need for a cross-sector ethical code of conduct, or AI code, suitable for implementation across public and private sector organisations that are developing or adopting AI. It said such an AI code could be drawn up and promoted by the Centre for Data Ethics and Innovation, with input from the AI Council and the Alan Turing Institute.