Pro-business AI framework spans sector-specific regulations

But should organisations deploying artificial intelligence comply with EU or UK proposals?

The Department for Digital, Culture, Media and Sport’s (DCMS’s) new paper on artificial intelligence (AI), published earlier this week, outlines the government’s approach to regulating AI technology in the UK. The proposed rules address future risks and opportunities so that businesses are clear on how they can develop and use AI systems, and consumers are confident that those systems are safe and robust.

The paper presents six core principles for AI governance, with a focus on pro-innovation regulation and the need to define AI in a way that can be understood across different industry sectors and regulatory bodies. The principles cover the safety of AI, the explainability and fairness of algorithms, the requirement for a legal person to be responsible for AI, and clear routes to redress unfairness or contest AI-based decisions.

Digital minister Damian Collins said: “We want to make sure the UK has the right rules to empower businesses and protect people. It is vital that our rules offer clarity to businesses, confidence to investors and boost public trust.”

Much of what is presented in the Establishing a pro-innovation approach to regulating AI paper is reflected in a new study from the Alan Turing Institute. The authors of this report urged policymakers to take a joined-up approach to AI regulation to enable coordination, knowledge generation and sharing, and resource pooling.

Role of the AI regulators

Based on questionnaires sent to small, medium and large regulators, the Alan Turing Institute study found that AI presents challenges for regulators because of the diversity and scale of its applications. The report’s authors also pointed to the limits of the sector-specific expertise built up within vertical regulatory bodies.

The Alan Turing Institute recommended capacity building as a means to navigate this complexity and move beyond sector-specific views of regulation. “Interviewees in our research often spoke of the challenges of regulating uses of AI technologies which cut across regulatory remits,” the report’s authors wrote. “Some also emphasised that regulators must collaborate to ensure consistent or complementary approaches.”

The study also found instances of firms developing or deploying AI in ways that cut across traditional sectoral boundaries. Developing appropriate and effective regulatory responses requires regulators to fully understand and anticipate the risks posed by current and potential future applications of AI. This is particularly challenging given that uses of AI often reach across traditional regulatory boundaries, the report’s authors said.

The regulators interviewed for the Alan Turing Institute study said this cross-cutting use of AI raises concerns about what an appropriate regulatory response looks like. The report’s authors urged regulators to address open questions over the regulation of AI, both to prevent AI-related harms and to achieve the regulatory certainty needed to underpin consumer confidence and wider public trust. This, according to the Alan Turing Institute, will be essential to promote and enable the innovation and uptake of AI set out in the UK’s National AI Strategy.

Among the recommendations in the report is that an effective regulatory regime requires consistency and certainty across the regulatory landscape. According to the Alan Turing Institute, such consistency gives regulated entities the confidence to pursue the development and adoption of AI while also encouraging them to incorporate norms of responsible innovation into their practices.

UK’s approach is not equivalent to EU proposal

The DCMS policy paper proposes a framework that sets out how the government will respond to the opportunities of AI, as well as to new and accelerating risks. It recommends defining a set of core characteristics of AI to inform the scope of the AI regulatory framework, which regulators can then adapt to their specific domains or sectors. Significantly, the UK’s approach is less centralised than that of the proposed EU AI Act.

Wendy Hall, acting chair of the AI Council, said: “We welcome these important early steps to establish a clear and coherent approach to regulating AI. This is critical to driving responsible innovation and supporting our AI ecosystem to thrive. The AI Council looks forward to working with government on the next steps to develop the white paper.”

Commenting on the DCMS AI paper, Tom Sharpe, AI lawyer at Osborne Clarke, said: “The UK seems to be heading towards a sector-based approach, with relevant regulators deciding the best approach based on the particular sector in which they operate. In some instances, that might lead to a dilemma over which regulator to choose (given the sector), and perhaps means there is a large amount of upskilling to do by regulators.”

While its framework aims to be pro-innovation and pro-business, the UK is planning to take a very different approach from the EU, where regulation will be centralised. Sharpe said: “There is a practical risk for UK-based AI developers that the EU’s AI Act becomes the ‘gold standard’ (much like the GDPR) if they want their product to be used across the EU. To access the EU market, the UK AI industry will, in practice, need to comply with the EU Act in any case.”

Read more about AI policies

  • Update to copyright law will mean researchers who already have access to data will not require extra permission from the copyright owner to run data mining algorithms, removing barriers to artificial intelligence R&D.
  • Westminster claims its new data laws will boost British business, protect consumers, and seize the ‘benefits’ of Brexit.
