
Preparing for AI regulation: The EU AI Act

The EU AI Act builds on existing cyber security, privacy and data governance regulations such as GDPR

Any business that sells products or services in the European Union (EU) that use artificial intelligence (AI) must comply with the EU AI Act, regardless of where it is based.

The first phase of the act to be enforced is Article 5, covering prohibited AI practices and unacceptable uses of AI. The act was published in the Official Journal of the EU on 12 July 2024 and entered into force on 1 August 2024, with the Article 5 prohibitions applying six months later. This means that from February 2025, organisations building AI systems or using AI as part of their EU products and services will need to show their systems comply with Article 5.

Ready for Article 5

Among the uses of AI that are banned under Article 5 are AI systems that deploy subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques. Article 5 also prohibits the use of AI systems that exploit any of the vulnerabilities of a person or a specific group of people due to their age, disability, or a specific social or economic situation. Systems that analyse social behaviours and then use this information in a detrimental way are also prohibited under Article 5 if their use goes beyond the original intent of the data collection.

Other areas covered by Article 5 include the use of AI systems in law enforcement and biometrics. Industry observers describe the act as a “risk-based” approach to regulating artificial intelligence. 

While Article 5 is due to be enforced from February, the next phase of the AI Act roll-out is the application of codes of practice for general-purpose AI systems. These are systems that can handle tasks they have not been specifically trained to do, and include foundation models such as large language models (LLMs). This next phase of the EU AI Act is due in May 2025.

Companies selling or using AI in the EU must comply with the AI Act, regardless of where they are based. According to Deloitte, the reach of the act presents multinational companies with three potential options: they can develop AI systems specifically for the EU market, adopt the AI Act as a global standard, or restrict their high-risk offerings within the EU.

The regulatory landscape

Bart Willemsen, vice-president analyst at Gartner, says he is fielding hundreds of conversations on the topic of the EU AI Act and what it means for IT leaders and chief information security officers. Before joining the analyst firm, Willemsen held chief privacy and security officer roles in a number of organisations. His experience, and the takeaway from these conversations with Gartner clients, is that the EU AI Act builds on the General Data Protection Regulation (GDPR).
 
Under the GDPR, data must be collected for a specific and legitimate purpose and processed lawfully, fairly and in a transparent manner. Data collection should be limited to what is strictly necessary, and the accuracy of the data must be maintained. With the recent introduction of the GDPR Certification Standard and Criteria BC 5701:2024, organisations can now demonstrate that they meet a level of competency in handling personally identifiable information (PII).

There are plenty of lessons from GDPR that can be applied to the EU AI Act. The text of GDPR was finalised in 2016, but it did not come into effect until May 2018, giving organisations a two-year grace period.

“The lawmakers have learned a little bit from the GDPR experience,” says Willemsen. “Two years on from the grace period in May 2018, everybody started calling me up asking where do they start.” In other words, organisations spent two years during the grace period doing nothing about GDPR.

But it is not just GDPR. “One of the things I find myself having to explain to organisations is to look at the AI Act in the context of the legislative framework,” says Willemsen. “It is flanked by things like the Digital Services Act, Digital Markets Act, Data Governance Act, and even the Corporate Sustainability Reporting Directive (CSRD).”

For instance, while many organisations are confident they can report on greenhouse gas emissions to comply with the EU CSRD, management consulting firm Bain & Company has forecast that by 2030, the growth of AI, along with increased cloud usage and rising volumes of data in traditional applications, will lead to significantly higher IT carbon emissions across industries.

Organisations operating in the EU will need to take the CSRD into account. Given the power-hungry nature of machine learning and AI inference, the extent to which AI is used may well be influenced by such regulations going forward.

While the AI Act builds on existing regulations, it takes a different route from them, as Mélanie Gornet and Winston Maxwell note in their HAL Open Science paper, The European approach to regulating AI through technical standards. Their observation is that the EU AI Act draws its inspiration from European product safety rules.

As Gornet and Maxwell explain: “AI systems will require a conformity assessment that will be based on harmonised standards, i.e. technical specifications drawn up by European standardisation organisations (ESOs).”

The authors point out that these harmonised standards possess various legal properties, such as generating a presumption of conformity with the legislation. The conformity assessment results in European Conformity (CE) marking of the AI product to show compliance with EU regulations. Unlike other product safety regulations, Gornet and Maxwell note, the AI Act is intended to protect not only against risks to safety, but also against adverse effects on fundamental rights.

Standards and best practices

“What we’ve seen in the last decade is relevant now when preparing for the AI Act,” says Willemsen, when asked what steps organisations should be taking to ensure they remain compliant with the act. He urges organisations embarking on an AI strategy not to underestimate the relevance of these legal requirements.

In a blog looking at the significance of the EU AI Act, Martin Gill, vice-president research director at Forrester, describes the legislation as “a minimum standard, not a best practice”.

He says: “Building trust with consumers and users will be key to the development of AI experiences. For firms operating within the EU, and even those outside, following the risk categorisation and governance recommendations that the EU AI Act lays out is a robust, risk-oriented approach that, at a minimum, will help create safe, trustworthy and human-centric AI experiences that cause no harm, avoid costly or embarrassing missteps and, ideally, drive efficiency and differentiation.”

Taking responsibility for AI

Willemsen does not believe organisations need to create a chief AI officer role. “It’s not a different discipline like security or privacy. Most of the time, AI is considered a new type of technology,” he says.

Nevertheless, privacy and security measures are required when considering how to deploy AI technology. This is why Willemsen feels GDPR is one of the regulations organisations need to use to frame their AI strategy.

He urges organisations to put in place strategic, tactical and operational-level measures when deploying AI systems. This requires a multi-stakeholder, multi-disciplinary AI team, which Willemsen says needs to grow as the projects grow, building knowledge and experience. “In this team, you will see security, privacy, legal, compliance and business stakeholders,” he adds. 

While business leaders may feel that their own AI strategy is compliant with the EU AI Act, they cannot assume the same of their suppliers and the AI-enabled enterprise systems they use. In the Gartner paper Getting ready for the EU AI Act, phase 3, the analyst firm recommends that IT and business leaders accommodate the AI Act in any third-party risk assessment. This, says Gartner, should include contractual reviews and a push to amend existing contracts with new language that reinforces emerging regulatory requirements.

As Gartner notes, there is a good chance that an organisation’s largest AI risk may have nothing to do with the AI it develops itself. Instead, it may still risk being non-compliant with the EU AI Act if its IT providers and suppliers use the organisation’s data to train their models.

“Most organisations say their vendor contracts don’t allow their vendors to use their data, but most vendor contracts have a product enhancement clause,” Gartner warns. Such a clause could be interpreted as giving the supplier the right to use the organisation’s data to help improve its own products.

What is clear is that, irrespective of whether an organisation has EU offices, if it provides products and services to EU citizens, an assessment of the impact of the AI Act is essential. Non-compliance with the act’s requirements could cost businesses up to €15m or 3% of global annual turnover, whichever is higher. Violation of Article 5 – covering banned uses of AI – can result in fines of up to €35m or 7% of global annual turnover, whichever is higher.
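For illustration only, the following is a minimal sketch, in Python, of how a “fixed amount or percentage of global turnover, whichever is higher” penalty cap works. The €2bn turnover figure is a hypothetical assumption, not taken from the article.

```python
# Illustrative sketch of the EU AI Act's maximum fine caps:
# the higher of a fixed amount or a percentage of global annual turnover.
# The turnover figure below is hypothetical.

def max_fine(turnover_eur: float, fixed_cap_eur: float, turnover_pct: float) -> float:
    """Return the upper bound of the fine: the greater of the fixed cap
    and the given percentage of global annual turnover."""
    return max(fixed_cap_eur, turnover_eur * turnover_pct)

turnover = 2_000_000_000  # hypothetical €2bn global annual turnover

# General non-compliance: up to €15m or 3% of turnover, whichever is higher
print(max_fine(turnover, 15_000_000, 0.03))  # 60000000.0 -> €60m cap

# Article 5 violations: up to €35m or 7% of turnover, whichever is higher
print(max_fine(turnover, 35_000_000, 0.07))  # 140000000.0 -> €140m cap
```

For smaller businesses whose percentage-based figure falls below the fixed amount, the fixed amount of €15m or €35m becomes the cap instead.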
