The EU AI Act – how does it apply to your organisation?

May 22nd, 2024 Posted in Data Protection

On 21 May 2024, the Council of the European Union gave its final approval to a law aiming to harmonise rules on artificial intelligence: the Artificial Intelligence Act. This article highlights the key things your organisation needs to know about this ground-breaking law.

Introduction

Whilst Artificial Intelligence (“AI”) has been around for decades, it is only during the last few years that we have seen the use of AI explode across various industries. This is partly because of the abundance of data that is now available, the significant sums organisations are investing in this technology and the ever-increasing computational power.

Together, these elements have created the perfect environment for AI development to thrive, advancing from the more basic predictive AI to the very complex generative AI models (see below for definitions). This has enabled the development of sophisticated large language models that can mimic human behaviours such as creativity and reasoning. For example, large language models such as OpenAI’s GPT-4 (the model behind ChatGPT) can generate text and images and can be applied to what appears to be an almost never-ending list of use cases, from writing poems to solving complex maths problems, passing exams, drafting lawsuits and even building a working website using only a hand-drawn sketch as a guide.

Why do we need legislation to regulate AI?

Although there are significant benefits associated with using AI, its use does not come without risks. Whilst AI undoubtedly has the ability to make our lives easier, improve business performance and drive positive change, it also has the potential to cause confusion, interfere with fundamental rights and even cause serious harm.

As an example, users have raised concerns about the accuracy of the outputs of some AI tools, which have been known to provide incorrect responses and even ‘hallucinate’, i.e. fabricate a response when the tool does not know the correct answer.

Another serious concern is bias. An AI system may reflect the biases of those who developed it and/or of the data it has been trained on and, if it does, this could result in individuals being subjected to discrimination. In particular, if an AI tool decides whether or not an individual will be offered a job, be given a loan or be subject to law enforcement action, and that decision is flawed, the consequences for the individual involved could be serious.

Further, if AI is used to control the supply of heating and water but it fails to work appropriately, this could endanger lives.

AI Act officially adopted

In view of the above, as well as other issues, EU legislators recognised that individuals need to be protected and companies need to work to an acceptable standard. Consequently, work began on the European Union Artificial Intelligence Act (“the Act”), the world’s first comprehensive law regulating AI.

The first iteration, from April 2021, only took account of predictive AI and, therefore, had to be amended quite significantly to make provision for generative AI. The version of the Act adopted by the EU on 21 May 2024 contains a very broad definition of AI that will (hopefully) stand the test of time.

Key dates

The new law will enter into force 20 days after it is published in the Official Journal of the European Union, with most provisions taking effect two years after that date. That said, some provisions will become effective after only 6 months, whilst others will not apply for up to 6 years. See the timeline below; a short illustrative calculation of these dates follows it:

6 months

The ban on prohibited AI practices will take effect.

12 months

The rules relating to general purpose AI models will take effect for new models.

2 years

All the other provisions not mentioned elsewhere in this list will take effect.

3 years

The rules relating to general purpose AI models will take effect for pre-existing models (i.e. those already on the market).

6 years

After six years, the rules will apply to pre-existing high-risk AI systems used by public authorities. In addition, by 31 December 2030, the rules will apply to AI systems that form part of the “large-scale IT systems” listed in Annex X of the Act and that were placed on the market or put into service before the 36-month point.
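
For illustration only, the short Python sketch below works out these milestone dates from an assumed entry-into-force date; the date and the helper function are ours for illustration, and the real dates depend on when the Act is published in the Official Journal.

    # Illustrative sketch: derive the phased application dates from an assumed
    # entry-into-force date. The date used here is hypothetical.
    import calendar
    from datetime import date

    def add_months(d: date, months: int) -> date:
        """Return the date `months` months after `d`, clamped to the month's last day."""
        month_index = d.month - 1 + months
        year, month = d.year + month_index // 12, month_index % 12 + 1
        day = min(d.day, calendar.monthrange(year, month)[1])
        return date(year, month, day)

    entry_into_force = date(2024, 8, 1)  # assumed for this example only

    milestones = {
        "Ban on prohibited AI practices (6 months)": add_months(entry_into_force, 6),
        "GPAI rules for new models (12 months)": add_months(entry_into_force, 12),
        "Most other provisions (24 months)": add_months(entry_into_force, 24),
        "GPAI rules for pre-existing models (36 months)": add_months(entry_into_force, 36),
        "Pre-existing high-risk systems used by public authorities (6 years)": add_months(entry_into_force, 72),
    }
    for label, when in milestones.items():
        print(f"{label}: {when.isoformat()}")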

Where does it apply?

Whilst the Act is EU legislation, it will not only apply to organisations located within the EU but, due to its extra-territorial scope, will in certain circumstances also apply to organisations outside the EU. In particular, if you are an organisation based outside the EU, you will be subject to this far-reaching law if:

  • You are a ‘provider’ placing an AI product or service on the EU market (i.e. you offer an AI tool to companies or individuals in the EU), or
  • You are a ‘provider’ or a ‘deployer’ of an AI system where the output produced by the system is used in the EU (i.e. its use affects people in the EU). For example, if you are a US-based organisation that uses an AI tool in the US in relation to US citizens, but the output it generates is later used in the EU in relation to people in the EU, that use of the output will be subject to the AI Act.

It is also important to note that the legislation doesn’t only apply to systems that are purchased. It applies to free-to-use AI technology as well.

What does the EU AI Act cover?

The Act will regulate the use of ‘AI systems’, which includes both predictive AI and generative AI. In simple terms, predictive AI uses previous patterns to predict future events, whereas generative AI learns from previous patterns to create new content. The definition within the Act is deliberately broad, so as to cover the types of AI systems we are already aware of and, legislators hope, new types of AI as and when they arise. The definition reads as follows:

‘a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments;’

Therefore, the Act will not apply to very basic automation tools but to AI systems that have the capacity to infer something from the information they receive and to create relevant outputs. As such, it will apply to a huge number of AI systems already in use by organisations throughout the EU.

The Act also covers General Purpose Artificial Intelligence (“GPAI”) Models, a term adopted to describe AI that can be used for a wide range of purposes rather than just one unique purpose. For example, narrow AI can only be used for one particular task, such as playing chess, whereas GPAI Models can be used for various tasks including playing games, conversing and programming.

However, GPAI Models are much more powerful than narrow AI and, as a result, are much more complicated and costly to build. Unsurprisingly, GPAI Models often carry greater risks. Legislators recognised this and have made specific provision for GPAI Models with systemic risk, which are subject to additional requirements under the Act. Article 51 of the Act sets out the criteria for assessing whether or not a GPAI Model poses systemic risk, which include the amount of computation used to train the model.

That said, there are a number of exemptions to the provisions within the Act. In particular, the Act will not apply to AI systems used solely for:

  • National security, military or defence purposes
  • Research and innovation.

Who does the EU AI Act apply to?

The Act will apply to ‘providers’, ‘deployers’, ‘importers’ and ‘distributors’. See definitions below.

  • A provider is a person or organisation that develops an AI system and places it on the EU market, or that arranges for the AI to be developed and subsequently placed on the EU market under its own name or trademark. This applies whether the AI is offered for a charge or for free. For example, OpenAI developed ChatGPT and offered it to the EU market.
  • A deployer is a person or organisation that uses (deploys) the AI in the EU. For example, an organisation using a chatbot in the EU, or a bank using AI to decide whether or not an applicant should be granted a loan in the EU. However, if the AI is used in a personal capacity, the provisions of the Act relating to deployers will not apply. In certain circumstances, deployers may become ‘providers’ if they significantly modify an AI system that has been provided to them.
  • An importer is a person or organisation located in the EU that places a third-country AI product or service on the EU market.
  • A distributor is a person or organisation that makes the AI available on the EU market but is not a provider or an importer.

Each of the above has different obligations under the Act, with most of those obligations falling on providers. See below for further details.

How does the EU AI Act regulate AI?

Different obligations apply depending on whether the AI is an AI system or a GPAI Model.

AI systems

A risk-based approach has been adopted by legislators, meaning that greater obligations are imposed in relation to AI systems with higher risks. The Act divides the risks into four main categories, namely ‘Unacceptable’, ‘High’, ‘Limited’ and ‘Minimal’.

Unacceptable risk

Article 5, within Chapter II of the Act, sets out the prohibited AI practices, which include AI systems that:

  • Use subliminal, manipulative or deceptive techniques designed to distort someone’s behaviour
  • Exploit individuals due to their age, disability or a specific social or economic situation
  • Carry out social scoring leading to detrimental or unfavourable treatment that is unjustified or disproportionate
  • Carry out predictive policing (unless certain conditions are satisfied)
  • Carry out untargeted scraping of facial images from the internet or CCTV for the purpose of creating or expanding facial recognition databases
  • Infer the emotions of employees at work (unless certain conditions are satisfied)
  • Categorise people according to their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation (unless certain conditions are satisfied)
  • Carry out real-time remote biometric identification in publicly accessible spaces for the purpose of law enforcement (unless certain conditions are satisfied).

Therefore, the above are considered to pose such a serious risk to individuals that they are not permitted under the Act. Examples likely to fall into this category include voice-activated toys that encourage dangerous behaviour in children and scraping social media sites for facial images in order to add them to general-purpose databases. That said, some of the processing activities listed above are permitted if strict criteria are satisfied.

High risk

The list of AI systems that will be classed as high risk is much longer than the list of prohibited systems. Full details of the types of AI systems that will be classed as high-risk are set out in Annex III to the Act and include AI systems used in:

  • Biometric tools that identify and categorise individuals
  • Critical infrastructure management and operation (e.g. electricity and gas)
  • Education and vocational training
  • Employment, access to self-employment and worker management
  • Access to essential private services and essential public services (e.g. emergency services dispatching)
  • Law enforcement
  • Migration and border control
  • Administration of justice and the democratic process

In addition, AI systems that are a safety component of a product, or are themselves a product, covered by the EU safety laws listed in Annex I of the Act will also be considered high-risk.

Therefore, the starting point is that AI systems used for the above purposes are regarded as posing such a high risk that they can only be lawfully used if the obligations relating to their use set out in the Act are fully complied with. Examples likely to fall into this category include AI systems that mark exams, examine visa applications, sift through CVs from job applicants and conduct credit scoring that could result in applications for loans being declined.

Limited risk

This category relates to AI systems that, whilst not considered a high risk to individuals, have the capability to manipulate or deceive individuals; because of this, it must be made clear to users that they are using an AI system. Examples likely to fall into this category include chatbots, deepfakes and generative AI systems that create content, such as ChatGPT.

Minimal risk

The final category of AI systems is minimal risk and it is thought that most AI systems presently used in the EU would be deemed to fall into this category. Examples include AI-enabled video games and spam filters.

GPAI Models

GPAI Models fall into two categories, namely those with systemic risk and those without. In order to identify the GPAI Models with systemic risk, models will be assessed for their impact capabilities using appropriate technical tools and methodologies, including indicators and benchmarks. However, a GPAI Model will be presumed to have high impact capabilities (and therefore systemic risk) if the cumulative amount of computation used to train it exceeds 10^25 FLOPs (floating-point operations).
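
As a rough illustration of how that threshold might be checked (this is not taken from the Act itself), the Python sketch below uses the widely cited approximation that training compute ≈ 6 × parameters × training tokens; the function names and model figures are hypothetical.

    # Illustrative sketch: does an estimated training compute budget exceed the
    # 10^25 FLOP threshold at which systemic risk is presumed (Article 51)?
    # Assumes the common approximation: training FLOPs ~= 6 * parameters * tokens.
    SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

    def estimated_training_flops(parameters: float, training_tokens: float) -> float:
        """Approximate total training compute in floating-point operations."""
        return 6 * parameters * training_tokens

    def presumed_systemic_risk(parameters: float, training_tokens: float) -> bool:
        """True if the model would be presumed to have high impact capabilities."""
        return estimated_training_flops(parameters, training_tokens) > SYSTEMIC_RISK_THRESHOLD_FLOPS

    # Hypothetical model: 1 trillion parameters trained on 10 trillion tokens.
    print(presumed_systemic_risk(parameters=1e12, training_tokens=1e13))  # True (6e25 FLOPs)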

Obligations for AI systems

There are four different categories of AI systems and two different categories of GPAI Models and, for the most part, the legal obligations differ depending on whether the AI is an AI system or a GPAI Model. That said, the transparency requirements set out in Article 50 of the Act apply to both GPAI Models and all AI systems except those in the ‘minimal risk’ category. The transparency obligations require that the system make it clear to the user that they are using AI.

High risk

As expected, the obligations under the Act in respect of high-risk AI systems are much greater than for any other type of AI system. There are numerous requirements that need to be adhered to during the design phase, before the system is placed on the EU market and once it is on the market. The requirements include:

  • Establishing a risk management system, i.e. a process applied during the development phase that identifies and evaluates risks and implements measures to address those risks (Article 9)
  • Data and data governance, i.e. ensuring that the AI system is developed using appropriate training, validation and testing data sets (Article 10)
  • Drawing up technical documentation clearly describing the AI system and demonstrating how the system complies with the legislation (Article 11)
  • Record keeping, i.e. ensuring the AI system allows for the automatic recording of logs (Article 12)
  • Ensuring transparency, i.e. clearly and concisely providing deployers and users with the information they need to understand important considerations such as the purpose of the AI, its level of accuracy and its risks (Article 13)
  • Ensuring that the AI system is built in such a way that it can be overseen by a human, with a view to minimising risks (Article 14)
  • Ensuring that the AI system is designed in such a way that it is consistently accurate and robust and that it has an appropriate level of cybersecurity in place (Article 15)
  • Registering the AI system on an EU-wide database managed by the European Commission before placing it on the market (Article 49)
  • Post-market monitoring, i.e. establishing and documenting a post-market monitoring system to collect data enabling the system to be continually evaluated (Article 72)
  • Reporting serious incidents, i.e. informing market surveillance authorities of any serious incidents (Article 73)

Limited risk

AI systems in the limited risk category will not have to comply with the same strict obligations as those falling into the high-risk category. However, they will be subject to transparency obligations insofar as the user will need to be informed that they are interacting with an AI system.

Minimal risk

Those AI systems considered to be of minimal risk will have no particular legal obligations under the Act. However, it is anticipated that the EU will develop a Code of Practice in relation to such systems with a view to setting a common standard for all AI systems and ensuring that individuals are suitably protected and are provided with a degree of confidence that they can trust the AI systems they use.

What are the requirements for GPAI Models?

GPAI Models without systemic risk (Articles 53 and 54 of the Act)

The following obligations need to be complied with for GPAI Models without systemic risk:

  • Technical documentation about the model needs to be drawn up and kept up to date (as this may be required by the AI Office)
  • Documentation for providers who wish to integrate the GPAI Model into their AI systems needs to be drawn up and kept up to date
  • A policy demonstrating how EU copyright law will be complied with needs to be put in place
  • A summary explaining the content used to train the GPAI Model needs to be drawn up and made publicly available
  • Providers are legally required to co-operate with the Commission, and
  • Before placing a GPAI Model on the EU market, providers established in third countries must appoint an EU representative (Article 54).

GPAI Models with systemic risk (Article 55)

The following obligations need to be complied with for GPAI Models with systemic risk:

  • All the obligations set out within Articles 53 and 54 of the Act, as specified above in relation to GPAI Models without systemic risk
  • Model evaluations need to be performed with a view to identifying and mitigating systemic risks
  • Systemic risks need to be assessed and mitigated
  • Serious incidents, together with remediation steps, need to be documented and reported to the AI Office and
  • An adequate level of cybersecurity needs to be in place.

What are the consequences of non-compliance?

The law-makers have made it clear that non-compliance with the Act will be treated as a very serious matter: the legislation gives regulators the power to impose penalties even greater than those available under the GDPR. The maximum fine for the most serious infringements will be 35 million Euros or 7% of a company’s global annual turnover in the previous financial year. That said, this level will not apply to all breaches and the level of fine will depend on the type of infringement. There will be a three-tier system, as follows (a short illustrative calculation follows the list):

  • Highest tier – 35 million Euros or 7% of global annual turnover
    • For non-compliance with prohibited AI practices.
  • Medium tier – 15 million Euros or 3% of global annual turnover
    • For breach of certain obligations such as the transparency requirements.
  • Lowest tier – 7.5 million Euros or 1% of global annual turnover
    • For providing incorrect, incomplete or misleading information.
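
By way of rough illustration, the Python sketch below shows how the maximum fine for each tier might be worked out, assuming (as under Article 99 of the Act) that the applicable cap is the higher of the fixed amount and the turnover percentage; the function name and turnover figures are hypothetical.

    # Illustrative sketch: maximum possible fine under the three-tier system,
    # assuming the cap is the higher of the fixed amount and the turnover
    # percentage (Article 99). All company figures are hypothetical.
    TIERS = {
        "prohibited_practices":  (35_000_000, 0.07),  # highest tier
        "other_obligations":     (15_000_000, 0.03),  # medium tier
        "incorrect_information": (7_500_000, 0.01),   # lowest tier
    }

    def maximum_fine(tier: str, global_annual_turnover_eur: float) -> float:
        """Return the maximum fine (in euros) for a given tier and annual turnover."""
        fixed_amount, turnover_pct = TIERS[tier]
        return max(fixed_amount, turnover_pct * global_annual_turnover_eur)

    # Hypothetical company with a global annual turnover of 2 billion euros:
    print(maximum_fine("prohibited_practices", 2_000_000_000))   # 140,000,000.0
    print(maximum_fine("incorrect_information", 2_000_000_000))  # 20,000,000.0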

The potential to be liable for such substantial penalties will, no doubt, be of great concern to organisations providing and/or deploying AI.

As with the GDPR, supervisory authorities will enforce the AI Act, but the European Commission has also established a new EU AI Office (see “European AI Office | Shaping Europe’s digital future”, europa.eu). This newly created office will support Member States in implementing the Act, take on regulatory and monitoring functions and engage in consultations in order to promote best practice and develop codes of practice.

In addition to the EU AI Office, the European Artificial Intelligence Board has been established, which has been tasked with duties such as providing advice and assistance to the Commission, contributing to the effective co-operation between the Commission and Member States and ensuring the consistent application of the Act.

Will the UK follow suit with similar AI legislation?

As mentioned above, the Act is the world’s first comprehensive law regulating AI. The EU is, therefore, leading the way and setting the benchmark in this field. Back in 2018 the GDPR took effect and has since come to be accepted as the gold standard for the protection of personal data, with numerous other countries around the world implementing similar laws in recent years. We may, therefore, see a similar situation arise in relation to AI legislation.

The UK government certainly seems keen to regulate this topic, having published a White Paper on it in March 2023 (“AI regulation: a pro-innovation approach”, GOV.UK), and we have already seen the ICO introduce an AI toolkit (“AI and data protection risk toolkit”, ICO). In view of this, it would not be surprising to see the UK add such a law to its statute books. In any event, UK organisations may need to comply with the Act if they wish to trade in the EU or are otherwise caught by the Act’s extra-territorial scope.

Next Steps

It can be seen from the above that the Act will introduce significant regulatory change and impose demanding legal obligations, albeit the above is only an overview. Due to the complexities involved, organisations would be wise to familiarise themselves with the legislation now so they can start readying themselves before the Act becomes enforceable.

As first steps, we would recommend that you review the Act, establish whether the Act applies to your organisation and in what role (i.e. are you a provider, a deployer or both?), conduct a gap analysis of your operational activity against the Act to understand where you have work to do, and map your AI usage to identify the risks related to your use of AI.

Need assistance with your AI compliance?

If you would like any help in understanding how the Act will impact your business or you would like to discuss how we can support your organisation from an AI governance perspective, please get in touch using the form below.

Written by Sandra May

Sandra is an experienced senior data protection consultant and is a designated DPO for Evalian™ clients. Sandra spent much of her career as a litigation lawyer and, over the last ten years, has specialised in data protection. Sandra’s qualifications include the BCS Practitioner Certificate in Data Protection and the ISEB Certificate in Data Protection, and she is an FCILEx (Fellow of the Chartered Institute of Legal Executives).