
The EU’s Proposed Artificial Intelligence Regulation – an overview

June 21st, 2021 Posted in Evalian News

In April of this year, the European Commission released a proposal for a ‘Regulation Laying Down Harmonised Rules on Artificial Intelligence’. The draft regulation builds on earlier consultation across member states on policy options to support the development of trustworthy artificial intelligence (“AI”) in the EU that respects individuals’ rights and freedoms. As a regulation, the draft sets out a proposed legal framework for AI, including rules on its development, implementation and use in the EU.

The proposal is part of a wider EU drive to ensure that AI is developed and controlled in an ethical and trustworthy way. In the last few years, you will no doubt have read articles about perceived bias in AI systems. These concerns rest on the notion that AI is inherently flawed because it is built by people, who bring their own predispositions, opinions and biases that could contaminate the systems they create.

More than that, though, the autonomy and self-learning nature of AI means that it could become a danger to democracy and citizens’ rights. As one European Parliament blog noted, AI has the potential to unduly influence decision-making, undermine data privacy and create online echo chambers. There is also concern about the rise of deep fakes: highly realistic but fabricated videos of individuals, built with AI. If used maliciously, these videos can damage a person’s reputation and even cause financial loss.

While this might sound somewhat dystopian, the pace of innovation today means that technology often outpaces regulation. The draft regulation therefore aims to be technology-neutral and future-proof, whilst pointing to technology-specific examples in the annexes, which can be updated as technology develops.

In introducing the proposal, the EU recognises the potential benefits of AI to society and enterprises. It appears that the Commission does not want to limit the use of AI. Rather, it wants to ensure that AI is developed and implemented in an ethical way, by reference to the risks associated with specific AI technologies.

At the heart of the new legal framework is a risk-based focus on protecting individuals against threats to their health, safety and fundamental rights. Some types of AI will be banned (due to unacceptable risks), but most will be permitted, subject to rules that apply according to the level of risk associated with them.

If your organisation builds AI systems, or perhaps uses providers or software that have AI embedded, it’s wise to understand the pending regulation and how it could impact your company. 

Here’s what you need to know. 

First, what is AI?

The draft regulation defines AI systems as: 

“Software that is developed with one or more [specified] techniques and approaches [e.g. machine learning] and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.” 

In a business context, common examples of AI systems today include facial recognition technology, biometric identification systems, social media monitoring software and smart chatbot assistants. 

What is the draft regulation?

The proposal lays out four risk categories for AI, each with its own governance requirements. These are:

  • Unacceptable: “A very limited set of particularly harmful uses of AI that contravene EU values because they violate fundamental rights (e.g., social scoring by governments, exploitation of vulnerabilities of children, use of subliminal techniques, and – subject to narrow exceptions – live remote biometric identification systems in publicly accessible spaces used for law enforcement purposes) will be banned.” 
  • High Risk: “A limited number of AI systems defined in the proposal, creating an adverse impact on people’s safety or their fundamental rights (as protected by the EU Charter of Fundamental Rights) are considered to be high-risk.” These include AI recruitment technology, credit scoring systems and biometric identification systems.  
  • Limited Risk: “For certain AI systems specific transparency requirements are imposed, for example where there is a clear risk of manipulation (e.g., via the use of chatbots). Users should be aware that they are interacting with a machine.” 
  • Minimal Risk: “All other AI systems can be developed and used subject to the existing legislation without additional legal obligations. The vast majority of AI systems currently used in the EU fall into this category. Voluntarily, providers of those systems may choose to apply the requirements for trustworthy AI and adhere to voluntary codes of conduct.” This category includes video games and spam filters.  

The categorisation of AI by risk means that not all organisations will have to alter their processes to become compliant. Many AI systems will likely fall into the ‘minimal’ or ‘limited’ risk categories. For the latter, the main requirement of the draft regulation is transparency: companies must make it clear to their customers when they are communicating with a machine rather than a human.

That said, the draft also encourages voluntary codes of conduct for these categories, whereby organisations should do their best to ensure AI is developed and used ethically, irrespective of a legal mandate to do so.

In the case of high-risk AI systems, there are a number of obligations, including implementing a risk management system, following stringent design and development protocols, and putting mitigation measures in place. For example, the draft regulation requires conformity assessments for high-risk use cases. The objective of these assessments is to “demonstrate that [the company’s] system complies with the mandatory requirements for trustworthy AI (e.g., data quality, documentation and traceability, transparency, human oversight, accuracy and robustness).”

As well as this, the draft mandates that high-risk systems are subject to human oversight. This must be accompanied by rigorous reporting of any error or failure within an AI system that could have caused harm to a person or breached their fundamental rights. To keep track of high-risk AI systems, the draft regulation also proposes the creation of a public database, in which providers will register their systems before launch.
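For readers who prefer to see structure as code, the following is a purely illustrative sketch, in Python, mapping the draft’s four risk tiers to the headline obligations described above. The tier names and obligation strings are our paraphrases of the proposal, not legal text, and the data structure itself is our own invention.

```python
# Illustrative sketch only: the draft regulation's four risk tiers
# mapped to their headline obligations, paraphrased from the proposal.

RISK_TIERS = {
    "unacceptable": ["banned outright"],
    "high": [
        "risk management system",
        "conformity assessment before launch",
        "human oversight",
        "reporting of errors or failures that could cause harm",
        "registration in the proposed public EU database",
    ],
    "limited": ["transparency: users must know they are talking to a machine"],
    "minimal": ["no new obligations; voluntary codes of conduct encouraged"],
}

def obligations_for(tier: str) -> list[str]:
    """Look up the headline obligations for a given risk tier."""
    return RISK_TIERS[tier]

print(obligations_for("high"))
```

The point the structure makes is simply that obligations scale with risk: almost everything sits in the bottom two tiers, while the detailed compliance work falls on the comparatively small ‘high’ tier.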

What are the penalties for non-compliance?

To ensure implementation and enforcement, the proposal puts forward the establishment of a European Artificial Intelligence Board (“EAIB”). The EAIB will oversee the consistent application of the regulation across member states, with national authorities responsible for issuing penalties for non-compliance.

In terms of fines, the draft regulation proposes a three-tier regime, similar to the GDPR’s, with the cap in each case being whichever amount is higher (a short worked example follows the list). The tiers are:

  • Up to 2% of annual worldwide turnover or €10 million for supplying incorrect, incomplete or misleading information to notified bodies in response to a request 
  • Up to 4% of annual worldwide turnover or €20 million for non-compliance with the regulation’s other requirements and obligations 
  • Up to 6% of annual worldwide turnover or €30 million for violations relating to prohibited (unacceptable risk) AI systems 
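To make the caps concrete, here is a minimal sketch of how the ‘whichever is higher’ formula plays out. The tier names and the max_fine helper are illustrative inventions of ours, not terms from the draft.

```python
# Illustrative sketch: the maximum possible fine under the draft's
# three-tier regime, taking the higher of the turnover-based cap and
# the fixed cap (the same model the GDPR uses).

FINE_TIERS = {
    # tier: (share of annual worldwide turnover, fixed cap in EUR)
    "misleading_information": (0.02, 10_000_000),
    "general_non_compliance": (0.04, 20_000_000),
    "prohibited_ai_practices": (0.06, 30_000_000),
}

def max_fine(tier: str, annual_turnover_eur: float) -> float:
    """Return the cap for a tier: the higher of the two amounts."""
    share, fixed_cap = FINE_TIERS[tier]
    return max(share * annual_turnover_eur, fixed_cap)

# Example: a company with EUR 2 billion worldwide turnover deploying a
# prohibited system faces a cap of max(6% of 2bn, 30m) = EUR 120 million.
print(max_fine("prohibited_ai_practices", 2_000_000_000))  # 120000000.0
```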

Who does it apply to and when will it be introduced?

The draft regulation, once implemented, will apply to both public and private bodies that operate within the EU. If a company is based outside the EU but its AI system operates and impacts people within the EU, the regulation will still apply, meaning it has extraterritorial effect, like the GDPR. The regulation will be relevant both to providers of AI systems and to the organisations that use them.

In terms of a timeline for implementation, the draft is still very much in its infancy. The next step is for the European Parliament and the member states, through the Council, to debate and adopt the proposal. Once adopted, it will apply across the EU. However, this will likely take at least two years, if not longer, given the wrangling between member states, the EU institutions and the industry lobbying that can be expected. Usually, with laws like this, there is also a grace period, which gives organisations time to assess and adapt their solutions for compliance.

Because the proposed regulation is at an early stage, we don’t advise that organisations rush to review their AI systems immediately. Firstly, it’s possible that the articles within the draft regulation will change during the review process; secondly, the requirements for high-risk systems will need further clarification before companies can understand where their solutions fall.

As the law continues to develop, we will keep you informed about its impact and what you need to do to comply.

Need Help?

If you need help on where to start with your compliance or cyber security needs, contact us for a friendly, no-obligation chat. 


Written by Hannah Pisani

Hannah is our in-house writer, working with consultants on articles, guides, advisories and blogs, and writing our news updates on data protection and information security topics. She has a background in content creation and PR, specialising in technology, data and consumer topics. Her qualifications include a BA in English Language and Literature from Royal Holloway, University of London.