Artificial Intelligence compliance – The EU’s proposed regulation update 2023
In June 2021, we published the blog below, ‘EU’s Proposed AI Regulation’, on the European Commission’s proposal for a ‘Regulation Laying Down Harmonized Rules on Artificial Intelligence’, which focussed on supporting the development of trustworthy artificial intelligence (“AI”) in the EU by introducing the first-ever legal framework on AI. Since then, the proposals for an AI Act have undergone numerous changes, following input from EU member states. However, on 6 December 2022 the Council of the European Union approved a new version, and it is expected that, once the European Parliament has voted on it, the AI Act will be adopted by the end of 2023. That said, it is anticipated that there will be a two-year grace period before it becomes enforceable, giving organisations an opportunity to adjust to the new regime.
What’s changed?
The present draft of the AI Act differs in some respects from the original. For example, the definition of what constitutes an AI system has narrowed in scope and includes systems that use machine learning but does not include systems that simply operate according to a set of rules defined by a human. Further, the new draft has extended the list of AI practices that are prohibited, with a view to offering greater protection to individuals.
Of note is that it has been recognised that complying with the AI Act will carry an administrative burden and, in view of this, extra support will be offered to smaller companies. In addition, a newly established European Artificial Intelligence Board (“EAIB”) will not only advise the Commission but also develop guidance for organisations in the industry.
Recently, the Italian privacy regulator (GPDP) ordered a popular AI chatbot, Replika, to cease processing the data of Italian users after finding that it had breached GDPR rules. According to the regulator, the app does not comply with the law’s transparency requirements and processes the personal data of children unlawfully. Such guidance will therefore be welcomed by all companies developing AI because, once the law is in force, hefty penalties for non-compliance can be imposed, with a maximum fine of €30,000,000 or 6% of a company’s total worldwide annual turnover (3% for SMEs and start-ups).
We will keep you updated with developments and what you need to do to comply.
Considering using AI and want to know your compliance obligations?
If you are considering using AI, or need guidance on where to start with your data protection or cyber security compliance, contact us for a friendly chat about your compliance needs.
EU’s Proposed AI Regulation
In April 2021, the European Commission released a proposal for a ‘Regulation Laying Down Harmonized Rules on Artificial Intelligence’. The draft regulation builds on earlier consultation across member states on policy options to support the development of trustworthy artificial intelligence (“AI”) in the EU that respects individuals’ rights and freedoms. As a regulation, the draft sets out a proposed legal framework for AI, including rules on its development, implementation and use in the EU.
The proposal is part of a wider EU drive to ensure that AI is developed and controlled in an ethical and trustworthy way. In the last few years, you will no doubt have read articles about perceived bias in AI systems. These are based on the notion that AI is inherently flawed because it is built by people, who have their own predispositions, opinions and biases that could contaminate the systems they create.
More than that, though, the autonomy and self-learning nature of AI means that it could become a danger to democracy and citizens’ rights. As one European Parliament blog noted, AI has the potential to unduly influence decision-making, undermine data privacy and create online echo chambers. There is also concern about the rise of deepfakes: incredibly realistic but falsified videos of individuals built with AI. If used maliciously, these videos can damage a person’s reputation and even cause financial loss.
While this might sound somewhat dystopian, the pace of innovation today means that technology often outpaces regulation. The draft regulation therefore aims to be technology-neutral and future-proof, whilst pointing to technology-specific examples in the annexes, which can be updated as technology develops.
In introducing the proposal, the EU recognises the potential benefits of AI to society and enterprises. It appears that the Commission does not want to limit the use of AI. Rather, it wants to ensure that AI is developed and implemented in an ethical way, by reference to the risks associated with specific AI technology.
At the heart of the new legal framework is a focus on protecting individuals against threats to their health, safety and fundamental rights based on risk. Some types of AI will be banned (due to unacceptable risks) but most will be permitted – subject to rules that apply according to the level of risk associated with them.
If your organisation builds AI systems, or perhaps uses providers or software that have AI embedded, it’s wise to understand the pending regulation and how it could impact your company.
Here’s what you need to know.
What is AI?
The draft regulation defines AI systems as:
“Software that is developed with one or more [specified] techniques and approaches [e.g. machine learning] and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.”
In a business context, common examples of AI systems today include facial recognition technology, biometric identification systems, social media monitoring software and smart chatbot assistants.
What is the draft regulation?
The proposal lays out four risk categories for AI, each with its own governance requirements (a simple triage sketch follows the list). These are:
- Unacceptable: “A very limited set of particularly harmful uses of AI that contravene EU values because they violate fundamental rights (e.g., social scoring by governments, exploitation of vulnerabilities of children, use of subliminal techniques, and – subject to narrow exceptions – live remote biometric identification systems in publicly accessible spaces used for law enforcement purposes) will be banned.”
- High Risk: “A limited number of AI systems defined in the proposal, creating an adverse impact on people’s safety or their fundamental rights (as protected by the EU Charter of Fundamental Rights) are considered to be high-risk.” These include AI recruitment technology, credit scoring systems and biometric identification systems.
- Limited Risk: “For certain AI systems specific transparency requirements are imposed, for example where there is a clear risk of manipulation (e.g., via the use of chatbots). Users should be aware that they are interacting with a machine.”
- Minimal Risk: “All other AI systems can be developed and used subject to the existing legislation without additional legal obligations. The vast majority of AI systems currently used in the EU fall into this category. Voluntarily, providers of those systems may choose to apply the requirements for trustworthy AI and adhere to voluntary codes of conduct.” This category includes video games and spam filters.
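To make the four tiers easier to picture, here is a minimal triage sketch in Python. The tier assignments simply restate the examples quoted above; the names and structure are our own illustration, not anything the proposal prescribes, and any real classification would need legal review.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk categories in the draft regulation (our own labels)."""
    UNACCEPTABLE = "banned outright"
    HIGH = "strict obligations, e.g. conformity assessment and oversight"
    LIMITED = "transparency requirements"
    MINIMAL = "no additional legal obligations; voluntary codes apply"

# Illustrative mapping of example systems to tiers, restating the
# examples given in the proposal; this is a simplification, not advice.
EXAMPLE_SYSTEMS = {
    "government social scoring": RiskTier.UNACCEPTABLE,
    "AI recruitment technology": RiskTier.HIGH,
    "credit scoring system": RiskTier.HIGH,
    "customer-facing chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}

for system, tier in EXAMPLE_SYSTEMS.items():
    print(f"{system}: {tier.name} ({tier.value})")
```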
Categorising AI by risk means that not all organisations will have to alter their processes to become compliant. Many AI systems will likely fall into the ‘minimal’ or ‘limited’ risk categories. For the latter, the main requirement of the draft regulation is transparency, meaning that companies must make it clear to their customers when they are communicating with a machine rather than a human.
That said, the draft also encourages a ‘voluntary code of conduct’ for these categories, whereby organisations should do their best to ensure AI is developed and used ethically, irrespective of a government mandate to do so.
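Returning to the transparency requirement for limited-risk systems, the sketch below shows one way a chatbot could surface a machine disclosure to users. Every name in it (including `generate_answer`) is a hypothetical placeholder of our own; the regulation requires the disclosure, not any particular implementation.

```python
MACHINE_DISCLOSURE = "You are chatting with an automated assistant, not a human."

def generate_answer(user_message: str) -> str:
    # Hypothetical stand-in for whatever AI model actually produces a reply.
    return "Thanks for your question. A colleague can follow up if needed."

def chatbot_reply(user_message: str) -> str:
    """Prepend the machine disclosure so users always know they are
    interacting with an AI system, as the draft regulation requires."""
    return f"{MACHINE_DISCLOSURE}\n\n{generate_answer(user_message)}"

print(chatbot_reply("What are your opening hours?"))
```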
In the case of high-risk AI systems, there are a number of obligations, including developing a risk management system, stringent design and development protocols and mitigation measures. For example, the draft regulation puts in place conformity assessments for high-risk use cases. The objective of these assessments is to “demonstrate that [the company’s] system complies with the mandatory requirements for trustworthy AI (e.g., data quality, documentation and traceability, transparency, human oversight, accuracy and robustness).”
As well as this, the draft mandates that high-risk systems are subject to human oversight. This must be accompanied by rigorous reporting of any error or failure found within an AI system that could have caused damage to a person or breached their fundamental rights. To keep track of high-risk AI systems, the draft regulation also proposes the creation of a public database, where providers will log their systems before they launch.
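To give a flavour of what that reporting duty could involve internally, here is a minimal sketch of an incident record an organisation might keep when an error or failure is detected. The field names and example details are our own assumptions; the draft regulation does not prescribe a record format.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIIncidentRecord:
    """Internal log entry for an error or failure in a high-risk AI system.
    Field names are illustrative assumptions, not regulatory requirements."""
    system_name: str
    description: str        # what went wrong
    potential_harm: str     # e.g. damage to a person or a rights breach
    human_reviewer: str     # who exercised oversight over the incident
    detected_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )
    reported_to_authority: bool = False  # set once regulators are notified

# Example usage with invented details:
incident = AIIncidentRecord(
    system_name="CV screening model v2",
    description="Model systematically down-ranked applicants over 50",
    potential_harm="Possible age discrimination in recruitment",
    human_reviewer="Compliance team lead",
)
print(incident)
```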
What are the penalties for non-compliance?
To ensure implementation and enforcement, the proposal puts forward the establishment of a European Artificial Intelligence Board (“EAIB”). The EAIB will be responsible for issuing penalties related to non-compliance.
In terms of fines, the draft regulation proposes a three-tier regime, similar to the GDPR’s (a worked example follows the list). The tiers are:
- Up to 2% of annual worldwide turnover or €10 million for supplying incorrect, incomplete or misleading information to notified bodies in response to a request
- Up to 4% of annual worldwide turnover or €20 million for non-compliance with any other requirement or obligation of the regulation
- Up to 6% of annual worldwide turnover or €30 million for violations of the prohibited, unacceptable-risk AI practices
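As a worked example, and assuming (as under the GDPR, though the extract above does not spell it out) that the higher of the fixed amount and the turnover percentage applies, the sketch below computes the maximum exposure at each tier.

```python
# Maximum fine per tier: (turnover percentage, fixed amount in euros).
# Assumes the higher of the two applies, as under the GDPR.
TIERS = {
    "misleading information": (0.02, 10_000_000),     # 2% or €10m
    "other non-compliance": (0.04, 20_000_000),       # 4% or €20m
    "unacceptable AI practices": (0.06, 30_000_000),  # 6% or €30m
}

def max_fine(tier: str, annual_worldwide_turnover_eur: float) -> float:
    """Return the maximum fine in euros for a tier and a company's turnover."""
    pct, fixed = TIERS[tier]
    return max(pct * annual_worldwide_turnover_eur, fixed)

# A company with €2bn worldwide turnover facing the top tier:
print(f"€{max_fine('unacceptable AI practices', 2_000_000_000):,.0f}")
# -> €120,000,000 (6% of €2bn exceeds the €30m floor)
```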
Who does it apply to and when will it be introduced?
When implemented, the draft regulation will apply to both public and private bodies that operate within the EU. If a company is based outside the EU but its AI system operates within the EU and impacts people there, the regulation will still apply, meaning it has extraterritorial effect, like the GDPR. The regulation will be relevant to both providers of AI systems and their clients.
In terms of a timeline for implementation, the draft is still very much in its infancy. The next step is for the European Parliament to discuss and adopt the proposal. Once that happens, it will then be introduced across the EU. However, this will likely take at least two years, if not longer, given the expected wrangling between the member states and the EU institutions, and lobbying from industry. Usually, with laws like this, there is a grace period, which gives organisations time to assess and alter their solutions for compliance.
Because the proposed regulation is at an early stage, we don’t advise that organisations rush to review their AI systems immediately. This is because, firstly, it’s possible that the articles within the draft regulation will change during the review process; and secondly, the requirements for high-risk systems will need further clarification for companies to understand where their solutions fall.
As the law continues to develop, we will keep you informed on its impact and what you need to do to comply.
Need help?
If you need help on where to start with your compliance or cyber security needs, contact us for a friendly, no-obligation chat.
"*" indicates required fields