On 31 August, the UK House of Commons Science, Innovation and Technology Committee (SITC) published an interim report on the implementation of a regulatory regime for AI. The report sets out 12 challenges of AI governance, covering the technology’s purpose and benefits as well as its risks to the economy and society.
This follows a growing urgency to ensure that UK regulation does not fall behind the increasing pace of technological advances, nor behind the EU and the US in setting regulatory frameworks for governing AI. The EU is forging ahead with its AI Act, and the US has published a Blueprint for an AI Bill of Rights to inform the development of AI regulations.
“The AI white paper should be welcomed as an initial effort to engage with this complex task, but its proposed approach is already risking falling behind the pace of development of AI,” the SITC said in its interim report on AI governance. “This threat is made more acute by the efforts of other jurisdictions, principally the European Union and the United States, to set international standards.”
Back in March 2023, the UK government published an AI white paper setting out its approach to regulating AI, in which it stated it would ensure the responsible development of the technology whilst maintaining public trust. The same paper noted that the government’s framework will be underpinned by five principles to guide responsible innovation and use of AI: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.
“We’re now saying that we need to go beyond the five principles that the government set out in March… [which] are at a higher level of abstraction,” SITC Chair Greg Clark stated. He went on to say that the 12 challenges are intended to be “more concrete and specific” and that the committee is due to take evidence from UK regulators on 25 October 2023.
12 challenges of AI governance that must be met
The committee’s interim report stresses the importance of policymakers adopting measures that safely leverage the technology’s benefits and encourage innovation, whilst providing appropriate safeguards. The committee has identified the following key challenges that AI governance must address:
1) Bias – AI can lead to unacceptable biases
2) Privacy – AI tools rely on data, so privacy challenges may arise: individuals may be identified and their data used in ways beyond what they expect
3) Misrepresentation (in AI-generated material) – AI may generate material that misrepresents behaviour, opinions or character and could even be used for fraudulent purposes
4) Access to data – for AI to be effective, AI developers and researchers need access to large high-quality datasets which only a few organisations possess.
5) Access to computing power – to develop AI researchers require access to significant computing power for the development, training and refining of AI models and tools. As with vast data sets, access to computing power is limited to a few organisations due to the high cost of such power.
6) Black box challenge (i.e., unexplainable AI) – some AI models and tools have become ‘black boxes’ and cannot explain their decision-making processes.
7) Open-source challenge (i.e., should powerful models be made open-source or proprietary?) – policymakers must weigh the transparency and innovation benefits of openly releasing models against the greater scope for misuse that open release can bring
8) IP (Intellectual Property) and copyright issues – AI uses other creators’ content, sometimes without permission. Therefore, the policy must ensure creators’ rights are protected.
9) Liability (i.e., who is liable for the harms done?) – the supply chains and players involved in AI models and tools are becoming increasingly complex. Therefore, policy must stipulate which stakeholders are responsible for harm caused by AI.
10) Employment challenge (i.e., job disruption by AI) – AI will impact the employment market by disrupting how people perform their jobs and which jobs are available.
11) International Coordination challenge (i.e., development of governance frameworks must be an international undertaking) – a global approach to AI must be coordinated as AI is a global technology.
12) Existential challenge (i.e., protections for national security) – there are security concerns regarding the use of AI with some believing AI is a real threat to human existence. As such, governance should be established to protect national security on an international scale.
What does this mean for your organisation?
Perhaps you are considering using ChatGPT to help streamline your business processes. It is prudent to fully understand the risks of such technology before you start using it. ChatGPT itself recently told the ICO: “Generative AI, like any other technology, has the potential to pose risks to data privacy if not used responsibly.”
Our advice
Completing data protection impact assessments (DPIAs) and seeking professional advice will help you understand the risks that using AI systems poses to your business. A robust DPIA will ensure that you have identified and assessed the risks to individuals and, most importantly, taken appropriate steps to mitigate those risks. Whilst many projects, especially AI projects, will always carry some degree of risk, it is imperative that these risks are minimised. A thorough DPIA will also demonstrate to the ICO that you are taking data protection seriously and have implemented relevant processes to protect individuals’ rights.
Data Protection by design and default
Under the UK GDPR, organisations must implement appropriate technical and organisational measures that reflect the data protection principles and safeguard individual rights. One key element of this is to ‘design’ AI systems that have privacy embedded into them. In practice, this means that privacy issues need to be carefully considered and fully addressed at the creation and development stage of new projects to ensure that the systems are designed in such a way that they provide the level of protection required under the law. It also means that you should use a ‘privacy first’ approach, ensuring that you do not collect and process excessive personal data. Data minimisation (restricting the collection of personal data to what is necessary) and purpose limitation (ensuring that the personal data is only used for the purpose for which it was collected and nothing more) should be closely observed.
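In code terms, data minimisation and purpose limitation can be enforced with a simple allow-list applied before any personal data reaches an AI system. The sketch below is purely illustrative (the field names and purposes are assumptions, not a compliance tool): fields not on the allow-list are dropped, and each retained field may only be used for its stated purpose.

```python
# Hypothetical "privacy by design" sketch: an allow-list maps each field an
# AI feature genuinely needs to the purpose it was collected for. Field
# names and purposes are illustrative assumptions.
ALLOWED_FIELDS = {
    "order_id": "fulfilment",
    "postcode_district": "delivery_eta",
}

def minimise(record: dict) -> dict:
    """Data minimisation: drop any field not on the allow-list."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

def purpose_permitted(field: str, purpose: str) -> bool:
    """Purpose limitation: a field may only be used for its stated purpose."""
    return ALLOWED_FIELDS.get(field) == purpose

raw = {"order_id": 42, "email": "user@example.com", "postcode_district": "SW1"}
print(minimise(raw))  # the email address is excluded: not on the allow-list
print(purpose_permitted("postcode_district", "marketing"))  # False: new purpose
```

In a real system the allow-list would be derived from the DPIA itself, so that the risks identified on paper are reflected in what the code can actually collect and process.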
What is next for the AI regulatory regime?
From the information available to date, it would seem that the UK is taking a different approach to the regulation of AI compared to the EU. Whilst the EU is adopting a risk-based approach (unacceptable risk, high risk, limited risk, and minimal risk), the UK plans to adopt a principles-based approach using the five principles mentioned above. The latter suggests a more flexible approach than the EU’s, and these two different regimes may result in challenges for organisations operating across both jurisdictions.
However, whether or not the UK diverges from the EU on the regulation of AI, both have similar objectives: to protect users whilst encouraging innovation.
Do you need advice on GDPR or data protection?
We are a team of highly experienced experts with qualifications in GDPR and data protection. We can tailor packages to suit your organisation: we know that no two organisations are the same, and there is no one-size-fits-all approach when it comes to the security of your valuable data. Contact us today for a free quote or to chat about your compliance needs.