The launch of large language and image AI models, sometimes called generative AI or foundation models, seems to have woken the world up to the reality that artificial intelligence is no longer confined to the deep, secretive realms of technology companies, but is, well…a reality, currently impacting many lives.
We may not be zooming around in driverless cars (yet), but anything you read or see, particularly on the internet, could be AI-generated. And if you have an Alexa in your home, you already have an AI virtual assistant at your beck and call. Naturally, such an enormous step in humanity’s journey stirs up feelings of excitement and trepidation.
One reason for the latter is that, as with the internet, governments around the world are struggling to put regulations around AI in place. Ordinary members of the public are crying out for this new technology to be controlled, but many jurisdictions seem reluctant to set rules and boundaries. In this article, I explore the different approaches taken by the UK and the EU in their proposals for regulating AI, and what this means for people and businesses.
UK AI White Paper
The Department for Science, Innovation and Technology published a white paper on 29 March 2023 entitled A Pro-Innovation Approach to AI Regulation, which sets out the UK Government’s plan for regulating AI development. It makes clear that, going forward, the UK will take a ‘pro-innovation’ stance and that regulation will be light-touch.
“Responding to risk and building public trust are important drivers for regulation. But clear and consistent regulation can also support business investment and build confidence in innovation. Throughout our extensive engagement, industry repeatedly emphasised that consumer trust is key to the success of innovation economies. We therefore need a clear, proportionate approach to regulation that enables the responsible application of AI to flourish. Instead of creating cumbersome rules applying to all AI technologies, our framework ensures that regulatory measures are proportionate to context and outcomes, by focusing on the use of AI rather than the technology itself.”
The proposed framework will be underpinned by five key principles, namely:
- Safety, security and robustness
- Appropriate transparency and explainability
- Fairness
- Accountability and governance
- Contestability and redress
These principles will not be implemented in legislation in the short term as the government is determined not to impinge on AI-driven investment, innovation, and entrepreneurial opportunities. Instead, the principles will be applied by regulators who can use their ‘domain-specific expertise to tailor the implementation of the principles to the specific context in which AI is used.’ In due course, legislation will be passed that requires regulators to have regard to the principles.
The EU’s AI Act
Similar to its stance on Genetically Modified Organisms (GMOs), the EU’s approach to AI is firmly in the ‘safety first’ camp. The proposed EU AI Act originated in the European Commission’s April 2021 legislative proposal, which built on the bloc’s earlier White Paper on Artificial Intelligence and forms part of its digital strategy. According to an article on the European Parliament website:
‘Parliament’s priority is to make sure that AI systems used in the EU are safe, transparent, traceable, non-discriminatory and environmentally friendly. AI systems should be overseen by people, rather than by automation, to prevent harmful outcomes.
Parliament also wants to establish a technology-neutral, uniform definition for AI that could be applied to future AI systems.’
The AI Act imposes different rules on AI systems based on their risk profiles, namely (i) unacceptable risk, (ii) high risk, and (iii) low or minimal risk.
Title II bans AI systems whose use is considered unacceptable because it contravenes EU values, for instance by violating fundamental rights. Banned applications include real-time facial recognition, social scoring, and cognitive behavioural manipulation.
AI systems defined as high risk are divided into two categories:
- AI systems that are used in products falling under the EU’s product safety legislation.
- AI systems falling into eight specific areas that will have to be registered in an EU database:
- Biometric identification and categorisation of natural persons
- Management and operation of critical infrastructure
- Education and vocational training
- Employment, worker management and access to self-employment
- Access to and enjoyment of essential private services and public services and benefits
- Law enforcement
- Migration, asylum and border control management
- Assistance in legal interpretation and application of the law
Developers of large language and image models such as ChatGPT, which fall under the category of generative AI, will have to comply with additional transparency requirements, for example:
- Disclose that content has been generated by AI.
- Design safeguards to prevent the model from generating illegal content.
- Publish summaries of any copyrighted data used to train the model.
Like the EU GDPR, the AI Act will cover not only EU companies but also overseas organisations that provide AI services to EU citizens. The Act also gives regulators the power to impose significant fines for non-compliance.
Concluding comments
The UK and the EU are taking vastly different approaches to regulating AI. Whether we are right and they are wrong is a question that only those looking back on this period will be able to answer.
However, although the UK is no longer part of the EU, the bloc remains the UK’s largest trading partner, and through the AI Act it can effectively dictate how UK businesses develop AI systems. A light-touch regulatory landscape may be nice in theory, but in practice, British organisations will be forced to comply with the EU AI Act if they want to trade with their nearest neighbour.
To discuss any of the points raised in this article, please contact Ann-Maree Blake.