It is because of AI’s potential to change the world, for both good and bad, that many feel it needs to be regulated. In May of this year, Google’s CEO, Sundar Pichai, said that “AI is too important not to regulate, and too important not to regulate well.”
The explosion of AI, and the hype around it, has led to increasing regulatory scrutiny. Around the world we are seeing different approaches being taken to the regulation of AI, but also some commonalities between those approaches.
Watch Ann-Maree Blake’s keynote presentation at the Consulegis conference in Cardiff.
UK and EU’s Stance on AI Regulation
Both the European Union and the United Kingdom have stepped up to the AI regulation plate with enthusiasm but have taken different approaches:
The EU has put forward a broad and prescriptive proposal in the AI Act, which aims to regulate AI through a risk-based approach under which compliance obligations increase with the risk of the specific use case. The UK, in turn, has decided to abstain from new legislation for the time being, relying instead on existing regulations and regulators with an AI-specific overlay.
In the EU the thinking is about “empowering people with a new generation of technologies” whereas in the UK the thinking is about “driving growth and unlocking innovation.”
How will the EU Regulate AI?
The EU is looking to put in place what is probably the world’s most ambitious framework for AI regulation.
It is very much aiming to be a global leader in AI regulation, in much the same way as it has with data protection via the GDPR.
The EU AI Act looks at the risks posed by AI systems and seeks to deal with those risks in a practical way by categorising AI systems into four levels of risk: unacceptable, high, limited, and minimal or no risk.
If it is passed, it will require companies to assess AI system risks before those systems are put into use.
Companies will be required to obtain permits for high-risk AI and to provide transparency and accountability for those high-risk systems.
How will the UK Regulate AI?
The UK is not proposing to create umbrella legislation. Instead, its approach, set out in a white paper published in March 2023, reflects the ambition for the UK to be “the best place in the world to build, test and use AI technology”.
Broadly speaking, the approach in the white paper, Establishing a pro-innovation approach to AI regulation, rests on two main elements: first, AI principles that existing regulators (such as the Information Commissioner’s Office (ICO) and the Financial Conduct Authority (FCA)) will be asked to implement; and second, a set of new ‘central functions’ to support this work.
Global Standards for Regulating AI
The regulation of AI is a complex area, and whether there will eventually be a true global standard for AI regulation remains to be seen.
However, companies should not wait until there is a clear regulatory framework in place in their jurisdiction, as there are key things they can do now, including:
- making sure that boards can demonstrate board-level oversight of AI risks, which involves asking management to put AI on the agenda of board meetings so that boards receive both management’s views and the perspectives of outside advisors.
- determining who is responsible for AI governance within their organisation and establishing an AI committee.
- putting in place policies that govern the development and use of AI, and making sure that those policies align with the approach to regulation taken in the jurisdiction or jurisdictions relevant to their business.
- keeping a register of their use of AI tools and systems to understand how AI is used within their organisation.
- watching legislative and regulatory developments that apply to their business, such as the EU’s proposed AI Act and AI Liability Directive, and any new or updated guidance from UK regulators (which is expected within the next 6 to 12 months).
To discuss any of the points raised in this article, please contact Ann-Maree Blake.