Latest Posts

Approaches to the Regulation of AI

It is because of AI’s potential to change the world, for both good and bad, that many feel it needs to be regulated. In May of this year, Google’s CEO, Sundar Pichai, said that “AI is too important not to regulate, and too important not to regulate well.”

The explosion of AI, and the hype around it, has led to increasing regulatory scrutiny. Around the world we are seeing different approaches being taken to the regulation of AI, but also some commonalities between them.

Watch Ann-Maree Blake’s keynote presentation at the Consulegis conference in Cardiff.

UK AND EU’S STANCE ON AI REGULATION

Both the European Union and the United Kingdom have stepped up to the AI regulation plate with enthusiasm but have taken different approaches:

The EU has put forth a broad and prescriptive proposal in the AI Act which aims to regulate AI by adopting a risk-based approach that increases the compliance obligations depending on the specific use case. The UK, in turn, has decided to abstain from new legislation for the time being, relying instead on existing regulations and regulators with an AI-specific overlay.

In the EU the thinking is about “empowering people with a new generation of technologies” whereas in the UK the thinking is about “driving growth and unlocking innovation.”

HOW WILL THE EU REGULATE AI?

The EU is looking to put in place what is probably the most ambitious framework in the world in terms of AI regulation.

It is very much aiming to be a global leader in AI regulation, in much the same way as it has with data protection via the GDPR.

The EU AI Act looks at the risks of AI systems and tries to deal with those risks in a practical way by categorising AI systems into four levels of risk: unacceptable, high, limited, and minimal or no risk.

If it is passed, it will require companies to assess AI system risks before those systems are put into use.

Companies will be required to obtain permits for high-risk AI systems and to provide transparency and accountability for those systems.

HOW WILL THE UK REGULATE AI?

The UK is not proposing to create umbrella legislation. Instead, its approach, set out in a white paper published in March 2023, reflects the ambition of the UK being “the best place in the world to build, test and use AI technology”.

Broadly speaking, the approach in the White Paper, ‘Establishing a pro-innovation approach to AI regulation’, rests on two main elements: firstly, AI principles that existing regulators (such as the Information Commissioner’s Office (ICO) and the Financial Conduct Authority (FCA)) will be asked to implement, and secondly, a set of new ‘central functions’ to support this work.

GLOBAL STANDARDS FOR REGULATING AI

The regulation of AI is a complex area, and whether there will eventually be a true global standard for AI regulation remains to be seen.

However, companies should not wait until there is a clear regulatory framework in place in their jurisdiction, as there are key things they can do now, including:

  • making sure that boards can demonstrate oversight of AI risks. That involves asking management to put AI on the agenda of board meetings so that the board receives both management’s views and perspectives from outside advisors.
  • determining who is responsible for AI governance within their organisation and establishing an AI committee.
  • putting in place policies that govern the development and use of AI, and making sure that those policies align with the approach to regulation taken in the jurisdiction or jurisdictions which are relevant to their business.
  • keeping a register of their use of AI tools and systems to understand how AI is used within their organisation.
  • watching legislative and regulatory developments that apply to their business, such as the EU’s proposed AI Act and AI Liability Directive, and any new or updated guidance from UK regulators (which is expected within the next 6 to 12 months).

To discuss any of the points raised in this article, please contact Ann-Maree Blake or fill in the form below.

Is AI Generated Content Protected By Copyright Law?

At the time of writing, 100 million people around the world have used ChatGPT and more than 15 billion images have been created using text-to-image algorithms since last year.

Worryingly, 68% of employees have not informed their boss that they are using artificial intelligence generated content (AIGC) when undertaking certain tasks such as writing emails and marketing/sales content, scheduling meetings, creating images, and analysing data.

The reason this lack of employer oversight is concerning is that the law surrounding AIGC is, to put it generously, unfit for purpose, especially regarding intellectual property (IP). This article, part one of a two-part series, provides a snapshot of the latest information around the issue of whether AIGC can be protected under copyright law.

Copyright law is governed by the Copyright, Designs and Patents Act (CDPA) 1988. Copyright seeks to protect the form of creative ideas, not the ideas themselves (these can be protected via confidentiality). Copyright provides a vehicle for the authors of original work to protect their creativity and stop others from using it without permission for their own advantage.

The following categories of works are protected under UK copyright law:

  • Original literary, dramatic, musical, or artistic works which, in the case of literary, dramatic, or musical works, are recorded in some way.
  • Sound recordings, films, or broadcasts.
  • The typographical arrangements of published editions.

Both primary and secondary works are protected under the CDPA 1988, though primary works receive stronger protection because they require more significant amounts of creativity and originality.

In the case of literary, dramatic, musical, or artistic works, the author or creator of the work is usually the first owner of any associated copyright. The exception to this is if any of the aforementioned works are created by an employee in the course of their employment. In this case, the employer is the copyright owner unless there is an agreement to the contrary. Where two or more authors have created a work, they may have joint ownership of the copyright if their contributions are indivisible, or co-authorship where separate contributions can be identified.

Under the CDPA 1988, computer-generated works are defined as “generated by computer in circumstances such that there is no human author of the work”. Therefore, the law suggests content generated by an artificial intelligence (AI) can be protected by copyright (more on this below).

Let us imagine that one of your employees logs onto ChatGPT and inputs the following:

“1000 words on why triple glazing is better than double glazing”

ChatGPT provides the employee with a 1000-word output. They lightly edit the piece, for example, by adding a call to action, and then publish it on the organisation’s website as a blog.

Who owns the copyright? There are five possibilities:

  1. the employee who entered the prompt;
  2. the employer;
  3. OpenAI, the developer of ChatGPT;
  4. ChatGPT itself; or
  5. the authors of the original works used to train the model.

We can discount possibility one under the CDPA 1988 as the AIGC was made in the course of employment. Possibility four can also be dismissed because the CDPA 1988 does not recognise a non-human as the author or owner of a work, and given the Government’s response to the 2021 AI consultation, this stance is unlikely to change in the near future. Possibility three cannot apply because under OpenAI’s terms and conditions, “Subject to your compliance with these Terms, OpenAI hereby assigns to you all its right, title and interest in and to Output.”

This leaves possibilities two and five. The latter is currently being fought out in various lawsuits on both sides of the Atlantic.

Therefore, we are left with possibility two – the employer. The next challenge is to establish whether the AI-created article can fulfil the CDPA 1988 requirements of originality, authorship, ownership, and duration of copyright.

It is arguable that the current level of sophistication of AIGC does not allow for originality. Everything ‘created’ by AIGC is already in existence. The developers simply scraped pre-existing content from the internet (without permission, hence the lawsuits) and trained their models on the enormous streams of pre-existing data. The employee cannot be the true ‘author’ of the article (thereby allowing them to pass on ownership to their employer) because they did not create it. We have already established that ChatGPT cannot be the author/owner of the work, and OpenAI has assigned its rights to the person who inputs the request into ChatGPT. The issue of the duration of copyright also creates problems, as in many cases the length of copyright protection is attached to the lifespan of the author. And as you may have guessed, machines cannot die.

The answer to the question of who owns the copyright in an AIGC work is, under current copyright law, no one, because the current legislation does not cover AIGC. If the above analysis seems confusing and contradictory, that is because it reflects the current state of the law.

At present, AIGC lacks protection under the provisions of the CDPA 1988. Interestingly, United States District Court Judge Beryl A. Howell recently ruled that AI-generated artwork cannot be copyrighted under current US law. In her decision, Judge Howell wrote that copyright has never been granted to work that was “absent any guiding human hand,” adding that “human authorship is a bedrock requirement of copyright.”

Wrapping up

Although AIGC does not benefit from copyright protection under the current CDPA 1988, this does not mean that the law cannot be amended to change the status quo. The Act is already contradictory, given that “the legal concept of originality is defined with reference to human authors and characteristics like personality, judgment, and skill” but originality can be applied to computer-generated work.

By amending the Act to extend authorship to non-human authors, not only could end-consumers rely on some form of IP protection, but it would also encourage investment in AI technology because innovators would be able to rely on IP law to protect their creative efforts.

In part two of this series on AIGC and copyright we will examine the risks of copyright infringement, both when training AI models and using the outputs of AI tools.

To discuss any of the points raised in this article, please contact Marcus Rebuck or fill in the form below.

Zoom Reverses Policy: Commitment to Customer Data Privacy Takes Centre Stage

In an age where technology intertwines seamlessly with our daily lives, safeguarding personal data has become a paramount concern. Recently, Zoom, a prominent player in the virtual communication realm, found itself at the heart of a controversy that shed light on the delicate balance between AI advancement and customer data privacy. The company’s policy changes related to AI training on customer data sent shockwaves through the tech community, prompting a swift reversal and a renewed commitment to protecting user information.

UNRAVELLING THE POLICY CHANGES

In March 2023, Zoom introduced amendments to its terms and conditions which seemingly granted the company extensive latitude in utilising customer data for training artificial intelligence (AI) models. These amendments were not noticed until early August, and once they came to public attention they set off a storm of public concern and scrutiny. Reports from various media outlets questioned the potential ramifications of these policy shifts for user privacy and the ethics of data usage.

THE BACKLASH AND COMPANY RESPONSE

The uproar sparked by the policy changes compelled Zoom to respond swiftly and decisively. The company published a blog post on 7 August 2023, subsequently edited on 11 August 2023, outlining its stance. In the post, Zoom clarified that it had no intention of exercising the sweeping rights granted by the revised terms. The company went further, asserting its commitment to customer data privacy and its respect for user concerns.

The company’s subsequent policy update explicitly stated that AI models would not be trained using customer video, audio, or chats without obtaining consent from the customers themselves. This commitment to obtaining explicit permission before utilising personal data for AI training purposes marked a significant step toward safeguarding user information.

THE BROADER IMPLICATIONS

Zoom’s experience serves as a poignant reminder of the growing tension between technological advancement and individual privacy rights. The incident has broader implications for the tech industry as a whole. It highlights the importance of transparent communication, robust privacy policies, and a proactive approach to addressing user concerns in the face of evolving technologies.

Find out more from Ann-Maree Blake and our Data Protection and Privacy service.



Get in Touch

Contact us today to discover how we can support you with legal solutions that stand out from the rest.
