

Litigation Funding Podcast Series: Artificial Intelligence & The Future of Intellectual Property


In this podcast series, Emily O'Neill, General Counsel UK for Deminor Litigation Funding, interviews global professionals to discuss different aspects of litigation and litigation funding.

Deminor welcomes you to join the conversation as we summarise the key elements of Emily O'Neill's discussions with these experts, as captured in the podcast transcript below.

Litigation Funding Podcast Series: Artificial Intelligence & The Future of Intellectual Property

Podcast Preface:

Deminor General Counsel UK, Emily O'Neill (EON), speaks with Chris Mammen (CM), a leading IP lawyer, AI thought leader, and Managing Partner at international law firm Womble Bond Dickinson.

In this interview, Emily and Chris discuss the rapid development of generative AI tools, how they’re impacting issues around IP, copyright, ownership and inventorship, and how the landscape is likely to change as this technology progresses.

Podcast Transcript:

EON – Welcome to today's podcast. I'm joined by Chris Mammen of Womble Bond Dickinson.

Chris is an expert in the intersection of AI and IP and has spoken on and considered the topic in a range of different forums. There's a lot of discussion at the moment around AI ownership and inventorship of IP rights and how they interact. So welcome, Chris.

EON – AI is a really broad term. IP ownership in the work product of AI is a controversial topic. Can you outline the questions around the current law that are being raised by AI work products?

CM – Sure. So, let's start by saying that AI in various forms has been around for a long time. We have what is sometimes referred to as extractive AI or analytic AI, but that's not what we're talking about in this current round.

Starting about 18 months ago, generative AI came onto the scene, and what that's uniquely able to do is mimic recognisably human outputs, whether that's text, visual images, music or audio files. It's able to generate these outputs by ingesting lots of human-created work and learning how to mimic those things.

There are a variety of questions coming up right now about the copyright issues surrounding the inputs of all of that human-created content into the AI, as well as questions about copyright on the outputs, and what kinds of protections or infringement risks there are on the output side.

So it's useful to talk about both of those as conceptually separate topics. There are also, as you know, some interesting questions on the patent side that have been driven largely by Dr Stephen Thaler and his AI algorithm DABUS, for which he's tried to get patents around the world over the last several years, so we can talk about that as well.

On the input side, what we're seeing at the moment is a lot of litigation, mostly in the US but with a couple of cases in the UK as well, arguing that the act of training an AI on copyrighted material is unauthorised copying and does not fall within the fair use exemption, which exists in US copyright law and has analogous versions in some other copyright schemes around the world. We don't know the answer to this yet, as it's a question that is being actively litigated as we speak.

There are something like two dozen lawsuits pending in the US, brought by computer coders, visual artists, musical artists, and of course, authors. And so we're watching those cases with great interest.

There are a couple of cases focused on copyright issues on the outputs, and two in particular that I will mention: one brought by Getty Images and one brought by the New York Times.

The Getty Images case has counterparts pending in both the US and the UK, and in large part, these cases focus on a problem in AI, referred to as memorisation. That’s when the output of the generative AI looks uncannily like one of the copyrighted or protected inputs. So if something passes through and is more or less an exact copy on the output side, there's an argument that it's unauthorised copying.

There is also a question of whether using the copyrighted inputs and mixing them up in some fashion so that they are contributing ingredients to some hybrid output, may also constitute copying. That question hasn't really come up yet, and some of the AI platforms have tried to pre-empt that conversation by issuing promises that under certain conditions they will indemnify users of their platform if they're accused of copyright infringement based on entering prompts into the AI platform.

There's a lot of detail in that, and I don't want to go into the detail, but the main issues on the copyright side relate to the inputs and outputs.

One other interesting thing that's unique to the UK is that, since 1988, there's been a special provision in the UK Copyright Act that provides a shorter-term copyright for computer-generated or machine-generated works.

I'm not entirely clear what was going on in the 1980s to prompt Parliament to add that to the UK Copyright Act, but it's an important difference between the UK and many of the other economies around the world, including the US.

EON – You talked about inputs and outputs, and I've been considering the risks of AI output and its use in business. These generative AI systems are trained on data lakes of third-party information, so what are the risks to businesses of using AI to generate content?

CM – There are a couple of risks associated with this. The first is the risk of memorisation that I mentioned in connection with the New York Times and Getty Images cases. If a company is using generative AI to create content, there is some risk, though I can't quantify how much, that the outputs could be memorised from some copyrighted material.

There is also a risk of exposure on the fair use input side passing through to users, depending on how those cases involving fair use and copyrighted inputs come out. So there's some risk there.

Another risk to keep in mind is that, other than this special provision in UK law, copyright authorities have generally said that AI-generated works cannot be protected. So if a company is using generative AI to produce advertising campaigns, write computer code, or create images for use in marketing materials, those outputs may not be protectable by copyright. They may be in the public domain, so there's a risk of them being free for others to use.

The fourth category of risk is the extent to which companies themselves are content creators. The widespread use of AI, if it's determined to be protected by the fair use exemption, may have a significant impact on the market value of the content created by traditional creators.

In other words, if a company not only uses generative AI to create outputs but also owns copyrights in materials that are out there and potentially used as inputs, then any current market value for those materials may go down. So there's a risk of loss of value for that created content.

EON – You mentioned that, in most jurisdictions, AI-generated content is not protectable by copyright. Could it be protected as a trademark if a business has used AI to generate content in that way?

CM – In theory, things that are protectable by trademark have some value that identifies the brand, and in theory it doesn't matter whether they were created by an AI, an artist, or at random. So there is a narrow window for protection of things like that. But that window may be fairly limited, and the reason, at least as articulated by the US Copyright Office, is that the amount of creative activity that goes into typing a prompt into a generative AI platform isn't sufficient creative control over the outputs.

If you go into something like DALL·E and type a prompt in, it'll give you four outputs, and then if you type the same prompt again, it'll give you four different outputs, and you don't necessarily have that much creative input as between those 4, 8 or 16 outputs. So for now, the Copyright Office has said that the level of creative input is not enough.

EON – Let's move on to talking about inventions and patents. On AI-generated inventions, there have been the DABUS cases in a number of jurisdictions, but what is the current thinking around inventorship? And can AI itself be an inventor?

CM – I'm not aware of any jurisdiction in which there has been a determination that an AI can be an inventor. The one place where Dr Thaler and his team have prevailed in getting a patent issued is South Africa, but the process of obtaining a patent there is more like copyright registration or trademark registration, in the sense that there's no substantive examination and there's no requirement to name an inventor. So DABUS patents have been issued in South Africa, but applications have been rejected everywhere else because an inventor has to be human.

In the US, that determination has gone all the way up to the Supreme Court. The Patent Office rejected Thaler's application. Thaler challenged that in the federal district court, which rejected the challenge. Then it went to the US Court of Appeals for the Federal Circuit, which also rejected it, and finally the Supreme Court declined to hear the case.

In the UK, the outcome was the same. There's a determination that an inventor, as named on a UK patent application, has to be a human. There's a slight difference in the UK process, which makes it a much narrower case. In the UK, the application form says you need to name the inventor if you know who it is, and whoever you name has to be a human. If you don't know who to name as the inventor, then you, as the applicant, have to say how you came to have the right to file this patent application.

This created some interesting conundrums for Dr Thaler, who really wanted to list DABUS on that inventor form. The courts, all the way up to the UK Supreme Court, said you can't put DABUS as the name of the inventor. Thaler's response was: as the owner of the computer system on which DABUS is installed, I have accession rights to whatever the outputs of DABUS are; therefore, if anybody can apply for a patent for what DABUS came up with, it's got to be me.

Very interestingly, both the UK Court of Appeal and the Supreme Court went all the way back to the writings of Blackstone to talk about this law of accession. This is the idea that if I own a piece of property and there's an apple tree on that property and the apple tree produces apples, I also own the apples as the product of the tree.

The courts said, well, that may well apply to apples, but it does not apply to intangibles like an algorithmic output from an AI that happens to be on a physical computer that you own. So it was an interesting twist between British legal history and this new question.

EON – I think the accession point is an interesting argument, and actually maybe slightly different, because although you may own the physical computer, you might only have a licence, or a limited licence, to use the software. Then the argument will come down to who owns the actual AI, and it could be that the company that developed the AI ends up owning a lot of inventions.

CM – Yes, you're right. It is potentially much more complicated in many cases than the apple tree example. So it probably was not a bad idea to steer clear of drawing that analogy.

EON –  What's the current position in terms of who owns the output of whatever the AI has produced? Do the terms and conditions on the AI platform say that the company that invented the AI will own any output, or that the user will own any output? How is that being dealt with at the moment? 

CM – That's a really interesting question. Let me start with this provision in the UK copyright law, which says - and I'm going to quote this imprecisely - something to the effect that the owner of the copyright in a computer-generated work will be the person or entity responsible for making the arrangements for the work to be generated.

The way it’s articulated is a mouthful, and it's not at all clear whether that means the owners of the computers, the owners of the software, or the person who enters the prompt, or if it's shared in some fashion. So I think that there will likely be some further analysis of that question as we move forward.

You raise a good question beyond that, about who owns the outputs. There are no IP rights in those outputs, so from that perspective, in the US, they go into the public domain; they're not owned. That does not necessarily answer the question of whether there are contractual terms and conditions that specify whether it's the platform or the users that own, or are entitled to use, the outputs.

So to the extent that there are rights in this, as between private parties, you would need to look at contractual rights. A separate question arises between the parties involved in a use of the platform and third parties who may find one of those images or a piece of AI-generated text online: to what extent can they freely use it, and might they have some liability to either of those parties? That could be a third-party beneficiary question.

EON –  Recently, the USPTO issued some guidance around inventors and AI. Could you talk us through what that means?

CM – The guidance that was issued in early February 2024 from the USPTO really doesn't change the status quo, but it may serve to clarify the status quo a bit.

The first of the two items that have come out in the past couple of weeks covers the responsibility of practitioners before the US Patent Office to make sure that whatever they submit to the Patent Office is truthful and accurate, regardless of whether the practitioner used AI to generate some of that work product.

That just reiterates that the human who puts their signature on the document before it goes in is responsible for what's in that document, regardless of how it was created. That's a common-sense solution, grounded in existing doctrine, applied to this new technology.

Similarly, in the invention context, the Patent Office has issued some further guidance that reiterates that AIs cannot be named inventors. Humans who work with AIs, using them as research tools, should be the named inventors if they qualify under existing guidelines for being named inventors.

I would venture to say that in most instances, there will be a human involved in the process, who has a level of participation that qualifies under the existing standards for being a named inventor.

There may be a smaller quantum of human involvement in the inventing process than there might have been in past years where multiple humans are named as inventors. In those instances where there's a team of multiple humans, there may be some who have very limited involvement but still cross the threshold that suffices to be named as a human inventor. That's the same threshold that applies now.

We can imagine that in the future there will be a general artificial intelligence that comes up with inventions such that there are no humans in the loop who can be named as an inventor. That question is not currently addressed by these recent clarifications from the Patent Office. But for now, the humans who are involved in the process are the ones who can be named as inventors, and they have to meet the normal, usual requirements for being named as an inventor.

EON – It sounds like an iterative process as the technology develops?

CM – Yes.

EON – There's a lot of discussion around the rapid development of AI capabilities and the potential for AI in the future. I think what everyone's concerned about is: is AI going to put us all out of a job?

CM – I'm optimistic that that's not going to be the case. Every industrial revolution, and this is appropriately thought of as another industrial revolution, changes the mix of jobs, changes the mix of things that humans do and the ways that we participate in the economy.

But we have continued to grow, the kinds of jobs change, and there will be new jobs that we haven't even thought of yet. For now, as we were discussing with Patent Office practitioners, the final step in the process, having a human being exercise human judgement, is still there, and I think it's very firmly still there, even as these technological developments help us do our jobs better, faster and cheaper.

Litigation Funding Podcast Series with Emily O'Neill

Litigation Funding Podcast Series - Next Steps and Further Information:

Thanks for joining Deminor's Litigation Funding Podcast Series as we dive deep into core topics in funding litigation.

Keep a lookout for our upcoming conversations as Deminor General Counsel UK, Emily O'Neill, speaks with several more experts to get their insights into different aspects of litigation funding.

If you would like to connect with either Emily or Chris on LinkedIn, please click on the links below:

Emily O’Neill – Deminor General Counsel UK and Global IP lead

Chris Mammen – Intellectual Property Litigator and Partner at Womble Bond Dickinson

***


Written on March 26, 2024 by Deminor

Deminor helps businesses and investors monetise legal claims.
