Fully automated luxury plagiarism: who owns what in the age of AI
Illustrations: Yuto Tamura
Date: 22 September 2025
The emergence of generative AI has sparked countless lawsuits about creative copyright – but in doing so, has reignited a conversation artists have been having for centuries.
Light and Shade is a series exploring the challenges at the heart of the AI-creative conversation. As AI becomes increasingly present across the creative industries, the series examines the opportunities and dilemmas our community grapples with. It is grounded in interviews with technologists, researchers, artists, designers, creative founders, writers, lecturers and environmental and computational experts, offering a fuller view of the many sides of the story of AI’s creative influence.
Trying to discern the difference between inspiration and theft has vexed the creative industries for hundreds of years. But with the advent of generative AI we appear to have reached an epochal moment in this debate. Technology that harvests data from pre-existing works, and then uses it to generate entirely new content, raises a whole host of new questions about ownership and appropriation. If something is mechanically produced from somebody else’s work, does that person own the result? Or, if it looks different enough, can we understand it as something new?
For many, these are serious times that require us to think deeply about what we want the future of creative ownership to look like. For others, it’s all a fuss over nothing. To them, the story is the same as it’s ever been: people copy. That’s how art gets made.
Recently, a number of lawsuits have challenged conventional understandings of creative ownership in the AI age, and the headlines are full of such cases. Getty Images is suing Stability AI; authors Andrea Bartz, Charles Graeber and Kirk Wallace Johnson have brought a class action lawsuit against Anthropic; and another group of authors, including Ta-Nehisi Coates and George Saunders, has brought a case against OpenAI and Microsoft. Content publishers and producers are going to war. But the origins of this battle go back much further than the creation of generative AI.
To look forward, we first need to look back.
A copy of Warhol’s Campbell’s soup can
Statute of Anne 1710
This story, like all good stories, starts a long time ago. The Statute of Anne was enacted in Great Britain in 1710, and is widely considered the first modern copyright law. It was the first to recognise the legal rights of authors over publishers. It introduced time limits on how long a publisher could own the rights to a work, and ensured that rights would default back to the author when this time elapsed. For the first time, copyright was placed under the jurisdiction of the state, shifting power from the private publishers’ guild to the government – with the aim of redressing the balance of power and improving public access to creative works.
“There’s still no clear answer to who owns what in this era, and that’s worrying.”
Angela Oduor Lungati
The foundation of copyright
With the advent of copyright, the idea of ownership was introduced to the creative process. It’s here that the long road to understanding our current moment begins – when the focus of ownership shifted from the publisher to the author.
Perhaps more important than the specifics of the statute was the idea: the creator deserves to have their rights protected. And in fact, beyond the terms of a publishing agreement, ownership should default to them. This sets up the framework through which we still understand ownership today. Outside of a publishing agreement in which the creator is compensated, the work they created belongs to them – until, after a much longer period, it ultimately passes into the public domain. The author needs protection.
Fast forward 300 years, and the arrival of generative AI is not just disrupting the principles of copyright law but, some would say, threatening its very foundations. Here is a technology that is trained on the largely unauthorised use of pre-existing copyrighted works, yet generates entirely new material. In other words, every time you use a generative AI model you’re borrowing from other people’s work – often without knowing it.
The question then becomes: what rights should the original creative have? Perhaps for the first time since the Statute of Anne, we’re unclear about the essential lines between inspiration, author and publisher. In the eyes of some, there is now too much distance between the laws of yesterday and the technology of tomorrow. Technologist and Creative Commons chair Angela Oduor Lungati, who advocates for a fairer future involving AI, thinks that a radical rethink is required. “Copyright doesn’t feel like the right mechanism anymore,” she says.
Tonia Samsonova is the founder of Exactly.ai – a platform that lets brands and creative teams train private generative AI models on their own assets to produce on-brand visuals at scale. She believes these new threats to creative rights mean we must champion fair AI rather than abstain from it entirely. “The industry must adapt, ensure fair compensation for creators and stop pretending AI will disappear.”
A copy of a copy of Warhol’s Campbell’s soup can copy
Folsom v. Marsh 1841
The case of Folsom v. Marsh concerned a biography of George Washington that made extensive use of the first president’s previously published letters. The plaintiff argued this was copyright infringement, given that the biography copied some 300 pages verbatim. A Massachusetts circuit court ultimately agreed, ruling against the biographer. In doing so, it set a precedent for “fair use” that would hold for years to come. Principally, this meant any claim of fair use would hinge on a few factors: the nature of the copied work, the amount of material copied, and whether or not the alleged infringement impacts the market for the original work.
“These are entire systems built on unattributed labour from other people.”
James Bridle
Fair enough
Folsom v. Marsh introduces us to the next important idea when unpicking the implications of generative AI on creative ownership: fair use. Fair use is a legal framework for permitting the use of copyrighted works in certain contexts. It normally comes up as a form of defence against an accusation of copyright infringement, taking into consideration factors such as how much material has been used and what for. Quoting a line from a novel in a newspaper article, for instance, would be considered fair use. Using a copyrighted photograph as an album cover without permission – probably not.
Sadly, not all cases are that straightforward. Fair use is a slippery concept. Even the judge who ruled on Folsom v. Marsh in 1841 described copyright law as “almost evanescent” in nature. If it was evanescent then, try understanding it in the context of AI. Many argue that, provided the output of a generative AI model is substantially different from the original material it was trained on, the use is more like studying than copying – and so is fair. Indeed, that was the outcome of a lawsuit earlier this year, when 13 authors took on Meta, arguing their novels had been used without authorisation. Judge Vince Chhabria, ruling in San Francisco, concluded that Meta’s use of the material was “transformative”, and therefore fair.
But many take another view. In May of this year, the United States Copyright Office released a lengthy report on this very topic. Its conclusions were mixed. It argued that the fair use defence was a “matter of degree”, but acknowledged that defining this degree in the era of AI was the challenge. An AI model drawing on vast reserves of data is unlikely to clearly copy one artist, so could be fair use – whereas a model drawing just on the work of Picasso will make work that looks like Picasso’s, so that isn’t. The trouble is, most models exist somewhere between these two extremes.
However, one thing the report did reject was the assumption that AI training is “inherently transformative”. The report argued that drawing an analogy between an AI and a human simply “learning” from material was disingenuous: unlike a human, who will absorb an image but remember it imperfectly, an AI model can learn and reproduce it with complete accuracy.
A copy of a copy of a copy of Warhol’s Campbell’s soup can copy
Andy Warhol and “Orange Prince”
If one artist provides a spiritual precursor to generative AI, it’s Andy Warhol. From his penchant for replication to the “factory” production line of his studio, he embraced many of the ideas that now have the art world gripped in fear. More recently, a case was put to the Supreme Court that sharpened this comparison.
In 1981, Lynn Goldsmith photographed Prince in her studio. In 1984, Vanity Fair licensed that image as a reference so Andy Warhol could create an illustration for its cover. Warhol went on to produce a Prince Series of a further 15 images based on the photograph – including his famous Orange Prince – which the Warhol Foundation has since profited from commercially, selling and licensing the artwork for merchandise.
When Prince died in 2016, Vanity Fair’s publisher Condé Nast put Orange Prince on the cover of a commemorative edition, crediting Warhol but not Goldsmith. Discovering this, and the wider Prince Series of which she’d been previously unaware, Goldsmith contacted the Warhol Foundation, which pre-emptively sued her, claiming fair use. Goldsmith counter-sued and the case ended up in front of the Supreme Court, which ruled against the Warhol Foundation, finding the image not “transformative” enough compared with the original to constitute fair use.
“The industry must adapt.”
Tonia Samsonova
Inspiration or appropriation
Distinguishing a new piece of art from the material it’s been inspired by is not a science. In legal terms, it often comes down to the hard-to-define measure of whether an image is “transformative” – that is, discernibly changed from its source material. The transformative clause is designed to give artists the wiggle room to absorb inspiration, while also protecting them from being ripped off by their peers. By ruling in favour of Lynn Goldsmith, the Supreme Court fell on the side of the original creator. It’s a victory, perhaps, for those who fear the erosion of ownership rights in the age of AI – a very real concern for artists who believe their styles are being replicated by AI models in a clearly derivative way. Inspiration, surely, this is not.
There is still, however, a huge amount of grey area. As with fair use, defining “transformative” is a matter of degree. Then there’s the question of who is culpable for the creative act in the first place. So far, lawsuits have only been filed against tech companies, but could we also see claims brought against individual creators who used generative AI and unwittingly created derivative outputs? Angela Oduor Lungati recognises this murkiness as reflective of deeper shortcomings in how we understand generative AI. “There’s still no clear answer to who owns what in this era,” she says, “and that’s worrying.”
Not everyone thinks it’s that complicated. Paula Scher is a partner at Pentagram – the world’s largest independent design consultancy. Having worked with generative AI on a high-profile project for the US government, she is sanguine about the arrival of this new technology – and what it means for creative ownership. In her eyes, the principles haven’t changed: “People steal, or they borrow. If you borrow, you can make it your own. It’s all part of a process of a community of designers who make things and are influenced by each other’s work.” Artist Grayson Perry said something similar recently. Speaking at the Charleston literature festival, he said he didn’t “really mind” if his work was used to train AI models, as he’d been “ripping off” others throughout his career.
A copy of a copy of a copy of a copy of Warhol’s Campbell’s soup can copy
Shepard Fairey – Barack Obama “Hope” Poster (2008)
If you’re able to cast your mind back to the US election of 2008, one image will likely spring to mind: Barack Obama’s face, set in red, beige and blue, with the word “HOPE” printed underneath. The poster was created by designer Shepard Fairey, but based on a photo taken by photographer Mannie Garcia. Fairey was consequently sued by The Associated Press (which owned Garcia’s photo) for using the image without permission; Fairey defended himself on the grounds of fair use.
The case was ultimately settled out of court, with Fairey paying an undisclosed amount and agreeing to share rights to the “Hope” image. However, Fairey then faced criminal charges for initially lying about which image he had used to produce the poster (in an attempt to avoid liability) and for destroying evidence, leading to two years of probation and 300 hours of community service. The case stands as an early warning about the internet’s new role in the appropriation of found imagery for a mass audience.
“That looks a lot like theft to me”
The case of the “Hope” poster is an interesting final stop on the road towards our current moment – a copyright lawsuit in which the internet played an important role in the appropriation of an existing piece of work. The photographer involved, Mannie Garcia, even said of the lawsuit: “I don’t condone people taking things, just because they can, off the internet… But in this case I think it’s a very unique situation.”
Perhaps a unique situation in 2008, it’s a commonplace conundrum now. Another important word in understanding our current moment is “scraping”: the automated harvesting of images and text from across the internet to form the datasets that AI models then learn from. The trouble is, like Shepard Fairey, many AI companies are less than transparent about where they’ve sourced their material, with many declining to disclose the datasets their models are trained on. For many, this lack of transparency suggests that AI companies know they are scraping creative works without permission.
“The whole argument about AI and copyright is very simple,” says artist, writer and technologist James Bridle. “These are entire systems built on unattributed labour from other people. And when there’s no remuneration involved in that, and there’s profit sought at the other end, that looks a lot like theft to me.”
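For readers curious about the mechanics, scraping is technically mundane. What follows is a minimal, hypothetical Python sketch – not any company’s actual pipeline – showing how a script might pull every image from a single page into a local folder; dataset builders apply the same idea across billions of pages. The URL is a placeholder, and the widely used requests and BeautifulSoup libraries are assumed.

```python
# A minimal, illustrative scraper: fetch one page, download every image it
# references. Real dataset pipelines apply the same idea at web scale.
# The target URL below is a placeholder, not a real training source.
import os
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

def scrape_images(page_url: str, out_dir: str = "dataset") -> int:
    """Save every <img> found on page_url into out_dir; return the count."""
    os.makedirs(out_dir, exist_ok=True)
    html = requests.get(page_url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    saved = 0
    for img in soup.find_all("img"):
        src = img.get("src")
        if not src:
            continue
        img_url = urljoin(page_url, src)  # resolve relative links
        data = requests.get(img_url, timeout=10).content
        with open(os.path.join(out_dir, f"image_{saved:05d}.jpg"), "wb") as f:
            f.write(data)
        saved += 1
    return saved

if __name__ == "__main__":
    print(scrape_images("https://example.com/gallery"), "images saved")
```

The gap between this toy script and an industrial crawler is one of scale, not of kind – which is precisely why the question of permission looms so large.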
A copy of a copy of a copy of a copy of a copy of Warhol’s Campbell’s soup can copy
“Copyright doesn’t feel like the right mechanism anymore.”
Angela Oduor Lungati
What comes next?
The question then becomes: if copyright is no longer fit for purpose, what should new solutions look like – and on whose principles should they be founded? Many critics of generative AI will hope that the current swell of lawsuits pushes the technology back and challenges what they perceive to be its inherently exploitative, derivative nature. In the eyes of others, it’s too late for that. As the founder of an AI platform attempting to find a new solution, Tonia Samsonova is clear on this: the industry needs to wake up to the new structure of ownership, and start crediting people accordingly. “Creators should charge an additional fee for AI because their work will inevitably be reproduced with AI, so their fees should increase,” she says. “If clients use AI to reproduce visuals, they should pay more up front. Not just for the design, but for the engine behind it. Contracts and pricing models in the creative industry must evolve to reflect the true value of original IP.”
If you haven’t been keeping an eye on the many ongoing intellectual property lawsuits, now’s the time to start. What unfolds in the coming years will set the course for the future of creative ownership in the age of AI. As things currently stand, it’s a battle framed in terms of artists versus big tech. What remains to be seen is whether or not these two supposedly opposing sides can find common ground. The story that started with the Statute of Anne is about to gain a significant new chapter. The question is: who will be the ones to write it?
A copy of a copy of a copy of a copy of a copy of a copy of Warhol’s Campbell’s soup can copy