
Stable Diffusion, unstable claims – AI copyright claims thrown out in the High Court

The High Court has now handed down its decision in Getty Images v Stability AI, a decision that will shape the legal landscape as AI develops. While the outcome may be appealed, it is important to understand the precedent currently established, as the decision gives important guidance on how UK law views generative AI models, online access to them, and the content they create.

How did we get here?

Getty Images is a global visual media company – in particular, a major licensor of stock images.

In turn, Stability AI is a London-based company that develops a generative AI model called Stable Diffusion. Stable Diffusion is designed to take a prompt from a user and generate an image based on that prompt.

Prompt: ‘A photograph of an astronaut riding a horse’

Generated image by Stable Diffusion:

Astronaut on a horse

To train its model, Stability AI used over 5 billion images and corresponding captions, including images and captions owned by Getty Images.

Getty Images brought five claims against Stability AI to the High Court in 2023:

  • Primary copyright infringement alleging that Stability AI copied Getty’s copyrighted images during the training of Stable Diffusion;
  • Secondary copyright infringement alleging that Stability AI imported an ‘article’ that was an infringing copy of Getty’s works;
  • Sui Generis database right infringement alleging the illegal scraping of Getty’s image libraries;
  • Trade mark infringement alleging that the AI-generated outputs contained Getty’s trade marked watermark; and
  • Passing off alleging that the presence of the AI-generated watermarks could mislead consumers.

The claims of primary copyright infringement and sui generis database right infringement were dropped by Getty during the trial because Getty could not prove that the training and scraping happened in the UK.

What is an AI model and model weight?

Before continuing, it is useful to briefly explain what is meant by the terms ‘AI model’ and ‘model weight’, since they appear throughout the decision.

An AI model contains millions, if not billions, of numerical values, called model weights. Model weights are used to transform an input and produce an output. Each AI model also has an architecture, which tells the model the exact sequence of operations needed to transform the input into the output.

When an AI model (like Stable Diffusion) is trained, it looks at a large amount of data (such as image and caption pairs) and gradually adjusts these weights so that it can accurately generate images matching a given caption.

A helpful way to view this is that each model weight tells the model how strongly a feature (like a colour, shape, or word pattern) should influence another. When combined, all the weights form the model’s internal ‘knowledge’; while these model weights result from reviewing millions of examples, they are merely a collection of numbers and do not directly store copies of the training data.

During training, each time the AI model takes in an image caption from the training dataset, it generates an image and compares it to the actual image associated with that caption. It uses the differences to adjust all the model weights.
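The training loop described above can be sketched in miniature. The example below is purely illustrative (and not drawn from the judgment): it trains a toy model with a single weight rather than an image generator, but the mechanics are the same ones a diffusion model applies at vastly greater scale, namely generate an output, measure the error against the real example, and nudge the weights to reduce that error.

```python
# Toy illustration of how training adjusts model weights.
# A real diffusion model has billions of weights; this "model"
# has just one, but the training loop is conceptually the same.

def train(pairs, steps=200, learning_rate=0.1):
    weight = 0.0  # the model's single "model weight", initially untrained
    for _ in range(steps):
        for x, target in pairs:
            output = weight * x                   # model transforms input into output
            error = output - target               # compare output to the real example
            weight -= learning_rate * error * x   # adjust the weight to reduce the error
    return weight

# Training data: input/target pairs that all follow the rule "target = 3 * x".
data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]
learned = train(data)
print(round(learned, 2))  # converges towards 3.0
```

Note that the trained weight ends up storing a learned number (here, roughly 3.0), not copies of the training data itself. This is the point the Court relied on: the weights record information derived from the examples, not reproductions of them.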

Copyright

Getty’s main copyright and database right claims were never fully decided. The ‘training’ claim fell away because Getty could not show that any training took place in the UK, and the ‘output’ claim became purely academic after the specific prompts relied upon were blocked. This left Getty with its secondary copyright infringement, trade mark infringement and passing off claims.

The secondary copyright infringement claim focused on whether the Stable Diffusion model could be considered an ‘infringing copy’. The Court rejected this. Although the law can apply to intangible items, an AI model that does not store or reproduce the original training images is not an infringing copy.

The Court set out a strict and positivist interpretation of the law, based on the fact that the model weights concerned record learned information, not stored reproductions.

The Court decided that downloading model weights into the UK could amount to importation of an intangible item. However, this only matters if the item itself is infringing, which the model was not. By contrast, hosted access (where users run the model online without downloading it) does not involve importation at all.

As a result, the secondary copyright claim failed.

Trade marks

Getty succeeded only in part on its trade mark claim. The Court found infringement in relation to certain earlier versions of the Stable Diffusion model that produced images displaying the Getty Images or iStock watermarks. These findings were confined to a handful of examples drawn from older model iterations, and not the newer editions of the tool. Examples of the watermark reproduction can be seen below:

Getty watermarked image of a crowd

Getty footballer image watermarked

The Court was careful not to extrapolate these findings more broadly. It declined to accept that the presence of watermarks in a few early outputs demonstrated a systemic issue across all versions of Stable Diffusion or across the wider output space. Later versions of the model incorporated filtering mechanisms designed to detect and remove watermarks, and the evidence before the Court suggested that these were largely effective. Getty was also unable to produce UK-based examples of watermark replication from newer releases.

Getty also argued that Stability had exploited or harmed the reputation of the Getty Images mark. This claim was unsuccessful: there was no evidence of unfair advantage or detriment, and the Court noted that watermarked images are undesirable to users and typically discarded.

Why the trade mark win was limited

The Court’s finding in Getty’s favour was narrow, reflecting both the limited evidence and the way generative AI systems develop.

Regarding evidence, the Court required clear, verifiable examples of infringing outputs linked to specific model versions and platforms. General claims about how the model behaved were not enough.

Later versions of the Stability model introduced effective fixes such as watermark filters and better safeguards. As a result, the finding was limited to a few historic examples rather than any general fault with the model.

Trade mark risk remains, but rights holders can address it by gathering clear, version-specific evidence of infringing outputs. Success will depend on demonstrating direct links between the offending material and the developer’s platform, rather than relying on broad claims about general model behaviour.

Implications for AI and IP

For copyright, the message is clear. Models will not be treated as infringing copies unless they actually store reproductions of training data. The Copyright, Designs and Patents Act (CDPA) does not easily apply to training that happens outside the UK, and hosted models fall outside the scope of ‘importation’. The Court also made clear that expanding copyright law to cover this kind of technology is a matter for Parliament, not the courts. In turn, the judgment sets out some practical measures generative AI developers can take to avoid copyright infringement.

For trade marks, rights holders still have some leverage, but only where a mark is clearly and specifically reproduced. As the technology develops and stronger filtering is introduced it is becoming increasingly difficult to prove that data containing a mark was used in training. Outputs are now far less likely to reproduce watermarks or logos. With effective filters in place, generative AI developers can prevent this type of infringement.

A potential appeal would need to address the Court’s key finding that models which do not store or reproduce copyright works are not infringing copies, and its careful approach to evidence in the trade mark claim. In the meantime, debate around text and data mining and AI training is likely to grow. For now, the UK position is clear: no secondary copyright infringement for models that only store weights, and limited trade mark liability confined to clear examples where watermarks and logos are reproduced.

If you would like to discuss this matter further, please do not hesitate to contact the authors – Jacob Larking and Francesco Di Lallo – or your usual Barker Brettell attorney.
