Generative AI and copyright: what the first European rulings mean for startups and scale-ups

Generative AI is rapidly moving from an experimental toy to serious business infrastructure. That makes the question of where the legal boundary lies between learning from existing content and unauthorized copying all the more urgent. Two recent European rulings show that, especially for startups and scale-ups, that boundary is suddenly becoming a lot more tangible.
Insights
Caylun J. Scholtens
09.05.2026

For tech companies that develop or integrate generative AI, intellectual property has long ceased to be a side issue. The discussion is not only about training data, but also about what a model remembers, what it gives back to users and who is legally responsible for it.

That is exactly why the first European rulings on generative AI and intellectual property are so relevant. In the German case between GEMA and OpenAI, the court was asked whether a model infringes when protected lyrics end up in the system without permission and resurface in its output. The British case between Getty Images and Stability AI partly turned on a different question: when is an AI model itself an infringing copy, and what happens if output contains watermarks that closely resemble protected trademarks?

Together, these rulings do not yet paint a complete legal picture. They do clarify where the real risks for AI companies lie, and why startups and scale-ups can no longer separate their product choices, training data and output controls from IP law.

Why these rulings matter for AI practice

The discussion about generative AI and copyright is often conducted in the abstract. Can a model train on protected material? Is output a new creation or a derivative of an existing work? And who is responsible if a user, with a simple prompt, receives output that closely resembles a protected work?

For young tech companies, these are not theoretical questions. When you build a foundation model, integrate an AI feature into existing software, or give customers access to generative output, you make choices about data, architecture, prompting and moderation. These choices can have direct legal consequences.

That is precisely why the German ruling is so remarkable. The court looked not only at the training process as such, but also at the fact that the model could return full or largely recognizable lyrics in response to simple prompts. This was seen as strong evidence that those protected works were stored in the model in reproducible form. The crux of the ruling is therefore not only that protected works were used, but that the system had apparently memorized those works in such a way that they could reappear.

For startups, this is an important signal. The legal discussion is shifting from a general question about AI training to a more concrete one: what can your model actually reproduce?

The core of GEMA/OpenAI: When memorization becomes a legal issue

The case between GEMA and OpenAI concerned lyrics by authors affiliated with GEMA. According to the court, ChatGPT could generate complete or largely complete lyrics of well-known songs with relatively simple prompts. This transformed GEMA's suspicion that generative AI had been trained on protected works into something much more concrete.

If it comes out, that's legally relevant

OpenAI argued that the model does not store training data as such, but learns patterns and structures. This is a familiar defense in AI cases: the model would not work as a database of copies, but as a statistical system that generates new content.

The German court did not accept this argument in this case. Even if the model contains no immediately recognizable copy in the classical sense, according to the court there can still be a work stored in reproducible form. The fact that precise lyrics came back in response to simple prompts pointed, in the court's view, to memorization of training data. And that memorization was classified as a reproduction.

This is legally relevant because the reproduction right in Europe is broad. It covers not only a visible one-to-one copy, but also indirect or differently formed reproductions. For AI companies, this means that the technical statement “we do not store source files” is not necessarily enough if the result shows that protected works can still be reproduced.

Not only training, but also how the model works counts

The ruling also shows that courts do not only look at the input side. It is not just about what data the system has seen, but also about what the system can give back. This makes product testing, red teaming and output analysis more legally relevant than many companies have treated them so far.

For a startup that builds AI, that is an uncomfortable but clear message. A model that spits back protected texts or other recognizable works in normal use raises not only technical quality issues but also immediate IP risks.

Why the appeal to text and data mining didn't apply here

An important part of the German case was the invocation of the text and data mining exception. In Europe, this exception is often seen as a possible legal basis for AI training. The German court did not rule out that training generative AI could in principle fall under text and data mining.

However, that did not help OpenAI in this case.

The reason is significant. According to the court, more happened here than merely analyzing data to derive patterns and relationships. The training data was not only dissected; it was also reproduced. And once that happens, the assessment changes. The exception is intended to enable technological innovation, not to override the interests of rights holders.

In doing so, the ruling provides a nuance that is essential for startups. It is not enough to assume that AI training automatically falls under a text and data mining exception. Even if that route is theoretically open, it is still necessary to look at what the model actually does, which works were used and whether the interests of rights holders are disproportionately harmed.

There was something else in this case. The court pointed out that the lyrics were not lawfully accessible and that rights holders had reserved their rights against text and data mining. These factors, too, can limit the scope for invoking the exception. For AI companies, this is a warning against overly simple assumptions about “openly available” material. Available is not automatically free to use.

The output itself can also infringe

The German ruling goes a step further than the training question. Not only were the storage and memorization of protected works found problematic; the output itself was also considered legally relevant.

When ChatGPT generated fully or partially recognizable lyrics, the court qualified that output as unlawful reproduction. For five songs, the match was almost complete. Other examples involved recognizable parts, such as choruses or verses, supplemented with newly invented text. Even then, according to the court, protected elements could still have been taken over.

It did not stop there. Presenting such output to users was also seen as a form of disclosure, an independent act of exploitation that requires permission. In other words: anyone who not only holds protected content internally in a system, but also shows it to users, runs an additional legal risk.

For startups and scale-ups, this is perhaps the most practical part of the ruling. The discussion does not stop at data ingestion. The way in which output is made available is also legally significant.

The provider itself can be held responsible

A classic reflex in the platform world is to point to the user. After all, the user enters the prompt. The user asks for the output. So the risk lies there, right?

The German court made short work of that argument in this case. OpenAI, not the user, was held responsible. The reason: OpenAI determines the architecture of the model and the storage of training data, and thus has significant influence on what the system gives back. Under those circumstances, OpenAI was not seen as a mere intermediary.

This is a crucial lesson for AI startups. As soon as you, as a provider, have a substantial influence on the functioning of the system, it becomes difficult to transfer all responsibility to end users. How your model is built, what guardrails you set up, what restrictions you apply, and how you manage output are not just technical design choices. They play a direct part in the legal division of risk.

For SaaS companies that add generative AI as a feature, this is just as relevant as it is for model builders. Even those who “only” put an AI layer on top of a product should ask themselves how much influence the company itself has on the concrete results that customers see.

Getty v Stability AI: why the copyright claims failed there

The British case between Getty Images and Stability AI shows that the outcome depends very much on the precise facts and on the legal route that ultimately remains.

Getty raised several copyright claims, but two of them were dropped. The claim concerning unauthorized training of Stable Diffusion on Getty content was withdrawn because it could not be proven that the training and development of the model had taken place in the United Kingdom. As a result, the question of jurisdiction came into focus. The claim about infringing output was also withdrawn, because the relevant prompts had by then been blocked by Stability.

What remained was the claim that Stable Diffusion itself, as a model, constituted an infringing copy of protected works. The British court did not agree. The fact that protected works were used in the training process did not mean, in the court's view, that the model parameters themselves should be seen as copies of those works. The parameters were instead seen as the result of learned patterns and structures.

This is not a free pass for startups, but it is an important nuance. Not every court will automatically assume that a model itself is a copyright-relevant copy. The precise technical operation of the model, and the way it is legally assessed, can lead to a different outcome.

That makes this case particularly valuable as a contrast to the German ruling. While the German court focused on memorization and reproducibility, the British judge looked differently at the nature of model parameters when it came to the remaining copyright question.

Trademark law can come into the picture unexpectedly

Although Getty did not succeed on the remaining copyright basis, the company was partly successful on trademark law.

There, the case revolved around Getty's and iStock's characteristic watermarks. According to Getty, Stable Diffusion could generate output showing signs identical or similar to those trademarks. The court agreed, for three specific images generated by older versions of Stable Diffusion.

The reasoning was that such output could falsely suggest an economic link between Getty and Stability and could lead to confusion. At the same time, Getty did not prevail on all points. Damage to reputation or distinctive character had not been sufficiently demonstrated, and the court also considered it implausible that unfair advantage was taken of the reputation of the trademarks, partly because the branded output was unintentional and unwanted.

For AI companies, this is a point that is easily underestimated. The risks of generative AI are not just in copyright law. Brand use, watermarks, indications of origin and other recognizable signs can also play a role, especially when output contains visual elements that suggest a connection with an existing brand.

For image-generation startups and tools with creative output, this is another reason to watch not only for protected content being reproduced, but also for brand elements unexpectedly surfacing in output.

What this means in concrete terms for startups and scale-ups

At Startup-Recht, we see that many AI companies focus mainly on speed, functionality and product-market fit. That is understandable, but these rulings show that IP risks can sit deep in the product.

The first question, therefore, is not only whether a model performs well, but also whether it turns out to reproduce protected content. The easier a system returns recognizable texts, images or brand expressions, the harder it becomes to explain the risk away as mere statistical learning.

In addition, it becomes clear that output governance is not a luxury. Blocking prompts, limiting unwanted reproduction and monitoring known risk patterns can make a real difference. Not because it removes every legal problem, but because the concrete behavior of the system carries legal weight.
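To make the idea of output monitoring slightly more tangible for technical readers, the sketch below is one naive, purely illustrative check (the reference list, n-gram size and threshold are our own assumptions, not anything prescribed by these rulings or a legally sufficient safeguard): it flags generated text whose word 5-grams overlap heavily with a known protected reference text.

```python
# Minimal illustrative sketch of an output-governance check.
# The reference texts, n-gram size and threshold are assumptions for
# illustration only; this is not a legally sufficient safeguard.

def word_ngrams(text: str, n: int = 5) -> set:
    """Return the set of word n-grams in a text (case-insensitive)."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_ratio(output: str, reference: str, n: int = 5) -> float:
    """Fraction of the reference's n-grams that reappear in the output."""
    ref = word_ngrams(reference, n)
    if not ref:
        return 0.0
    return len(ref & word_ngrams(output, n)) / len(ref)

def flag_output(output: str, protected_refs: list, threshold: float = 0.5) -> bool:
    """True if the output reproduces a large share of any protected text."""
    return any(overlap_ratio(output, ref) >= threshold
               for ref in protected_refs)
```

In practice, such a check would only be one layer next to prompt filtering and human review, but it illustrates the shift the rulings point to: what the system can actually emit is measurable, and therefore testable before launch.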

This is also contractually relevant. Startups that purchase AI from third parties, use white-label solutions or integrate AI into their own stack would be wise to take a close look at agreements about training data, liability, indemnities and restrictions on use. Especially when a supplier makes broad commercial claims about “safe” or “royalty-free” use of output, some skepticism is healthy.

Finally, these rulings show that the legal analysis does not fit into one box. Copyright, disclosure, trademark infringement and questions of jurisdiction can be intertwined. For scale-ups operating internationally, this becomes even more complex. Where the model is trained, which market is served and which output is concretely generated can all be decisive.

The most important takeaway

The first European rulings on generative AI and intellectual property do not yet bring definitive clarity. Too many questions remain open, appeals are pending and the last word in Europe has not yet been spoken.

Nevertheless, the direction is clear. Anyone who builds or offers generative AI cannot rely on the idea that a model only learns patterns and that any problems lie with the user. As soon as protected works turn out to be reproducibly stored in the system and resurface as output, the legal risk becomes concrete.

For startups and scale-ups, that's the real message. Generative AI is not only a product issue, but also an IP issue. And the sooner those two end up in the same design conversation, the less likely that legal friction will only become visible when the model is already live.
