General-purpose AI under the AI Act: What startups need to know about models, systems, and systemic risks

Generative AI is often discussed as a single category, but legally the picture is considerably more nuanced. For startups and scale-ups that build with language models, AI assistants or agents, the distinctions the law draws directly affect their compliance position. That is why it is important to understand how the AI Act approaches general-purpose AI, and where the biggest grey areas lie.
Caylun J. Scholtens
18.04.2026

Why general-purpose AI (GPAI) is more than a legal label for startups and scale-ups

General-purpose AI, often abbreviated to GPAI, has been given its own place in the AI Act. That is no surprise: models that can be used for many different tasks do not fit well into a classic legal framework built mainly around specific applications with a clearly defined purpose.

For startups and scale-ups, this is more than a theoretical point. Many tech companies today no longer build from scratch, but on or around widely deployable models. Think of AI assistants, agents, search functionality, content tools, or internal workflow software with generative features. In that setting, the difference between a model, a system and a general-purpose system is suddenly not only legally interesting, but also directly relevant to product development, risk management and positioning towards customers and investors.

Why general-purpose AI got its own regime

For AI systems, the AI Act works with a risk-based structure. Some applications are prohibited outright. Others are classified as high-risk and come with a heavy package of obligations. In addition, there are specific transparency obligations for certain categories, such as chatbots and generative AI.

For general-purpose AI models, this system alone turned out not to be a good fit. The classic high-risk framework presupposes two things: that the internal workings of the technology are sufficiently transparent, and that the application has a clearly defined purpose. Large generative models break both assumptions. Their internal workings are not always fully comprehensible even to their developers, and the same model can be used in very different contexts.

That is why the European legislator opted for a separate framework for GPAI models. This is a fundamental choice: it is no longer the individual application that is central, but the widely applicable model itself.

For tech companies, this is relevant because it shows where the legislator's attention lies. Those who develop or market a generally deployable model face a different type of analysis than those who incorporate a single AI feature into a product.

AI model, AI system and GPAI system: why this distinction matters

In practice, the terms model and system are often used interchangeably. Legally, the distinction matters a great deal.

An AI model can be seen as the underlying intelligent method. The model is the core technology that powers functionality, but it is not yet a full-fledged application in itself. An AI system is the application in which that technology is used. In other words, the model is the engine, the system is the car.
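To make the engine-and-car analogy concrete for developers, the sketch below separates the two layers in a few lines of Python. The class names and the complete interface are hypothetical illustrations, not terms from the AI Act or any real SDK.

```python
# Minimal sketch of the model/system distinction. The GPAIModel interface
# and SupportAssistant wrapper are hypothetical, purely for illustration.

class GPAIModel:
    """The 'engine': a general-purpose model exposing raw text generation."""

    def complete(self, prompt: str) -> str:
        # Stub standing in for a call to model weights or a hosted API.
        return f"[model output for: {prompt[:40]}]"


class SupportAssistant:
    """The 'car': an AI system that gives the model a concrete purpose."""

    def __init__(self, model: GPAIModel):
        self.model = model

    def answer_ticket(self, ticket_text: str) -> str:
        # The system layer adds purpose and context on top of the model:
        # prompt construction, business rules, output handling.
        prompt = f"Answer this customer support ticket politely:\n{ticket_text}"
        return self.model.complete(prompt)


assistant = SupportAssistant(GPAIModel())
print(assistant.answer_ticket("My invoice seems to be missing."))
```

Because the underlying model is general-purpose, the same GPAIModel could just as easily power a search feature or a content tool, which is exactly why the law attaches different rules to the two layers.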

The AI Act adds a third concept: the GPAI system. This is an AI system that is built on a GPAI model and can therefore be used for various purposes.

Why is this relevant? Because GPAI models have their own regulatory framework, while GPAI systems essentially fall back into the wider regime for AI systems. That sounds clear, but in practice it creates uncertainty: it is not always obvious which rules should land at which level, the model or the system.

At Startup-Recht, we see that this distinction becomes particularly important as soon as a tech company not only uses a model, but also builds its own application layer, workflow or agentic interface around it. Then you want to understand early on what exactly you're working with from a legal point of view.

When is a model a GPAI model?

Not every AI model is automatically a general-purpose model. For this qualification, it is essential that a model can competently perform a wide range of different tasks and that it can be integrated into various downstream systems or applications.

That second criterion sounds broad, and it is: many models can be built into something. But that alone does not make them GPAI models. The core lies in the breadth of the model's capabilities. A model that primarily solves one specific problem, such as recognizing text in images or transcribing audio, can be used in many products without having a genuinely general character.

That distinction is important, because the broad applicability of a model can have two very different causes. Sometimes it stems from genuinely general capabilities, as with large language models. Sometimes it is mainly because one very specific problem recurs in many places. The second category is much less likely to qualify as GPAI.

At the same time, the demarcation is not yet watertight. The definition in the AI Act seems to limit the GPAI concept somewhat, while the recitals point towards a broader interpretation, especially for large generative models. As a result, room for interpretation remains.

For startups and scale-ups, that is no minor detail. The qualification question largely determines which legal framework comes into play. Especially for companies that are building their own model, or presenting their product as widely deployable AI infrastructure, it is wise to ask that question early in the process.

When does a GPAI model have a systemic risk?

Within the GPAI category, the AI Act makes a further distinction. Not every GPAI model is treated the same: the heavier obligations apply to models with systemic risk.

For this, the regulation primarily looks at models with high-impact capabilities: models whose capabilities match or exceed those of the most advanced GPAI models. Benchmarks play a role in that assessment. In addition, the regulation contains a rebuttable presumption tied to training compute: a model is presumed to have high-impact capabilities when the cumulative amount of computation used for its training exceeds 10^25 floating-point operations (FLOPs).
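For a feel of the order of magnitude, here is a back-of-the-envelope sketch. The 10^25 FLOP threshold comes from the regulation itself; the 6 x parameters x tokens estimate for dense-transformer training compute is a common community heuristic, not something the AI Act prescribes, and the example model is hypothetical.

```python
# Rough check against the AI Act's training-compute presumption.
# The 6 * N * D estimate is a common heuristic, not part of the Act.

FLOP_THRESHOLD = 1e25  # cumulative training compute above which
                       # high-impact capabilities are presumed

def estimated_training_flops(params: float, tokens: float) -> float:
    """Approximate dense-transformer training compute: ~6 FLOPs per parameter per token."""
    return 6 * params * tokens

# Hypothetical example: a 70B-parameter model trained on 2 trillion tokens.
flops = estimated_training_flops(params=70e9, tokens=2e12)
print(f"Estimated training compute: {flops:.1e} FLOPs")   # 8.4e+23
print("Presumption triggered:", flops > FLOP_THRESHOLD)   # False
```

On this rough estimate, even a sizable model stays well below the threshold, which illustrates why the presumption targets only the very largest training runs.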

In addition, the European Commission can designate a model as a systemic-risk model if it has comparable capabilities or comparable impact. That assessment looks at factors related to scale, capabilities and reach of use.

For most startups, this will not immediately play out at the level of their own model training; training at that scale is within reach of only a handful of parties. But that does not make the topic irrelevant. As soon as you build on advanced third-party models, integrate with them, or develop a product that relies heavily on such models, the question of systemic risk and the associated governance comes much closer.

What does risk management for GPAI models include?

For providers of GPAI models with systemic risk, the AI Act contains a tougher regime. At its core, it involves four types of obligations: model evaluation, assessment and mitigation of possible systemic risks, monitoring and reporting of serious incidents, and appropriate cybersecurity for the model and its infrastructure.
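As a rough aid for keeping track of those four areas, a provider could maintain something as simple as the structure below. The field names and the gap check are our own illustration, not terminology or a method from the AI Act.

```python
# Hypothetical bookkeeping structure for the four obligation areas that
# apply to systemic-risk GPAI models. Illustration only, not legal advice.

from dataclasses import dataclass, field

@dataclass
class SystemicRiskCompliance:
    model_evaluations: list = field(default_factory=list)       # e.g. adversarial test runs
    risk_mitigations: list = field(default_factory=list)        # assessed systemic risks and measures
    incident_reports: list = field(default_factory=list)        # serious incidents tracked and reported
    cybersecurity_measures: list = field(default_factory=list)  # protection of model and infrastructure

    def open_gaps(self) -> list:
        """List the obligation areas with no documented activity yet."""
        return [name for name, items in vars(self).items() if not items]


record = SystemicRiskCompliance(model_evaluations=["red-team run, Q4"])
print(record.open_gaps())
# -> ['risk_mitigations', 'incident_reports', 'cybersecurity_measures']
```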

The most important obligation is the second: assessing and mitigating systemic risks. That sounds clear, but it is formulated quite openly, and that is precisely where much of the uncertainty lies.

Which risks count as systemic risks?

A systemic risk must essentially result from the advanced capabilities of the model and must be capable of having significant consequences at a societal scale. That sets the bar higher than for everyday product risks.

The regulation itself provides only limited guidance on which concrete risks fall under it. It is clear, however, that attention is paid to risks that can spread widely in the value chain and have a major impact on health, safety, public safety, fundamental rights or society as a whole.

In the preliminary elaboration of the Code of Practice, the focus seems to lie strongly on large-scale and potentially catastrophic risks, such as cyberattacks; chemical, biological, radiological and nuclear applications; large-scale harmful manipulation; large-scale prohibited discrimination; and loss of human control. In addition, there are further risks related to physical infrastructure and fundamental rights that must be taken into account in any case.

That is a remarkable choice. The emphasis thus shifts to large-scale abuse and loss of control, and away from the more mundane risks that many companies think of first in practice.

What is not likely to count as a systemic risk

A good example is hallucination. Without a doubt, this is one of the best-known risks of generative AI. However, it is not obvious that hallucination is seen as a systemic risk in this specific sense.

There are two reasons for this. First, hallucination does not seem to stem from a model's most advanced capabilities, but rather from its limitations. Second, it usually lacks the societal-scale consequences that the regulation envisages here. Incorrect output can be annoying or harmful for individual users, but that does not in itself make it a systemic risk within the meaning of this regime.

This is an important insight for startups. Not every well-known AI risk automatically falls into the same legal category. Those who manage risk should therefore not only look at what feels technically or operationally risky, but also at the exact legal risk category that applies.

The relationship with the DSA: relevant to AI in major platforms

The systemic-risk obligations are reminiscent of the regime in the Digital Services Act (DSA) for very large online platforms and search engines. That is no coincidence: both regimes look at the societal risks of powerful technology at scale.

However, the differences outweigh the similarities. The AI Act works with a specific concept of systemic risk that focuses on advanced GPAI models. The DSA looks much more broadly at the societal risks of platform services.

The overlap lies mainly in the creation and distribution of harmful or illegal content. A powerful model can generate certain content faster, cheaper and at larger scale, and a platform can then distribute that content further. It is precisely at that intersection that the two regimes meet.

In addition, for GPAI models that are embedded in a very large online platform, it matters that the assessment of systemic risks can in principle take place under the DSA. If that assessment has already been done there, a separate assessment under the AI Act is not necessarily required, unless new risks arise that fall outside the DSA.

This is relevant for scale-ups that embed AI deep into platform functionality, because the legal analysis cannot then be neatly pigeonholed: platform law and AI law extend into each other.

GPAI systems and prohibited AI practices: this is where the most friction lies

Perhaps the most interesting point is the position of GPAI systems themselves. The AI Act does define them, but does not treat them as a fully independent regulatory object. As a result, exactly how they fit into the wider structure of the regulation remains unclear.

It is obvious that the transparency obligations for certain systems can also apply to GPAI systems. It is less straightforward for the rules on high-risk systems: because those rules assume systems intended for a specific purpose, it is not obvious that a generally applicable system will readily fall under them.

Most of the discussion is about prohibited AI practices. In its guidelines, the European Commission takes the line that providers of GPAI systems can also be responsible if their system can reasonably be used in a way that falls under prohibited practices. In that approach, providers should incorporate effective and verifiable safeguards to prevent or mitigate foreseeable abuse.

This point of view is understandable from a safety perspective, but it also raises fundamental questions. An important objection is that, in practice, the relevant safeguards often have to be built in not at the system level, but at the model level. A GPAI system is often little more than a layer of software around the functionality of an underlying model. It then feels artificial to let the heaviest risk logic land mainly on that application layer.
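To see why, consider what a purely system-level safeguard often looks like in its simplest form: a thin filter around a model the provider does not control. The blocklist and messages below are hypothetical, and a real safeguard would involve far more (classifiers, red-teaming, usage policies); the sketch only illustrates how thin the application layer can be.

```python
# Deliberately naive sketch of an application-layer safeguard around a
# model call. Everything here is hypothetical and for illustration only.

BLOCKED_TOPICS = ("subliminal manipulation", "social scoring")  # illustrative

def guarded_complete(model_complete, prompt: str) -> str:
    """Wrap a model call with a crude pre- and post-check."""
    if any(topic in prompt.lower() for topic in BLOCKED_TOPICS):
        return "Request refused: potentially prohibited use."
    output = model_complete(prompt)
    # A real system would also log such events for incident monitoring
    # rather than silently filtering them.
    if any(topic in output.lower() for topic in BLOCKED_TOPICS):
        return "Response withheld by safeguard."
    return output

print(guarded_complete(lambda p: f"[output for: {p}]", "Build a social scoring tool"))
```

Whether wrappers like this amount to effective and verifiable safeguards is exactly the open question: the behavior that matters is largely determined one layer down, inside the model.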

In addition, this approach raises legal-certainty concerns. If it is not clear exactly which measures are sufficient, it becomes difficult for providers to determine in advance what they must comply with. That weighs all the more heavily because the penalty regime for prohibited AI practices is particularly severe.

This is a serious point of attention for startups and scale-ups. Not because every GPAI system would automatically be a prohibited application, but precisely because the boundary has not yet crystallized.

What does this mean in concrete terms for startups and scale-ups?

The key message is that general-purpose AI cannot legally be approached as one homogeneous block. The first question is always: do you work at model level, system level, or both?

For teams that build their own models

Those who develop a widely deployable model themselves should think about qualification early. Is this a general-purpose model, or a specialist model with a limited function? This question is not only legally relevant; it also determines which type of risk management will come into focus later.

For teams that primarily build products on existing models

Anyone who mainly develops an application, assistant or agent on top of an existing model still deals with the GPAI framework, but often more indirectly. The key question then becomes how the distinction between model and system plays out, and to what extent system-level obligations, transparency obligations or discussions about prohibited applications become relevant.

For founders, legal and investors

For this group, the relevance lies mainly in predictability. The AI Act already provides a framework, but still leaves room for interpretation on crucial points. Examples include the definition of the GPAI concept, the interpretation of systemic risks and the question of how far Article 5, on prohibited practices, extends to general-purpose AI systems. This means that product claims, governance choices and risk assessments should not be organized too statically.

At Startup-Recht, this is where we see great value in a sober approach. Not everything needs to be pinned down to the last decimal today, but you do want to understand early where the real uncertainties lie and which of them are strategically relevant to your business.

In closing

The AI Act tries to get a grip on a category of technology characterized by broad applicability, rapid development and technical complexity. The result is a separate regime for GPAI models, with additional obligations for the most advanced models and persistent questions about the position of GPAI systems.

For startups and scale-ups, the most important lesson is therefore not only what the rules say, but also where friction remains. Those who work with general-purpose AI would do well to distinguish clearly between model and system, take the discussion about systemic risks seriously, and stay alert to further clarification in codes of practice, guidelines and enforcement. It is precisely in that arena that the coming period will determine how much legal room there really is to innovate with general-purpose AI.
