Digital product safety for startups: how AI, software and cybersecurity rules interact

Why digital product safety isn't just about hardware anymore
For a long time, product safety focused primarily on tangible products. The logic behind this was clear: a product had to meet certain essential requirements before it could enter the market, technical standards helped to make those requirements concrete, and manufacturers had to intervene if a product turned out to be unsafe afterwards. That classic framework worked well as long as “product” meant hardware in practice.
That reality has changed. More and more functions that used to be performed by physical components or human actions are now implemented in software. Software controls devices, monitors safety, processes data, makes automated decisions, and can be remotely modified via updates after release. As a result, it is only logical that digital product safety has come to cover standalone software as well, and not just software that is built into hardware.
For tech companies, this is a fundamental shift. As soon as software falls under product safety logic, not only the code itself comes into view, but also how updates, changes and corrections are organized. In practice, this means that a release cycle is no longer just a technical process, but can also become a compliance issue. This is especially true for AI, where substantial changes can again have consequences for compliance and assessment.
At Startup-Recht, we see that for founders, this is often the time when the legal puzzle becomes complicated. A software team thinks in terms of iterations, sprints and patches. The European safety framework thinks more in terms of qualifications, essential requirements, compliance and corrective measures. These two worlds are now increasingly overlapping.
Safety now means more than preventing physical damage
Digital product safety is also no longer just about whether someone can be physically injured by a defective product. With software and AI, the risk picture changes. A system can function well technically and still cause serious problems, for example through discriminatory outcomes, invasion of privacy or other interference with users' rights and interests.
This broader safety concept is reflected in the AI Act and in the CRA. The AI Act links safety not only to health and classic product risks, but also to fundamental rights. That is why certain AI applications are prohibited, high-risk AI systems face additional requirements around data, documentation and human oversight, and systemic risks in general-purpose AI models are also addressed. The CRA focuses on the cybersecurity of products with digital elements, with attention to security by design, vulnerability handling and patching.
For startups and scale-ups, this matters because it broadens the legal analysis beyond "does the product work safely?" Questions such as "how robust is the system?", "how are vulnerabilities detected?" and "what risks arise for users other than physical damage?" can also become relevant. In particular, tech companies that use AI in sensitive contexts, or that build software deeply intertwined with devices, processes or critical functions, face a heavier assessment bar.
At the same time, this also shows why implementation is difficult. As soon as legislation tries to capture fundamental rights and product safety in one system, certification becomes more complex. Not everything that is legally relevant can easily be translated into a technical standard or a tick-off checklist. This makes digital product safety legally ambitious, but not automatically easy to apply.
Four regimes that you should keep apart as a tech company
The AI Act
The AI Act focuses on AI systems, i.e. software that falls under the definition it contains. The framework is risk-based: some applications are prohibited, some systems are classified as high-risk and face more stringent requirements, and lighter applications carry more limited obligations. It does not matter whether AI runs as standalone software or is built into a physical product; AI offered as a service can also fall within scope.
For startups, it is particularly important that the AI Act does not regulate all software, but specifically AI software. This immediately creates a boundary problem: software with advanced functionality does not automatically fall under this framework, while software that does qualify as AI sometimes faces heavy obligations. This limited focus on AI therefore does not solve the entire safety issue surrounding software.
The CRA
The CRA is broader from a cyber security perspective. It covers products with digital elements, including hardware, standalone software, software modules and certain external data processing services provided by the manufacturer. The focus is on cybersecurity requirements, dealing with vulnerabilities, and making patches available.
For product-driven tech companies, the CRA is therefore often the most tangible framework. If you build software that is marketed as a product, or software that supports a digital product, you quickly find yourself in this sphere. At the same time, there is an important limitation: the CRA is product-oriented. Pure online services do not automatically fall under the same regime. The coverage is therefore ambitious, but does not comprehensively extend to all modern software models.
NIS2
NIS2 looks less at the product and more at the organization and its systems: cybersecurity from a business perspective. Organizations within scope must take appropriate and proportionate measures and report significant incidents according to a tiered reporting schedule. Governance and security in the supply chain also play a clear role.
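As a concrete anchor, the tiered incident-reporting schedule under NIS2 (Article 23) is commonly summarized in three steps. The sketch below is a simplification: it omits intermediate status updates, content requirements and sector-specific nuances.

```python
# Simplified NIS2 incident-reporting timeline (Article 23).
# Deadlines run from the moment an entity becomes aware of a
# significant incident; this is an illustration, not legal advice.

NIS2_REPORTING_STEPS = [
    ("early warning",         "within 24 hours"),
    ("incident notification", "within 72 hours"),
    ("final report",          "within 1 month"),
]

for step, deadline in NIS2_REPORTING_STEPS:
    print(f"{step}: {deadline}")
```

In practice, this means incident-response runbooks need owners and escalation paths that can produce a first notification within a day, not just a post-mortem weeks later.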
This is relevant for scale-ups because growth not only has a commercial effect, but can also determine whether a company falls within a more stringent regulatory framework. The question is therefore not only what you build, but also what sector you operate in, how large your organization becomes and what role you play in a chain.
DORA
DORA is the sector-specific variant for financial entities. This regime is stricter and more detailed than NIS2 and focuses, among other things, on ICT risk management, incident reporting, resilience testing, third-party risk and threat monitoring. Within the financial sector, DORA basically functions as a lex specialis compared to NIS2 for substantive security requirements.
For fintechs, insurtechs and other growth companies in the financial chain, this is crucial. Not every tech company is directly affected by DORA, but as soon as you operate in that sector, or connect closely with such parties as a supplier, the playing field immediately becomes a lot more demanding.
The real bottleneck: when is software a product and when a service?
The most difficult question in digital product safety is often not which rule exists, but which rule applies to your software model. After all, software can be marketed as a product, for example as downloadable software or as part of a device, but also as a service via a license, subscription or cloud environment. This is where friction between the different regimes occurs.
NIS2 requires organizations to secure their own systems, regardless of whether they sell software or offer software as a service. The CRA, on the other hand, focuses on products with digital elements and therefore mainly on software as a product. This means that the same company can face different obligations depending on how its technology is legally qualified.
This is extra relevant for SaaS companies. Certain cloud solutions may fall under the CRA if they support the functionality of a product with digital elements, but exactly where that boundary lies is not clearly defined. This creates legal uncertainty for companies that work with hybrid models, i.e. software that is part product, part service and part infrastructure.
In practice, this means that founders should not be too quick to assume that one label is sufficient. "We're a SaaS company" says too little legally. "We're building AI" or "we deliver embedded software" is also too crude without further analysis. The relevant question is always how the solution is offered, what function it fulfills, which products or networks it is connected to and within what regulatory framework the company itself operates.
Why compliance can quickly become tough for startups
The overlap between these regimes involves a practical risk: over-regulation and over-reporting. If different frameworks simultaneously call for reporting, risk management and documentation obligations, compliance risks becoming an end in itself. This is not only expensive, but can also distract from the actual technical improvement of the product or system.
That problem is even sharper for software than for hardware. Modern software is constantly being updated: release cycles are short, updates follow each other quickly and vulnerabilities need to be tackled promptly. A system of recurring assessment and certification therefore does not always adapt smoothly to the reality of software development. The result may be that compliance interferes with the speed at which tech companies normally build and improve.
For smaller companies, there is an additional challenge. Reporting, certification and technical documentation require time, people and budget. Large companies can often set up such processes as separate compliance projects; for startups and scale-ups, those obligations cut straight through product development. That makes prioritization difficult: every euro and hour that goes to formal compliance does not go to engineering, validation or commercial growth.
Open source adds another layer. The regulations do try to make a distinction there, but also leave grey zones, especially when voluntary or community-driven software in practice grows into commercial deployment or support. Especially in the startup ecosystem, where open source is regularly a building block of the product, that deserves extra attention.
What startups and scale-ups should do with this in concrete terms
For tech companies, digital product safety starts with qualification. First, you need to clarify whether your software is legally approached primarily as an AI system, digital product, service or regulated business activity. Without that step, it is almost impossible to specifically determine which obligations have priority.
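As a rough illustration of that qualification step, the four regimes discussed above can be thought of as a triage. The sketch below is deliberately simplistic: the boolean questions stand in for legal tests that are far more nuanced, and the DORA/NIS2 ordering reflects the article's point that DORA acts as a lex specialis within the financial sector. It is a thinking aid, not legal advice.

```python
# Hypothetical triage sketch: which EU regimes *may* apply to a given
# software model. Each flag is a simplification of a real legal test.

def applicable_regimes(is_ai_system: bool,
                       product_with_digital_elements: bool,
                       in_nis2_sector: bool,
                       is_financial_entity: bool) -> list[str]:
    regimes = []
    if is_ai_system:
        regimes.append("AI Act")   # risk-based obligations for AI systems
    if product_with_digital_elements:
        regimes.append("CRA")      # cybersecurity of the product itself
    if is_financial_entity:
        regimes.append("DORA")     # lex specialis in the financial sector
    elif in_nis2_sector:
        regimes.append("NIS2")     # organizational cybersecurity duties
    return regimes

# Example: a fintech SaaS embedding an AI component
print(applicable_regimes(True, False, True, True))   # ['AI Act', 'DORA']
```

The point of the exercise is not the output itself, but that the same company can trigger several regimes at once depending on how each product or service is qualified.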
This is followed by the translation into the development process. If you work with software updates, vulnerability management and ongoing product improvement, compliance must keep pace with that and not start only after release. In practice, this means that product teams, security and legal must work together earlier, precisely because changes in software can also have consequences for compliance, reporting obligations or documentation.
In addition, it is wise to be alert to duplication. Especially where the AI Act, the CRA and NIS2 intersect, similar information obligations or risk assessments can recur via different routes. For scale-ups that grow quickly, serve multiple markets or combine different product forms, a central compliance perspective prevents a lot of unnecessary friction.
At Startup-Recht, we see that this is particularly relevant for companies growing fast on a technically strong product foundation. In the early phase, regulation sometimes seems like something for later. But in digital product safety in particular, architecture, release policy and documentation choices are made early, while the legal impact often only becomes visible later. Those who make that translation too late risk not only legal exposure, but also delays in product development and go-to-market.
The most important takeaway
Digital product safety is no longer a niche topic for hardware manufacturers. For startups and scale-ups, it now also affects software, AI, cloud functionality and the cyber resilience of organizations themselves. The biggest challenge here is not only in new rules, but especially in the overlap between rules that are built from different logic.
That is why the key question for tech companies is not whether they will have to deal with digital product safety, but where exactly the focus lies in their own model. Ask that question in time, and product development, security and compliance can be linked much more closely. That is exactly where the biggest gains can be made in a rapidly growing tech environment.