AI regulation in the workplace: What startups and scale-ups need to know now

AI tools are increasingly appearing in recruitment, performance management and internal work processes. That is attractive for startups and scale-ups, where speed, scalability and efficiency count heavily. All the more reason to know that the AI regulation reaches the workplace sooner than many teams think.
Insights
Caylun J. Scholtens
30.04.2026

The use of AI in the workplace feels like a logical next step for many young growth companies. A tool that filters applications, a system that monitors performance or an assistant that answers internal questions from employees: it all seems efficient and scalable. But under the AI regulation, not every AI application is treated the same. Some applications are prohibited outright, while others quickly fall into the high-risk category, which brings hefty obligations.

This is especially relevant for startups and scale-ups. In fast-growing organizations in particular, HR processes are often automated to save time, work more consistently and get a lot done with small teams. The AI regulation in the workplace makes it clear that these efficiency gains cannot be separated from fundamental rights, transparency and the position of employees and candidates.

Workplace AI falls under the AI regulation faster than many teams think

A first important lesson is that the term AI system is understood broadly. That does not mean every system is immediately prohibited or automatically subject to heavy obligations. It does mean that organizations should not assume too quickly that a tool is “just software” and therefore falls outside the regulation's scope.

In practice, this is an important starting point for tech companies. Anyone working with tooling for hiring, people analytics, productivity, internal support or workforce management should keep in mind that such a system may fall within the scope of the AI regulation, especially when the tool analyses behavior, makes predictions, ranks, monitors or supports decisions that affect candidates or employees.

Moreover, the AI regulation is not the only framework to take into account. At Startup-Recht, we regularly see founders and operations teams approach AI primarily as a productivity issue, while the legal playing field is wider. Alongside the AI regulation, rules on data protection, discrimination, well-being at work and information and consultation obligations may also become relevant. For a growing company, that is not a detail but a governance question.

Emotion recognition in the workplace is in principle prohibited

The most direct and striking message of the AI regulation in the workplace is that emotion recognition in this context is in principle prohibited. This concerns AI systems that infer or identify the emotions of natural persons in the workplace, subject only to a narrow exception for medical or safety purposes.

This ban is far-reaching, precisely because it is defined so broadly.

The ban goes beyond facial recognition

When it comes to emotion recognition, many people immediately think of cameras that analyze facial expressions. But it does not stop there. The ban covers systems that identify or infer emotions on the basis of biometric data. That can include classic biometric features, but also behavioral biometric data, such as walking patterns, posture, movements, eye movements or even the way you type.

For startups and scale-ups, this is an important warning. A tool that claims to measure stress, frustration, motivation or engagement from webcam images, voice, keystrokes or movement data quickly falls within the prohibition. Even when such a tool is positioned as an innovation in wellbeing or team performance, that does not automatically change the legal qualification.

The reasoning behind the ban is clear. The legislator and the European Commission question the effectiveness and scientific basis of these systems. Emotions are context-dependent, culturally shaped and difficult to measure objectively. Especially in an employment relationship, with its unequal balance of power, the use of such systems can lead to adverse treatment, invasion of privacy and violations of human dignity.

Not every signal or feeling falls under emotion recognition

At the same time, nuance is needed. Not every system that measures something automatically falls under this ban. The AI regulation makes it clear that physical states such as pain or fatigue are not the same as emotions. Even the mere detection of visible signals, such as a smile, frown or voice volume, is not in itself prohibited emotion recognition, as long as those signals are not used to identify or infer emotions.

That distinction matters in practice. A system that only records voice volume in a call center does not automatically fall under the ban. The same goes for an application that detects driver fatigue. But as soon as a biometric system tries to infer that someone is angry, stressed, bored or anxious, you end up in a completely different category.

Workplace means more than the office

For many tech companies, work no longer happens only at the office. There is hybrid work, remote work, mobile work and project-based work with freelancers or contractors. That is precisely why it matters that the concept of workplace is interpreted broadly. The ban applies not only at a physical location, but also at virtual, mobile, open, private and temporary workplaces.

This makes the impact of the AI regulation on the workplace greater than many organizations expect. A system that analyses emotions during online meetings, remote calls or hybrid collaboration can fall under the ban just as readily as a camera on the office floor.

Notably, the concept of workplace is also interpreted broadly enough to include candidates in a recruitment and selection process. For scale-ups experimenting with AI in recruitment, this is a crucial point. A tool that tries to measure candidates' emotions during job interviews can end up in the prohibited zone before an employment contract even exists.

The exception for medical or safety purposes is narrow

There is an exception, but it must be read restrictively. It does not simply cover general monitoring of stress levels, burnout signals or depression risks at work. Nor can “safety” be invoked broadly to cover all kinds of business interests. It is about protecting life and health, not overall efficiency, productivity or comfort.

This is a relevant reality check for companies that want to position AI as a wellbeing tool. A system that claims to measure the mental state of employees to prevent burnout may sound caring, but it can be legally problematic. And even if an application does fall within the exception, caution remains necessary: the organization must be able to demonstrate an explicit need, and consultation with employees or their representatives is the obvious course.

For HR, high risk is almost the main rule

While emotion recognition in the workplace is primarily a prohibition question, in many other cases the question is whether an AI system qualifies as high-risk. For HR applications, the answer is often yes, or at least more often than expected.

The AI regulation specifically refers to AI systems intended for recruitment and selection, such as analyzing and filtering applications and assessing candidates. Systems that influence decisions about employment conditions, promotion, termination of the employment relationship, task assignment, and monitoring and evaluation of performance or behavior are also covered.

This is a broad playing field for startups and scale-ups. Think of software that ranks candidates automatically, people analytics tools that identify performance patterns, systems that measure productivity or attendance, or tooling that helps determine who gets which tasks. Even if AI does not make the final decision, the impact on fundamental rights and career opportunities can be significant.

The underlying concern is understandable. Such systems can reproduce historical discrimination, produce opaque outcomes and interfere deeply with people's working lives. This applies not only to recruitment, but also to internal growth, assessment and retention of employees.

For tech companies, it is precisely the latter that is relevant. In a scale-up, roles shift quickly, teams grow fast and processes often professionalize while the operation is already running. The temptation is then great to use AI as a scaling solution for recruitment, talent review or performance management. But the closer a tool comes to assessment, selection or behavioral control, the sooner high risk comes into view.

The escape route exists but is narrow

The AI regulation leaves room for an exception to the high-risk qualification for certain HR-related systems, but that space is limited. These are systems that do not pose a significant risk to health, safety or fundamental rights, partly because they do not substantially influence decision-making.

This can be the case, for example, when AI only performs a narrow procedural task, such as structuring unstructured data, classifying documents or detecting duplicates. Systems that merely add an extra layer to an already completed human activity can also sometimes remain outside the high-risk category. An example that fits this idea well is improving the readability of a vacancy text.

Furthermore, AI that only detects deviations from previous decision-making patterns, or that merely performs a preparatory task such as document translation, can in certain circumstances remain outside the high-risk classification.

Nevertheless, restraint is essential here. As soon as a system is used to profile natural persons, the escape route is in any case unavailable. And apart from that, the exception must be applied restrictively. For startups and scale-ups, it is therefore risky to label a tool “harmless” too quickly just because a person is still in the process. If the AI outcome is in fact decisive in guiding the decision, a high-risk qualification remains likely.

So what does the AI regulation specifically require of employers?

When an AI system qualifies as high-risk in the workplace, concrete obligations follow. That is not a purely paper exercise. It directly affects governance, internal processes and how teams work together.

The deployer of such a system must take appropriate technical and organizational measures to use it in accordance with the provider's instructions. Human oversight must be assigned to people with sufficient competence, training and authority. Insofar as the deployer has influence over the input data, that data must be relevant and sufficiently representative for the intended purpose. The operation of the system must also be monitored, and automatically generated logs must be kept under the deployer's own control for an appropriate period of at least six months.

In addition, there is a specific information obligation towards employees and their representatives. Before a high-risk AI system is put into service or used in the workplace, those affected must be informed that they will be subject to its use.

For startups and scale-ups, this means that AI implementation is not just an HR or IT project. At Startup-Recht, we see that these types of processes only become workable if legal, HR and IT work together. Anyone who uses AI in recruitment or people management without clear responsibilities, human control and internal documentation quickly builds risk into the system itself.

Even at lower risk levels, you are not done

Not every AI system in the workplace is prohibited or high-risk. But even then, the story doesn't end. Transparency obligations apply to certain AI systems with limited risk. A practical example is a chatbot that allows employees to ask questions about internal policies or procedures. In such a case, it must be clear that the user is communicating with an AI system and not with a human colleague.

In addition, the AI literacy obligation applies beyond high-risk applications alone. Organizations must ensure an adequate level of knowledge and understanding among staff and other persons who use AI systems on their behalf. This is not a one-off formality, but an ongoing process that must match the context, the knowledge of the people involved and the way AI is deployed within the organization.

For young tech companies, this is perhaps one of the most practical lessons. An AI policy is not only useful for structuring risks, but also for clarifying internal expectations. Which tools can be used? For what purposes? What kind of human control is required? What should never happen to employee or candidate data? Such choices belong in policy, training and daily practice.

What does this mean in concrete terms for startups and scale-ups?

The AI regulation in the workplace mainly requires growth companies to sharpen up in the phase before implementation: not only once a tool is live, but already during selection, procurement and internal decision-making.

If you use AI in recruitment, check early on whether the tool assesses, ranks or filters candidates in a way that makes it high-risk. If you use AI for performance or workforce management, check whether the system monitors or evaluates behavior, productivity or personal characteristics. And as soon as a tool claims to measure emotions, stress, engagement or motivation based on biometric signals, the prohibition question is unavoidable.

For scale-ups with hybrid teams, there is something else to consider. The workplace is not limited to the office: digital work environments, online calls and remote workflows are also in scope. AI governance must therefore move with the actual way of working, not with an outdated view of the workplace.

The most important practical reflex is therefore not: what can this tool do technically? The better question is: what does this tool do legally and organizationally to candidates, employees and decision-making? Ask that question too late, and you risk implementing a system that is not only inefficient, but also immediately under legal pressure.

Conclusion: AI in the workplace requires more than smart tooling

The AI regulation in the workplace makes it clear that AI in HR is no longer an informal experiment. Emotion recognition in the workplace is in principle prohibited, and many AI applications in recruitment, personnel management and performance monitoring readily qualify as high-risk.

For startups and scale-ups, the challenge lies not only in compliance, but in mature implementation. Those who use AI in human processes must understand in advance which category a system falls into, which obligations come with it and which limits must not be crossed. Especially in growth companies, where speed is often a competitive advantage, that kind of care becomes a strategic part of good employment practice.
