The EU AI Act Clock Is Ticking: What Dutch SMBs Must Do Before August 2026
The Deadline Most SMBs Are Ignoring
On August 2, 2026, the EU AI Act enters its full enforcement phase. National regulators — including the Dutch Autoriteit Persoonsgegevens and Germany's Bundesnetzagentur — gain the power to issue fines. The Act has technically been on the books since August 2024, but its obligations have been phased in gradually. In less than four months, that phase-in is over for most businesses.
If you run a small or mid-sized business and you've deployed any kind of AI — a customer support chatbot, an automated invoice reader, a lead scoring tool, a CV screener — this applies to you. Not just to the tech giants building frontier models. The Act regulates how AI is used, not just how it's built, and deployers, meaning the businesses that put a system into use, carry obligations of their own on top of whatever the vendor has to do.
The good news: for the vast majority of SMBs, compliance is genuinely manageable. The bad news: "we'll get to it later" is no longer a viable plan.
What Actually Changes on August 2
Let's cut through the legal jargon. Three things matter practically for small businesses:
First, enforcement goes live. The national supervisory authorities designated in each member state can now conduct audits, demand documentation, and impose penalties. Fines for the most serious violations (using prohibited AI practices) can reach €35 million or 7% of global annual turnover, whichever is higher. For most SMB violations, the cap is lower but still meaningful: up to €15 million or 3% of turnover for high-risk system breaches, and up to €7.5 million or 1% for providing incorrect information to authorities. For SMEs and startups, each fine is capped at whichever of the two figures (fixed amount or turnover percentage) is lower.
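To make the arithmetic concrete, here is a minimal sketch of how the general cap scales with turnover, using the tiers above (TypeScript, purely illustrative; it implements the general higher-of rule, not the SME lower-of rule):

```typescript
// Illustrative only: the maximum-fine tiers quoted above.
type FineTier = { fixedEur: number; turnoverShare: number };

const TIERS: Record<string, FineTier> = {
  prohibitedPractice: { fixedEur: 35_000_000, turnoverShare: 0.07 },
  highRiskBreach: { fixedEur: 15_000_000, turnoverShare: 0.03 },
  incorrectInfo: { fixedEur: 7_500_000, turnoverShare: 0.01 },
};

// General rule: the higher of the fixed amount and the turnover share.
function maxFine(tier: FineTier, annualTurnoverEur: number): number {
  return Math.max(tier.fixedEur, tier.turnoverShare * annualTurnoverEur);
}

// Example: a business with €5M annual turnover facing a high-risk breach.
console.log(maxFine(TIERS.highRiskBreach, 5_000_000)); // 15000000
```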
Second, high-risk system obligations kick in fully. If your AI touches hiring, credit scoring, education, critical infrastructure, or any of the other categories the Act classifies as "high-risk," you face a full set of requirements: risk management, data governance, technical documentation, human oversight, and (if you built or substantially modified the system yourself) CE marking. This is where most SMBs underestimate their exposure — an AI CV screener or an automated loan underwriting tool sounds routine, but it's explicitly high-risk under the Act.
Third, transparency rules apply to everyone. Even if your AI use is low-risk, you have obligations around disclosure. Customers must know when they're talking to an AI. AI-generated content must be labeled. Employees must be informed when AI is used in workplace decisions that affect them. These rules are simple but catch many businesses off guard.
The Risk Categories, Without the Jargon
The Act sorts AI systems into four buckets. Figuring out which bucket you're in is the single most important compliance step, because the obligations scale dramatically with risk level.
- Prohibited (banned entirely): Social scoring, real-time biometric surveillance in public spaces, emotion recognition in workplaces and schools, and a few others. If you're doing any of this, you need to stop. Full stop. Prohibitions have been enforceable since February 2025.
- High-risk: AI used in hiring, employee management, access to education, credit scoring, essential public services, law enforcement, migration, or justice. Also covers AI that is a safety component of products like medical devices, vehicles, or machinery. Full set of compliance obligations — this is where the heavy lifting happens.
- Limited risk (transparency obligations): Chatbots, emotion recognition systems outside workplaces/schools, deepfakes, and AI-generated content. You must disclose clearly that AI is being used. That's the main obligation.
- Minimal risk (no obligations): Spam filters, inventory optimization, AI-powered video games. Most internal productivity tools fall here. No specific obligations, but general best practices still apply.
Most SMBs are using limited-risk or minimal-risk AI. If that describes you, compliance is mostly about disclosure, basic documentation, and staff awareness — not a six-figure consulting project.
The Four-Step Plan for Dutch SMBs
Here's a practical roadmap you can execute between now and August 2. It won't make you "certified" — there's no SMB certification scheme — but it will put you in a defensible position if a regulator comes knocking.
Step 1: Inventory your AI. Make a simple list of every AI tool your business uses. Include the chatbot on your website, the AI features in your CRM, the generative tools your marketing team uses, the translation service, the meeting transcription software, the OCR tool that reads your invoices. You probably have more AI in your stack than you realize. For each one, note: what it does, what data it processes, who uses it, and who provided it.
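If you prefer to keep that inventory in a structured, version-controlled form rather than a spreadsheet, a minimal sketch could look like the following (the field names are our suggestion, not something the Act prescribes):

```typescript
// One inventory entry per AI tool; fields mirror the list above.
interface AiSystemRecord {
  name: string;            // e.g. "Website support chatbot"
  purpose: string;         // what it does
  dataProcessed: string[]; // what data it touches
  users: string[];         // who uses it internally
  vendor: string;          // who provided it
}

const inventory: AiSystemRecord[] = [
  {
    name: "Website support chatbot",
    purpose: "Answers first-line customer questions",
    dataProcessed: ["chat transcripts", "customer email addresses"],
    users: ["support team"],
    vendor: "external SaaS provider",
  },
  {
    name: "Invoice OCR tool",
    purpose: "Extracts line items from supplier invoices",
    dataProcessed: ["supplier invoices"],
    users: ["finance team"],
    vendor: "external SaaS provider",
  },
];
```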
Step 2: Classify each system. Using the four risk categories above, assign a bucket to each tool on your list. Be honest. If you're using AI to help filter job applications, that's high-risk, even if a human makes the final call. The Act explicitly covers "AI systems intended to be used for the recruitment or selection of natural persons."
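Continuing the sketch above, the classification itself is a judgment call that no script can make for you, but recording the bucket and the reasoning next to each system keeps your rationale on file if a regulator ever asks:

```typescript
// The four risk buckets used in this article.
type RiskBucket = "prohibited" | "high" | "limited" | "minimal";

interface ClassifiedSystem {
  name: string;
  bucket: RiskBucket;
  rationale: string; // why you put it in that bucket
}

const classified: ClassifiedSystem[] = [
  {
    name: "CV screening assistant",
    bucket: "high",
    rationale: "Used for recruitment/selection of natural persons",
  },
  {
    name: "Website support chatbot",
    bucket: "limited",
    rationale: "Interacts directly with customers; disclosure required",
  },
  {
    name: "Invoice OCR tool",
    bucket: "minimal",
    rationale: "Internal back-office automation, no decisions about people",
  },
];
```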
Step 3: Address the gaps. For limited-risk systems, update your website and onboarding flows to disclose AI use. Add a simple notice to your chatbot: "You're chatting with an AI assistant." For high-risk systems, you have more work: you'll need documentation of how the system makes decisions, what data it was trained on, how you monitor it, and how humans can override it. This is where most SMBs will need help.
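For the chatbot notice specifically, the change can be as small as prepending a one-time disclosure to the assistant's first reply. A minimal sketch, assuming a web chat widget where you control the reply text (the function name and wording are illustrative):

```typescript
// Prepend a one-time AI disclosure to the assistant's first reply.
const DISCLOSURE = "You're chatting with an AI assistant.";

function withDisclosure(reply: string, isFirstReply: boolean): string {
  return isFirstReply ? `${DISCLOSURE}\n\n${reply}` : reply;
}

// Example: the first response a visitor sees.
console.log(withDisclosure("Hi! How can I help you today?", true));
```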
Step 4: Train your team. The Act requires "AI literacy" for staff using AI systems. This doesn't mean everyone needs a PhD in machine learning — it means your team understands what the AI does, what its limitations are, and when to escalate. A 30-minute internal session and a one-page reference document are enough for most SMB contexts.
The SMB Advantage Hidden in the Act
One thing that often gets buried in compliance discussions: the AI Act explicitly includes support measures for SMBs and startups. These aren't marketing copy — they're in the text of the law.
- Free sandbox access. Every EU member state must establish regulatory sandboxes where SMBs can test AI systems under supervision. Access is prioritized for SMBs and free of charge. The Netherlands' sandbox is coordinated through the Autoriteit Persoonsgegevens.
- Proportional fees. Conformity assessment fees must be proportional to company size. A 10-person business shouldn't pay the same as a multinational. Some assessments are free or heavily subsidized for SMBs.
- Simplified documentation. The Commission is required to develop simplified technical documentation templates specifically for SMBs. These must be accepted by national authorities.
- Risk-based enforcement. Regulators have signalled that early enforcement will prioritize high-risk applications and bad actors — not SMBs with imperfect paperwork. The goal is a reasonable path to compliance, not punishing small businesses trying to do the right thing.
What Most SMBs Get Wrong
After talking with dozens of business owners about this, we see the same misconceptions repeatedly:
"We just use ChatGPT, so we're not affected." You're the deployer of an AI system. That comes with obligations — especially around staff awareness, disclosure when generating customer-facing content, and making sure the tool's use case isn't actually high-risk.
"Our vendor handles compliance for us." Your vendor is responsible for their system as a provider. You're still responsible for how you use it. If you take a general-purpose chatbot and deploy it to screen applicants, you've created a high-risk use case regardless of what the vendor says about their product.
"We're too small to be on anyone's radar." True in the early months, probably. But enforcement won't stay centralized forever, and customer complaints are a common trigger for investigations. A single disgruntled applicant can put your hiring AI under scrutiny.
"Compliance will slow us down." The opposite, actually. Businesses that handle compliance early build AI with better data hygiene, clearer documentation, and more reliable outputs. That's a competitive advantage, not a tax.
The Privacy-First Deployment Option
There's one architectural choice that dramatically simplifies compliance for many SMBs: keeping AI processing on-premises. When customer data never leaves your office, several of the hardest compliance questions — international data transfers, third-party risk assessments, sub-processor agreements — simply don't apply.
This is why we've seen a noticeable shift among Dutch SMBs toward edge AI and self-hosted models over the past six months. Open-weight models like Google's Gemma 4 and Mistral's latest releases have closed the performance gap with cloud APIs enough that on-premises deployment is viable for most standard business tasks: customer support, document processing, internal search, and workflow automation.
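To make the on-premises option concrete: most self-hosted model runtimes expose an OpenAI-compatible HTTP endpoint on your local network, so application code barely changes when you move off the cloud. A minimal sketch, assuming such an endpoint is running locally (the URL, port, and model name below are placeholders, not a product recommendation):

```typescript
// Query a locally hosted open-weight model over an OpenAI-compatible
// chat endpoint. Nothing leaves the local network.
async function askLocalModel(question: string): Promise<string> {
  const response = await fetch("http://localhost:8080/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "local-open-weight-model", // placeholder model name
      messages: [{ role: "user", content: question }],
    }),
  });
  const data = await response.json();
  return data.choices[0].message.content;
}

askLocalModel("Summarise this invoice dispute in two sentences.")
  .then(console.log);
```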
Start Now, Not in July
Four months sounds like a lot of runway, but anyone who has been through a GDPR compliance project knows how fast it disappears. The SMBs that will be in the best position on August 2 started their inventory and classification work in Q1 2026. If you haven't started yet, the next three weeks should be dedicated to Steps 1 and 2 above — understanding what AI you have and what risk bucket each system falls into. Everything else builds from there.
At Mithilab, we help businesses navigate EU AI Act compliance without getting lost in legal complexity. Our AI Compliance service walks you through the inventory, classification, and documentation process — and if on-premises deployment makes sense for your situation, our Edge AI solutions can take several compliance headaches off the table entirely.
If the August 2 deadline has been nagging at you, let's have a quick conversation. A 30-minute call will tell you where you actually stand — and whether you have a compliance problem or just a documentation problem. Usually, it's the second one.