
EU AI Act Phase Two: What Compliance Actually Costs

The regulatory honeymoon is over

For years, AI companies operated in a gray zone where innovation outpaced legislation. That era ended in 2025. Now, in March 2026, the clock is running: the EU AI Act's Phase Two takes full effect this August, and the consequences are real. If your business touches AI, whether you build it, deploy it, or sell it, you need to understand what's coming. The costs are higher than most founders realize.

What Phase Two Actually Means

The EU AI Act is the world's first comprehensive legal framework for artificial intelligence. It uses a risk-based approach: systems are categorized as unacceptable risk, high-risk, limited-risk, or minimal-risk. Phase Two, arriving in August 2026, enforces rules on high-risk AI systems — and the definition is broader than you think.

High-risk systems include AI used for hiring decisions, credit scoring, law enforcement, and biometric identification. But it also covers any AI system that could significantly impact fundamental rights. For most SaaS companies deploying AI features, that means you're in scope.

The penalties are not theoretical. Under the final text of the Act, companies face fines of up to 35 million euros or 7% of global annual turnover for prohibited practices, and up to 15 million euros or 3% for violations of high-risk obligations, whichever is higher. That's not a compliance cost. That's an existential threat.

The U.S. Is Fracturing, Not Unifying

Meanwhile, the United States is doing the opposite. On December 11, 2025, President Trump signed an Executive Order titled "Ensuring a National Policy Framework for Artificial Intelligence," aimed at achieving "global AI dominance through a minimally burdensome national policy framework."

Here's the catch: the order doesn't preempt state laws. Instead, it directs federal agencies to challenge state AI regulations in court. The result is that U.S. companies now face a patchwork of conflicting rules.

Colorado has its own AI Act. New York City is enforcing automated employment decision rules under Local Law 144. California is developing additional restrictions. Meanwhile, the federal government is actively trying to invalidate some of these. Companies are caught in the middle, unable to build a single compliance framework that works across borders.

What This Means for Your Business

If you're building or deploying AI, you now have three regulatory regimes to consider:

For EU operations: Conduct a risk assessment. Document your AI system's capabilities, limitations, and potential harms. Implement transparency mechanisms — users need to know when they're interacting with AI. If you're high-risk, you need human oversight, bias testing, and continuous monitoring. Budget 6-12 months for compliance if you're starting from scratch.

For U.S. operations: You can't assume federal preemption will save you. Design for the strictest state law that applies to your customer base. If you operate in New York and California, build to California's standards. It's expensive, but it's safer.

For everything else: The EU's approach is becoming the global baseline. Even companies with no EU customers are adopting EU-style compliance because it's the most comprehensive framework. It's easier to comply with one high standard than to maintain multiple frameworks.

The Real Cost

Compliance isn't just legal review. It's:

  • Hiring compliance officers or external counsel (50-200K+ annually)
  • Conducting bias audits and impact assessments (20-50K per system)
  • Building audit trails and documentation systems (engineering time: weeks to months)
  • Ongoing monitoring and testing (continuous cost)
  • Potential system redesigns if your AI fails compliance (unpredictable, expensive)

For a startup with 50 employees, this could easily consume 10-15% of your operating budget. For larger companies, it's often higher because you have more systems in scope.
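As a rough illustration, the ranges above can be turned into a back-of-envelope estimate. The figures below are hypothetical midpoints, and the currency (U.S. dollars), system count, and $2.5M operating budget are assumptions for the sketch, not benchmarks:

```python
# Back-of-envelope annual compliance budget for a hypothetical 50-person
# startup, using midpoints of the illustrative ranges listed above.
costs = {
    "compliance_staff_or_counsel": 125_000,  # 50-200K+ annually
    "bias_audits_and_assessments": 35_000 * 2,  # 20-50K per system, 2 systems assumed
    "audit_trail_engineering": 60_000,       # rough cost of ~2 engineer-months
    "ongoing_monitoring": 30_000,            # continuous testing, annualized
}

total = sum(costs.values())
operating_budget = 2_500_000  # assumed annual operating budget

print(f"Estimated annual compliance cost: {total:,}")
print(f"Share of operating budget: {total / operating_budget:.1%}")
```

With these assumptions the total lands at roughly 11% of the operating budget, squarely inside the 10-15% range cited above.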

What Builders Should Do Now

Document everything. Start keeping records of how your AI systems work, what data they use, and what decisions they make. You'll need this for any compliance framework.
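A minimal sketch of what such a record might look like. The schema, field names, and example system here are invented for illustration; no regulator mandates this exact structure:

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class AISystemRecord:
    """Hypothetical documentation record for one deployed AI system."""
    name: str
    purpose: str
    risk_category: str                 # e.g. "high-risk" in the EU AI Act taxonomy
    training_data_sources: list = field(default_factory=list)
    known_limitations: list = field(default_factory=list)
    decisions_logged: bool = False     # are individual outputs kept in an audit trail?

# Example entry for an invented hiring tool (hiring is explicitly high-risk).
record = AISystemRecord(
    name="resume-screener-v2",
    purpose="Rank inbound job applications",
    risk_category="high-risk",
    training_data_sources=["internal ATS data 2019-2024"],
    known_limitations=["underrepresents non-US resume formats"],
    decisions_logged=True,
)

print(json.dumps(asdict(record), indent=2))
```

Keeping records as structured data rather than prose means the same inventory can feed an EU conformity assessment, a state-law bias audit, or an internal review without rework.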

Audit your training data. If you're using large language models or other machine learning systems, understand where the training data came from and what biases it might contain. The EU specifically requires bias testing for high-risk systems.
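One simple, widely used proxy for bias testing is comparing selection rates across groups, as in the "four-fifths rule" from U.S. employment law. A minimal sketch with toy data and hypothetical groups; this is one screening heuristic, not the EU Act's required methodology:

```python
def selection_rates(outcomes):
    """outcomes: list of (group, selected) pairs -> per-group selection rate."""
    totals, selected = {}, {}
    for group, picked in outcomes:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + int(picked)
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Min rate / max rate; the four-fifths rule flags values below 0.8."""
    return min(rates.values()) / max(rates.values())

# Toy data: group "A" is selected 3 of 4 times, group "B" only 1 of 4.
outcomes = [("A", True), ("A", True), ("A", False), ("A", True),
            ("B", True), ("B", False), ("B", False), ("B", False)]

rates = selection_rates(outcomes)
ratio = disparate_impact_ratio(rates)
print(rates, ratio)  # ratio of ~0.33 is well below 0.8, so this would be flagged
```

A flagged ratio doesn't prove unlawful bias on its own, but it tells you which systems need the deeper impact assessment described above.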

Design for transparency. Build systems that can explain their decisions. This isn't just compliance; it's good product design. Users want to understand why they got a particular result.

Assume the strictest rule applies. Don't bet on preemption or regulatory harmonization. Build to the highest standard you might encounter.

Plan for updates. Regulation will continue to evolve. What's compliant today might not be in 18 months. Budget for ongoing updates, not one-time fixes.

The Uncomfortable Truth

The regulatory landscape for AI is now more expensive and complex than it was six months ago. The days of "move fast and break things" are over. Companies that build compliance into their product roadmap from day one will have a massive advantage over those trying to retrofit it later.

The EU set the standard. The U.S. is fragmenting. And everyone else is watching. This is the new normal for AI companies.

The question isn't whether you'll need to comply. It's how much it will cost you to wait.