Derivinate NEWS

AI-Generated Content Isn't Yours — Here's Why

The Supreme Court Just Settled It

On March 2, 2026, the U.S. Supreme Court declined to hear *Thaler v. Perlmutter*, effectively closing the legal question of whether AI systems can be copyright authors. The answer is blunt: AI cannot be an author under the Copyright Act. That was the final blow to anyone hoping to copyright purely machine-generated content. But here's what actually matters for your business: the real legal battlefield is no longer who owns AI outputs. It's who's liable when those outputs infringe someone else's copyright.

The Supreme Court's decision leaves intact a D.C. Circuit ruling that AI-generated works lack the "human authorship" required for copyright protection. Translation: if your business relies entirely on AI to generate marketing copy, product descriptions, or design assets, those outputs live in a legal gray zone. You can't copyright them. Competitors can copy them. You have no legal recourse.

But that's the easy part of the problem.

The Real Liability Trap: Training Data

While courts were debating who can own AI outputs, the actual litigation war was happening elsewhere. Dozens of copyright infringement lawsuits are advancing toward dispositive rulings in 2026, and they're not about the outputs. They're about the training data.

The central legal question: Is training an AI model on unlicensed copyrighted works a violation of copyright law, or does it qualify as "fair use"?

The first significant ruling came in February 2025, when a federal court sided with Thomson Reuters in *Thomson Reuters Enterprise Centre v. ROSS Intelligence*. ROSS had used Thomson Reuters' Westlaw headnotes to train its AI legal research platform. The court rejected ROSS's fair use defense, finding that ROSS created a direct competitor to Westlaw and harmed both the market for Thomson Reuters' services and the potential market for AI training data. The court emphasized that factor four of the fair use analysis—the effect on the market for the original work—"strongly favored Thomson Reuters."

This matters because it signals how courts are thinking about AI training. If your AI tool competes directly with the copyrighted work you trained on, fair use becomes a very weak defense.

What This Means for Your Vendor Agreements

Here's where it gets practical. If you're using an AI tool built by someone else—Claude, ChatGPT, Cursor, whatever—you're now exposed to a risk you probably didn't think about: What if that tool was trained on copyrighted material without permission? Who's liable?

The answer depends entirely on your contract.

Most AI vendors have inserted liability disclaimers into their terms of service. OpenAI, for example, has been clear that it won't indemnify customers for claims that its training data infringes third-party IP. That means if The New York Times or Getty Images or any other copyright holder sues *you* for using an AI tool that was trained on their content, you're on your own.

Negotiating for broad IP indemnification is now critical. You should be pushing vendors to cover claims that the AI tool, its outputs, or its training data infringe third-party intellectual property. Most won't agree to full indemnification—the liability is too uncertain—but you can negotiate partial coverage or require them to carry insurance.

If your vendor won't budge on indemnification, you need to ask harder questions: What data was this model trained on? Can they prove it was licensed or falls under fair use? If they can't answer those questions clearly, you're gambling with your business.

The Copyright Office Already Made Its Position Clear

The U.S. Copyright Office has been consistent since 2022: purely machine-generated works cannot be copyrighted. In rejecting Dr. Stephen Thaler's attempt to copyright AI-generated art, the Copyright Office stated that "copyrightable creative works require human authors."

But here's the nuance that matters: works that combine human creativity with AI assistance might still qualify for copyright protection. If a human artist uses AI as a tool to create something, and that human made the creative choices that matter, the resulting work could be copyrightable.

This creates a perverse incentive: businesses that want copyright protection for their AI-generated content need to add human involvement. A marketer who uses AI to draft copy and then edits it, rewrites it, and makes substantive creative decisions might be able to claim copyright. A business that just hits "generate" and publishes the output cannot.

The New Liability Playbook

For small businesses and startups using AI, here's what you need to do right now:

Audit your AI usage. Document which tools you're using, what you're using them for, and whether you're publishing the outputs directly or modifying them. If you're modifying them, document that process. It could matter legally.

Demand transparency from vendors. Ask your AI provider specifically: What data was your model trained on? Do you have licenses for copyrighted material in your training set? Can you prove fair use? Get it in writing. If they won't answer, that's a red flag.

Negotiate indemnification clauses. Even partial indemnification is better than none. Push back. The cost of a copyright lawsuit is far higher than the cost of negotiating better terms now.

Consider human-in-the-loop workflows. If copyright protection matters for your outputs, make sure humans are making substantive creative decisions. This isn't just legal cover—it often produces better results anyway.

Get insurance. Some business insurance policies now cover IP infringement claims related to AI usage. It's worth asking your broker about.
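The audit and human-in-the-loop steps above amount to keeping a structured record of what each AI tool produced and what humans changed. A minimal sketch of such a log follows; the `AIUsageRecord` schema and its field names are hypothetical illustrations, not a legal standard or any vendor's format:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class AIUsageRecord:
    """One entry in a hypothetical AI-usage audit log."""
    tool: str                 # which AI tool produced the output
    purpose: str              # what the output is used for
    published_verbatim: bool  # True if output was published unmodified
    human_edits: str = ""     # description of substantive human changes

def audit_flags(records):
    """Return entries carrying the most copyright risk under the
    article's framing: outputs published verbatim with no
    documented human creative involvement."""
    return [r for r in records if r.published_verbatim and not r.human_edits]

records = [
    AIUsageRecord("ChatGPT", "product descriptions", True),
    AIUsageRecord("Claude", "blog drafts", False,
                  human_edits="rewrote structure, added examples"),
]

risky = audit_flags(records)
print(json.dumps([asdict(r) for r in risky], indent=2))
```

Even a log this simple gives you something concrete to show a vendor, an insurer, or a court: which outputs went out untouched, and where humans made the creative decisions that might support a copyright claim.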

The Uncertainty Isn't Over

The Supreme Court's decision on AI authorship was clear. But the bigger questions are still being decided in district courts and appeals courts across the country. The New York Times' lawsuit against OpenAI and Microsoft is still in discovery. Dozens of other cases are advancing. The Authors Guild case. The Getty Images case. Each one could shift the legal landscape.

What's not in question anymore: you don't own AI-generated content just because an AI made it. And if that AI was trained on copyrighted material, the liability could be yours.

The smart move isn't to avoid AI. It's to use it with your eyes open about who's liable when things go wrong. Because they will.