AI-Generated Content: Who Owns It, Who's Liable, What Courts Actually Ruled
The Supreme Court just answered one question about AI and copyright. The answer created about ten more.
On March 2, 2026, the Court declined to hear Thaler v. Perlmutter, a case that asked whether an AI system could be the author of a copyrightable work. The practical effect was clean: AI cannot be an author. But the real legal chaos? That's just beginning.
The denial left intact the D.C. Circuit's March 2025 ruling that copyright law requires human authorship. But here's what matters for your business: the appeals court was careful to say that works created "by or with the assistance of artificial intelligence" aren't automatically disqualified. The human authorship requirement doesn't mean AI-assisted work can't be copyrighted. It just means the human has to be involved.
That's the gray zone where actual business happens. And it's where the law is still getting written.
The Authorship Question Nobody Can Answer Yet
The Copyright Office has been trying to draw the line, but it keeps moving.
In January 2025, the Copyright Office issued its Copyrightability Report. The message: using AI as a tool doesn't defeat copyright protection. Neither does incorporating AI-generated content into a larger work. But the office didn't say where the line is. How much AI is too much? At what point does the AI stop being a tool and become the author?
That's not academic. It's everything.
Consider a designer who uses Midjourney to generate 10 variations of a concept, then hand-edits three of them, combines elements, and creates something new. Is that copyrightable? The Copyright Office says yes—probably. But it depends on how much creative direction the human provided, how much editing happened, how much the final output reflects human judgment versus machine generation.
No bright line. Just fact-specific analysis that could go either way in court.
The Training Data Lawsuits Are Still Flying
While Thaler settled the narrow question of AI-as-author, a dozen other lawsuits are still grinding through the courts over something more consequential: whether AI companies can train models on copyrighted works without permission.
The Authors Guild sued OpenAI for allegedly copying complete books to build ChatGPT. In October 2025, a federal judge in New York denied OpenAI's motion to dismiss the "output-based" copyright infringement claims—meaning the case can proceed to discovery and potentially trial.
That's significant. OpenAI argued that generating text similar to training data isn't infringement. The judge disagreed. The case will now determine whether the similarity between training data and generated output can constitute copyright infringement.
The stakes are enormous. If courts rule that training on copyrighted material requires permission, and that even transformative training isn't fair use, the entire training paradigm shifts. Models would need licensing agreements, or they'd have to be trained only on public domain or licensed data.
Liability: The Indemnification Shuffle
Because the law is unclear, the market is solving the problem with indemnification clauses.
Google announced in 2023 that it would indemnify customers against copyright claims related to generated output from its AI tools. The indemnity covers both Google's use of training data and the output customers create. If a customer gets sued for copyright infringement related to something generated by Google's AI, Google covers the legal costs and damages.
This is not altruism. It's risk transfer. Google is betting that copyright claims against customer output will be rare enough that the indemnity is cheap insurance. But it also signals something important: Google believes there's real liability exposure here.
Other vendors are taking different approaches. Some offer indemnification only for certain use cases. Others require customers to use their tools in specific ways—human review, modification, citation of sources—to qualify for coverage. The variation matters. If your AI vendor won't indemnify you, you're holding the legal risk yourself.
What This Actually Means for Your Business
The legal landscape breaks into three tiers:
Tier One: Human-directed AI work. You use an AI tool to assist with creative or analytical work, but you make the significant creative decisions, edit the output substantially, and the final product reflects your judgment. This is almost certainly copyrightable, and you almost certainly own it. The risk here is lowest.
Tier Two: AI-assisted work with human review. You generate multiple outputs, select the best ones, make moderate edits, and use the result. This is probably copyrightable, but the copyright strength depends on how much human work went into it. If sued, you'd need to prove human authorship was substantial. The risk is moderate.
Tier Three: Minimal human involvement. You prompt an AI, use the output with little or no modification, and claim copyright. This is the Thaler scenario. Courts are unlikely to grant copyright protection. Your risk is high.
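To make the triage concrete, here's a minimal sketch of how a team might self-assess where a work product falls. The criteria, field names, and thresholds are hypothetical illustrations, not legal tests; a court's fact-specific analysis weighs the whole record, not three booleans.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    ONE = "human-directed: strong copyright position, lowest risk"
    TWO = "AI-assisted: probable copyright, moderate risk"
    THREE = "minimal human involvement: the Thaler scenario, highest risk"

@dataclass
class WorkProduct:
    # Hypothetical self-assessment inputs, not legal standards.
    human_made_key_creative_decisions: bool  # concept, direction, composition
    substantially_edited: bool               # meaningful rewrites, not typo fixes
    human_selected_and_combined: bool        # curated among outputs, merged elements

def classify(work: WorkProduct) -> RiskTier:
    """Map a self-assessment onto the three tiers described above."""
    if work.human_made_key_creative_decisions and work.substantially_edited:
        return RiskTier.ONE
    if work.substantially_edited or work.human_selected_and_combined:
        return RiskTier.TWO
    return RiskTier.THREE

# The designer scenario: hand-edited, combined, human-directed output.
print(classify(WorkProduct(True, True, True)).value)
# A raw prompt-and-publish workflow lands in the Thaler scenario.
print(classify(WorkProduct(False, False, False)).value)
```

The point of writing it down, even crudely, is that it forces the same questions a court would ask: who made the creative decisions, and how much of the final product reflects them.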
The liability question cuts differently. Even if you own the copyright in your AI-generated work, you could still face infringement claims if the output resembles training data. Your indemnification from the AI vendor—if you have it—matters enormously here. If you don't have it, you're betting that the AI company trained responsibly and won't face successful copyright claims.
That's not a bet I'd recommend.
The Fair Use Question That Isn't Settled
Courts haven't directly ruled on whether training AI models on copyrighted data constitutes fair use. The Copyright Office's January 2025 report suggested that transformative use—a key fair use factor—might favor AI companies. Using copyrighted text to train a model that generates novel output could be transformative.
But the Authors Guild lawsuit suggests courts might disagree. The judge's decision to let the case proceed means the fair use question will eventually be litigated. And fair use is notoriously fact-specific. Even if courts rule that some AI training is fair use, they might rule that specific training practices—copying entire books, for instance—aren't.
What Businesses Should Do Right Now
First: Check your indemnification. If you're using an AI tool commercially, you need to know whether the vendor indemnifies you against copyright claims. If not, you're self-insuring. Price that risk into your decision.
Second: Document human involvement. If you're claiming copyright in AI-generated work, be able to prove the human creative contribution. Keep records of prompts, edits, iterations, and decisions; a sketch of what such a record might look like follows this list. Courts will want evidence.
Third: Avoid the Thaler trap. Don't claim copyright in output you barely touched. The further you move toward Tier Three, the weaker your legal position.
Fourth: Watch the Authors Guild case. When it settles or goes to trial, the outcome will reshape how AI companies train models and what liability they face. That will cascade down to what indemnification you can negotiate and what legal risk you actually carry.
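On the documentation point: one lightweight way to keep the evidence courts will want is an append-only provenance log, one entry per human decision. The schema below is a hypothetical illustration, not any standard or vendor format; anything that captures who did what, when, and why would serve.

```python
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class ProvenanceEvent:
    """One entry in an append-only record of human contributions.

    Field names are illustrative; the goal is evidence of human
    creative judgment at each step of the workflow.
    """
    actor: str    # who made the decision or edit
    action: str   # e.g. "prompt", "edit", "select", "combine"
    detail: str   # the prompt text, edit summary, or rationale
    tool: str = ""  # which AI tool and version was involved, if any
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def append_event(log_path: str, event: ProvenanceEvent) -> None:
    """Append one JSON line per event; never rewrite earlier entries."""
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(event)) + "\n")

# Example: the designer records a substantial hand edit.
append_event("provenance.jsonl", ProvenanceEvent(
    actor="j.doe",
    action="edit",
    detail="Replaced generated background; redrew the logo lockup by hand.",
))
```

A timestamped, append-only file is cheap to keep and hard to dispute, which is exactly what you want if you ever have to prove Tier One or Tier Two involvement.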
The Supreme Court answered one question. The hard ones are still in play.