
Grammarly's Fake Experts Reveal AI's Identity Problem

Grammarly launched a feature that let users get writing feedback "inspired by" Stephen King, Carl Sagan, and Julia Angwin. None of them consented. The company didn't ask. It just took their publicly available work, trained a model on it, and sold access to fake versions of their voices.

When Angwin discovered this in early March, she could hardly believe it was possible. "I had thought of deepfakes as something that happens to celebrities, mostly around images," she told the BBC. "Editing is a skill... it's my livelihood, but it's not something I've ever thought about anyone trying to steal from me before. I didn't even think it was stealable."

She was wrong. It was. And now Grammarly is facing a class-action lawsuit in the Southern District of New York seeking minimum damages of $5 million, with actual damages to be calculated from the company's earnings on the feature. Within 24 hours of the filing, 40+ people had contacted the lead attorney.

This is the story everyone's telling: company made a mistake, got caught, apologized, disabled the feature. CEO Shishir Mehrotra posted a LinkedIn apology acknowledging the tool had "misrepresented" expert voices. Problem solved.

Except it's not solved. And the real problem was never the feature itself.

The Logic Gap That Reveals Everything

Here's how Superhuman (which partnered with Grammarly on Expert Review) defended itself: "These experts are mentioned because their published works are publicly available and widely cited."

Read that carefully. The company is arguing that public availability equals permission to commercialize someone's identity.

That's the entire debate compressed into one sentence. And it's where you see the fundamental misunderstanding — or deliberate disregard — of what consent means in the AI era.

Public availability doesn't mean consent. Stephen King's novels are publicly available. That doesn't mean a company can train a model on them and then sell access to "Stephen King's voice" without asking. It's the difference between reading a book and being impersonated by a machine trained on your words.

The feature didn't just use names. It created fake personas that gave writing feedback in those people's voices. Julia Angwin tested the output and found it was bad — "making the sentences worse, more complex." She was essentially being impersonated by a machine that gave worse advice than she would give, under her name, to people paying for her expertise.

"The idea that my name would be in there giving people terrible advice is actually really appalling," she said.

The Response That Confirms the Problem

When Grammarly got caught, it didn't pivot to an opt-in model. It offered an opt-out.

That choice is revealing. The company understood this was wrong enough to give people a way out, but not wrong enough to ask permission first. It made the problem someone else's responsibility to fix. Hundreds of writers had to find out for themselves that they'd been impersonated, then take steps to stop it. The burden was on them.

Contrast that with how a company that actually believed in consent would have acted: build the feature, get explicit permission from each person you're going to impersonate, launch with those who said yes. If you can't get permission from Stephen King to use his voice, you don't get to use his voice.

Instead, Grammarly launched first and asked permission later — by making people opt out.

That's not a mistake. That's a business model decision.

Why This Matters More Than the Feature Removal

The media coverage has largely treated this as "company made a mistake and fixed it." The feature got disabled by March 12. Problem solved. Move on.

But the lawsuit matters more than the feature removal. Here's why: this case will establish legal precedent for whether AI companies can treat human identity as a commodity without consequences.

If Grammarly settles quietly, we get one outcome. If the class-action succeeds and damages are calculated based on company earnings from the feature, we get another. If the company fights it and loses, we get a third — one that makes every AI company think twice before training models on people's identities without permission.

The feature was just the test case. The lawsuit is the actual battle.

This is also happening at a moment when AI companies are moving aggressively into identity-adjacent territory. Meta announced four new generations of custom AI chips designed to reduce its reliance on Nvidia. Yann LeCun's new AMI Labs raised $1.03 billion, backed by Nvidia and Bezos Expeditions, to build "world models" that can simulate reality in increasingly detailed ways. These are the infrastructure plays that will power the next generation of AI products, and those products will need training data.

The question is: where does that training data come from? And who decides?

Right now, the answer is: wherever companies can find it, and they decide.

The Broader Pattern

This isn't isolated. It's part of a larger pattern where AI companies are extracting value from human work without compensation or consent.

Grammarly's Expert Review is just the most visible case because it's personal — you can see your own name being used. But the logic extends everywhere. Every LLM trained on internet text without explicit permission from the authors is operating on the same assumption: public availability equals usable material.

The difference with Grammarly is that it wasn't just using the work — it was impersonating the person. That crosses a line that feels intuitively wrong to people, even if the legal framework is still catching up.

Julia Angwin's reaction captures this perfectly. She didn't think identity could be stolen because she didn't think of it as property. But identity *is* property when someone is selling access to a fake version of your voice. And that's what Grammarly did.

What Happens Next

The lawsuit will take time. Class-action litigation moves slowly. But the momentum is there — 40+ people contacting the lead attorney within 24 hours suggests this will be more than a nuisance suit.

The real question is whether this becomes a precedent that forces AI companies to rethink their training data practices, or whether it becomes an isolated settlement that the industry absorbs and moves past.

Superhuman VP Alex Gay's defense, "their work is publicly available," suggests the company still doesn't understand what it did wrong. That confident wrongness might be the most dangerous thing here. It means they're not alone in thinking this way.

Every AI company with a training pipeline is making the same bet Grammarly made: that they can use human work and human identity without explicit permission, because it's publicly available. Most of them just haven't gotten caught yet.

This lawsuit is a test of whether that bet pays off. If Grammarly wins or settles cheaply, the industry learns that the risk is manageable. If Grammarly loses and damages are substantial, the industry learns something different.

The feature is already gone. But the question it raises — what do AI companies owe the humans whose identities they're using to train and power their products — is just getting started.