Grammarly pulls AI tool mimicking Stephen King and other writers

Grammarly disabled its “Expert Review” AI feature after backlash and a lawsuit over using the names and writing styles of famous authors and journalists, like Stephen King and Julia Angwin, without their consent. The company apologized and removed the feature, but faces legal action for allegedly misappropriating writers’ identities for commercial gain.

Grammarly, a popular writing tool, recently disabled an AI feature called “Expert Review” after facing backlash for mimicking the writing styles of well-known authors and academics, such as Stephen King and Carl Sagan, without their consent. The feature, developed by Superhuman (the tech firm that operates Grammarly), offered users writing feedback “inspired by” these prominent figures. Many of those impersonated were upset to find their names and reputations marketed as “AI personas” for commercial gain, prompting public criticism and legal action.

The controversy escalated when investigative journalist Julia Angwin, a contributing opinion writer for The New York Times, became the lead plaintiff in a class-action lawsuit against Superhuman and Grammarly. Angwin expressed shock at discovering her professional identity being marketed as a product, likening the experience to a form of digital theft. The lawsuit alleges that the company misappropriated the identities of hundreds of writers to boost profits from its paid subscription service, and seeks to prevent the platform from attributing advice to experts that they never actually gave.

Angwin and her lawyer, Peter Romer-Friedman, also highlighted the poor quality of the AI-generated suggestions. Angwin described the imitations as “sloppergangers”—a play on “AI slop”—because the edits attributed to her were often worse than the original text. The legal team argues that using someone’s name for commercial purposes without consent is unlawful, and that the burden of opting out should not fall on the writers themselves. The lawsuit seeks damages exceeding $5 million, though the final amount will depend on the company’s earnings from the tool.

Superhuman initially responded to the criticism by offering impersonated writers a way to opt out, but this was widely criticized as inadequate. Many affected writers and journalists argued that consent should have been obtained before their names were used, not after. The backlash prompted Superhuman’s CEO, Shishir Mehrotra, to apologize publicly, admitting that the company “fell short” and promising to rethink its approach to incorporating expert voices into the platform.

Despite the apology and the removal of the feature, Superhuman maintains that the legal claims are “without merit” and intends to defend itself in court. The company insists that the AI agent used only publicly available information to generate its suggestions and that the “Expert Review” feature had limited usage before being taken down. Moving forward, Superhuman says it is working on a better way to involve experts in its platform that will benefit both users and the experts themselves.