Grammarly faces class action over 'Expert Review' AI feature
TL;DR
- Grammarly faces a class-action lawsuit over its 'Expert Review' AI feature, led by journalist Julia Angwin.
- The feature allegedly used the identities of established authors to present writing suggestions without their consent, violating privacy and publicity rights.
- The incident highlights critical concerns around data privacy, intellectual property, and ethical AI development for all AI writing tools and platforms.
AI-powered writing assistant Grammarly is now facing a class-action lawsuit over its recently discontinued 'Expert Review' feature. Journalist Julia Angwin is leading the lawsuit, accusing Grammarly of violating her privacy and publicity rights by using her identity and likeness to present AI-generated editing suggestions without her consent or knowledge. This legal challenge underscores a growing tension between innovative AI functionalities and the ethical imperative of data privacy and intellectual property within the burgeoning AI tools market.
The controversial 'Expert Review' feature, which Grammarly deactivated just ahead of the lawsuit becoming public, presented users with advanced editing suggestions. The issue, as highlighted by Angwin and reported by sources like Wired AI, was that these suggestions were often framed as coming from established authors, academics, or other recognizable figures. Angwin, a Pulitzer Prize-winning journalist, alleges that her persona was used in this manner, effectively turning her and other writers into 'AI editors' without any form of compensation or explicit permission, as detailed by TechCrunch AI. This practice raises serious questions about consent, deepfakes, and the unauthorized commercial use of individuals' identities.
For Grammarly, a dominant player in the AI writing assistant space, this lawsuit represents a significant challenge that could erode user trust and lead to substantial legal costs and damages. More broadly, the incident sends a clear message across the AI tools competitive landscape. Developers of AI writing and content generation tools—from Jasper to Writer.com and others—must scrutinize their data sourcing, model training practices, and feature implementations for ethical compliance. Using personal data and identities without explicit consent directly threatens the credibility and user adoption of AI technologies, especially as, the New York Times notes, the lines between human and AI authorship become increasingly blurred.
The lawsuit forces a crucial conversation on the responsibility of AI tool providers. It emphasizes that features, no matter how innovative, must adhere to stringent ethical standards regarding personal data, intellectual property, and publicity rights. For users of AI tools, this case highlights the importance of understanding how their data, and potentially others' identities, are utilized by the platforms they employ. As AI continues to evolve, the demand for transparency, consent, and robust ethical frameworks will only intensify, shaping the future development and adoption of AI-powered solutions.