
NZ Media News
AI Identity Theft Lawsuit Rings Alarm for Marketers on Ethical Content Generation
Grammarly faces a lawsuit over its AI-powered 'Expert Review' feature, accused of using real individuals' identities without consent. This case underscores the urgent need for marketers to navigate the ethical and legal complexities of AI in content creation, particularly concerning intellectual property and personal likeness.
What Happened
- Journalist Julia Angwin is suing Grammarly, alleging the company used her identity for its AI-powered 'Expert Review' feature without permission.
- The lawsuit claims Grammarly's AI feature generated content suggestions attributed to real individuals, including Angwin, to lend them credibility.
- The practice reportedly used the likenesses of various experts for commercial purposes without their explicit consent.
- The legal action highlights growing concern over AI systems leveraging public personas and intellectual property without proper authorisation.
- The case was reported by The Verge on 11 March 2026, which detailed the ongoing legal challenge.
Why It Matters for NZ Marketers
- NZ marketers must scrutinise their AI tool usage to avoid similar legal and reputational risks, especially when generating content that mimics human expertise.
- The case may influence how AI-generated content that leverages personal identities is regulated and perceived in New Zealand.
- Local consumers and professionals may become more sensitive to AI-generated content that appears to misappropriate identities, eroding trust.
- NZ brands using AI for content creation or 'expert' endorsements need to ensure robust consent mechanisms are in place.
- The incident prompts a review of local intellectual property and privacy laws concerning AI's use of personal data and likenesses.
Strategic Implications
- Implement stringent ethical guidelines and legal reviews for all AI tools used in marketing, particularly for content generation and persona emulation.
- Prioritise transparency with audiences about the use of AI in content creation, clearly distinguishing human-authored from AI-assisted material.
- Develop clear internal policies for obtaining explicit consent when AI tools interact with or reference real individuals' identities or work.
- Invest in AI solutions that offer verifiable provenance and ethical sourcing of data, reducing the risk of intellectual property infringement.
- Educate marketing teams on the evolving legal landscape surrounding AI, identity, and content ownership to mitigate future liabilities.
Future Trend Signals
- Expect increased legal challenges globally regarding AI's use of personal identity, likeness, and intellectual property.
- Regulatory bodies will likely introduce stricter guidelines for AI transparency and consent in commercial applications.
- Demand for 'ethical AI' solutions with built-in compliance features will grow significantly.
- Consumer trust will increasingly hinge on brands' transparent and responsible use of AI.
Sources
Editorial note: This analysis is original, AI-assisted editorial content. All source material is attributed with links. No full articles are reproduced. Short excerpts are used under fair dealing principles.