
NZ Media News
Grammarly's AI Ethics Shift: A Precedent for Authenticity in Marketing
Grammarly has deactivated its 'Expert Review' AI feature following concerns that it used real writers' styles without explicit consent. This move signals a growing industry awareness of ethical AI deployment, particularly regarding the representation and 'cloning' of human expertise in automated content generation.
What Happened
- Grammarly disabled its 'Expert Review' AI feature on 11 March 2026.
- The feature claimed its edit suggestions were 'inspired by' real writers, including journalists from The Verge, without their permission.
- Grammarly said the deactivation allows it to 'reimagine the feature' and give experts greater control over how they are represented.
- The decision follows public criticism of AI mimicking human expertise without consent.
- The move reflects a broader industry response to concerns about AI's impact on intellectual property and individual identity.
- The company aims to reintroduce the feature with improved user utility and stronger expert consent mechanisms.
Why It Matters for NZ Marketers
- NZ marketers frequently use AI tools for content creation, from copywriting to social media posts, making ethical AI use a critical consideration.
- Grammarly's precedent highlights the importance of transparency and consent when AI models draw on or mimic human work, with direct implications for local content creators.
- NZ brands risk reputational damage if their AI-generated content is perceived as unethically sourced or as misrepresenting expertise.
- The incident could influence how NZ agencies and brands structure contracts with AI vendors, with clearer ethical guidelines becoming a standard requirement.
- It prompts a review of internal AI policies for NZ marketing teams, especially around the use of public data for AI training and content generation.
- For NZ businesses, authenticity and trust in marketing communications are paramount, and ethical AI practice directly underpins both.
Strategic Implications
- Prioritise ethical AI guidelines in all marketing technology adoption and content creation processes.
- Implement clear consent mechanisms if AI tools are trained on, or derive inspiration from, specific individuals' work.
- Foster transparency with audiences about the role of AI in content generation, particularly when expert insights are involved.
- Invest in AI tools that offer robust controls over data sourcing and ethical model training.
- Educate marketing teams on the evolving landscape of AI ethics and intellectual property rights.
- Regularly audit AI-generated content for originality, authenticity, and potential ethical breaches.
Future Trend Signals
- Increasing scrutiny of AI models' data provenance and training methodologies.
- Development of more sophisticated consent frameworks for individuals whose work informs AI systems.
- Emergence of 'ethical AI' as a key differentiator for technology providers and marketing agencies.
- Greater emphasis on human oversight and verification in AI-driven content workflows.
Sources
Editorial note: This analysis is original, AI-assisted editorial content. All source material is attributed with links. No full articles are reproduced. Short excerpts are used under fair dealing principles.
Related Analysis
More posts sharing similar topics

AI & Commerce · Creator Economy
Influencer Disclosure: ACCC Fine Sets Trans-Tasman Precedent for Brands

AI & Commerce · Creator Economy
AI Content Authenticity Under Scrutiny: Publisher's Withdrawal Signals Broader Marketing Challenge

AI & Commerce · Creator Economy
Global Streamers Maintain Grip as Foreign Investment Shapes Film Financing

AI & Commerce · Creator Economy
AI Content: Navigating the IP Minefield for NZ Marketers
