“An insult to life” and “utterly disgusting” were how Hayao Miyazaki, the famed founder of Japanese animation powerhouse Studio Ghibli, described AI in a resurfaced 2016 video that recently went viral. Miyazaki said he would “never wish to incorporate this technology into my work at all,” adding, “I strongly feel that this is an insult to life itself.” His words have reignited debate about the ethical concerns surrounding AI tools trained on copyrighted creative works and what this means for the future of human artists and creative industries.
The debate intensified last month after OpenAI's new image generation tool in ChatGPT allowed users to transform popular internet memes or personal photos into the distinct style of Studio Ghibli animations, flooding the internet with Ghibli-style portraits. Even OpenAI CEO Sam Altman changed his profile picture on social media platform X to a Ghibli-style portrait. However, it remains unclear whether OpenAI had a license to use Ghibli's work for training or whether this constituted a blatant copyright breach.
Artist Karla Ortiz expressed strong disapproval, telling the Associated Press, “That’s using Ghibli’s branding, their name, their work, their reputation, to promote (OpenAI) products. It’s an insult. It’s exploitation.”

Meanwhile, tech giants like OpenAI and Google are lobbying governments to allow their AI models to train on copyrighted material, arguing that applying fair use protections to AI “is a matter of national security”. Some countries, including Japan and Singapore, have already amended laws to permit the use of copyrighted works for AI training, while the UK and Hong Kong are considering similar exceptions. Critics counter that this is not fair use, and that if requiring companies to pay artists for each referenced work makes AI platforms more expensive, so be it.
Seth Hays, managing director at APAC Gates, a Taiwan-based management and digital rights consultancy, explains, “We have seen some governments explore options for allowing a text and data mining (TDM) exception to copyright infringement, which benefits the AI industry and is intended to attract more investment and innovation in this sector.” He adds that some proposals include an opt-out for copyright owners, though the technical feasibility of this remains challenging.
In addition to legal exceptions, some legislators and industry groups are advocating for new rights to protect creative works. For example, the Japan Anime Association calls for laws protecting the ‘style’ of certain anime works. Another approach involves copyright management organisations (CMOs) that collect fees on behalf of rights holders.
Hays believes the best path forward is to enhance existing intellectual property (IP) laws, particularly underdeveloped areas such as Rights of Publicity or Likeness (to combat deepfakes), trade dress (protecting the look and feel of businesses), trademark dilution, passing off, and unfair competition. “Many countries don’t recognise these species of IP, and should develop them,” he says, emphasising the need for courts, especially in common law jurisdictions, to update jurisprudence for the 21st century.
On the other hand, Marc Hoag, a California-licensed SaaS and tech attorney specialising in AI and copyright law, argues that the idea that creators are harmed by AI training is a category error. “AI doesn’t store or retrieve works; it transforms patterns into weights,” he explains. “Real harm happens at the output level, where end users intentionally misuse AI to clone or infringe. That’s where regulation belongs. Until we acknowledge that training is not copying, we’re just misdiagnosing the problem and risking innovation.”
Hoag further adds that AI training does not involve copying or storing works in a fixed sense: “It transforms inputs into statistical weights, not outputs. Forcing licensing here would be like trying to license every article you’ve ever read before forming your own opinion.” He suggests that if lawmakers want compensation in principle, the only sensible solution is a legislative carve-out exempting training from copyright entirely.
A fine line between innovation and exploitation
The tension between innovation and exploitation is palpable among artists and creators. Many have pushed back against the claim that training AI models on copyrighted works does not breach the law. Visual artists and publications like The New York Times have filed copyright lawsuits against AI companies such as OpenAI, Microsoft, Stability AI, Midjourney, and DeviantArt. One lawsuit alleges that AI image generators violate the rights of millions of artists by ingesting massive collections of digital images and producing derivative works that compete with the originals.
In the advertising industry, which relies heavily on creativity, many agencies tread carefully amid legal uncertainties about using scraped data for AI training. Michael Titshall, APAC CEO of RGA, says his agency only uses platforms with proper enterprise agreements to ensure systems and principles are in place to avoid copyright infringement. “Legal agreements between platforms, brands, and agencies are essential so that everyone’s rights and responsibilities are clear,” he says.
Titshall highlights the importance of strong guardrails when creating work for clients. “If a tool allows the public to generate anything without limits—like in the Studio Ghibli example—that’s a red flag,” he warns. “Just like we wouldn’t copy an artistic style like Studio Ghibli’s without permission 10 years ago, the same rules apply now. We rely on experienced humans who understand creative copyright to make sure what we create is original, respectful, and legally sound.”

While Titshall supports text and data mining for AI training, he insists the advertising industry must uphold its long-standing commitment to originality. “I believe text and data mining for AI training can be considered fair use. But here too, intent matters. If the output is clearly designed to mimic a specific artist’s style or creative identity, that crosses a line,” he says. “It’s absolutely critical that we apply the same principles we’ve always followed to protect creators’ rights. As brands, agencies, and members of the creative community, we have a responsibility to uphold those standards. It’s something I care deeply about.”
It’s not just about compliance—it’s also about creative integrity
Similarly, BBDO recently established a global innovation and AI community council, a forum where employees share learnings and explore how emerging technology can enhance creativity while adhering to best practices.
Camilla Gleditsch, head of agency communications at BBDO Asia, emphasises, “It’s not just about compliance—it’s also about creative integrity. AI should support great thinking, not shortcut it. We’re not looking to use AI to generate more for the sake of it. We’re already entering a time where we’ll have to navigate a sea of ‘rubbish’ AI-generated content—and I say rubbish because it’s often content produced without the right intent, without integrity, or without the right mindset.”
Whether new legislation can effectively balance the interests of creators, AI developers, and consumers remains uncertain. Given how fast technology and business models evolve, it is unlikely that laws will delve into the specifics of individual commercial deals. “We may find that court cases will promote licensing deals,” says Hays. “Indeed, guidance from both Singapore and Japan indicates that policymakers prefer voluntary, amicable resolution of this conflict through licensing. Of course, how a firm can meaningfully compensate all copyright holders in their training data may not be economically possible.”
Despite these challenges, there is widespread support for AI companies fairly compensating creators when their work is used for training. “If a model benefits from someone’s creative work, that value should be acknowledged,” says Gleditsch. “Compensation models may not be straightforward, but the principle is good. Long term, this could actually elevate advertising creativity—encouraging better AI tools, more ethical use, and partnerships that respect the original spark behind the work… so we don’t move into a future where creativity becomes automated to the point of losing its soul.”
Others side with the AI industry’s argument that training on copyrighted works constitutes fair use. Katya Obolensky, managing director at VCCP and herself a creator whose books and blog posts have been used to train AI models, explains, “AI models are transforming the training data into learned connections, not simply regurgitating it.” She points out that even Avatar director James Cameron supports training on a wide variety of data, recognising this distinction.
Obolensky expresses concern that requiring payment for every piece of training data would concentrate power in the hands of a few wealthy companies. “That isn’t a healthy situation for the advertising industry or anybody else,” she says. “In my opinion, we should allow free use of training data for AI models that are released freely as open source, which enables and encourages creativity and innovation.”
Meanwhile, RGA’s Titshall feels that if a tool is fine-tuned to replicate a creator’s style or identity, that creator deserves compensation on their terms, just as in traditional production. “Before AI, if you wanted a specific look or feel, you didn’t just copy someone, you hired them,” he says. “An illustrator, a voice-over artist, a filmmaker—each brought their distinctive style and they were paid accordingly. The same should apply now.”
From his perspective, AI need not disrupt the fundamentals of advertising. “If we treat compensation with the same logic we’ve always used—and map AI processes to existing and familiar creative workflows—it’s entirely possible to move forward in a fair and sustainable way,” Titshall concludes. “Ultimately, infringement of rights always needs to be guarded against. I think the future of trust in this space will rely on things like enterprise agreements, transparent commitments from AI providers, and clear acceptance of liability—both by those providers and by agencies.”