![Majority of marketers are unprepared to combat deepfakes: study](https://cdn.i.haymarketmedia.asia/?n=campaign-asia%2fcontent%2fshutterstock_2431635551.jpg&h=570&w=855&q=100&v=20170226&c=1)
Apart from their devastating impact on the lives of celebrities and private individuals, deepfakes also pose a significant reputational risk to marketing organisations.
In a report focused on the B2B sector, Forrester revealed that deepfakes are a concern for nearly 70% of marketing leaders, who worry specifically about staff impersonation and false public statements.
![](https://cdn.i.haymarketmedia.asia?n=campaign-asia%2fcontent%2f20250210034549_Slide1.jpg&c=0)
However, only 20% of marketing heads believe their organisations are up to date on deepfakes, and a mere 17% have social listening or content verification systems in place, measures that could form a bulwark against the threat.
Potential financial losses from fraudulent transactions and phishing are a concern for 74% of surveyed organisations. Yet fewer than 30% are considering using AI to fight the threat of AI-powered deepfakes, pointing to a gap in technological preparedness.
While nearly 65% of executives believe deepfakes could harm public trust in their brand, proactive steps to educate consumers and build resilience are uncommon across industries.
A related phenomenon highlighted by the report is 'positive deepfakes', in which AI agents and clones of executives stand in for the real executives in certain customer service situations. Without adequate transparency and disclosure, however, even positive deepfakes could have an adverse impact on trust.
In a blog post, Karen Tran, principal analyst at Forrester, said: “Deepfakes are not just a distant threat; they are a present danger with the potential for long-lasting repercussions, as they can target corporate executives, disrupt business operations, and erode stakeholder confidence.”
With deepfakes likely to become more widespread and, thanks to advances in generative AI, more convincing, Tran believes marketers should have defences in place against these threats.
She recommended recognising deepfakes as a critical risk and keeping cross-functional teams, including legal, on standby. Planning for deepfakes should be part of every crisis communication strategy, and companies would do well to run simulations to gauge their levels of preparedness. Tran concluded that brands ought to build trust in an era of “deep doubt,” so that even if a deepfake-related incident does occur, the brand's credibility will help it recover.