Ida Axling
May 30, 2024

Should adland take Mark Read’s deepfake as a wake-up call?

Industry experts call for collaboration and a focus on human relationships amid the rise of deepfake scams and fake information.

Clockwise from top left: Fern Miller, Jennifer Heape, Catherine Offord and Oliver Feldwick

When WPP chief executive Mark Read was targeted by deepfake scammers earlier this month, it was a stark reminder of the potential dangers of AI.

Fraudsters impersonated Read using an artificial intelligence voice clone, a fake WhatsApp account and YouTube footage, with the aim of soliciting money and personal details from another WPP executive.

The scam was disclosed in an email to the WPP leadership team, seen by The Guardian, in which Read said that the attempt had not been successful.

While Read is currently the only boss of an agency holding group known to have been targeted by this kind of scam, it was not an isolated incident.

Earlier this year, UK-based engineering group Arup lost £20m in a deepfake scam after an employee in Hong Kong was tricked into sending money to fraudsters by an AI-generated video call.

In a recent column for Campaign, EssenceMediacom X chief transformation officer Sue Unerman wrote about the increase in deepfakes and fake information.

“Unless AI tools for detecting and deterring deepfakes grow at a faster rate than the deepfake proliferation, then the impact on trust might be sudden and irreversible,” Unerman warned, and called on everyone in the industry to pay attention.

Campaign asked adland whether what happened to Mark Read should be a wake-up call for the industry.

 

Jennifer Heape

Head of product, co-founder and chief creative officer, Vixen Labs (part of House 337)

AI has the potential to bring about phenomenal positive change, yet like any powerful tool, it can be manipulated and misused. This should be a wake-up call not because it was an attack at the heart of adland, but because it illustrates how AI can be weaponised.

Deepfakes are not just daft. False information that could sway elections and threaten democracy is profoundly dangerous.

It would be all too easy to throw our hands up and say there’s nothing to be done. But we can use our skills as creative problem-solvers and communicators to set up frameworks for the ethical use of AI, helping to educate the public on its dangers as well as its vast potential for good.

Catherine Offord

Chief notorious officer, Notorious Communications

Yes. But…

Clearly this is something to be taken very seriously. However, when you consider the hybrid virtual world in which adland operates, it feels like an obvious move by fraudsters.

We also need to consider that sophisticated deepfake technology can replicate someone’s image, voice and movements, but what it can’t do is replicate human relationships.

What this incident should do is highlight the importance of deep connections within agencies and particularly at an agency leadership level, so that any imposters can be spotted quickly when they display behaviours that we wouldn’t recognise.

Fern Miller

Co-founder and chief strategy officer, Uncharted Studio

It shouldn’t take a frustrated attempt to defraud one of our leaders to alert the industry to the risks of cybercrime (but it will help).

Our clients are all grappling with the challenge already, and this year there will be more than 40 significant elections worldwide – potentially providing a global platform for disinformation experiments.

In the near future, deepfakes will affect our definition of authenticity; how we value celebrity; and how safe we feel in any online experience that involves a camera or microphone. Understanding and resolving these issues will help us to explore the creative potential of AI technology without fear.

Oliver Feldwick

Chief innovation officer, T&P Group

It’s not so much a wake-up call for our industry as a wake-up call for society and industry at large. Technology can and will be a huge force for good, but only if we’re all actively involved in making that happen.
 
The printing press spread knowledge, but also hate and misinformation. We can put rockets into space, but also fire rockets at each other. Nuclear energy powers us, but also threatens to destroy us.
 
Every new technology gives us new superpowers. Unfortunately, those powers can be used for good and bad. We saw this with scammers and spammers in the early days of the internet. Generative AI is the next generation of this.
 
There isn’t an easy answer to this – simply not using technology isn’t an option. We can’t regress to a time without these technologies, so we must progress towards solutions.
 
Ultimately, education, collaboration and regulation will all help limit the impact of bad actors. Watermarking and the Content Authenticity Initiative, combined with better media literacy and training, can limit a lot of these threats. Over time, much like with spam, the algorithms to detect and protect will get better and limit the opportunity for these kinds of scams.
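To make the provenance idea concrete, the sketch below is a toy illustration of the general principle behind content credentials: a publisher signs a digest of a media file, and anyone can later check that the file has not been altered. It is not the Content Authenticity Initiative’s actual specification; the Ed25519 keys, the Python cryptography package and the placeholder media bytes are assumptions made purely for illustration.

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature
    import hashlib

    # Publisher side: sign a digest of the media file so its provenance can travel with it.
    private_key = Ed25519PrivateKey.generate()
    public_key = private_key.public_key()

    media_bytes = b"raw bytes of an image or video file"  # placeholder content
    signature = private_key.sign(hashlib.sha256(media_bytes).digest())

    # Consumer side: recompute the digest and verify it against the publisher's public key.
    # Any edit to the media invalidates the signature.
    def looks_authentic(media: bytes, sig: bytes) -> bool:
        try:
            public_key.verify(sig, hashlib.sha256(media).digest())
            return True
        except InvalidSignature:
            return False

    print(looks_authentic(media_bytes, signature))                 # True
    print(looks_authentic(media_bytes + b" tampered", signature))  # False

Real-world schemes embed this kind of signed metadata in the file itself and chain it back to a trusted issuer, which is what allows detection tools to flag content whose credentials are missing or broken.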

Source: Campaign UK