Olivia Parker
Feb 1, 2019

Deepfake videos menace publishers and brands

The rise of video manipulation is spurring news publishers to take action, explains the director of the BBC World Service Group. But risk experts say it's only a matter of time before brands get hit too.

A screenshot from a BBC report experimenting with deepfakes, in which the newsreader Matthew Amroliwala appears to be able to speak Mandarin, a language he doesn't know in real life.

The proliferation of products like Photoshop means we have had the ability to edit images easily for years. There’s something particularly absorbing about video, however, that makes the idea of manipulation in that medium feel unsettling.

Until recently, the difficulty of altering elements of a film convincingly enough to convey an impression different from the original largely prevented the practice. That is now changing. The tools for creating ‘deepfake videos’, as they are known, have developed partly out of academic research on artificial intelligence and deep learning, hence the name. In the last couple of years they have become much more widespread, cheaper and easier to use.

At the moment, deepfakes are perhaps most commonly associated with the porn industry, where operators have appropriated video manipulation tools to, for example, graft celebrities’ faces onto actors’ bodies. The American actor and producer Jordan Peele made the broader, even more nefarious potential of deepfake video clear in April 2018, when he published a clip of Barack Obama in which the former US president appeared to deliver a speech he had never given in reality. His lips were synchronised to the new words, which were spoken by Peele impersonating Obama’s voice.

It’s easy to imagine the numerous ways such technology could be deployed, and the impact. Alongside potential threats to public security caused by the spread of misinformation, the propagation of deepfakes could see us reach a time where people simply stop believing what they see in the digital space. It’s for all these reasons that entities including governments, news publishers and (some) commercial brands are starting to take the subject so seriously of late.

The BBC takes on deepfakes

Jamie Angus has been leading a major drive to counter fake news since becoming director of the BBC World Service Group last year. Video forgery is a particular source of interest, he tells Campaign Asia-Pacific during an interview in Hong Kong, because of how rapidly the technology is developing to allow amateurs, or “bedroom fakers”, to use it as they like.

“We see a lot of problems with our own news content being faked by other people, and that poses a reputational challenge to us as international broadcasters,” says Angus. “We're asking ourselves how to detect it, how to automate the detection of it, how to team up with other international broadcasters to come up with a common set of standards that automate this kind of watching exercise.”
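
As a rough illustration of what “automating this kind of watching exercise” could involve at its most basic, the sketch below fingerprints frames from a broadcaster’s own archive with a simple perceptual hash and flags suspect frames that drift too far from the originals. It is a minimal, hypothetical example rather than anything the BBC has described: the file paths, the 64-bit average hash and the distance threshold are all assumptions chosen for illustration.

```python
# Hypothetical sketch: detect re-used or altered footage by comparing perceptual
# fingerprints of archived broadcast frames against frames from a suspect clip.
# Not a BBC system; paths, threshold and hash choice are illustrative only.
# Requires Pillow (pip install Pillow).

from PIL import Image

HASH_SIZE = 8  # 8x8 grid -> 64-bit fingerprint per frame


def average_hash(path: str) -> int:
    """Compute a 64-bit perceptual hash: shrink, grayscale, threshold at the mean."""
    img = Image.open(path).convert("L").resize((HASH_SIZE, HASH_SIZE))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits


def hamming(a: int, b: int) -> int:
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")


def looks_tampered(original_frame: str, suspect_frame: str, threshold: int = 10) -> bool:
    """Flag a suspect frame whose fingerprint drifts far from the archived original."""
    return hamming(average_hash(original_frame), average_hash(suspect_frame)) > threshold


if __name__ == "__main__":
    # Hypothetical file names: frames pre-extracted from archive and suspect clips.
    print(looks_tampered("archive/focus_on_africa_0012.png", "suspect/clip_0012.png"))
```

A production system would have to work at scale, hashing every frame of published output, indexing the fingerprints and matching whole clips rather than single frames, but the matching principle is the same.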

The problems the BBC is facing from fake videos fall across a spectrum. At one end are instances in which non-BBC films or clips get labelled as BBC content, most likely without malicious intent, and then shared. Angus gives the example of a documentary-style video uploaded to YouTube last year, which appeared to show that Domino’s pizzas in India were made using “fake cheese”. The title of the video claimed that the clip came from a BBC news report; the film went viral across the country and, at Domino’s request, the broadcaster agreed to publish a confirmation that it had not made it.

Jamie Angus, director of the BBC World Service Group


At the other end of the spectrum are videos faked up to look like BBC segments for the specific purpose of spreading false news. This occurred in 2017 during the Kenyan elections, says Angus, when a TV report doctored to appear as though it came from the BBC’s Focus On Africa show claimed to depict the poll results for a certain set of candidates. Because the video looked professional enough to take in the casual observer, the BBC decided to release a statement calling it out as fake.

Publishing clarifications may be of limited use. Once a fake video has flown around the web and, of particular concern, been shared across unsearchable and hard-to-track ‘dark social’ channels like the private messaging apps WhatsApp and WeChat, the damage can be very hard to contain. Integrity, reputation and safety are all at stake, and crises could conceivably affect all kinds of players in the digital space.

Damage control for brands

Communicators and marketers realised the potential impact of deepfake videos early on, says David Ko, SVP and Asia lead at the digital consultancy Ruder Finn Innovation Studios Asia (RFI). But some are apparently taking the threat more seriously than others: when Campaign asked several industry communications professionals for comment, their responses suggested it was not high on their agenda.

To raise understanding of the issue, Ko’s company added training on deepfake video scenarios to Sonar, the crisis simulation service it delivers to brands across Asia, in Q3 2018.

During this one-day workshop, a brand’s crisis management team will experience a simulated communications disaster in which one of the ‘escalation’ options is the appearance of a deepfake video. “We might be training a bank,” Ko says, “and we might have a deepfake video created of the CEO of that bank being interviewed on the BBC, for example, talking about perhaps their disregard for the environmental harm of palm oil. Or a CEO denying culpability in a ‘Me Too’ sexual assault in the workplace type of situation. It could be really anything, but we make the video as realistic as possible, and we put them in a situation where this video is being shared on dark social, so within private sharing channels, and it can potentially incite outrage or incite violence and the company has to learn how to deal with the fallout from that.”

The best response in such a case would amount to finding out how far the faked video has spread and then correcting the record using all possible channels, including owned, earned and paid media, says Ko.

The BBC published this article following the release of a fake report on the Kenyan elections in 2017


Neither Ko nor Angus is aware of an occasion thus far in which a commercial brand has been targeted in this way, but Ko says that none of the brand communicators he’s spoken to are taking it lightly.

Personally, he is “100% certain” that something will happen, and feels pessimistic about the coming impact of deepfakes. The closed nature of dark social, coupled with different markets’ varying degrees of sophistication around consuming and verifying the truth in shared information, could spell “a recipe for disaster”, he thinks.

Angus concurs that while types of misinformation and the way they spread can vary in different regions—internet users in Asia, for example, are often members of large, extended chat groups that can be a “potent vector” for spreading content—the challenge is the same around the world, and much of it comes back to the nature of social media platforms. “The platforms themselves unintentionally make the sharing and promotion of this content easier and they make the sharing and promotion of quality content harder because of the way they are set up, and I think that's an issue for brands, just as it is for news publishers,” he says.

The BBC is currently in conversation with WhatsApp on this subject, in particular about content sharing during the Indian elections in April and May. WhatsApp wasn’t designed to be a platform for digital news, but given that this is how people are using it, the BBC wants to explore whether it will allow news providers to publish their content directly into it. “We've said ‘OK, so how could the BBC publish a sort of BBC-verified fact about the elections on every day of the campaign into WhatsApp, such that it appeared in people's messaging if they'd opted into the BBC?'” Angus says.

Pre-emptive action

Deepfakes may yet prove to have a silver lining for traditional media players. Readers are more likely to seek news via a publication’s own website if potential fakes start to riddle the wider internet. This, in turn, might affect where brands direct their advertising spend. In Ko’s view, people are coming out of a “honeymoon” period with social media that has lasted some eight to ten years, and are realising once more that trusted news sources have an important role to play.

David Ko, SVP and Asia lead at Ruder Finn Innovation Studios


While Ko doesn’t think there is much commercial brands can do to mitigate deepfake video damage beyond reacting in the event of a crisis, Angus wants news providers to take a more proactive approach. A priority for the BBC in 2019, therefore, is to develop a set of protocols for getting computers to help solve the deepfake problem. This would involve automating the indexing and searching of online content to detect manipulated video, allowing editors to decide quickly how to respond. Digital watermarks that could reveal any interference with a video are another option the BBC is exploring.
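
To make the watermarking idea concrete, here is a toy sketch of a fragile watermark: a known payload written into the least significant bits of a frame, which any re-rendering or pixel-level manipulation would destroy. The payload string, file names and single-frame scope are hypothetical, and a real scheme would also have to survive, or at least detect, ordinary re-encoding; this is only meant to show the verification principle, not the BBC’s actual approach.

```python
# Toy fragile watermark, for illustration only: embed a known bit pattern in the
# least significant bits of a frame, then check whether it is still intact.
# Editing or re-synthesising the pixels destroys the mark, so a failed check is
# a signal the frame has been interfered with. Requires NumPy and Pillow.

import numpy as np
from PIL import Image

WATERMARK = "BBC-2019"  # hypothetical payload identifying the publisher


def _payload_bits(text: str) -> np.ndarray:
    return np.unpackbits(np.frombuffer(text.encode("ascii"), dtype=np.uint8))


def embed(frame_in: str, frame_out: str) -> None:
    """Write the payload into the least significant bits of the first pixel values."""
    pixels = np.array(Image.open(frame_in).convert("RGB"))
    flat = pixels.reshape(-1)
    bits = _payload_bits(WATERMARK)
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits  # overwrite LSBs
    # Save losslessly so the LSBs are not wiped out by compression.
    Image.fromarray(flat.reshape(pixels.shape)).save(frame_out, format="PNG")


def verify(frame_path: str) -> bool:
    """True if the embedded payload is intact, False if the frame was altered."""
    flat = np.array(Image.open(frame_path).convert("RGB")).reshape(-1)
    n = len(WATERMARK.encode("ascii")) * 8
    recovered = np.packbits(flat[:n] & 1).tobytes()
    return recovered == WATERMARK.encode("ascii")


if __name__ == "__main__":
    embed("frame.png", "frame_marked.png")   # hypothetical file names
    print(verify("frame_marked.png"))        # True if untouched, False after edits
```

In practice the check would run over sampled frames of every published clip, and a broken watermark would be a prompt for editorial review rather than proof of a deepfake.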

Angus also thinks it is worthwhile bringing the concepts behind deepfakes to wider public attention, a bid several publishers are engaged in. A report for the Wall Street Journal in October, for example, showed the journalist Jason Bellini being “deepfaked” to appear to dance like Bruno Mars. In November, the BBC released a film as part of its Beyond Fake News season in which newsreader Matthew Amroliwala delivered a script in Spanish, Mandarin and Hindi, despite only being able to speak English. The deepfake was built using software made by the AI startup Synthesia and is, according to Angus, “mostly convincing”. It took two weeks to make, but as the technology develops that timescale is likely to get a lot shorter.

Those for whom deepfakes are on the radar believe it's only a matter of time before the technology is used in a way that the world won't be able to ignore. Pooling resources—whether from brands, journalists, platforms or governments—to combat the threat may be a course of action that becomes increasingly necessary.

Source: Campaign Asia
