Eighteen months after launching at the Cannes Lions Festival of Creativity in 2019, the Global Alliance for Responsible Media has ramped up its fight against harmful content on social media.
In collaboration with the World Federation of Advertisers (WFA), the global alliance said Wednesday it has established 11 definitions for what constitutes “harmful content” and developed universal metrics that track each category consistently across platforms.
The definitions, which include topic areas such as adult and explicit content, profanity, terrorism and hate speech, are laid out on a sliding scale from low risk to high risk to accommodate brands with different risk appetites.
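To make the sliding scale concrete: a brand could express its tolerance for each category as a maximum acceptable risk level. The sketch below is purely illustrative, assuming a simple tier set and using only the four categories named above; it is not GARM's published framework or any platform's API.

```python
from enum import Enum

class Risk(Enum):
    NONE = 0    # brand accepts no risk in this category
    LOW = 1
    MEDIUM = 2
    HIGH = 3

# Hypothetical per-brand tolerances for four of the 11 categories;
# the tier assignments are illustrative assumptions, not GARM's.
BRAND_TOLERANCE = {
    "adult_and_explicit": Risk.LOW,
    "profanity": Risk.MEDIUM,
    "terrorism": Risk.NONE,      # never acceptable at any risk level
    "hate_speech": Risk.NONE,
}

def placement_allowed(category: str, content_risk: Risk) -> bool:
    """Allow an ad placement only if the content's risk rating in a
    category does not exceed the brand's tolerance for that category."""
    tolerance = BRAND_TOLERANCE.get(category, Risk.NONE)
    return content_risk.value <= tolerance.value

# Example: low-risk profanity clears the bar; terrorism content never does.
assert placement_allowed("profanity", Risk.LOW)
assert not placement_allowed("terrorism", Risk.LOW)
```

The point of a shared scale like this is that the same brand tolerance settings could be applied consistently across every platform, rather than re-expressed in each platform's bespoke controls.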
Platforms including Facebook, YouTube and Twitter have committed to developing tools that enable brands to track and avoid harmful content on their services. They have also agreed to independent audits by the Media Rating Council (MRC) so advertisers can hold them accountable to KPIs.
The council will complete its audit of YouTube before the end of November, and the global alliance aims to have all major platforms either audited or in the process of an audit by the end of the year.
Creating consistent definitions for harmful speech was an imperative first step toward tackling brand safety issues across the social media landscape, said Stephan Loerke, CEO of the WFA.
“You had platforms operating with their own definitions of harmful content, tools that were bespoke to the platforms and, on average, a change in safety policies every two weeks,” he said. “That created an inefficient and unscalable system.”
While the platforms often publish stats highlighting what percentage of harmful content is caught by their AI systems, that information isn’t necessarily helpful to brands, nor is it independently vetted, Loerke said.
“Whether [the content] is caught by AI or people is irrelevant,” he said. “What we care about is, how much harmful content has been seen, what is its prevalence and are we making progress every quarter in reducing that prevalence?”
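In practice, prevalence is usually expressed as the share of total views that land on violating content, tracked period over period. A minimal sketch of that arithmetic (the function and figures here are hypothetical, not an MRC or platform metric definition):

```python
def prevalence(violative_views: int, total_views: int) -> float:
    """Share of all content views that landed on policy-violating content."""
    return violative_views / total_views if total_views else 0.0

# Hypothetical quarter-over-quarter check of the kind Loerke describes.
q1 = prevalence(violative_views=1_800, total_views=1_000_000)  # 0.0018, i.e. 0.18%
q2 = prevalence(violative_views=1_200, total_views=1_000_000)  # 0.0012, i.e. 0.12%
print(q2 < q1)  # True: prevalence fell this quarter
```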
Facebook, YouTube, Twitter, Snap, TikTok and Pinterest have also agreed to make detailed plans for consistent brand safety and content adjacency control tools available by the end of the year.
“This is not about prescribing to platforms how they should go about eliminating harmful content,” Loerke said. “They are best positioned to manage that. We are only interested in outcomes.”
Behind the scenes
Since the first brand safety scandal erupted on YouTube in 2016, platforms have become much more willing to play along with advertisers’ demands.
“Three years ago, this was a media technician question,” Loerke said. “The unique pressures in our society... have led brands to understand there’s much more at stake than brand reputation. They have a responsibility in knowing what type of content they fund.”
Brands that made statements about their values and spoke out in support of protests against racial inequality across the globe this summer know they will be held accountable for their actions, including where they spend their media dollars. That’s elevated brand safety and content adjacency to a board-level conversation.
“CEOs and boards at major companies are asking questions, prioritizing this, and that is a sea change,” Loerke said. “That sends a message to the platforms.”
It’s also a good look for the platforms to collaborate with brands on initiatives like this as they face regulatory scrutiny around the world, Loerke added.
The global alliance knows there is much more work to do to hold platforms accountable for harmful content, but the announcement marks “an important first step” in standardizing brand safety controls for the industry, Loerke said.