The advertising world is reeling. Just last week, Elon Musk's X, in a move many are calling a death knell for responsible advertising, sued the World Federation of Advertisers (WFA), leading to the dissolution of the Global Alliance for Responsible Media (GARM). This industry watchdog, formed in the wake of the horrific Christchurch massacre, was tasked with the crucial mission of stemming the flow of ad dollars to platforms peddling harmful content.
But while advertisers abandoned X (formerly Twitter) over its questionable safety measures, a chilling question emerges: is any platform truly safe? A report by Adalytics has exposed the unsettling truth: even with brand safety partners like IAS and DoubleVerify, ads for Fortune 500 companies, including Amazon, HP, Disney, and Apple, have been found nestled alongside content promoting racism, violence, and explicit pornography.
In one example, an Amazon back-to-school campaign promoting notebooks and backpacks ran on a page titled 'L**** [Redacted] All Black People'. Many of these brands had invested in pre-bid and post-bid brand safety technologies, even blocking all user-generated content—yet the ads slipped through.
This raises the question: how can these sophisticated, AI-powered systems, entrusted with millions of dollars and the reputations of global brands, fail so spectacularly? And is the promise of automated brand safety nothing more than a carefully constructed illusion, a myth propagated by the tech industry?
Campaign turned to industry experts in APAC for answers.
Richard Brosgill
CEO, APAC, Assembly
The promise of innovation through technology, and AI in particular, is often touted as the be-all and end-all solution to many of today's challenges, but that places too much trust in a technology still in its early stages of development. The media industry is one of constant change; quick and sudden shifts are the norm. While brand safety tools are undoubtedly important, we cannot expect to simply set them up and walk away. They offer first, second, and even third lines of defence, but human intervention is needed to keep an active watch and ensure other controls are in play.
I don't believe there is a single solution that guarantees 100% brand safety or suitability, especially in Asia, where contextual nuances differ by market and are subjective to each brand's unique needs and risk tolerance. Additional controls, with talent feeding into them, remain essential to fill in the grey areas and navigate the contextual nuances that automated systems may miss.
There is a lot we can do to take a more proactive approach and ensure tailored solutions are created to match the needs of brands. From combining different verification tools and building custom blacklists and whitelists to fostering direct publisher relationships and a network of trusted publishers, the key is to marry technology and human expertise to build brand safety strategies that ensure suitable ad placements for our clients.
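As a rough illustration of this layered approach, the minimal Python sketch below combines a custom blocklist, a hand-curated trusted-publisher list, and a third-party verification signal, with grey areas routed to human review. The domain lists and the `vendor_score()` call are hypothetical placeholders, not any vendor's actual API.

```python
# Hypothetical domain lists; a real setup would be far larger and curated per brand.
TRUSTED_PUBLISHERS = {"example-news.com", "example-sport.com"}
BLOCKED_DOMAINS = {"dodgy-forum.example"}

def vendor_score(page_url: str) -> float:
    """Stand-in for a third-party verification score in [0, 1]; not a real API."""
    return 0.5

def placement_decision(page_url: str, domain: str, risk_tolerance: float = 0.7) -> str:
    # First line of defence: the custom blocklist always wins.
    if domain in BLOCKED_DOMAINS:
        return "block"
    # Directly vetted publisher relationships bypass further automated checks.
    if domain in TRUSTED_PUBLISHERS:
        return "serve"
    # Grey area: combine the vendor signal with the brand's risk tolerance,
    # and route borderline cases to a human instead of deciding automatically.
    return "serve" if vendor_score(page_url) >= risk_tolerance else "human_review"

print(placement_decision("https://unknown.example/article", "unknown.example"))  # human_review
```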
For now, a '+ talent' approach is still needed. AI can enhance our capabilities, but it cannot entirely replace human ingenuity and expertise. Talent will continue to play a key role in defining, shaping, and refining our brand safety strategies, which is why talent development is, and always will be, a top priority.
Arielle Garcia
Director of intelligence, Check My Ads
‘Complete brand safety’ goes beyond merely avoiding objectionable content; it involves knowing exactly where ads appear and what they fund.
Current standards, often manipulated by vendors like IAS and DV, focus on superficial safety measures rather than offering true transparency. These vendors promote AI-driven filters and black-box solutions, which may not give marketers the detailed page-level data they need to ensure ads align with their brand values.
Achieving genuine brand safety requires active media oversight, auditing campaigns, and selecting vendors that offer transparency at the impression level. While transparency remains challenging in walled garden environments, recent developments, like Google's decision to provide YouTube placement reporting for PMAX campaigns, show that demanding transparency can lead to change.
Ultimately, investing in quality media and proper stewardship protects both brands' and marketers' investments, potentially eliminating the need for expensive and ineffective brand safety technologies.
Jay Friedman
CEO, Goodway Group
Complete brand safety is not an unrealistic goal, but it may not be practical. As with all business decisions, risk and reward must be weighed, and decisions must follow. To achieve the right balance, brands should consider a few things. First, not all brand-unsafe content is created equal.
Most brands will not classify the worst content from the Adalytics report as being equally unsafe as a legitimate news report about a military conflict. Both may make a brand's unsafe list, but most brands will not deem them equally dangerous. Second, traditional media should be used as a guide.
Brands have long strived to place ads within major news programs or reality TV, which may contain some brand-unsafe components. If a brand is comfortable in these environments, segmenting truly unsafe content from content some may not agree with is wise.
On automating brand safety: real-time technical automation of everything that even the most conservative brands view as brand-unsafe is unrealistic, AI or no AI. However, that is the question the Adalytics report raises: the words and concepts on the pages it documents do not require AI to be identified as exceptionally unsafe. If brands are to pay for a product that promises any form of brand safety, it must catch the most egregious content 100% of the time.
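To illustrate the point that the most egregious content needs no AI to detect, here is a minimal sketch using plain substring matching against a hard-block term list. The terms and page titles are hypothetical placeholders, not taken from the Adalytics report.

```python
# Hypothetical hard-block terms; a real list would be maintained and reviewed by humans.
EGREGIOUS_TERMS = {"kill all", "exterminate"}

def is_egregious(page_title: str, page_text: str) -> bool:
    """Plain substring matching: deliberately simple and zero-tolerance."""
    haystack = f"{page_title} {page_text}".lower()
    return any(term in haystack for term in EGREGIOUS_TERMS)

# A page whose title openly advocates violence should be caught before any
# AI-based suitability scoring is even consulted.
assert is_egregious("Kill All [Redacted]", "") is True
assert is_egregious("Back-to-school deals on notebooks", "") is False
```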
Shamsul Chowdhury
Executive VP of paid social, Jellyfish
Brands have always wanted to ensure they appear in spaces appropriate for their image, which is why brand safety has been a hot topic since social media platforms took off. However, given today's influx of content, the notion that brands can operate in an entirely brand-safe environment is unrealistic. Even with the strictest settings on social platforms, there is simply too much content for AI, algorithms, or humans to catch everything potentially deemed unsafe for brands.
The concept of automated brand safety sounds excellent, but this utopian outlook fails to recognise that no matter how smart AI gets, it will always be trained on what already exists, playing a perpetual game of catch-up. Humans will continue to find loopholes to circumvent brand safety measures, and no amount of AI, no matter how sophisticated, will ever entirely capture the nuances of human speech and behaviour.
Fern Potter
Senior vice president of strategy and partnerships, Multilocal
Brand safety should be a non-negotiable standard in the advertising industry, ensuring that ads are never placed alongside content that could damage a brand's image. Historically, brand safety was easier to manage, but today’s vast digital landscape, with its explosion of websites and user-generated content, makes this more complex.
Programmatic advertising introduced tools like post-bid blocking and live monitoring to help maintain brand safety, but these technologies are not foolproof. A combination of human expertise and advanced technology is necessary to vet publisher domains, monitor campaigns, and optimise performance, all while ensuring alignment with brand standards. As AI advances, it will play a more significant role in enforcing brand safety, though the rise of AI-generated content and automated websites poses new challenges.
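As a rough sketch of what post-bid blocking and live monitoring mean in practice: the auction is already won and the impression paid for, so the only remaining decision is whether the creative renders. The `classify_page()` function and its labels below are assumptions for illustration, not any verification vendor's real interface.

```python
def classify_page(page_url: str) -> str:
    """Hypothetical real-time page classification: 'safe', 'unsafe', or 'unknown'."""
    return "unknown"

def log_decision(page_url: str, label: str) -> None:
    # Live monitoring: every decision is recorded so campaigns can be audited.
    print(f"placement={page_url} label={label}")

def on_impression_won(page_url: str) -> str:
    # Post-bid: the media cost is already incurred; blocking only stops the render.
    label = classify_page(page_url)
    log_decision(page_url, label)
    if label == "unsafe":
        return "render_blank"  # block the creative, absorbing the media cost
    return "render_ad"
```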
The industry must depend on trade bodies to set and update standards to keep pace with these developments. In addition, as publishers rely more on advertising revenue, they may increasingly align their content with brand values, potentially influencing editorial freedom and contributing to the growth of independent platforms like Substack.
Anders Lithner
CEO, Brand Metrics
Two kinds of content get labelled harmful. One can indeed be disastrous to a brand: hate speech, racism, and explicit pornographic images. The other, professional journalism about essential topics, is in most cases not harmful, even when it covers war, pandemics, and crime.
When an ad appears next to content the audience thinks is important, the ad is, by association, also perceived as important. Nobody thinks an insurance company's ad inside a serious article about terrorism means the brand is in favour of terrorism. And the affluent audience that consumes professional content is more likely to buy than those who lurk in dodgy web forums and feeds.
The problem with AI and keyword blocking is that they can quickly identify 'scary' words, needlessly blocking professional content, yet they are far less adept at identifying the polarised, sarcastic, and unstructured content of user-generated feeds. If keyword blocking is our hope, then complete brand safety is an industry tech illusion.
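A minimal sketch of this failure mode, with purely illustrative word lists: a naive keyword filter blocks a serious news article because a 'scary' word appears, while a hostile but euphemistic post passes untouched.

```python
# Illustrative word list; real blocklists run to thousands of terms.
SCARY_WORDS = {"terrorism", "war", "pandemic"}

def keyword_block(text: str) -> bool:
    # Crude substring matching: no context, no tone, no intent.
    return any(word in text.lower() for word in SCARY_WORDS)

news_article = "In-depth report: how insurers assess risk after acts of terrorism."
forum_post = "lol sure, those people are 'totally welcome' here ;)"  # hostile subtext, no flagged words

assert keyword_block(news_article) is True    # false positive: serious journalism blocked
assert keyword_block(forum_post) is False     # false negative: sarcasm sails through
```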