
Google’s 2024 Ads Safety Report highlights a notable shift in how AI is being used—not just to react to policy breaches, but to pre-empt them. The company’s increasing reliance on Large Language Models (LLMs) has shaped enforcement efforts over the past year, particularly in addressing scams and platform abuse at scale.
According to the report, Google removed 5.1 billion ads and restricted a further 9.1 billion over the course of the year. Crucially, it suspended 39.2 million advertiser accounts, the majority before any ads were served. On the publisher side, action was taken against 1.3 billion pages and more than 220,000 entire sites.
Where previous systems required extensive training data, LLMs now detect patterns of abuse from a smaller set of inputs—an efficiency that Google says has enabled it to respond more rapidly to emerging threats. In practical terms, AI models were responsible for detecting 97% of the publisher pages Google acted upon in 2024.
The same technology is now being used to scrutinise advertiser identity at the point of onboarding—flagging suspicious payment details or business impersonation signals before violations occur. According to the company, verified advertisers now account for over 90% of ads on its platform.

Scam activity—particularly the impersonation of public figures using AI-generated content—grew in sophistication. In response, Google deployed a dedicated team to address the trend, suspending over 700,000 advertiser accounts tied to such activity and citing a 90% drop in user reports of these ads. Across the year, 415 million scam-related ads were blocked or removed, and over five million advertiser accounts were suspended for associated policy violations.
Regional data offers a glimpse into market-specific enforcement:
- India: Over 247 million ads removed, 2.9 million advertiser accounts suspended
- Australia: 205.7 million ads removed, 841,000 advertiser accounts suspended
- Japan: 203.5 million ads removed, 1.4 million advertiser accounts suspended
The broader trend points to increasingly proactive enforcement powered by AI—but the report also acknowledges that prevention is an evolving task. Despite advancements, concerns remain. Investigations by Time and Adalytics have flagged serious lapses: from election-related misinformation in India slipping past filters, to ad revenue being directed to websites hosting child sexual abuse material.
The report serves as a reminder that AI, while enabling faster and broader coverage, is not infallible. Ad safety, particularly at scale, demands constant iteration, wider cross-industry collaboration, and transparent mechanisms for public scrutiny.