Brand safety in digital advertising has become a pressing concern, as a new study highlights the need for more effective tools to keep ads off pages with harmful content. The report, published by ad quality firm Adalytics, reveals that ads for major brands including Meta, Microsoft, Procter & Gamble, and Amazon have been found on webpages featuring racial slurs, explicit sexual content, and violent imagery.
Adalytics analyzed ad placements and their source code across the open web, uncovering disturbing examples of major brands’ ads appearing next to inappropriate content, including pornographic images, racial slurs, and explicit sexual references. Notably, many of these ads were placed by ad agency holding companies such as Publicis, IPG, WPP, and Omnicom.
One striking example highlighted in the report is an Amazon ad for back-to-school products on a page with racially insensitive content. Similarly, an HP ad for a desktop computer appeared on a wiki page about child pornography, while a video ad for Apple’s Safari browser ran on a page about anal sex. These instances occurred on Fandom.com, an entertainment-focused wiki platform, as well as on other domains.
In response to the report’s findings, a Fandom spokesperson emphasized the platform’s commitment to brand safety and said the company has implemented additional safeguards to prevent ads from running on low-traffic wikis with inappropriate content. The report, however, points to a broader, industry-wide problem that demands vigilance and proactive measures from publishers and ad agencies alike.
Despite brands investing in pre-bid and post-bid brand safety tech, keyword blocking, and other safeguards, the study found that ads were still appearing in unsuitable contexts. This raises questions about the effectiveness of current brand safety measures and the role of ad verification vendors like Integral Ad Science and DoubleVerify in ensuring ads are placed in suitable environments.
The use of AI in ad verification and brand safety has been touted as a solution to prevent ads from appearing alongside risky content. However, the study’s findings suggest that automated systems may have inaccurately classified webpages, allowing ads to run in inappropriate contexts. Adalytics recommends increased transparency around the use of AI in the adtech ecosystem to enable independent evaluations of brand safety solutions.
The response from ad verification providers has been mixed: DoubleVerify disputed the report’s claims, asserting that the results were manufactured. The debate highlights the complexity of ensuring brand safety in the digital advertising landscape and the need for collaboration among stakeholders to address the issue effectively.
The study’s findings underscore how difficult it remains for brands to maintain safety in the digital advertising space. As the industry grapples with evolving technologies and growing complexity, advertisers, publishers, and ad verification vendors will need to work together to uphold industry standards and keep brands from appearing alongside harmful content.