Leaked documents from Facebook have highlighted flaws at its core—specifically fundamental mechanics that have allowed hate speech and misinformation to flourish. While many of these issues relate to the Like button and user sharing, a high proportion also stems from a heavy reliance on artificial intelligence (AI) to drive both automated content recommendation and moderation. And while Facebook has increased its use of human moderators alongside AI evaluation to improve its content moderation, the debate continues about whether its team is big enough to eliminate all harmful or misleading posts.

Disregarding brand safety issues, such as ads appearing next to inappropriate content, can be a costly mistake for advertisers: 81% of U.S. consumers find it annoying when a brand appears next to low-quality content, and 62% say they will stop using a brand altogether if its ads are placed adjacent to such content.

So how can brands shelter their ads in safe and positive environments that amplify their message rather than risk their reputation?

Harness AI-driven contextual suitability.

Clearly, ad placement matters to consumers, which underscores the importance of positioning ads appropriately. Content adjacency has long been a preoccupation for brands: by ensuring ads appear next to relevant and suitable topics, they can protect their reputation and steer clear of risky placements.

So far, the effectiveness of widely adopted brand safety solutions has been mixed, particularly keyword blocking. Although initially embraced by brands seeking better control over where ads are seen and the ability to avoid specific topics and terms, it is increasingly being recognized as simply too broad for the diverse and nuanced digital media space.
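To see why keyword blocking is so blunt, consider a minimal sketch of the approach. The blocklist and headlines below are hypothetical, but the failure mode is the real one: a single flagged word blocks pages regardless of context, so brand-safe stories are filtered out alongside genuinely unsafe ones.

```python
# Hypothetical blocklist; real ones run to thousands of terms.
BLOCKLIST = {"shooting", "attack", "crash"}

def is_blocked(headline: str) -> bool:
    """Block any page whose headline contains a blocklisted keyword."""
    words = headline.lower().split()
    return any(word.strip(".,!?") in BLOCKLIST for word in words)

# A genuinely unsafe headline is caught...
print(is_blocked("Shooting reported downtown"))             # True
# ...but so are perfectly brand-safe stories (false positives):
print(is_blocked("Director wraps shooting of new comedy"))  # True
print(is_blocked("Striker leads attack in cup final"))      # True
```

Because the filter sees only words, not meaning, advertisers end up blocking large swaths of premium news and sports inventory along with the content they actually wanted to avoid.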

The rising demand for more tailored safeguards, however, has fueled progress toward the next evolution in advertising security: contextual intelligence analysis that offers a deeper understanding of content suitability. This uses AI algorithms to determine the sentiment, meaning and context of content, thereby measuring its appropriateness in line with specific brand preferences.
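The shape of such a suitability check can be sketched in a few lines. This is an illustration only: a production contextual-intelligence system would use trained NLP models rather than the toy word lists below, and every name here is hypothetical. The point is the combination the article describes, weighing a page's sentiment together with its topical fit against a brand's stated preferences.

```python
# Toy lexicons standing in for a real sentiment model (hypothetical).
NEGATIVE = {"disaster", "scandal", "violence"}
POSITIVE = {"win", "celebrate", "launch"}

def sentiment_score(text: str) -> float:
    """Crude sentiment: (positive hits - negative hits) / word count."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return (pos - neg) / max(len(words), 1)

def suitable(text: str, brand_topics: set[str],
             min_sentiment: float = 0.0) -> bool:
    """A page suits the brand if it is on-topic AND its tone is acceptable."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    on_topic = bool(words & brand_topics)
    return on_topic and sentiment_score(text) >= min_sentiment

print(suitable("Fans celebrate marathon win", {"marathon", "fitness"}))   # True
print(suitable("Marathon marred by violence", {"marathon", "fitness"}))   # False
```

Note that both pages mention the brand's topic; only the second is rejected, for tone. That is the nuance a keyword blocklist cannot express.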

Ensure human moderation.

Despite major advances in AI sophistication, content assessment tools often lack the capabilities to pick up on subtle contextual signals and indicators of sentiment, especially for media such as video, where they struggle to understand nuance and wordplay. To address this, it's important for brands to have local teams of human moderators who understand the specific contextual elements of an ad and the impact it can have on its audience. Moderators bring an additional layer of evaluation to help spot the cultural differences, political ideologies and subject matters AI might miss, especially at the local market level.

Altogether, human moderation remains essential for uncovering variations in meaning, language and sentiment that could impact local suitability.

Pick the right publisher partners.

Transparency is becoming increasingly important in programmatic advertising, as the relationship between the buy and sell sides can make it difficult to determine not just ROI, but where an ad is ending up. When multiple players are bidding on the same inventory, it can be hard to keep track of ad placements or even verify the reliability of the various publishers involved.

That’s why it’s important for brands to be selective with their trading partners. Supply path optimization (SPO) remains a crucial focus for brands aiming to slim down their vendor lists and prioritize partners based on clarity and value, not just results. Brands should base partner selection on the stringency of each platform’s suitability protocols and the transparency it provides into media placement.

For instance, in addition to offering access to premium, trustworthy sites, platforms should be fully compliant with industry initiatives designed to maintain media clarity across the supply chain and enable secure trading by authorized buyers and sellers, such as ads.txt, sellers.json and the recently introduced buyers.json.
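Of these initiatives, ads.txt is the simplest to verify directly: a publisher lists, one per line, the ad systems authorized to sell its inventory (domain, seller account ID, DIRECT or RESELLER relationship, and an optional certification authority ID). The sketch below parses a sample file and checks whether a given exchange and seller ID are authorized; the file contents and seller details are made up for illustration.

```python
def parse_ads_txt(text: str) -> list[tuple[str, str, str]]:
    """Parse ads.txt lines into (ad system domain, seller ID, relationship)."""
    entries = []
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()   # drop comments and blanks
        if not line or "=" in line:            # skip variables like CONTACT=
            continue
        fields = [f.strip().lower() for f in line.split(",")]
        if len(fields) >= 3:
            entries.append((fields[0], fields[1], fields[2]))
    return entries

def is_authorized(ads_txt: str, ad_system: str, seller_id: str) -> bool:
    """True if the (ad system, seller ID) pair appears in the file."""
    return any(e[:2] == (ad_system.lower(), seller_id.lower())
               for e in parse_ads_txt(ads_txt))

# Hypothetical ads.txt for a fictional publisher.
sample = """# ads.txt for example-news.com
exampleexchange.com, 12345, DIRECT, abc123certid
resellerssp.com, 98765, RESELLER
CONTACT=adops@example-news.com
"""
print(is_authorized(sample, "exampleexchange.com", "12345"))  # True
print(is_authorized(sample, "unknown-ssp.com", "12345"))      # False
```

A buyer rejecting bids that fail this check avoids unauthorized resellers of a publisher's inventory; sellers.json and buyers.json extend the same declare-and-verify idea to the sell and buy sides of the chain.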

While algorithms and AI advancements have their uses, insufficiently moderated technologies can spell disaster for brands. Hopefully, brands will learn from Facebook’s misfortune and decrease their reliance on unmonitored algorithms. If all advertisers take steps to improve brand safety and suitability, it will benefit the industry as a whole by encouraging greater transparency and relevance across the board.

(As published on Forbes)