In its audit of the programmatic supply chain, the ANA found that 21 percent of ad impressions landed on made-for-advertising (MFA) sites — content farms designed solely to syphon ad spend — wasting a stunning $13 billion a year. Such sites typically rely on low-paid labour to churn out web pages, but the proliferation of generative AI platforms such as ChatGPT could send production into overdrive.

Hundreds of AI-generated sites have already been detected, and many more are on the way. Without urgent action to clean up the already congested programmatic pipes, both ends of the supply chain will suffer: advertisers will pour even more spend into wasted impressions, and publishers will see their legitimate inventory devalued by a flood of low-quality supply. More broadly, public trust in online content will be further eroded at a time when information integrity is already under threat.

Generative AI compounds the MFA issue by creating media that is not as immediately identifiable as typical farmed content — which is usually riddled with obvious errors — making it more effective at bypassing or even manipulating SEO and content recommendation algorithms. As with the related issue of fake news and disinformation, AI-generated MFA sites are also more likely to find their way into natural user-driven content sharing through social media.

While there has been a lot of press and commentary around the ANA's discovery, as well as an expanded and clarified definition of MFAs to help the industry unite its approach, it's surprising there hasn't been a more urgent response. Yes, there will always be those who exploit grey areas and loopholes in complex systems to scrape a profit, but the sheer scale of this issue demands nothing short of an emergency response — anything less reflects poorly on programmatic's reputation.

It's hard to imagine any other supply chain discovering more than a fifth of what passes through it is effectively stolen and not downing tools until the perpetrators are stopped. Advertising impressions may not be as tangible as a truck full of TVs or a cargo ship carrying precious metals, but just because an asset is digital doesn't mean we should ignore when it goes missing.

If we do not get a handle on this issue now, we are sleepwalking into disaster when the flood of AI-generated MFAs hits our industry.

Transparency, curation, and AIs of our own can cut MFAs from the supply chain.

Cracking down on MFAs and fortifying the digital advertising supply chain against fraudulent AI-generated content will require a cross-industry commitment. At the top level, we must double down on efforts to bring greater transparency to opaque programmatic systems and encourage the trend towards closer relationships between advertisers and publishers.

Direct deals and private marketplaces provide a level of curation that limits the number of bad actors that can slip through the cracks, while also aligning with the push for supply-path optimisation to limit wasted spend and reduce digital advertising's carbon footprint. The supply side and its technology partners must advocate the benefits of this approach to brands and agencies, who — despite the availability of high-quality supply — still often default to open programmatic over a curated approach.

It's important to note that any crackdown on MFAs must not take a heavy-handed approach, as this can harm minority-focused publications that are often disintermediated from industry discussions and decision-making. For example, early attempts to stifle MFAs by blocking traffic buying have disproportionately affected perfectly legitimate Black-owned media.

AI-generated MFA sites are also a brand safety issue, but crude brand safety tools have been a sore point for publishers for some time, as a broad blocklist approach can see legitimate content demonetised. This situation is rapidly improving, however, as there have been significant advancements in contextually and semantically aware AI-powered brand safety tools that can gauge whether content is appropriate.

This new generation of AI-powered brand safety technology can be leveraged to prevent advertisers from unwittingly spending on MFA inventory, and it will have to be constantly trained to filter out AI content. Yes, our best defence against AI will be more AI, in an arms race that will determine the future of digital advertising. As AI-generated MFA content spreads, we will need other AIs that can detect such content and remove it from the supply chain.

Ad tech innovators that stay one step ahead of fraudsters by deploying AI will be vital in preparing for the evolution of MFAs. We don't know how this game of cat and mouse will play out, but there is even hope that AI detection models could finally end MFAs. After all, though unscrupulous, MFA operations are businesses like any other, and if we can cut off the source of their profits, they will no longer be able to operate.