In recent years, generative AI models have surged in popularity, producing vast amounts of text, images, and video. While the technology has its merits, advertisers are increasingly concerned that their ad dollars may inadvertently fund low-quality content. A recent report by NewsGuard shed light on the issue, revealing a rapid proliferation of questionable websites that publish and monetize AI-generated content.
The researchers at NewsGuard analyzed hundreds of programmatic ads served on AI-generated websites. They found that these sites churn out hundreds of articles a day, ranging from plagiarized versions of real news stories to clickbait headlines promoting unproven or potentially harmful remedies. The findings indicate that advertisers may unknowingly fund, and associate their brands with, subpar and misleading content.
During their investigation, NewsGuard identified nearly 400 ads for 141 major brands across more than 50 websites in Germany, France, Italy, and the U.S. The advertisers, however, were likely unaware that their ads were running on AI-generated websites, as they had entrusted the placement of their ads to third-party platforms, such as Google Ads. This lack of transparency raises concerns about brand safety and the potential damage that can be inflicted on a brand’s reputation when associated with low-quality or misleading content.
The monetization of AI-generated websites is largely driven by big ad-tech companies with a strong incentive to maximize profit. These companies often fail to verify that the content published on such sites receives human oversight or accuracy checks. As a result, unreliable AI-generated news sites are being created and monetized without adequate quality control.
Jack Brewster, NewsGuard Enterprise Editor, points to the complicity of ad-tech companies in this issue, stating, “The creation of unreliable AI-generated news sites is being incentivized by big ad-tech companies, who are monetizing these sites en masse.” He adds that these companies do not appear to check for human oversight or accuracy, contributing to the proliferation of low-quality AI-generated websites.
When brands unknowingly have their ads served on AI-generated websites, they risk associating their brand with content that is of low quality, misleading, or potentially harmful. This poses significant brand safety concerns and can lead to a loss of consumer trust and loyalty. Advertisers invest a great deal of time, effort, and resources into carefully crafting their brand image, and it is crucial for them to protect their brand reputation by ensuring their ads are placed in safe and trustworthy environments.
According to the NewsGuard research, major companies across a variety of sectors, including banking, streaming services, technology, automotive, sportswear, and pet supplies, had their ads served on AI-generated websites. More than 90% of the identified ads were delivered by Google Ads, underscoring the need for stronger safeguards to protect brand safety and prevent ad placements on subpar AI-generated websites.
As the risks posed by the combination of programmatic advertising and generative AI become more apparent, advertisers are looking for strategies to avoid or mitigate them. Seizing the opportunity, companies such as DoubleVerify are offering brand safety systems that specifically address the problems created by AI-generated content.
DoubleVerify, a leading brand safety and ad verification company, reported 56% growth in its brand safety business in the first quarter of 2023 compared to the previous year, an increase it attributes to the rise of AI content farms and the brand safety concerns they amplify. By investing in its own AI tools, DoubleVerify aims to detect and prevent the placement of ads alongside low-quality, misleading, or harmful content, focusing on technology that can detect violations across multiple languages and content formats, including video.
While the debate surrounding AI-generated content continues, it is essential to recognize that the quality of the content itself is more critical than whether it was created by generative AI. Mark Zagorski, CEO of DoubleVerify, emphasizes the need to prioritize content quality, stating, “The interesting thing is whether or not this is created by generative AI is less of a factor than what the content is itself.” He further emphasizes the importance of using precise and targeted measures to ensure the placement of ads in suitable environments.
The challenges posed by generative AI extend beyond brand safety concerns. Evelyn Mitchell-Wolf, a senior analyst at eMarketer, highlights the additional complexities that AI introduces to the programmatic ad ecosystem. Mitchell-Wolf suggests that traditional publishers face an “existential crisis” as they grapple with the decision to utilize generative AI tools, invest in human-created content, or provide AI models with access to quality content for training purposes. The increasing surface area for low-quality AI-generated content creates a snowball effect, making it even more challenging to control the quality and suitability of the content being served alongside programmatic ads.
In response to NewsGuard’s report, Google reviewed the AI-generated websites mentioned and took action to remove ads from many of them due to policy violations. While Google acknowledges that AI-generated content does not necessarily violate their policies, they have strict guidelines governing the type of content that can be monetized on their platform. Google’s focus is on content quality rather than how it was created. They block or remove ads if violations are detected, such as harmful or spammy content, as well as content that is solely copied from other sites.
The issue of programmatic ad placement goes beyond the risks associated with generative AI. A recent study by the Association of National Advertisers (ANA) revealed that “made for advertising” (MFA) websites accounted for a significant portion of impressions and ad spend in the programmatic media supply chain. This finding underscores the lack of control that advertisers often have over where their ads are placed and the urgent need for increased programmatic transparency.
Keri Bruce, an attorney at Reed Smith involved in the ANA’s report, points out that AI makes it easier to create websites at a faster rate, leading to brand suitability challenges and enabling “bad actors” to profit even more. She suggests that advertisers focus on inclusion lists rather than solely relying on exclusion lists to regain control over their ad placements. With thousands of websites available for ad placement, it becomes essential to strike a balance between reaching a wide audience and ensuring brand safety.
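The difference between the two approaches Bruce describes can be made concrete with a small sketch. The snippet below is purely illustrative (the domain names and function are hypothetical, not part of any real ad platform's API); it shows why an inclusion list fails closed against newly spun-up AI content farms, while an exclusion list fails open.

```python
# Hypothetical sketch of inclusion- vs. exclusion-list placement control.
# All domain names and the function below are illustrative assumptions,
# not a real ad platform's API.

ALLOWLIST = {"example-news.com", "trusted-review.org"}  # vetted publishers
BLOCKLIST = {"spammy-ai-farm.net"}                      # known bad actors

def is_placement_allowed(domain: str, use_inclusion: bool = True) -> bool:
    """Return True if an ad may be served on `domain`.

    With an inclusion (allow) list, only vetted domains qualify, so an
    unknown domain -- e.g. a freshly created AI content farm -- is
    rejected by default. With an exclusion (block) list alone, every
    unknown domain is allowed until someone flags it.
    """
    if use_inclusion:
        return domain in ALLOWLIST
    return domain not in BLOCKLIST

# A brand-new AI-generated site is not yet on any list:
new_site = "fresh-ai-content-farm.com"
print(is_placement_allowed(new_site, use_inclusion=True))   # blocked by default
print(is_placement_allowed(new_site, use_inclusion=False))  # slips through
```

The trade-off Bruce notes is visible here: the allowlist guarantees brand safety at the cost of reach, since any legitimate publisher not yet vetted is also excluded.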
As generative AI continues to reshape the advertising landscape, advertisers must remain vigilant and proactive in protecting their brands from the risks associated with programmatic ads. The rapid growth of AI-generated content underscores the need for enhanced safeguards and brand safety measures. Ad-tech companies, advertisers, and industry stakeholders must work together to establish transparent practices, enforce strict content guidelines, and develop advanced technologies that can accurately identify and prevent ad placements on low-quality and misleading AI-generated websites.
By prioritizing content quality, investing in brand safety technologies, and promoting programmatic transparency, advertisers can navigate the challenges posed by generative AI and ensure the integrity of their brand messaging and reputation. As technology evolves, it is crucial to adapt and stay ahead of the curve, leveraging the benefits of AI while mitigating its potential risks. With a comprehensive understanding of the brand risks associated with programmatic ads in the age of generative AI, advertisers can make informed decisions and safeguard their brands in this ever-changing landscape.
First reported by Digiday.