A study commissioned by Raptive (which handles advertising sales for sites such as Half Baked Harvest and Stereogum) shows that people’s trust in content falls by nearly 50% when they merely suspect an article is AI-generated.
That skepticism also hits brands directly. The survey found a 14% drop in both purchase consideration and willingness to pay a premium for products advertised alongside content perceived as AI-made.
“This goes back to advertisers working with trusted partners who know their policies and are doing the right thing to make sure your ads appear in the right place,” Bannister said.
The study surveyed 3,000 US adults to understand how people respond to content depending on whether they believe it was created by a human or by AI. Participants were shown five articles from the travel, food, finance and parenting categories, with titles such as “Traveling across the UK,” paired with related brand ads. Half of the articles were AI-generated.
After reading the content, participants were asked to rate both the articles and the ads they saw on several attributes, such as emotional connection and reliability.
According to Anna Blender, who leads data strategy and insights, the most surprising takeaway from the survey is that when people merely think something is AI-generated, whether or not it actually is, they rate that content much worse on metrics like trust and reliability.
In one test, 300 people were told the content was AI-made, while another 300 saw the same article but were told it was written by a human. Those who believed the piece was AI-generated were 14% less likely to consider purchasing the products featured in adjacent ads.
As AI tools become ubiquitous, so does AI spam content across the internet. A flood of low-quality, dubious AI-generated images, articles and videos has given rise to the term “AI slop,” exemplified by a viral video of a cat that “saves” a baby from death. According to The New York Times, the phrase emerged in places such as 4chan, Hacker News and YouTube comments, where anonymous users often show off their niche knowledge within their groups.
That skepticism has evolved into what is now described as “AI stink”: a rising sense of mistrust that sets in even when content only feels AI-generated.
Still, more media executives are betting on AI as the basis of their strategy. In May, Business Insider laid off 21% of its staff while doubling down on AI, with CEO Barbara Peng saying “there is a great opportunity for businesses that will first use AI.” Local outlets, including MetroWest Daily News, Milford Daily News and Wicked Local, have deployed AI to generate articles for their websites. Meanwhile, NewsGuard has identified more than 1,200 AI-generated news and information sites operating with little or no human oversight.
As more publishers turn to AI for faster output, even well-crafted AI content can carry unintended consequences and reinforce the growing sense of AI stink, Bannister said. The study, he added, underscores the importance of keeping humans involved in creating AI content, and of publishers clarifying their AI policies and explicitly labeling whether AI was used to create content.
AI guilt by association
Raptive’s research shows that the trust damage doesn’t stop with the content itself; it spills over into ads placed alongside AI content. Respondents who saw ads adjacent to content perceived as AI-generated rated those ads 17% less premium, 19% less authentic, 16% less relevant and 11% less reliable.
Even without visible AI labels, the tone and lack of human touch are enough to erode brand perception; the research found this drags down engagement, conversion rates and long-term brand equity.
“If you buy an ad at a $5 CPM, for example, and this ad performs 15% worse than other ads, you’re taking a loss,” Bannister said. “It’s real money, and your media investment is far less efficient.”
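Bannister’s point can be made concrete with a back-of-the-envelope calculation: if impressions bought at a given CPM perform 15% worse, the advertiser is effectively paying more per unit of ad impact. This is an illustrative sketch, not part of the Raptive study; the function name and the assumption that performance scales linearly with cost are mine.

```python
def effective_cpm(paid_cpm: float, performance_drop: float) -> float:
    """Cost per thousand *effective* impressions, assuming ad impact
    scales linearly: a 15% performance drop means each dollar buys
    only 85% of the usual impact (performance_drop = 0.15)."""
    if not 0 <= performance_drop < 1:
        raise ValueError("performance_drop must be in [0, 1)")
    return paid_cpm / (1.0 - performance_drop)

# A $5 CPM placement performing 15% worse costs roughly $5.88
# per thousand impressions' worth of normal ad impact.
print(round(effective_cpm(5.0, 0.15), 2))
```

Under these assumptions, the 15% underperformance Bannister describes inflates a $5 CPM to an effective cost of about $5.88, which is the “far less efficient” media investment he refers to.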