What will happen when AI-generated media becomes ubiquitous in our lives? How does this relate to what we have experienced so far, and what changes will it bring?

This is the first part of a two-part series examining how people and communities will be affected by the expansion of AI-generated content. I’ve talked elsewhere, in some detail, about the environmental, economic, and labor issues involved, as well as discrimination and social bias. But today I want to dig deeper and focus on the psychological and social impact of the AI-generated media and content we consume, particularly its relationship to critical thinking, learning, and how we conceptualize knowledge.
Hoaxes have been perpetrated with photographs essentially since photography was invented. The moment we had a media form that was believed to show us the true, unmediated reality of a phenomenon or event, people began figuring out how to manipulate that media form, to great artistic and philosophical effect (as well as for humorous or simply deceptive purposes). Nevertheless, we have built up a somewhat unwarranted trust in photographs, and we have learned to relate to them with a balance of trust and skepticism.
When I was growing up, the Internet was not yet widely available to the general public, and very few families had access to it; but by the time I was a teenager everything had changed, and everyone I knew was spending time on AOL Instant Messenger. Around the time I finished graduate school, the iPhone was released and the smartphone era began. I say all this to emphasize that the creation and consumption of culture has changed beyond recognition, with incredible speed, in just a few decades.
I believe that with the advent of generative AI we are now entering a whole new era, especially in the media and cultural content we consume and create. This is similar to when Photoshop became widely available and people began to realize that photographs were sometimes retouched, and to question whether they could trust how images looked. (Readers may find the ongoing conversation around “what is photography” an interesting extension of this issue.) But still, Photoshop was expensive, and using it effectively required a certain level of skill, so most of the photographs we encountered were relatively true to life. And I think people generally expected images in advertisements and movies not to be “real.” Our expectations and intuitions had to adapt to the changes in technology, and more or less they did.
Now, AI content generators have democratized the ability to artificially create or modify any type of content, including images. Unfortunately, it is very difficult to estimate how much of the content online is likely to be AI-generated. If you Google this question, you’ll find references to a Europol article claiming the figure will be 90% by 2026; but read it, and you’ll find the research paper says nothing of the kind. You may also find a paper by some AWS researchers cited as saying the number is 57%, but that too is a misreading (they are talking about machine-translated text content, not about all text, let alone images or video). To my knowledge, there is no reliable, scientifically grounded research showing how much of the content we consume is likely to be AI-generated, and even if it existed, it would be out of date the moment it was published.
But if you think about it, this makes perfect sense. A big part of the reason AI-generated content keeps proliferating is that it is now harder than at any point in human history to tell whether what you are looking at was actually created by a human being, and whether what it depicts reflects reality. How can you count something, or even estimate its amount, when you cannot reliably identify it in the first place?
We’ve all encountered content of dubious origin. We see images that land in the uncanny valley, or read product reviews on retail sites that sound so unnaturally positive and generic that we strongly suspect they must have been created with generative AI and bots. Ladies, have you tried to find haircut inspiration photos online lately? In my own personal experience, over 50% of the pictures on Pinterest and similar sites are clearly AI-generated, with telltale signs: textureless skin, rubbery features, straps and necklaces that disappear into nowhere, images that conspicuously avoid showing hands, and so on. These are easy to dismiss, but when they are everywhere, you start to wonder whether you’re looking at a heavily filtered real image or something wholly AI-created. It’s my job to understand these things, and I’m often not sure myself. I’ve heard that dating apps are now so full of fraudulent bots built on generative AI that a name has arisen for a way to check: the “Potato Test.” If you ask a bot to say “potato,” it will ignore the request, but a real human will probably just do it. Small, everyday corners of our lives are being invaded by AI content without our consent or approval.
What’s the point of dumping AI slop into all these online spaces? In the best-case scenario, the goal is simply to rack up enough ad impressions to earn a few cents from advertisers, using plausible nonsense text or images to get people to click through to the sites where the ads run. Fake reviews and product images are generated by the truckload so that dropshippers and vendors of cheap junk can fool customers into buying something slightly cheaper than the competition, while expecting a genuine product. Perhaps the product is so incredibly cheap that disappointed buyers will simply accept the loss and spare themselves the trouble of trying to get their money back.
Even worse, bots that use LLMs to generate text and images can be used to lure people into scams. Because the only real resource required is compute, scaling such a scam up costs pennies; if it manages to steal even one person’s money a day, it is well worth it. AI-generated content is used for criminal exploitation of many kinds, including pig-butchering scams, AI-generated CSAM, non-consensual intimate images, and extortion schemes.
AI-generated images, videos, and text also serve political motives. In this US election year, groups around the world with different angles and objectives have been producing AI-generated images and videos to support their viewpoints, and spreading propagandistic messages via generative AI bots on social media, especially on X, formerly Twitter, which has largely abandoned content moderation meant to prevent abuse, harassment, and bigotry. The hope of those distributing this material is that uninformed Internet users will absorb its message through continual, repeated exposure, and that for every item they recognize as artificial, an unknown number will be accepted as legitimate. Moreover, this material creates an information ecosystem in which truth becomes impossible to define or prove, neutralizing good actors and their attempts to cut through the noise.
A small portion of the AI-generated content online consists of genuine attempts to create appealing images just for fun, or relatively innocuous boilerplate text generated to fill out a company’s website. But as we all know, the Internet is rife with scams and get-rich-quick schemers, and advances in generative AI have ushered in a whole new era for these operations. (And these applications have hugely negative impacts on real creators, on energy use and the environment, and on other fronts.)
I realize I’m painting a pretty grim picture of our online ecosystem. Unfortunately, I think it’s accurate, and only likely to get worse. I’m not arguing that there are no good uses for generative AI, but I am increasingly convinced that its damaging, harmful uses will have a greater and more direct effect on our society than its beneficial ones.
Here is how I think about it. We have reached a point where we cannot be sure whether we can trust what we see or read, and we routinely cannot know whether the entities we encounter online are human or AI. How does this affect the way we respond to what we encounter? It would be foolish to expect that our ways of thinking will not change as a result of these experiences, and I worry very much that the change underway is not for the better.
However, that ambiguity is itself a big part of the challenge. It’s not that we know we’re consuming untrustworthy information; it’s that we inherently cannot know. We can never be sure. Critical thinking and critical media consumption habits help, but the expansion of AI-generated content may outpace our critical capabilities, at least some of the time. This seems to me to have significant implications for our concepts of trust and confidence in information.
In my next article, I’ll go into more detail about how this may affect our thoughts and ideas about the world around us, and consider what our communities might be able to do about it.