Adam Mosseri, the head of Instagram, recently shared his thoughts on Threads about the growing prevalence of AI-generated content on social media. As AI advances and its output becomes more convincing, Mosseri believes social media platforms need to do more to label and identify content created by AI.
“Generative AI is clearly producing content that is difficult to discern from real-world records,” Mosseri said in a series of posts on Threads. With users now having such easy access to AI-generated content, he believes platforms need to do more to curb the spread of misinformation. “Our role as an internet platform is to label AI-generated content as best we can,” Mosseri said.
The Instagram chief went on to admit that “some content” will inevitably slip past those labels. Social media platforms like Twitter, Instagram and Threads therefore “need to also provide context about who’s sharing” so users can better decide whether to trust the content, he said.
“It will become increasingly important for viewers and readers to have insight when consuming content that purports to describe or record reality,” Mosseri said. “My advice is to *always* consider who you’re talking to.”
Mosseri also cautions that chatbots can lie or feed users false information. Placing unquestioning trust in AI-powered search results carries the same risk. It is therefore always worth checking the source and confirming it comes from a reliable account before citing it in any way.
At the moment, Threads, Instagram and Facebook don’t offer the kind of context Mosseri describes; none of Meta’s platforms provide contextual tools for vetting AI-generated content against misinformation. That could change in the new year, though: Meta recently released a new watermarking tool for AI-generated videos.