Social media may be less social because of its focus on AI
Last week, BuzzFeed CEO Jonah Peretti called out the parent companies of the social media platforms TikTok and Facebook in an open letter to their respective CEOs, Zhang Yiming and Mark Zuckerberg, writing that the companies “don’t care much about the content” and are instead “more interested in technology and AI.”
Peretti further suggested that the social networks prioritize content that sparks anger and other negative emotions in order to increase user engagement. His comments also served to announce that BuzzFeed will enter the social media space with a platform designed to spread joy and encourage a return to playful, creative expression. But the BuzzFeed chief’s letter also put a spotlight on how artificial intelligence and other technologies have changed social media.

And perhaps made it worse.
“Social media platforms increasingly utilize advanced machine learning techniques, such as deep neural networks and reinforcement learning, to curate feeds, moderate content and drive user engagement,” explained Dr. Pablo Rivas, assistant professor of computer science at Baylor University.
Rivas said these systems excel at predicting individual preferences and surfacing the most clickable material, which can inadvertently reward sensational or polarizing content. However, it doesn’t have to be that way.
“If used thoughtfully, AI can act as a powerful force for positive online interaction by highlighting factual, meaningful contributions,” Rivas added. “This requires not only robust algorithm design, but also a commitment to ethical principles: transparency, fairness and respect for user autonomy.”
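To make that tradeoff concrete, here is a minimal sketch of the two ranking philosophies Rivas contrasts. Everything in it (the Post fields, the classifier scores and the weights) is a hypothetical illustration, not any platform’s actual system.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_clicks: float  # engagement model output, 0..1
    outrage_score: float     # hypothetical classifier: how inflammatory, 0..1
    quality_score: float     # hypothetical classifier: how factual/constructive, 0..1

def engagement_only_rank(posts):
    # Pure engagement optimization: whatever draws clicks rises to the top,
    # which can inadvertently reward sensational or polarizing posts.
    return sorted(posts, key=lambda p: p.predicted_clicks, reverse=True)

def quality_adjusted_rank(posts, outrage_penalty=0.5, quality_bonus=0.5):
    # Same engagement signal, but the score penalizes outrage bait and
    # rewards factual, meaningful contributions, as Rivas suggests.
    def score(p):
        return (p.predicted_clicks
                - outrage_penalty * p.outrage_score
                + quality_bonus * p.quality_score)
    return sorted(posts, key=score, reverse=True)
```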
AI removes the social elements
There has also been no shortage of stories about how AI can be used to create misleading images and videos. Arguably, AI has already made social media much less social.
“AI plays a big role in how we see the world, for better or worse,” warned Dr. Arthur O’Connor, the academic director of data science at the New York University School of Professional Studies. “Machine learning algorithms allow platforms to pander to users’ prejudices, impulses and fears.”
With more Americans relying on social media for news and information, the ability of AI to manipulate what users see and read is a major concern.
“For all the imagined dystopias of Skynet or The Matrix, perhaps the biggest risk of AI is not where the technology is headed, but how it changes us,” added O’Connor. “When algorithms can instantly generate content optimized for rapid consumption, a feedback loop is created: shrinking attention spans drive demand for simpler content, which shrinks attention spans further.”
O’Connor said psychologists call this “cognitive offloading,” the tendency to rely on external tools rather than developing internal capabilities.
“Instead of working through a problem, you simply prompt an AI for immediate answers and solutions,” he continued. “In doing so, we risk atrophying our critical thinking and reasoning, much as calculators have done to arithmetic skills, smartphones to remembering phone numbers, and GPS navigation to our sense of direction.”
Additionally, chatbots and virtual assistants offer a kind of pseudo-sociality, simulating social interaction without the challenges and opportunities for growth of real human dialogue, which can exacerbate alienation and loneliness.
“The use of AI on social media certainly did not create the social fragmentation and isolation of our time, but it clearly isn’t improving it,” O’Connor suggested.
The focus shouldn’t be AI, but how it is used
Peretti may have tried to pin the problem on social media companies’ focus on AI, but the real issue may simply be how the technology is used. AI has the potential to improve social media, but that would require some important changes.
Dr. Mel Stanfill, associate professor in the Texts and Technology program and the Department of English at the University of Central Florida, said there is a need to fundamentally rethink how social media platforms operate.
“The algorithms need to be optimized not just for attention, but for something like surfacing posts that lead to productive and respectful conversations,” Stanfill said.
“That’s something we can understand and implement, but it goes against the platforms’ interest in attention above all else,” Stanfill added. “Moderation is a bit of a whack-a-mole game: as platform tactics change to reduce harmful uses, the tactics people use to be terrible to each other change as well. But humans can use judgment to understand those problems and change how content moderation algorithms react based on that understanding.”
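As an illustration of that human-in-the-loop cycle, consider the toy sketch below. The keyword scoring and thresholds are stand-ins for a real learned classifier and are purely hypothetical.

```python
# Toy stand-in for a learned harm classifier; the term list is a placeholder.
HARMFUL_TERMS = {"slur_a", "slur_b", "threat_phrase"}

def harm_probability(text: str) -> float:
    hits = sum(term in text.lower() for term in HARMFUL_TERMS)
    return min(1.0, 0.4 * hits)

def moderate(posts, remove_above=0.9, review_above=0.4):
    kept, review_queue = [], []
    for text in posts:
        score = harm_probability(text)
        if score >= remove_above:
            continue                   # confidently harmful: drop outright
        if score >= review_above:
            review_queue.append(text)  # uncertain: route to human reviewers
        kept.append(text)
    return kept, review_queue

# Reviewer verdicts on the queue become labels for the next model update,
# which is how the system keeps pace as abuse tactics shift: the
# whack-a-mole dynamic Stanfill describes.
```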
In other words, generative AI is not an inherently harmful technology; it can do amazing things. But like many technologies, it can be misused, and it requires human oversight to be used properly. Improved “training” is already being employed to prevent the technology from generating responses to prompts that are deemed antisocial or dangerous.
“These techniques play the same role as the human content moderators that some of the major social media and internet companies employed to remove hate speech from user-generated content, though many of those companies have recently abandoned that effort,” O’Connor said.
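A minimal sketch of that kind of guardrail follows, assuming a simple check that runs before any text is generated. The blocked categories, trigger phrases and generate() stub are all hypothetical.

```python
# Illustrative pre-generation guardrail; categories and phrases are made up.
BLOCKED_CATEGORIES = {
    "harassment": ("write insults about", "help me bully"),
    "dangerous": ("how to make a weapon", "instructions to harm"),
}

def blocked_category(prompt: str):
    # Return the first category a prompt matches, or None if it looks safe.
    lowered = prompt.lower()
    for category, phrases in BLOCKED_CATEGORIES.items():
        if any(phrase in lowered for phrase in phrases):
            return category
    return None

def generate(prompt: str) -> str:
    return "(model output)"  # stub standing in for the underlying model

def respond(prompt: str) -> str:
    category = blocked_category(prompt)
    if category:
        # Refuse before generation so flagged prompts never produce output.
        return f"Sorry, I can't help with that ({category})."
    return generate(prompt)
```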
Additionally, such training takes time and requires large teams to carry out.
“It is expensive to have machine learning do this, but it would go a long way toward a more positive use of these technologies,” Stanfill continued.
The final question is what role social media companies should play in regulating content that bears AI’s digital fingerprints.
“There is a lot of evidence linking major news outlets to some degree of political bias, but the notion of social media companies policing all types of viewpoints, for good or bad, is at odds with the principles of free speech,” said O’Connor. “In the end, it really comes down to what end users choose to post, subscribe to and read.”