As mom and creator Kathy Pedrayes was scrolling through TikTok around Christmas, she began to notice a surplus of accounts using AI avatars to promote knockoff products. It was the holiday shopping season, so she barely gave it a thought, until she came across a video she found equal parts ridiculous and dangerous. The clip featured a woman claiming she had spent 13 years as a “butt doctor,” revealing secret advice she’d learned on the job. Her magical solution: an Amazon supplement to combat iron deficiency, plus a health newsletter promising tips for weight loss, a cleaner gut, and better health. Pedrayes knew it was fake. But she also saw it had 5.2 million views. And a simple search turned up plenty more just like it.
Her discovery points to one of the latest problems social media platforms have been fighting since the development and democratization of generative artificial intelligence: the rise of the AI doctor ad. Because AI tools are easier to use than ever, ads featuring AI talking heads claiming to be medical professionals have permeated social media’s robust wellness ecosystem. The problem isn’t isolated to one app. On Facebook, Instagram, X, and TikTok, a particular genre of AI health video, one that uses AI avatars to project medical expertise, has become a de facto way to convince people that the sellers and their unproven products are legitimate. Unlike the AI images of just a few years ago, many of these videos blend real footage with AI, making them look strikingly realistic at first glance, and they’re edited exactly like the popular front-facing-camera content native to video apps. Internet safety has long been a top concern for users and creators alike. But as the technology develops and becomes harder to identify, creators tell Rolling Stone that tamping down the spread of this kind of AI content is only getting more difficult.
As a cosmetic chemist, Javon Ford shares his expertise with nearly half a million followers on TikTok, debunking viral health and skincare claims. He posted about the AI doctors after realizing that many of the videos promoted harmful skincare trends, such as using beef tallow as sunscreen. With the help of people in his comments, Ford traced some of the videos to an app called Captions, whose avatars can be browsed by race, gender, and setting; users can choose between someone sitting in a car, standing outside, or walking down the street. The avatars don’t look entirely AI-generated because they aren’t: they’re videos of real people whose lips have been altered with AI to match whatever script a user types in. Captions’ terms of service prohibit users from “misrepresenting their identity or using [the app] to impersonate others,” but it is unclear how those guidelines are enforced. (Representatives for Captions did not respond to a request for comment.)
For Ford, the concern is that these ads are proliferating in a social media atmosphere that already suffers from a lack of context and critical thinking. “I’m not too worried about AI competing with me in terms of content; have you ever asked ChatGPT a math question? My problem is that these ads don’t disclose they’re AI, and that they blatantly lie,” Ford tells Rolling Stone. “Scientific illiteracy is at an all-time high, and unfortunately media literacy in general is rather low. It’s important that people keep thinking critically and become more vigilant and considerate about how they absorb this information.”
A recent report from journalism watchdog Media Matters found dozens of accounts on TikTok using “wellness buzzwords,” AI-generated influencers, and personal testimonials to promote a variety of supplements and health products. Olivia Little, a senior investigator at Media Matters and the author of the report, told Rolling Stone that TikTok’s “wellness scam” ecosystem has only become more elaborate over the past four years, and that it is incredibly dangerous.
“It’s a consumer safety issue, a user safety issue, because there’s an entire network of companies or accounts impersonating medical professionals, including doctors, surgeons, and more, to deliver maliciously fake or fabricated testimonials,” Little says. “It all comes down to the false credibility they’ve given themselves.”
Artificial intelligence has advanced rapidly over the past three years as companies have made their models available to the public. But as AI output becomes sharper and more difficult to detect, social media companies haven’t developed guidelines for AI content at the same speed. Meta’s content guidelines for Facebook and Instagram require AI labels on digitally created or altered photos and videos. Yet a quick search of both platforms turns up dozens of AI-generated doctor videos with no disclosure at all.
The same disclosure is required on TikTok, which also prohibits AI content that depicts “fake authoritative sources,” like accounts pretending to be doctors in order to promote supplements. After Media Matters published its report, the accounts it flagged were deleted. A TikTok spokesperson confirmed to Rolling Stone that the accounts were banned for violating the platform’s “spam and deceptive behavior” guidelines, and that additional profiles and videos flagged by the magazine were removed. But an hour after TikTok deleted the accounts, others were already popping up. (Meta declined to comment.)
Pedrayes, who shares consumer safety and advocacy videos with her 2 million followers, says a lot of her content revolves around how to stay safe online, a category she expects will grow to encompass AI-fronted scams as the programs, and the videos they produce, become more elaborate.
“Even the basics of, ‘Oh, I watched this video online. How can I know if it’s true?’ People seem to lack those skills,” she says. “I think the concern is that, on the platform side, algorithms will encourage this, and that creators on other apps will see that content and mimic it.”