AI Doctors Shilling Health Products Are All Over Social Platforms

When mom and creator Cathy Pedrayes was scrolling TikTok around Christmas, she started to notice a flood of accounts using AI avatars to promote knockoff products. Since it was the holiday shopping season, she thought little of it — until she stumbled across a video that felt equal parts ridiculous and dangerous. It was a clip of a woman claiming to have spent 13 years as a “butt doctor” revealing secret advice she learned during her residency. Her magic solution: a supplement from Amazon to fight iron deficiency, and a health newsletter that promised tips for weight loss, a clean gut, and better health. Pedrayes knew it was fake. But she also saw that it had 5.2 million views. And a quick search revealed another worry: there were hundreds more just like it.

@cathypedrayes

Millions of people watched this video, it was promoted as a result and I’m hoping no one bought what they’re selling. This is AI. It’s fake and I’m noticing a ton of accounts using this tactic. • They repost the same kind of clickbait headline about a spouse cheating, weight loss or something that sounds weird like ‘butt doctor.’ • They build up the account as a ‘person’ and then change the username to a company. • At the end of the video they typically promote a health supplement and/or a newsletter. Even if you can’t tell what’s AI- if clickbait is a red flag for you, that’ll help protect you from AI garbage like this (& a lot of scams in general). #ai #scamalert #scammers #response #reaction #safetytips #onlinesafety #digitalsafety


This discovery is one of the latest problems social media platforms are fighting since the development and democratization of artificial intelligence generators: AI doctor ads. AI content is easier than ever to produce, and as a result, ads with AI talking heads claiming to be medical experts are infiltrating social media’s robust wellness ecosystem. The problem isn’t isolated to one app. On Facebook, Instagram, X, and TikTok, a particular kind of AI health video — one that uses an AI avatar to project medical expertise — has become the de facto way for accounts to convince people that they, and their unproven products, are legit. Unlike AI images from just a few years ago, many of these videos blend real footage with AI, resulting in avatars that look extremely lifelike at first glance — and are edited exactly like the direct-to-camera content that’s popular on video apps. For scrollers and creators alike, internet safety has always been a top concern. But as the technology develops and becomes harder to discern, several creators tell Rolling Stone it’s up to platforms to stop this type of AI spread — before it becomes impossible to detect.

As a cosmetic chemist, Javon Ford shares his expertise with close to half a million followers on TikTok, debunking viral health and skincare claims. He posted about the AI doctors after realizing that many of the videos promoted harmful skincare fads like using beef tallow for sunscreen. With the help of people in his comments, Ford linked some of the videos to an app called Captions.ai, which offers creators the ability to turn written prompts into videos hosted by a variety of avatars. These avatars can be searched by race, gender, and setting — giving users the opportunity to choose between people sitting in their cars, outside, or even walking down the street. The avatars don’t appear to be fully AI-generated; rather, they are videos of real people whose lips have been altered with AI to match the inputted script. Under Captions.ai’s Terms and Conditions, users are prohibited from “misrepresenting your identity or using [the app] to impersonate any other person,” but it’s unclear how those guidelines are enforced. (Representatives for Captions.ai did not respond to Rolling Stone’s requests for comment.)

For Ford, the worry is that these ads are multiplying in a social media atmosphere that’s already suffering from a lack of interest in context and critical thinking. “I’m not really concerned about AI competing in terms of content. I mean, have you ever asked ChatGPT a math question? My issue is that they’re not disclosing the AI in these ads, or they’re just blatantly lying,” Ford tells Rolling Stone. “Scientific illiteracy is at an all-time high. Literacy in general is pretty low, unfortunately. And it’s important that people are able to still think critically and become more vigilant and mindful about how we absorb this information.”

A recent report from media watchdog Media Matters found that dozens of accounts on TikTok use “wellness buzzwords,” AI-generated influencers, and fabricated personal testimonies to promote a variety of supplements and health products. Olivia Little, a senior investigative researcher at Media Matters and the author of the report, tells Rolling Stone that TikTok “wellness scams” have only gotten more elaborate in the past four years, something she considers incredibly dangerous.

“It’s a consumer safety issue as well as a user safety issue, because you have a network of companies or accounts impersonating anyone from a medical professional, like doctors, surgeons to, even more maliciously, providing fake or fabricated testimony,” Little says. “It’s all based on the false credibility that they’ve given themselves to, quite literally, trick the consumer.” 

Artificial intelligence has advanced rapidly in the past three years, as companies have made their models available to the general public. But as AI gets sharper and harder to detect, social media companies haven’t developed guidelines for AI content at the same speed. Meta’s content guidelines for Facebook and Instagram require an AI label for any photo or video that’s been “digitally created, modified, or altered.” But a quick search of both platforms shows dozens of AI-generated doctor videos without any disclosures, still raking in views.

The same disclosure is also required on TikTok, which prohibits AI content that depicts “fake authoritative sources” — like accounts that pretend to be doctors to promote supplements. When Media Matters reached out to TikTok flagging dozens of accounts, all of them were removed. A spokesperson for TikTok confirmed to Rolling Stone that the accounts were banned for violating the platform’s “Spam & Deceptive Behaviors” guidelines, and TikTok removed additional profiles and videos flagged by this magazine. But within an hour of the accounts being removed, others continued to pop up. (Meta declined to comment.)


Pedrayes, who shares videos about consumer safety and advocacy with her 2 million followers, tells Rolling Stone that a lot of her content revolves around how to stay safe online, something she thinks will grow to encompass AI-fronted claims as the programs, and the videos, get more elaborate.

“Now I’m finding that just the basics of ‘Oh, I saw this video online. How do I know if it’s true?’ — it seems like [people] are missing those skills,” she says. “I think from a platform space, it’s concerning that algorithms would promote this, and also that it would show up in other apps for other creators to kind of mimic that content. [Platforms] definitely have a responsibility to do more in toning this down.”
