TikTok empowered these plus-sized women, then took down some of their posts. They still don't know why

After losing her marketing job due to the pandemic and then gaining 40 pounds, Remi Bader, 25, began spending more time on TikTok. She built up a following by posting about clothing items not fitting her correctly and her struggle to find larger sizes in New York City stores.

But in early December, Bader, who now has more than 800,000 followers, tried on a too-small pair of brown leather pants from Zara, and viewers caught a glimpse of her partially naked butt. TikTok quickly deleted the video, citing its policy against "adult nudity." It was upsetting to Bader given that her video, which was meant to promote body positivity, was taken down while videos from other TikTok users that appear sexually suggestive remain on the app. "That to me makes no sense," she said.

Julia Kondratink, a 29-year-old biracial blogger who describes herself as "mid-sized," had a similarly unexpected takedown on the platform in December.

TikTok deleted a video featuring her wearing blue lingerie due to "adult nudity." "I was in shock," she told CNN Business. "There wasn't anything graphic or inappropriate about it."

And Maddie Touma says she has watched it happen to her videos multiple times. The 23-year-old TikTok influencer with nearly 200,000 followers has had videos of her wearing lingerie, as well as regular clothing, taken down. It made her rethink the content she posts, which can be a difficult tradeoff since her mission is body positivity.

"I actually started to change my style of content, because I was scared my account was going to either be removed or just have some sort of repercussions for getting flagged so many times as against community guidelines," Touma said.

Scrolling through videos on TikTok, the short-form video app especially popular among teens and 20-somethings, there's no shortage of scantily clad women and sexually suggestive content. So when curvier influencers like Bader and Touma post similar videos that are then removed, they can't help but question what happened: Was it a moderator's error, an algorithm's error or something else? Adding to their confusion is the fact that even after appealing to the company, the videos don't always get reinstated.

Remi Bader has amassed a following of more than 800,000 on TikTok.


They're not the only ones feeling frustrated and confused. Adore Me, a lingerie company that partners with all three women on sponsored social media posts, recently made headlines with a series of tweets claiming that TikTok's algorithms are discriminating against its posts with plus-sized women, as well as posts with "differently abled" models and women of color. (After its public Twitter thread, TikTok reinstated the videos, Ranjan Roy, Adore Me's VP of strategy, told CNN Business.) The issue isn't new, either: Nearly a year ago, the singer Lizzo, who is known for her vocal support of body positivity, criticized TikTok for removing videos showing her in a bathing suit, but not, she claimed, swimwear videos from other women.

Content-moderation issues aren't limited to TikTok, of course, but it's a relative newcomer compared to Facebook, Twitter, and others that have faced blowback for similar missteps for years. Periodically, groups and individuals raise concerns that the platforms are inappropriately, and perhaps deliberately, censoring or limiting the reach of their posts when the truth is far less clear. In the case of the plus-sized influencers, it's not evident whether they're being impacted more than anyone else by content takedowns, but their cases nonetheless offer a window into the messy and sometimes inconsistent content moderation process.

The murkiness of what actually happened to these influencers highlights both the mystery of how algorithms and content moderation work and the power that these algorithms and human moderators — often working in concert — have over how we communicate, and even, potentially, over whose bodies have a right to be viewed on the internet. Those in the industry say likely explanations range from artificial-intelligence bias to cultural blind spots among moderators. But those outside the industry feel left in the dark. As Bader and Adore Me found, posts can disappear even if you believe you're following the rules. And the results can be confounding and hurtful, even if they're unintentional.

"It's frustrating for me. I have seen thousands of TikTok videos of smaller people in a bathing suit or in the same type of outfit that I would be wearing, and they're not flagged for nudity," Touma said. "Yet me as a plus sized person, I am flagged."

A sense of not knowing is pervasive


For years, tech platforms have relied on algorithms to determine much of what you see online, whether it's the songs Spotify plays for you, the tweets Twitter surfaces on your timeline, or the tools that spot and remove hate speech on Facebook. Yet while many of the big social media companies use AI to complement their users' experience, AI is even more central to how you use TikTok.

TikTok's "For You" page, which relies on AI systems to serve up content it thinks individual users will like, is the default and predominant way people use the app. The prominence of the "For You" page has created a pathway to viral fame for many TikTok users, and is one of the app's defining features: Because it uses AI to highlight certain videos, it occasionally enables someone with no followers to garner millions of views overnight.

But TikTok's choice to double down on algorithms comes at a time of widespread concerns about filter bubbles and algorithmic bias. And like many other social networks, TikTok also uses AI to help humans sift through large numbers of posts and remove objectionable content. As a result, people like Bader, Kondratink and Touma who have had their content removed can be left trying to parse the black box that is AI.

TikTok told CNN Business that it doesn't take action on content based on body shape or other characteristics, as Adore Me alleges, and the company said it has made a point of working on recommendation technology that reflects more diversity and inclusion. Furthermore, the company said US-based posts may be flagged by an algorithmic system but a human ultimately decides whether to take them down; outside the United States, content may be removed automatically.
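To make that two-track process concrete, here is a minimal, hypothetical sketch in Python of how a flag-then-review pipeline could be wired up. The names, threshold, and routing rules are assumptions made for illustration only; this is not TikTok's actual code or policy.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    country: str
    violation_score: float  # hypothetical output of an upstream model, 0 to 1

REVIEW_THRESHOLD = 0.8  # assumed cutoff, not a real TikTok value

def route(post: Post) -> str:
    # Below the threshold, the post is left alone.
    if post.violation_score < REVIEW_THRESHOLD:
        return "keep"
    # Flagged: per the company's description, US posts go to a human
    # reviewer, while elsewhere removal may happen automatically.
    if post.country == "US":
        return "send_to_human_review"
    return "auto_remove"

print(route(Post("a1", "US", 0.92)))  # send_to_human_review
print(route(Post("b2", "DE", 0.92)))  # auto_remove
print(route(Post("c3", "US", 0.10)))  # keep
```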

"Let us be clear: TikTok does not moderate content on the basis of shape, size, or ability, and we continually take steps to strengthen our policies and promote body acceptance," a TikTok spokesperson told CNN Business. However, TikTok has limited the reach of certain videos in the past: In 2019, the company confirmed it had done so in an attempt to prevent bullying. The company statement followed a report alleging the platform took action on posts from users who were overweight, among others.

While tech companies are eager to talk to the media and lawmakers about their reliance on AI to help with content moderation — claiming it's how they can manage such a task at massive scale — they can be more tight-lipped when something goes wrong. Like other platforms, TikTok has blamed "bugs" in its systems and human reviewers for controversial content removals in the past, including those connected to the Black Lives Matter movement. Beyond that, details about what may have happened can be thin.

AI experts acknowledge that the processes can seem opaque in part because the technology itself isn't always well understood, even by those who are building and using it. Content moderation systems at social networks typically use machine learning, which is an AI technique where a computer teaches itself to do one thing — flag nudity in photographs, for instance — by poring over a mountain of data and learning to spot patterns. Yet while it may work well for certain tasks, it's not always clear exactly how it works.
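To illustrate the general technique, and not any platform's actual system, here is a minimal machine-learning sketch in Python using scikit-learn: a classifier learns to flag items by spotting patterns in labeled examples. The random stand-in features and hypothetical labels are assumptions made purely to keep the example self-contained; real moderation systems extract features from images with deep networks.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in data: each row plays the role of a feature vector extracted
# from an image (random numbers here, purely for illustration).
X = rng.normal(size=(1000, 32))
# Hypothetical labels: 1 = "violates policy", 0 = "fine". We fabricate a
# simple hidden pattern so the model has something to learn.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The model "teaches itself" by fitting patterns in the labeled examples.
model = LogisticRegression().fit(X_train, y_train)

# The output is a score, not a reason: this opacity is part of why
# takedowns are so hard for users to interrogate.
scores = model.predict_proba(X_test)[:, 1]
print("flagged:", (scores > 0.9).sum(), "of", len(scores))
```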

"We don't have a ton of insight a lot of times into these machine learning algorithms and the insights they're deriving and how they're making their decisions," said Haroon Choudery, cofounder of AI for Anyone, a nonprofit aimed at improving AI literacy.

But TikTok wants to be the poster child for changing that.

A look inside the black box of content moderation


In the midst of international scrutiny over security and privacy concerns related to the app, TikTok's former CEO, Kevin Mayer, said last July that the company would open up its algorithm to experts. These people, he said, would be able to watch its moderation policies in real time "as well as examine the actual code that drives our algorithms." Almost two dozen experts and congressional offices have taken part so far, virtually, due to Covid, according to a company announcement in September. The sessions included demonstrations of how TikTok's AI models search for harmful videos, as well as the software that ranks those videos in order of urgency for human moderators' review.

Eventually, the company said, guests at actual offices in Los Angeles and Washington, D.C. "will be able to sit in the seat of a content moderator, use our moderation platform, review and label sample content, and experiment with various detection models."

"TikTok's brand is to be transparent," said Mutale Nkonde, a member of the TikTok advisory council and fellow at the Digital Civil Society Lab at Stanford.

Even so, it's impossible to know precisely what goes into each decision to remove a video from TikTok. The AI systems that large social media companies rely on to help moderate what you can and can't post do have one major thing in common: They're using technology that's still best suited to fixing narrow problems in order to address a problem that is widespread, ever-changing, and so nuanced it can even be tricky for a human to understand.

Because of that, Miriam Vogel, president and CEO of the nonprofit EqualAI, which helps companies decrease bias in their AI systems, thinks platforms are trying to get AI to do too much when it comes to moderating content. The technology is also prone to bias: As Vogel points out, machine learning is based on pattern recognition, which means making snap decisions based on past experience. That alone is implicit bias; the data a system is trained on, along with a number of other factors, can introduce further biases related to gender, race, and much more.

"AI is certainly a useful tool. It can create tremendous efficiencies and benefits," Vogel said. "But only if we're conscious of its limitations."

For instance, as Nkonde pointed out, an AI system that looks at text that users post may have been trained to spot certain words as insults — "big", "fat", or "thick", perhaps. Such terms have been reclaimed as positive among those in the body positivity community, but AI doesn't know social context; it just knows to spot patterns in data.
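A toy example makes the blind spot concrete. The sketch below is hypothetical and not drawn from any real moderation system: it flags captions by keyword alone, so a reclaimed, body-positive use of "fat" trips the same rule as a genuine insult.

```python
# Assumed term list, for illustration only: a naive filter trained to
# treat certain words as insults has no notion of reclaimed, positive usage.
INSULT_TERMS = {"big", "fat", "thick"}

def naive_flag(caption: str) -> bool:
    """Flag a caption if any 'insult' term appears, ignoring all context."""
    words = {w.strip('.,!?').lower() for w in caption.split()}
    return not words.isdisjoint(INSULT_TERMS)

# A body-positive caption trips the same rule as an actual insult:
print(naive_flag("Feeling confident and fat and fabulous today!"))  # True
print(naive_flag("You're so fat"))                                  # True
print(naive_flag("Loving this outfit"))                             # False
```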

Furthermore, TikTok employs thousands of moderators, including full-time employees and contractors. The majority are located in the United States, but it also employs moderators in Southeast Asia. That could result in a situation where a moderator in the Philippines, for instance, may not know what body positivity is, Nkonde said. So if that sort of video is flagged by the AI and falls outside the moderator's cultural context, they may take it down.

Moderators work in the shadows


It remains unclear exactly how TikTok's systems misfired for Bader, Touma and others, but AI experts said there are ways to improve how the company and others moderate content. Rather than focusing on better algorithms, however, they say it's important to pay attention to the work that must be done by humans.

Liz O'Sullivan, vice president of responsible AI at the algorithm auditing company Arthur, thinks part of the solution to improving content moderation generally lies in elevating the work done by these workers. Often, she noted, moderators work in the shadows of the tech industry: the work is outsourced to call centers around the world as low-paid contract work, despite the often unsavory (or worse) images, text, and videos they're tasked with sorting through.

To fight unwanted biases, O'Sullivan said, a company also has to look at every step of building its AI system, including curating the data used to train the AI. For TikTok, which already has a system in place, this may also mean keeping a closer watch on how the software does its job.

Vogel agreed, saying companies need to have a clear process not just for checking AI systems for biases, but also for determining what biases they're looking for, who's responsible for looking for them, and what kinds of outcomes are okay and not okay.

"You can't take humans outside of the system," she said.

If changes aren't made, the consequences may not just be felt by social media users, but also by the tech companies themselves.

"It lessened my enthusiasm for the platform," Kondratink said. "I've contemplated just deleting my TikTok altogether."
