A study indicates that Meta and X approved advertisements containing hate speech and incitement to violence ahead of Germany's federal elections.
A recent study by a German corporate responsibility organization has found that the social media platforms Meta (Facebook) and X (formerly Twitter) approved advertisements featuring anti-Semitic and anti-Muslim messages ahead of Germany's federal elections.
As part of the research, the investigators submitted 20 advertisements containing violent language and hate speech targeting minority communities.
The findings revealed that X approved all 10 ads submitted to its platform, while Meta approved 5 of the 10 it received. The ads included calls for violence against Jews and Muslims, likened Muslim refugees to 'viruses' and 'rodents,' and even urged their extermination or sterilization.
One advertisement called for burning synagogues to 'halt the Jewish globalist agenda.' The researchers noted that the ads were taken down before publication and never actually ran, yet the approvals raise concerns about the content-moderation practices of these platforms.
The organization behind the research has submitted its findings to the European Commission, which is expected to open an investigation into potential violations of the EU Digital Services Act by Meta and X. The report's timing is particularly sensitive given Germany's impending federal elections, and it has stirred worries about the impact of hate speech on the democratic process.
In the past, Facebook was embroiled in controversy over the Cambridge Analytica scandal, in which a data analytics firm used the platform's user data to influence elections around the world through targeted political advertising; the affair ultimately resulted in a $5 billion fine for Facebook from the US Federal Trade Commission.
Moreover, Elon Musk, the owner of X, has been criticized for reportedly interfering in the German elections and endorsing the far-right AfD party.
It remains unclear whether the approval of these ads reflects Musk's political leanings or his broader commitment to 'free speech' on X. Musk has dismantled X's content-moderation system, replacing it with a 'community notes' approach that lets users append context to posts to offer alternative perspectives.
Mark Zuckerberg, CEO of Meta, has introduced a similar system for Facebook, although he said that AI-based detection systems for hate speech and illegal content would still be employed.
However, the change has raised concerns, especially amid reports that extremist right-wing content is increasingly being amplified on platforms such as X and TikTok, shaping public opinion.
An economic downturn and a series of violent incidents involving Muslim migrants in recent months have exacerbated tensions.
It is uncertain whether the rise in extremist content is a consequence of real-world events or whether social media algorithms are promoting such narratives to boost user engagement.
Regardless, both Musk and Zuckerberg have shown a readiness to scale back content moderation despite pressure from the EU and German authorities.
It remains unclear whether this investigation will prompt the EU to impose stricter regulations on X, Facebook, and TikTok, but it highlights the ongoing struggle to balance free speech with preventing the spread of extremist content.
The study illustrates a broader problem: hate speech often aligns with political agendas, which complicates the content-moderation responsibilities of social media platforms.
While discussions around regulatory measures continue, the question of who ought to regulate digital discourse, private corporations or government bodies, remains unresolved.
Like traditional media before them, social platforms are likely to face growing scrutiny over how they manage user-generated content.