A Facebook Watchdog Group Appeared To Cheer A Law That Could Hurt Journalists

The Real Facebook Oversight Board wants content moderation, and it wants it now. What happens when journalists are targeted?

In the extended universe of the techlash, the Real Facebook Oversight Board presents itself as the Avengers.

The members of the group, described on its website as a “‘Brains Trust’ to respond to the critical threats posed by Facebook’s unchecked power,” were summoned from the four corners of the internet by Carole Cadwalladr, the activist British journalist who broke the Cambridge Analytica scandal.

(The group is not affiliated with Facebook and was started last year in confusingly named opposition to Facebook’s creation of its official Oversight Board, or, colloquially, “Facebook Supreme Court.”)

They include some of the biggest names and loudest voices in the movement to hold tech platforms accountable for their influence: people like Shoshana Zuboff, who coined the term “surveillance capitalism”; Roger McNamee, the early Facebook investor who has been publicly critical of the company; Yaël Eisenstat, the ex-CIA officer and former head of election integrity operations for political ads at Facebook; and Timothy Snyder, the Yale historian of fascism.

So it was strange to see this superteam on Wednesday tweeting, in what appeared to be celebratory fashion, about a decision from the Australian High Court (the country's version of the Supreme Court) that does nothing directly to check Facebook’s power while harming the interests of the press.

The Real Facebook Oversight Board wrote only one word in response to the news, "BOOM," followed by three bomb emojis. But that one word is revealing, not just of a mindset among some tech critics that removing unwanted content is inherently positive, but of the reality that the interests of journalists are not always aligned — as has largely been assumed — with those of the platforms' most prominent critics.

In a statement, a spokesperson for the Real Facebook Oversight Board disputed BuzzFeed News' characterization of the "BOOM" tweet, writing, "We made no comment on the law, and have not taken a position on it. The position attributed to us in this column is simply false."

The 5–2 decision, which came down earlier this week, lays the foundation for defamation suits against Facebook users for comments left on their pages. That means Australian news organizations — and potentially all Australians on social media, though it’s unclear for now — could be responsible for defamatory comments left under their posts on the platform, even if they aren’t aware the content exists.

To avoid lawsuits, these newsrooms may have to shut down comments on their Facebook pages or shift resources from newsgathering to fund content moderation on a massive scale. That’s about as far from the United States’ permissive legal regime for internet content — the one many critics of social media’s influence loathe — as it gets. This is, as Mike Masnick wrote for Techdirt, “the anti-230”: “It says if you have any role in publishing defamatory information, you can be treated as the publisher.” (Section 230 is the controversial part of the Communications Decency Act that, with a few exceptions, protects websites in the United States from being sued for content created by their users.)

The ruling, meanwhile, says nothing about Facebook’s liability for hosting defamatory content.

“Every major internet company now has a group of haters who will never be satisfied,” said Eric Goldman, who codirects the High Tech Law Institute at the Santa Clara University School of Law. “They are opposed to anything that would benefit their target. It leads to wacky situations.”

One such wacky situation: Fox News and the Wall Street Journal have spent years attacking Section 230 for protecting the platforms they allege are prejudiced against conservatives. Now their owner, Rupert Murdoch, potentially faces a new universe of defamation claims in the country of his birth, where he still owns a media empire.

Another: A tech watchdog group that includes Laurence Tribe, the constitutional law scholar, and Maria Ressa, the Filipina journalist who has been hounded by the Duterte regime through the country’s libel laws, has released a favorable public statement about the expansion of defamation liability — an expansion that, as Joshua Benton suggested at Nieman Lab, presents a tempting model for authoritarians around the world.

Started in September 2020, the Real Facebook Oversight Board promised to provide a counterweight to the actual Oversight Board. Itself a global superteam of law professors, technologists, and journalists, the official board is where Facebook now sends thorny public moderation decisions. Its most important decision so far, to temporarily uphold Facebook’s ban of former president Trump while asking the company to reassess the move, was seen paradoxically as both a sign of its independence and a confirmation of its function as a pressure relief valve for criticism of the company.

On its website and elsewhere, the Real Facebook Oversight Board criticizes the original board for its “limited powers to rule on whether content that was taken down should go back up” and its timetable for reaching decisions: “Once a case has been referred to it, this self-styled ‘Supreme Court’ can take up to 90 days to reach a verdict. This doesn’t even begin to scratch the surface of the many urgent risks the platform poses.” In other words: We want stronger content moderation, and we want it faster.

Given the role many allege Facebook has played around the world in undermining elections, spreading propaganda, fostering extremism, and eroding privacy, this might seem like a no-brainer. But there’s a growing acknowledgment that moderation is a problem without a one-size-fits-all solution, and that sweeping moderation comes with its own set of heavy costs.

In a June column for Wired, the Harvard Law lecturer evelyn douek wrote that “content moderation is now snowballing, and the collateral damage in its path is too often ignored.” Definitions of bad content are political and inconsistent. Content moderation at an enormous scale has the potential to undermine the privacy many tech critics want to protect — particularly the privacy of racial and religious minorities. And perhaps most importantly, it’s hard to prove that content moderation decisions do anything more than remove preexisting problems from the public eye.

Journalists around the world have condemned the Australian court’s decision, itself a function of that country’s famously plaintiff-friendly defamation laws. But the Real Facebook Oversight Board’s tweet is a reminder that the impulses of the most prominent tech watchdog groups can be at odds with a profession that depends on free expression to thrive. Once you get past extremely obvious cases for moderation — images of child sexual abuse, incitements to violence — the suppression of bad forms of content inevitably involves political judgments about what, exactly, is bad. Around the world, those judgments don’t always, or even usually, benefit journalists.

“Anyone who is taking that liability paradigm seriously isn’t connecting the dots,” Goldman said.
