Meta's decision to end its fact-checking initiatives in the United States marks a significant shift in its content moderation strategy.
Meta Platforms, Inc., the parent company of Facebook and Instagram, has announced that it will discontinue its fact-checking programs in the United States within days.
The decision was initially revealed in early January and confirmed last Friday as the termination date approached.
As part of this transition, the company will halt the production of new fact-checks and dismantle its team of fact-checkers.
In their place, users will see a community-notes feature similar to the one on X, in which other users can append context to posts to flag potential misinformation. These community-generated notes are not expected to carry any penalties for the posts they flag.
In February, Meta initiated a rollout of its new community fact-checking program by onboarding initial testers.
The new program relies on algorithms similar to those deployed on X, though community notes have not yet appeared publicly alongside posts.
The move signals a significant change in how content is moderated on Meta's platforms, with traditional fact-checking mechanisms being phased out in favor of community-driven labeling.
The full implications of the policy shift remain unclear, as Meta has shared few details about future changes to its handling of misinformation. Meanwhile, users and regulators in Europe are watching to see whether similar changes will reach that market.