Back in January, Meta made a bold move — it dropped third-party fact-checking on Facebook and Instagram and replaced it with community notes. The company said the change was about deepening its “commitment to free expression.” But not everyone is buying that explanation. Critics argue that political motivations may be at play, and they worry that the shift could make it even easier for disinformation and toxic content to spread on Meta’s platforms. These concerns are valid, but larger questions lurk underneath all of this: Does fact-checking actually work? Can it really stop people from believing falsehoods? And how cleanly can fact be separated from fiction in the first place?
In all honesty, fact-checking isn’t the cure-all some make it out to be. Sure, on non-polarizing topics, it can help people avoid falling for false beliefs. But once you dive into more divisive territory, like climate change, politics or vaccines, it becomes a whole different story. If someone has already made up their mind about a topic, a little “fact-check” label isn’t going to change it. Indeed, Tali Sharot, a cognitive neuroscience professor at University College London, found that when people were shown information that aligned with what they already believed about climate change, they simply became more entrenched in their views. And when they were presented with information that challenged those beliefs? They didn’t budge.
So while fact-checking might help keep more neutral, factual topics from turning into battlegrounds, it’s not doing much to heal the information divide once it’s already there. In a country as polarized as the United States is right now, that divide isn’t likely to shrink anytime soon — no matter how many fact-check labels you slap on a post.
Fact-checking itself is inherently controversial, as it rests on the belief that one can clearly establish what is a fact and what isn’t. That belief often does not hold. In many fields, the question at hand is more debatable than it first appears. History offers a long list of “facts” that were eventually disproven: the benefits of bloodletting, the efficacy of lobotomies, the idea that the sun revolves around the Earth. Evidently, establishing a fact isn’t as easy as some assume. Settled facts can be quite elusive.
During the COVID-19 pandemic, there was quite a bit of myth-spreading about the origin of the virus. One claim initially labeled false, or even racist, was the lab leak theory, which held that the virus that causes COVID-19 originated in a virology lab in Wuhan. Understanding of the pandemic has since shifted, and the CIA now considers the lab leak theory at least plausible. In fact, Meta initially curbed the theory’s spread on Facebook and Instagram by removing posts promoting it, only reversing the ban in May 2021.
The problem with such measures is that they foreclose genuine debate, and that has serious consequences. By stifling debate, Meta may have contributed to the growing distrust of scientists and public officials in the United States. By requiring institutions to fact-check everything, we make them arbiters of truth in situations where the truth is hard to establish.
While fact-checking may be flawed, community notes may not be much better. The closest analogue to Meta’s system is X’s community notes, which have so far failed to flag misinformation quickly enough to matter. Other models for this sort of tool, such as Wikipedia, have been more successful at community-based fact-checking, mainly because of Wikipedia’s insistence that every claim its writers make be verifiable. Even so, community notes on social media have yet to prove effective in any meaningful capacity.
So, while it’s still too early to judge the impact of Meta’s adoption of community notes, it’s clear that fact-checking isn’t the straightforward solution it’s often made out to be. Psychological biases and philosophical ambiguities blur the line between truth and belief, turning fact-checking into a contentious and often counterproductive tool. Far from settling debates, it can stifle them. In theory, fact-checking aspires to uphold truth; in practice, it risks becoming a blunt instrument — one that, without nuance, may do more harm than good.