The Tufts Daily

Op-Ed: Tragedy of the commons

When the internet first emerged, its potential seemed limitless. This was especially true in the political science community, where the internet was seen as a great equalizer and unifier. In his 1998 paper, "Can technology save democracy?", Tracy Westen predicted that the internet would increase democratic participation by providing a platform for issue-based campaigns. According to Westen, the internet could create cross-cutting cleavages and forge closer lines of communication between citizens and their representatives. The internet would also engender large-scale debates and allow for direct forms of political participation such as online voting and electronic polling.

Unlike previous advancements in communication technology, such as print media and television, the internet would be cheap: easily accessible from any public library or computer, with no monthly bills. By providing affordable access, scholars predicted, the internet would reduce the information and knowledge gaps created by media that gave the wealthy access to superior information.

These utopian prognostications only intensified with the advent of “Web 2.0,” the social media networks of Facebook and Twitter. Web 2.0 would evolve democracy, techno-evangelists argued, giving rise to Democracy 2.0, e-Democracy and teledemocracy. At the time, these predictions were logical. After all, like democracy, social media is founded on the principles of individual identity, communal action and participatory dialogue. In a 2008 Time article entitled “The Citizen Watchdogs of Web 2.0,” Jeremy Caplan argued that social media would propel civic engagement by allowing movements to more easily gain support and form cohesive strategies. That same year, Efthymios Constantinides and Stefan Fountain found that Web 2.0 creates massive social networks that remain centered on the opinions and content generated by users.

More than any other network, Facebook appeared poised to fulfill these predictions. Facebook boasts two billion monthly users, making it the world’s largest social media website. With so many users from so many different countries inhabiting the same space, Facebook became the digital commons that political scientists had dreamed about. Facebook is a primary driver of news: Pew Research Center studies show that 45 percent of American adults report getting at least some of their news from Facebook. Facebook is also a facilitator of communication, uniting people across distances. The civic foundations are there for Facebook to become the Democracy 2.0 engine it was expected to be.

Of course, it hasn’t worked out that way. Facebook has faced severe criticism over the past year for its role in the 2016 election cycle, when the network became an incubator for false news stories. It has since been disclosed that some of these stories were generated by Russian agents in an attempt to polarize the American electorate. Facebook has long been seen as a driver of polarization, its algorithms surfacing news and conversations that validate users' existing opinions.

It is because of these criticisms that Facebook announced this past month that it will be making significant changes to its signature News Feed. Announcing the changes in a Facebook post, CEO Mark Zuckerberg wrote, “I'm changing the goal I give our product teams from focusing on helping you find relevant content to helping you have more meaningful social interactions.” This means that Facebook’s algorithm will now prioritize posts from friends, family and groups over content from the brands, publishers and media outlets that have come to dominate News Feeds in recent years. That includes news stories from reputable publications. "We feel a responsibility to make sure our services aren't just fun to use, but also good for people's well-being," Zuckerberg wrote.

These changes are a recognition that Facebook may not want the pressures that come with being the digital commons. Rather than increasing its vetting or oversight, Facebook responded to criticism of the fake news stories circulating on its network by limiting the amount of news that users see. That is because Facebook does not want to be accused of bias: of 156 false election stories identified in a 2017 analysis from Stanford University, all but 41 were overtly pro-Donald Trump or anti-Hillary Clinton. Facebook is one of the wealthiest companies in the world, generating nearly $18 billion in profit just last year. It has the capability to eliminate the majority of the fake news stories on its platform. But to do so would be to target conservative propaganda.

According to Zuckerberg, Facebook’s algorithm changes are meant to return the network to its mission: to “help us connect with each other.” But Facebook has done nothing to diversify the kinds of connections it forms, or to engage us in conversations across our differences. Instead, it is moving away from news to avoid controversy, criticism and government regulation. As long as profits are at stake, the dream of Democracy 2.0 will have to wait.