The Network Effect of Chaos

Can we combat a rising tide of disinformation on the internet, which threatens the global community the web was meant to enable?

[Illustration by Elena Daly: a man speaking to an audience through a megaphone shaped like the Twitter bird logo; elongated red bubbles emerge from the megaphone, some enclosing words like "rigged," "conspiracy" and "fraud."]

In the internet’s early days, idealists believed it to be an inherently democratizing force. This technology of pure free speech would enable everyone to communicate with everyone else, tearing down authoritarian walls by empowering ordinary citizens.

Instead, the destruction of old communication barriers has led to a deluge of false information. Both misinformation (information that is incorrect) and disinformation (information that is deliberately incorrect) have spiked alongside the rise of social media. The scale is mind-boggling. During the 2016 elections, for example, 126 million Americans were exposed to fake Facebook ads placed by Russian operatives, and Twitter, at the height of its cleanup efforts, was removing dozens of disinformation bots every second.

The storming of the U.S. Capitol on Jan. 6 and the events that followed have provided a recent object lesson not just on the dangers of disinformation but on potential strategies for combating it. Within a week of Twitter’s suspension of President Donald Trump’s account after the Capitol attack, misinformation about Joe Biden’s election victory posted on the platform had decreased by nearly 75%.

Many other platforms followed Twitter’s lead, taking down thousands of accounts that pushed false information about the election’s validity. Perhaps most dramatically, Amazon shut down the servers of the right-wing social media site Parler in the wake of revelations that Parler had been extensively used during the planning of the Capitol violence.

The coordinated dissemination of disinformation has been traced to actors around the world. Russian pseudo-state operations, financed by Vladimir Putin’s allies, have been of particular concern: they deploy “troll farms” of employees posting from fake accounts and working in conjunction with bot networks to spread disinformation on social media.

It may seem odd that the campaigns targeting the United States show little in the way of a cohesive strategy. Although the 2016 presidential election elicited a variety of Russian-sourced efforts that boosted Trump’s candidacy, other disinformation campaigns have run an eclectic gamut: a hoax about poisoned Thanksgiving turkeys; an invented chemical-plant explosion; anti-vaccination propaganda; and support for California and Texas secession, electoral college reform, anti-Muslim immigration policies, Black Lives Matter, even the organization of anti-Trump protests once he took office.

Each of these campaigns relied on very little original content. They simply co-opted, amplified and distorted existing material. Even the Putin-linked operations largely exploited disinformation networks already present on American social media.

And while the campaigns have some common threads — the exploitation of existing political cleavages, for instance — there is no coherent underlying message or political goal. The campaigns simultaneously push multiple, contradictory, often obviously fictional and conspiracy theory-laden accounts of events.

This lack of coherence is not a bug, but a feature. The chaos is the point.

The chaos aims not to get people to believe in a particular truth, but to convince them the truth cannot be known at all. Democratic institutions collapse without trust in truth. Without a belief in a basic shared reality — the reality of who won an election, say — democracy cannot function, and the door is opened to charismatic populists and their alternative truths.

So, what can we do to limit disinformation’s impact on social media? Regulating social media companies may seem like the obvious solution, but it’s a difficult policy to deploy effectively. How would we regulate the operations of a company that provides communication to people in every country on the planet?

What’s more, the value of a social media platform to its users depends on everyone’s being on that same platform. This idea — that a communication platform’s value to any given user is proportional to how many other people are also on it — is called the network effect. Breaking up the big social media companies would mean destroying what gives them value to their users in the first place.

Social media tends to be naturally monopolistic within a given market niche. If you tried to break up Facebook by dividing its user base along national borders, you would merely create 200-plus national monopolies around the world. Divide a user base along any topical or individual characteristics, and you have created the worst sort of echo chambers, preventing the communication among dissimilar individuals the internet was supposed to enable.
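The arithmetic behind this is easy to sketch. If we assume, as a simplification (this model — Metcalfe’s law — is an illustration, not a claim made above), that a platform’s total value scales with the number of possible user-to-user connections, then splitting one platform into isolated pieces destroys most of that value:

```python
def connections(n):
    # Number of possible user-to-user links among n users: n choose 2
    return n * (n - 1) // 2

# One platform with 1,000 users vs. ten isolated platforms of 100 users each
whole = connections(1000)
split = 10 * connections(100)

print(whole)  # 499500 possible connections
print(split)  # 49500 -- roughly a tenth of the original
```

Under this toy model, carving a network into ten equal, non-communicating pieces leaves only about a tenth of its connective value — which is why breakup proposals sit uneasily with what makes these platforms useful.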

What about taking down content, or de-platforming individuals? Some defenders of social media platforms argue that free-speech principles make this a tricky proposition. But disallowing the posting of lies is not the same as censoring opinions. As the cliché goes, everyone has a right to their own opinion; they do not have a right to their own facts.

It is not censorship to delete lies, especially when they have profound public-health implications (anti-vaccination propaganda) or dangerous implications for democratic stability (lies about election fraud). These are the electronic equivalents of yelling “fire” in a crowded theater, the classic commonsense example of how freedom of speech is not an absolute and unconditional right.

Trusting the social media companies to self-regulate has left us with mounting disinformation. It won’t be a simple task to stem the flow of false messaging. But democracy may rest upon our success.

Steven Wilson is an assistant professor of politics.