This story originally appeared in The New York Times, Dec. 31, 2023.
https://www.nytimes.com/2023/12/31/books/review/elon-musk-trust-misinformation-disinformation.html

CRITIC’S NOTEBOOK

The Problem of Misinformation in an Era Without Trust

Elon Musk thinks a free market of ideas will self-correct. Liberals want to regulate it. Both are missing a deeper predicament.


Illustration by Pablo Delcan

By Jennifer Szalai
Dec. 31, 2023

When the billionaire entrepreneur Elon Musk sat down for his profanity-laced interview at The New York Times’s DealBook Summit in late November, his petulant dropping of F-bombs received a lot of attention. Less noticed but far more revealing was his evident disdain for a humble word beginning with the letter T. “You could not trust me,” Musk said, affecting an air of tough-guy indifference in his shearling-collared flight jacket and shiny black boots. “It is irrelevant. The rocket track record speaks for itself.”

Musk wasn’t being pressed on “the rocket track record” — after all, presumably few people in the audience were in the market for a spaceship. But though he seemed loath to acknowledge it, the question of trust is at the core of X, the social media platform he acquired in October 2022, and where he recently replied to an antisemitic post with the words “You have said the actual truth.” (Calling it “literally the worst and dumbest post that I’ve ever done,” he still hasn’t removed it.)

Asked at the summit to comment on his trustworthiness, Musk rattled off statistics about launching rockets into orbit and boasted about making “the best cars.” (More than two million of those cars have since been recalled under pressure from regulators concerned about the Autopilot software.) But he resisted reflecting on how getting people to engage on X — which is built around information and social relationships — might be qualitatively different from getting earthlings to Mars. There were moments when his defiance shaded into incomprehension. The word “trust” didn’t seem to compute.

One of the first changes Musk made to X was to stop putting as many resources into maintaining the trust he valued so little. He started charging for a blue check mark, which had reliably signaled (at no cost) that a notable account was “verified” and not an impersonation. (The only thing a blue check mark reliably signals now is that someone is willing to pay Musk $8 a month — or maybe not, since he has comped the marks for some celebrity accounts.) He gutted the platform’s content moderation team. He reinstated accounts that had been suspended for peddling hate speech and harmful falsehoods about vaccines.

Critics warned that Musk’s moves were allowing misinformation to flourish, but dependable measurements of misinformation on X have become much harder to come by — largely because X itself has stymied the efforts to produce them. In addition to suing an organization that tracks hate speech and falsehoods on social media and rejecting legislators’ calls for transparency, X now charges researchers up to $42,000 a month for access to data-gathering tools that were once free. All the while, Musk has insisted that he’s making X a great place to be. For proof we are left with two extremely limited forms of corroboration: testimonials from individual users and Musk’s word.

“My aspiration for the X platform is that it is the best source of truth, or the least inaccurate source of truth,” he said at the DealBook Summit. Less than two weeks later, he welcomed back to the platform the far-right Infowars host Alex Jones, who had been banned in 2018 for harassment. Among the lurid conspiracy theories peddled by Jones was the ghastly lie that the mass shooting of children at Sandy Hook Elementary School was a hoax. Musk proposed that Community Notes — X’s crowdsourced fact-checking program — would “respond rapidly to any AJ post that needs correction.” A few hours later Musk wrote that a Community Note attached to one of his own posts was “being gamed by state actors.”


Elon Musk at The New York Times’s DealBook Summit in New York City on Nov. 29, 2023. Haiyun Jiang for The New York Times

Taken together, Musk’s comments and actions have been erratic to the point of bewildering. And for anyone who worries about the proliferation of misinformation in our brave new digital world, they’re undoubtedly appalling, too. But they also expose a fundamental problem at the heart of the debate over what should be done. Musk’s platform brings together people and information, and to sort that information into true and false requires an underlying sense that its source can be relied on. In other words, this sorting process requires trust — the very thing that Musk spurns.

By reveling in the chaos, Musk has turned X into an experiment in whether “the best source of truth” means anything without a foundation of trust to support it. Analyses and policy prescriptions can help us understand the problem of misinformation and the history of disinformation campaigns. But Musk’s insistence that anything goes on his platform exposes some of the deeper assumptions we often take for granted — about how we know what we know, and why we believe what we do.

Guardians of Truth

The election of Donald Trump in 2016 spawned a steady supply of books about misinformation and disinformation, though there is a critical distinction between the two. “Misinformation” is false information that people sincerely believe and unwittingly spread; “disinformation,” which comes from what the Soviets called dezinformatsiya, is deliberate deception. Revelations about Russian interference in the 2016 election meant that the two terms were used prolifically and often interchangeably.

“On Disinformation: How to Fight for Truth and Protect Democracy,” a new book by Lee McIntyre, captures some of the current alarm. It is a pocket-size polemic warning of “truth killers” running “a coordinated campaign” intended “to spread disinformation out to the masses — in order to foment doubt, division and distrust — and create an army of deniers.” McIntyre, a philosopher and research fellow at Boston University, points to conspiratorial falsehoods about Covid vaccines and the 2020 election. It’s not that accurate information about the vaccines and the election was unavailable — it’s that it was competing against a fire hose of falsehoods made infinitely easier to disseminate on social media and the internet.


Demonstrators at a protest against vaccine and mask mandates on the National Mall in Washington, D.C., in January 2022. Kenny Holston for The New York Times

But as Thomas Rid explains in “Active Measures: The Secret History of Disinformation and Political Warfare” (2020), the panicked attention Americans have paid to disinformation since 2016 is perhaps an overcorrection, a belated response to a phenomenon that they had been underestimating before. Russian disinformation was nothing new; if anything, Rid says, “overestimating” its power served only to amplify “the effects of an adversarial campaign that was, although poorly executed, designed to be overestimated.” Russian trolls placed thousands of ads on Facebook — but none of the most popular contained what Rid, a political scientist at Johns Hopkins, calls “sharp, corrosive disinformation.” They were meant “to build communities” and “exacerbate existing tensions” — what the Soviets used to call “deepening the contradictions.”

Rid is scholarly and exacting; while unsparing in his depiction of how disinformation operations can slowly erode open societies, he says that the internet has actually made such campaigns “harder to control, harder to steer and harder to isolate engineered effects.” McIntyre, by contrast, adopts a more simplistic approach, treating the internet as a disinformation force multiplier. He favors more regulation of social media platforms, brushing aside First Amendment concerns: The spread of “bad speech,” he says, is like yelling “Fire!” in a crowded theater.

This is precisely the kind of argument that Jeff Kosseff deplores in “Liar in a Crowded Theater: Freedom of Speech in a World of Misinformation” (2023). Where McIntyre calls for action, Kosseff, a professor of cybersecurity law at the United States Naval Academy, urges caution. He doesn’t deny that technology can amplify lies, and that lies — whether deliberately engineered or not — can be dangerous. But he points to “the unintended consequences of giving the government more censorial power.” Better, he says, to use measures already in place, including “counterspeech,” or countering a lie with a truth; and punishing people for the things they do (shooting up a pizzeria with an AR-15) instead of the things they say (falsely claiming that the pizzeria houses a pedophile ring).

Of course, by the time someone has acted on a lie it is often too late to prevent harm. Like a number of authors writing about misinformation since 2021, Kosseff recounts what happened on Jan. 6, when insurrectionists stormed the Capitol because they believed President Trump’s lies that the election was stolen. Despite Kosseff’s stalwart defense of the First Amendment, he allows for the possibility that the “Big Lie” may not fall under protected speech, citing novel approaches by legal scholars who have tried to address this issue.


Rioters inside the Capitol on Jan. 6, 2021. Erin Schaff/The New York Times

Where McIntyre deploys combat metaphors, declaring that desperate times call for desperate measures, Kosseff wants government agencies to embrace “candor” and “humility” when communicating with the public, and he promotes teaching media literacy to children — all of which sound reasonable. But at a moment when even elementary school reading lists generate rancorous dispute, any attempt to engineer our way out of our predicament throws us back on our profound disagreements over how to define the predicament in the first place: For instance, nearly 40 percent of Americans consider the fact that Joe Biden is the legitimate winner of the 2020 election to be fake news.

At the same time, some critics have argued that the war on disinformation is being most fervently waged by those who have something to gain from positioning themselves as the rightful guardians of truth. In a cover story for Harper’s Magazine in 2021, the BuzzFeed reporter Joseph Bernstein (now a reporter for The Times) wrote about what he calls “Big Disinfo”: an industrial complex of think tanks, media companies and academic centers that emerged during the Trump years to study the effects of disinformation. These institutions present themselves as providing an essential service to an impressionable public — sorting good information from bad.

Such a project would seem to run counter to the libertarian proclivities of Silicon Valley, yet Bernstein observes that Big Disinfo has received support from Big Tech. He argues that a company like Meta, which makes billions from advertisers, also happens to have a vested interest in promoting the idea of an impressionable public — one susceptible to being influenced on platforms that are “magically persuasive.” Bernstein adds that this Big Disinfo-Big Tech juggernaut is joined by backers of the political center, who want to believe that people are effectively brainwashed by lies instead of being truly, and not always baselessly, upset with the status quo. Fixating on disinformation reduces an enormous political conundrum — why some people will readily believe something, even when it might kill them — into something amenable to a technocratic fix.

The notion that certain parties have much to gain by stoking fears of disinformation can come across as conspiratorial (and has been pushed to cynical lengths by Republican lawmakers eager to halt research into disinformation ahead of the 2024 election). But critiques like Bernstein’s invite a salutary skepticism toward power in a debate that can sometimes verge on the surprisingly credulous. In “On Disinformation,” McIntyre asks, apparently with a straight face: “Doesn’t the fact that the U.S. Army is taking the threat of disinformation so seriously suggest that the rest of us might do so too?”

The ‘Crisis of Authority’

Which brings us back to trust. McIntyre’s appeal to the Army as a sterling source of trustworthy information will raise some readers’ eyebrows. A more clarifying take on trust is laid out by Chris Hayes in “Twilight of the Elites: America After Meritocracy,” a book that was published in 2012 and turned out to be extraordinarily prescient. In it, Hayes describes how elite malfeasance — the forever wars after 9/11; the 2008 financial crisis — was deeply corrosive, undermining the public’s trust in institutions. This “crisis of authority” is deserved, he says, but it has also left us vulnerable. A big, complex democracy requires institutional trust in order to function. Without it, “we really do risk a kind of Hobbesian chaos, in which truth is overtaken by sheer will-to-power.”

Recognizing how deep this crisis goes leaves us in a difficult place. Getting people to reject demonstrable lies isn’t simply a matter of bludgeoning them with facts. As the communications scholar Dannagal Goldthwaite Young writes in “Wrong: How Media, Politics and Identity Drive Our Appetite for Misinformation” (2023), the impulse to berate and mock people who believe conspiratorial falsehoods will typically backfire: “The roots of wrongness often reside in confusion, powerlessness and a need for social connection.” Building trust requires cultivating this social connection instead of torching it. But extending compassionate overtures to people who believe things that are odious and harmful is risky too.

A faulty rocket goes up in flames; a faulty car crashes into a tree. Faulty information works differently. Musk’s vision for an informational free-for-all on X has made a foundational quandary impossible to ignore — and the enormous challenges of any workable fix much more plain to see. He is asking people to believe him and daring them not to. “Trust no one, not even no one,” he posted on X in October, relishing both the bad joke and the nihilism. Or as he put it in November, “Just because it came out of my mouth, does not mean it’s true.”


Jennifer Szalai is the nonfiction book critic for The Times.