
I’ve been cheering for Elon Musk in his effort to remake Twitter into the kind of “public square” it always held the promise to be. He’s standing up for free expression, against one-party control of speech on social media, and especially against government efforts to control speech. That’s a great and significant thing, yet as Duke economist Michael Munger notes, we hear calls from the Biden Administration and congressional Democrats to “keep an eye on Twitter”, a not-so-veiled threat of future investigative actions or worse.

Your Worst Enemy Test, Public or Private

As a disclaimer, I submit that I’m not an unadulterated fan of Musk’s business ventures. His business models too often leverage wrong-headed government policy for profitability. It reeks of rent-seeking behavior, whatever Musk’s ideals, and the availability of those rents, primarily subsidies, violates the test for good governance I discussed in my last post. That’s the Munger Test (the “Your Worst Enemy” Test), formally:

You can only give the State power that you favor giving to your worst enemy.

On the other hand, Musk’s release of the “Twitter Files” last weekend, with more to come, is certainly a refreshing development. Censorship at the behest of political organizations, foreign governments, or our own government is controversial and possibly illegal. While we’d ordinarily hope to transact privately at arm’s length, with free exchange being strictly an economic proposition, one might even apply the Munger Test from the perspective of a user of a social media platform: would you trust your worst enemy to exercise censorship on that platform on the basis of politics? Like Donald Trump? Or Chuck Schumer? If not, then you probably won’t be happy there! Now, add to that your worst enemy’s immunity to prosecution for any content they deem favorable!

Cloaked Government Censorship?

Censorship runs afoul of the First Amendment if government actors are involved. In an interesting twist in the case of the Twitter Files, the two independent journalists working with the files, Matt Taibbi and Bari Weiss, learned that some of the information had been redacted by one James Baker, Twitter’s Deputy General Counsel. Perhaps not coincidentally, Baker was also formerly General Counsel of the FBI and a key figure in the Trump-Russia investigation. Musk promptly fired Baker from Twitter over the weekend. We might see, very soon, just how coincidental Baker’s redactions were.

Mark Zuckerberg himself recently admitted that Facebook was pressured by the FBI to censor the Hunter Biden laptop story, which is a key part of the controversy underlying the Twitter Files. The Biden Administration had ambitious plans for working alongside social media on content moderation, but the Orwellian-sounding “Disinformation Governance Board” has been shelved, at least for now. Furthermore, activity performed for a political campaign may represent an impermissible in-kind campaign donation, and Twitter falsely denied to the FEC that it had worked with the Biden campaign.


What remedies exist for potential social media abuses of constitutionally-protected rights, or even politically-driven censorship? Elon Musk’s remaking of Twitter is a big win, of course, and market solutions now seem more realistic. Court challenges to social media firms are also possible, but there are statutory obstacles. Court challenges to the federal government are more likely to succeed (if its involvement can be proven).

The big social media firms have all adopted a fairly definitive political stance and have acted on it ruthlessly, contrary to their professed role in the provision of an open “public square”. For that reason, I have in the past supported eliminating social media’s immunity from prosecution for content posted on their networks. A cryptic jest by Musk might just refer to that very prospect:

Anything anyone says will be used against me in a court of law.

Or maybe not … even with the sort of immunity granted to social media platforms, the Twitter Files might implicate his own company in potential violations of law, and he seems to be okay with that.

Immunity was granted to social media platforms under Section 230 of the Communications Decency Act (CDA). It was something many thought “the state should do” in the 1990s in order to foster growth in the internet. And it would seem that a platform’s immunity for content shared broadly should be consistent with promoting free speech. So the issue of revoking immunity is thorny for free speech advocates.

Section 230 And Content Moderation

There have always been legal restrictions on speech related to libel and “fighting words”. In addition, the CDA, which is a part of the Telecommunications Act, restricts “obscene” or “offensive” speech and content in various ways. The problem is that social media firms seem to have used the CDA as a pretext for censoring content more generally. It’s also possible they felt as if immunity from liability made them legally impervious to objections of any sort, including aggressive political censorship and user bans on behalf of government.

The social value of granting immunity depends on the context. There are two different kinds of immunity under Section 230: subsection (c)(1) grants immunity to so-called common carriers (e.g. telephone companies) for the content of private messages or calls on their networks; subsection (c)(2) grants immunity to social media companies for content posted on their platforms as long as those companies engage in content moderation consistent with the provisions of the CDA.

Common carrier immunity is comparatively noncontroversial, but with respect to 230(c)(2), I go back to the question: would I want my worst enemy to have the power to grant this kind of immunity? Not if it meant the power to forgive political manipulation of social media content with the heavy involvement of one political party! The right to ban users is completely unlike the “must serve” legal treatment of “public accommodations” provided by most private businesses. And immunity is inconsistent with other policies. For example, if social media firms systematically host and amplify some viewpoints while suppressing others, it suggests that they are behaving more like publishers, who are liable for material they publish, whether produced on their own or by third-party contributors.

Still, social media firms are private companies and their user agreements generally allow them to take down content for any reason. And if content moderation decisions are colored by input from one side of the political aisle, that is within the rights of a private firm (unless its actions are held to be illegal in-kind contributions to a political campaign). Likewise, it is every consumer’s right not to join such a platform, and today there are a number of alternatives to Twitter and Facebook.

Again, political censorship exercised privately is not the worst of it. There are indications that government actors have been complicit in censorship decisions made by social media. That would be a clear violation of the First Amendment for which immunity should be out of the question. I’d probably cut a platform considerable slack, however, if they acted under threat of retaliation by government actors, if that could be proven.

Volokh’s Quid Pro Quo

Rather than simply stripping away Section 230 protection for social media firms, another solution has been suggested by Eugene Volokh in “Common Carrier Status as Quid Pro Quo for § 230(c)(1) Immunity”. He proposes the following choice for these companies:

(1) Be common carriers like phone companies, immune from liability but also required to host all viewpoints, or

(2) be distributors like bookstores, free to pick and choose what to host but subject to liability (at least on a notice-and-takedown basis).

Option 2 is the very solution discussed in the last section (revoke immunity). Option 1, however, would impinge on a private company’s right to moderate content in exchange for continued immunity. Said differently, the quid pro quo offers continued rents created by immunity in exchange for status as a public utility of sorts, along with limits on the private right to moderate content. Common carriers often face other regulatory rules that bear on pricing and profits, but since basic service on social media is usually free, this is probably not at issue for the time being.

Does Volokh’s quid pro quo pass the Munger Test? Well, at least it’s a choice! For social media firms to host all viewpoints isn’t nearly as draconian as the universal service obligation imposed on local phone companies and other utilities, because the marginal cost of hosting an extra social media user is negligible.

Would I give my worst enemy the power to impose this choice? The CDA would still obligate social media firms selecting Option 1 to censor obscene or offensive content. Option 2 carries greater legal risks for firms, which might respond by exercising more aggressive content moderation. The coexistence of common carriers and more content-selective hosts might create competitive pressure for restrained content moderation (within the limits of the CDA) and a better balance for users. Therefore, Volokh’s quid pro quo seems reasonable. The only downside is whether government might interfere with social media common carriers’ future profitability or plans to price user services. Then again, if a firm could reverse its choice at some point, that might address the concern. The CDA itself might not have passed the “Worst Enemy” Munger Test, but at least within the context of established law, I think Volokh’s quid pro quo probably does.

We’ll Know More Soon

More will be revealed as new “episodes” of the Twitter Files are released. We may well hear direct evidence of government involvement in censorship decisions. If so, it will be interesting to see the fallout in terms of legal actions against government censorship, and whether support coalesces around changes in the social media regulatory environment.