Yves here. Rajiv Sethi discusses how Bluesky not only allows subscribers to mass-block the followers of a member, but also offers a feature that amounts to a “guilt by association” ad hominem attack, if you are the sort that sees following “bad” people as an indicator that the follower is suspect. Admittedly, as Sethi carefully explains, the ad hom feature is limited in reach; only people who subscribe to the labeler making the charge can see the designation, and then only if they also activate the warning.

The fact that merely following people who some Bluesky users think engage in wrongthink can be used to generate a content warning is the social media version of precrime.

Bluesky has fewer than one-tenth as many users as Twitter. One would like to think that the enthusiasm for censorship, or at least the tolerance of it, will put a ceiling on its reach. But we’ll have to see how this struggle over social media content plays out.

By Rajiv Sethi, Professor of Economics, Barnard College, Columbia University and External Professor, Santa Fe Institute. Originally published at his site

A recent article by Renée DiResta is interesting for a number of reasons.

To begin with, the audio accompanying the piece uses an AI voice generator from ElevenLabs that sounds quite human to me (though not much like DiResta herself). I imagine that it won’t be long before books and articles are widely available in voices that are close to indistinguishable from those of their authors.1 Jointly written pieces could be available with a menu of voices corresponding to the various contributors, and the ability to switch between them midstream.2 The impact on employment and pricing in the audiobook industry would be significant.

Second, DiResta observes that sorting across social media platforms is now being driven by ideology rather than preferences over features. The exodus from X to Bluesky following the November election was dramatic, and there may be a second wave coming in the wake of recent changes in content moderation policies at Threads.3 However, this “great decentralization” is operating at two different levels. In addition to ideological sorting across platforms, there is also greater sorting within them as content moderation becomes increasingly delegated.

To illustrate, DiResta describes the reaction on Bluesky to a recent arrival:

In mid-December, tensions erupted on the platform over the sudden presence of a prominent journalist and podcaster who writes about trans healthcare in ways that some of the vocal trans users on the platform considered harmful. In response, tens of thousands of users proactively blocked the perceived problematic account (blocks are public on Bluesky). Community labelers enabled users to hide his posts. The proliferation of shared blocklists included some that enabled users to mass-block followers of the controversial commentator… Shareable blocklists, however expansive they may be, are tools designed to empower users. However, a portion of the community did not feel satisfied with the tools. Instead, it began to ref-work the head of trust and safety on Bluesky, who was deluged with angry demands for a top-down response, including via a petition to ban the objectionable journalist. The journalist, in turn, also contacted the mods—about being on the receiving end of threatening language and doxing himself. The drama highlights the tension between the increased potential for users to act to protect their own individual spaces, and the persistent desire to have centralized referees act on a community’s behalf. And, unfortunately, it illustrates the challenges of moderating a large community with comparatively limited resources.

The “journalist and podcaster” referenced here is of course Jesse Singal, who quickly overtook Brianna Wu to become the most blocked person on Bluesky. As DiResta notes, those who decided to follow him ended up on lists that made it easy for others to block them en masse.4 In addition, their profiles began to carry a label placed in an entirely decentralized manner by a user on the platform. This badge is invisible to most people, but can be seen by anyone who subscribes to the community labeler and chooses to activate the content warning.

Among those I follow, there are currently dozens of people whose accounts are labeled in this way. These include some of the most valuable and informative accounts on the platform, such as that of Dartmouth political scientist Brendan Nyhan:

The Bluesky Elder badge (placed by a different community labeler) “is meant in jest and dates to early experiments in labeling. It is applied to the first 800,000 Bluesky accounts.” The Jesse Singal Follower badge is automated and appears “on the profile of accounts that follow Jesse Singal, for informational purposes.” Both badges also appear on DiResta’s account and on my own, as well as on scores of others spanning the conventional ideological spectrum, from Ryan Grim on the left to Robert George on the right.5

It’s worth dwelling a bit on what a label of this kind is meant to convey. There is a literal meaning, which is simply the statement of an indisputable and perhaps unremarkable fact. But there are also imputed meanings that arise from a shared understanding between the sender of the message and its recipient, much like the waving of a red handkerchief in court. In this particular case the badge will be interpreted by some as a warning that the flagged person might be tolerant of bigotry or harassment.

To avoid having this ad hominem inference made about their character, some users will unfollow the objectionable account, or refrain from following it in the first place. And these decisions will sharpen the meaning of the label, since those who continue to carry it will be presumed to find the inference tolerable. But if large numbers of people do not respond in this way—because they reject the inference or are simply unaware of its existence—the meaning of the label will be diluted and the message conveyed will remain ambiguous.6

A third interesting aspect of DiResta’s article is her use of Albert Hirschman’s concepts of exit, voice, and loyalty to understand what is going on here.7 Block lists, badges, and even petitions calling for expulsion are examples of what Hirschman called voice, which he contrasted with exit in his analysis of organizations. One of his key insights was that entities such as firms, educational institutions, or political parties could recover from repairable lapses in performance provided that they had an adequate “time and dollar cushion” to allow for adjustments. If competing alternatives were readily available, those who relied on such organizations could easily jump ship in the face of a deterioration in quality, leading to their rapid collapse. But if exit were difficult or costly, then people would be more inclined to exercise voice instead. While this may be unpleasant for leaders of organizations to experience, it would not immediately threaten viability and could thus provide some breathing room for recuperation.

Whether people express their dissatisfaction using exit or voice is mediated by loyalty—greater attachment to an organization slows exit and strengthens voice. But loyalty can be a consequence of simply having no other viable alternatives available. Hirschman used this idea to argue against the Hotelling-Downs model of political competition, which suggests that party platforms will converge towards the preferences of the median voter. He argued, instead, that someone without an exit option will be “maximally motivated to bring all sorts of potential influence into play” in order to prevent “the party from doing things that are highly obnoxious to him.” Those who have “nowhere else to go” are accordingly “not powerless but influential.” This doesn’t always lead to greater organizational success, and Hirschman points to the nomination of Barry Goldwater by the Republican Party in 1964 as an example.

What applies to political parties also applies to social media platforms, though the analogy is obviously imperfect. For platforms, it is network effects rather than psychological attachments that make exit costly, but the implications are similar. Those who have “nowhere else to go” will be maximally motivated to exercise voice, and this is what we are seeing at present on Bluesky.

DiResta argues that ideological sorting across and within platforms, facilitated in part by decentralized content moderation, will lead to increased polarization:

The idealistic goal of federalism in the American experiment was to maintain the nation’s unity while enabling local control of local issues. The digital version of this, however, seems to be a devolution, a retreat into separate spaces that may perhaps increase satisfaction within each outpost but does little to bridge ties, restore mutual norms or diminish animosity across groups. What happens when divergent norms grow so distinct that we can no longer even see or engage with each other’s conversations? The challenge of consensus is no longer simply difficult, it is structurally reinforced.

I’m not as pessimistic. As discussed in an earlier post, shareable lists and labels are instruments that can just as easily be used to dissolve boundaries as to put up walls. They are part of the rough and tumble of free expression online. Such expression—as argued recently by Amna Khalid, Chimamanda Ngozi Adichie, and Killer Mike—often serves as a weapon of the weak. But calls for expulsion are a different matter altogether, and I hope that the platform doesn’t bend to these wishes. If one denies to all what is offensive to some, it is the least powerful among us who will ultimately pay the price.