We were very gratified at the degree to which readers were concerned and came to our aid after Google put a demonetization sword of Damocles over our heads in March. The online ad behemoth designated what it said were 16 posts since 2018 containing grievous offenses such as [ANTI_VACCINATION, HATEFUL_CONTENT, DEMONSTRABLY_FALSE_DEMOCRATIC_PROCESS, HARMFUL_HEALTH_CLAIM]. The fact that the Google algo can’t even count was an obvious indicator of how careless the entire process was: there were only 14 posts, due to a duplication and the inclusion of a non-post. Google had already demonetized these posts. The threat was that if we did not remove them and refrain from future falsely depicted bad behavior, Google would demonetize the entire site.

This was a serious ultimatum. Even though donations provide a substantial majority of our revenues, the loss of the comparatively small ad revenues would still hurt, particularly since we are now in a tough fundraising environment.

To spare you undue suspense, our ad agency notified us yesterday that Google has cleared all the targeted URLs, so we are now in good standing again. Please keep in mind that we did not remove a single word from any post, much less delete any. The only change we made was to one post where we thought there was material that could offend an advertiser: a not-very-well-pixelated image of a decapitated head in a 2018 piece. Its author, Mark Ames, suggested we delete it as non-essential to the story.

So while this stressful case finally came to a sound resolution as far as we were concerned, it only served to validate our depiction of Google’s AI censorship. From our March post:

We posted briefly on a message from our ad service in which Google threatened to demonetize the site. The e-mail listed what it depicted as 16 posts from 2018 to present that it claimed violated Google policy. The full e-mail and the spreadsheet listing the posts Google objected to are at the end of this post as footnote 1.

We consulted several experts. All are confident that Google relied on algorithms to single out these posts. As we will explain, they also stressed that whatever Google is doing here, it is not for advertisers.

Given the gravity of Google’s threat, it is shocking that the AI results are plainly and systematically flawed. The algos did not even accurately identify unique posts that had advertising on them, which is presumably the first screen in this process. Google actually fingered only 14 posts in its spreadsheet, not the 16 it claimed, for a false positive rate of 12.5% merely on identifying posts accurately.

Those 14 posts are out of 33,000 over the history of the site and approximately 20,000 over the time frame Google apparently used, 2018 to now. So we are faced with an ad embargo over posts that at best are less than 0.1% of our total content.

And of those 14, Google stated objections for only 8. Of those 8, nearly all, as we will explain, look nonsensical on their face.

The post then went through the Google material and our posts in detail. As another high-level indication of how off-base these designations were, one post heavily dinged for anti-vax content was a foreign policy piece on Chalmers Johnson by Tom Engelhardt. Another sanctioned work was by Barnard College economics professor Rajiv Sethi. A third was a cross-post from VoxEU. None had drawn any complaints, let alone of the sort Google was lodging.

Our ad service did go to bat on our behalf with Google and registered well-supported objections. Rajiv Sethi also posted on l’affaire Google. But what we suspect turned the tide was the Matt Taibbi post, Meet the AI-Censored? Naked Capitalism. Taibbi focused on key issues, namely the chilling effect of this sort of campaign, as we recapped:

Taibbi nailed one of the key reasons why the Google sanctions were so off base: “…this is a common feature of moderation machines; they can’t distinguish between advocacy and criticism.” He also cited examples where articles in highly respected medical journals that raised doubts about Covid vaccine performance triggered warnings like [HARMFUL_HEALTH_CLAIMS, ANTI_VACCINATION, HATEFUL_CONTENT].

Taibbi’s post was picked up by Citizen Free Press and Real Clear Politics, so it got views on top of those from his own substantial readership.

Since the Google process was opaque (our ad service dealt with their rep who then forwarded our material to another Google team, so they never interacted directly with the deciders), I have no idea whether a normal escalation would have been treated fairly.

The only other similarly situated Google victim I found, by happenstance (he runs one of only two sites where we talk operations shop), got a less severe threat, got Google to reverse itself on some posts, and got Google to identify the offending text on others, which he deleted rather than going another round.

By contrast, on its first go, our ad service got Google to drop its objections to 13 of the 14 actual posts; Google then relented on the last one when the ad service asked it to identify the supposedly offending text.

So this was a total capitulation by Google when it faced a well-documented response backed by a high-profile journalist. While we are grateful that this is over for now, we have no reason to think that Google has taken any steps to correct the rogue algos that produced these flagrantly bad results, or increased human review to catch the errors.

And don’t think that small independent sites won’t be subject to new censorship threats, given how determined the Biden Administration is to limit wrong-speech and wrong-think. Just look at the aggressive crackdown on pro-Palestine protests on campus. It’s not hard to imagine that censorship of pro-Palestine speech is coming soon.

This entry was posted in Legal, Media watch, Politics, Social policy, Social values, Surveillance state, Technology and innovation on by Yves Smith.