As a gunman wrought terror in a Buffalo, New York, supermarket over the weekend, the violence was, yet again, streamed live online. The latest U.S. mass shooting left 10 people dead and raised familiar questions about how much responsibility social media companies bear for amplifying extremism that can lead to violence and for circulating footage of these deeply disturbing incidents after the fact.
The 18-year-old suspect, now in custody, has been linked to a manifesto and Discord messages in which he describes being radicalized on 4chan, praises the Christchurch, New Zealand mosque shooter and lays out his plan to kill Black Buffalo residents. The shooter livestreamed the violence on Twitch, opting for the platform over Facebook because it doesn’t require viewers to log in, according to documents reviewed by TechCrunch.
It’s impossible to say whether the possibility of livestreaming a mass shooting might inspire a person to commit violence they wouldn’t have otherwise, but the technology does offer extremists an audience — and an archive of their actions that can live on well beyond the initial horror. Those ghoulish legacies linger, inspiring others to commit similar acts of violence.
Social media platforms have grappled with mass shootings for years now, leveraging the usual combination of AI and human moderators, but those systems can still fail to stop viral content from spreading and multiplying when it counts.
The livestream of Saturday’s shooting was removed by Twitch within minutes, but the footage was quickly copied and uploaded elsewhere. Versions of the video circulated widely on Facebook, where some users who flagged the video observed that the social network told them the content did not violate its rules. A widely shared clip was uploaded to the video hosting site Streamable, where it was viewed more than three million times before the company took it down for violating its terms of service. Facebook did not respond to a question from TechCrunch about why its moderation systems gave the video the all-clear.
While the alleged shooter stated his interest in broadcasting to mainstream social media sites, he described spending the most time on 4chan, an online forum notorious for its near-total absence of content moderation and for the extremist views that find a comfortable home there. He also spent time documenting his plans in a private Discord server, again raising difficult questions about where platforms should draw the line in moderating private spaces.
In an interview with NPR, New York Governor Kathy Hochul called on social media companies to monitor content more aggressively to intercept extremists. Hochul proposed a “trigger system” that would alert law enforcement when social media users express a desire to harm others.
“This is all telegraphed. It was written out in a manifesto that was published on social media platforms. The information was there,” Hochul said. “They need to have algorithms in place that’ll identify very quickly the second information is posted so it can be tracked down by proper law enforcement authorities. They have the resources to do this. They need to take ownership of this because otherwise, this virus will continue to spread.”
But in the case of the Buffalo mass shooting, the suspect’s plans were shared privately on a messaging app and published openly to a website known for refusing to moderate content that it isn’t legally obligated to remove.
And, as many people have pointed out in the aftermath of the Buffalo tragedy, the “virus” Hochul describes is already here. The alleged shooter was inspired to action by an ideology known as “the great replacement,” once a fringe belief espoused by avowed white supremacists that stokes racist fears about the emerging non-white population majorities in countries like the U.S.
With those ideas now as easy to find on cable news or in Congress as they are on 4chan, no algorithm can deliver us from the violence they inspire.