Social media giants, including YouTube, Facebook and Twitter, are facing new criticism after they struggled to block livestreamed footage of a gunman shooting worshippers at a mosque in New Zealand.
In the wake of the disturbing 17-minute livestream, users uploaded and shared clips of the footage faster than social media companies could remove them.
The companies were already under scrutiny over the rise in online extremist content, but Friday’s troubling incident also underscored big tech’s difficulties in rooting out violent content as crises unfold in real time.
In a live point-of-view video uploaded to Facebook on Friday, the killer shot dozens of mosque-goers, at one point returning to his car to retrieve more weapons.
Critics hammered tech companies, accusing them of failing to get ahead of the violent video’s spread.
“Tech companies have a responsibility to do the morally right thing,” Sen. Cory Booker (D-N.J.), a 2020 contender, told reporters on Friday. “I don’t care about your profits.”
“This is a case where you’re giving a platform to hate,” Booker continued. “That’s unacceptable and should have never happened, and it should have been taken down a lot more swiftly. The mechanisms should be in place to allow these companies to do that.”
“The rapid and wide-scale dissemination of this hateful content — live-streamed on Facebook, uploaded on YouTube and amplified on Reddit — shows how easily the largest platforms can still be misused,” Sen. Mark Warner (D-Va.) said.
Facebook said it took down the video as soon as it was flagged by the New Zealand Police. But that response suggested artificial intelligence (AI) tools and human moderators had failed to catch the livestream, which went on for 17 minutes.
By the time Facebook suspended the account behind the video, an hour and a half after it was posted, the footage had already proliferated across the internet with thousands of uploads on Twitter, Instagram, YouTube, Reddit and other platforms.
More than 10 hours after the attack, the video could still be found through searches on YouTube, Twitter and Facebook, even as those companies said they were working to prevent the footage from spreading.
By Friday evening, YouTube had removed thousands of videos related to the incident. Facebook and Twitter did not share figures, but both said they were working around the clock to remove the content as it appeared.
“Shocking, violent and graphic content has no place on our platforms, and we are employing our technology and human resources to quickly review and remove any and all such violative content on YouTube,” a YouTube spokesperson said. “As with any major tragedy, we will work cooperatively with the authorities.”
In May 2017, Facebook announced it was hiring 3,000 more content moderators to deal with graphic video content, a move that Mary Anne Franks, a law professor at the University of Miami, said amounted to “kicking the can down the road.”
Hours before the shooting, the suspect apparently posted a manifesto on Twitter and announced on 8chan, a fringe message board he frequented, that he intended to livestream the attack.
New Zealand police confirmed the suspected gunman had penned the white nationalist, anti-immigrant screed, which runs more than 70 pages.
Twitter deleted the account in question hours after the shooting took place, and it has been working to remove reuploads of the video from its service.
Facebook, Twitter, YouTube and other leading social media platforms have been grappling with how to handle extremist and white nationalist content for years, particularly as anti-immigrant sentiment has spiked in the U.S. and Europe. The companies have struggled to draw the line between freedom of speech and incendiary propaganda that has the potential to radicalize users.
In the U.S., because of Section 230 of the Communications Decency Act, the platforms are not held legally liable for what users post. Tech advocates credit that law with empowering the internet, but some lawmakers have questioned whether it should be changed.