Commentary by Jason L. Riley

Who Watches a Murder Streamed Live on Facebook?


Robert Godwin’s killing was replayed 1.6 million times. AI can help, but not with human nature.

The most shocking aspect of the Easter Sunday Facebook murder of 74-year-old Robert Godwin, Sr. might be that this sort of social media mayhem is losing its ability to shock.

In March, a video of a 15-year-old girl being sexually assaulted by several teenage boys was streamed on Facebook.

In February, a teenager was convicted of fatally shooting his friend; the killer implicated himself by sending a selfie with the dying victim on Snapchat.

In January, four people were arrested after broadcasting a video on Facebook that showed them taunting and beating a mentally disabled teenager who had been bound and gagged.

Already this year, a 14-year-old girl in Florida and a 33-year-old man in California have committed suicide on Facebook.

Last year, an armed woman in Maryland live-streamed her fatal standoff with the police, and a 12-year-old in Georgia recorded her own suicide by hanging via the Live.me app.

Shortly after Facebook launched its new video-streaming service last April, CEO Mark Zuckerberg told BuzzFeed that the goal was to support “the most personal and emotional and raw and visceral ways people want to communicate.” But preventing abuse of these platforms has been a challenge.

There’s been a smattering of calls from public officials and activists to suspend these streaming capabilities until better filters are in place, but the popularity and profitability of live video make that course of action unlikely. Besides, the safe-harbor provisions of the federal Communications Decency Act, passed by Congress two decades ago, give operators broad protection from liability for content posted by their users.

Sure, some grandstanding member of Congress can call for a hearing, or a state attorney general looking to boost his profile can announce a lawsuit, but neither is really necessary. Social media behemoths like Facebook, Twitter and YouTube currently have every incentive to protect their services from the freaks, sociopaths and others intent on spreading violent or disturbing images. “Facebook Murderer” or “YouTube Shooter” pasted in CNN bulletins and newspaper headlines is the kind of publicity that companies work to minimize without any prompting.

With nearly two billion users, Facebook wants to be not only the place where you connect with family and friends but also your main source of news and information. Those dual objectives have made it more willing to restrict objectionable content. Twitter took a more libertarian stance in its early days, though lately its position has softened in the name of thwarting harassment of the Milo Yiannopoulos variety. YouTube, meanwhile, has found a middle ground, allowing some content that Facebook doesn’t, but with warning labels for minors and the wary. “Let the marketplace sort it out,” says Ryan Radia, who follows telecom policy at the Competitive Enterprise Institute and welcomes the different approaches. “Let these platforms rise or fall based on consumer preferences.” That solution isn’t foolproof, he adds, but it’s vastly superior to Big Brother alternatives.

But for the steps that social media outlets already take to scrub content from their commercial platforms, online human depravity would be even more prevalent. A fascinating 2014 article in Wired magazine details how social media relies on “an army of workers employed to soak up the worst of humanity in order to protect the rest of us.” Quoting an industry insider, the article speculates that these so-called content moderators—many of whom are young college graduates “just happy to have found employment that didn’t require a hairnet”—number well over 100,000, or “about twice the total head count of Google and nearly 14 times that of Facebook.”

In response to the Godwin murder, Mr. Zuckerberg said that for now Facebook would continue to rely mostly on users who report violations of company policy and that “the long-term solution is going to be having better artificial intelligence tools.” The bigger problem, however, is the demand side of the equation—i.e., the social media denizen who votes with his mouse, the person who doesn’t want to be protected.

The Godwin video was viewed more than 1.6 million times, even as the victim’s grandson pleaded on Twitter for people to stop sharing the link. The video of the girl in Georgia who committed suicide eventually found its way to YouTube and Facebook and garnered millions of combined views. And no one who saw the live-streamed sexual assault of the teenager in Chicago reported it to police. “As a society we have to ask ourselves, how did it get to the point where young men feel like it’s a badge of honor to sexually assault a girl . . . to not only do this to a girl, but broadcast it for the world?” a relative of the victim told the Chicago Tribune.

Social media is changing our habits but not our base appetites. Artificial intelligence won’t save us from ourselves.

This piece originally appeared in The Wall Street Journal.

______________________

Jason L. Riley is a senior fellow at the Manhattan Institute, a columnist at The Wall Street Journal, and a Fox News commentator.
