
What Can Facebook Do About Live Murders and Suicides?

Steve Stephens recorded himself murdering an innocent victim and then uploaded the footage to Facebook. The horrific act has put Facebook under immense pressure to do something, but can the company prevent the broadcasting of violent acts without fundamentally changing the purpose of the social media platform? WIRED explores Facebook's limited options.

Released on 04/18/2017

Transcript

I snapped.

Dawg, I just snapped, dawg.

I just snapped.

I just killed--

On April 16th, Easter Sunday,

a man named Steve Stephens recorded

himself murdering an innocent victim,

and then uploaded the footage to Facebook.

It wasn't technically live-streamed.

Stephens posted it soon after he filmed it,

but it nevertheless represented a gut check

for a platform that has pledged to reflect

humanity in its rawest form.

Our hearts go out to the family and friends

of Robert Godwin Sr.,

and we have a lot of work,

and we will keep doing all we can

to prevent tragedies like this from happening.

This is not the first time the social network

has been used to distribute horrific footage.

Since its launch, Live has provided an unedited look

at police shootings, rape, torture,

and enough suicides that Facebook will be integrating

real-time suicide prevention tools into the platform.

And live TV has hosted everything from suicides

to wardrobe malfunctions,

but there are systems in place to protect

against stuff like that happening too frequently.

Broadcasters have built in seven-second delays

so they can bleep anything offensive before

it hits the airwaves.

And TV networks that run afoul, intentionally or not,

of the communications regulator, the FCC,

can face massive fines.

But those systems don't exist at Facebook,

which hasn't been regulated by the FCC,

at least not yet.

And so much content pours in that

even if they were to impose a seven-second delay,

it would seem impossible to catch something

offensive before it posts.

Instead, the company turns to armies of content moderators

who pore over material that other Facebook users

have flagged as inappropriate,

removing it on a case-by-case basis.

From time to time, Facebook users have called on the company

to use its vaunted AI to automatically spot

violent or otherwise inappropriate videos,

and prevent them from getting posted in the first place.

But the company has steadily resisted that role.

In part, that's because it's a really hard problem.

Facebook's AI may be powerful,

but can it be counted on to always catch

malevolent content the instant it appears?

But the bigger problem is that determining

just what crosses the line can be more

complicated than it might seem.

Context matters.

Real life violence might be unacceptable,

but a trailer for a violent movie might be okay.

And sometimes, even real life violence can be acceptable,

or even important to share.

And the officer just shot him in his arm.

We're waiting for--

[Officer] Ma'am, just keep your hands where they are!

I will, sir. No worries.

[Officer] Fuck!

Facebook initially pulled the live video

of police shooting Philando Castile,

but reposted it after people complained

it was censoring important news.

That's why, for most of their histories,

Facebook and other social networks have presented

themselves not as broadcasters,

but as platforms that allow other people

to become broadcasters.

But that position has come under increasing pressure

over the last several months.

The spread of hate speech on Twitter,

and fake news on Facebook,

has even some of the companies' own employees

wondering whether you can so easily separate

malevolent content from the platforms that

make it so easy to share it.

Even the platform companies have begun

to come around to this way of thinking,

with Twitter at long last releasing tools for fighting abuse,

and Facebook, at least rhetorically,

beginning to accept its role as a media company,

a term it had resisted for most of its existence.

It's possible that this Facebook murder

will push those conversations further,

and start a difficult discussion of whether,

and how, Facebook might be more active

in monitoring and controlling the disturbing information

and images being distributed across its platform.

But Facebook is likely to tread very cautiously here.

Just because they give someone a tool

does not mean they bear responsibility for all

the evil ways in which it can be applied.

Facebook promised to hold a mirror up to humanity.

It's not their fault if we don't always like what we see.