Facebook examines moderation policies following Robert Godwin’s murder
Facebook has launched a review of how it deals with violent content after taking more than two hours to remove a video showing a man committing murder
Facebook has launched a review of how it deals with violent content on its network after coming under fire for taking more than two hours to remove a video showing a man committing murder.
The video, posted on Sunday, appears to show suspect Steve Stephens approaching the victim, 74-year-old Robert Godwin Sr., at random in Cleveland, Ohio, US, before shooting and killing him.
Stephens filmed a first-person view of the shooting and uploaded it to his Facebook page, where it remained for more than two hours before being taken down. By that time, the video had been copied, reposted and viewed millions of times.
Stephens later took to Facebook Live to discuss the killing, claiming he had killed 13 people - though US police say they are unaware of any other deaths.
“This is a horrific crime and we do not allow this kind of content on Facebook,” Justin Osofsky, the company’s vice president of global operations, said in an online statement, adding that the company was “reviewing” the way it prioritises reports from users and looking at new technologies, such as artificial intelligence, to improve its reaction time.
"As a result of this terrible series of events, we are reviewing our reporting flows to be sure people can report videos and other material that violates our standards as easily and quickly as possible. In this case, we did not receive a report about the first video [which featured the suspect saying he intended to murder], and we only received a report about the second video - containing the shooting - more than an hour and 45 minutes after it was posted. We received reports about the third video, containing the man’s live confession, only after it had ended,” Osofsky said.
“We know we need to do better,” he added.
The case once again raises questions about the social networking site’s ability to moderate content, particularly while a crime is actively unfolding.
The incident comes on the eve of Facebook’s F8, an annual event for developers, and at a time when the company is working hard to promote its role as an enabler of civic engagement. Two months ago, CEO Mark Zuckerberg penned a 5,700-word manifesto outlining measures the social network was taking to address several challenges faced by humanity.
In the letter, Zuckerberg explained that the company is researching systems that use artificial intelligence to scan photos and videos and flag content for review. “This is still very early in development, but we have started to have it look at some content, and it already generates about one-third of all reports to the team that reviews content for our community,” he said.
Last month a 15-year-old girl was raped by multiple attackers in Chicago, in an assault that was streamed on Facebook Live. In January, three men were arrested in connection with a similar incident involving the live-streamed rape of a woman in Sweden. Last year, 23-year-old Korryn Gaines used Facebook to broadcast a standoff with police in Baltimore that ended with Gaines, a mother of one, being shot and killed. Facebook has also hosted videos showing the torture of a young man with disabilities in Chicago.