Facebook announced Monday that it will begin using artificial intelligence to identify posts signaling suicide risk on its social network, according to the Washington Post. The AI tool will review the posts of its 2 billion users for suicide risk and level of urgency, allowing Facebook employees to prioritize which posts need review soonest. Speed in addressing risks is an important factor in preventing suicides, and the company hopes that AI will help users stay safe and find help.
According to Facebook, the tool will employ pattern recognition to automatically review posts and comments for a handful of key phrases that indicate that help is needed. The company said comments by other users such as “Can I help?” or “Are you ok?” could signal that a post needs to be prioritized for review as soon as possible. Reviewers may call first responders to address these situations, or direct other users on how to get their friends help when necessary.
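Facebook has not published its detection model, but the general approach it describes could be sketched as follows. The phrases, scoring heuristic, and function names below are purely illustrative assumptions, not Facebook's actual system:

```python
# Hypothetical sketch: keyword-based pattern recognition that prioritizes
# posts for human review. The phrase list and scoring are assumptions made
# for illustration; Facebook's real system is far more sophisticated.
CONCERN_PHRASES = ["can i help", "are you ok"]

def urgency_score(comments):
    """Count comments containing any concern phrase (toy heuristic)."""
    score = 0
    for comment in comments:
        text = comment.lower()
        if any(phrase in text for phrase in CONCERN_PHRASES):
            score += 1
    return score

def prioritize(posts):
    """Order posts so the apparently most urgent reach reviewers first."""
    return sorted(posts, key=lambda p: urgency_score(p["comments"]), reverse=True)

queue = prioritize([
    {"id": 1, "comments": ["nice photo!"]},
    {"id": 2, "comments": ["Are you OK?", "Can I help?"]},
])
print([p["id"] for p in queue])  # post 2, with concerned comments, comes first
```

The point of such a ranking step is triage: rather than reviewing posts in chronological order, human reviewers see the posts most likely to need an urgent response first.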
For live video posts, other users can report videos that indicate suicide risks. They would then be given information on how to contact a helpline for at-risk friends, or even law enforcement in dire situations. Broadcasters would also be prompted to contact a helpline.
In a blog post, Facebook’s vice president of product management Guy Rosen wrote:
“We’ve found these accelerated reports — that we have signaled require immediate attention — are escalated to local authorities twice as quickly as other reports.”
The program has been tested in the US, and will go into effect in most nations where Facebook operates. According to the company, the system will not go into effect in EU nations. While Facebook did not go into detail on why this is the case, the EU has starkly different privacy and internet regulations than those in the US. Facebook has said that discussions are underway with EU authorities on how to put a program of that nature into place there.
The company began renewed efforts to address suicide risks on their network after several high-profile, live-streamed suicides in April. Facebook has announced plans to hire another 3,000 employees for its “community operations” team, which evaluates content posted on the site for violence, threats, and other issues.
After the string of suicides, Mark Zuckerberg admitted it would be a difficult problem to address, despite plans to use AI.
“No matter how many people we have on the team, we’ll never be able to look at everything,” Zuckerberg said in May.