Facebook already relies on the connections between people to help identify users at risk of harming themselves and to intervene. The social media giant revealed Monday that artificial intelligence is increasingly being employed in those efforts.
Guy Rosen, VP of product management at Facebook, wrote in a blog post about the additional steps the company is taking to spot posts and live videos where suicidal thoughts are being expressed.
Rosen said pattern recognition technology can help identify this type of content through signals in the text of a post as well as in the comments. Comments like “Are you OK?” and “Can I help?” can be strong indicators, he said.
Facebook is also looking for better ways to identify appropriate first responders and to alert them more quickly. The company is adding members to its Community Operations team — the thousands of people around the world who review reports about content on Facebook — to handle reports of suicide or self-harm.
Rosen said that Facebook has been working on suicide prevention tools for more than 10 years. And in his own post about the efforts, CEO Mark Zuckerberg said, in response to a user who lost her significant other to suicide, that one of his “greatest regrets” is how long it can take to “develop important technology.”
“I feel like we have a responsibility to push forward, through whatever problems we’ll encounter along the way, so we can help people as quickly as possible,” Zuckerberg wrote in the comments. “There will always be issues with anything we build, and it will never be perfect, but that can’t stop us from getting started. Hopefully these tools will improve so that the next person in your partner’s position will get the help they need sooner.”
Rosen said the AI enhancements are being rolled out first outside the United States and will eventually be available worldwide, with the exception of the European Union.