In yet another attempt to prevent suicides, Facebook is starting to roll out Artificial Intelligence (AI)-based tools to help identify when someone might be expressing thoughts of suicide, including on Facebook Live. The initiative, which will use pattern recognition to detect posts or live videos where someone might be expressing thoughts of suicide so that authorities can respond faster, will eventually be available worldwide, except in the European Union, Facebook said in a blog post on Tuesday. "Facebook is a place where friends and family are already connected and we are able to help connect a person in distress with people who can support them. It's part of our ongoing effort to help build a safe community on and off Facebook," wrote Guy Rosen, Vice President of Product Management at Facebook. In October, Facebook worked with first responders on over 100 wellness checks based on reports it received through its proactive detection efforts. "We use signals like the text used in the post and comments (for example, comments like 'Are you ok?' and 'Can I help?' can be strong indicators). In some instances, we have found that the technology has identified videos that may have gone unreported," Rosen said.
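Facebook has not published its detection model, but the idea of treating concerned comments as a signal can be illustrated with a toy sketch. Everything below (the patterns, the threshold, the function names) is invented for demonstration and is not Facebook's actual system.

```python
# Illustrative sketch only: scores a post by counting comments that
# resemble expressions of concern from friends, e.g. "Are you ok?" or
# "Can I help?", which the Facebook post describes as strong indicators.
import re

# Hypothetical concern patterns (invented for this example).
CONCERN_PATTERNS = [
    r"\bare you ok(ay)?\b",
    r"\bcan i help\b",
    r"\bplease talk to (me|someone)\b",
]

def concern_score(comments):
    """Count comments matching any concern pattern (case-insensitive)."""
    score = 0
    for comment in comments:
        if any(re.search(p, comment.lower()) for p in CONCERN_PATTERNS):
            score += 1
    return score

def should_flag_for_review(comments, threshold=2):
    """Flag the post for human review once enough concerned replies appear."""
    return concern_score(comments) >= threshold
```

In a real system the scoring would be a trained classifier over many signals rather than a keyword list, and a flagged post would be routed to trained human reviewers, as the article describes.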
Canada isn't the only one turning to technology to better support people considering suicide or to prevent suicides from happening. In November, Facebook began a global rollout of its AI suicide prevention tools, which reach out to users who post content that could be a sign of suicidal thoughts and allow other users to report content they think might show signs of suicidal risk. Instagram, which is owned by Facebook, released similar tools last year that let users report live videos showing signs of suicidal thought, prompting an offer of mental health resources to the person posting the content.
Facebook is using a combination of pattern recognition, live chat support from crisis support organizations and other tools to prevent suicide, with a focus on its Live service. There is one death by suicide every 40 seconds and over 800,000 people kill themselves every year, according to the World Health Organization. "Facebook is in a unique position -- through friendships on the site -- to help connect a person in distress with people who can support them," the company said Wednesday. The move appears aimed at preventing the live-streaming of suicides on the Live platform, which launched in April last year and allows people, public figures and pages to share live videos with friends and followers. The company said that its suicide prevention tools for Facebook posts will now be integrated into Live, giving people watching a live video the option to reach out to the person directly and to report the video to the company.
Facebook is turning to artificial intelligence to help identify posts showing suicidal tendencies, putting the technology in a position where it could help save lives. The social network, which has more than 1.8 billion users, announced a suite of new suicide prevention tools on Wednesday, including a streamlined reporting process aided by artificial intelligence, an easier way to get help during a Facebook Live video, and the option to get help via Messenger. But is a machine better than a human when it comes to flagging troubled posts? For now, Facebook is testing its algorithm in the United States, using pattern recognition learned from posts previously reported for suicide. Facebook will then make the option to report a post more prominent for anything that may be a red flag.
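The idea of learning patterns from posts previously reported for suicide concern can be sketched with a toy word-frequency model. The training data, word lists, and scoring rule below are invented placeholders; Facebook has not disclosed its algorithm, and a production system would use a far more sophisticated classifier.

```python
# Illustrative sketch only: learn which words appear more often in
# previously reported posts than in ordinary posts, then score new
# posts by how strongly their words lean toward the reported set.
from collections import Counter

def word_frequencies(posts):
    """Relative word frequencies across a set of posts."""
    counts = Counter(w for post in posts for w in post.lower().split())
    total = sum(counts.values()) or 1
    return {w: c / total for w, c in counts.items()}

def train(reported_posts, ordinary_posts):
    """Build frequency tables from reported and ordinary posts."""
    return word_frequencies(reported_posts), word_frequencies(ordinary_posts)

def risk_score(post, reported_freq, ordinary_freq):
    """Sum, per word, how much more common it is in reported posts."""
    return sum(
        reported_freq.get(w, 0.0) - ordinary_freq.get(w, 0.0)
        for w in post.lower().split()
    )
```

A higher score would then surface the report option more prominently, matching the article's description of flagging potential red flags for human attention.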