Inside Facebook's suicide algorithm: Here's how the company uses artificial intelligence to predict your mental state from your posts
In March 2017, Facebook launched an ambitious project to prevent suicide with artificial intelligence. Following a string of suicides that were live-streamed on the platform, the effort sought to use an algorithm to detect signs of potential self-harm and proactively address a serious problem.

But more than a year later, after a wave of privacy scandals called Facebook's data use into question, the idea of Facebook creating and storing actionable mental health data without user consent has numerous privacy experts worried about whether the company can be trusted to make and store inferences about the most intimate details of our minds.

That data creation process alone raises concern for Natasha Duarte, a policy analyst at the Center for Democracy and Technology. "I think this should be considered sensitive health information," she said.
Jan-8-2019, 04:35:51 GMT