On January 3, 2021, a 23-year-old Mumbai youth’s decision to live-stream his suicide attempt on Facebook ended up saving his life. Alerted by a Facebook team in Ireland, the Mumbai Police rushed to his home, pushed open the door and took the unconscious man to a hospital within an hour of the bid.
A few months ago, West Bengal Police informed a youth’s unsuspecting father of his son’s bid to end his life with a sharp weapon. The police were again alerted by a Facebook team after the youth live-streamed the bid on the social media platform. A tip-off from the Facebook office in the US in July 2018 helped the Guwahati police save a minor girl.
There are several more instances of Facebook alerting authorities about distressed people trying to harm themselves.
India, which has one of the highest suicide rates in the world, reported around 381 deaths by suicide a day in 2019, adding up to 1,39,123 fatalities over the year, National Crime Records Bureau (NCRB) data shows. The number of suicides in 2019 was 3.4 percent higher than in 2018. The WHO says suicides can be prevented with timely intervention.
The social media giant has millions of users, churning out huge amounts of data in the form of posts, texts and videos every day. It is humanly impossible to keep tabs on everything being said and shared, which is why Facebook relies on artificial intelligence (AI) to scan for signs of trouble.
How does Facebook know?
The Mumbai man and the West Bengal youth had live-streamed their suicide attempts, while the girl in Guwahati had shared on her Facebook timeline the day on which she planned to kill herself.
Those who saw her post could have reported it to Facebook. The social media major has a community operations team for reviewing such reports. The team offers support by connecting the person to local authorities, helplines or non-government groups working in the field.
The other way is Facebook’s artificial intelligence algorithm that identifies potential self-harm.
Why have an AI tool when there is a reporting mechanism?
In a February 21, 2018 blog, Facebook developers who worked on the tool said, “In the past, we’ve relied on loved ones to report concerning posts to us since they are in the best position to know when someone is struggling. However, many posts expressing suicidal thoughts are never reported to Facebook, or are not reported fast enough.” This was one of the main reasons the firm strengthened its suicide-prevention efforts.
But would an AI tool be smart enough?
The AI tool was launched in 2018 but had been in the works for some time. In its initial avatar in 2017, Facebook worked with experts to build a machine-learning model that identified keywords and phrases known to be associated with suicide. Words like kill, goodbye, sadness, depressed or die can be associated with self-harm. The review team would then take a call on the next course of action.
But the same words can be used in a different context: “I can die of boredom” or “my work is killing me”. Such phrases cannot be construed as an indication of self-harm, but a machine would not know that.
Facebook’s community operations team had to spend more time filtering out what are known as false positives.
“…the machine learning model caught too many harmless false positives to be a useful filter for the human review team,” Catherine Card, Director of Product Management, said in a September 2018 blog post.
The team then trained the system on a smaller, curated data set to create a more intelligent tool. “The smaller data set helped us develop a much more nuanced understanding of what is a suicidal pattern and what isn’t,” Dan Muriello, an engineer on the team that produced the tools, said in the same post.
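To see why the earlier approach struggled, consider a minimal Python sketch of naive keyword matching. The keyword list, matching rule and sample posts here are invented purely for illustration and do not reflect Facebook’s actual system.

```python
# Purely illustrative: a naive keyword filter of the kind described above.
# Keywords and sample posts are invented, not Facebook's actual lists.

KEYWORDS = {"kill", "die", "goodbye", "depressed", "sadness"}

def naive_flag(post: str) -> bool:
    """Flag a post if any word starts with a risk keyword (crude stemming)."""
    words = post.lower().replace(",", " ").replace(".", " ").split()
    return any(word.startswith(k) for word in words for k in KEYWORDS)

posts = [
    "I want to say goodbye to everyone tonight",  # genuinely concerning
    "I can die of boredom in this meeting",       # harmless idiom
    "My work is killing me this week",            # harmless idiom
]

for post in posts:
    print(naive_flag(post), "->", post)
# All three print True: the filter cannot separate idiom from intent,
# which is exactly the false-positive problem the reviewers faced.
```

Retraining on a smaller, hand-reviewed data set, as Muriello describes, is one way to teach a model the contextual cues such a filter ignores.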
Is that enough? No. The text of a post is just one factor. The tool also looks at the comments left on the post and their nature.
“Here, too, there is linguistic nuance to consider,” said Card in the post.
Posts that reviewers determined to be serious cases tended to have comments like “tell me where you are” or “has anyone heard from him/her?” Potentially less-urgent posts had comments along the lines of “Call anytime” or “I’m here for you”.
Then there are patterns, too: what did the user’s previous posts say? If earlier posts also suggested self-harm, how much time passed between them? The tool considers all of this to gauge whether a user is in imminent danger.
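As a toy illustration of how these signals might be combined into a single priority score, here is a Python sketch. The cue phrases, weights and example values are invented; Facebook has not published its actual scoring logic.

```python
# Toy illustration of combining the signals described above. All cue phrases,
# weights and thresholds are invented; Facebook's real scoring is not public.
from datetime import datetime, timedelta

URGENT_CUES = ("tell me where you are", "has anyone heard from")
SUPPORT_CUES = ("call anytime", "i'm here for you")

def risk_score(post_flagged, comments, flag_times):
    score = 1.0 if post_flagged else 0.0
    for comment in comments:
        text = comment.lower()
        if any(cue in text for cue in URGENT_CUES):
            score += 2.0    # alarmed commenters suggest urgency
        elif any(cue in text for cue in SUPPORT_CUES):
            score += 0.5    # supportive, but less alarmed
    # Flagged posts in quick succession suggest imminent danger.
    if len(flag_times) >= 2 and flag_times[-1] - flag_times[-2] < timedelta(hours=24):
        score += 2.0
    return score

now = datetime.now()
print(risk_score(True, ["Tell me where you are!"],
                 [now - timedelta(hours=3), now]))  # -> 5.0: urgent review
```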
What about videos, live streaming?
Nitish Chandan, who works with Cyber Peace Foundation, a cybersecurity think tank, told Moneycontrol that Facebook would have developed an algorithm for live-streamed video as well, but that it is resource-intensive.
In these cases, the reporting mechanism is one way. The tool also uses comments and the user’s posting patterns to ascertain the nature of such streams before flagging them to the community operations team for verification.
What if the posts are not in English?
Can machines understand Punjabi, Malayalam, Tamil or other Indian languages? Yes, they can. That is one of the reasons the tool has been helpful so far.
Facebook recently introduced a machine-translation model, which, according to reports, can translate 100 languages without relying on English data.
In India, one of the largest markets for Facebook, the company offers support in multiple regional languages. According to a report by Statista, there are about 310 million Facebook users in India.
These features make detection of self-harm possible, said Chandan. The developers said in a blog post that the technology helps cover a large number of languages and uses cross-lingual abilities to improve the system’s performance.
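The reports cited are consistent with M2M-100, the many-to-many translation model Facebook open-sourced in 2020, though whether the suicide-prevention pipeline uses it internally is not public. As a sketch of what such many-to-many translation looks like, the open model can be tried through the Hugging Face transformers library:

```python
# Sketch: many-to-many translation with Facebook's open-sourced M2M-100 model,
# via the Hugging Face transformers library. This illustrates the published
# model only; it is not Facebook's internal moderation pipeline.
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")
tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M")

hindi_text = "मैं बहुत उदास हूँ"  # "I am very sad"
tokenizer.src_lang = "hi"        # source language: Hindi
encoded = tokenizer(hindi_text, return_tensors="pt")

# Translate directly to English; any of ~100 languages can be the target.
generated = model.generate(**encoded,
                           forced_bos_token_id=tokenizer.get_lang_id("en"))
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```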
Janice Verghese, an advocate who also works with Cyber Peace Foundation, said content reviewers across the world helped with the decision making.
What happens when the system flags a suicide attempt?
Once self-harm is reported by people or flagged by the system, it is reviewed by Facebook’s community operations team which decides on action.
It can pan out in a couple of ways. When the user is not in imminent danger, the team will help the person find support, which could be a helpline or counselling services. Facebook has tie-ups with groups across countries to help such people.
In cases of urgent intervention, local authorities, like police, are alerted immediately.
How does Facebook know the location?
There are multiple ways to track the location.
1. Most users share the name of the city they live in.
2. When users access Facebook through mobile, in most cases they give the app access to geolocation.
3. As Chrome is often the preferred browser, it can tag a location unless the user has turned location access off.
4. Many users tag their location when posting on Facebook.
5. Some users share their mobile numbers on Facebook.
Any one of these is enough to determine a user’s approximate location and alert the authorities, Chandan said. In cases where Facebook cannot get the exact location, the police have to step in.
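As a rough Python sketch, such a lookup can be thought of as a fallback chain that returns the most precise signal available. The field names and priority order here are assumptions for illustration; Facebook has not described how it resolves locations internally.

```python
# Invented illustration of a location fallback chain; field names and
# priority order are assumptions, not Facebook's actual implementation.

def approximate_location(profile: dict):
    """Return the most precise location signal available, or None."""
    for key in (
        "device_geolocation",  # GPS from the mobile app, if permitted
        "post_location_tag",   # location tagged on the post itself
        "browser_location",    # coarse location from the browser session
        "profile_city",        # self-declared city on the profile
    ):
        if profile.get(key):
            return profile[key]
    return None  # left to the police, e.g. via a shared phone number

print(approximate_location({"profile_city": "Mumbai"}))  # -> Mumbai
```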
Can authorities track people in time?
Yes. In August 2020, the Facebook post of a 27-year-old man in Delhi was flagged by the system for self-harm. According to a PTI report, Facebook’s team alerted the Delhi Deputy Commissioner of Police (Cyber), Anyesh Roy, and shared the user’s phone number. That should have been enough, but the man had travelled to Mumbai, which the police learnt only after speaking to his wife, whose number he was using.
After a concerted effort, Delhi and Mumbai police tracked the man down and dissuaded him from taking the extreme step. The man was in financial distress.
How accurate is the system?
Facebook does not share how accurate the tool is. The company is yet to respond to Moneycontrol’s queries about the tool. The story will be updated to add the company’s comments when they come.
According to a research paper by Coppersmith et al published in Biomedical Informatics Insights, AI models of the kind implemented by social media majors like Facebook could be 10 times more accurate in predicting suicide attempts than clinicians.
What about privacy?
Sharing content about mental health on Facebook would not infringe on privacy any more than usual posts do.
But what one needs to watch out for is whether the company shares this information with third parties, which could be a big concern, said Chandan. Facebook is yet to respond to an email seeking its comments on how it protects privacy.