Reducing Suicide Contagion Effect by Detecting Sentences from Media Reports with Explicit Methods of Suicide

Shima Gerani¹*, Raphael Tissot¹, Annie Ying¹, Jennifer Redmon², Artemio Rimando and Riley Hun¹
¹Cisco Vancouver AI Lab   ²Cisco Research Triangle Park
{sgerani, rtissot, anying, jeredmon, rhun}[email protected]

Abstract

Research has shown that the suicide rate can increase by 13% when suicide is not reported responsibly. For example, irresponsible reporting includes specific details that depict suicide methods. To promote more responsible journalism and save lives, we propose a novel problem called "suicide method detection", which determines whether a sentence in a news article contains a description of a suicide method. Our results show two promising approaches: a rule-based approach using category pattern matching and a BERT model with data augmentation, both of which reach over 0.9 in F-measure.

1 Introduction

According to the World Health Organization, close to 800,000 people die by suicide every year. The way an individual's suicide is reported has been shown to be a contributing factor in whether the death will be followed by another suicide. Research on this suicide contagion effect has shown that there is a 13% increase in suicide rates in the US following highly reported suicides [7].

To reduce this suicide contagion effect, mental health experts have created best practice guidelines for media reporting.¹ One of the practices that research has shown to be the most harmful is explicitly depicting the suicide method in a media report, which especially affects individuals at risk of suicide.² This specific guideline can be particularly unintuitive to many reporters, as it conflicts with a universal journalistic standard of reporting accurate details of an event.

In this paper, we propose a novel classification problem called "suicide method detection", which can help journalists and media organizations automatically pre-screen articles before publication. This classification problem determines whether a sentence in a textual news article contains a description of a suicide method that can have a harmful suicide contagion effect. To tackle this problem, we created a labeled data set of such harmful versus not harmful sentences, proposed two approaches, and evaluated them against multiple baselines. Currently, we have the baseline implementation deployed.³

Detecting harmful sentences, including detecting the suicide method in text, is useful in a number of use cases:

• Media organizations and journalists can use this approach to automatically pre-screen and correct articles before publication.
• Social media organizations can potentially use this tool to filter out harmful sentences, especially for at-risk individuals.
• To increase awareness of how words can be harmful to individuals at risk, our approach can be embedded any time content is generated on the web or communicated.

The contributions of the paper are as follows:

• We propose the problem of detecting harmful language in text that contributes to suicide contagion, and the sub-problem of suicide method classification.
• We automate the suicide method classification task using two approaches and compare them against several baseline systems.

∗ Contact Author
¹ https://reportingonsuicide.org
² https://www.poynter.org/archive/2003/reporting-on-suicide/
³ https://reportingonsuicide.cisco.com/

2 Dataset and Experimental Setup

To the best of our knowledge, there is no publicly available dataset of documents labeled with respect to ethical reporting of suicide. Therefore, we created our own data set. We pulled articles from the NewsAPI: there were 300,000 articles originally, and we filtered them down to a set of 1,625 articles that are in English and contain a report on the suicide of an individual in textual form. A group of volunteers then manually labeled this dataset and found 281 articles that report the method of suicide.

We further labeled these 281 news articles at the sentence level, amounting to a total of 7,350 labeled sentences, of which 397 were harmful. For our experiments, we randomly split the 281 positively labeled news articles into train, development, and test sets and then used the sentences in the associated split for the experiments. By splitting at the article level, we decrease the chance of data leakage due to the presence of similar sentences about the suicide of the same person in the train and test sets.
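To make the article-level split concrete, the following is a minimal sketch of how such a leakage-free split could be implemented. The field names (article_id, text, label), split ratios, and random seed are our own illustrative assumptions, not details reported in the paper.

    import random

    def split_by_article(sentences, train_frac=0.8, dev_frac=0.1, seed=13):
        # 'sentences' is a list of dicts with hypothetical keys:
        #   'article_id' (source article), 'text', and 'label' (harmful/valid).
        article_ids = sorted({s["article_id"] for s in sentences})
        random.Random(seed).shuffle(article_ids)

        n_train = int(len(article_ids) * train_frac)
        n_dev = int(len(article_ids) * dev_frac)
        train_ids = set(article_ids[:n_train])
        dev_ids = set(article_ids[n_train:n_train + n_dev])

        # Every sentence from a given article lands in exactly one split, so
        # near-duplicate sentences about the same person cannot leak across splits.
        train = [s for s in sentences if s["article_id"] in train_ids]
        dev = [s for s in sentences if s["article_id"] in dev_ids]
        test = [s for s in sentences
                if s["article_id"] not in train_ids and s["article_id"] not in dev_ids]
        return train, dev, test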
Since the number of positive news sentences was too small to build any classification model, we used an external data source to augment the training data. We crawled from Wikipedia⁴ a list of famous people who died by suicide. On each person's page, we located the sentences referencing the method of suicide. As a result, we obtained 900 documents containing both valid and harmful sentences. Table 1 shows statistics on the number of sentences in each set.

Table 1: Statistics on the number of harmful and valid sentences in our training, dev and test sets.

    split    harmful sentences    valid sentences
    train    1274                 14797
    dev      70                   1079
    test     102                  1705

⁴ https://en.wikipedia.org/wiki/List_of_suicides

3 Method

In this section, we explain our approaches for classifying sentences as valid or harmful. We started with a simple model based on a dictionary of suicide method terms. We then improved the dictionary-based approach by targeting each category of suicide method separately. Finally, we investigated the impact of fine-tuning BERT, a state-of-the-art sentence embedding model. We found the lack of labeled training data to be a challenge in training a strong model. As such, we propose augmentation methods to generate high-quality labeled training sentences and show the effectiveness of such data in training a strong BERT-based classifier.

3.1 Baseline: Dictionary-based approach

Our baseline approach leveraged a dictionary of terms consisting of suicide action terms (e.g., hang, shot, jump), suicide object terms (e.g., bridge, cliff, poison), as well as suicide indicator terms (e.g., suicide, killed himself). We manually extracted these terms from the suicide methods article on Wikipedia.⁵ Table 2 shows the categories of suicide methods that we considered as well as our dictionary terms.

We investigated two variations of the dictionary-based approach: V1) we consider a sentence harmful if it contains at least one action term (e.g., jumped) or object term (e.g., gun); V2) we add the constraint that a sentence is harmful only if it contains at least one action or object term along with a suicide indicator term in the same sentence. The purpose of the added constraint is to decrease false positives by increasing the probability that the action/object terms are used in the context of explaining the suicide method.

The first two rows in Table 4 show the performance of these two dictionary-based methods on our test data set. We observed that both variations had low F1, with V1 having higher recall and lower precision. Adding the constraint of referring to suicide in the same sentence in V2 has clearly helped increase precision, but it lowers recall by introducing more false negatives.

⁵ https://en.wikipedia.org/wiki/Suicide_methods

3.2 CPM: Category-based pattern matching

The suicide methods article on Wikipedia provides a categorization of different methods of suicide. We devised a representative pattern for each category and marked a sentence as harmful if it matched at least one of the patterns. Similar to the dictionary-based approach, CPM applies pattern matching. However, as opposed to exact matching of words, patterns in CPM take the POS tags and dependency structure of the sentences into account whenever necessary. Algorithm 1 shows an example of the pattern for the "jumping from height" category of suicide methods. Error analysis of the dictionary-based approach over the training data revealed that false positives in this category were on sentences reporting a person jumping from a height to escape, not for the purpose of suicide. Having category-specific patterns helps us be more specific and enables us to rule out the potential false positives of each category of suicide methods.

Algorithm 1 Jumping from heights

    Input: text
    verb_jump ← [jump; leap]
    prep_jump ← [from; out; off]
    obj_jump ← [window; cliff; apartment; building; balcony; roof]
    pattern_jump ← verb_jump prep_jump obj_jump
    return (text matches pattern_jump) and (escape ∉ text)

In terms of the results (third row in Table 4), CPM hugely increased precision (0.97) while keeping recall high (0.87) as well. There is still room to improve the recall: just like any rule-based approach, CPM is sensitive to the chosen words and patterns and might not be general enough to capture all possible variations of writing a sentence with a specific meaning.
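As an illustration of how a pattern such as Algorithm 1 could be realized over a dependency parse, the sketch below uses spaCy (our assumption; the paper does not name its parsing toolkit) to look for a jump/leap verb governing a from/out/off preposition whose object is a height-related noun, while excluding sentences that mention escaping. It assumes the en_core_web_sm model is installed.

    import spacy

    nlp = spacy.load("en_core_web_sm")

    JUMP_VERBS = {"jump", "leap"}
    JUMP_PREPS = {"from", "out", "off"}
    JUMP_OBJECTS = {"window", "cliff", "apartment", "building", "balcony", "roof"}

    def is_jumping_from_height(sentence):
        doc = nlp(sentence)
        # Rule out sentences such as "jumped from the window to escape the fire".
        if any(tok.lemma_ == "escape" for tok in doc):
            return False
        for tok in doc:
            if tok.pos_ == "VERB" and tok.lemma_ in JUMP_VERBS:
                # Look for a from/out/off token attached to the jump/leap verb
                # whose own dependent is a height-related noun.
                for prep in tok.children:
                    if prep.lemma_ in JUMP_PREPS:
                        if any(child.lemma_ in JUMP_OBJECTS for child in prep.children):
                            return True
        return False

    # e.g. is_jumping_from_height("He leaped off the roof of the building.") -> True

In practice the attachment of the preposition may vary with the parser (for instance, "out" can be analyzed as a verb particle), which is exactly the kind of per-category tuning that a CPM pattern would absorb.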
3.3 BERT

BERT (Bidirectional Encoder Representations from Transformers) [2] is a state-of-the-art language representation model based on Transformers. It is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both the left and the right context in all layers. Training in BERT is based on a technique called Masked Language Modeling (MLM), which allows bidirectional training, together with Next Sentence Prediction. Experimental results show that a bidirectionally trained language model can have a deeper sense of language context than single-direction language models [2].

The input to BERT is a sequence, which can be a single sentence or a pair of sentences packed together. The first token of every sequence is always a special token ([CLS]). The final hidden state corresponding to this token, denoted as C ∈ R^H, is used as the aggregate sequence representation for classification tasks.
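As a minimal sketch of how such a [CLS]-based sentence classifier can be fine-tuned for the harmful/valid task, the snippet below uses the Hugging Face transformers library with the bert-base-uncased checkpoint; the library, checkpoint, label encoding, and hyperparameters are our illustrative assumptions rather than the configuration reported in the paper.

    import torch
    from transformers import BertTokenizer, BertForSequenceClassification

    tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
    model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
    model.train()

    # Toy batch; label 1 = harmful (describes a method), 0 = valid (our own encoding).
    sentences = ["He jumped from the balcony of his apartment.",
                 "Friends remembered him as a devoted teacher."]
    labels = torch.tensor([1, 0])

    batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    outputs = model(**batch, labels=labels)   # loss and logits from the classification head

    optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
    outputs.loss.backward()
    optimizer.step()

BertForSequenceClassification places a linear classification layer on top of the pooled [CLS] representation, which corresponds to the aggregate sequence representation C described above.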