EMNLP 2020 Paper Digests
https://www.paperdigest.org

1, TITLE: Detecting Attackable Sentences in Arguments
https://www.aclweb.org/anthology/2020.emnlp-main.1
AUTHORS: Yohan Jo, Seojin Bang, Emaad Manzoor, Eduard Hovy, Chris Reed
HIGHLIGHT: We present a first large-scale analysis of sentence attackability in online arguments.

2, TITLE: Extracting Implicitly Asserted Propositions in Argumentation
https://www.aclweb.org/anthology/2020.emnlp-main.2
AUTHORS: Yohan Jo, Jacky Visser, Chris Reed, Eduard Hovy
HIGHLIGHT: In this paper, we examine a wide range of computational methods for extracting propositions that are implicitly asserted in questions, reported speech, and imperatives in argumentation.

3, TITLE: Quantitative argument summarization and beyond: Cross-domain key point analysis
https://www.aclweb.org/anthology/2020.emnlp-main.3
AUTHORS: Roy Bar-Haim, Yoav Kantor, Lilach Eden, Roni Friedman, Dan Lahav, Noam Slonim
HIGHLIGHT: The current work advances key point analysis in two important respects: first, we develop a method for automatic extraction of key points, which enables fully automatic analysis and is shown to achieve performance comparable to a human expert. Second, we demonstrate that the applicability of key point analysis goes well beyond argumentation data.

4, TITLE: Unsupervised stance detection for arguments from consequences
https://www.aclweb.org/anthology/2020.emnlp-main.4
AUTHORS: Jonathan Kobbe, Ioana Hulpuș, Heiner Stuckenschmidt
HIGHLIGHT: In this paper, we propose an unsupervised method to detect the stance of argumentative claims with respect to a topic.

5, TITLE: BLEU might be Guilty but References are not Innocent
https://www.aclweb.org/anthology/2020.emnlp-main.5
AUTHORS: Markus Freitag, David Grangier, Isaac Caswell
HIGHLIGHT: We study different methods to collect references and compare their value in automated evaluation by reporting correlation with human evaluation for a variety of systems and metrics.
6, TITLE: Statistical Power and Translationese in Machine Translation Evaluation
https://www.aclweb.org/anthology/2020.emnlp-main.6
AUTHORS: Yvette Graham, Barry Haddow, Philipp Koehn
HIGHLIGHT: The term translationese has been used to describe features of translated text, and in this paper, we provide a detailed analysis of potential adverse effects of translationese on machine translation evaluation.

7, TITLE: Simulated multiple reference training improves low-resource machine translation
https://www.aclweb.org/anthology/2020.emnlp-main.7
AUTHORS: Huda Khayrallah, Brian Thompson, Matt Post, Philipp Koehn
HIGHLIGHT: We introduce Simulated Multiple Reference Training (SMRT), a novel MT training method that approximates the full space of possible translations by sampling a paraphrase of the reference sentence from a paraphraser and training the MT model to predict the paraphraser's distribution over possible tokens.

8, TITLE: Automatic Machine Translation Evaluation in Many Languages via Zero-Shot Paraphrasing
https://www.aclweb.org/anthology/2020.emnlp-main.8
AUTHORS: Brian Thompson, Matt Post
HIGHLIGHT: We propose training the paraphraser as a multilingual NMT system, treating paraphrasing as a zero-shot translation task (e.g., Czech to Czech).

9, TITLE: PRover: Proof Generation for Interpretable Reasoning over Rules
https://www.aclweb.org/anthology/2020.emnlp-main.9
AUTHORS: Swarnadeep Saha, Sayan Ghosh, Shashank Srivastava, Mohit Bansal
HIGHLIGHT: In our work, we take a step closer to emulating formal theorem provers, by proposing PRover, an interpretable transformer-based model that jointly answers binary questions over rule-bases and generates the corresponding proofs.
10, TITLE: Learning to Explain: Datasets and Models for Identifying Valid Reasoning Chains in Multihop Question-Answering
https://www.aclweb.org/anthology/2020.emnlp-main.10
AUTHORS: Harsh Jhamtani, Peter Clark
HIGHLIGHT: To address this, we introduce three explanation datasets in which explanations formed from corpus facts are annotated.

11, TITLE: Self-Supervised Knowledge Triplet Learning for Zero-Shot Question Answering
https://www.aclweb.org/anthology/2020.emnlp-main.11
AUTHORS: Pratyay Banerjee, Chitta Baral
HIGHLIGHT: This work proposes Knowledge Triplet Learning (KTL), a self-supervised task over knowledge graphs.

12, TITLE: More Bang for Your Buck: Natural Perturbation for Robust Question Answering
https://www.aclweb.org/anthology/2020.emnlp-main.12
AUTHORS: Daniel Khashabi, Tushar Khot, Ashish Sabharwal
HIGHLIGHT: As an alternative to the traditional approach of creating new instances by repeating the process of creating one instance, we propose doing so by first collecting a set of seed examples and then applying human-driven natural perturbations (as opposed to rule-based machine perturbations), which often change the gold label as well.

13, TITLE: A matter of framing: The impact of linguistic formalism on probing results
https://www.aclweb.org/anthology/2020.emnlp-main.13
AUTHORS: Ilia Kuznetsov, Iryna Gurevych
HIGHLIGHT: To investigate, we conduct an in-depth cross-formalism layer probing study in role semantics.

14, TITLE: Information-Theoretic Probing with Minimum Description Length
https://www.aclweb.org/anthology/2020.emnlp-main.14
AUTHORS: Elena Voita, Ivan Titov
HIGHLIGHT: Instead, we propose an alternative to the standard probes, information-theoretic probing with minimum description length (MDL).
15, TITLE: Intrinsic Probing through Dimension Selection
https://www.aclweb.org/anthology/2020.emnlp-main.15
AUTHORS: Lucas Torroba Hennigen, Adina Williams, Ryan Cotterell
HIGHLIGHT: To enable intrinsic probing, we propose a novel framework based on a decomposable multivariate Gaussian probe that allows us to determine whether the linguistic information in word embeddings is dispersed or focal.

16, TITLE: Learning Which Features Matter: RoBERTa Acquires a Preference for Linguistic Generalizations (Eventually)
https://www.aclweb.org/anthology/2020.emnlp-main.16
AUTHORS: Alex Warstadt, Yian Zhang, Xiaocheng Li, Haokun Liu, Samuel R. Bowman
HIGHLIGHT: With this goal in mind, we introduce a new English-language diagnostic set called MSGS (the Mixed Signals Generalization Set), which consists of 20 ambiguous binary classification tasks that we use to test whether a pretrained model prefers linguistic or surface generalizations during finetuning.

17, TITLE: Repulsive Attention: Rethinking Multi-head Attention as Bayesian Inference
https://www.aclweb.org/anthology/2020.emnlp-main.17
AUTHORS: Bang An, Jie Lyu, Zhenyi Wang, Chunyuan Li, Changwei Hu, Fei Tan, Ruiyi Zhang, Yifan Hu, Changyou Chen
HIGHLIGHT: In this paper, for the first time, we provide a novel understanding of multi-head attention from a Bayesian perspective.

18, TITLE: KERMIT: Complementing Transformer Architectures with Encoders of Explicit Syntactic Interpretations
https://www.aclweb.org/anthology/2020.emnlp-main.18
AUTHORS: Fabio Massimo Zanzotto, Andrea Santilli, Leonardo Ranaldi, Dario Onorati, Pierfrancesco Tommasino, Francesca Fallucchi
HIGHLIGHT: In this paper, we propose KERMIT (Kernel-inspired Encoder with Recursive Mechanism for Interpretable Trees) to embed symbolic syntactic parse trees into artificial neural networks and to visualize how syntax is used in inference.
19, TITLE: ETC: Encoding Long and Structured Inputs in Transformers
https://www.aclweb.org/anthology/2020.emnlp-main.19
AUTHORS: Joshua Ainslie, Santiago Ontanon, Chris Alberti, Vaclav Cvicek, Zachary Fisher, Philip Pham, Anirudh Ravula, Sumit Sanghai, Qifan Wang, Li Yang
HIGHLIGHT: In this paper, we present a new Transformer architecture, "Extended Transformer Construction" (ETC), that addresses two key challenges of standard Transformer architectures, namely scaling input length and encoding structured inputs.

20, TITLE: Pre-Training Transformers as Energy-Based Cloze Models
https://www.aclweb.org/anthology/2020.emnlp-main.20
AUTHORS: Kevin Clark, Minh-Thang Luong, Quoc Le, Christopher D. Manning
HIGHLIGHT: We introduce Electric, an energy-based cloze model for representation learning over text.

21, TITLE: Calibration of Pre-trained Transformers
https://www.aclweb.org/anthology/2020.emnlp-main.21
AUTHORS: Shrey Desai, Greg Durrett
HIGHLIGHT: We focus on BERT and RoBERTa in this work, and analyze their calibration across three tasks: natural language inference, paraphrase detection, and commonsense reasoning.

22, TITLE: Near-imperceptible Neural Linguistic Steganography via Self-Adjusting Arithmetic Coding
https://www.aclweb.org/anthology/2020.emnlp-main.22
AUTHORS: Jiaming Shen, Heng Ji, Jiawei Han
HIGHLIGHT: In this study, we present a new linguistic steganography method which encodes secret messages using self-adjusting arithmetic coding based on a neural language model.
23, TITLE: Multi-Dimensional Gender Bias Classification
https://www.aclweb.org/anthology/2020.emnlp-main.23
AUTHORS: Emily Dinan, Angela Fan, Ledell Wu, Jason Weston, Douwe Kiela, Adina Williams
HIGHLIGHT: In this work, we propose a novel, general framework that decomposes gender bias in text along several pragmatic and semantic dimensions: bias from the gender of the person being spoken about, bias from the gender of the person being spoken to, and bias from the gender of the speaker.

24, TITLE: FIND: Human-in-the-Loop Debugging Deep Text Classifiers
https://www.aclweb.org/anthology/2020.emnlp-main.24
AUTHORS: Piyawat Lertvittayakumjorn, Lucia Specia, Francesca Toni
HIGHLIGHT: In this paper, we propose FIND - a framework which enables humans to debug deep learning