Assessing BERT As a Distributional Semantics Model


What do you mean, BERT? Assessing BERT as a Distributional Semantics Model

Timothee Mickus, Université de Lorraine, CNRS, ATILF, [email protected]
Denis Paperno, Utrecht University, [email protected]
Mathieu Constant, Université de Lorraine, CNRS, ATILF, [email protected]
Kees van Deemter, Utrecht University, [email protected]

Proceedings of the Society for Computation in Linguistics (SCiL) 2020, Vol. 3, Article 34, pages 350-361. New Orleans, Louisiana, January 2-5, 2020.
Recommended Citation: Mickus, Timothee; Paperno, Denis; Constant, Mathieu; and van Deemter, Kees (2020) "What do you mean, BERT? Assessing BERT as a Distributional Semantics Model," Proceedings of the Society for Computation in Linguistics: Vol. 3, Article 34. DOI: https://doi.org/10.7275/t778-ja71. Available at: https://scholarworks.umass.edu/scil/vol3/iss1/34

Abstract

Contextualized word embeddings, i.e. vector representations for words in context, are naturally seen as an extension of previous non-contextual distributional semantic models. In this work, we focus on BERT, a deep neural network that produces contextualized embeddings and has set the state-of-the-art in several semantic tasks, and study the semantic coherence of its embedding space. While showing a tendency towards coherence, BERT does not fully live up to the natural expectations for a semantic vector space. In particular, we find that the position of the sentence in which a word occurs, while having no meaning correlates, leaves a noticeable trace on the word embeddings and disturbs similarity relationships.

1 Introduction

A recent success story of NLP, BERT (Devlin et al., 2018) stands at the crossroad of two key innovations that have brought about significant improvements over previous state-of-the-art results. On the one hand, BERT models are an instance of contextual embeddings (McCann et al., 2017; Peters et al., 2018), which have been shown to be subtle and accurate representations of words within sentences. On the other hand, BERT is a variant of the Transformer architecture (Vaswani et al., 2017) which has set a new state-of-the-art on a wide variety of tasks ranging from machine translation (Ott et al., 2018) to language modeling (Dai et al., 2019). BERT-based models have significantly increased state-of-the-art over the GLUE benchmark for natural language understanding (Wang et al., 2019b) and most of the best scoring models for this benchmark include or elaborate on BERT. Using BERT representations has become in many cases a new standard approach: for instance, all submissions at the recent shared task on gendered pronoun resolution (Webster et al., 2019) were based on BERT. Furthermore, BERT serves both as a strong baseline and as a basis for a fine-tuned state-of-the-art word sense disambiguation pipeline (Wang et al., 2019a).

Analyses aiming to understand the mechanical behavior of Transformers in general, and BERT in particular, have suggested that they compute word representations through implicitly learned syntactic operations (Raganato and Tiedemann, 2018; Clark et al., 2019; Coenen et al., 2019; Jawahar et al., 2019, a.o.): representations computed through the 'attention' mechanisms of Transformers can arguably be seen as weighted sums of intermediary representations from the previous layer, with many attention heads assigning higher weights to syntactically related tokens (however, contrast with Brunner et al., 2019; Serrano and Smith, 2019).

Complementing these previous studies, in this paper we adopt a more theory-driven lexical semantic perspective. While a clear parallel was established between 'traditional' non-contextual embeddings and the theory of distributional semantics (a.o. Lenci, 2018; Boleda, 2019), this link is not automatically extended to contextual embeddings: some authors (Westera and Boleda, 2019) even explicitly consider only "context-invariant" representations as distributional semantics. Hence we study to what extent BERT, as a contextual embedding architecture, satisfies the properties expected from a natural contextualized extension of distributional semantics models (DSMs).

DSMs assume that meaning is derived from use in context. DSMs are nowadays systematically represented using vector spaces (Lenci, 2018). They generally map each word in the domain of the model to a numeric vector on the basis of distributional criteria; vector components are inferred from text data. DSMs have also been computed for linguistic items other than words, e.g., word senses—based both on meaning inventories (Rothe and Schütze, 2015) and word sense induction techniques (Bartunov et al., 2015)—or meaning exemplars (Reisinger and Mooney, 2010; Erk and Padó, 2010; Reddy et al., 2011). The default approach has however been to produce representations for word types. Word properties encoded by DSMs vary from morphological information (Marelli and Baroni, 2015; Bonami and Paperno, 2018) to geographic information (Louwerse and Zwaan, 2009), to social stereotypes (Bolukbasi et al., 2016) and to referential properties (Herbelot and Vecchi, 2015).

A reason why contextualized embeddings have not been equated to distributional semantics may lie in that they are "functions of the entire input sentence" (Peters et al., 2018). Whereas traditional DSMs match word types with numeric vectors, contextualized embeddings produce distinct vectors per token. Ideally, the contextualized nature of these embeddings should reflect the semantic nuances that context induces in the meaning of a word—with varying degrees of subtlety, ranging from broad word-sense disambiguation (e.g. 'bank' as a river embankment or as a financial institution) to narrower subtypes of word usage ('bank' as a corporation or as a physical building) and to more context-specific nuances.

Regardless of how apt contextual embeddings such as BERT are at capturing increasingly finer semantic distinctions, we expect the contextual variation to preserve the basic DSM properties. Namely, we expect that the space structure encodes meaning similarity and that variation within the embedding space is semantic in nature. Similar words should be represented with similar vectors, and only semantically pertinent distinctions should affect these representations. We connect our study with previous work in section 2 before detailing the two approaches we followed. First, we verify in section 3 that BERT embeddings form natural clusters when grouped by word types, which on any account should be groups of similar words and thus be assigned similar vectors. Second, we test in sections 4 and 5 that contextualized word vectors do not encode semantically irrelevant features: in particular, leveraging some knowledge from the architectural design of BERT, we address whether there is no systematic difference between BERT representations in odd and even sentences of running text—a property we refer to as cross-sentence coherence. In section 4, we test whether we can observe cross-sentence coherence for single tokens, whereas in section 5 we study to what extent incoherence of representations across sentences affects the similarity structure of the semantic space. We summarize our findings in section 6.

2 Theoretical background & connections

Word embeddings have been said to be 'all-purpose' representations, capable of unifying the otherwise heterogeneous domain that is NLP (Turney and Pantel, 2010). To some extent this claim spearheaded the evolution of NLP: focus recently shifted from task-specific architectures with limited applicability to universal architectures requiring little to no adaptation (Radford, 2018; Devlin et al., 2018; Radford et al., 2019; Yang et al., 2019; Liu et al., 2019, a.o.).

Word embeddings are linked to the distributional hypothesis, according to which "you shall know a word from the company it keeps" (Firth, 1957). Accordingly, the meaning of a word can be inferred from the effects it has on its context (Harris, 1954); as this framework equates the meaning of a word to the set of its possible usage contexts, it corresponds more to holistic theories of meaning (Quine, 1960, a.o.) than to truth-value accounts (Frege, 1892, a.o.). In early works on word embeddings (Bengio et al., 2003, e.g.), a straightforward parallel between word embeddings and distributional semantics could be made: the former are distributed representations of word meaning, the latter a theory stating that a word's meaning is drawn from its distribution. In short, word embeddings could be understood as a vector-based implementation of the distributional hypothesis. This parallel is much less obvious for contextual embeddings: are constantly changing representations truly an apt description of the meaning of a word?

More precisely, the literature on distributional semantics has put forth and discussed many mathematical properties of embeddings: embeddings are equivalent to count-based matrices (Levy and Goldberg, 2014b), expected to be linearly dependent (Arora et al., 2016), expressible as a unitary matrix (Smith et al., 2017) or as a perturbation of an identity matrix (Yin and Shen, 2018). All these properties have however been formalized for non-contextual embeddings: they were formulated using the tools of matrix algebra, under the assumption that embedding matrix rows correspond to word types.
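The kind of probe the paper describes can be approximated in a few lines. The following is a minimal sketch, not the authors' implementation, assuming the HuggingFace transformers package and the bert-base-uncased checkpoint: it extracts contextual vectors for the same word type in two different sentences and compares them against a vector for an unrelated word.

```python
# Minimal sketch (not the authors' code): check whether BERT token vectors
# for the same word type are closer to one another than to vectors of a
# different word type, assuming the HuggingFace `transformers` package
# and the `bert-base-uncased` checkpoint.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def word_vector(sentence: str, word: str) -> torch.Tensor:
    """Return the contextual embedding of the first subtoken of `word`."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]   # (seq_len, hidden_size)
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
    return hidden[tokens.index(word)]

cos = torch.nn.functional.cosine_similarity

v_bank_river = word_vector("They walked along the bank of the river.", "bank")
v_bank_money = word_vector("She deposited the check at the bank.", "bank")
v_cat        = word_vector("The cat slept on the warm windowsill.", "cat")

# Same word type in different contexts vs. different word types.
print("bank/bank:", cos(v_bank_river, v_bank_money, dim=0).item())
print("bank/cat: ", cos(v_bank_river, v_cat, dim=0).item())
```

Under the expectations laid out above, the two 'bank' vectors should be markedly closer to each other than either is to the 'cat' vector, even though the two occurrences come from different sentences.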
Recommended publications
  • Malware Classification with BERT
    San Jose State University, SJSU ScholarWorks, Master's Projects, Master's Theses and Graduate Research, Spring 5-25-2021. Malware Classification with Word Embeddings Generated by BERT and Word2Vec. Presented to the Department of Computer Science, San José State University, in partial fulfillment of the requirements for the degree, by Joel Lawrence Alvares, May 2021. The designated project committee approving the project: Prof. Fabio Di Troia, Prof. William Andreopoulos, and Prof. Katerina Potika, Department of Computer Science. ABSTRACT: Malware classification is used to distinguish unique types of malware from each other. This project aims to carry out malware classification using word embeddings, which are used in Natural Language Processing (NLP) to identify and evaluate the relationship between words of a sentence. Word embeddings are generated by BERT and Word2Vec for malware samples to carry out multi-class classification. BERT is a Transformer-based pre-trained natural language processing (NLP) model which can be used for a wide range of tasks such as question answering, paraphrase generation and next sentence prediction. However, the attention mechanism of a pre-trained BERT model can also be used in malware classification by capturing information about the relation between each opcode and every other opcode belonging to a malware family.
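A minimal sketch of the embedding-plus-classifier pipeline the abstract describes, using only the Word2Vec half; it is illustrative rather than the project's code, and the opcode sequences, family labels and hyperparameters are invented.

```python
# Illustrative sketch only (not the project's code): embed opcode sequences
# with gensim's Word2Vec, average the opcode vectors per sample, and train a
# simple multi-class classifier. Opcode data and labels here are made up.
import numpy as np
from gensim.models import Word2Vec
from sklearn.linear_model import LogisticRegression

samples = [
    ["mov", "push", "call", "ret"],         # hypothetical opcode sequences
    ["xor", "mov", "jmp", "call"],
    ["push", "push", "call", "leave"],
]
labels = ["familyA", "familyB", "familyA"]  # hypothetical malware families

w2v = Word2Vec(sentences=samples, vector_size=50, window=3, min_count=1, epochs=50)

def sample_vector(opcodes):
    # Represent a sample as the mean of its opcode embeddings.
    return np.mean([w2v.wv[op] for op in opcodes], axis=0)

X = np.stack([sample_vector(s) for s in samples])
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.predict([sample_vector(["mov", "call", "ret", "push"])]))
```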
  • Combining Semantic and Lexical Measures to Evaluate Medical Terms Similarity?
    Combining semantic and lexical measures to evaluate medical terms similarity. Silvio Domingos Cardoso (1,2), Marcos Da Silveira (1), Ying-Chi Lin (3), Victor Christen (3), Erhard Rahm (3), Chantal Reynaud-Delaître (2), and Cédric Pruski (1). (1) LIST, Luxembourg Institute of Science and Technology, Luxembourg, {silvio.cardoso, marcos.dasilveira, cedric.pruski}@list.lu; (2) LRI, University of Paris-Sud XI, France, [email protected]; (3) Department of Computer Science, Universität Leipzig, Germany, {lin, christen, rahm}@informatik.uni-leipzig.de. Abstract. The use of similarity measures in various domains is cornerstone for different tasks ranging from ontology alignment to information retrieval. To this end, existing metrics can be classified into several categories among which lexical and semantic families of similarity measures predominate but have rarely been combined to complete the aforementioned tasks. In this paper, we propose an original approach combining lexical and ontology-based semantic similarity measures to improve the evaluation of terms relatedness. We validate our approach through a set of experiments based on a corpus of reference constructed by domain experts of the medical field and further evaluate the impact of ontology evolution on the used semantic similarity measures. Keywords: Similarity measures · Ontology evolution · Semantic Web · Medical terminologies. 1 Introduction. Measuring the similarity between terms is at the heart of many research investigations. In ontology matching, similarity measures are used to evaluate the relatedness between concepts from different ontologies [9]. The outcomes are the mappings between the ontologies, increasing the coverage of domain knowledge and optimize semantic interoperability between information systems. In information retrieval, similarity measures are used to evaluate the relatedness between units of language (e.g., words, sentences, documents) to optimize search [39].
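The combination idea can be sketched with off-the-shelf components: a string-based lexical score and an ontology-based semantic score merged as a weighted sum. This is a toy illustration rather than the paper's metric; WordNet (via NLTK) stands in for a medical terminology, and the weight alpha is an arbitrary assumption.

```python
# Toy sketch (not the paper's metric): combine a lexical string similarity
# with an ontology-based semantic similarity as a weighted sum. WordNet via
# NLTK stands in for a medical terminology; the weight alpha is arbitrary.
from difflib import SequenceMatcher
from nltk.corpus import wordnet as wn   # requires: nltk.download("wordnet")

def lexical_sim(a: str, b: str) -> float:
    # Character-level similarity of the two term strings.
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def semantic_sim(a: str, b: str) -> float:
    # Best path-based similarity over all synset pairs of the two terms.
    scores = [s1.path_similarity(s2) or 0.0
              for s1 in wn.synsets(a) for s2 in wn.synsets(b)]
    return max(scores, default=0.0)

def combined_sim(a: str, b: str, alpha: float = 0.5) -> float:
    return alpha * lexical_sim(a, b) + (1 - alpha) * semantic_sim(a, b)

print(combined_sim("tumor", "neoplasm"))   # lexically different, semantically close
print(combined_sim("tumor", "fracture"))   # both scores low
```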
  • BROS: A Pre-Trained Language Model for Understanding Texts in Document
    Under review as a conference paper at ICLR 2021. BROS: A PRE-TRAINED LANGUAGE MODEL FOR UNDERSTANDING TEXTS IN DOCUMENT. Anonymous authors, paper under double-blind review. ABSTRACT: Understanding document from their visual snapshots is an emerging and challenging problem that requires both advanced computer vision and NLP methods. Although the recent advance in OCR enables the accurate extraction of text segments, it is still challenging to extract key information from documents due to the diversity of layouts. To compensate for the difficulties, this paper introduces a pre-trained language model, BERT Relying On Spatiality (BROS), that represents and understands the semantics of spatially distributed texts. Different from previous pre-training methods on 1D text, BROS is pre-trained on large-scale semi-structured documents with a novel area-masking strategy while efficiently including the spatial layout information of input documents. Also, to generate structured outputs in various document understanding tasks, BROS utilizes a powerful graph-based decoder that can capture the relation between text segments. BROS achieves state-of-the-art results on four benchmark tasks: FUNSD, SROIE*, CORD, and SciTSR. Our experimental settings and implementation codes will be publicly available. 1 INTRODUCTION: Document intelligence (DI)1, which understands industrial documents from their visual appearance, is a critical application of AI in business. One of the important challenges of DI is a key information extraction task (KIE) (Huang et al., 2019; Jaume et al., 2019; Park et al., 2019) that extracts structured information from documents such as financial reports, invoices, business emails, insurance quotes, and many others.
  • Information Extraction Based on Named Entity for Tourism Corpus
    Information Extraction based on Named Entity for Tourism Corpus. Chantana Chantrapornchai and Aphisit Tunsakul, Dept. of Computer Engineering, Faculty of Engineering, Kasetsart University, Bangkok, Thailand. Emails: [email protected], [email protected]. Abstract: Tourism information is scattered around nowadays. To search for the information, it is usually time consuming to browse through the results from search engine, select and view the details of each accommodation. In this paper, we present a methodology to extract particular information from full text returned from the search engine to facilitate the users. Then, the users can specifically look to the desired relevant information. The approach can be used for the same task in other domains. The main steps are 1) building training data and 2) building recognition model. First, the tourism data is gathered and the vocabularies are built. The raw corpus is used to train for creating vocabulary embedding. Also, it is used for creating annotated data. The ontology is extracted based on HTML web structure, and the corpus is based on WordNet. For these approaches, the time consuming process is the annotation which is to annotate the type of name entity. In this paper, we target at the tourism domain, and aim to extract particular information helping for ontology data acquisition. We present the framework for the given named entity extraction. Starting from the web information scraping process, the data are selected based on the HTML tag for corpus building. The data is used for model creation for automatic
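As a generic illustration of the named-entity extraction step (not the framework proposed in the paper, which trains its own domain model), a pretrained token-classification pipeline from HuggingFace transformers can tag entities in scraped tourism text.

```python
# Generic illustration (not the paper's framework): run a pretrained
# token-classification (NER) pipeline over a tourism sentence; the default
# English NER checkpoint downloaded by the pipeline is assumed here.
from transformers import pipeline

ner = pipeline("ner", aggregation_strategy="simple")
text = "The Grand Palace Hotel in Bangkok is a short walk from Khao San Road."
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(float(entity["score"]), 3))
```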
  • NLP with BERT: Sentiment Analysis Using SAS® Deep Learning and DLPy, Doug Cairns and Xiangxiang Meng, SAS Institute Inc.
    Paper SAS4429-2020 NLP with BERT: Sentiment Analysis Using SAS® Deep Learning and DLPy Doug Cairns and Xiangxiang Meng, SAS Institute Inc. ABSTRACT A revolution is taking place in natural language processing (NLP) as a result of two ideas. The first idea is that pretraining a deep neural network as a language model is a good starting point for a range of NLP tasks. These networks can be augmented (layers can be added or dropped) and then fine-tuned with transfer learning for specific NLP tasks. The second idea involves a paradigm shift away from traditional recurrent neural networks (RNNs) and toward deep neural networks based on Transformer building blocks. One architecture that embodies these ideas is Bidirectional Encoder Representations from Transformers (BERT). BERT and its variants have been at or near the top of the leaderboard for many traditional NLP tasks, such as the general language understanding evaluation (GLUE) benchmarks. This paper provides an overview of BERT and shows how you can create your own BERT model by using SAS® Deep Learning and the SAS DLPy Python package. It illustrates the effectiveness of BERT by performing sentiment analysis on unstructured product reviews submitted to Amazon. INTRODUCTION Providing a computer-based analog for the conceptual and syntactic processing that occurs in the human brain for spoken or written communication has proven extremely challenging. As a simple example, consider the abstract for this (or any) technical paper. If well written, it should be a concise summary of what you will learn from reading the paper. As a reader, you expect to see some or all of the following: • Technical context and/or problem • Key contribution(s) • Salient result(s) If you were tasked to create a computer-based tool for summarizing papers, how would you translate your expectations as a reader into an implementable algorithm? This is the type of problem that the field of natural language processing (NLP) addresses.
  • Evaluating Lexical Similarity to Build Sentiment Similarity
    Evaluating Lexical Similarity to Build Sentiment Similarity. Grégoire Jadi, Béatrice Daille, Laura Monceaux-Cachard (LINA - Univ. Nantes, {gregoire.jadi, beatrice.daille, laura.monceaux}@univ-nantes.fr) and Vincent Claveau (IRISA–CNRS, France, [email protected]). Abstract: In this article, we propose to evaluate the lexical similarity information provided by word representations against several opinion resources using traditional Information Retrieval tools. Word representation have been used to build and to extend opinion resources such as lexicon, and ontology and their performance have been evaluated on sentiment analysis tasks. We question this method by measuring the correlation between the sentiment proximity provided by opinion resources and the semantic similarity provided by word representations using different correlation coefficients. We also compare the neighbors found in word representations and list of similar opinion words. Our results show that the proximity of words in state-of-the-art word representations is not very effective to build sentiment similarity. Keywords: opinion lexicon evaluation, sentiment analysis, spectral representation, word embedding. 1. Introduction: The goal of opinion mining or sentiment analysis is to identify and classify texts according to the opinion, sentiment or emotion that they convey. It requires a good knowledge of the sentiment properties of each word. This properties can be either discovered automatically by machine learning algorithms, or embedded into language resources. [...] tend or build sentiment lexicons. For example, (Toh and Wang, 2014) uses the syntactic categories from WordNet as features for a CRF to extract aspects and WordNet relations such as antonymy and synonymy to extend an existing opinion lexicons. Similarly, (Agarwal et al., 2011) look for synonyms in WordNet to find the polarity of words
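The evaluation idea in the abstract, correlating similarity from word representations with proximity in opinion resources, can be sketched as follows; this is not the authors' experimental setup, and the vectors, lexicon and binary notion of sentiment proximity are invented stand-ins.

```python
# Hedged sketch (not the paper's setup): correlate cosine similarity between
# word vectors with a sentiment-lexicon "proximity" (1 if two words share
# polarity, 0 otherwise). Vectors and the tiny lexicon are made-up stand-ins.
import numpy as np
from scipy.stats import spearmanr

vectors = {                       # toy word vectors
    "good":  np.array([0.90, 0.10, 0.30]),
    "great": np.array([0.80, 0.20, 0.35]),
    "bad":   np.array([0.10, 0.90, 0.20]),
    "awful": np.array([0.15, 0.85, 0.25]),
}
polarity = {"good": 1, "great": 1, "bad": -1, "awful": -1}   # toy lexicon

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

words = list(vectors)
pairs = [(a, b) for i, a in enumerate(words) for b in words[i + 1:]]
emb_sim  = [cosine(vectors[a], vectors[b]) for a, b in pairs]
sent_sim = [1.0 if polarity[a] == polarity[b] else 0.0 for a, b in pairs]

rho, p = spearmanr(emb_sim, sent_sim)
print(f"Spearman correlation: {rho:.2f} (p={p:.2f})")
```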
  • Using Prior Knowledge to Guide BERT's Attention in Semantic Textual Matching Tasks
    Using Prior Knowledge to Guide BERT's Attention in Semantic Textual Matching Tasks. Tingyu Xia (School of Artificial Intelligence, Jilin University; Key Laboratory of Symbolic Computation and Knowledge Engineering, Jilin University; [email protected]), Yue Wang (School of Information and Library Science, University of North Carolina at Chapel Hill; [email protected]), Yuan Tian∗ (School of Artificial Intelligence, Jilin University; Key Laboratory of Symbolic Computation and Knowledge Engineering, Jilin University; [email protected]), Yi Chang∗ (School of Artificial Intelligence, Jilin University; International Center of Future Science, Jilin University; Key Laboratory of Symbolic Computation and Knowledge Engineering, Jilin University; [email protected]). ABSTRACT: We study the problem of incorporating prior knowledge into a deep Transformer-based model, i.e., Bidirectional Encoder Representations from Transformers (BERT), to enhance its performance on semantic textual matching tasks. By probing and analyzing what BERT has already known when solving this task, we obtain better understanding of what task-specific knowledge BERT needs the most and where it is most needed. The analysis further motivates us to take a different approach than most existing works. Instead of using prior knowledge to create a new training task for fine-tuning [...] 1 INTRODUCTION: Measuring semantic similarity between two pieces of text is a fundamental and important task in natural language understanding. Early works on this task often leverage knowledge resources such as WordNet [28] and UMLS [2] as these resources contain well-defined types and relations between words and concepts. Recent works have shown that deep learning models are more effective on this task by learning linguistic knowledge from large-scale text data and representing text (words and sentences) as continuous trainable embedding vectors [10, 17].
  • Unified Language Model Pre-Training for Natural Language Understanding and Generation
    Unified Language Model Pre-training for Natural Language Understanding and Generation. Li Dong∗, Nan Yang∗, Wenhui Wang∗, Furu Wei∗†, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, Hsiao-Wuen Hon, Microsoft Research. {lidong1,nanya,wenwan,fuwei}@microsoft.com, {xiaodl,yuwan,jfgao,mingzhou,hon}@microsoft.com. Abstract: This paper presents a new UNIfied pre-trained Language Model (UNILM) that can be fine-tuned for both natural language understanding and generation tasks. The model is pre-trained using three types of language modeling tasks: unidirectional, bidirectional, and sequence-to-sequence prediction. The unified modeling is achieved by employing a shared Transformer network and utilizing specific self-attention masks to control what context the prediction conditions on. UNILM compares favorably with BERT on the GLUE benchmark, and the SQuAD 2.0 and CoQA question answering tasks. Moreover, UNILM achieves new state-of-the-art results on five natural language generation datasets, including improving the CNN/DailyMail abstractive summarization ROUGE-L to 40.51 (2.04 absolute improvement), the Gigaword abstractive summarization ROUGE-L to 35.75 (0.86 absolute improvement), the CoQA generative question answering F1 score to 82.5 (37.1 absolute improvement), the SQuAD question generation BLEU-4 to 22.12 (3.75 absolute improvement), and the DSTC7 document-grounded dialog response generation NIST-4 to 2.67 (human performance is 2.65). The code and pre-trained models are available at https://github.com/microsoft/unilm. 1 Introduction: Language model (LM) pre-training has substantially advanced the state of the art across a variety of natural language processing tasks [8, 29, 19, 31, 9, 1].
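The abstract's key mechanism, controlling what context each prediction conditions on through specific self-attention masks, can be pictured with a few toy mask matrices. This is a hedged illustration, not the UniLM implementation; sizes and the convention that 1 means "may attend" are arbitrary choices.

```python
# Hedged sketch (not the UniLM implementation): build the three kinds of
# self-attention masks the abstract describes for a toy sequence.
# mask[i, j] == 1 means "token i may attend to token j".
import numpy as np

def bidirectional_mask(n):
    # Every token sees every token (BERT-style).
    return np.ones((n, n), dtype=int)

def unidirectional_mask(n):
    # Each token sees only itself and tokens to its left (GPT-style).
    return np.tril(np.ones((n, n), dtype=int))

def seq2seq_mask(n_src, n_tgt):
    # Source tokens see the whole source; target tokens see the source
    # plus the already-generated prefix of the target.
    n = n_src + n_tgt
    mask = np.zeros((n, n), dtype=int)
    mask[:, :n_src] = 1
    mask[n_src:, n_src:] = np.tril(np.ones((n_tgt, n_tgt), dtype=int))
    return mask

print(bidirectional_mask(3))
print(unidirectional_mask(3))
print(seq2seq_mask(n_src=3, n_tgt=2))
```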
  • Semantic Computing
    SEMANTIC COMPUTING. Lecture 7: Introduction to Distributional and Distributed Semantics. Dagmar Gromann, International Center For Computational Logic, TU Dresden, 30 November 2018. Overview: • Distributional Semantics • Distributed Semantics – Word Embeddings. Distributional Semantics, Definition: • meaning of a word is the set of contexts in which it occurs • no other information is used than the corpus-derived information about word distribution in contexts (co-occurrence information of words) • semantic similarity can be inferred from proximity in contexts • At the very core: Distributional Hypothesis. Distributional Hypothesis: "similarity of meaning correlates with similarity of distribution" - Harris Z. S. (1954) "Distributional structure". Word, Vol. 10, No. 2-3, pp. 146-162; meaning = use = distribution in context => semantic distance. Remember Lecture 1? Types of Word Meaning: • Encyclopaedic meaning: words provide access to a large inventory of structured knowledge (world knowledge) • Denotational meaning: reference of a word to object/concept or its "dictionary definition" (signifier <-> signified) • Connotative meaning: word meaning is understood by its cultural or emotional association (positive, negative, neutral connotation; e.g. "She's a dragon" in Chinese and English) • Conceptual meaning: word meaning is associated with the mental concepts it gives access to (e.g. prototype theory) • Distributional meaning: "You shall know a word by the company it keeps" (J.R. Firth 1957: 11). John Rupert Firth (1957). "A synopsis of linguistic theory 1930-1955." In Special Volume of the Philological Society. Oxford: Oxford University Press. Distributional Hypothesis in Practice: Study by McDonald and Ramscar (2001): • The man poured from a balack into a handleless cup.
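The definition on these slides (meaning as the set of contexts a word occurs in, similarity as proximity of those contexts) can be made concrete with a tiny count-based model. The following is an illustrative sketch, not part of the lecture materials; the toy corpus and the sentence-wide context window are arbitrary choices.

```python
# Small sketch of the count-based idea: build a word-by-word co-occurrence
# matrix from a toy corpus and read off semantic similarity as cosine
# similarity between co-occurrence rows.
from collections import Counter
from itertools import combinations
import numpy as np

corpus = [
    "the cat drinks milk",
    "the dog drinks water",
    "the cat chases the dog",
    "students drink coffee",
]

window_counts = Counter()
for sentence in corpus:
    tokens = sentence.split()
    for w1, w2 in combinations(tokens, 2):   # whole sentence as the context window
        window_counts[(w1, w2)] += 1
        window_counts[(w2, w1)] += 1

vocab = sorted({t for s in corpus for t in s.split()})
index = {w: i for i, w in enumerate(vocab)}
M = np.zeros((len(vocab), len(vocab)))
for (w1, w2), c in window_counts.items():
    M[index[w1], index[w2]] = c

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9)

print("cat ~ dog:   ", cosine(M[index["cat"]], M[index["dog"]]))
print("cat ~ coffee:", cosine(M[index["cat"]], M[index["coffee"]]))
```

Words that occur with similar neighbours ('cat' and 'dog') end up with more similar co-occurrence rows than words that share no contexts ('cat' and 'coffee').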
  • Latent Semantic Analysis for Text-Based Research
    Behavior Research Methods, Instruments, & Computers 1996, 28 (2), 197-202. ANALYSIS OF SEMANTIC AND CLINICAL DATA, chaired by Matthew S. McGlone, Lafayette College. Latent semantic analysis for text-based research. PETER W. FOLTZ, New Mexico State University, Las Cruces, New Mexico. Latent semantic analysis (LSA) is a statistical model of word usage that permits comparisons of semantic similarity between pieces of textual information. This paper summarizes three experiments that illustrate how LSA may be used in text-based research. Two experiments describe methods for analyzing a subject's essay for determining from what text a subject learned the information and for grading the quality of information cited in the essay. The third experiment describes using LSA to measure the coherence and comprehensibility of texts. One of the primary goals in text-comprehension research is to understand what factors influence a reader's ability to extract and retain information from textual material. The typical approach in text-comprehension research is to have subjects read textual material and then have them produce some form of summary, such as answering questions or writing an essay. This summary permits the experimenter to determine what information the [...] A theoretical approach to studying text comprehension has been to develop cognitive models of the reader's representation of the text (e.g., Kintsch, 1988; van Dijk & Kintsch, 1983). In such a model, semantic information from both the text and the reader's summary are represented as sets of semantic components called propositions. Typically, each clause in a text is represented by a single proposition.
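As a rough illustration of the LSA workflow the abstract describes (not Foltz's implementation), one can build a term-document matrix, reduce it with a truncated SVD, and compare texts in the latent space; the documents and dimensionality below are arbitrary.

```python
# Minimal LSA sketch: term-document matrix -> truncated SVD -> cosine
# similarity between documents in the latent semantic space.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "the student wrote an essay about the text she read",
    "the subject answered questions after reading the passage",
    "rainfall and temperature drive the growth of tropical plants",
]
tdm = TfidfVectorizer().fit_transform(docs)         # documents x terms
lsa = TruncatedSVD(n_components=2, random_state=0)  # low-rank latent space
latent = lsa.fit_transform(tdm)

print(cosine_similarity(latent[0:1], latent[1:2]))  # related: reading/essay
print(cosine_similarity(latent[0:1], latent[2:3]))  # unrelated: plants
```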
  • Question Answering by BERT
    Running head: QUESTION AND ANSWERING USING BERT. Question and Answering Using BERT. Suman Karanjit, Computer Science Major, Minnesota State University Moorhead. Link to GitHub: https://github.com/skaranjit/BertQA. Table of Contents: Abstract; Introduction; SQuAD; BERT Explained; What is BERT?; Architecture; Input Processing; Getting Answer; Setting Up the Environment; ...
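As a hedged sketch of the kind of SQuAD-style extractive question answering the project describes (not the code from the linked repository), the HuggingFace pipeline API can serve a QA model; the checkpoint name below is an assumption, and any SQuAD-fine-tuned model can be substituted.

```python
# Hedged sketch (not the repository's code): extractive question answering
# with a SQuAD-fine-tuned model via the HuggingFace pipeline API.
from transformers import pipeline

qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")
context = (
    "BERT is a Transformer-based language model pretrained with masked "
    "language modeling and next sentence prediction, and it can be "
    "fine-tuned for extractive question answering on SQuAD."
)
result = qa(question="What tasks is BERT pretrained with?", context=context)
print(result["answer"], result["score"])
```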
  • Ying Liu, Xi Yang
    Liu, Y. & Yang, X. (2018). A similar legal case retrieval system by multiple speech question and answer. In Proceedings of The 18th International Conference on Electronic Business (pp. 72-81). ICEB, Guilin, China, December 2-6. A Similar Legal Case Retrieval System by Multiple Speech Question and Answer (Full paper). Ying Liu, School of Computer Science and Information Engineering, Guangxi Normal University, China, [email protected]; Xi Yang*, School of Computer Science and Information Engineering, Guangxi Normal University, China, [email protected]. ABSTRACT: In real life, on the one hand people often lack legal knowledge and legal awareness; on the other hand lawyers are busy, time-consuming, and expensive. As a result, a series of legal consultation problems cannot be properly handled. Although some legal robots can answer some problems about basic legal knowledge that are matched by keywords, they cannot do similar case retrieval and sentencing prediction according to the criminal facts described in natural language. To overcome the difficulty, we propose a similar case retrieval system based on natural language understanding. The system uses online speech synthesis of IFLYTEK and speech reading and writing technology, integrates natural language semantic processing technology and multiple rounds of question-and-answer dialogue mechanism to realise the legal knowledge question and answer with the memory-based context processing ability, and finally retrieves a case that is most similar to the criminal facts that the user consulted. After trial use, the system has a good level of human-computer interaction and high accuracy of information retrieval, which can largely satisfy people's consulting needs for legal issues.
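The retrieval step described above (finding the stored case most similar to the facts a user describes) can be illustrated with a simple TF-IDF baseline; this is not the system's implementation, and the example cases and query are invented.

```python
# Illustrative sketch only (not the system described): retrieve the most
# similar stored case for a user's description using TF-IDF cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

cases = [
    "Defendant stole a bicycle from a locked garage at night.",
    "Driver caused an accident while speeding under the influence of alcohol.",
    "Employee embezzled company funds over a period of two years.",
]
query = "A person took someone else's bike after breaking into their garage."

vectorizer = TfidfVectorizer().fit(cases + [query])
case_vecs = vectorizer.transform(cases)
query_vec = vectorizer.transform([query])

scores = cosine_similarity(query_vec, case_vecs)[0]
best = scores.argmax()
print(f"Most similar case ({scores[best]:.2f}): {cases[best]}")
```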