Recurrent Neural Network with Word Embedding for Complaint Classification
Total pages: 16
File type: PDF, size: 1020 KB
Recommended publications
Malware Classification with BERT
Malware Classification with Word Embeddings Generated by BERT and Word2Vec. Master's project presented to the Department of Computer Science, San José State University, in partial fulfillment of the requirements for the degree, by Joel Lawrence Alvares, May 2021 (SJSU ScholarWorks, Master's Projects: https://scholarworks.sjsu.edu/etd_projects). Project committee: Prof. Fabio Di Troia, Prof. William Andreopoulos, and Prof. Katerina Potika, Department of Computer Science.

ABSTRACT: Malware classification is used to distinguish unique types of malware from each other. This project aims to carry out malware classification using word embeddings, which are used in Natural Language Processing (NLP) to identify and evaluate the relationships between the words of a sentence. Word embeddings generated by BERT and Word2Vec are computed for malware samples to carry out multi-class classification. BERT is a transformer-based pre-trained NLP model that can be used for a wide range of tasks such as question answering, paraphrase generation, and next-sentence prediction. However, the attention mechanism of a pre-trained BERT model can also be used in malware classification by capturing information about the relation between each opcode and every other opcode belonging to a malware family.
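The pipeline sketched in the abstract can be illustrated with a short, hedged example: opcode sequences are treated as "sentences", Word2Vec embeds them, and an ordinary classifier separates malware families. The opcode sequences, labels, and hyperparameters below are illustrative assumptions, not taken from the thesis.

```python
# Minimal sketch: Word2Vec opcode embeddings feeding a multi-class classifier.
import numpy as np
from gensim.models import Word2Vec
from sklearn.linear_model import LogisticRegression

# Each malware sample is a sequence of opcodes extracted from disassembly (toy data).
samples = [
    ["mov", "push", "call", "add", "ret"],
    ["xor", "mov", "jmp", "cmp", "jne"],
    ["push", "push", "call", "pop", "ret"],
]
labels = [0, 1, 0]  # illustrative malware-family ids

# Train Word2Vec on the opcode "sentences" so related opcodes get nearby vectors.
w2v = Word2Vec(sentences=samples, vector_size=32, window=3, min_count=1, epochs=50)

# Represent each sample as the mean of its opcode vectors, then classify.
def embed(seq):
    return np.mean([w2v.wv[op] for op in seq], axis=0)

X = np.stack([embed(s) for s in samples])
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.predict(X))
```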
Sentence Modeling with Gated Recursive Neural Network
Xinchi Chen, Xipeng Qiu, Chenxi Zhu, Shiyu Wu, Xuanjing Huang. Shanghai Key Laboratory of Intelligent Information Processing, Fudan University; School of Computer Science, Fudan University; 825 Zhangheng Road, Shanghai, China. {xinchichen13, xpqiu, czhu13, syu13, xjhuang}@fudan.edu.cn

Abstract: Recently, neural network based sentence modeling methods have achieved great progress. Among these methods, the recursive neural networks (RecNNs) can effectively model the combination of the words in a sentence. However, RecNNs need a given external topological structure, like a syntactic tree. In this paper, we propose a gated recursive neural network (GRNN) to model sentences, which employs a full binary tree (FBT) structure to control the combinations in the recursive structure. By introducing two kinds of gates, our model can better model the complicated combinations of features. Experiments on three text classification datasets show the effectiveness of our model.

[Figure 1: Example of Gated Recursive Neural Networks (GRNNs), both built over the sentence "I cannot agree with you more". Left: a GRNN using a directed acyclic graph (DAG) structure. Right: a GRNN using a full binary tree (FBT) structure. Green, gray, and white nodes illustrate positive, negative, and neutral sentiments respectively.]

1 Introduction: Recently, neural network based sentence modeling approaches have been increasingly focused on for their ability to minimize the effort spent on feature engineering. [...] to model sentences. However, the DAG structure is relatively complicated: the number of hidden neurons increases quadratically with the length of the sentence, so grConv cannot effectively deal with long sentences. Inspired by grConv, we propose a gated recursive neural network (GRNN) for sentence modeling. Different from grConv, we use the full binary tree (FBT) as the topological structure to recursively model the word combinations, as shown in Figure 1.
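As a rough illustration of the gated recursive-composition idea (a grConv-style pyramid rather than the paper's exact full-binary-tree formulation), the sketch below merges adjacent word vectors level by level with a learned three-way gate until a single sentence vector remains. Dimensions, the gating form, and initialization are assumptions.

```python
# Minimal sketch of gated recursive composition over adjacent word vectors.
import torch
import torch.nn as nn

class GatedCompose(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Linear(2 * dim, 3)        # weights for left child, right child, candidate
        self.candidate = nn.Linear(2 * dim, dim)

    def forward(self, left, right):
        pair = torch.cat([left, right], dim=-1)
        g = torch.softmax(self.gate(pair), dim=-1)   # 3-way gate over {left, right, candidate}
        cand = torch.tanh(self.candidate(pair))      # newly composed candidate vector
        return g[..., 0:1] * left + g[..., 1:2] * right + g[..., 2:3] * cand

def encode(words, compose):
    # words: list of (dim,) tensors; repeatedly merge adjacent pairs, one level at a time.
    nodes = words
    while len(nodes) > 1:
        nodes = [compose(nodes[i], nodes[i + 1]) for i in range(len(nodes) - 1)]
    return nodes[0]

dim = 16
compose = GatedCompose(dim)
sentence = [torch.randn(dim) for _ in range(5)]
print(encode(sentence, compose).shape)   # torch.Size([16])
```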
Combining Recurrent Neural Networks and Adversarial Training for Human Motion Modelling, Synthesis and Control
Zhiyong Wang (Institute of Computing Technology, CAS; University of Chinese Academy of Sciences), Jinxiang Chai (Texas A&M University), Shihong Xia (Institute of Computing Technology, CAS).

ABSTRACT: This paper introduces a new generative deep learning network for human motion synthesis and control. Our key idea is to combine recurrent neural networks (RNNs) and adversarial training for human motion modeling. We first describe an efficient method for training an RNN model from prerecorded motion data. We implement recurrent neural networks with long short-term memory (LSTM) cells because they are capable of handling the nonlinear dynamics and long-term temporal dependencies present in human motions. Next, we train a refiner network using an adversarial loss, similar to Generative Adversarial Networks (GANs), such that the refined motion sequences are indistinguishable from real motion capture data using a discriminative network.

[From the paper:] Our method generates motions from the generator using recurrent neural networks (RNNs) and refines the generated motion using an adversarial neural network which we call the "refiner network". Figure 2 gives an overview of our method: a motion sequence X_RNN is generated with the generator G and is refined using the refiner network R. To add realism, we train our refiner network using an adversarial loss, similar to Generative Adversarial Networks (GANs) [9], such that the refined motion sequences X_refine are indistinguishable from real motion capture sequences X_real using a discriminative network D. In addition, we embed contact information into the generative model to further improve the quality of the generated motions. We construct the generator G based on a recurrent neural network.
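A minimal sketch of the generator-plus-refiner idea follows, assuming PyTorch: an LSTM generator produces a motion sequence, a refiner adjusts it, and a discriminator supplies the adversarial signal. The 63-dimensional pose, sequence length, and loss setup are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):                  # G: produces a motion sequence X_RNN
    def __init__(self, dim=63, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, dim)
    def forward(self, seed):                 # seed: (batch, time, dim)
        h, _ = self.lstm(seed)
        return self.out(h)

class Refiner(nn.Module):                    # R: small residual correction -> X_refine
    def __init__(self, dim=63):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
    def forward(self, x):
        return x + self.net(x)

class Discriminator(nn.Module):              # D: scores real vs refined sequences
    def __init__(self, dim=63, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(dim, hidden, batch_first=True)
        self.score = nn.Linear(hidden, 1)
    def forward(self, x):
        h, _ = self.lstm(x)
        return torch.sigmoid(self.score(h[:, -1]))

G, R, D = Generator(), Refiner(), Discriminator()
seed = torch.randn(4, 30, 63)                # 4 sequences, 30 frames, 63-D pose (illustrative)
x_real = torch.randn(4, 30, 63)              # stand-in for motion capture data
x_refined = R(G(seed))
bce = nn.BCELoss()
d_loss = bce(D(x_real), torch.ones(4, 1)) + bce(D(x_refined.detach()), torch.zeros(4, 1))
g_loss = bce(D(x_refined), torch.ones(4, 1))  # adversarial loss pushing refined motion toward "real"
print(d_loss.item(), g_loss.item())
```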
Transfer Learning using CNN for Handwritten Devanagari Character Recognition
Accepted for publication in the IEEE International Conference on Advances in Information Technology (ICAIT 2019); please refer to IEEE Xplore for the final version. Nagender Aneja and Sandhya Aneja, Universiti Brunei Darussalam, Brunei Darussalam. {nagender.aneja, sandhya.aneja} [email protected]

Abstract: This paper presents an analysis of pre-trained models used to recognize handwritten Devanagari alphabets via transfer learning for Deep Convolutional Neural Networks (DCNN). This research implements AlexNet, DenseNet, VGG, and Inception ConvNets as fixed feature extractors. We ran 15 epochs for each of AlexNet, DenseNet 121, DenseNet 201, VGG 11, VGG 16, VGG 19, and Inception V3. Results show that Inception V3 performs best in terms of accuracy, achieving 99% accuracy with an average epoch time of 16.3 minutes, while AlexNet is the fastest at 2.2 minutes per epoch and achieves 98% accuracy.

Index Terms: Deep Learning, CNN, Transfer Learning, Pre-trained, handwritten, recognition, Devanagari

I. Introduction: Handwriting identification is one of the challenging research domains in computer vision. Handwritten character identification systems are useful for bank signatures, postal code recognition, bank cheques, and similar applications. Many researchers working in this area tend to use the same database and compare different techniques that perform better with respect to accuracy, time, complexity, and so on. However, handwritten recognition in non-English scripts is difficult due to complex character shapes. Many Indian languages, such as Hindi, Sanskrit, Nepali, Marathi, Sindhi, and Konkani, use the Devanagari script. The alphabets of the Devanagari Handwritten Character Dataset (DHCD) [1] are shown in Figure 1. Recognition of Devanagari is particularly challenging due to many groups of similar characters. [Figure 1: Devanagari alphabets.]
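The fixed-feature-extractor setup described in the abstract can be sketched as follows, assuming PyTorch/torchvision; the 46-class count for DHCD and all hyperparameters are assumptions for illustration, not values from the paper. The backbone is frozen and only a new classification head is trained.

```python
import torch
import torch.nn as nn
from torchvision import models

num_classes = 46                                  # assumed DHCD class count (36 characters + 10 digits)
model = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)

for p in model.parameters():                      # freeze the backbone: it acts as a fixed feature extractor
    p.requires_grad = False

# Replace the final 1000-way ImageNet classifier with a trainable Devanagari head.
model.classifier[6] = nn.Linear(model.classifier[6].in_features, num_classes)

optimizer = torch.optim.Adam(model.classifier[6].parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of 224x224 RGB character images.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, num_classes, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(loss.item())
```

The same pattern applies to the other backbones mentioned in the abstract (DenseNet, VGG, Inception); only the name of the final layer to replace changes.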
Deep Recursive Neural Networks for Compositionality in Language
Ozan İrsoy and Claire Cardie, Department of Computer Science, Cornell University, Ithaca, NY 14853. [email protected], [email protected]

Abstract: Recursive neural networks comprise a class of architecture that can operate on structured input. They have been previously successfully applied to model compositionality in natural language using parse-tree-based structural representations. Even though these architectures are deep in structure, they lack the capacity for hierarchical representation that exists in conventional deep feed-forward networks as well as in recently investigated deep recurrent neural networks. In this work we introduce a new architecture, a deep recursive neural network (deep RNN), constructed by stacking multiple recursive layers. We evaluate the proposed model on the task of fine-grained sentiment classification. Our results show that deep RNNs outperform associated shallow counterparts that employ the same number of parameters. Furthermore, our approach outperforms previous baselines on the sentiment analysis task, including a multiplicative RNN variant as well as the recently introduced paragraph vectors, achieving new state-of-the-art results. We provide exploratory analyses of the effect of multiple layers and show that they capture different aspects of compositionality in language.

1 Introduction: Deep connectionist architectures involve many layers of nonlinear information processing [1]. This allows them to incorporate meaning representations such that each succeeding layer potentially has a more abstract meaning. Recent advancements in efficiently training deep neural networks have enabled their application to many problems, including those in natural language processing (NLP).
Recurrent Neural Network for Text Classification with Multi-Task Learning
Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence (IJCAI-16). Pengfei Liu, Xipeng Qiu, Xuanjing Huang. Shanghai Key Laboratory of Intelligent Information Processing, Fudan University; School of Computer Science, Fudan University; 825 Zhangheng Road, Shanghai, China. {pfliu14, xpqiu, xjhuang}@fudan.edu.cn

Abstract: Neural network based methods have obtained great progress on a variety of natural language processing tasks. However, in most previous works, the models are learned based on single-task supervised objectives, which often suffer from insufficient training data. In this paper, we use the multi-task learning framework to jointly learn across multiple related tasks. Based on recurrent neural networks, we propose three different mechanisms of sharing information to model text with task-specific and shared layers. The entire network is trained jointly on all these tasks. Experiments on four benchmark text classification tasks show that our [...]

[From the introduction:] [...] are based on unsupervised objectives such as word prediction for training [Collobert et al., 2011; Turian et al., 2010; Mikolov et al., 2013]. This unsupervised pre-training is effective in improving the final performance, but it does not directly optimize the desired task. Multi-task learning utilizes the correlation between related tasks to improve classification by learning tasks in parallel. Motivated by the success of multi-task learning [Caruana, 1997], there are several neural network based NLP models [Collobert and Weston, 2008; Liu et al., 2015b] that utilize multi-task learning to jointly learn several tasks with the aim of mutual benefit. The basic multi-task architecture of these models is to share some lower layers to determine common features.
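The shared-layer mechanism mentioned at the end of the excerpt can be sketched, assuming PyTorch: one recurrent encoder is shared across tasks while each task keeps its own classification head. Task count, vocabulary, and layer sizes below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MultiTaskTextClassifier(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=100, hidden=128, task_classes=(2, 5)):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.shared_lstm = nn.LSTM(embed_dim, hidden, batch_first=True)            # shared layer
        self.heads = nn.ModuleList(nn.Linear(hidden, c) for c in task_classes)     # task-specific layers

    def forward(self, tokens, task_id):
        h, _ = self.shared_lstm(self.embed(tokens))
        return self.heads[task_id](h[:, -1])       # classify from the final hidden state

model = MultiTaskTextClassifier()
batch = torch.randint(0, 10000, (4, 20))           # 4 sentences of 20 token ids
print(model(batch, task_id=0).shape)               # torch.Size([4, 2])
print(model(batch, task_id=1).shape)               # torch.Size([4, 5])
```

During training, batches from the different tasks are interleaved so the shared LSTM learns features common to all of them while each head specializes.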
Learned in Speech Recognition: Contextual Acoustic Word Embeddings
Shruti Palaskar, Vikas Raunak and Florian Metze. Carnegie Mellon University, Pittsburgh, PA, U.S.A. {spalaska | vraunak | fmetze} [email protected]

ABSTRACT: End-to-end acoustic-to-word speech recognition models have recently gained popularity because they are easy to train, scale well to large amounts of training data, and do not require a lexicon. In addition, word models may also be easier to integrate with downstream tasks such as spoken language understanding, because inference (search) is much simplified compared to phoneme, character or any other sort of sub-word units. In this paper, we describe methods to construct contextual acoustic word embeddings directly from a supervised sequence-to-sequence acoustic-to-word speech recognition model using the learned attention distribution. On a suite of 16 standard sentence evaluation tasks, our embeddings show competitive performance against a word2vec model trained on the [...]

[From the introduction:] [...] model [10, 11, 12] trained for direct Acoustic-to-Word (A2W) speech recognition [13]. Using this model, we jointly learn to automatically segment and classify input speech into individual words, hence getting rid of the problem of chunking or requiring pre-defined word boundaries. As our A2W model is trained at the utterance level, we show that we can not only learn acoustic word embeddings, but also learn them in the proper context of their containing sentence. We also evaluate our contextual acoustic word embeddings on a spoken language understanding task, demonstrating that they can be useful in non-transcription downstream tasks. Our main contributions in this paper are the following: 1. We demonstrate the usability of attention not only for aligning words to acoustic frames without any forced alignment but also for constructing Contextual Acoustic Word Embeddings (CAWE).
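One way to picture the CAWE construction (a hedged simplification, not the authors' code) is an attention-weighted average of the encoder's acoustic frame representations for each decoded word; shapes below are illustrative.

```python
import torch

num_frames, enc_dim = 120, 256
encoder_states = torch.randn(num_frames, enc_dim)      # acoustic frame representations for one utterance
attention = torch.softmax(torch.randn(num_frames), 0)  # attention distribution for one output word

# Contextual acoustic word embedding for that word: attention-weighted sum over frames.
# Context comes from the encoder having processed the whole utterance.
cawe = attention @ encoder_states
print(cawe.shape)   # torch.Size([256])
```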
Analysis Methods in Neural Language Processing: a Survey
Yonatan Belinkov (1, 2) and James Glass (1). (1) MIT Computer Science and Artificial Intelligence Laboratory; (2) Harvard School of Engineering and Applied Sciences, Cambridge, MA, USA. {belinkov, glass} [email protected]

Abstract: The field of natural language processing has seen impressive progress in recent years, with neural network models replacing many of the traditional systems. A plethora of new models have been proposed, many of which are thought to be opaque compared to their feature-rich counterparts. This has led researchers to analyze, interpret, and evaluate neural networks in novel and more fine-grained ways. In this survey paper, we review analysis methods in neural language processing, categorize them according to prominent research trends, highlight existing limitations, and point to potential directions for future work.

1 Introduction: The rise of deep learning has transformed the field of natural language processing (NLP) in recent years. [...] the networks in different ways. Others strive to better understand how NLP models work. This theme of analyzing neural networks has connections to the broader work on interpretability in machine learning, along with specific characteristics of the NLP field. Why should we analyze our neural NLP models? To some extent, this question falls into the larger question of interpretability in machine learning, which has been the subject of much debate in recent years. Arguments in favor of interpretability in machine learning usually mention goals like accountability, trust, fairness, safety, and reliability (Doshi-Velez and Kim, 2017; Lipton, 2016). Arguments against interpretability typically stress performance as the most important desideratum. All these arguments naturally apply to machine learning applications in NLP. In the context of NLP, this question needs to be understood in light of earlier NLP work, often referred to as feature-rich or feature-engineered systems.
Lecture 26 Word Embeddings and Recurrent Nets
CS447: Natural Language Processing, http://courses.engr.illinois.edu/cs447. Lecture 26: Word Embeddings and Recurrent Nets. Julia Hockenmaier, [email protected], 3324 Siebel Center.

Where we're at: Lecture 25: Word embeddings and neural LMs; Lecture 26: Recurrent networks; Lecture 27: Sequence labeling and Seq2Seq; Lecture 28: Review for the final exam; Lecture 29: In-class final exam.

What are neural nets? The simplest variant is a single-layer feedforward net. For binary classification tasks there is a single output unit (a scalar y) over an input layer (vector x): return 1 if y > 0.5, otherwise return 0. For multiclass classification tasks there are K output units (a vector y) over an input layer (vector x); each output unit y_i corresponds to class i, and we return argmax_i(y_i).

Multi-layer feedforward networks: we can generalize this to multi-layer feedforward nets, with an input layer (vector x), hidden layers (vectors h_1 ... h_n), and an output layer (vector y).

Multiclass models: softmax(y_i). Multiclass classification means predicting one of K classes and returning the class i with the highest score: argmax_i(y_i). In neural networks this is typically done with the softmax function, which maps a real-valued vector in R^N into a distribution over the N outputs. For a vector z = (z_0 ... z_K): P(i) = softmax(z_i) = exp(z_i) / Σ_{k=0..K} exp(z_k). (NB: this is just logistic regression.)

Neural language models: LMs define a distribution over strings: P(w_1 ... w_k). LMs factor P(w_1 ... w_k) into the probability of each word: P(w_1 ... w_k) = P(w_1) · P(w_2 | w_1) · P(w_3 | w_1 w_2) · ... · P(w_k | w_1 ... w_{k-1}). A neural LM needs to define a distribution over the V words in the vocabulary, conditioned on the preceding words.
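A tiny numeric illustration of the softmax mapping on these slides, with arbitrary example scores:

```python
import numpy as np

z = np.array([2.0, 1.0, 0.1])            # raw scores for K = 3 classes
p = np.exp(z) / np.exp(z).sum()          # P(i) = exp(z_i) / sum_k exp(z_k)
print(p, p.sum(), p.argmax())            # probabilities sum to 1; argmax picks the predicted class
```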
An Overview of Deep Learning Strategies for Time Series Prediction
Rodrigo Neves, Instituto Superior Técnico, Lisboa, Portugal. [email protected]. June 2018.

Abstract: Deep learning is getting a lot of attention in the last few years, mainly due to the state-of-the-art results obtained in different areas like object detection, natural language processing, and sequential modeling, among many others. Time series problems are a special case of sequential data, where deep learning models can be applied. The standard option for this type of problem is Recurrent Neural Networks (RNNs), but recent results support the idea that Convolutional Neural Networks (CNNs) can also be applied to time series with good results. This raises the following question: which are the best attributes and architectures to apply in time series prediction problems? We assessed the current state of deep learning applied to time series and studied which are the most promising topologies and characteristics on sequential tasks that are worth exploring.

[From the introduction:] [...] the Box-Jenkins methodology [2], and were developed mainly in the area of econometrics and statistics. Driven by the rise of popularity and attention in the area of deep learning, due to the state-of-the-art results obtained in several areas, Artificial Neural Networks (ANNs), which are non-linear function approximators, have also been receiving increasing attention in the time series field. This rise of popularity is associated with the breakthroughs achieved in this area, with the successful application of CNNs and RNNs to sequential modeling problems, with promising results [3]. RNNs are a type of artificial neural network that were specially designed to deal with sequential tasks, and thus can be used on time series.
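The two options the abstract compares can be sketched side by side, assuming PyTorch; the window length, layer sizes, and data are illustrative, not from the thesis.

```python
import torch
import torch.nn as nn

window = 24                       # past 24 observations predict the next value (assumption)
x = torch.randn(8, window, 1)     # batch of 8 univariate windows

# Option 1: recurrent model (LSTM) for one-step-ahead prediction.
lstm = nn.LSTM(input_size=1, hidden_size=32, batch_first=True)
head = nn.Linear(32, 1)
h, _ = lstm(x)
rnn_forecast = head(h[:, -1])     # (8, 1) next-step prediction from the RNN

# Option 2: 1-D convolutional model over the same window.
cnn = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(16, 1),
)
cnn_forecast = cnn(x.transpose(1, 2))   # Conv1d expects (batch, channels, time)
print(rnn_forecast.shape, cnn_forecast.shape)
```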
Deep Learning Architectures for Sequence Processing
Speech and Language Processing, Daniel Jurafsky & James H. Martin. Copyright © 2021. All rights reserved. Draft of September 21, 2021. Chapter 9: Deep Learning Architectures for Sequence Processing. "Time will explain." (Jane Austen, Persuasion)

Language is an inherently temporal phenomenon. Spoken language is a sequence of acoustic events over time, and we comprehend and produce both spoken and written language as a continuous input stream. The temporal nature of language is reflected in the metaphors we use; we talk of the flow of conversations, news feeds, and twitter streams, all of which emphasize that language is a sequence that unfolds in time. This temporal nature is reflected in some of the algorithms we use to process language. For example, the Viterbi algorithm applied to HMM part-of-speech tagging proceeds through the input a word at a time, carrying forward information gleaned along the way. Yet other machine learning approaches, like those we've studied for sentiment analysis or other text classification tasks, don't have this temporal nature: they assume simultaneous access to all aspects of their input. The feedforward networks of Chapter 7 also assumed simultaneous access, although they also had a simple model for time. Recall that we applied feedforward networks to language modeling by having them look only at a fixed-size window of words, and then sliding this window over the input, making independent predictions along the way. Fig. 9.1, reproduced from Chapter 7, shows a neural language model with window size 3 predicting what word follows the input "for all the". Subsequent words are predicted by sliding the window forward a word at a time.
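The sliding-window scheme recalled in this excerpt can be sketched roughly as follows (an illustrative toy, not the book's Fig. 9.1 model): a fixed window of the three previous words is embedded, concatenated, and fed to a feedforward net that scores the next word; the window then slides forward one position. Vocabulary and sizes are assumptions.

```python
import torch
import torch.nn as nn

vocab_size, embed_dim, window = 5000, 64, 3
embed = nn.Embedding(vocab_size, embed_dim)
model = nn.Sequential(nn.Flatten(), nn.Linear(window * embed_dim, 128), nn.Tanh(),
                      nn.Linear(128, vocab_size))

tokens = torch.randint(0, vocab_size, (10,))             # a 10-word "sentence" of word ids
for t in range(window, len(tokens)):
    context = embed(tokens[t - window:t]).unsqueeze(0)    # (1, 3, 64) window of embeddings
    logits = model(context)                               # scores over the whole vocabulary
    prediction = logits.argmax(dim=-1)                    # each step's prediction is made independently
```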
Improving Gated Recurrent Unit Predictions with Univariate Time Series Imputation Techniques
(IJACSA) International Journal of Advanced Computer Science and Applications, Vol. 10, No. 12, 2019. Anibal Flores (Grupo de Investigación en Ciencia de Datos, Universidad Nacional de Moquegua, Moquegua, Perú), Hugo Tito (E.P. Ingeniería de Sistemas e Informática, Universidad Nacional de Moquegua), Deymor Centty (E.P. Ingeniería Ambiental, Universidad Nacional de Moquegua).

Abstract: The main objective of the work presented in this paper is to improve the quality of the predictions made with the recurrent neural network known as the Gated Recurrent Unit (GRU). To do so, instead of making different adjustments to the architecture of the neural network in question, univariate time series imputation techniques such as Local Average of Nearest Neighbors (LANN) and Case Based Reasoning Imputation (CBRi) are used. Experiments were run with different gap sizes, from 1 to 11 consecutive NAs; the best gap size is six consecutive NA values for LANN and two NA values for CBRi.

[From the paper:] Similarly to the analysis performed in [1] for LSTM predictions, Fig. 1 shows a 14-day prediction analysis for the Gated Recurrent Unit (GRU). In the first case, the LANN algorithm is applied with a gap size of 1 NA by rebuilding elements 2, 4, 6, ..., 12 of the GRU-predicted series in order to improve the results. In the second case, LANN is applied to elements 3, 5, 7, ..., 13 in order to improve the results produced by GRU. As can be seen, the GRU results improve only in the second case.
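A minimal sketch of the idea described above, assuming the usual reading of LANN as the average of the nearest preceding and following known values: selected points of a GRU-predicted series are treated as missing and rebuilt from their neighbors. The series values are illustrative, not the paper's data.

```python
import numpy as np

def lann_impute(series):
    """Replace NaNs with the average of the nearest known neighbors on each side."""
    s = series.copy()
    missing = np.where(np.isnan(s))[0]
    known = np.where(~np.isnan(s))[0]
    for i in missing:
        prev = known[known < i].max()
        nxt = known[known > i].min()
        s[i] = (s[prev] + s[nxt]) / 2.0
    return s

gru_pred = np.array([10.0, 11.2, 10.8, 11.5, 12.1, 11.9, 12.4])  # stand-in GRU forecasts
rebuilt = gru_pred.copy()
rebuilt[1::2] = np.nan            # blank positions 2, 4, 6, ... (1-based), as in the first case above
print(lann_impute(rebuilt))
```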