Open Source Used in Network Insight for Resources Release 2.1

Cisco Systems, Inc.
www.cisco.com

Cisco has more than 200 offices worldwide. Addresses, phone numbers, and fax numbers are listed on the Cisco website at www.cisco.com/go/offices.

Text Part Number: 78EE117C99-208090335

This document contains licenses and notices for open source software used in this product. With respect to the free/open source software listed in this document, if you have any questions or wish to receive a copy of any source code to which you may be entitled under the applicable free/open source license(s) (such as the GNU Lesser/General Public License), please contact us at [email protected]. In your requests please include the following reference number: 78EE117C99-208090335.

Contents

1.1 asgiref 3.1.4
    1.1.1 Available under license
1.2 buger/jsonparser master
    1.2.1 Available under license
1.3 busybox 1.21.1 :7142499350fd020d6d0499a7ff65fc32
    1.3.1 Available under license
1.4 classnames 2.2.3
    1.4.1 Available under license
1.5 classnames 2.2.3
    1.5.1 Available under license
1.6 Click 7.0
    1.6.1 Available under license
1.7 com.github.spullara.mustache.java/compiler 0.9.4
    1.7.1 Available under license
1.8 com.google.code.gson/gson 2.7
    1.8.1 Available under license
1.9 commons-io 2.5
    1.9.1 Available under license
1.10 commons-lang3 3.0
    1.10.1 Available under license
1.11 confluent-kafka 0.11.6
    1.11.1 Available under license
1.12 confluent-kafka-go 0.11.0
    1.12.1 Available under license
1.13 d3 5.0.0
    1.13.1 Available under license
1.14 deepdiff 4.0.6
    1.14.1 Available under license
1.15 disruptor 3.3.4
    1.15.1 Available under license
1.16 dlintw/goconf master
    1.16.1 Available under license
1.17 eapache/queue master
    1.17.1 Available under license
1.18 elasticsearch 6.3.1
    1.18.1 Available under license
1.19 Elasticsearch Rest Client 5.6.2
    1.19.1 Available under license
1.20 elasticsearch-curator 5.6.0
    1.20.1 Available under license
1.21 elasticsearch-dsl 6.3.1
    1.21.1 Available under license
1.22 elasticsearch-python 6.3.1
    1.22.1 Available under license
1.23 faker 4.1.0
    1.23.1 Available under license
1.24 faker 4.1.0
    1.24.1 Available under license
1.25 Flask 1.0.2
    1.25.1 Available under license
1.26 Flask-Cors 3.0.7
    1.26.1 Available under license
1.27 flink 1.4.2_Scala_2.11
    1.27.1 Available under license
1.28 fluent-logger-golang v1.4.0
    1.28.1 Available under license
1.29 fuzzy 0.1.0
    1.29.1 Available under license
1.30 fuzzymatcher 0.1.0
1.31 fwd v1.0.0
    1.31.1 Available under license
1.32 github.com/beorn7/perks 3a771d992973f24aa725d07868b467d1ddfceafb :1.0
    1.32.1 Available under license
1.33 go-ini/ini v1.42.0
    1.33.1 Available under license
1.34 go-resiliency 1.0.0
    1.34.1 Available under license
1.35 go-spew v1.1.0
    1.35.1 Available under license
1.36 go-xerial-snappy master
    1.36.1 Available under license
1.37 golang-snappy v0.0.1
    1.37.1 Available under license
1.38 golang/genproto master
    1.38.1 Available under license
1.39 golang/grpc v1.21.0-dev
    1.39.1 Available under license
1.40 golang/x/crypto master
    1.40.1 Available under license
1.41 golang/x/net master
    1.41.1 Available under license
1.42 golang/x/sys master
    1.42.1 Available under license
1.43 golang/x/text master
    1.43.1 Available under license
1.44 golang_protobuf_extensions v1.0.1
    1.44.1 Available under license
1.45 gorilla-mux v1.7.0
    1.45.1 Available under license
1.46 gorilla-websocket v1.4.0
    1.46.1 Available under license
1.47 grpcio 1.19.0
    1.47.1 Available under license
1.48 Guava 18.0
    1.48.1 Available under license
1.49 httpclient 4.5.6
    1.49.1 Available under license
1.50 influxdb v1.2.0
    1.50.1 Available under license
1.51 itsdangerous 1.1.0
    1.51.1 Available under license
1.52 jackson-core 2.7.5
    1.52.1 Available under license
1.53 jackson-core 2.9.4
    1.53.1 Available under license
1.54 jinja2 2.10.3
    1.54.1 Available under license
1.55 js-cookie 2.2.0
    1.55.1 Available under license
1.56 json-cpp 1.8.4
    1.56.1 Available under license
1.57 json-path 2.3.0
    1.57.1 Available under license
1.58 json-schema-faker 0.5.0-rc9
    1.58.1 Available under license
1.59 jsonpickle 1.2
    1.59.1 Available under license
1.60 kafka-python 1.4.6
    1.60.1 Available under license
1.61 kafka-python 1.4.7
    1.61.1 Available under license
1.62 klauspost/crc32 v1.2.0
    1.62.1 Available under license
1.63 libcurl 7.61.1
    1.63.1 Available under license
1.64 librdkafka 0.11.6
    1.64.1 Available under license
1.65 libseccomp 2.3.1 :3.el7
    1.65.1 Available under license
1.66 libseccomp 2.3.1 :3.el7
    1.66.1 Available under license
1.67 libsodium 1.0.16 :1.0
    1.67.1 Available under license
1.68 libzmq 4.3.2
    1.68.1 Available under license
1.69 lodash 4.17.10
    1.69.1 Available under license
1.70 logrus_fluent v0.5.1
    1.70.1 Available under license
1.71 luqum 0.7.5
    1.71.1 Available under license
1.72 MarkupSafe 1.1.1
    1.72.1 Available under license
1.73 mockito-core 1.10.19
1.74 moment 2.20.1
    1.74.1 Available under license
1.75 netaddr 0.7.19
    1.75.1 Available under license
1.76 numpy 1.16.1
    1.76.1 Available under license
1.77 numpy_Python 1.16.1
    1.77.1 Available under license
1.78 object-assign 4.1.1
    1.78.1 Available under license
1.79 object-assign 4.1.1
    1.79.1 Available under license
1.80 olivere-elastic 6.2.16
    1.80.1 Available under license
1.81 orderedset 3.1.1
    1.81.1 Available under license
1.82 org.apache.kafka/kafka-streams 2.1.1
    1.82.1 Available under license
1.83 org.springframework.boot 2.0.1.RELEASE
    1.83.1 Available under license
1.84 org.springframework.kafka/spring-kafka 2.1.9.RELEASE
    1.84.1 Available under license
1.85 perks v1.0.0
    1.85.1 Available under license
1.86 pierrec/lz4 v2.2.0
    1.86.1 Available under license
1.87 pierrec/xxHash v0.1.5
    1.87.1 Available under license
1.88 pkg-errors v0.9
    1.88.1 Available under license
1.89 ply 3.11
1.90 prometheus v2.5.0
    1.90.1 Available under license
1.91 prop-types 15.6.1
    1.91.1 Available under license
1.92 prop-types 15.6.1
    1.92.1 Available under license
1.93 protobuf 3.7.1
    1.93.1 Available under license
1.94 protobuf java 3.2.0
    1.94.1 Available under license
1.95 protobuf-1.2.0 1.2.0
    1.95.1 Available under license
1.96 protobuf-java-format 1.4
    1.96.1 Available under license
1.97 proxy-polyfill 0.3.0
    1.97.1 Available under license
1.98 query-string 6.1.0
    1.98.1 Available under license
1.99 query-string 5.1.1
    1.99.1 Available under license
1.100 rc-calendar 9.6.2
    1.100.1 Available under license
1.101 rc-time-picker 3.3.1
    1.101.1 Available under license
1.102 rcrowley/go-metrics master
    1.102.1 Available under license
1.103 react 16.3.2
    1.103.1 Available under license
1.104 react-addons-shallow-compare 15.6.2
    1.104.1 Available under license
1.105 react-dates 12.7.1
    1.105.1 Available under license
1.106 react-datetime 2.14.0
    1.106.1 Available under license
1.107 react-dom 16.3.2
    1.107.1 Available under license
1.108 react-form 3.5.5
    1.108.1 Available under license
1.109 react-onclickoutside 6.7.1
1.110 react-popper 0.10.1
    1.110.1 Available under license
1.111 react-redux 5.0.7
    1.111.1 Available under license
1.112 react-router-dom 4.2.2
1.113 react-router-redux 5.0.0
    1.113.1 Available under license
1.114 react-router-redux 5.0.0-alpha.9
    1.114.1 Available under license
1.115 react-select 1.2.1
    1.115.1 Available under license
1.116 redux 4.0.0
1.117 redux-promise-middleware 5.1.1
    1.117.1 Available under license
1.118 sarama-cluster.v2 v2.1.15
1.119 scala-compiler 2.11.0
1.120 schedule-python 0.5.0
    1.120.1 Available under license
1.121 shopify/sarama v1.11.0
    1.121.1 Available under license
1.122 simplejson 3.16.0
    1.122.1 Available under license
1.123 sirupsen/logrus 0.10.0
    1.123.1 Available under license
1.124 spring-boot-starter-tomcat 2.0.3
1.125 tinylib/msgp 1.0.2
    1.125.1 Available under license
1.126 Werkzeug 0.16.0
    1.126.1 Available under license
1.127 whatwg-fetch 2.0.4
    1.127.1 Available under license
1.128 zlib 1.2.7
    1.128.1 Available under license

1.1 asgiref 3.1.4

1.1.1 Available under license :

Copyright (c) Django Software Foundation and individual contributors.
