Stackoverflow vs. Kaggle: A Study of Developer Discussions About Data Science
Total pages: 16
File type: PDF, size: 1,020 KB
Recommended publications
- A Literature Review on Patent Texts Analysis Techniques (Guanlin Li)
International Journal of Knowledge and Language Processing (KLP International, ISSN 2191-2734), Volume 9, Number 3, 2018, pp. 1-15. Guanlin Li, School of Software & Microelectronics, Peking University, Beijing, China. Received Sep 2018; revised Sep 2018.
Abstract: Patent data are expanding explosively with the advent of new technologies, so it is important to put forward methods of automatic patent analysis and to use appropriate techniques to exploit scattered, multi-source and interrelated patent text, in order to improve the efficiency of patent analysis. Many techniques are currently used to process patent intelligence. This literature review focuses on automatic patent text analysis techniques, which use computers to automatically analyze large collections of patent texts and find useful information in them. The techniques are divided into the following categories: semantic-analysis-based techniques, rule-based techniques, machine-learning-based techniques and patent text clustering techniques. Keywords: patent analysis, text mining, patent intelligence. The introduction notes that patents are important sources of technology information of great scientific, technological and commercial value, and that patent information contains more than 90% of the world's scientific and technological …
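As a purely illustrative companion to the last category named above (patent text clustering), the following sketch, which is not taken from the review, clusters a few made-up patent abstracts with TF-IDF features and k-means using scikit-learn; the sample texts and the number of clusters are assumptions.

```python
# Illustrative sketch (not from the review): clustering a handful of
# made-up patent abstracts with TF-IDF features and k-means, as a minimal
# example of the "patent text clustering" family of techniques.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

abstracts = [
    "A battery electrode material with improved lithium ion capacity.",
    "Method for fast charging of lithium ion battery cells.",
    "A neural network accelerator circuit for edge inference.",
    "Hardware architecture for convolutional neural network inference.",
]

# Represent each abstract as a TF-IDF vector, then group them into 2 clusters.
X = TfidfVectorizer(stop_words="english").fit_transform(abstracts)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

for text, label in zip(abstracts, labels):
    print(label, text[:60])
```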
- Full Academic CV: Grigori Fursin, PhD
Current position: VP of MLOps at OctoML.ai (USA). Languages: English (British citizen), French (spoken, intermediate), Russian (native). Address: Paris region, France. Education: PhD in computer science with the ORS award from the University of Edinburgh (2004). Website: cKnowledge.io/@gfursin; LinkedIn: linkedin.com/in/grigorifursin; publications: scholar.google.com/citations?user=IwcnpkwAAAAJ (H-index: 25); personal e-mail: [email protected].
I am a computer scientist, engineer, educator and business executive with an interdisciplinary background in computer engineering, machine learning, physics and electronics. I am passionate about designing efficient systems in terms of speed, energy, accuracy and various costs, bringing deep tech to the real world, teaching, enabling reproducible research and supporting open science.
My academic research (tenured research scientist at Inria, with a PhD in CS from the University of Edinburgh): I was among the first researchers to combine machine learning, autotuning and knowledge sharing to automate and accelerate the development of efficient software and hardware by several orders of magnitude (Google Scholar); developed open-source tools and started educational initiatives (ACM, Raspberry Pi Foundation) to bring this research to the real world (see use cases); prepared and taught an M.S. course at Paris-Saclay University on using ML to co-design efficient software and hardware (self-optimizing computing systems); gave 100+ invited research talks; and was honored to receive the …
- Delivering a Machine Learning Course on HPC Resources
Stefano Bagnasco, Federica Legger, Sara Vallero. This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement LHCBIGDATA No 799062.
The course: Big Data Science and Machine Learning, Graduate Program in Physics at the University of Torino. Academic year 2018-2019 (starting in 2 weeks): 2 CFU, 10 hours (theory + hands-on), 7 registered students. Academic year 2019-2020 (March 2020): 4 CFU, 16 hours (theory + hands-on), 2 students already registered.
The program: introduction to big data science (the big data pipeline: state-of-the-art tools and technologies); ML and DL methods (supervised and unsupervised models, neural networks); introduction to computer architecture and parallel computing patterns (initiation to OpenMP and MPI, 2019-2020); parallelisation of ML algorithms on distributed resources (ML applications on distributed architectures; beyond CPUs: GPUs, FPGAs, 2019-2020).
The aim: an applied ML course (many courses on advanced statistical methods are available elsewhere) with a focus on hands-on sessions. Students familiarise with ML methods and libraries, analysis tools, collaborative models, and container and cloud technologies, and learn how to optimise ML models, tune distributed training, and work with available resources.
Hands-on: Python with Jupyter notebooks; prerequisites: some familiarity with numpy and pandas. ML libraries: Day 2, MLlib (gradient-boosted trees, multilayer perceptron classifier); Day 3, Keras (Sequential model); Day 4, BigDL (Sequential model); coming: CUDA, MPI, OpenMP. Input dataset for the hands-on: open HEP dataset at UCI, 7 GB (.csv), signal (heavy Higgs) plus background (ttbar), 10M MC events (balanced, 50%:50%), with 21 low-level features (pt's, angles, MET, b-tag, …) and 7 high-level features (invariant masses m(jj), m(jjj), …). Baldi, Sadowski, and Whiteson.
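As a rough illustration of the Day 3 exercise mentioned above, here is a minimal sketch (not the course notebook) of a Keras Sequential binary classifier for a 28-feature tabular dataset such as the UCI HIGGS set; the layer sizes, optimizer and training settings are illustrative assumptions.

```python
# Minimal sketch: Keras Sequential binary classifier for a 28-feature
# tabular dataset (21 low-level + 7 high-level features, as in HIGGS).
# Layer sizes, optimizer and epochs are illustrative assumptions.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

def build_model(n_features: int = 28) -> keras.Model:
    model = keras.Sequential([
        keras.Input(shape=(n_features,)),
        layers.Dense(64, activation="relu"),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),  # signal vs. background
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    # Random stand-in data; in the course the features come from the HIGGS CSV.
    X = np.random.rand(1000, 28).astype("float32")
    y = np.random.randint(0, 2, size=(1000,)).astype("float32")
    model = build_model()
    model.fit(X, y, epochs=3, batch_size=128, validation_split=0.2)
```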
- Density-Based Clustering of Static and Dynamic Functional MRI Connectivity
Rangaprakash et al., Brain Informatics (2020) 7:19, https://doi.org/10.1186/s40708-020-00120-2. Research, Open Access: Density-based clustering of static and dynamic functional MRI connectivity features obtained from subjects with cognitive impairment. D. Rangaprakash, Toluwanimi Odemuyiwa, D. Narayana Dutt, Gopikrishna Deshpande and the Alzheimer's Disease Neuroimaging Initiative.
Abstract: Various machine-learning classification techniques have been employed previously to classify brain states in healthy and disease populations using functional magnetic resonance imaging (fMRI). These methods generally use supervised classifiers that are sensitive to outliers and require labeling of training data to generate a predictive model. Density-based clustering, which overcomes these issues, is a popular unsupervised learning approach whose utility for high-dimensional neuroimaging data has not been previously evaluated. Its advantages include insensitivity to outliers and the ability to work with unlabeled data; unlike the popular k-means clustering, the number of clusters need not be specified. This study compares the performance of two popular density-based clustering methods, DBSCAN and OPTICS, in accurately identifying individuals with three stages of cognitive impairment, including Alzheimer's disease. Static and dynamic functional connectivity features, which capture the strength and temporal variation of brain connectivity respectively, were used for clustering. To assess the robustness of clustering to noise/outliers, the authors propose a novel method called recursive clustering using additive noise (R-CLAN). Results demonstrated that both clustering algorithms were effective, although OPTICS with dynamic connectivity features outperformed in terms of cluster purity (95.46%) and robustness to noise/outliers.
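For readers unfamiliar with the two algorithms compared in the paper, here is a minimal, self-contained sketch using scikit-learn on synthetic feature vectors that stand in for connectivity features; the data and parameter values are illustrative assumptions, not the study's settings.

```python
# Minimal sketch (not from the paper): DBSCAN vs. OPTICS on synthetic
# high-dimensional vectors standing in for connectivity features.
import numpy as np
from sklearn.cluster import DBSCAN, OPTICS
from sklearn.datasets import make_blobs
from sklearn.metrics import adjusted_rand_score

# Synthetic "connectivity features": 3 groups of subjects, 50 features each.
X, y_true = make_blobs(n_samples=150, centers=3, n_features=50,
                       cluster_std=2.0, random_state=0)

db_labels = DBSCAN(eps=25.0, min_samples=5).fit_predict(X)
op_labels = OPTICS(min_samples=5).fit_predict(X)  # no eps needed up front

print("DBSCAN agreement with ground truth:",
      adjusted_rand_score(y_true, db_labels))
print("OPTICS agreement with ground truth:",
      adjusted_rand_score(y_true, op_labels))
```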
- Semantic Computing
Semantic Computing, edited by Phillip C.-Y. Sheu (University of California, Irvine). World Scientific Encyclopedia with Semantic Computing and Robotic Intelligence, Vol. 1 (ISSN 2529-7686). Published by World Scientific Publishing Co. Pte. Ltd., 5 Toh Tuck Link, Singapore 596224; USA office: 27 Warren Street, Suite 401-402, Hackensack, NJ 07601; UK office: 57 Shelton Street, Covent Garden, London WC2H 9HE. Library of Congress Cataloging-in-Publication Data: Semantic computing / editor, Phillip C-Y Sheu, University of California, Irvine; Hackensack, New Jersey: World Scientific, 2017; includes bibliographical references and index; LCCN 2017032765; ISBN 9789813227910 (hardcover: alk. paper); Subjects: LCSH: Semantic computing; Classification: LCC QA76.5913 .S46 2017, DDC 006--dc23; LC record available at https://lccn.loc.gov/2017032765. British Library Cataloguing-in-Publication Data: a catalogue record for this book is available from the British Library.
- Mobility Modes Awareness from Trajectories Based on Clustering and a Convolutional Neural Network
International Journal of Geo-Information, article: Mobility Modes Awareness from Trajectories Based on Clustering and a Convolutional Neural Network. Rui Chen, Mingjian Chen, Wanli Li, Jianguang Wang and Xiang Yao, Institute of Geospatial Information, Information Engineering University, Zhengzhou 450000, China. Received 13 March 2019; accepted 5 May 2019; published 7 May 2019.
Abstract: Massive trajectory data generated by ubiquitous position-acquisition technology are valuable for knowledge discovery, so the study of trajectory mining that converts such knowledge into decision support is appealing. Mobility modes awareness is one of the most important aspects of trajectory mining; it contributes to land use planning, intelligent transportation, prevention of anomalous events, and more. To achieve a better comprehension of mobility modes, the authors propose a method that integrates mobility modes discovery and mobility modes identification. First, route patterns of trajectories are mined based on unsupervised clustering of origin and destination (OD) points; after combining route patterns with travel activity information, the different mobility modes existing in history trajectories are discovered. Then a convolutional neural network (CNN)-based method is proposed to identify the mobility modes of newly emerging trajectories, with the labeled history trajectory data used to train the identification model. Moreover, a mobility-based trajectory structure is introduced as the input of the identification model. The method is evaluated with a real-world maritime trajectory dataset, and the experiment results indicate its excellence.
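To make the identification step more concrete, here is a minimal sketch of a 1-D convolutional classifier over fixed-length trajectory sequences; the input representation (latitude, longitude, speed channels), the architecture and all sizes are illustrative assumptions, not the trajectory structure or model used in the paper.

```python
# Minimal sketch (assumptions, not the paper's model): a 1-D CNN that
# classifies fixed-length trajectories, each a sequence of
# (latitude, longitude, speed) samples, into one of several mobility modes.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

SEQ_LEN, N_CHANNELS, N_MODES = 128, 3, 4  # illustrative sizes

model = keras.Sequential([
    keras.Input(shape=(SEQ_LEN, N_CHANNELS)),
    layers.Conv1D(32, kernel_size=5, activation="relu"),
    layers.MaxPooling1D(2),
    layers.Conv1D(64, kernel_size=5, activation="relu"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(N_MODES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Random stand-in data; real inputs would be resampled trajectory segments.
X = np.random.rand(256, SEQ_LEN, N_CHANNELS).astype("float32")
y = np.random.randint(0, N_MODES, size=(256,))
model.fit(X, y, epochs=2, batch_size=32)
```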
- Representatives for Visually Analyzing Cluster Hierarchies
Visually Mining Through Cluster Hierarchies. Stefan Brecheisen, Hans-Peter Kriegel, Peer Kröger, Martin Pfeifle, Institute for Computer Science, University of Munich, Oettingenstr. 67, 80538 Munich, Germany; {brecheis, kriegel, kroegerp, pfeifle}@dbs.informatik.uni-muenchen.de.
Abstract: Similarity search in database systems is becoming an increasingly important task in modern application domains such as multimedia, molecular biology, medical imaging, computer-aided engineering, marketing and purchasing assistance, as well as many others. This paper shows how visualizing the hierarchical clustering structure of a database of objects can aid the user in the time-consuming task of finding similar objects. The authors present related work and explain the shortcomings that led to the development of their new methods. Based on reachability plots, they introduce approaches that automatically extract the significant clusters in a hierarchical cluster representation along with suitable cluster representatives; these techniques can be used as a basis for visual data mining, providing the user with significant and quick information. To evaluate the ideas, a prototype called BOSS (Browsing OPTICS Plots for Similarity Search) was developed; it is based on techniques related to visual data mining and helps to visually analyze cluster hierarchies by providing meaningful cluster representatives. The main contributions are: explaining how different important application ranges would benefit from a tool that allows visually mining through cluster hierarchies, and reasoning why the hierarchical clustering algorithm OPTICS forms a suitable foundation for such a browsing …
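Reachability plots like the ones BOSS browses can be produced with any OPTICS implementation; the following minimal sketch (using scikit-learn rather than the authors' code) computes the cluster ordering and reachability values and prints a crude text rendering, with dataset and parameters chosen purely for illustration.

```python
# Minimal sketch (scikit-learn, not the authors' implementation): computing
# the OPTICS ordering and reachability values that a reachability plot shows.
import numpy as np
from sklearn.cluster import OPTICS
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=3, cluster_std=0.7, random_state=42)

optics = OPTICS(min_samples=10).fit(X)
reachability = optics.reachability_[optics.ordering_]  # y-axis of the plot

# Crude text rendering: valleys in reachability correspond to clusters.
for i, r in enumerate(reachability[:20]):
    print(f"{i:3d} {'#' * int(min(r, 5.0) * 10)}")
```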
- B.Sc Computer Science with Specialization in Artificial Intelligence & Machine Learning
Curriculum and syllabus (based on the Choice Based Credit System), effective from the academic year 2020-2021.
Programme Educational Objectives (PEO). PEO 1: graduates will have solid basics in mathematics, programming, machine learning and artificial intelligence fundamentals and advancements to solve technical problems. PEO 2: graduates will have the capability to apply the knowledge and skills acquired to solve issues in real-world artificial intelligence and machine learning areas and to develop feasible and reliable systems. PEO 3: graduates will have the potential to participate in life-long learning through the successful completion of advanced degrees, continuing education, certifications and/or other professional development. PEO 4: graduates will have the ability to apply the gained knowledge to improve society while ensuring ethical and moral values. PEO 5: graduates will have exposure to emerging cutting-edge technologies and excellent training in the field of artificial intelligence and machine learning.
Programme Outcomes (PO). PO 1: develop knowledge in the field of AI & ML courses necessary to qualify for the degree. PO 2: acquire a rich basket of value-added courses and soft-skill courses, instilling self-confidence and moral values. PO 3: develop problem-solving, decision-making and communication skills. PO 4: demonstrate social responsibility through ethics-, values- and environmental-studies-related activities on campus and in society. PO 5: strengthen critical thinking skills and develop professionalism with state-of-the-art ICT facilities. PO 6: qualify for higher education, government services, industry needs and start-up units through continuous practice of preparatory examinations.
- Visualization of the OPTICS Algorithm
Gregor Redinger (01163940) and Markus Hunner (01503441), VIS 2017, Universität Wien.
Abstract: The goal of this project is to provide a visualization for the OPTICS clustering algorithm. In-depth visualizations of this algorithm hardly exist, and the authors developed an online tool to fill this gap. The paper gives a deep insight into the solution: first an introduction to the visualization approach, then a discussion of related work and of the different parts of the visualization, followed by the software stack used for the application and the challenges and problems encountered during development; after that, concrete use cases, a look at performance, and the results of an evaluation in the form of a field study; finally, the strengths and weaknesses of the approach and the lessons learned from the project. The visualization not only helps in interpreting the reachability plot, but also provides the functionality of picking a cutoff value for the parameter ε, called ε'; with this cutoff it is possible to interpret one result of the OPTICS algorithm like several results of the related DBSCAN clustering algorithm, so the visualization provides a parameter-space exploration for all ε' < ε. It therefore enables users without prior knowledge of the OPTICS algorithm and its unusual result format to easily interpret the ordered result structure as cluster assignments, and it allows the user to explore the parameter space of ε in an intuitive way, without the need to educate the user on the algorithmic details of OPTICS and why introducing a cutoff value for the calculated distance measures corresponds to the …
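The cutoff idea described above can be reproduced with scikit-learn, which exposes a helper for extracting DBSCAN-like clusterings from a single OPTICS run; this minimal sketch is not the authors' tool, and the data and cutoff values are illustrative assumptions.

```python
# Sketch of the cutoff idea: one OPTICS run is reused to obtain DBSCAN-like
# clusterings for several cutoff values eps' without re-clustering.
import numpy as np
from sklearn.cluster import OPTICS, cluster_optics_dbscan
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=400, centers=4, cluster_std=0.6, random_state=1)
optics = OPTICS(min_samples=10, max_eps=5.0).fit(X)

for eps_cut in (0.5, 1.0, 2.0):  # each eps' is below max_eps
    labels = cluster_optics_dbscan(reachability=optics.reachability_,
                                   core_distances=optics.core_distances_,
                                   ordering=optics.ordering_,
                                   eps=eps_cut)
    n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
    print(f"eps'={eps_cut}: {n_clusters} clusters")
```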
- Delivering a Machine Learning Course on HPC Resources
EPJ Web of Conferences 245, 08016 (2020), https://doi.org/10.1051/epjconf/202024508016, CHEP 2019. Stefano Bagnasco, Gabriele Gaetano Fronzé, Federica Legger, Stefano Lusso and Sara Vallero, Istituto Nazionale di Fisica Nucleare, via Pietro Giuria 1, 10125 Torino, Italy.
Abstract: In recent years, proficiency in data science and machine learning (ML) became one of the most requested skills for jobs in both industry and academia. Machine learning algorithms typically require large sets of data to train the models and extensive usage of computing resources, both for training and inference; especially for deep learning algorithms, training performance can be dramatically improved by exploiting graphical processing units (GPUs). The skill set needed by a data scientist is therefore extremely broad, ranging from knowledge of ML models to distributed programming on heterogeneous resources. While most of the available training resources focus on ML algorithms and tools such as TensorFlow, the authors designed a course for doctoral students where model training is tightly coupled with the underlying technologies that can be used to dynamically provision resources. Throughout the course, students have access to a dedicated cluster of computing nodes on local premises. A set of libraries and helper functions is provided to execute a parallelized ML task by automatically deploying a Spark driver and several Spark execution nodes as Docker containers, with task scheduling managed by an orchestration layer (Kubernetes). This solution automates the delivery of the software stack required by a typical ML workflow and enables scalability by allowing the execution of ML tasks, including training, over commodity (i.e. …
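The course's helper library is not shown here; as a stand-in, the following minimal PySpark sketch trains an MLlib gradient-boosted tree classifier on a tiny in-memory dataset. In a setup like the one described, the SparkSession would instead be configured to talk to a Kubernetes-managed cluster of executor containers; all names and values below are illustrative assumptions.

```python
# Illustrative sketch (not the course's helper library): training a
# gradient-boosted tree classifier with Spark MLlib on a tiny dataset.
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import GBTClassifier

spark = SparkSession.builder.appName("gbt-demo").getOrCreate()

# Tiny stand-in data; the real hands-on used the 10M-event HIGGS CSV.
rows = [(0.3, 1.2, 0.0), (0.5, 0.9, 0.0), (2.1, 3.3, 1.0), (2.5, 2.9, 1.0)]
df = spark.createDataFrame(rows, ["f1", "f2", "label"])

# Assemble the feature columns into a single vector column, then fit the GBT.
assembler = VectorAssembler(inputCols=["f1", "f2"], outputCol="features")
train = assembler.transform(df)
model = GBTClassifier(labelCol="label", featuresCol="features",
                      maxIter=10).fit(train)

model.transform(train).select("label", "prediction").show()
spark.stop()
```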
- Bootstrap Aggregating and Random Forest
Tae-Hwy Lee, Aman Ullah and Ran Wang, Department of Economics, University of California, Riverside.
Abstract: Bootstrap aggregating (bagging) is an ensemble technique for improving the robustness of forecasts. Random forest is a successful method based on bagging and decision trees. This chapter explores bagging, random forest, and their variants in various aspects of theory and practice, and also discusses applications based on these methods in economic forecasting and inference.
Introduction: The last 30 years witnessed dramatic developments and applications of bagging and random forests. The core idea of bagging is model averaging: instead of choosing one estimator, bagging considers a set of estimators trained on bootstrap samples and then takes their average output, which helps improve the robustness of an estimator. In random forest, a set of decision trees is grown to construct a "forest" that balances accuracy and robustness for forecasting. The chapter is organized as follows: first, bagging and some variants are introduced; second, decision trees are discussed in detail; then comes random forest, one of the most attractive machine learning algorithms, which combines decision trees and bagging; finally, several economic applications of bagging and random forest are discussed. As the focus is on regression rather than classification problems, the response y is a real number unless otherwise mentioned.
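To illustrate the model-averaging idea in code, here is a minimal scikit-learn sketch (not from the chapter) comparing a single regression tree with a bagged ensemble of trees and a random forest on synthetic data; the dataset and hyperparameters are illustrative assumptions.

```python
# Minimal sketch: a single regression tree vs. bagged trees vs. random forest.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import BaggingRegressor, RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

X, y = make_regression(n_samples=1000, n_features=10, noise=10.0,
                       random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

single = DecisionTreeRegressor(random_state=0).fit(X_tr, y_tr)
bagged = BaggingRegressor(DecisionTreeRegressor(), n_estimators=100,
                          random_state=0).fit(X_tr, y_tr)
forest = RandomForestRegressor(n_estimators=100,
                               random_state=0).fit(X_tr, y_tr)

# Averaging over bootstrap-trained trees typically reduces test error.
for name, model in [("tree", single), ("bagging", bagged), ("forest", forest)]:
    mse = mean_squared_error(y_te, model.predict(X_te))
    print(f"{name:8s} test MSE: {mse:.1f}")
```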
- Iowa State University – 2016-2017
Iowa State University – 2016-2017 1 CPR E 281 Digital Logic 4 COMPUTER SCIENCE COM S 309 Software Development Practices 3 COM S 311 Design and Analysis of Algorithms 3 http://www.cs.iastate.edu COM S 321 Introduction to Computer Architecture and 3 The undergraduate curriculum in computer science leading to the degree Machine-Level Programming Bachelor of Science is accredited by the Computing Accreditation COM S 327 Advanced Programming Techniques 3 Commission of ABET, http://www.abet.org , and equips students with COM S 331 Theory of Computing 3 a sound knowledge of the foundations of computer science, as well COM S 342 Principles of Programming Languages 3 as the problem solving and system design skills necessary to create robust, efficient, reliable, scalable, and flexible software systems. The COM S 352 Introduction to Operating Systems 3 B.S. degree in Computer Science prepares students for graduate study COM S 362 Object-Oriented Analysis and Design 3 in computer science, and for various business, industry, and government positions including computer scientists, information technologists, In addition to the above courses, at least 6 credits of 400-level courses and software developers. The main educational objectives of the are required, with a grade of C- or better. At least 3 credits must be from computer science program at Iowa State University are that its graduates courses in Group 1 (oral and written reports) and the remaining credits demonstrate expertise, engagement, and learning within three to five from courses in Group 1 or 2. Com S 402 may be applied towards the years after graduation.