Learnings from Kaggle's Forecasting Competitions
Recommended publications
-
Naiad: a Timely Dataflow System
Naiad: A Timely Dataflow System. Derek G. Murray, Frank McSherry, Rebecca Isaacs, Michael Isard, Paul Barham, Martín Abadi. Microsoft Research Silicon Valley. {derekmur,mcsherry,risaacs,misard,pbar,abadi}@microsoft.com

Abstract: Naiad is a distributed system for executing data-parallel, cyclic dataflow programs. It offers the high throughput of batch processors, the low latency of stream processors, and the ability to perform iterative and incremental computations. Although existing systems offer some of these features, applications that require all three have relied on multiple platforms, at the expense of efficiency, maintainability, and simplicity. Naiad resolves the complexities of combining these features in one framework. A new computational model, timely dataflow, underlies Naiad and captures opportunities for parallelism across a wide class of algorithms. This model enriches dataflow computation with timestamps that represent logical points in the computation and provide the basis for an efficient, lightweight coordination mechanism. We show that many powerful high-level programming models can be built on Naiad's low-level primitives, enabling such diverse tasks as streaming data analysis, iterative machine learning, and interactive graph mining.

[Figure 1: A Naiad application that supports real-time queries on continually updated data. The dashed rectangle represents iterative processing that incrementally updates as new data arrive.]

… requirements: the application performs iterative processing on a real-time data stream, and supports interactive queries on a fresh, consistent view of the results. However, no existing system satisfies all three requirements…
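The coordination idea in the abstract (timestamps that mark logical points in the computation, plus notifications once a timestamp is complete) can be pictured with a toy sketch. Everything below, including the CountPerEpoch class and its on_receive/on_notify methods, is invented for illustration and is not Naiad's API.

```python
from collections import defaultdict

class CountPerEpoch:
    """Toy illustration of timely dataflow's core idea: every record carries a
    logical timestamp (here, an integer epoch), and an operator only finalizes
    an epoch's result once it is notified that no more records with that
    timestamp can arrive. A simplification, not Naiad's actual interface."""

    def __init__(self):
        self.pending = defaultdict(int)

    def on_receive(self, record, epoch):
        self.pending[epoch] += 1            # buffer partial results per timestamp

    def on_notify(self, epoch):
        count = self.pending.pop(epoch, 0)  # epoch is complete; safe to emit output
        print(f"epoch {epoch}: {count} records")

op = CountPerEpoch()
for word in ["a", "b", "a"]:
    op.on_receive(word, epoch=0)
op.on_notify(0)  # the coordination mechanism guarantees epoch 0 is done
```
-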
Machine Learning Is Driving an Innovation Wave in Saas Software
Machine Learning is Driving an Innovation Wave in SaaS Software. By Jeff Houston, CFA. [email protected] | 512.364.2258

SaaS software vendors are enhancing their solutions with machine learning (ML) algorithms. ML is the ability of a computer to either automate or recommend appropriate actions by applying probability to data with a feedback loop that enables learning. We view ML as a subset of artificial intelligence (AI), which represents a broad collection of tools that include/leverage ML, such as natural language processing (NLP), self-driving cars, and robotics, as well as some that are tangent to ML, such as logical rule-based algorithms. Although there have been multiple "AI Winters" since the 1950s, where hype cycles were followed by a dearth of funding, ML is now an enduring innovation driver, in our opinion, due in part to the increasing ubiquity of affordable cloud-based processing power and data storage; we expect the pace of breakthroughs to accelerate.

These innovations will make it easier for companies to benefit beyond what is available with traditional business intelligence (BI) solutions, like IBM/Cognos and Tableau, that simply describe what happened in the past. New ML solutions are amplifying intelligence while enabling consistent and better-informed reasoning, behaving as thought…

…automation, customer support, marketing, and human resources) through both organic and M&A initiatives. As for vertical-specific solutions, we think that a new crop of vendors will embrace ML and achieve billion-dollar valuations. VCs are making big bets that these hypotheses will come to fruition, as demonstrated by the $5B they poured into 550 startups using AI as a core part of their solution in 2016.
-
Economic and Social Impacts of Google Cloud September 2018 Economic and Social Impacts of Google Cloud |
Economic and Social Impacts of Google Cloud. September 2018.

Contents: Executive Summary; Introduction; Productivity impacts; Social and other impacts; Barriers to Cloud adoption and use; Policy actions to support Cloud adoption; Appendix 1: Country Sections; Appendix 2: Methodology.

This final report (the "Final Report") has been prepared by Deloitte Financial Advisory, S.L.U. ("Deloitte") for Google in accordance with the contract with them dated 23rd February 2018 ("the Contract") and on the basis of the scope and limitations set out below. The Final Report has been prepared solely for the purposes of assessment of the economic and social impacts of Google Cloud as set out in the Contract. It should not be used for any other purposes or in any other context, and Deloitte accepts no responsibility for its use in either regard. The Final Report is provided exclusively for Google's use under the terms of the Contract. No party other than Google is entitled to rely on the Final Report for any purpose whatsoever, and Deloitte accepts no responsibility or liability or duty of care to any party other than Google in respect of the Final Report and any of its contents. As set out in the Contract, the scope of our work has been limited by the time, information and explanations made available to us. The information contained in the Final Report has been obtained from Google and third party sources that are clearly referenced in the appropriate sections of the Final Report.
-
Youtube-8M Video Classification
YouTube-8M Video Classification. Alexandre Gauthier and Haiyu Lu. Stanford University, 450 Serra Mall, Stanford, CA 94305. [email protected] [email protected]

Abstract: Convolutional Neural Networks (CNNs) have been widely applied for image classification. Encouraged by these results, we apply a CNN model to large-scale video classification using the recently published YouTube-8M dataset, which contains more than 8 million videos labeled with 4800 classes. We study models with different architectures, and find the best performance with a 3-layer spatial-temporal CNN ([CNN-BatchNormalization-LeakyReLU]×3-FullyConnected). We propose that this is because this model takes both feature and frame information into account. By optimizing hyperparameters, including learning rate, we achieve a Hit@1 of 79.1%, a significant improvement over the 64.5% for the best model Google trained in their initial analysis. Further improvements may be achievable by incorporating audio information into our models.

…metadata. To this end, Google has released the YouTube-8M dataset [1]. This set of over 8 million labeled videos allows the public to train algorithms to learn how to associate labels with YouTube videos, helping to solve the problem of video classification. We use multilayer convolutional neural networks to classify YouTube-8M videos. The input to our algorithm is pre-processed video data taken from individual frames of the videos. We then use a CNN to output a score for each label in the vocabulary. We briefly tried LSTM and fully-connected models, but found that convolutional models perform better.

2. Related Work. Large-scale datasets such as ImageNet [2] have played a crucial role for the rapid progress of image recognition…
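The architecture named in the abstract, three Conv-BatchNorm-LeakyReLU blocks followed by a fully connected layer that scores every label, can be sketched roughly as follows. The channel counts, kernel sizes, pooling, and raw-pixel input are assumptions made for illustration; the dataset itself ships pre-processed frame-level features rather than raw frames, so this is not the authors' exact model.

```python
import torch
import torch.nn as nn

class FrameCNN(nn.Module):
    """Sketch of a [Conv -> BatchNorm -> LeakyReLU] x 3 stack followed by a fully
    connected output layer, echoing the architecture named in the abstract.
    All layer sizes here are illustrative choices, not values from the paper."""

    def __init__(self, in_channels=3, num_labels=4800):
        super().__init__()
        blocks = []
        channels = [in_channels, 64, 128, 256]
        for c_in, c_out in zip(channels[:-1], channels[1:]):
            blocks += [
                nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
                nn.BatchNorm2d(c_out),
                nn.LeakyReLU(0.1),
                nn.MaxPool2d(2),
            ]
        self.features = nn.Sequential(*blocks)
        self.classifier = nn.LazyLinear(num_labels)  # one raw score per label

    def forward(self, x):
        x = self.features(x)
        x = torch.flatten(x, start_dim=1)
        return self.classifier(x)  # apply a sigmoid for multi-label probabilities

# Example: a batch of 2 fake 64x64 "frames"
scores = FrameCNN()(torch.randn(2, 3, 64, 64))
print(scores.shape)  # torch.Size([2, 4800])
```
-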
ML Cheatsheet Documentation
ML Cheatsheet Documentation. Team. Sep 02, 2021.

Contents: Basics (Linear Regression; Gradient Descent; Logistic Regression; Glossary); Calculus; Linear Algebra; Probability; Statistics; Notation; Concepts; Forwardpropagation; Backpropagation; Activation Functions; Layers; Loss Functions; Optimizers; Regularization; Architectures; Classification Algorithms; Clustering Algorithms; Regression Algorithms; Reinforcement Learning; Datasets; Libraries; Papers; Other Content; Contribute.

Brief visual explanations of machine learning concepts with diagrams, code examples and links to resources for learning more. Warning: This document is under early stage development. If you find errors, please raise an issue or contribute a better definition!

Chapter 1: Linear Regression. Topics: Introduction; Simple regression (Making predictions, Cost function, Gradient descent, Training, Model evaluation, Summary); Multivariable regression (Growing complexity, Normalization, Making predictions, Initialize weights, Cost function, Gradient descent, Simplifying with matrices, Bias term, Model evaluation).

1.1 Introduction. Linear Regression is a supervised machine learning algorithm where the predicted output is continuous and has a constant slope. It's used to predict values within a continuous range (e.g. sales, price) rather than trying to classify them into categories (e.g. cat, dog). There are two main types. Simple regression: simple linear regression uses traditional slope-intercept form, where m and b are the variables our algorithm will try to "learn" to produce the most accurate predictions.
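To make the simple-regression entry concrete, a minimal gradient-descent fit of the slope-intercept model (y = m*x + b) against a mean-squared-error cost might look like the sketch below; the synthetic data, learning rate, and epoch count are illustration choices, not values taken from the cheatsheet itself.

```python
import numpy as np

def predict(x, m, b):
    # Slope-intercept form: y_hat = m * x + b
    return m * x + b

def train_simple_regression(x, y, lr=0.01, epochs=5000):
    """Fit m and b by gradient descent on the mean squared error cost.
    The defaults are illustrative, not tuned values from the cheatsheet."""
    m, b = 0.0, 0.0
    n = len(x)
    for _ in range(epochs):
        error = predict(x, m, b) - y
        # Gradients of (1/n) * sum(error^2) with respect to m and b
        m -= lr * (2.0 / n) * np.dot(error, x)
        b -= lr * (2.0 / n) * error.sum()
    return m, b

# Example: recover a known slope and intercept from noisy synthetic data
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 200)
y = 3.0 * x + 1.5 + rng.normal(0, 0.5, 200)
m, b = train_simple_regression(x, y)
print(f"learned m={m:.2f}, b={b:.2f}")  # should land close to 3.0 and 1.5
```
-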
Student Resume Book
Class of 2019 Student Resume Book. [email protected]

Class of 2019 profile: class size of 46; 48% women; 2 average years of prior work experience.

Prior companies: Amazon.com, Inc; American Airlines Group; American Family Insurance; Bank of New York Mellon Corp; Brookhurst Insurance Services; CEB Inc; Cecil College; Darwin Labs; Economists, Inc; Feldsted & Scolney; Fincare Small Finance Bank; General Electric Co; Mu Sigma, Inc; Qualtrics LLC; Quantiphi Inc; Skyline Technologies; TechLoss Consulting & Restoration, Inc; ThoughtWorks, Inc; UCSD Guardian; United Health Group, Inc; US Army; Welch Consulting, Ltd; ZS Associates Inc.

Prior degree concentrations and background (many students had multiple majors or specializations; for example, of the 82.61% of students with a math and/or statistics background, most had an additional major or concentration and are therefore represented in additional categories): Math & Statistics 82.61%; Engineering 32.61%; Economics 30.43%; Computer Science & IT 19.57%; Social Sciences 13.04%; Humanities 10.87%; Business 8.70%; Other Sciences 8.70%; Other 8.70%; Data Science 2.17%.

Class of 2019: Saurabh Annadate, Alicia Burris, Alex Burzinski, Ted Carlson, Ivan Chen, Angela Chen, Carson Chen, Harish Chockalingam, Sabarish Chockalingam, Tony Colucci, JD Cook, Sophie Du, James Fan, Michael Fedell, Joyce Feng, Nathan Franklin, Tian Fu, Elliot Gardner, Max Holiber, Naomi Kaduwela, Matt Kehoe, Joe Kupresanin, Michel Leroy, Jonathan Lewyckyj, Henry Park, Karen Qian, Finn Qiao, Rachel Rosenberg, Shreyas Sabnis, Surabhi Seth, Tova Simonson, Molly Srour…
-
1.2 Le Zhang, Microsoft
Objective: "Taking recommendation technology to the masses" by helping researchers and developers to quickly select, prototype, demonstrate, and productionize a recommender system, and by accelerating enterprise-grade development and deployment of a recommender system into production. Key takeaways of the talk: a systematic overview of recommendation technology from a pragmatic perspective; best practices (with example code) in developing recommender systems; state-of-the-art academic research in recommendation algorithms.

Outline: recommendation systems in modern business (10 min); recommendation algorithms and implementations (20 min); end-to-end example of building a scalable recommender (10 min); Q&A (5 min).

Recommendation systems in modern business: "35% of what consumers purchase on Amazon and 75% of what they watch on Netflix come from recommendation algorithms" (McKinsey & Co).

Challenges:
• Fast-growing area: new algorithms sprout every day, and not many people have the expertise to implement and deploy a recommender using the state-of-the-art algorithms.
• Limited resources: there is limited reference and guidance for building a recommender system at scale to support enterprise-grade scenarios.
• Fragmented solutions: off-the-shelf packages/tools/modules are very fragmented, not scalable, and not well compatible with each other.

Microsoft/Recommenders: collaborative development efforts of Microsoft Cloud & AI data scientists, Microsoft Research researchers, academia researchers, etc. GitHub url: https://github.com/Microsoft/Recommenders. Contents: utilities (modular functions for model creation, data manipulation, evaluation, etc.); algorithms (SVD, SAR, ALS, NCF, Wide&Deep, xDeepFM, DKN, etc.); notebooks (HOW-TO examples for end-to-end recommender building). Highlights: 3700+ stars on GitHub; featured in YC Hacker News, O'Reilly Data Newsletter, GitHub weekly trending list, etc.
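To give a feel for the simplest item-to-item idea behind algorithms such as SAR in the list above, here is a self-contained co-occurrence recommender. The toy interaction log and the recommend helper are invented for illustration and are not code from Microsoft/Recommenders; the repository's SAR implementation adds refinements such as time-decayed affinities.

```python
import numpy as np
import pandas as pd

# Toy interaction log (user, item); the data is made up for illustration
interactions = pd.DataFrame({
    "user": ["u1", "u1", "u2", "u2", "u3", "u3", "u3"],
    "item": ["A",  "B",  "A",  "C",  "B",  "C",  "D"],
})

# User-item affinity matrix (counts of interactions)
affinity = pd.crosstab(interactions["user"], interactions["item"])

# Item-to-item co-occurrence: how often two items are consumed by the same user
C = affinity.to_numpy().T @ affinity.to_numpy()
np.fill_diagonal(C, 0)  # ignore self-co-occurrence
cooccurrence = pd.DataFrame(C, index=affinity.columns, columns=affinity.columns)

def recommend(user, k=2):
    """Score unseen items by summing their co-occurrence with the user's items."""
    seen = affinity.loc[user]
    scores = (cooccurrence @ seen).astype(float)
    scores[seen > 0] = -np.inf  # do not re-recommend items already seen
    return scores.sort_values(ascending=False).head(k)

print(recommend("u1"))  # expected: item C first, then D
```
-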
Alibaba Amazon
Online Virtual Sponsor Expo, Sunday December 6th. Sponsors: Amazon | Science; Alibaba; Neural Magic; Facebook; Scale AI; IBM; Sony; Apple; Quantum Black; Benevolent AI; Zalando; Kuaishou; Cruise; Ant Group | Alipay; Microsoft; Wild Me; Deep Genomics; Netflix Research; Google Research; CausaLens; Hudson River Trading.

Talks & Panels: Scikit-learn and Fairness, Tools and Challenges. 5 AM PST (1 PM UTC). Speaker: Adrin Jalali.

Fairness, accountability, and transparency in machine learning have become a major part of the ML discourse. Since these issues have attracted attention from the public, and certain legislation is being put in place regulating the usage of machine learning in certain domains, the industry has been catching up with the topic, and a few groups have been developing toolboxes that allow practitioners to incorporate fairness constraints into their pipelines and make their models more transparent and accountable. AIF360 and fairlearn are just two examples available in Python. On the machine learning side, scikit-learn has been one of the most commonly used libraries, and it has been extended by third-party libraries such as XGBoost and imbalanced-learn. However, when it comes to incorporating fairness constraints in a usual scikit-learn pipeline, there are challenges and limitations related to the API, which has made developing a scikit-learn-compatible, fairness-focused package challenging and has hampered the adoption of these tools in the industry. In this talk, we start with a common classification pipeline, then we assess fairness/bias of the data/outputs using disparate impact ratio as an example metric, and finally we mitigate the unfair outputs and search for hyperparameters which give the best accuracy while satisfying fairness constraints.
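The disparate impact ratio used as the example metric in the talk compares how often each group receives the favorable prediction. A minimal sketch, assuming a binary sensitive attribute encoded as 0 (unprivileged) and 1 (privileged) and made-up predictions, could look like this; it is not code from scikit-learn, fairlearn, or AIF360.

```python
import numpy as np

def disparate_impact_ratio(y_pred, sensitive):
    """Ratio of positive-prediction rates between the unprivileged and privileged
    groups. Values well below 1.0 mean the unprivileged group receives the
    favorable outcome less often; a threshold around 0.8 (the "four-fifths rule")
    is a common flag."""
    y_pred = np.asarray(y_pred)
    sensitive = np.asarray(sensitive)
    rate_unprivileged = y_pred[sensitive == 0].mean()
    rate_privileged = y_pred[sensitive == 1].mean()
    return rate_unprivileged / rate_privileged

# Example with made-up predictions: the unprivileged group (first three) gets the
# favorable outcome a third as often as the privileged group (last three).
print(disparate_impact_ratio([0, 0, 1, 1, 1, 1], [0, 0, 0, 1, 1, 1]))  # 0.33...
```
-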
Implementation of Machine Learning with Google Cloud Platform
Cloud Computing: Implementation of Machine Learning with Google Cloud Platform. Referent: Prof. Dr. Christian Baun. Submitted by: Ammar Albaalbaki (1267651), Anish Joys Yesuadimai Michael (1280214).

Declaration of independent work (Eigenständigkeitserklärung, translated): I truthfully affirm that I have prepared this work independently, have fully and precisely listed all aids used, and have identified everything that was taken, unchanged or with modifications, from the work of others. Signed: Ammar Albaalbaki; Anish Joys Yesuadimai Michael.

Table of Contents: 1 Introduction (1.1 Motivation; 1.2 Purpose; 1.3 The aim of the work); 2 Fundamentals (2.1 Google Cloud; 2.2 Machine Learning; 2.2.1 Supervised Learning; 2.3 Image Processing; 2.4 Web hosting; 2.4.1 Basics of response and request; 2.4.2 Describe PHP; 2.4.3 Putty to connect with the host); 3 Implementation (3.1 Log in to Google Cloud; 3.2 Google Cloud Storage; 3.3 Datalab for implementing image processing; 3.4 Create web host); 4 Evaluation and results; 5 Summary and outlook (5.1 Summary; 5.2 Outlook); List of Figures; List of Tables; References.

1 Introduction. Cloud computing is a very hot and booming topic in the current era; many companies have already moved to cloud computing technologies, and many more are pushing to move to the cloud.
-
Microsoft Recommenders Tools to Accelerate Developing Recommender Systems Scott Graham, Jun Ki Min, Tao Wu {Scott.Graham, Jun.Min, Tao.Wu} @ Microsoft.Com
The Microsoft Recommenders repository is an open source collection of Python utilities and Jupyter notebooks to help accelerate the process of designing, evaluating, and deploying recommender systems. The repository is actively maintained and constantly being improved and extended. It is publicly available on GitHub and licensed through a permissive MIT License to promote widespread use of the tools.

Core components:
• reco_utils: dataset helpers and splitters; ranking & rating metrics; hyperparameter tuning helpers; operationalization utilities.
• notebooks: Spark ALS; in-house algorithms (SAR, Vowpal Wabbit, LightGBM, RLRMC, xDeepFM); deep learning (NCF, Wide-Deep); open source frameworks (Surprise, FastAI).
• tests: unit tests; smoke & integration tests.

Workflow: environment setup, data load, data split (train / validation / test sets), recommender training, hyperparameter tuning, evaluation, operationalization.
• Environment setup: clone & conda-install on a local / virtual machine (Windows / Linux / Mac) or Databricks; Docker; pip install; Azure Databricks.
• Data split: chronological split; stratified split; non-overlap split; random split.
• Recommender train: content-based models; collaborative-filtering models; deep-learning models.
• Hyperparameter tune: tuning libraries (NNI, HyperOpt); tuning services (AzureML).
• Evaluate: rating metrics; ranking metrics; classification metrics.
• Operationalize: Azure Machine Learning service (AzureML); Azure Kubernetes Service (AKS).

Example: Movie Recommendation
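As a concrete illustration of the split-then-evaluate stages in the workflow above, the sketch below performs a per-user chronological split and computes a hit-rate-style ranking metric. It is generic pandas code written for this summary; the function names, toy data, and placeholder recommendations are assumptions, and the repository's own reco_utils splitters and metrics are more complete.

```python
import pandas as pd

# Toy interaction log with timestamps (made up for illustration)
ratings = pd.DataFrame({
    "user":      ["u1", "u1", "u1", "u2", "u2", "u2"],
    "item":      ["A",  "B",  "C",  "A",  "C",  "D"],
    "timestamp": [1, 2, 3, 1, 2, 3],
})

def chronological_split(df, ratio=0.7):
    """Per-user chronological split: each user's earliest interactions go to the
    train set, the most recent ones to the test set."""
    df = df.sort_values("timestamp")
    in_train = (df.groupby("user").cumcount()
                < df.groupby("user")["item"].transform("size") * ratio)
    return df[in_train], df[~in_train]

def hit_rate_at_k(test, recommendations, k=2):
    """Fraction of test users for whom at least one held-out item appears in
    that user's top-k recommendation list."""
    hits = 0
    for user, held_out in test.groupby("user")["item"]:
        top_k = recommendations.get(user, [])[:k]
        hits += bool(set(top_k) & set(held_out))
    return hits / test["user"].nunique()

train, test = chronological_split(ratings)
recs = {"u1": ["D", "C"], "u2": ["B", "D"]}  # placeholder model output
print(hit_rate_at_k(test, recs))
```
-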
Machine Learning in Google Drive
Senior Staff Software Engineer, Google. Google I/O Extended Boulder, 8-May-2018. #io18extended

...like space :) #spaceishard. Spaceflight is unforgiving and complicated. But it's not a miracle. It can be done. #intelligenceishard
● Fundamentals of machine learning are complicated; ML is easily misapplied.
● Doing "toy" things with ML is easy.
○ Doing useful things is a lot harder.

How to get there from here?
● Start small, but real.
● Measure. Be rigorous. Make sure you're helping.
● Have a compelling UX.
● Launch. Iterate. Improve.
● Never forget the user.

Drive Quick Access. Main idea: prominently show the user the documents and files they likely want to open right now.
● Benefit 1: save users time. Quick Access gets users to their files 50% faster, and with less cognitive friction.
● Benefit 2: enable users to make better business decisions. Show users documents relevant to their pending business decisions, including documents they may not be aware of (the right information at the right time).

Drive Quick Access: Mobile (Android and iOS).

What we've learned: Quick Access.
● Feature intelligence works: training and using machine learning models results in improved metrics, beyond a simple "Most Recently Used" baseline, across all metrics.
● Quick Access saves users time: about 50% of opens come from Quick Access, and each one saves 50% on "finding time".
● Starting point for future work: Quick Access has proven out our machine learning infrastructure for Drive and provides a framework for future intelligence features.
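The "learned model beats Most Recently Used" point can be pictured with a toy comparison. The file signals, weights, and scoring function below are entirely made up for illustration and say nothing about how Drive's actual model works.

```python
from dataclasses import dataclass

@dataclass
class FileSignals:
    name: str
    hours_since_last_open: float  # recency signal
    opens_last_30_days: int       # frequency signal

files = [
    FileSignals("quarterly_report", 30.0, 25),
    FileSignals("meeting_notes", 2.0, 1),
    FileSignals("old_draft", 1.5, 0),
]

def mru_rank(files):
    """Baseline: order files purely by how recently they were opened."""
    return sorted(files, key=lambda f: f.hours_since_last_open)

def learned_rank(files, w_recency=-0.1, w_frequency=0.5):
    """Learned-style ranking: a weighted score over behavioral signals.
    In a real system the weights would come from a trained model."""
    def score(f):
        return w_recency * f.hours_since_last_open + w_frequency * f.opens_last_30_days
    return sorted(files, key=score, reverse=True)

print([f.name for f in mru_rank(files)])      # recency only
print([f.name for f in learned_rank(files)])  # frequency can outrank recency
```
-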
The Design and Implementation of Low-Latency Prediction Serving Systems
The Design and Implementation of Low-Latency Prediction Serving Systems. Daniel Crankshaw. Electrical Engineering and Computer Sciences, University of California at Berkeley. Technical Report No. UCB/EECS-2019-171. http://www2.eecs.berkeley.edu/Pubs/TechRpts/2019/EECS-2019-171.html. December 16, 2019.

Copyright © 2019, by the author(s). All rights reserved. Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission.

Acknowledgement: To my parents.

A dissertation submitted in partial satisfaction of the requirements for the degree of Doctor of Philosophy in Computer Science in the Graduate Division of the University of California, Berkeley. Committee in charge: Professor Joseph Gonzalez, Chair; Professor Ion Stoica; Professor Fernando Perez. Fall 2019.

Abstract: Machine learning is being deployed in a growing number of applications which demand real-time, accurate, and cost-efficient predictions under heavy query load. These applications employ a variety of machine learning frameworks and models, often composing several models within the same application.
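As background for the kind of system the dissertation studies, the basic prediction-serving pattern is to load a trained model once at startup and then answer prediction requests from a lightweight endpoint, so that per-request latency is dominated by inference rather than model loading. The Flask/scikit-learn sketch below only illustrates that pattern under an assumed JSON payload shape; it is not Clipper or any other system from the dissertation.

```python
from flask import Flask, jsonify, request
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

# Load (here: train) the model once at startup so request handling only pays
# the cost of inference.
X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

app = Flask(__name__)

@app.route("/predict", methods=["POST"])
def predict():
    # Assumed payload shape: {"instances": [[5.1, 3.5, 1.4, 0.2], ...]}
    instances = request.get_json()["instances"]
    predictions = model.predict(instances).tolist()
    return jsonify({"predictions": predictions})

if __name__ == "__main__":
    app.run(port=8080)
```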