By Scott Juds – 7 PM PST, 9-23-2019 – SumGrowth Strategies

By Scott Juds – SumGrowth Strategies – Sept. 2019

Disclaimers
• DO NOT BASE ANY INVESTMENT DECISION SOLELY UPON MATERIALS IN THIS PRESENTATION
• Neither SumGrowth Strategies nor I am a registered investment advisor or broker-dealer.
• This presentation is for educational purposes only and is not an offer to buy or sell securities.
• This information is educational in nature and should not be construed as investment advice, as it is not provided in view of the individual circumstances of any particular person.
• Investing in securities is speculative. You may lose some or all of the money that is invested.
• Past results of any particular trading system are not a guarantee of future performance.
• Always consult with a registered investment advisor or licensed stock broker before investing.

[Speaker note, repeated on two slides: also show a client account with AD risk as it should be, compared to the S&P 500.]

Merlyn.AI builds on existing knowledge and does not have to re-discover it; it doesn't have to reinvent all of this from scratch:

• Momentum is formally confirmed in market data. Eugene Fama (Nobel Prize, 2013) and Kenneth French (Dartmouth College) call momentum "the premier market anomaly" that's "above suspicion" (2008 academic paper: "Dissecting Anomalies"). Merlyn.AI accepts this; re-discovery is not necessary.
• The signal-to-noise ratio controls the probability of making the right decision, as proved by Claude Shannon (National Medal of Science, 1966). Merlyn.AI accepts this; re-discovery is not necessary.
• Matched filter theory: design for optimum signal-to-noise ratio, from J. H. Van Vleck (Nobel Prize, 1977). Think outside of the box: it is someplace to start, designed for performance. Merlyn.AI accepts this; re-discovery is not necessary.
• Differential signal processing removes common-mode noise (relative strength), after Samuel H. Christie (Royal Society, 1836) and the Wheatstone bridge. Merlyn.AI accepts this; re-discovery is not necessary. (Illustrative sketches of the signal-to-noise and relative-strength ideas appear at the end of this section.)
• Sectors provide power strokes: the market cycle vs. the economic cycle (chart: 5 years, full span). Merlyn.AI accepts this; re-discovery is not necessary.
• StormGuard-Armor detects the onset of bad markets. Know when the market is safe: Risk-On vs. Risk-Off. Merlyn.AI accepts this; re-discovery is not necessary.
• StormGuard-Armor + integrated bear market strategies. Merlyn.AI accepts this; re-discovery is not necessary.

These guys got us here. Is there more? What about selection bias? Who needs XLV-Healthcare and XLE-Energy?

Merlyn.AI is a genetic algorithm layered on top of a strategy. Why a genetic algorithm on top? To evolve its set of funds each month. Why? To remove hindsight selection bias. Why? To achieve better future performance. (A matching genetic-algorithm sketch appears later, alongside the FWPP description.)
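To make the Shannon and Van Vleck points above concrete: averaging N noisy daily returns leaves an underlying trend intact while shrinking the noise by roughly sqrt(N), so even a simple trend filter raises the signal-to-noise ratio, and with it the probability of a correct buy/sell decision. Below is a minimal Python sketch of this effect; it is my illustration, not code from the presentation, a plain moving average stands in for a true matched filter, and the drift and noise figures are made up.

    import numpy as np

    rng = np.random.default_rng(0)
    n_days = 252                                   # one year of daily returns
    trend, noise_sd = 0.04 / 252, 0.01             # hypothetical drift and noise
    returns = trend + rng.normal(0.0, noise_sd, n_days)

    def snr(signal_level: float, samples: np.ndarray) -> float:
        """Signal-to-noise ratio: trend size relative to noise spread."""
        return signal_level / samples.std()

    window = 21                                    # about one trading month
    smoothed = np.convolve(returns, np.ones(window) / window, mode="valid")

    print(f"raw SNR:      {snr(trend, returns):.3f}")
    print(f"smoothed SNR: {snr(trend, smoothed):.3f}")   # roughly sqrt(21) higher

A matched filter would go one step further and weight past returns by the expected shape of the signal being detected; the plain averaging above is the simplest special case.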
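The Wheatstone-bridge analogy can be sketched the same way: measuring each fund against a common benchmark is a differential measurement, so market-wide ("common-mode") noise cancels and only relative strength remains. In the hypothetical example below, the fund universe, the 60-day span, and the exponential recency weighting are all my assumptions, not SumGrowth's actual rules.

    import numpy as np

    def relative_strength_pick(fund_returns: dict, benchmark: np.ndarray,
                               span: int = 60) -> str:
        """Rank funds by the weighted trend of (fund - benchmark) returns.
        Subtracting the benchmark removes common-mode market noise, as in
        a Wheatstone bridge's differential measurement. Arrays are daily
        returns ordered oldest to newest."""
        weights = np.exp(np.arange(span) / span - 1.0)   # newest days weigh most
        best_fund, best_score = None, -np.inf
        for name, returns in fund_returns.items():
            differential = returns[-span:] - benchmark[-span:]
            score = np.dot(differential, weights) / weights.sum()
            if score > best_score:
                best_fund, best_score = name, score
        return best_fund

    rng = np.random.default_rng(1)
    market = rng.normal(0.0004, 0.01, 250)               # shared market noise
    funds = {"XLK": market + rng.normal(0.0004, 0.003, 250),
             "XLE": market + rng.normal(-0.0002, 0.003, 250)}
    print(relative_strength_pick(funds, market))         # typically "XLK"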
Artificial Intelligence Algorithms

[Slide: a Wikipedia-style taxonomy of machine learning algorithms, reproduced as a dense multi-column list. Categories shown: supervised learning (AODE, artificial neural networks, association rule learning such as Apriori and Eclat, Gaussian process regression, naive Bayes, perceptron, support vector machines, and more); unsupervised learning (expectation-maximization, vector quantization, generative topographic map, information bottleneck method, self-organizing maps); semi-supervised learning (active learning, co-training, transduction, generative models, graph-based methods); reinforcement learning (Q-learning, state-action-reward-state-action, temporal difference learning); Bayesian methods (naive Bayes variants, Bayesian networks, Bayesian knowledge bases); decision tree algorithms (CART, ID3, C4.5, C5.0, CHAID, decision stump, random forest, SLIQ); linear classifiers (Fisher's linear discriminant, linear, logistic, and multinomial logistic regression, naive Bayes classifier); deep learning (deep belief networks, deep Boltzmann machines, deep convolutional and recurrent neural networks, hierarchical temporal memory, generative adversarial networks, stacked auto-encoders); artificial neural networks (feedforward networks, backpropagation, radial basis function networks, learning vector quantization); association rule learning (Apriori, Eclat); and other methods and problems (anomaly detection, bagging, boosting, ensembles of classifiers, genetic algorithm for rule set production, online machine learning, PAC learning, structured prediction, and many others).]
How SumGrowth Will Use AI: perceive the environment and take action to maximize success.

• FWPT, Forward Walk Progressive Tuning (existing): adaptively changes the algorithm based on the past character of the data, and walks through out-of-sample data for its buy/sell decisions.
• StormGuard-Armor (existing): employs fuzzy logic to evaluate a composite of 12 measures of the market's character to determine current investment safety.
• FWPP, Forward Walk Progressive Picking (new, in Merlyn.AI): uses a genetic algorithm to evolve the candidate funds in a population of momentum strategies, to eradicate remnants of hindsight selection bias.

(Note: Play Merlyn video from desktop now.)

Illustrative sketches of these three mechanisms follow.
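FWPT first. In a forward walk, parameters are re-tuned on trailing data only and then applied to the next unseen block, so every buy/sell decision is made out-of-sample. The sketch below shows the generic loop; the toy trend-following rule, the window grid, and the two-year/one-quarter block sizes are stand-ins of mine, not SectorSurfer's actual tuning.

    import numpy as np

    def tune(history: np.ndarray) -> int:
        """Hypothetical tuner: pick the smoothing window that would have
        worked best on the training window alone (no future data)."""
        rets = np.diff(history)
        def score(w: int) -> float:
            momentum = np.convolve(rets, np.ones(w) / w, mode="valid")
            position = momentum[:-1] > 0      # invest only after positive trend
            return float(np.sum(rets[w:] * position))
        return max((10, 21, 42), key=score)

    def run_strategy(block: np.ndarray, window: int) -> np.ndarray:
        """Trade the unseen block with the tuned window (same toy rule)."""
        rets = np.diff(block)
        momentum = np.convolve(rets, np.ones(window) / window, mode="valid")
        return rets[window:] * (momentum[:-1] > 0)

    def walk_forward(prices: np.ndarray, train_len: int = 504, step: int = 63):
        """Forward walk: tune on the past, trade the next quarter, repeat."""
        blocks = []
        for start in range(train_len, len(prices) - step, step):
            params = tune(prices[start - train_len:start])   # past data only
            blocks.append(run_strategy(prices[start:start + step], params))
        return np.concatenate(blocks)

    prices = 100 + np.cumsum(np.random.default_rng(2).normal(0.02, 1.0, 1500))
    oos = walk_forward(prices)       # every return here is out-of-sample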
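StormGuard-Armor is described only as a fuzzy-logic composite of 12 measures of the market's character; the measures themselves are not disclosed in this deck. The pattern, though, can be shown generically: map each indicator to a soft [0, 1] "market is safe" membership, combine the memberships, and threshold the result into Risk-On or Risk-Off. Three made-up indicators stand in for the twelve below.

    import math

    def safe_membership(value: float, center: float, width: float) -> float:
        """Soft sigmoid membership in the 'market is safe' fuzzy set."""
        return 1.0 / (1.0 + math.exp(-(value - center) / width))

    # Hypothetical indicators: name -> (center, width) of the membership curve.
    SPEC = {
        "breadth":   (0.0, 0.10),    # advance-decline momentum
        "trend_50d": (0.0, 0.02),    # price vs. its 50-day moving average
        "calm":      (0.0, 0.50),    # inverted volatility z-score
    }

    def storm_guard_like(indicators: dict) -> str:
        """Fuzzy composite of market-character measures -> Risk-On/Risk-Off."""
        memberships = [safe_membership(indicators[name], center, width)
                       for name, (center, width) in SPEC.items()]
        safety = sum(memberships) / len(memberships)
        return "Risk-On" if safety > 0.5 else "Risk-Off"

    print(storm_guard_like({"breadth": 0.05, "trend_50d": -0.01, "calm": 0.3}))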
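Finally, FWPP. The deck states only that a genetic algorithm evolves the candidate funds in a population of momentum strategies, so the fund universe is chosen by ongoing forward-in-time survival rather than by hindsight. A minimal monthly-generation sketch follows; the tickers, population sizes, one-fund-swap mutation, and random fitness stub are all placeholders.

    import random

    FUNDS = ["XLK", "XLV", "XLE", "XLF", "XLI", "XLP", "XLU", "XLY", "XLB"]

    def fitness(fund_set: frozenset) -> float:
        """Placeholder: in practice, the recent out-of-sample return of a
        momentum strategy restricted to this candidate fund set."""
        return random.random()

    def evolve(population: list, keep: int = 4, size: int = 12) -> list:
        """One monthly generation: keep the fittest fund sets, then refill
        the population with mutated copies (swap one fund for another)."""
        ranked = sorted(population, key=fitness, reverse=True)
        survivors = ranked[:keep]
        children = []
        while len(survivors) + len(children) < size:
            parent = random.choice(survivors)
            out_fund = random.choice(sorted(parent))
            in_fund = random.choice([f for f in FUNDS if f not in parent])
            children.append(parent - {out_fund} | {in_fund})
        return survivors + children

    population = [frozenset(random.sample(FUNDS, 5)) for _ in range(12)]
    for month in range(24):              # two years of monthly evolution
        population = evolve(population)

Because a fund set survives only on performance measured after it entered the population, a sector like XLV or XLE earns its place forward in time instead of being hand-picked in hindsight, which is exactly the "Who needs XLV-Healthcare and XLE-Energy?" question raised above.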
Merlyn's Magic Portfolio

ETF Tax Efficiency?
• An exchange in-kind is a non-taxable event; it was originally designed for moving an account to another brokerage.
• Market makers exchange a basket of stocks for ETF shares. Individuals buy and sell only ETF shares; it is not like mutual fund ownership.
• Market makers return ETF shares to the ETF company and receive an in-kind exchange for the current basket of stocks.
• Thus trades that change the basket occur within the ETF, but the investor gets long-term tax treatment.

ETF Liquidity Myth

Merlyn.AI Corp.: founded Jan 2019, raised $2.5M, with an exclusive license from SGS to create & market Merlyn ETFs.

[Slide: ecosystem diagram, roles as labeled: Solactive – index calculator and publisher; US Bank – custodian; RBC – market maker; Alpha Architect – ETF advisor; MAI Indexes – ETF sponsor; Quasar – distributor; NYSE – exemptive relief; SEC and FINRA – compliance and marketing approval; SGS – license, with SectorSurfer and AlphaDroid as its investor web services; marketing via cable (CNBC), Google AdWords, and articles; AdvisorShares also appears.]

MAI Bull-Rider Bear-Fighter Index. Solactive: index calculator & publisher.
Bull-Rider Bear-Fighter Index – Google Search – stay up to date.
Bull-Rider Bear-Fighter: almost found in the Partner Store; BRBF portfolio coming soon.
Bull-Rider Bear-Fighter: www.AlphaDroid.com/InfoPages/Merlyn-BRBF.aspx – docs in Riskalyze Partner Store (2/8 pages).
AlphaDroid Portfolios: logistics in Riskalyze.

Why Measure Risk If You're Not Going to Fix It?

by Scott Juds – SumGrowth Strategies – Sept. 2019