Getting Started with Machine Learning

CSC131: The Beauty & Joy of Computing
Cornell College
600 First Street SW, Mount Vernon, Iowa 52314
September 2018

Contents

1 Applications: where machine learning is helping
  1.1 Sheldon Branch
  1.2 Bram Dedrick
  1.3 Tony Ferenzi
    1.3.1 Benefits of Machine Learning
  1.4 William Golden
    1.4.1 Humans: The Teachers of Technology
  1.5 Yuan Hong
  1.6 Easton Jensen
  1.7 Rodrigo Martinez
    1.7.1 Machine Learning in Medicine
  1.8 Matt Morrical
  1.9 Ella Nelson
  1.10 Koichi Okazaki
  1.11 Jakob Orel
  1.12 Marcellus Parks
  1.13 Lydia Sanchez
  1.14 Tiff Serra-Pichardo
  1.15 Austin Stala
  1.16 Nicole Trenholm
  1.17 Maddy Weaver
  1.18 Peter Weber

2 Recommendations: How to learn more about machine learning
  2.1 Sheldon Branch
    2.1.1 Course 1: Machine Learning
    2.1.2 Course 2: Robotics: Vision Intelligence and Machine Learning
    2.1.3 Course 3: Machine Learning Fundamentals
  2.2 Bram Dedrick
    2.2.1 Online Courses
    2.2.2 Beginning the Class
    2.2.3 Machine Learning Courses Offered in Other Schools
  2.3 Tony Ferenzi
    2.3.1 Robotics: Vision Intelligence and Machine Learning
    2.3.2 Machine Learning
    2.3.3 Machine Learning Fundamentals
    2.3.4 Machine Learning
    2.3.5 Data Science: Machine Learning
  2.4 William Golden
    2.4.1 Course Overview
    2.4.2 Conclusion
  2.5 Yuan Hong
  2.6 Easton Jensen
    2.6.1 Overview
    2.6.2 Course 1
    2.6.3 Course 2
    2.6.4 Course 3
    2.6.5 Course 4
    2.6.6 Why Take The Course
  2.7 Rodrigo Martinez
  2.8 Matt Morrical
  2.9 Ella Nelson
  2.10 Koichi Okazaki
    2.10.1 Deep Learning Nanodegree Program from Udacity
    2.10.2 At the End
  2.11 Jakob Orel
    2.11.1 Coursera
    2.11.2 EdX
    2.11.3 Argonne National Laboratory
    2.11.4 Takeaway
  2.12 Marcellus Parks
  2.13 Lydia Sanchez
  2.14 Tiff Serra-Pichardo
  2.15 Austin Stala
    2.15.1 Machine Learning Background
    2.15.2 Lesson Resources
    2.15.3 Conclusion
  2.16 Nicole Trenholm
  2.17 Maddy Weaver
  2.18 Peter Weber

3 Algorithms: how machine learning works
  3.1 Sheldon Branch
  3.2 Bram Dedrick
  3.3 Tony Ferenzi
  3.4 William Golden
    3.4.1 A Brief Introduction
    3.4.2 Programs for Computer Learning
    3.4.3 Good Terms to Know
    3.4.4 Data Mining in Use
    3.4.5 Algorithms
    3.4.6 Creating Speech Recognition
    3.4.7 Conclusion
  3.5 Yuan Hong
  3.6 Easton Jensen
  3.7 Rodrigo Martinez
  3.8 Matt Morrical
  3.9 Ella Nelson
  3.10 Koichi Okazaki
    3.10.1 Facial Recognition
  3.11 Jakob Orel
  3.12 Marcellus Parks
  3.13 Lydia Sanchez
  3.14 Tiff Serra-Pichardo
    3.14.1 Artificial Neural Networks and How They Work
  3.15 Austin Stala
  3.16 Nicole Trenholm
  3.17 Maddy Weaver
    3.17.1 Random Forest
  3.18 Peter Weber
    3.18.1 Neural Networks

4 Authors: Who we are and how we learn
  4.1 Sheldon Branch
  4.2 Bram Dedrick
  4.3 Tony Ferenzi
  4.4 William Golden
  4.5 Yuan Hong
  4.6 Easton Jensen
  4.7 Rodrigo Martinez
  4.8 Matt Morrical
  4.9 Ella Nelson
  4.10 Koichi Okazaki
  4.11 Jakob Orel
  4.12 Marcellus Parks
  4.13 Lydia Sanchez
  4.14 Tiff Serra-Pichardo
  4.15 Austin Stala
  4.16 Nicole Trenholm
  4.17 Maddy Weaver
  4.18 Peter Weber
    4.18.1 Best Learning Experiences

Chapter 1
Applications: where machine learning is helping

1.1 Sheldon Branch

Sheldon Branch, CSC131, 9/3/13

Machine Learning in the Medical Field

The medical field is facing a huge problem, one that affects over 14 million people every year. This problem is cancer. Because of our lack of knowledge, doctors are often unable to reliably identify cancer. Researchers at The Johns Hopkins Hospital in Baltimore reviewed tissue samples from 6,000 cancer patients across the country. They found one out of every 71 cases was misdiagnosed; for example, a biopsy was labeled cancerous when it was not. And up to one out of five cancer cases was misclassified. Human error in the medical field is responsible for hundreds of thousands of deaths each year.
So where does machine learning fall into all of this? Machine learning is an application of artificial intelligence (AI) that provides systems the ability to automatically learn and improve from experience without being explicitly programmed. Machine learning focuses on the development of computer programs that can access data and use it to learn for themselves. By using algorithms to build models that uncover connections in the data, doctors can make better decisions without human intervention. This innovation allows calculations and diagnoses to be made without human error. Healthcare involves a lot of prediction tasks, and such tasks are a natural fit for an artificial intelligence whose sole purpose is to predict.

Regina Barzilay is an AI pioneer who leads a team at MIT. In 2014, she was diagnosed with breast cancer. After her initial diagnosis, she had a mammogram, and the doctors said that her cancer was very small. Next, the doctors did an MRI, which reported that Regina's cancer was very widespread. The doctors then needed to do a biopsy on Regina, which revealed that the MRI result was a false positive. After her treatment, her past mammograms were reviewed, and small spots of cancer were found on the scans as early as two years prior to her diagnosis. Using a machine to predict cancer, instead of performing painful and expensive procedures, is much more efficient. Her diagnosis and treatment fueled her desire to use her knowledge of artificial intelligence and machine learning to prevent such mistakes and improve the accuracy of cancer diagnoses. Regina aims to use her computer science knowledge to create an AI that can recognize patterns in medical images and in doctors' electronic notes, in an attempt to catch cancer earlier and avoid overusing invasive treatments.

A few years ago, researchers in Brazil studied the brains of expert radiologists in order to understand how they reach their diagnoses. The researchers wanted to know whether the seasoned diagnosticians were applying a mental rule book to the images or relying on pattern recognition. Twenty-five radiologists were asked to evaluate X-rays of the lung while inside MRI machines that tracked their brain activity. Some of the X-rays contained common findings, such as the palm-shaped shadow of pneumonia or the dull, opaque wall of fluid accumulated behind the lining of the lung. Others showed line drawings of animals. The final group had the outlines of letters of the alphabet. The radiologists were shown these images in a random order and were then asked to call out the name of the lesion,
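The prediction tasks described above come down to supervised learning: given examples that are already labeled cancerous or not, fit a model that maps measurements to a label, then measure how often it is wrong on cases it has never seen. The sketch below illustrates only that idea; it assumes Python with scikit-learn and uses scikit-learn's small built-in breast-cancer dataset and a logistic-regression model purely as stand-ins, not the data or systems used in the work described in this chapter.

```python
# Minimal sketch of a supervised cancer/no-cancer classifier (illustration only).
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix

# Features computed from tumor images; labels: 0 = malignant, 1 = benign.
X, y = load_breast_cancer(return_X_y=True)

# Hold out a quarter of the data to estimate error on cases the model never saw.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# "Learn from experience": fit a simple model to the labeled training examples.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

# False positives and false negatives carry very different costs in medicine,
# so report the confusion matrix rather than accuracy alone.
print("test accuracy:", model.score(X_test, y_test))
print(confusion_matrix(y_test, model.predict(X_test)))
```

The point of the held-out test set and the confusion matrix is exactly the issue raised by the misdiagnosis statistics above: a model is only useful if its false-positive and false-negative rates are measured on data it was not trained on.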