IMPROVING PERFORMANCE OF DEEP NEURAL NETWORKS BY DEVELOPING NOVEL ACTIVATION FUNCTIONS (PhD Thesis)


IOSUD - Universitatea Politehnica Timişoara, Şcoala Doctorală de Studii Inginereşti

IMPROVING PERFORMANCE OF DEEP NEURAL NETWORKS BY DEVELOPING NOVEL ACTIVATION FUNCTIONS
PhD Thesis – Abstract
To obtain a PhD degree at Politehnica University Timișoara, in Computers and Information Technology
Author: ing. Marina Adriana MERCIONI
Thesis supervisor: Prof.univ.dr.ing. Ștefan HOLBAN

The thesis entitled "Improving performance of deep neural networks by developing novel activation functions" presents a varied series of activation functions developed over time in the Deep Learning [1] domain. The thesis is structured in five parts with a total of eight chapters, each divided into subchapters (Fig. 1 and Fig. 2), as follows:

1. Introduction
2. Analysis of the current stage in Deep Learning
3. Optimization techniques
4. Architectures of neural networks
5. Deep Learning for text and sequence
6. Proposed activation functions
7. Experimental results
8. Conclusions
Bibliography

Fig. 1. The architecture of the thesis. Fig. 2. The architecture of the thesis by subchapters.

The thesis structure: the first part contains chapter 1, which presents the motivation and the objective of the research treated in the thesis. The second part contains the analysis of the current state of the art in the field of activation functions. The third part consists of chapters 3, 4 and 5, which present the optimization techniques used in the thesis, the different neural network architectures on which I ran experiments, and a short presentation of Deep Learning for text and sequences. The fourth part is the main part of the thesis; it contains chapters 6 and 7, with the proposed activation functions and the experimental results. The last part contains chapter 8, with the final conclusions on the issues presented in the thesis as well as the personal contributions. The thesis concludes with future research directions, publications and the bibliography.

The aim of the thesis is to analyze in detail and to develop new activation functions in the field of Deep Learning that are able to increase performance on tasks in both Computer Vision [2] and Natural Language Processing [3], but also on other types of tasks such as anomaly detection [4] and prediction in time series. Interest in the analysis of Artificial Neural Networks through the use of activation functions has grown rapidly in recent years, making this a highly studied research area, as evidenced by the existing articles in this direction. That is why in this thesis I want to emphasize the importance of the activation function in the analysis of several datasets for different tasks, to see how the activation function impacts the training process. Below I present, very succinctly, the essential aspects that are the key points of each chapter of the thesis.

The first chapter presents the motivation for choosing the topic of the thesis, the objective pursued and the structure of the thesis. My motivation starts from the fact that without activation functions a neural network can only learn basic tasks; by introducing the activation function, the network becomes able to learn more complex tasks. The activation function is therefore a key point in defining the architecture of a neural network, and at the same time it is one of the most important parameters we must choose in order to obtain better performance from an artificial neural network.
Starting from the role it plays in a neural network, my focus was on studying and analyzing how the activation function works, together with the advantages and disadvantages of existing activation functions, in order to find the function that best maps to the type of task and brings the best performance. A decisive property that determines the performance of an activation function is whether or not it is smooth [5]. I analyzed this property in the thesis in comparison with functions such as the hyperbolic tangent activation function (tanh) [6], the ReLU function (rectified linear unit) [7], the Swish activation function [8], and so on (illustrated in the short numerical sketch below). Smoothness gives a function continuous derivatives up to a certain specified order; in particular, it implies that the function is continuously differentiable, in other words that the first derivative exists everywhere and is continuous.

Also, because this field is constantly evolving, another direction was the development of novel activation functions that bring improvements to the architecture of artificial neural networks. This thesis aims to investigate the use of the different activation functions that are an integral part of artificial neural networks in the field of Deep Learning, and to develop novel activation functions that bring improvements during network training. This field has been very successful and widely studied in recent years due to the availability of hardware, namely graphics processing units (GPUs), and to the increasing volumes of data (Big Data).

To achieve this goal, I defined the following tasks:
● analysis of the current state of the field and identification of new research directions in the context of activation functions, which have developed over time and form a research area that is constantly evolving, including how to correlate the activation functions with the requirements of the task, the type of architecture and the type of data;
● identification of activation functions that bring substantial improvements and lead to rapid convergence of artificial neural networks; this task is based on the analysis of the activation functions and on the selection of the most appropriate function for the data, which also constitutes the exploitation and impact potential;
● implementation of several types of models and their evaluation according to specific metrics such as precision, accuracy, cost function, mean absolute error, etc.

In the second chapter, which corresponds to the second part, I focus on the presentation of the state of the art and the current situation in the field, the Multilayer Perceptron architecture, activation functions and the Backpropagation algorithm. Artificial Intelligence (AI) is a key point in technological development, allowing computers to model the real world to a very high degree and to obtain results comparable to reality. Significant progress has been made in the field of neural networks - large and sufficient to draw my attention in this direction. To achieve this, we need a large amount of information about everything around us, information that must be stored in the computer. Basically, this data is given a form that computers can use to answer questions, providing better answers as the model accumulates a growing volume of information. To generalize this context, many researchers have turned to learning algorithms that can store a large quantity of information in a short time.
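As an aside not taken from the thesis, the smoothness contrast mentioned above can be checked numerically. The following minimal Python/NumPy sketch uses textbook definitions of ReLU and Swish (the function names and the finite-difference step are illustrative) and compares first derivatives around zero: tanh and Swish vary smoothly through the origin, while ReLU's derivative jumps from 0 to 1.

```python
import numpy as np

# Illustrative, standard definitions; not the thesis implementation.
def relu(x):
    return np.maximum(0.0, x)

def swish(x):
    return x / (1.0 + np.exp(-x))  # equivalent to x * sigmoid(x)

def numerical_derivative(f, x, h=1e-6):
    # Central finite difference with an arbitrary small step h.
    return (f(x + h) - f(x - h)) / (2.0 * h)

xs = np.array([-0.1, -0.001, 0.0, 0.001, 0.1])
for name, f in [("tanh", np.tanh), ("relu", relu), ("swish", swish)]:
    print(name, np.round(numerical_derivative(f, xs), 4))
# tanh and swish change gradually across x = 0, while relu's derivative
# switches abruptly between 0 and 1, i.e. it is not continuously differentiable.
```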
A lot of progress has been made in understanding and improving these learning algorithms, but Artificial Intelligence still remains a challenge.

Fig. 3. Artificial Intelligence, Machine Learning and Deep Learning - Venn diagram [9]

The history of Deep Learning began in 1943, when a model based on the neural networks of the human brain was created. Since then, Deep Learning has constantly evolved; there were only two significant breaks in its development, referred to in the literature as the winters of Artificial Intelligence. In 1960, the basic elements of a continuous Backpropagation model were defined [10]. Although so many algorithms have been developed over time, activation functions have not been ignored either. Starting from the role of the activation function of filtering information, I want functions with equally specific properties, since the activation function is involved in updating the weights of the neural network. Based on this consideration, several activation functions have been developed that can be used in different ways and that help neural networks reach convergence faster, or give them the ability to use fewer layers in their architecture. With fewer layers in the architecture, we have fewer parameters in the network, so this is a good way to optimize it.

The main purpose of the activation function is to introduce nonlinearity into the output of a neuron. The activation function, also known as the transfer function, decides whether or not a neuron should be activated based on the weighted sum of its inputs to which a bias is added. A neural network without activation functions is just a linear regression model; the activation function is what gives the network the ability to learn more complex tasks. Another important aspect is that the activation function and the initialization of the weights determine the space from which the optimization algorithm will take values. This algorithm is the solution for training the neural network, which amounts to a non-convex optimization problem. Selecting an activation function is therefore an important issue, because it affects how the input data is transformed. It was shown that a novel type of activation function known as the Rectified Linear Unit (ReLU) improves the performance of neural networks in terms of time efficiency and spatial complexity. The impact of using this nonlinear behavior instead of the sigmoid or hyperbolic tangent (tanh) function was also studied, using a controller to prevent possible numerical problems with unbounded activations. In this chapter I also presented the Perceptron and Multilayer Perceptron architectures. Because the activation function is a fundamental
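To make concrete the point above that a network without activation functions collapses to a linear model, here is a minimal sketch (assuming NumPy; the layer sizes and variable names are illustrative and not taken from the thesis). Two stacked affine layers reduce to a single affine map, and inserting a ReLU between them is what breaks that collapse.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))                      # a small batch of inputs
W1, b1 = rng.normal(size=(3, 5)), rng.normal(size=5)
W2, b2 = rng.normal(size=(5, 2)), rng.normal(size=2)

# Two stacked layers WITHOUT an activation function...
no_activation = (x @ W1 + b1) @ W2 + b2
# ...are exactly one affine layer with folded weights.
W_folded, b_folded = W1 @ W2, b1 @ W2 + b2
assert np.allclose(no_activation, x @ W_folded + b_folded)

# Inserting a nonlinearity (here ReLU) breaks this collapse,
# which is what lets the network represent more complex functions.
with_relu = np.maximum(0.0, x @ W1 + b1) @ W2 + b2
print(np.allclose(with_relu, x @ W_folded + b_folded))  # False in general
```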
Recommended publications
  • Improvements on Activation Functions in ANN: An Overview
    Management Science and Engineering, Vol. 14, No. 1, 2020, pp. 53-58. ISSN 1913-0341 [Print], ISSN 1913-035X [Online]. DOI: 10.3968/11667. www.cscanada.net, www.cscanada.org.
    Improvements on Activation Functions in ANN: An Overview. YANG Lexuan (Hangzhou No.2 High School of Zhejiang Province, Hangzhou, China; corresponding author). Received 9 April 2020; accepted 21 May 2020; published online 26 June 2020.
    Abstract: Activation functions are an essential part of artificial neural networks. Over the years, researches have been done to seek for new functions that perform better. There are several mainstream activation functions, such as sigmoid and ReLU, which are widely used across the decades. At the meantime, many modified versions of these functions are also proposed by researchers in order to further improve the performance. In this paper, limitations of the mainstream activation functions, as well as the main characteristics and relative performances of their modifications, are discussed.
    1. INTRODUCTION OF ACTIVATION FUNCTIONS: An ANN is comprised of computable units called neurons (or nodes). The activation function in a neuron processes the inputted signals and determines the output. Activation functions are particularly important in the ANN algorithm; change in activation functions can have a significant impact on network performance. Therefore, in recent years, researches have been done in depth on the improvement of activation functions to solve or alleviate some problems encountered in practices with "classic" activation functions. This paper will give some brief examples and discussions on the modifications and improvements of mainstream activation functions.
  • EIS - A Family of Activation Functions Combining Exponential, ISRU, and Softplus
    EIS - A Family of Activation Functions Combining Exponential, ISRU, and Softplus. A preprint, arXiv:2009.13501v2 [cs.LG], 12 Oct 2020. Koushik Biswas (Department of Computer Science, IIIT Delhi, New Delhi, India, 110020); Sandeep Kumar (Department of Computer Science, IIIT Delhi & Department of Mathematics, Shaheed Bhagat Singh College, University of Delhi, New Delhi, India); Shilpak Banerjee (Department of Mathematics, IIIT Delhi, New Delhi, India, 110020); Ashish Kumar Pandey (Department of Mathematics, IIIT Delhi, New Delhi, India, 110020).
    Abstract: Activation functions play a pivotal role in the function learning using neural networks. The non-linearity in the learned function is achieved by repeated use of the activation function. Over the years, numerous activation functions have been proposed to improve accuracy in several tasks. Basic functions like ReLU, Sigmoid, Tanh, or Softplus have been favorite among the deep learning community because of their simplicity. In recent years, several novel activation functions arising from these basic functions have been proposed, which have improved accuracy in some challenging datasets. We propose a five hyper-parameters family of activation functions, namely EIS, defined as x(ln(1 + e^x))^α / (√(β + γx²) + δe^(−θx)). We show examples of activation functions from the EIS family which outperform widely used activation functions on some well known datasets and models. For example, x ln(1 + e^x)/(x + 1.16e^(−x)) beats ReLU by 0.89% in DenseNet-169 and 0.24% in Inception V3 in the CIFAR100 dataset, while x ln(1 + e^x)/(x + 1.16e^(−x)) improves by 1.13% in Inception V3, 0.13% in DenseNet-169, and 0.94% in the SimpleNet model in the CIFAR10 dataset.
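    As a hedged aside (not code from the listed paper), the reconstructed EIS formula above can be written as a short NumPy function. The hyper-parameter defaults below are only illustrative; with α=1, β=0, γ=1, δ=1.16, θ=1 the expression reduces to x·ln(1+e^x)/(x + 1.16e^(−x)) for non-negative x, since √(x²) = |x|.

```python
import numpy as np

def eis(x, alpha=1.0, beta=0.0, gamma=1.0, delta=1.16, theta=1.0):
    """Illustrative sketch of the EIS family:
    x * (ln(1 + e^x))^alpha / (sqrt(beta + gamma*x^2) + delta*e^(-theta*x))."""
    numerator = x * np.log1p(np.exp(x)) ** alpha
    denominator = np.sqrt(beta + gamma * x ** 2) + delta * np.exp(-theta * x)
    return numerator / denominator

print(np.round(eis(np.linspace(-3.0, 3.0, 7)), 4))
```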
  • Invertible DenseNets with Concatenated LipSwish (arXiv:2102.02694v2 [stat.ML], 19 May 2021)
    Invertible DenseNets with Concatenated LipSwish. Yura Perugachi-Diaz, Jakub M. Tomczak, Sandjai Bhulai (Vrije Universiteit Amsterdam).
    Abstract: We introduce Invertible Dense Networks (i-DenseNets), a more parameter efficient extension of Residual Flows. The method relies on an analysis of the Lipschitz continuity of the concatenation in DenseNets, where we enforce invertibility of the network by satisfying the Lipschitz constant. Furthermore, we propose a learnable weighted concatenation, which not only improves the model performance but also indicates the importance of the concatenated weighted representation. Additionally, we introduce the Concatenated LipSwish as activation function, for which we show how to enforce the Lipschitz condition and which boosts performance. The new architecture, i-DenseNet, outperforms Residual Flow and other flow-based models on density estimation evaluated in bits per dimension, where we utilize an equal parameter budget. Moreover, we show that the proposed model outperforms Residual Flows when trained as a hybrid model, where the model is both a generative and a discriminative model.
    1. Introduction: Neural networks are widely used to parameterize non-linear models in supervised learning tasks such as classification. In addition, they are also utilized to build flexible density estimators of the true distribution of the observed data [25, 33]. The resulting deep density estimators, also called deep generative models, can be further used to generate realistic-looking images that are hard to separate from real ones, for the detection of adversarial attacks [9, 17], and for hybrid modeling [27], which has the property of both predicting a label (classifying) and generating.
  • CPSC 340: Machine Learning and Data Mining - More Deep Learning (Fall 2020)
    CPSC 340: Machine Learning and Data Mining - More Deep Learning, Fall 2020 (lecture slides). Last time: Deep Learning (figure sources: https://en.wikipedia.org/wiki/Neuron, https://www.youtube.com/watch?v=aircAruvnKk). ImageNet Challenge: millions of labeled images, 1000 object classes (http://www.image-net.org/challenges/LSVRC/2014/). Object detection task: single label per image; humans: ~5% error (example: Siberian Husky vs. Canadian Husky). 2015: won by Microsoft Asia with 3.6% error. Sources: https://ischlag.github.io/2016/04/05/important-ILSVRC-achievements/, http://arxiv.org/pdf/1409.0575v3.pdf, http://arxiv.org/pdf/1409.4842v1.pdf
  • On the Selection of Initialization and Activation Function for Deep Neural Networks
    On the Selection of Initialization and Activation Function for Deep Neural Networks. Anonymous authors; paper under double-blind review (under review as a conference paper at ICLR 2019).
    Abstract: The weight initialization and the activation function of deep neural networks have a crucial impact on the performance of the training procedure. An inappropriate selection can lead to the loss of information of the input during forward propagation and the exponential vanishing/exploding of gradients during back-propagation. Understanding the theoretical properties of untrained random networks is key to identifying which deep networks may be trained successfully, as recently demonstrated by Schoenholz et al. (2017), who showed that for deep feedforward neural networks only a specific choice of hyperparameters known as the 'edge of chaos' can lead to good performance. We complete this analysis by providing quantitative results showing that, for a class of ReLU-like activation functions, the information propagates indeed deeper for an initialization at the edge of chaos. By further extending this analysis, we identify a class of activation functions that improve the information propagation over ReLU-like functions. This class includes the Swish activation, φ_swish(x) = x · sigmoid(x), used in Hendrycks & Gimpel (2016), Elfwing et al. (2017) and Ramachandran et al. (2017). This provides a theoretical grounding for the excellent empirical performance of φ_swish observed in these contributions. We complement those previous results by illustrating the benefit of using a random initialization on the edge of chaos in this context.
    1. Introduction: Deep neural networks have become extremely popular as they achieve state-of-the-art performance on a variety of important applications including language processing and computer vision; see, e.g., LeCun et al.
  • Activation Functions: Comparison of Trends in Practice and Research for Deep Learning
    Activation Functions: Comparison of Trends in Practice and Research for Deep Learning. Chigozie E. Nwankpa (Dept. of DMEM), Winifred Ijomah (Dept. of DMEM), Anthony Gachagan (Dept. of EEE), Stephen Marshall (Dept. of EEE), University of Strathclyde, Glasgow, UK. This is a peer-reviewed, accepted author manuscript of: Nwankpa, C. E., Ijomah, W., Gachagan, A., & Marshall, S. (2021). Activation functions: comparison of trends in practice and research for deep learning. 2nd International Conference on Computational Sciences and Technology, Jamshoro, Pakistan, pp. 124-133.
    Abstract: Deep neural networks (DNN) have been successfully used in diverse emerging domains to solve real-world complex problems, with many more deep learning (DL) architectures being developed to date. To achieve this state-of-the-art (SOTA) performance, the DL architectures use activation functions (AFs) to perform diverse computations between the hidden layers and the output layers of any given DL architecture. This paper presents a survey on the existing AFs used in deep learning applications and highlights the recent trends in the use of the AFs for DL applications.
    1. Introduction (fragment): Deep learning, a sub-field of machine learning (ML), is more recently being referred to as representation learning in some literature [3]. The typical artificial neural networks (ANN) are biologically inspired computer programmes, designed by the inspiration of the workings of the human brain. These ANNs are called networks because they are composed of different functions [4], which gather knowledge by detecting the relationships and patterns in data using past experiences known as training examples.
  • Activation Functions in Neural Networks
    Activation Functions in Neural Networks. Siddharth Sharma, Simone Sharma (UG Scholars), Anidhya Athaiya (Assistant Professor), Dept. of Computer Science and Engineering, Global Institute of Technology, Jaipur. International Journal of Engineering Applied Sciences and Technology, 2020, Vol. 4, Issue 12, ISSN No. 2455-2143, pp. 310-316. Published online April 2020 in IJEAST (http://www.ijeast.com).
    Abstract: Artificial Neural Networks are inspired from the human brain and the network of neurons present in the brain. The information is processed and passed on from one neuron to another through neuro-synaptic junctions. Similarly, in artificial neural networks there are different layers of cells arranged and connected to each other. The output/information from the inner layers of the neural network is passed on to the next layers and finally to the outermost layer, which gives the output. Non-linearity is applied to the inner layers' output so that it can be further processed. A linear activation function is used where the output is similar to the input fed, along with some error; a linear activation function's boundary is linear and, if such functions are used, the network can adapt only to linear changes of the input. But in the real world the errors possess non-linear characteristics, which interferes with the neural network's ability to learn about erroneous data. Hence non-linear activation functions are preferred over linear activation functions in a Neural Network. In an Artificial Neural Network, activation functions are very important as they help in learning. The most appealing property of Artificial Neural Networks is the ability to adapt their behavior according to the changing characteristics of the system.
  • A New Activation Function for Training Deep Neural Networks to Avoid Local Minimum
    A New Activation Function for Training Deep Neural Networks to Avoid Local Minimum. Abhinav Sagar, Vellore Institute of Technology, Vellore, Tamil Nadu, India.
    Abstract: Activation functions have a major role to play and hence are very important while training neural networks. Understanding various activation functions and their advantages and disadvantages is crucial to achieve better results. This paper will first introduce common types of non-linear activation functions and then evaluate their characteristics with their pros and cons. We will be focusing only on deep neural networks as they have proven to be much more difficult to train while avoiding overfitting. We have proposed a new activation function named Abhinav, which adds a non-linearity with parametric coefficients to the Swish activation function. This has proven to give better results on the MNIST dataset. We reason this is because the model avoids getting stuck in local minima due to only the sigmoidal term present in the Swish function. The coefficients are automatically adjusted in each and every iteration, i.e. coefficient values are reduced if the error is large, sometimes even to zero, thus removing coefficients present in the polynomial. This new activation function can be made to generalize to other tasks, including multi-class and multi-label image classification.
    Index Terms: Neural Networks, Deep learning, Image classification, Activation function, Swish.
    1. Introduction: Deep neural network algorithms are multi-layered learning techniques that allow non-linear nodes to transform representations from the raw input into higher levels of abstract representations, with many of these transformations producing learned complex functions.
  • Activation Functions: Comparison of Trends in Practice and Research for Deep Learning
    Activation Functions: Comparison of Trends in Practice and Research for Deep Learning. Chigozie Enyinna Nwankpa, Winifred Ijomah, Anthony Gachagan, and Stephen Marshall.
    Abstract: Deep neural networks have been successfully used in diverse emerging domains to solve real world complex problems, with many more deep learning (DL) architectures being developed to date. To achieve these state-of-the-art performances, the DL architectures use activation functions (AFs) to perform diverse computations between the hidden layers and the output layers of any given DL architecture. This paper presents a survey on the existing AFs used in deep learning applications and highlights the recent trends in the use of the activation functions for deep learning applications. The novelty of this paper is that it compiles the majority of the AFs used in DL and outlines the current trends in the applications and usage of these functions in practical deep learning deployments against the state-of-the-art research results. This compilation will aid in making effective decisions in the choice of the most suitable and appropriate activation function for any given application, ready for deployment. This paper is timely because most research papers on AFs highlight similar works and results, while this paper will be the first to compile the trends in AF applications in practice against the research results from literature found in deep learning research to date.
    Index Terms: activation function, activation function types, activation function choices, deep learning, neural networks, learning algorithms.
    I. Introduction: Deep learning algorithms are multi-level representation learning techniques that allow simple non-linear modules to transform representations from the raw input into higher levels of abstract representations, with many of these transformations producing learned complex functions.
  • Review and Comparison of Commonly Used Activation Functions for Deep Neural Networks
    Review and Comparison of Commonly Used Activation Functions for Deep Neural Networks. Tomasz Szandała, Wroclaw University of Science and Technology, Wroclaw, Poland.
    Abstract: The primary decision-making units of neural networks are activation functions. Moreover, they evaluate the output of a network's neural node; thus, they are essential for the performance of the whole network. Hence, it is critical to choose the most appropriate activation function in neural network calculations. Acharya et al. (2018) suggest that numerous recipes have been formulated over the years, though some of them are considered deprecated these days since they are unable to operate properly under some conditions. These functions have a variety of characteristics, which are deemed essential for successful learning. Their monotonicity, individual derivatives, and the finiteness of their range are some of these characteristics (Bach 2017). This research paper evaluates the commonly used activation functions, such as Swish, ReLU, Sigmoid, and so forth, followed by their properties, their pros and cons, and particular formula application recommendations.
    Keywords: deep learning, neural networks, activation function, classification, regression.
    Introduction: There exist many applications of deep learning neural networks, such as voice analysis, speech or pattern recognition, and object classification. This has been demonstrated by their excellent performance in many fields. Moreover, a deep network comprises a structure with hidden layers, meaning that there is more than one such layer [1], [2]. From the study of deep learning structures, they can adapt to real-world applications with natural scenes due to their salient features. There were only a few layers in the first deep learning models used for classification tasks; for example, there were only five layers in the LeNet5 model [3].
  • CPSC 340: Machine Learning and Data Mining - More Deep Learning (Fall 2019)
    CPSC 340: Machine Learning and Data Mining - More Deep Learning, Fall 2019 (lecture slides). Last time: Deep Learning (figure sources: https://en.wikipedia.org/wiki/Neuron, https://www.youtube.com/watch?v=aircAruvnKk). ImageNet Challenge: millions of labeled images, 1000 object classes (http://www.image-net.org/challenges/LSVRC/2014/). Object detection task: single label per image; humans: ~5% error (example: Siberian Husky vs. Canadian Husky). 2015: won by Microsoft Asia with 3.6% error. Sources: https://ischlag.github.io/2016/04/05/important-ILSVRC-achievements/, http://arxiv.org/pdf/1409.0575v3.pdf, http://arxiv.org/pdf/1409.4842v1.pdf