Fine Tuning Image Input Sets for Deep Neural Networks Using Hallucinations
Total Pages: 16
File Type: pdf, Size: 1020 KB
Recommended publications
-
Training Autoencoders by Alternating Minimization
Under review as a conference paper at ICLR 2018. Anonymous authors, paper under double-blind review.
ABSTRACT We present DANTE, a novel method for training neural networks, in particular autoencoders, using the alternating minimization principle. DANTE provides a distinct perspective in lieu of traditional gradient-based backpropagation techniques commonly used to train deep networks. It utilizes an adaptation of quasi-convex optimization techniques to cast autoencoder training as a bi-quasi-convex optimization problem. We show that for autoencoder configurations with both differentiable (e.g. sigmoid) and non-differentiable (e.g. ReLU) activation functions, we can perform the alternations very effectively. DANTE effortlessly extends to networks with multiple hidden layers and varying network configurations. In experiments on standard datasets, autoencoders trained using the proposed method were found to be very promising and competitive with traditional backpropagation techniques, both in terms of quality of solution and training speed.
1 INTRODUCTION For much of the recent march of deep learning, gradient-based backpropagation methods, e.g. Stochastic Gradient Descent (SGD) and its variants, have been the mainstay of practitioners. The use of these methods, especially on vast amounts of data, has led to unprecedented progress in several areas of artificial intelligence. On one hand, the intense focus on these techniques has led to an intimate understanding of the hardware requirements and code optimizations needed to execute these routines on large datasets in a scalable manner. Today, myriad off-the-shelf and highly optimized packages exist that can churn through reasonably large datasets on GPU architectures with relatively mild human involvement and little bootstrap effort.
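A minimal sketch of the alternating-minimization idea for a single-hidden-layer autoencoder: freeze one half of the network while taking gradient steps on the other, then swap. This is an assumed plain-gradient variant for illustration; DANTE's actual alternations use quasi-convex (generalized linear model style) updates not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((256, 20))          # toy data: 256 samples, 20 features

d_in, d_hid = 20, 8
W_enc = rng.standard_normal((d_in, d_hid)) * 0.1
W_dec = rng.standard_normal((d_hid, d_in)) * 0.1

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def loss(W_enc, W_dec):
    H = sigmoid(X @ W_enc)                  # hidden codes
    return np.mean((H @ W_dec - X) ** 2)    # reconstruction error

lr = 0.5
for outer in range(50):
    # Phase 1: freeze the decoder, take gradient steps on the encoder.
    for _ in range(5):
        H = sigmoid(X @ W_enc)
        R = (H @ W_dec - X) / X.size                      # residual, mean-scaled
        G_enc = X.T @ ((2 * R @ W_dec.T) * H * (1 - H))   # chain rule through sigmoid
        W_enc -= lr * G_enc
    # Phase 2: freeze the encoder (codes fixed), take gradient steps on the decoder.
    H = sigmoid(X @ W_enc)
    for _ in range(5):
        G_dec = 2 * H.T @ (H @ W_dec - X) / X.size
        W_dec -= lr * G_dec

print("reconstruction MSE:", loss(W_enc, W_dec))
```

Note that with the encoder frozen, the decoder subproblem is ordinary least squares; it is this per-phase tractability that alternating schemes exploit.
-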
ML-Based Interactive Data Visualization System for Diversity and Fairness Issues
https://doi.org/10.5392/IJoC.2019.15.4.001
ML-based Interactive Data Visualization System for Diversity and Fairness Issues
Sey Min, Department of Art & Technology, Sogang University, Seoul, Republic of Korea
Jusub Kim, Department of Art & Technology, Sogang University, Seoul, Republic of Korea
ABSTRACT As the recent developments of artificial intelligence, particularly machine learning, impact every aspect of society, they are also increasingly influencing creative fields, manifesting as new artistic tools and inspirational sources. However, as more artists integrate the technology into their creative works, issues of diversity and fairness are also emerging in AI-based creative practice. The data dependency of machine-learning algorithms can amplify the social injustice existing in the real world. In this paper, we present an interactive visualization system for raising awareness of diversity and fairness issues. Rather than resorting to education, campaigns, or laws on those issues, we have developed a web- and ML-based interactive data visualization system. By providing an interactive visual experience of the issues in engaging ways, in the form of web content that anyone can access from anywhere, we strive to raise public awareness of the issues and alleviate these important ethical problems. In this paper, we present the process of developing the ML-based interactive visualization system and discuss the results of this project. The proposed approach can be applied to other areas requiring attention to these issues.
Key words: AI Art, Machine Learning, Data Visualization, Diversity, Fairness, Inclusiveness.
-
Q-Learning in Continuous State and Action Spaces
Q-Learning in Continuous State and Action Spaces
Chris Gaskett, David Wettergreen, and Alexander Zelinsky
Robotic Systems Laboratory, Department of Systems Engineering, Research School of Information Sciences and Engineering, The Australian National University, Canberra, ACT 0200 Australia
[cg|dsw|alex]@syseng.anu.edu.au
Abstract. Q-learning can be used to learn a control policy that maximises a scalar reward through interaction with the environment. Q-learning is commonly applied to problems with discrete states and actions. We describe a method suitable for control tasks which require continuous actions in response to continuous states. The system consists of a neural network coupled with a novel interpolator. Simulation results are presented for a non-holonomic control task. Advantage Learning, a variation of Q-learning, is shown to enhance learning speed and reliability for this task.
1 Introduction Reinforcement learning systems learn by trial and error which actions are most valuable in which situations (states) [1]. Feedback is provided in the form of a scalar reward signal which may be delayed. The reward signal is defined in relation to the task to be achieved; reward is given when the system is successfully achieving the task. The value is updated incrementally with experience and is defined as a discounted sum of expected future reward. The learning system's choice of actions in response to states is called its policy. Reinforcement learning lies between the extremes of supervised learning, where the policy is taught by an expert, and unsupervised learning, where no feedback is given and the task is to find structure in data.
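For reference, the discrete tabular update that the abstract contrasts with is the standard Q-learning step, sketched below. The paper's setting replaces the table with a neural network coupled with an interpolator so that the maximisation over actions stays tractable when states and actions are continuous.

```python
import numpy as np

n_states, n_actions = 16, 4
alpha, gamma = 0.1, 0.95                 # learning rate and discount factor
Q = np.zeros((n_states, n_actions))      # tabular value estimates

def q_update(s, a, r, s_next, done):
    """One Q-learning step: move Q(s, a) toward the bootstrapped target."""
    target = r if done else r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (target - Q[s, a])
```

In continuous spaces there is no table and the max over actions is no longer a finite scan, which is exactly the gap the reviewed system's network-plus-interpolator design addresses.
-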
Artificial Neural Networks Part 2/3 – Perceptron
Artificial Neural Networks Part 2/3 – Perceptron
Slides modified from Neural Network Design by Hagan, Demuth and Beale. Berrin Yanikoglu.
Perceptron: a single artificial neuron that computes its weighted input and uses a threshold activation function. It effectively separates the input space into two categories by the hyperplane w^T x + b = 0.
Decision Boundary: The weight vector is orthogonal to the decision boundary and points in the direction of the vectors which should produce an output of 1, so that the vectors with positive output are on the right side of the decision boundary. If w pointed in the opposite direction, the dot products of all input vectors would have the opposite sign; this would result in the same classification but with opposite labels. The bias determines the position of the boundary: solve w^T p + b = 0 using one point p on the decision boundary to find b.
Two-Input Case: a = hardlim(n) = hardlim([1 2]p - 2), i.e. w_{1,1} = 1, w_{1,2} = 2. The decision boundary is the set of all points p for which w^T p + b = 0. If we have the weights but not the bias, we can take a point on the decision boundary, p = [2 0]^T, and solve [1 2]p + b = 0 to find b = -2.
Since w^T p = ||w|| ||p|| cos(theta), every point p on the decision boundary satisfies w^T p = -b, so its projection onto w is ||p|| cos(theta) = w^T p / ||w|| = -b / ||w||. All points on the decision boundary have the same inner product (-b) with the weight vector, hence the same projection onto it; they must therefore lie on a line orthogonal to the weight vector. (ADVANCED)
An Illustrative Example, Boolean OR: {p1 = [0 0]^T, t1 = 0}, {p2 = [0 1]^T, t2 = 1}, {p3 = [1 0]^T, t3 = 1}, {p4 = [1 1]^T, t4 = 1}. Given the above input-output pairs (p, t), can you find (manually) the weights of a perceptron to do the job? Solution: 1) pick an admissible decision boundary; 2) the weight vector should be orthogonal to the decision boundary; see the numerical check below.
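A quick numerical check of one admissible solution, assuming w = [2, 2]^T and b = -1 (this particular choice is an assumption; any boundary separating p1 from the other three points is equally valid):

```python
import numpy as np

hardlim = lambda n: (n >= 0).astype(int)         # threshold activation

P = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])   # inputs p1..p4
t = np.array([0, 1, 1, 1])                       # Boolean OR targets

w = np.array([2, 2])    # orthogonal to the chosen decision boundary
b = -1                  # fixes the boundary's position

a = hardlim(P @ w + b)
print(a)                    # [0 1 1 1], matching the targets
assert np.array_equal(a, t)
```
-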
Building Intelligent Systems with Large Scale Deep Learning (Jeff Dean, Google Brain Team, g.co/brain)
Building Intelligent Systems with Large Scale Deep Learning. Jeff Dean, Google Brain team, g.co/brain. Presenting the work of many people at Google.
Google Brain Team Mission: Make Machines Intelligent. Improve People's Lives.
How do we do this?
● Conduct long-term research (>200 papers, see g.co/brain & g.co/brain/papers): unsupervised learning of cats, Inception, word2vec, seq2seq, DeepDream, image captioning, neural translation, Magenta, ML for robotics control, healthcare, ...
● Build and open-source systems like TensorFlow (see tensorflow.org and https://github.com/tensorflow/tensorflow)
● Collaborate with others at Google and Alphabet to get our work into the hands of billions of people (e.g., RankBrain for Google Search, GMail Smart Reply, Google Photos, Google speech recognition, Google Translate, Waymo, ...)
● Train new researchers through internships and the Google Brain Residency program
Main Research Areas: general machine learning algorithms and techniques; computer systems for machine learning; natural language understanding; perception; healthcare; robotics; music and art generation.
research.googleblog.com/2017/01/the-google-brain-team-looking-back-on.html
[Slide charts: accuracy vs. scale (data size, model size) in the 1980s and 1990s, neural networks vs. other approaches; with more compute, the neural network curve pulls ahead.]
-
Artificial Neuron Using MoS2/Graphene Threshold Switching Memristor
University of Central Florida, STARS Electronic Theses and Dissertations, 2004-2019. 2018.
Artificial Neuron using MoS2/Graphene Threshold Switching Memristor. Hirokjyoti Kalita, University of Central Florida.
STARS Citation: Kalita, Hirokjyoti, "Artificial Neuron using MoS2/Graphene Threshold Switching Memristor" (2018). Electronic Theses and Dissertations, 2004-2019. 5988. https://stars.library.ucf.edu/etd/5988
A thesis (B.Tech. SRM University, 2015) submitted in partial fulfilment of the requirements for the degree of Master of Science in the Department of Electrical and Computer Engineering in the College of Engineering and Computer Science at the University of Central Florida, Orlando, Florida. Summer Term 2018. Major Professor: Tania Roy.
ABSTRACT With the ever-increasing demand for low-power electronics, neuromorphic computing has garnered huge interest in recent times. Implementing neuromorphic computing in hardware will be a significant boost for applications involving complex processes such as pattern recognition. Artificial neurons form a critical part of neuromorphic circuits, and have been realized with complex complementary metal-oxide-semiconductor (CMOS) circuitry in the past. Recently, insulator-to-metal-transition (IMT) materials have been used to realize artificial neurons.
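The neuron behaviour such threshold-switching devices emulate is, at its simplest, integrate-and-fire: accumulate input until a threshold is crossed, then spike and reset. A schematic leaky integrate-and-fire sketch follows; it is illustrative only and does not model the MoS2/graphene device physics.

```python
import numpy as np

dt, tau = 1e-3, 20e-3        # time step (s), membrane time constant (s)
v_th, v_reset = 1.0, 0.0     # firing threshold and reset potential

def lif(input_current, v=0.0):
    """Leaky integrate-and-fire: returns membrane trace and spike times."""
    vs, spikes = [], []
    for i, I in enumerate(input_current):
        v += dt / tau * (-v + I)     # leaky integration of the input
        if v >= v_th:                # threshold crossing: fire and reset
            spikes.append(i * dt)
            v = v_reset
        vs.append(v)
    return np.array(vs), spikes

_, spike_times = lif(np.full(1000, 1.5))   # constant drive for 1 s
print(len(spike_times), "spikes")
```
-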
Audio Event Classification Using Deep Learning in an End-To-End Approach
Audio Event Classification using Deep Learning in an End-to-End Approach. Master thesis, Jose Luis Diez Antich. Aalborg University Copenhagen, A. C. Meyers Vænge 15, 2450 Copenhagen SV, Denmark.
Supervisor: Hendrik Purwins. Page Numbers: 38. Date of Completion: June 16, 2017.
Abstract: The goal of the master thesis is to study the task of Sound Event Classification using Deep Neural Networks in an end-to-end approach. Sound Event Classification is a multi-label classification problem of sound sources originating from everyday environments. An automatic system for it would have many applications; for example, it could help users of hearing devices to understand their surroundings or enhance robot navigation systems. The end-to-end approach consists of systems that learn directly from data, not from features, and it has recently been applied to audio with remarkable results. Even though the results do not show an improvement over standard approaches, the contribution of this thesis is an exploration of deep learning architectures which can be useful to understand how networks process audio.
The content of this report is freely available, but publication (with reference) may only be pursued due to agreement with the author.
Contents: 1 Introduction (1.1 Scope of this work); 2 Deep Learning (2.1 Overview; 2.2 Multilayer Perceptron)
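A minimal sketch of what "end-to-end" means here: a network that consumes raw waveform samples rather than hand-crafted features, with sigmoid outputs and binary cross-entropy for the multi-label setting. The layer sizes and class count are illustrative assumptions, not the thesis architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

n_samples, n_classes = 16000, 10    # 1 s of 16 kHz audio, 10 event classes

model = models.Sequential([
    layers.Input(shape=(n_samples, 1)),             # raw waveform in, no features
    layers.Conv1D(32, 64, strides=8, activation='relu'),
    layers.MaxPooling1D(4),
    layers.Conv1D(64, 16, strides=2, activation='relu'),
    layers.GlobalMaxPooling1D(),
    layers.Dense(64, activation='relu'),
    layers.Dense(n_classes, activation='sigmoid'),  # independent per-class probabilities
])
# Multi-label: each class is an independent Bernoulli, hence binary cross-entropy.
model.compile(optimizer='adam', loss='binary_crossentropy')
model.summary()
```
-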
Comparative Analysis of Recurrent Neural Network Architectures for Reservoir Inflow Forecasting
water, Article. Comparative Analysis of Recurrent Neural Network Architectures for Reservoir Inflow Forecasting
Halit Apaydin 1, Hajar Feizi 2, Mohammad Taghi Sattari 1,2,*, Muslume Sevba Colak 1, Shahaboddin Shamshirband 3,4,* and Kwok-Wing Chau 5
1 Department of Agricultural Engineering, Faculty of Agriculture, Ankara University, Ankara 06110, Turkey; [email protected] (H.A.); [email protected] (M.S.C.)
2 Department of Water Engineering, Agriculture Faculty, University of Tabriz, Tabriz 51666, Iran; [email protected]
3 Department for Management of Science and Technology Development, Ton Duc Thang University, Ho Chi Minh City, Vietnam
4 Faculty of Information Technology, Ton Duc Thang University, Ho Chi Minh City, Vietnam
5 Department of Civil and Environmental Engineering, Hong Kong Polytechnic University, Hong Kong, China; [email protected]
* Correspondence: [email protected] or [email protected] (M.T.S.); [email protected] (S.S.)
Received: 1 April 2020; Accepted: 21 May 2020; Published: 24 May 2020
Abstract: Due to the stochastic nature and complexity of flow, as well as the existence of hydrological uncertainties, predicting streamflow in dam reservoirs, especially in semi-arid and arid areas, is essential for the optimal and timely use of surface water resources. In this research, daily streamflow to the Ermenek hydroelectric dam reservoir located in Turkey is simulated using deep recurrent neural network (RNN) architectures, including bidirectional long short-term memory (Bi-LSTM), gated recurrent unit (GRU), long short-term memory (LSTM), and simple recurrent neural networks (simple RNN). For this purpose, daily observational flow data are used during the period 2012-2018, and all models are coded in the Python programming language.
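A sketch of how the four compared architectures could be set up in Keras for one-step-ahead inflow forecasting; the abstract says the models were coded in Python, but the layer sizes and input window below are assumptions for illustration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

window, n_features = 30, 1     # 30 past days of daily flow as input

def make_model(kind, units=32):
    """Build one of the four compared recurrent architectures."""
    cell = {'rnn': layers.SimpleRNN, 'lstm': layers.LSTM, 'gru': layers.GRU}
    if kind == 'bilstm':
        recurrent = layers.Bidirectional(layers.LSTM(units))
    else:
        recurrent = cell[kind](units)
    model = models.Sequential([
        layers.Input(shape=(window, n_features)),
        recurrent,
        layers.Dense(1),               # next-day inflow (regression)
    ])
    model.compile(optimizer='adam', loss='mse')
    return model

models_by_kind = {k: make_model(k) for k in ['rnn', 'lstm', 'gru', 'bilstm']}
```
-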
Using the Modified Back-Propagation Algorithm to Perform Automated Downlink Analysis by Nancy Y. Xiao
Using The Modified Back-propagation Algorithm To Perform Automated Downlink Analysis, by Nancy Y. Xiao. Submitted to the Department of Electrical Engineering and Computer Science in partial fulfillment of the requirements for the degrees of Bachelor of Science and Master of Engineering in Electrical Engineering and Computer Science at the MASSACHUSETTS INSTITUTE OF TECHNOLOGY, June 1996. Copyright Nancy Y. Xiao, 1996. All rights reserved. The author hereby grants to MIT permission to reproduce and distribute publicly paper and electronic copies of this thesis document in whole or in part, and to grant others the right to do so.
Author: Department of Electrical Engineering and Computer Science, May 28, 1996. Certified by: Bernard C. Lesieutre, Electrical Engineering, Thesis Supervisor. Accepted by: F.R. Morgenthaler, Chairman, Departmental Committee on Graduate Students.
Abstract: A multi-layer neural network computer program was developed to perform supervised learning tasks. The weights in the neural network were found using the back-propagation algorithm. Several modifications to this algorithm were also implemented to accelerate error convergence and optimize search procedures. This neural network was used mainly to perform pattern recognition tasks. As an initial test of the system, a controlled classical pattern recognition experiment was conducted using the X and Y coordinates of the data points from two to five possibly overlapping Gaussian distributions, each with a different mean and variance.
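A compact sketch of the plain algorithm plus one common modification to accelerate error convergence, a momentum term. The excerpt does not name the thesis's specific modifications, so momentum here is an assumption; the toy two-class data is likewise illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 2))                       # 2-D points, two classes
y = (X[:, 0] + X[:, 1] > 0).astype(float)[:, None]

W1, b1 = rng.standard_normal((2, 8)) * 0.5, np.zeros(8)
W2, b2 = rng.standard_normal((8, 1)) * 0.5, np.zeros(1)
vW1 = vb1 = vW2 = vb2 = 0.0                             # momentum buffers
lr, mu = 0.5, 0.9                                       # learning rate, momentum

sigmoid = lambda z: 1 / (1 + np.exp(-z))

for epoch in range(500):
    # Forward pass.
    H = sigmoid(X @ W1 + b1)
    out = sigmoid(H @ W2 + b2)
    # Backward pass: gradients of mean squared error through the sigmoids.
    d_out = (out - y) * out * (1 - out) / len(X)
    d_H = (d_out @ W2.T) * H * (1 - H)
    gW2, gb2 = H.T @ d_out, d_out.sum(0)
    gW1, gb1 = X.T @ d_H, d_H.sum(0)
    # Momentum: mix the previous update into the new one.
    vW2 = mu * vW2 - lr * gW2; W2 += vW2
    vb2 = mu * vb2 - lr * gb2; b2 += vb2
    vW1 = mu * vW1 - lr * gW1; W1 += vW1
    vb1 = mu * vb1 - lr * gb1; b1 += vb1

print("training accuracy:", ((out > 0.5) == y).mean())
```
-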
Deep Multiphysics and Particle–Neuron Duality: a Computational Framework Coupling (Discrete) Multiphysics and Deep Learning
applied sciences, Article. Deep Multiphysics and Particle-Neuron Duality: A Computational Framework Coupling (Discrete) Multiphysics and Deep Learning
Alessio Alexiadis, School of Chemical Engineering, University of Birmingham, Birmingham B15 2TT, UK; [email protected]; Tel.: +44-(0)-121-414-5305
Received: 11 November 2019; Accepted: 6 December 2019; Published: 9 December 2019
Featured Application: Coupling First-Principle Modelling with Artificial Intelligence.
Abstract: There are two common ways of coupling first-principles modelling and machine learning. In one case, data are transferred from the machine-learning algorithm to the first-principles model; in the other, from the first-principles model to the machine-learning algorithm. In both cases, the coupling is in series: the two components remain distinct, and data generated by one model are subsequently fed into the other. Several modelling problems, however, require in-parallel coupling, where the first-principles model and the machine-learning algorithm work together at the same time rather than one after the other. This study introduces deep multiphysics, a computational framework that couples first-principles modelling and machine learning in parallel rather than in series. Deep multiphysics works with particle-based first-principles modelling techniques. It is shown that the mathematical algorithms behind several particle methods and artificial neural networks are similar to the point that they can be unified under the notion of particle-neuron duality. This study explains in detail the particle-neuron duality and how deep multiphysics works both theoretically and in practice. A case study, the design of a microfluidic device for separating cell populations with different levels of stiffness, is discussed to achieve this aim.
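The claimed similarity is, at bottom, that a smoothed-particle estimate and a neuron's pre-activation are both weighted sums over neighbours. The toy pairing below illustrates that shared form under standard SPH assumptions; it is not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)

# Particle side: SPH-style kernel interpolation of a field A at a point xi:
#   A(xi) ~ sum_j V_j * A_j * W(|xi - x_j|, h)
x = rng.uniform(0, 1, 50)          # particle positions
A = np.sin(2 * np.pi * x)          # field value carried by each particle
V = np.full(50, 1 / 50)            # particle volumes (m_j / rho_j)
h = 0.1                            # smoothing length

def W(r, h):
    """Gaussian smoothing kernel in 1-D, normalised to unit integral."""
    return np.exp(-(r / h) ** 2) / (h * np.sqrt(np.pi))

xi = 0.3
A_sph = np.sum(V * A * W(np.abs(xi - x), h))

# Neuron side: a pre-activation has the same form, a weighted sum of inputs:
#   n = sum_j w_j * a_j, where the weights play the role of V_j * W(...)
w = V * W(np.abs(xi - x), h)       # one row of a weight matrix
n = w @ A                          # the same number, computed as a neuron

print(A_sph, n)                    # identical weighted sum, two vocabularies
```
-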
Neural Network Architectures and Activation Functions: a Gaussian Process Approach
Faculty of Informatics (Fakultät für Informatik), Technische Universität München
Neural Network Architectures and Activation Functions: A Gaussian Process Approach. Sebastian Urban, 2017.
Complete reprint of the dissertation approved by the Faculty of Informatics of the Technische Universität München to obtain the academic degree of Doktor der Naturwissenschaften (Dr. rer. nat.). Chair: Prof. Dr. rer. nat. Stephan Günnemann. Examiners of the dissertation: 1. Prof. Dr. Patrick van der Smagt; 2. Prof. Dr. rer. nat. Daniel Cremers; 3. Prof. Dr. Bernd Bischl, Ludwig-Maximilians-Universität München. The dissertation was submitted to the Technische Universität München on 22.11.2017 and accepted by the Faculty of Informatics on 14.05.2018.
Abstract: The success of applying neural networks crucially depends on the network architecture being appropriate for the task. Determining the right architecture is a computationally intensive process, requiring many trials with different candidate architectures. We show that the neural activation function, if allowed to individually change for each neuron, can implicitly control many aspects of the network architecture, such as the effective number of layers, the effective number of neurons in a layer, skip connections and whether a neuron is additive or multiplicative. Motivated by this observation we propose stochastic, non-parametric activation functions that are fully learnable and individual to each neuron. Complexity and the risk of overfitting are controlled by placing a Gaussian process prior over these functions. The result is the Gaussian process neuron, a probabilistic unit that can be used as the basic building block for probabilistic graphical models that resemble the structure of neural networks.
-
Training Deep Networks Without Learning Rates Through Coin Betting
Training Deep Networks without Learning Rates Through Coin Betting
Francesco Orabona*, Department of Computer Science, Stony Brook University, Stony Brook, NY, [email protected]
Tatiana Tommasi*, Department of Computer, Control, and Management Engineering, Sapienza University of Rome, Italy, [email protected]
Abstract: Deep learning methods achieve state-of-the-art performance in many application scenarios. Yet, these methods require a significant amount of hyperparameter tuning in order to achieve the best results. In particular, tuning the learning rates in the stochastic optimization process is still one of the main bottlenecks. In this paper, we propose a new stochastic gradient descent procedure for deep networks that does not require any learning rate setting. Contrary to previous methods, we do not adapt the learning rates, nor do we make use of the assumed curvature of the objective function. Instead, we reduce the optimization process to a game of betting on a coin and propose a learning-rate-free optimal algorithm for this scenario. Theoretical convergence is proven for convex and quasi-convex functions, and empirical evidence shows the advantage of our algorithm over popular stochastic gradient algorithms.
1 Introduction In the last years deep learning has demonstrated great success in a large number of fields and has attracted the attention of various research communities, with the consequent development of multiple coding frameworks (e.g., Caffe [Jia et al., 2014], TensorFlow [Abadi et al., 2015]) and the diffusion of blogs, online tutorials, books, and dedicated courses. Besides reaching out to scientists with different backgrounds, the need for all these supportive tools originates also from the nature of deep learning: it is a methodology that involves many structural details as well as several hyperparameters whose importance has been growing with the recent trend of designing deeper and multi-branch networks.
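A one-dimensional sketch of the betting reduction: a Krichevsky-Trofimov-style bettor whose coin outcomes are clipped negative gradients, applied to a toy convex function. This is an assumed simplification for intuition; the paper's actual algorithm, its guarantees, and its deep-network form are more involved.

```python
import numpy as np

def grad(w):
    return 2.0 * (w - 3.0)             # gradient of f(w) = (w - 3)^2, minimum at 3

wealth = 1.0                           # initial endowment; no learning rate anywhere
coin_sum, bet = 0.0, 0.0
iterates = []

for t in range(1, 2001):
    w = bet                            # the current bet IS the current iterate
    iterates.append(w)
    c = np.clip(-grad(w), -1.0, 1.0)   # "coin outcome": clipped negative gradient
    wealth += c * bet                  # win or lose money on the bet
    coin_sum += c
    bet = coin_sum / (t + 1) * wealth  # KT betting fraction times current wealth

print(np.mean(iterates))               # averaged iterate approaches the minimum at 3
```

The averaged iterate is the quantity with convergence guarantees in this style of analysis; individual bets can oscillate as wealth is won and lost.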