Training DNN with Keras
Recommended publications
-
Intro to TensorFlow 2.0 (MBL, August 2019)
Intro to TensorFlow 2.0, MBL, August 2019. Josh Gordon (@random_forests)

Agenda 1 of 2:
• Exercises: Fashion MNIST with dense layers; CIFAR-10 with convolutional layers
• Concepts (as many as we can intro in this short time): gradient descent, dense layers, loss, softmax, convolution
• Games: QuickDraw

Agenda 2 of 2:
• Walkthroughs and new tutorials: Deep Dream and Style Transfer; time series forecasting
• Games: Sketch RNN
• Learning more: book recommendations

Deep learning is representation learning. Latest tutorials and guides: tensorflow.org/beta. News and updates: medium.com/tensorflow, twitter.com/tensorflow.

Demos: PoseNet and BodyPix (bit.ly/pose-net, bit.ly/body-pix). TensorFlow for JavaScript, Swift, Android, and iOS: tensorflow.org/js, tensorflow.org/swift, tensorflow.org/lite.

Minimal MNIST in TF 2.0: a linear model, a neural network, and a deep neural network, then a short exercise (bit.ly/mnist-seq). The deep model with a softmax output:

    model = Sequential()
    model.add(Dense(256, activation='relu', input_shape=(784,)))
    model.add(Dense(128, activation='relu'))
    model.add(Dense(10, activation='softmax'))

After training, select all the weights connected to a single output and visualize them:

    weights = model.layers[0].get_weights()
    # Your code here
    # Select the weights for a single output
    # ...
    img = weights.reshape(28, 28)
    plt.imshow(img, cmap=plt.get_cmap('seismic'))

Exercise 1 (option #1): bit.ly/mnist-seq; reference: tensorflow.org/beta/tutorials/keras/basic_classification. TODO: add a validation set, and add code to plot loss vs. epochs (a runnable sketch follows below). Exercise 1 (option #2): bit.ly/ijcav_adv. Answers: next slide.
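The slide code above is fragmentary by design. As a point of comparison, here is a minimal runnable version of the same model that also completes the exercise's TODO (a validation set plus a loss-vs-epochs plot), assuming TF 2.x and the Fashion MNIST loader from tf.keras.datasets:

```python
# A runnable completion of the exercise TODO, assuming TF 2.x.
# Uses Fashion MNIST (the deck's exercise dataset), holds out a
# validation split, and plots training vs. validation loss per epoch.
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential

(x_train, y_train), _ = tf.keras.datasets.fashion_mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0

model = Sequential([
    Dense(256, activation="relu", input_shape=(784,)),
    Dense(128, activation="relu"),
    Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# validation_split reserves the last 10% of the training data.
history = model.fit(x_train, y_train, epochs=5, validation_split=0.1)

plt.plot(history.history["loss"], label="train loss")
plt.plot(history.history["val_loss"], label="val loss")
plt.xlabel("epoch")
plt.legend()
plt.show()
```
-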
Recurrent Neural Networks
Sequence Modeling: Recurrent and Recursive Nets. Lecture slides for Chapter 10 of Deep Learning (www.deeplearningbook.org), Ian Goodfellow, 2016-09-27. Adapted by m.n. for CMPS 392.

RNN:
• RNNs are a family of neural networks for processing sequential data.
• Recurrent networks can scale to much longer sequences than would be practical for networks without sequence-based specialization.
• Most recurrent networks can also process sequences of variable length.
• They are based on parameter sharing: if we had separate parameters for each value of the time index, we could not generalize to sequence lengths not seen during training, nor share statistical strength across different sequence lengths and across different positions in time. (A minimal sketch of this sharing follows below.)

Example:
• Consider the two sentences "I went to Nepal in 2009" and "In 2009, I went to Nepal." How can a machine learning model extract the year information?
• A traditional fully connected feedforward network would have separate parameters for each input feature, so it would need to learn all of the rules of the language separately at each position in the sentence.
• By comparison, a recurrent neural network shares the same weights across several time steps.

RNN vs. 1D convolutional:
• The output of convolution is a sequence where each member of the output is a function of a small number of neighboring members of the input.
• Recurrent networks share parameters in a different way: each member of the output is a function of the previous members of the output, and each member is produced using the same update rule applied to the previous outputs.

(Goodfellow 2016)
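To make the parameter-sharing point concrete, here is a minimal NumPy sketch (my illustration, not from the slides) in which the same weight matrices are applied at every time step, so one set of parameters handles sequences of any length:

```python
# Parameter sharing in an RNN, in plain NumPy (illustrative, not from
# the slides): the same W_xh, W_hh, and b are applied at every step,
# so the cell accepts sequences of any length T.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_h = 4, 8
W_xh = 0.1 * rng.normal(size=(d_in, d_h))  # shared input-to-hidden weights
W_hh = 0.1 * rng.normal(size=(d_h, d_h))   # shared hidden-to-hidden weights
b = np.zeros(d_h)

def rnn_forward(x_seq):
    """x_seq has shape (T, d_in); the same update rule runs at each step."""
    h = np.zeros(d_h)
    for x_t in x_seq:
        h = np.tanh(x_t @ W_xh + h @ W_hh + b)
    return h

print(rnn_forward(rng.normal(size=(6, d_in))).shape)   # works for T=6
print(rnn_forward(rng.normal(size=(11, d_in))).shape)  # and for T=11
```
-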
Ways to Use Machine Learning Approaches for Software Development
Ways to use Machine Learning approaches for software development. Nicklas Jonsson, VT 2018. Degree project (examensarbete), 30 hp. Supervisor: Eddie Wadbro. External supervisor: C4 Contexture. Examiner: Henrik Björklund. Master of Science Programme in Computing Science and Engineering, 300 hp.

Abstract: With machine learning, and in particular deep learning, entering all different types of fields, including software development, it can be hard to know where to begin looking for tools when someone wants to apply machine learning to their own problems. This thesis surveys some of the technologies available today for applying machine learning to one's applications. It examines some of the available cloud services, frameworks, and libraries for machine learning, and presents three different implementation structures that can be used with these technologies for the problem of image classification.

Acknowledgements: I want to thank C4 Contexture for giving me the thesis idea, support, and a work station. I also want to thank Lantmännen for supplying the image data used in this thesis. Finally, I want to thank Eddie Wadbro for his guidance during this thesis, and of course a big thanks to my family and friends for their support during this period of my life.

Contents: 1 Introduction (1.1 The Client; 1.1.1 C4 Contexture PIM software; 1.2 The data; 1.3 Goal; 1.4 Limitation); 2 Background (2.1 Artificial intelligence, machine learning and deep learning ...)
-
Biased Edge Dropout for Enhancing Fairness in Graph Representation Learning
Biased Edge Dropout for Enhancing Fairness in Graph Representation Learning. Indro Spinelli, Graduate Student Member, IEEE; Simone Scardapane; Amir Hussain, Senior Member, IEEE; and Aurelio Uncini, Member, IEEE.

Abstract—Graph representation learning has become a ubiquitous component in many scenarios, ranging from social network analysis to energy forecasting in smart grids. In several applications, ensuring the fairness of the node (or graph) representations with respect to some protected attributes is crucial for their correct deployment. Yet, fairness in graph deep learning remains under-explored, with few solutions available. In particular, the tendency of similar nodes to cluster on several real-world graphs (i.e., homophily) can dramatically worsen the fairness of these procedures. In this paper, we propose a biased edge dropout algorithm (FairDrop) to counter-act homophily and improve fairness in graph representation learning. FairDrop can be plugged in easily on many existing algorithms, is efficient, adaptable, and can be combined with other fairness-inducing solutions. After describing the general algorithm, we demonstrate its application on two benchmark tasks, specifically, as a random walk model for producing node embeddings, and to a graph ...

I. INTRODUCTION
Graph structured data, ranging from friendships on social networks to physical links in energy grids, powers many algorithms governing our digital life. Social networks topologies define the stream of information we will receive, often influencing our opinion [1][2][3][4]. Bad actors, sometimes, define these topologies ad-hoc to spread false information [5]. Similarly, recommender systems [6] suggest products tailored to our own experiences and history of purchases. However, pursuing the highest accuracy as the only metric of interest has let many of these algorithms discriminate against minorities in the past [7][8][9], despite the law prohibiting unfair treatment based on sensitive traits such as race, religion, and gender.
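For intuition, here is a simplified sketch of the biased-edge-dropout idea (my illustration, not the paper's exact FairDrop procedure): edges joining nodes that share the sensitive attribute are dropped with higher probability, counter-acting homophily:

```python
# Simplified biased edge dropout (illustrative; FairDrop's actual rule
# biases the dropout via each edge's sensitive-attribute homophily):
# same-attribute edges are dropped with higher probability.
import numpy as np

def biased_edge_dropout(edges, sensitive, p_same=0.5, p_diff=0.1, seed=0):
    """edges: (E, 2) int array; sensitive: per-node attribute array."""
    rng = np.random.default_rng(seed)
    same = sensitive[edges[:, 0]] == sensitive[edges[:, 1]]
    drop_prob = np.where(same, p_same, p_diff)
    keep = rng.random(len(edges)) >= drop_prob
    return edges[keep]

edges = np.array([[0, 1], [1, 2], [2, 3], [3, 0]])
sensitive = np.array([0, 0, 1, 1])  # a binary protected attribute
print(biased_edge_dropout(edges, sensitive))
```
-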
Keras2c: a Library for Converting Keras Neural Networks to Real-Time Compatible C
Keras2c: A library for converting Keras neural networks to real-time compatible C. Rory Conlin (a,*), Keith Erickson (b), Joseph Abbate (c), Egemen Kolemen (a,b,*). (a) Department of Mechanical and Aerospace Engineering, Princeton University, Princeton NJ 08544, USA. (b) Princeton Plasma Physics Laboratory, Princeton NJ 08544, USA. (c) Department of Astrophysical Sciences at Princeton University, Princeton NJ 08544, USA.

Abstract: With the growth of machine learning models and neural networks in measurement and control systems comes the need to deploy these models in a way that is compatible with existing systems. Existing options for deploying neural networks either introduce very high latency, require expensive and time consuming work to integrate into existing code bases, or only support a very limited subset of model types. We have therefore developed a new method called Keras2c, which is a simple library for converting Keras/TensorFlow neural network models into real-time compatible C code. It supports a wide range of Keras layers and model types including multidimensional convolutions, recurrent layers, multi-input/output models, and shared layers. Keras2c re-implements the core components of Keras/TensorFlow required for predictive forward passes through neural networks in pure C, relying only on standard library functions considered safe for real-time use. The core functionality consists of ∼1500 lines of code, making it lightweight and easy to integrate into existing codebases. Keras2c has been successfully tested in experiments and is currently in use on the plasma control system at the DIII-D National Fusion Facility at General Atomics in San Diego.

1. Motivation. TensorFlow [1] is one of the most popular libraries for developing and training neural networks ...
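To give a feel for what "pure C with only standard library functions" implies, here is an illustrative dependency-free dense-layer forward pass written in plain Python (my sketch of the general pattern a Keras-to-C converter emits, not Keras2c's actual generated code):

```python
# Illustration only (not Keras2c's generated code): the kind of
# dependency-free forward pass a Keras-to-C converter emits, written
# here in plain Python. Only loops and math.tanh are used, mirroring
# the "standard library functions only" constraint for real-time use.
import math

def dense_forward(x, weights, bias, activation="relu"):
    """Compute act(x @ W + b) for a single dense layer, no libraries."""
    n_out = len(bias)
    y = [0.0] * n_out
    for j in range(n_out):
        acc = bias[j]
        for i, x_i in enumerate(x):
            acc += x_i * weights[i][j]
        if activation == "relu":
            y[j] = acc if acc > 0.0 else 0.0
        elif activation == "tanh":
            y[j] = math.tanh(acc)
        else:  # linear
            y[j] = acc
    return y

print(dense_forward([1.0, 2.0], [[0.5, -0.5], [0.25, 0.75]], [0.0, 0.1]))
```
-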
Combining Collective Classification and Link Prediction
Combining Collective Classification and Link Prediction. Mustafa Bilgic, Galileo Mark Namata, Lise Getoor. Dept. of Computer Science, Univ. of Maryland, College Park, MD 20742. {mbilgic, namatag, getoor}@cs.umd.edu

Abstract. The problems of object classification (labeling the nodes of a graph) and link prediction (predicting the links in a graph) have been largely studied independently. Commonly, object classification is performed assuming a complete set of known links and link prediction is done assuming a fully observed set of node attributes. In most real world domains, however, attributes and links are often missing or incorrect. Object classification is not provided with all the links relevant to correct classification and link prediction is not provided all the labels needed for accurate link prediction. In this paper, we propose an approach that addresses these two problems by interleaving object classification and link prediction in a collective algorithm.

... however, this is rarely the case. Real world collections usually have a number of missing and incorrect labels and links. Other approaches construct a complex joint probabilistic model, capable of handling missing values, but in which inference is often intractable. In this paper, we take the middle road. We propose a simple yet general approach for combining object classification and link prediction. We propose an algorithm called Iterative Collective Classification and Link Prediction (ICCLP) that integrates collective object classification and link prediction by effectively passing up-to-date information between the algorithms. We experimentally show on many different network types that applying ICCLP improves performance over running collective classification and link prediction ...
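The alternation the abstract describes can be rendered schematically as follows (a toy, self-contained sketch with stand-in models, not the paper's actual components): classification and link prediction take turns, each consuming the other's latest output:

```python
# A toy rendering of the ICCLP-style alternation (stand-in models):
# classification by neighbor majority vote, link prediction by a
# same-label common-neighbor rule.
import numpy as np

def classify(adj, labels, known):
    """Relabel nodes outside `known` by majority vote over neighbors."""
    out = labels.copy()
    for v in range(len(labels)):
        if v not in known and adj[v].sum() > 0:
            out[v] = np.bincount(labels[adj[v] > 0]).argmax()
    return out

def predict_links(adj, labels):
    """Add edges between same-label nodes sharing a common neighbor."""
    common = (adj @ adj) * (labels[:, None] == labels[None, :])
    np.fill_diagonal(common, 0)
    return np.maximum(adj, (common >= 1).astype(adj.dtype))

adj = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]])
labels = np.array([0, 0, 1, 1])
known = {0, 3}  # only these labels are observed
for _ in range(3):  # pass up-to-date information back and forth
    labels = classify(adj, labels, known)
    adj = predict_links(adj, labels)
print(adj, labels, sep="\n")
```
-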
Introduction to GANs. Ian Goodfellow, Staff Research Scientist, Google Brain. CVPR Tutorial on GANs
Introduction to GANs. Ian Goodfellow, Staff Research Scientist, Google Brain. CVPR Tutorial on GANs, Salt Lake City, 2018-06-22. (The title slide is backed by a cloud of GAN-variant names: MedGAN, ID-CGAN, CoGAN, LR-GAN, CGAN, IcGAN, b-GAN, LS-GAN, LAPGAN, DiscoGAN, MPM-GAN, AdaGAN, AMGAN, iGAN, InfoGAN, CatGAN, IAN, LSGAN, SAGAN, McGAN, MIX+GAN, MGAN, BS-GAN, FF-GAN, GoGAN, C-VAE-GAN, C-RNN-GAN, DR-GAN, DCGAN, MAGAN, 3D-GAN, CCGAN, AC-GAN, BiGAN, GAWWN, DualGAN, CycleGAN, Bayesian GAN, GP-GAN, AnoGAN, EBGAN, DTN, Context-RNN-GAN, MAD-GAN, ALI, f-GAN, BEGAN, AL-CGAN, MARTA-GAN, ArtGAN, MalGAN.)

Generative modeling as density estimation: training data in, a density function out. Generative modeling as sample generation: training data in, a sample generator out (CelebA; Karras et al., 2017).

Adversarial nets framework: D tries to make D(x) near 1 for x sampled from the data and D(G(z)) near 0 for x sampled from the model, while G tries to make D(G(z)) near 1. Both D and G are differentiable functions, and G takes input noise z. (Goodfellow et al., 2014.) A minimal training-step sketch follows below.

Self-play: 1959, Arthur Samuel's checkers agent; (OpenAI, 2017); (Silver et al., 2017); (Bansal et al., 2017).

3.5 years of progress on faces: samples from 2014, 2015, 2016, and 2017 (Brundage et al., 2018). General framework for AI & security threats. Under 2 years of progress on ImageNet: Odena et al. 2016, Miyato et al. 2017, Zhang et al. 2018 (class-conditional samples; the slides show the classes monarch butterfly, goldfinch, daisy, redshank, and grey whale). Figure 7: 128x128 pixel images generated by SN-GANs trained on the ILSVRC2012 dataset. (Goodfellow 2018)
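Here is a minimal sketch of that objective as a TF 2.x training step (my illustration with placeholder architectures, not the tutorial's code): the discriminator is pushed toward D(x) = 1 and D(G(z)) = 0 while the generator is pushed toward D(G(z)) = 1:

```python
# Minimal GAN training step, assuming TF 2.x (placeholder architectures).
import tensorflow as tf
from tensorflow.keras import layers

latent_dim = 32
G = tf.keras.Sequential([
    layers.Dense(64, activation="relu", input_shape=(latent_dim,)),
    layers.Dense(784, activation="sigmoid"),
])
D = tf.keras.Sequential([
    layers.Dense(64, activation="relu", input_shape=(784,)),
    layers.Dense(1, activation="sigmoid"),
])
bce = tf.keras.losses.BinaryCrossentropy()
g_opt = tf.keras.optimizers.Adam(1e-4)
d_opt = tf.keras.optimizers.Adam(1e-4)

@tf.function
def train_step(real_x):
    z = tf.random.normal((tf.shape(real_x)[0], latent_dim))
    with tf.GradientTape() as d_tape, tf.GradientTape() as g_tape:
        fake_x = G(z, training=True)
        real_out = D(real_x, training=True)
        fake_out = D(fake_x, training=True)
        d_loss = (bce(tf.ones_like(real_out), real_out)      # D(x) -> 1
                  + bce(tf.zeros_like(fake_out), fake_out))  # D(G(z)) -> 0
        g_loss = bce(tf.ones_like(fake_out), fake_out)       # D(G(z)) -> 1
    d_opt.apply_gradients(zip(d_tape.gradient(d_loss, D.trainable_variables),
                              D.trainable_variables))
    g_opt.apply_gradients(zip(g_tape.gradient(g_loss, G.trainable_variables),
                              G.trainable_variables))
    return d_loss, g_loss
```
-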
Finding Experts by Link Prediction in Co-Authorship Networks
Finding Experts by Link Prediction in Co-authorship Networks. Milen Pavlov (1,2), Ryutaro Ichise (2). (1) University of Waterloo, Waterloo ON N2L 3G1, Canada. (2) National Institute of Informatics, Tokyo 101-8430, Japan.

Abstract. Research collaborations are always encouraged, as they often yield good results. However, the researcher network contains massive amounts of experts in various disciplines and it is difficult for the individual researcher to decide which experts will match his own expertise best. As a result, collaboration outcomes are often uncertain and research teams are poorly organized. We propose a method for building link predictors in networks, where nodes can represent researchers and links, collaborations. In this case, predictors might offer good suggestions for future collaborations. We test our method on a researcher co-authorship network and obtain link predictors of encouraging accuracy. This leads us to believe our method could be useful in building and maintaining strong research teams. It could also help with choosing vocabulary for expert description, since link predictors contain implicit information about which structural attributes of the network are important with respect to the link prediction problem.

1 Introduction. Collaborations between researchers often have a synergistic effect. The combined expertise of a group of researchers can often yield results far surpassing the sum of the individual researchers' capabilities. However, creating and organizing such research teams is not a straightforward task. The individual researcher often has limited awareness of the existence of other researchers with which collaborations might prove fruitful. Furthermore, even in the presence of such awareness, it is difficult to predict in advance which potential collaborations should be pursued ...
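The "structural attributes" idea can be sketched as follows (an illustrative recipe with a stand-in graph and toy labels, not the paper's feature set or data): describe each candidate pair by graph features such as common-neighbor count and Jaccard coefficient, then train a classifier on observed outcomes:

```python
# An illustrative link-prediction recipe over a stand-in graph.
import networkx as nx
from sklearn.linear_model import LogisticRegression

G = nx.karate_club_graph()  # stand-in for a co-authorship network

def structural_features(graph, pairs):
    jaccard = {(u, v): p for u, v, p in nx.jaccard_coefficient(graph, pairs)}
    return [[len(list(nx.common_neighbors(graph, u, v))), jaccard[(u, v)]]
            for u, v in pairs]

train_pairs = [(0, 9), (5, 16), (24, 25), (1, 33)]
y = [1, 0, 1, 0]  # toy labels: did the pair later collaborate?
clf = LogisticRegression().fit(structural_features(G, train_pairs), y)

# Rank an unseen candidate collaboration by predicted link probability.
print(clf.predict_proba(structural_features(G, [(6, 7)]))[0, 1])
```

A side effect the abstract hints at: the fitted coefficients indicate which structural attributes matter most for predicting links.
-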
Learning to Make Predictions on Graphs with Autoencoders
Learning to Make Predictions on Graphs with Autoencoders. Phi Vu Tran, Strategic Innovation Group, Booz Allen Hamilton, San Diego, CA USA. [email protected]

Abstract—We examine two fundamental tasks associated with graph representation learning: link prediction and semi-supervised node classification. We present a novel autoencoder architecture capable of learning a joint representation of both local graph structure and available node features for the multi-task learning of link prediction and node classification. Our autoencoder architecture is efficiently trained end-to-end in a single learning stage to simultaneously perform link prediction and node classification, whereas previous related methods require multiple training steps that are difficult to optimize. We provide a comprehensive empirical evaluation of our models on nine benchmark graph-structured datasets and demonstrate significant improvement over related methods for graph representation learning. Reference code and data are available at https://github.com/vuptran/graph-representation-learning.

Index Terms—network embedding, link prediction, semi-supervised learning, multi-task learning

... networks [38], communication networks [11], cybersecurity [6], recommender systems [16], and knowledge bases such as DBpedia and Wikidata [35]. There are a number of technical challenges associated with learning to make meaningful predictions on complex graphs:
• Extreme class imbalance: in link prediction, the number of known present (positive) edges is often significantly less than the number of known absent (negative) edges, making it difficult to reliably learn from rare examples;
• Learn from complex graph structures: edges may be directed or undirected, weighted or unweighted, highly sparse in occurrence, and/or consisting of multiple types. A useful model should be versatile to address a variety of graph types, including bipartite graphs;
• Incorporate side information: nodes (and maybe edges) ...
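The joint architecture the abstract describes can be sketched in Keras as follows (a schematic with placeholder sizes and plain dense layers, not the paper's actual architecture): one shared encoder feeds both a link-reconstruction decoder and a node-classification head, trained end-to-end in a single stage:

```python
# A schematic multi-task autoencoder: shared encoder, two heads.
import tensorflow as tf
from tensorflow.keras import Model, layers

n_nodes, n_feats, n_classes, d = 100, 32, 7, 16
adj_row = layers.Input(shape=(n_nodes,), name="adjacency_row")
feats = layers.Input(shape=(n_feats,), name="node_features")

h = layers.Dense(64, activation="relu")(layers.concatenate([adj_row, feats]))
z = layers.Dense(d, activation="relu", name="embedding")(h)  # shared code

links = layers.Dense(n_nodes, activation="sigmoid", name="links")(z)
labels = layers.Dense(n_classes, activation="softmax", name="labels")(z)

model = Model([adj_row, feats], [links, labels])
model.compile(optimizer="adam",
              loss={"links": "binary_crossentropy",
                    "labels": "sparse_categorical_crossentropy"},
              loss_weights={"links": 1.0, "labels": 1.0})
model.summary()
```

Because both losses backpropagate into the same embedding, one training stage serves both tasks, which is the design point the abstract contrasts against multi-step pipelines.
-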
Predicting Disease Related microRNA Based on Similarity and Topology
Cells (journal) article: Predicting Disease Related microRNA Based on Similarity and Topology. Zhihua Chen (1,†), Xinke Wang (2,†), Peng Gao (2,†), Hongju Liu (3,†) and Bosheng Song (4,*). (1) Institute of Computing Science and Technology, Guangzhou University, Guangzhou 510006, China; [email protected] (2) School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan 430074, China; [email protected] (X.W.); [email protected] (P.G.) (3) College of Information Technology and Computer Science, University of the Cordilleras, Baguio 2600, Philippines; [email protected] (4) School of Information Science and Engineering, Hunan University, Changsha 410082, China. * Correspondence: [email protected]; Tel.: +86-731-88821907. † All authors contributed equally to this work. Received: 4 July 2019; Accepted: 5 November 2019; Published: 7 November 2019.

Abstract: It is known that many diseases are caused by mutations or abnormalities in microRNA (miRNA). The usual method to predict miRNA-disease relationships is to build a high-quality similarity network of diseases and miRNAs. All unobserved associations are ranked by their similarity scores, such that a higher score indicates a greater probability of a potential connection. However, this approach does not utilize information within the network. Therefore, in this study, we propose a machine learning method, called STIM, which uses network topology information to predict disease–miRNA associations. In contrast to the conventional approach, STIM constructs features according to information on similarity and topology in networks and then uses a machine learning model to predict potential associations. To verify the reliability and accuracy of our method, we compared STIM to other classical algorithms ...
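As the abstract describes it, STIM's recipe is to featurize each disease-miRNA pair with both similarity and topology information and feed a learned model. A toy sketch of that pattern (illustrative feature names and data, not the paper's):

```python
# A toy sketch of the similarity-plus-topology pattern (illustrative).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def pair_features(dis_sim, mir_sim, dis_degree, mir_degree, common_nbrs):
    """Similarity scores plus network-topology statistics for one pair."""
    return [dis_sim, mir_sim, dis_degree, mir_degree, common_nbrs]

X = np.array([pair_features(0.8, 0.6, 12, 30, 4),   # a known association
              pair_features(0.1, 0.2, 3, 5, 0)])    # a known non-association
y = np.array([1, 0])

clf = RandomForestClassifier(random_state=0).fit(X, y)
print(clf.predict_proba([pair_features(0.7, 0.5, 10, 20, 3)])[0, 1])
```
-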
TensorFlow, Theano, Keras, Torch, Caffe
TensorFlow, Theano, Keras, Torch, Caffe. Vicky Kalogeiton, Stéphane Lathuilière, Pauline Luc, Thomas Lucas, Konstantin Shmelkov.

Introduction:
• TensorFlow: Google Brain, 2015 (rewritten DistBelief)
• Theano: University of Montréal, 2009
• Keras: François Chollet, 2015 (now at Google)
• Torch: Facebook AI Research, Twitter, Google DeepMind
• Caffe: Berkeley Vision and Learning Center (BVLC), 2013

Outline: 1. Introduction of each framework (a. TensorFlow, b. Theano, c. Keras, d. Torch, e. Caffe). 2. Further comparison (a. code + models, b. community and documentation, c. performance, d. model deployment, e. extra features). 3. Which framework to choose when?

TensorFlow architecture: 1) a low-level core (C++/CUDA); 2) a simple Python API to define the computational graph; 3) high-level APIs (TF-Learn, TF-Slim, soon Keras...).

TensorFlow computational graph: auto-differentiation; easy multi-GPU/multi-node; native C++ multithreading; device-efficient implementations for most ops; the whole pipeline lives in the graph (data loading, preprocessing, prefetching...). A small autodiff example follows below. TensorBoard.

TensorFlow development:
+ bleeding edge (GitHub yay!)
+ division into core and contrib => very quick merging of new hotness
+ a lot of new related APIs: CRF, BayesFlow, SparseTensor, audio IO, CTC, seq2seq
+ so it can easily handle images, videos, audio, text...
+ if you really need a new native op, you can load a dynamic lib
- sometimes contrib stuff disappears or moves
- recently introduced bells and whistles are barely documented

Presentation of Theano:
• Maintained by the Montréal University group.
• Pioneered the use of a computational graph.
• General machine learning tool -> use of Lasagne and Keras.
• Very popular in the research community, but not elsewhere. Falling behind.

What is it like to start using Theano? Read tutorials until you no longer can, then keep going ...
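The computational-graph-with-autodiff idea that both decks highlight fits in a few lines (written against TF 2.x for brevity; the decks predate it and used explicit TF 1.x sessions, but the principle is the same):

```python
# Computational graph + auto-differentiation in TF 2.x (illustrative).
import tensorflow as tf

x = tf.Variable(3.0)
with tf.GradientTape() as tape:
    y = x ** 2 + 2.0 * x       # ops recorded as a graph of nodes
dy_dx = tape.gradient(y, x)    # autodiff walks the recorded graph
print(dy_dx.numpy())           # 2*x + 2 = 8.0
```
-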
Representation Learning on Graphs
Representation learning on graphs. A/Prof Truyen Tran, [email protected], @truyenoz, Deakin University. truyentran.github.io | letdataspeak.blogspot.com | goo.gl/3jJ1O0. HCMC, May 2019.

Topics: graph representation; graph reasoning; why bother?; graph generation; embedding; graph dynamics; message passing.

Why learning of graph representation? Graphs are pervasive in many scientific disciplines. The sub-area of graph representation has reached a certain maturity, with multiple reviews, workshops and papers at top AI/ML venues. Deep learning needs to move beyond vector, fixed-size data. Learning representation is a powerful way to discover hidden patterns, making learning, inference and planning easier.

System medicine: https://www.frontiersin.org/articles/10.3389/fphys.2015.00225/full

Biology & pharmacy. Traditional techniques: graph kernels (ML) and molecular fingerprints (chemistry). Modern techniques: the molecule as a graph, with atoms as nodes and chemical bonds as edges. #REF: Penmatsa, Aravind, Kevin H. Wang, and Eric Gouaux. "X-ray structure of dopamine transporter elucidates antidepressant mechanism." Nature 503.7474 (2013): 85-90.

Chemistry (DFT = Density Functional Theory): molecular properties, chemical-chemical interaction, chemical reaction, synthesis planning. Gilmer, Justin, et al. "Neural message passing for quantum chemistry." arXiv preprint arXiv:1704.01212 (2017).

Materials science: crystal properties, exploring/generating solid structures, inverse design. Xie, Tian, and Jeffrey ...
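Since message passing is the thread running through these applications, here is a minimal one-round sketch in NumPy (a generic GCN-style propagation of my own, not from the slides): each node aggregates its neighbors' states and applies a shared update:

```python
# One round of message passing, in NumPy (illustrative GCN-style step).
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)  # a small undirected graph
H = np.eye(4)                              # initial node states (one-hot)
W = 0.1 * np.random.default_rng(0).normal(size=(4, 4))  # shared weights

A_hat = A + np.eye(4)                      # add self-loops
D_inv = np.diag(1.0 / A_hat.sum(axis=1))   # row-normalize by degree
H_next = np.tanh(D_inv @ A_hat @ H @ W)    # aggregate neighbors, update
print(H_next)
```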