Performance Analysis of Deep Learning Libraries: TensorFlow and PyTorch

Journal of Computer Science, Original Research Paper

1Felipe Florencio, 1Thiago Valença, 1Edward David Moreno and 2Methanias Colaço Junior
1Departamento de Computação, Universidade Federal de Sergipe, São Cristóvão, Brazil
2Departamento de Sistema de Informação, Universidade Federal de Sergipe, Itabaiana, Brazil

Article history: Received 01-01-2019; Revised 25-02-2019; Accepted 11-04-2019

Corresponding author: Felipe Florencio, Departamento de Computação, Universidade Federal de Sergipe, São Cristóvão, Brazil. Email: [email protected]

Abstract: With the increase in the study and use of deep learning in recent years, specific libraries for Deep Neural Networks (DNNs) have been developed. Each of these libraries has different performance results and applies different techniques to optimize the implementation of algorithms. Therefore, even when the same algorithm is implemented, executions with different libraries may vary considerably in performance. For this reason, developers and scientists who work with deep learning need scientific experimental studies that examine the performance of those libraries. This paper therefore evaluates and compares two such libraries, TensorFlow and PyTorch, using three parameters: hardware utilization, hardware temperature and execution time, in the context of heterogeneous platforms with CPU and GPU. We used the MNIST database for training and testing the LeNet Convolutional Neural Network (CNN). We performed a scientific experiment following the Goal Question Metric (GQM) methodology, and the data were validated through statistical tests. After data analysis, we show that the PyTorch library presented better performance, even though the TensorFlow library presented a higher GPU utilization rate.
Keywords: TensorFlow, PyTorch, Comparison, Performance Evaluation, Benchmarking, Deep Learning Library

© 2019 Felipe Florencio, Thiago Valença, Edward David Moreno and Methanias Colaço Junior. This open access article is distributed under a Creative Commons Attribution (CC-BY) 3.0 license. Journal of Computer Science 2019, 15 (6): 785.799. DOI: 10.3844/jcssp.2019.785.799

Introduction

Deep learning is an artificial intelligence field that allows computers to learn from experience and understand the world in terms of a hierarchy of concepts, with each concept defined by its relationship to simpler concepts. If we draw a graph showing how these concepts are built on top of each other, the graph is deep, with many layers; for this reason, it is called deep learning (Goodfellow et al., 2016).

The idea of deep learning is to train an Artificial Neural Network (ANN) of multiple layers on a data set in order to allow it to deal with real-world tasks. Although the theoretical concepts behind it are not new, deep learning has become a trend in the last decade due to many factors, including its successful application to a variety of problems (many of them potentially commercial), the development of new computer architectures with a higher level of parallelism, the design of Convolutional Neural Networks (CNNs) and greater accessibility to high-performance computers (Shatnawi et al., 2018; Verhelst and Moons, 2017; Raschka, 2015).

There are many deep learning libraries, such as TensorFlow, Theano, CNTK, Caffe, Torch, Neon and PyTorch. Each of these libraries has different performance characteristics and applies different techniques to optimize the implementation of algorithms. Therefore, even when the same algorithm is implemented in different libraries, the performance of the different implementations may vary considerably (Bahrampour et al., 2015; Shatnawi et al., 2018). Since a variety of open-source libraries is available, developers and scientists who work with deep learning need scientific experimental studies that point out which library is the most suitable for a given application.

For that reason, the present work evaluates and compares the TensorFlow and PyTorch libraries, focusing on hardware utilization, hardware temperature and execution time in the context of heterogeneous platforms with CPU and GPU. We used the Modified National Institute of Standards and Technology (MNIST) database for training and testing the LeNet CNN.

The novelty of this article is the performance evaluation of the PyTorch library, the use of GPU and CPU utilization rates as evaluation metrics and the use of statistical tests to validate the data obtained during the experiment. As a result, the PyTorch library presented superior performance compared with the TensorFlow library; through data analysis, it was verified that during execution PyTorch shows a lower GPU utilization rate. It is possible to conclude that the communication bottleneck between CPU and GPU is a relevant factor in TensorFlow's inferior performance, since TensorFlow makes heavier use of GPU resources.

For a better understanding of how the results were obtained, this paper is divided into eight sections. Section 2 presents the method used. Section 3 presents the conceptual bases, describes the neural network used and introduces the examined libraries. Section 4 presents related works. Section 5 shows the definition and planning of the experiment for comparing the frameworks. Section 6 covers the execution phases of the experiment, including how the data were collected. Section 7 presents the analysis and interpretation of the obtained results, along with the threats to validity of this work. Finally, Section 8 presents the conclusion and possible future work.

Method

This paper consists of an experimental study evaluating the performance of two deep learning libraries (TensorFlow and PyTorch) on a heterogeneous computational system with CPU and GPU. For the experimental research, we used the LeNet CNN as the benchmark, training and inferring on the MNIST dataset.

In a prior stage of this research, we selected some related works that benchmark deep learning libraries. The analysis of the related works demonstrated the absence of performance analyses of the PyTorch library; it also showed that these works do not take into consideration the utilization rate of hardware components and that, in general, the authors do not use statistical tests to validate the data extracted from the experiments. We selected the PyTorch library for being little explored, and the TensorFlow library for being popular and serving as a performance reference.

We used six metrics: (i) execution time of the inference algorithm, (ii) execution time of the training algorithm, (iii) GPU utilization rate, (iv) CPU utilization rate, (v) GPU temperature and (vi) CPU temperature. The execution time is used to verify which library presents the best performance, and the utilization rates are used to investigate possible causes of that performance. The CNN selected for evaluating the performance of the libraries is LeNet and the selected dataset is MNIST; the reason is the availability of preexisting code for both libraries.

After the materials selection, an in silico experiment was performed on a heterogeneous computational system. The experiment consisted of preparing the execution environment, adapting the code extracted from each library's official repository, implementing scripts, executing the code and extracting data from each execution. The experiment's organization followed the Goal Question Metric (GQM) method, as indicated by Basili et al. (1994).

After data extraction, Kolmogorov-Smirnov (KS) and Wilcoxon statistical tests were performed to validate the data. After validation, the data could be analyzed and evaluated.

Conceptual Bases

This section presents some concepts that are necessary for understanding this work.

Convolutional Neural Networks

Convolutional Neural Networks (CNNs) are a kind of biologically inspired feed-forward neural network, developed to imitate the behavior of the animal visual cortex (da Costa e Silva Franco, 2016). The basic concept of CNNs goes back to 1979, when Fukushima (1979) proposed an artificial neural network including simple and complex cells that were very similar to the convolutional and pooling layers of modern CNNs.

CNNs are composed of many non-linear data-processing layers, where the output of each lower layer feeds the input of the layer immediately above it (Deng, 2014). They use convolution in place of general matrix multiplication in at least one of their layers (Goodfellow et al., 2016). The layers of a CNN may be of three kinds: convolutional, pooling and dense layers.
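The statistical validation described in the Method section (Kolmogorov-Smirnov and Wilcoxon tests on the measured execution times) can be sketched with SciPy. The numbers below are synthetic stand-ins for the measured per-run execution times, not the paper's data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical per-run training times (seconds) for two libraries,
# standing in for the values measured in the experiment.
times_a = 50.0 + 2.0 * rng.standard_normal(30)
times_b = 55.0 + 2.0 * rng.standard_normal(30)

# Kolmogorov-Smirnov test: compare a standardized sample against the
# standard normal distribution to check the normality assumption.
z_a = (times_a - times_a.mean()) / times_a.std(ddof=1)
ks_stat, ks_p = stats.kstest(z_a, "norm")

# Wilcoxon signed-rank test: non-parametric paired comparison of the
# two libraries' execution times.
w_stat, w_p = stats.wilcoxon(times_a, times_b)

print(f"KS p-value: {ks_p:.3f}, Wilcoxon p-value: {w_p:.2e}")
```

With a difference as large as the one simulated here, the Wilcoxon test rejects the null hypothesis of equal medians; in the paper, tests of this kind are what justify claiming that an observed performance gap between the libraries is statistically significant rather than noise.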
Convolutional Layer

The convolutional layer consists of a set of feature maps that are generated from
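Although the text is cut off here, the mechanism it begins to describe, feature maps generated by sliding kernels over the input, can be illustrated with a minimal NumPy sketch; the image and kernel values are hypothetical:

```python
import numpy as np

def conv2d(image, kernel):
    """'Valid' 2-D cross-correlation (the convolution variant deep
    learning libraries actually compute): slide the kernel over the
    image and take an elementwise product-sum at each position.
    The output is one feature map."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    out = np.empty((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(25, dtype=float).reshape(5, 5)   # toy 5x5 input
kernel = np.array([[1.0, 0.0, -1.0]] * 3)          # vertical-edge kernel
feature_map = conv2d(image, kernel)
print(feature_map.shape)  # (3, 3)
```

A convolutional layer in a library such as PyTorch (e.g. torch.nn.Conv2d) applies many such kernels in parallel, producing one feature map per kernel, with the kernel weights learned during training rather than fixed as above.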
