Mariana: Tencent Deep Learning Platform and Its Applications
Yongqiang Zou, Xing Jin, Yi Li, Zhimao Guo, Eryu Wang, Bin Xiao
Tencent Inc.

{aaronzou, xingjin, sincereli, zhimaoguo, eryuwang, leoxiao}@tencent.com

ABSTRACT

Deep learning has gained a great deal of attention in recent years and is increasingly important for mining value in big data. However, to make deep learning practical for a wide range of applications in Tencent Inc., three requirements must be considered: 1) a great deal of computational power is required to train a practical model with tens of millions of parameters and billions of samples for products such as automatic speech recognition (ASR), and the number of parameters and the amount of training data are still growing; 2) the capability of training larger models is necessary for better model quality; 3) easy-to-use frameworks are valuable for running the many experiments needed for model selection, such as finding an appropriate optimization algorithm and tuning optimal hyper-parameters. To accelerate training, support large models, and make experiments easier, we built Mariana, the Tencent deep learning platform, which utilizes GPU and CPU clusters to train models in parallel with three frameworks: 1) a multi-GPU data parallelism framework for deep neural networks (DNNs); 2) a multi-GPU model parallelism and data parallelism framework for deep convolutional neural networks (CNNs); 3) a CPU cluster framework for large scale DNNs. Mariana also provides built-in algorithms and features to facilitate experiments. Mariana has been in production use for more than one year, achieves state-of-the-art acceleration performance, and plays a key role in training models and improving quality for automatic speech recognition and image recognition in Tencent WeChat, a mobile social platform, and for Ad click-through rate prediction (pCTR) in Tencent QQ, an instant messaging platform, and Tencent Qzone, a social networking service.

This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-nd/3.0/. Obtain permission prior to any use beyond those covered by the license. Contact copyright holder by emailing [email protected]. Articles from this volume were invited to present their results at the 40th International Conference on Very Large Data Bases, September 1st - 5th 2014, Hangzhou, China. Proceedings of the VLDB Endowment, Vol. 7, No. 13. Copyright 2014 VLDB Endowment 2150-8097/14/08.

1. INTRODUCTION

1.1 Background

Tencent provides a wide range of Internet services, such as WeChat, a mobile social platform with 396 million monthly active users (MAU); QQ, an instant messaging platform with 848 million MAU; and Qzone, a social networking service with 644 million MAU in the first quarter of 2014. Tencent holds over 100 PB of data from various applications, plus ever more user generated content (UGC) such as photos, audio, and videos.

Deep learning has been a hot topic in mining value from big data in recent years [1], and has made breakthroughs in several fields, such as automatic speech recognition (ASR) [2] and image recognition [3]. Deep learning has potential benefits for various applications in Tencent, such as speech recognition and image recognition in WeChat, and advertising in QQ and Qzone, to improve application quality, build exciting applications, and increase revenue.

To make deep learning practical in Tencent and realize these benefits, three requirements must be considered.

- A great deal of computational power and efficient parallelism is needed to train models fast. For example, the acoustic model of automatic speech recognition for Chinese and English in Tencent WeChat adopts a deep neural network with more than 50 million parameters, more than 15 thousand senones (tied triphone states, each represented by one output node in the output layer of the DNN), and tens of billions of samples, so it would take years to train this model on a single CPU server, or months on a single off-the-shelf GPU.
- Support for training large models is necessary to improve model quality, since it has been validated that both more filter maps and more layers in CNN networks contribute to better classification accuracy.
- Easy-to-use frameworks are valuable for running the various experiments needed for model selection, such as choosing model architectures, finding optimization methods, and tuning optimal hyper-parameters for a high performance model, especially for applications with massive data.

To meet the above requirements, parallel frameworks are essential to make model training faster, bigger, and easier.

Data parallelism and model parallelism were introduced in Google DistBelief [4] for CPU clusters, and are also used in the Google COTS system [5] for GPU servers as well as in Facebook's multi-GPU training [6]. Data parallelism launches multiple model replicas for different mini-batches, and collects gradients to update the model parameters. Model parallelism involves multiple workers, each holding part of the model and swapping activations and errors to complete a mini-batch. Besides improving performance, model parallelism reduces memory consumption for each worker, thus making it possible to build larger models.

Besides company proprietary frameworks, some deep learning frameworks have emerged in the open source community using CPUs or GPUs. However, available open source frameworks do not meet the requirements in Tencent. These frameworks usually train models using a single multi-threaded CPU server or a single GPU, and therefore lack the efficient parallelism or large-model support needed for the tremendous training data and large scale models in Tencent. As for the ease-of-use requirement, these frameworks usually need substantial effort to code, deploy, and configure before new experiments can be started.

1.2 Platform Overview

To support emerging deep learning applications in Tencent, we built Mariana, the Tencent deep learning platform, to accelerate training, support large models, and facilitate experiments.

We found it hard to construct a one-size-fits-all framework to support a wide range of applications including speech recognition and image recognition in WeChat, and Ads pCTR in QQ and Qzone. Different applications emphasize different aspects of the requirements. So Mariana builds three parallel frameworks for different applications: 1) a multi-GPU data parallelism framework for deep neural networks (DNNs); 2) a multi-GPU model parallelism and data parallelism framework for deep convolutional neural networks (CNNs); 3) a CPU cluster framework with model parallelism and data parallelism for large scale DNNs.

Mariana makes efforts to simplify deep learning experiments and offload the burden on algorithm engineers. Mariana provides built-in algorithms, allows flexible hyper-parameter tuning, supports model checkpoint and restart at configurable periods, automatically generates test reports, and enables monitoring of training jobs.

Mariana is used by various applications such as automatic speech recognition in WeChat, image recognition in WeChat, and Ad click-through rate prediction (pCTR) in QQ and Qzone, and has played a key role for these applications in Tencent for more than one year. Mariana uses GPU servers each equipped with 4 or 6 GPUs, and also has CPU servers.

The multi-GPU data parallelism DNN framework is used to build the acoustic model of automatic speech recognition in Tencent WeChat, and gains a speedup of 4.6 times with 6 GPUs in one server compared to one GPU. The multi-GPU model parallelism and data parallelism CNN framework trains image recognition models for Tencent WeChat and for Ads pCTR in QQ and Qzone. The CNN framework on the ImageNet 2012 dataset with Krizhevsky's network [3] using 4 GPUs gains a speedup of 2.52 times over one GPU. The CPU cluster framework is also used to build models for automatic speech recognition.

This paper is organized as follows.

2. MULTI-GPU DATA PARALLELISM FRAMEWORK FOR DEEP NEURAL NETWORKS

2.1 Application Requirements

Mariana includes a multi-GPU data parallelism framework for DNNs, and its first target application is the acoustic model of automatic speech recognition.

The acoustic model of ASR tries to classify triphones from input audio signals. It uses a fully connected deep neural network with 4 to 6 hidden layers and about 50 million parameters.

ASR needs much computational power but consumes only about 1 GB of memory, so the GPU is a good option: it has much higher computational power than general-purpose microprocessors, but limited memory. To exploit multiple GPUs in one server, we first built a model parallelism framework for ASR and gained a 1.5 times speedup with two GPUs. However, model parallelism has limited scalability for ASR and could not achieve better performance when using more than two GPUs. So we pay much more attention to data parallelism in the multi-GPU DNN framework.

2.2 Architecture

To control the data parallelism training, a worker group is used for each model replica. Since the DNN framework has no model parallelism, each worker group contains one GPU.

The multi-GPU data parallelism DNN framework uses a driver program running on CPUs to control the whole synchronized stochastic gradient descent (SGD) training process. The driver program's threads read the next round of training samples as mini-batches from disk, and copy these samples from CPU memory to GPU memory for the next round of training. At the same time, the driver program waits for all GPUs to finish training their assigned mini-batches in the current round, and then coordinates the parameter swapping between all GPUs.

2.3 Parameter Swapping

The parameter swapping process is illustrated in Figure 1 and has three phases in the logical view. First, the gradient collection phase gathers gradients from all GPUs. Then, the parameter updating phase updates the model with the gathered gradients. Last, the parameter distribution phase scatters the updated model to all GPUs. In the physical view, parameter swapping copies data through PCIe or input output hub (IOH) connections between GPUs. For a multi-GPU server, GPUs are connected by PCIe switches in a tree
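The scale argument in Section 1.1 (years on one CPU server, months on one GPU) can be checked with a back-of-envelope estimate. The throughput and flops-per-sample figures below are rough assumptions for circa-2014 hardware, not numbers from the paper:

```python
# Back-of-envelope training cost for the ASR DNN described in Section 1.1.
# All throughput numbers are assumptions, chosen only to show the orders
# of magnitude involved.
params = 50e6                    # ~50 million parameters (from the text)
samples = 2e10                   # "tens of billions" of training samples
flops_per_sample = 6 * params    # rough rule: ~2 flops/param forward, ~4 backward
total_flops = flops_per_sample * samples   # one pass over the data

cpu_flops = 1e11                 # assumed ~100 GFLOP/s sustained, one CPU server
gpu_flops = 1e12                 # assumed ~1 TFLOP/s sustained, one GPU

year = 365 * 24 * 3600.0
month = 30 * 24 * 3600.0
print(f"CPU server: ~{total_flops / cpu_flops / year:.1f} years per pass")
print(f"single GPU: ~{total_flops / gpu_flops / month:.1f} months per pass")
```

Even under these generous single-pass assumptions, the estimate lands in the "years on CPU, months on GPU" regime the text describes, which is what motivates multi-GPU parallelism.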
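The data parallelism scheme described in the introduction (multiple model replicas on different mini-batches, gradients collected for one shared update) can be sketched on a toy one-parameter model. The quadratic loss, learning rate, and mini-batch contents are illustrative assumptions, not Mariana's actual configuration:

```python
# Sketch of synchronous data-parallel SGD on a toy 1-D model w.
# Each "replica" computes a gradient on its own mini-batch; the gradients
# are then averaged and applied in a single update, mirroring the data
# parallelism described in the text.

def gradient(w, batch):
    # d/dw of mean squared error over samples (x, y): 2*(w*x - y)*x
    return sum(2.0 * (w * x - y) * x for x, y in batch) / len(batch)

def train(batches_per_round, rounds, lr=0.05):
    w = 0.0                                   # shared model parameter
    for r in range(rounds):
        # each replica works on a different mini-batch (in parallel on real GPUs)
        grads = [gradient(w, b) for b in batches_per_round[r]]
        # gradient collection + synchronized parameter update
        w -= lr * sum(grads) / len(grads)
    return w

# Ground-truth relation y = 3x, split across 2 "GPU" replicas per round.
data = [[[(1.0, 3.0), (2.0, 6.0)], [(3.0, 9.0), (0.5, 1.5)]]] * 200
print(round(train(data, rounds=200), 3))      # → 3.0
```

Because every replica sees the same parameters at the start of each round and the update uses all gradients, the result matches what a single worker would compute on the combined mini-batch, only faster.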
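The driver described in Section 2.2 overlaps I/O with computation: while the GPUs train on the current round's mini-batches, driver threads prefetch the next round from disk. A minimal sketch of that control loop, where `read_round` and `train_replica` are hypothetical stand-ins for the real disk reader and GPU kernels:

```python
import threading

# Sketch of the Section 2.2 driver loop: prefetch the next round of
# mini-batches while worker threads (one per GPU) train on the current
# round, then synchronize and coordinate parameter swapping.

def read_round(r, n_gpus):
    return [f"round{r}-batch{g}" for g in range(n_gpus)]   # fake mini-batches

def train_replica(batch, log):
    log.append(f"trained {batch}")                          # stand-in for a GPU kernel

def driver(rounds, n_gpus=2):
    log = []
    current = read_round(0, n_gpus)
    for r in range(rounds):
        nxt = []
        # prefetch thread: load the next round while the GPUs are busy
        pre = threading.Thread(target=lambda: nxt.extend(read_round(r + 1, n_gpus)))
        pre.start()
        # one worker per GPU, each training its assigned mini-batch
        workers = [threading.Thread(target=train_replica, args=(b, log))
                   for b in current]
        for w in workers:
            w.start()
        for w in workers:
            w.join()        # wait for all GPUs to finish the current round
        pre.join()          # next round's data is now resident
        log.append(f"swap params after round {r}")  # coordinate parameter swapping
        current = nxt
    return log

for line in driver(rounds=2):
    print(line)
```

The key point the sketch preserves is the barrier: parameter swapping happens only after every worker has finished its mini-batch, which is what makes the SGD synchronized.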
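The three logical phases of parameter swapping from Section 2.3 can be sketched directly. Each GPU is modeled as a dict holding its parameter replica and a locally computed gradient; the learning rate and toy values are illustrative assumptions:

```python
# Sketch of the three logical phases of parameter swapping (Section 2.3):
# gradient collection, parameter updating, and parameter distribution.

def collect_gradients(gpus):
    # Phase 1: gather (here, sum) the gradients computed on every GPU.
    n = len(gpus[0]["grad"])
    return [sum(g["grad"][i] for g in gpus) for i in range(n)]

def update_parameters(params, grad, lr=0.25):
    # Phase 2: apply one SGD step with the collected gradient.
    return [p - lr * g for p, g in zip(params, grad)]

def distribute_parameters(gpus, params):
    # Phase 3: scatter the updated model back to all GPUs.
    for g in gpus:
        g["params"] = list(params)

gpus = [
    {"params": [1.0, 2.0], "grad": [0.5, -0.5]},
    {"params": [1.0, 2.0], "grad": [1.5, 0.5]},
]
grad = collect_gradients(gpus)                        # [2.0, 0.0]
new_params = update_parameters(gpus[0]["params"], grad)
distribute_parameters(gpus, new_params)
print(gpus[1]["params"])                              # → [0.5, 2.0]
```

In Mariana these phases move real buffers over PCIe/IOH links between GPUs rather than Python lists, but the logical contract is the same: after distribution, every replica holds an identical updated model.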
