Dong, S., Sun, Y., Bohm Agostini, N., Karimi, E., Lowell, D., Zhou, J., Cano, J., Abellán, J. L. and Kaeli, D. (2021) Spartan: a sparsity-adaptive framework to accelerate deep neural network training on GPUs. IEEE Transactions on Parallel and Distributed Systems, 32(10), pp. 2448-2463. (doi: 10.1109/TPDS.2021.3067825)

http://eprints.gla.ac.uk/240696/

Spartan: A Sparsity-Adaptive Framework to Accelerate Deep Neural Network Training on GPUs

Shi Dong, Member, IEEE, Yifan Sun, Member, IEEE, Nicolas Bohm Agostini, Student Member, IEEE, Elmira Karimi, Student Member, IEEE, Daniel Lowell, Jing Zhou, Member, IEEE, José Cano, Senior Member, IEEE, José L. Abellán, Member, IEEE, and David Kaeli, Fellow, IEEE

Abstract—Deep Neural Networks (DNNs) have emerged as an important class of machine learning algorithms, providing accurate solutions to a broad range of applications. The sparsity of the activation maps produced during DNN training presents an opportunity to reduce the amount of computation. However, exploiting activation sparsity poses two major challenges: i) profiling activation sparsity during training incurs significant overhead, both to compute the degree of sparsity and to move the data; ii) because activation maps change dynamically, dense-to-sparse conversion must be performed repeatedly during training, adding further overhead.
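To make the two overheads named in the abstract concrete, the following minimal sketch (Python with NumPy/SciPy; illustrative code, not taken from the paper) measures the degree of sparsity of a ReLU activation map and then performs a dense-to-sparse conversion to CSR format. All variable names and array shapes here are hypothetical.

import numpy as np
from scipy.sparse import csr_matrix

def activation_sparsity(act: np.ndarray) -> float:
    # Fraction of zero entries in the activation map.
    return 1.0 - np.count_nonzero(act) / act.size

# Hypothetical activation map: ReLU zeroes out negative pre-activations,
# which is the main source of activation sparsity during training.
rng = np.random.default_rng(0)
pre_act = rng.standard_normal((64, 128))  # (batch, features); shape is illustrative
act = np.maximum(pre_act, 0.0)            # ReLU

# Challenge (i): computing the degree of sparsity at every training
# iteration is itself extra computation and data movement.
print(f"sparsity = {activation_sparsity(act):.2f}")  # roughly 0.50 for N(0,1) inputs

# Challenge (ii): the dense-to-sparse conversion must be redone whenever
# the activations change, i.e., on every iteration.
act_csr = csr_matrix(act)
print(f"stored nonzeros: {act_csr.nnz} of {act.size}")

This is why a sparsity-adaptive approach is attractive: the conversion only pays off when the measured sparsity is high enough to offset its cost, so the decision must be made dynamically at runtime.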