Publications of Torsten Hoefler
Y. Oyama, Tal Ben-Nun, Torsten Hoefler, Satoshi Matsuoka:
Accelerating Deep Learning Frameworks with Micro-batches
(In IEEE International Conference on Cluster Computing, CLUSTER 2018, Belfast, UK, September 10-13, 2018, IEEE, ISBN: 978-1-5386-8319-4, Sep. 2018)

Abstract

cuDNN is a low-level library that provides GPU kernels frequently used in deep learning. Specifically, cuDNN implements several equivalent convolution algorithms whose performance and memory footprint may vary considerably depending on the layer dimensions. When cuDNN selects an algorithm automatically, the decision is made on a per-layer basis, and it therefore often falls back to slower algorithms that fit the workspace size constraints. We present µ-cuDNN, a thin wrapper library for cuDNN that transparently divides a layer's mini-batch computation into multiple micro-batches, both on a single GPU and on a heterogeneous set of GPUs. Based on dynamic programming and integer linear programming (ILP), µ-cuDNN enables faster algorithms by decreasing the workspace requirements. At the same time, µ-cuDNN does not decrease the accuracy of the results, effectively decoupling statistical efficiency from hardware efficiency. We demonstrate the effectiveness of µ-cuDNN for the Caffe and TensorFlow frameworks, achieving speedups of 1.63x for AlexNet and 1.21x for ResNet-18 on the P100-SXM2 GPU. We also show that µ-cuDNN achieves speedups of up to 4.54x (1.60x on average) for DeepBench's convolutional layers on the V100-SXM2 GPU. In a distributed setting, µ-cuDNN attains a 2.20x speedup over a single GPU when training ResNet-18 on a heterogeneous GPU cluster. These results indicate that micro-batches can seamlessly increase the performance of deep learning while maintaining the same overall memory footprint.
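
The per-layer, workspace-limited algorithm choice described above is exposed directly by cuDNN's pre-v8 query API. The sketch below is not µ-cuDNN itself: the layer dimensions and the workspace budget are placeholders, and it only shows how a caller asks for the fastest forward-convolution algorithm that fits a given workspace limit. Shrinking the batch dimension (a micro-batch instead of the full mini-batch) shrinks the workspace each algorithm needs, so faster algorithms such as FFT- or Winograd-based convolutions are more likely to qualify under the same limit.

// Sketch (cuDNN v7-style API): pick the fastest forward-convolution algorithm
// that fits a per-layer workspace budget. Dimensions and budget are placeholders.
#include <cudnn.h>
#include <cstdio>

int main() {
  cudnnHandle_t handle;
  cudnnCreate(&handle);

  cudnnTensorDescriptor_t xDesc, yDesc;
  cudnnFilterDescriptor_t wDesc;
  cudnnConvolutionDescriptor_t convDesc;
  cudnnCreateTensorDescriptor(&xDesc);
  cudnnCreateTensorDescriptor(&yDesc);
  cudnnCreateFilterDescriptor(&wDesc);
  cudnnCreateConvolutionDescriptor(&convDesc);

  // Hypothetical layer, processed as a micro-batch of 32 instead of a full
  // mini-batch of 256: 64 -> 128 channels, 3x3 filter, 56x56 activations.
  const int microBatch = 32;
  cudnnSetTensor4dDescriptor(xDesc, CUDNN_TENSOR_NCHW, CUDNN_DATA_FLOAT,
                             microBatch, 64, 56, 56);
  cudnnSetFilter4dDescriptor(wDesc, CUDNN_DATA_FLOAT, CUDNN_TENSOR_NCHW,
                             128, 64, 3, 3);
  cudnnSetConvolution2dDescriptor(convDesc, 1, 1, 1, 1, 1, 1,
                                  CUDNN_CROSS_CORRELATION, CUDNN_DATA_FLOAT);
  cudnnSetTensor4dDescriptor(yDesc, CUDNN_TENSOR_NCHW, CUDNN_DATA_FLOAT,
                             microBatch, 128, 56, 56);

  // Per-layer workspace budget (placeholder): ask for the fastest algorithm
  // whose workspace fits under it, then query how much it actually needs.
  const size_t workspaceLimit = 64ull << 20;  // 64 MiB
  cudnnConvolutionFwdAlgo_t algo;
  cudnnGetConvolutionForwardAlgorithm(handle, xDesc, wDesc, convDesc, yDesc,
                                      CUDNN_CONVOLUTION_FWD_SPECIFY_WORKSPACE_LIMIT,
                                      workspaceLimit, &algo);
  size_t workspaceBytes = 0;
  cudnnGetConvolutionForwardWorkspaceSize(handle, xDesc, wDesc, convDesc, yDesc,
                                          algo, &workspaceBytes);
  printf("selected algorithm %d, workspace %zu bytes\n", (int)algo, workspaceBytes);

  cudnnDestroyConvolutionDescriptor(convDesc);
  cudnnDestroyFilterDescriptor(wDesc);
  cudnnDestroyTensorDescriptor(yDesc);
  cudnnDestroyTensorDescriptor(xDesc);
  cudnnDestroy(handle);
  return 0;
}

How the micro-batch sizes are chosen is the optimizer's job; the abstract mentions dynamic programming and ILP. Below is a minimal sketch of one plausible batch-splitting recurrence, assuming a hypothetical timeFor(m) that returns the benchmarked execution time of a micro-batch of size m (such measurements could, for instance, come from cudnnFindConvolutionForwardAlgorithmEx); the actual µ-cuDNN optimizer is more involved and, per the abstract, also uses ILP, e.g. for dividing workspace across layers.

// Sketch of a dynamic-programming recurrence for splitting a mini-batch of
// size B into micro-batches that minimize total measured time:
//   dp[b] = min over 1 <= m <= b of dp[b - m] + timeFor(m).
#include <vector>
#include <limits>

double timeFor(int m);  // hypothetical: benchmarked time for micro-batch size m

std::vector<int> bestSplit(int B) {
  std::vector<double> dp(B + 1, std::numeric_limits<double>::infinity());
  std::vector<int> choice(B + 1, 0);  // last micro-batch size used to reach total b
  dp[0] = 0.0;
  for (int b = 1; b <= B; ++b) {
    for (int m = 1; m <= b; ++m) {
      double t = dp[b - m] + timeFor(m);
      if (t < dp[b]) { dp[b] = t; choice[b] = m; }
    }
  }
  std::vector<int> split;  // recover the chosen micro-batch sizes
  for (int b = B; b > 0; b -= choice[b]) split.push_back(choice[b]);
  return split;
}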

BibTeX

@inproceedings{ucudnn,
  author={Y. Oyama and Tal Ben-Nun and Torsten Hoefler and Satoshi Matsuoka},
  title={{Accelerating Deep Learning Frameworks with Micro-batches}},
  year={2018},
  month={Sep.},
  booktitle={{IEEE} International Conference on Cluster Computing, {CLUSTER} 2018, Belfast, UK, September 10-13, 2018},
  location={Belfast, UK},
  publisher={IEEE},
  isbn={978-1-5386-8319-4},
  source={http://www.unixer.de/~htor/publications/},
}

