Hanjun Kim, Professor, School of Electrical and Electronic Engineering, Yonsei University. Ph.D. 2013, Department of Computer Science, Princeton University. Office: Engineering Hall #3-C415. Phone: +82-2-2123-2770. Email: first_name at yonsei.ac.kr
Refereed International Conference Publications

TeMCO: Tensor Memory Compiler Optimization across Tensor Decompositions in Deep Learning Inference [abstract] (ACM, PDF)
As deep learning models grow increasingly complex, tensor decomposition is a promising approach to reducing their computational cost. By decomposing a convolution layer with a large weight tensor into multiple layers with smaller weight tensors, tensor decomposition reduces both the number of operations and the weight memory footprint. However, existing tensor decomposition schemes have difficulty reducing the peak memory usage of the entire inference. Although the decomposed layers produce reduced-size tensors during inference, these tensors must be restored to their original sizes because of skip connections and non-decomposed activation layers between the decomposed layers. To reduce the peak memory usage of end-to-end inference of decomposed models, this work proposes a new tensor memory optimization scheme and its prototype compiler, called TeMCO. TeMCO replaces the original internal tensors used in skip connections with the reduced internal tensors produced by the decomposed layers. In addition, TeMCO fuses the decomposed layers with the non-decomposed activation layers, keeping the reduced internal tensors without restoring them. With these optimizations, TeMCO reduces the memory usage of internal tensors by 75.7% across 10 models of 5 deep learning architectures.
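As a rough illustration of the decomposition idea described in the abstract, the following PyTorch sketch factors a single convolution layer into two smaller layers via a truncated SVD. This is a minimal sketch of the general technique, not TeMCO's implementation; the function name decompose_conv, the rank choice, and all layer sizes are illustrative assumptions.

```python
# Minimal sketch: low-rank decomposition of one Conv2d (illustrative only;
# TeMCO's actual decomposition and fusion passes are not shown here).
import torch
import torch.nn as nn

def decompose_conv(conv: nn.Conv2d, rank: int) -> nn.Sequential:
    """Factor a Conv2d into two smaller convs via truncated SVD.

    The weight W of shape (C_out, C_in, kh, kw) is flattened to
    (C_out, C_in*kh*kw) and approximated as (U_r @ diag(S_r)) @ V_r,
    yielding a kh x kw conv with `rank` output channels followed by a
    1x1 conv that maps the rank channels back to C_out.
    """
    c_out, c_in, kh, kw = conv.weight.shape
    w = conv.weight.detach().reshape(c_out, -1)        # (C_out, C_in*kh*kw)
    u, s, vh = torch.linalg.svd(w, full_matrices=False)
    u, s, vh = u[:, :rank], s[:rank], vh[:rank, :]     # truncate to rank r

    # First layer: spatial conv producing only `rank` channels.
    first = nn.Conv2d(c_in, rank, (kh, kw), stride=conv.stride,
                      padding=conv.padding, bias=False)
    first.weight.data = vh.reshape(rank, c_in, kh, kw)

    # Second layer: 1x1 conv restoring the original channel count.
    second = nn.Conv2d(rank, c_out, 1, bias=conv.bias is not None)
    second.weight.data = (u * s).reshape(c_out, rank, 1, 1)
    if conv.bias is not None:
        second.bias.data = conv.bias.detach().clone()
    return nn.Sequential(first, second)

# Usage: the decomposed pair approximates the original conv, but its
# internal tensor has only `rank` channels instead of C_out, which is
# the smaller tensor the abstract describes.
conv = nn.Conv2d(64, 64, 3, padding=1)
decomposed = decompose_conv(conv, rank=16)
x = torch.randn(1, 64, 32, 32)
print(torch.dist(conv(x), decomposed(x)))  # small approximation error
```

Note the tension the abstract points to: the internal tensor between the two convs has only 16 channels, but a skip connection or non-decomposed activation layer that expects the original 64-channel tensor would force it to be restored to full size, which is what TeMCO's tensor replacement and layer fusion avoid.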