Refereed International Conference Publications
TeMCO: Tensor Memory Compiler Optimization across Tensor Decompositions in Deep Learning Inference [abstract] (ACM, PDF) Seungbin Song, Ju Min Lee, Haeeun Jeong, Hyunho Kwon, Shinnung Jeong, Jaeho Lee, and Hanjun Kim Proceedings of the 53rd International Conference on Parallel Processing (ICPP), August 2024.
Refereed International Journal Publications
Peak-Memory-aware Partitioning and Scheduling for Multi-tenant DNN Model Inference [abstract] (ScienceDirect) Jaeho Lee, Ju Min Lee, Haeeun Jeong, Hyunho Kwon, Youngsok Kim, Yongjun Park, and Hanjun Kim To Appear: Journal of Systems Architecture, Volume 173, Number 103696, April 2026.
IF=4.1, Q1 (JCR 2025)
Refereed International Conference Poster
Approximation-based Inter-PE Communication-free Image Filtering for Commodity PIM [abstract] Chan Lee, Shinnung Jeong, Heelim Choi, Jaeho Lee, Haeeun Jeong, Hoyun Youm, Ju Min Lee, and Hanjun Kim Proceedings of the 62nd Design Automation Conference (DAC), Poster, June 2025.
Patents
Method of Filtering Image [abstract] Hanjun Kim, Chan Lee, Heelim Choi, Haeeun Jeong, Shinnung Jeong, Jumin Lee, Jaeho Lee, and Hoyun Youm KR Patent App. 10-2025-0181300, November 2025.
Electronic Device for Performing Inference by Partitioning Neural Network Model in Multi-tenant Environment and Operating Method thereof [abstract] Hanjun Kim, Jaeho Lee, Hyunho Kwon, Jumin Lee, and Haeeun Jeong KR Patent App. 10-2025-0130051, September 2025.
Device and Method with Control of Neural Network Bit Precision [abstract] Hanjun Kim, Haeeun Jeong, Hyunho Kwon, Jaeho Lee, Jumin Lee, and Heelim Choi KR Patent App. 10-2025-0016828, February 2025.
Compile Method and Apparatus for Neural Network [abstract] Hanjun Kim, Seungbin Song, Hyunho Kwon, Jumin Lee, Haeeun Jeong, and Jaeho Lee KR Patent App. 10-2024-0088419, July 2024.