About me
I’m a second-year Ph.D. student in the Efficient Computing Lab in the Department of Computer Science & Engineering at POSTECH, advised by Prof. Eunhyeok Park. Before joining POSTECH, I completed my B.S. in the Department of Computer Science & Engineering at Kyung Hee University.
I’m currently focusing on Efficient AI, particularly enhancing Memory Efficiency and Computation Acceleration during the training and inference of various models (Vision, LLM, Video Generation, etc.) via Quantization and Low-rank Approximation.
Research Keywords
- Fast and Memory Efficient Training (A01, C03)
- Fast and Memory Efficient Inference (C01, C02)
- Parameter Efficient Fine-tuning of LLMs (U01, P03)
- CUDA Kernel Optimization (P01, P02, P03, C01, C03)
- Fast Sampling of Video Generation Diffusion Models
News
- [Nov. 03, 2025] I’ve joined AMD as a Research Associate.
- [Oct. 21, 2025] I’ve been selected as a Winner of the Qualcomm Innovation Fellowship Korea.
- [Oct. 01, 2025] I’ve been selected for the BK21 Outstanding Graduate Student International Training Scholarship by the Ministry of Science and ICT, South Korea.
- [Mar. 02, 2025] 1 paper has been accepted to ICML 2025.
- [Feb. 25, 2025] 1 paper has been accepted to CVPR 2025.
- [Oct. 28, 2024] 1 paper has been accepted to WACV 2025.
Publications
- [C03] HOT: Hadamard-based Optimized Training
Seonggon Kim, Juncheol Shin, Seung-taek Woo, Eunhyeok Park
Computer Vision and Pattern Recognition (CVPR 2025), Nashville.
- [C02] Merge-Friendly Post-Training Quantization for Multi-Target Domain Adaptation
Juncheol Shin, Minsang Seok, Seonggon Kim, Eunhyeok Park
International Conference on Machine Learning (ICML 2025), Vancouver.
- [C01] PTQ4VM: Post-training Quantization for Visual Mamba
Younghyun Cho*, Changhun Lee*, Seonggon Kim, Eunhyeok Park
Winter Conference on Applications of Computer Vision (WACV 2025 Oral), Tucson.
- [U01] HoLA: Overcoming the full-finetuning with Hadamard-oriented LoRA
Seonggon Kim, Taehyeon Kim, Byeori Kim, Eunhyeok Park
Neural Information Processing Systems (NeurIPS 2025, Under review), San Diego.
- [A01] HLQ: Fast and Efficient Backpropagation via Hadamard Low-rank Quantization
Seonggon Kim, Eunhyeok Park
arXiv 2406.
Projects
- [P03] Fast and Memory-Efficient Training in Extreme Environments, Jul. 2024 - Current
  National AI Research Lab of Korea
  - Conducted research on memory-efficient training for vision models.
  - Developed a prototype of an optimized CUDA kernel for memory-efficient training.
- [P02] GEMV Accelerator for LLM Inference on Intel Gaudi-2, Jun. 2024 - Jun. 2025
  Naver & Intel Joint Research Center
  - Conducted research on fast LLM inference on the Intel Gaudi-2 architecture.
  - Implemented a custom GEMV kernel for Gaudi in the TPC-C language.
  - Ported LUT quantization from CUDA to the Gaudi TPC.
- [P01] Solutions for Self-Supervised Training on Edge Devices, Jun. 2023 - Current
  Ministry of Science and ICT of Korea
  - Conducted research on fast fine-tuning on edge devices.
  - Designed an efficient fine-tuning algorithm with stochastic quantization.
  - Implemented a custom CUDA kernel for fast fine-tuning.
Experience
- Research Associate, Nov. 2025 - Current
  AMD, Longmont, CO, USA
- Software Engineer Intern, Jul. 2022 - Feb. 2023
  Spirent Communications, San Jose, CA, USA
- Software Engineer Intern, Feb. 2022 - Jun. 2022
  Common Computer, Seoul, Korea
- Research Intern, Mar. 2021 - Dec. 2021
  SI Analytics, Daejeon, Korea
Awards & Honors
- Qualcomm Innovation Fellowship Korea, Winner, Oct. 2025
- BK21 Outstanding Graduate Student International Training Scholarship, Selected Graduate Student, Oct. 2025
- ETHDenver 2022 Blockchain Hackathon NFT Project, 3rd Prize
  SporkDAO, Feb. 2022
- CVPR 2021 EarthVision Workshop, Land Cover Classification Challenge, 5th Prize
  CVPR, Jun. 2021
Education
- M.S./Ph.D. in Computer Science and Engineering, POSTECH
  Sep. 2023 - Present
- B.S. in Computer Science and Engineering, Kyung Hee University
  Feb. 2017 - Aug. 2023
Teaching Experience
- Teaching Assistant, Mar. 2025 - Jun. 2025
CSED311: Computer Architecture, POSTECH
