I am a Ph.D. student in the Computer Systems Laboratory at Cornell University, advised by Prof. Zhiru Zhang. I received my B.E. degree from the School of Computer Science and Engineering at Sun Yat-sen University in 2021.

My research interests broadly lie in domain-specific languages and compilers, efficient runtime systems, and accelerator architectures. In particular, I work on bridging the productivity and performance gap between emerging machine learning applications and heterogeneous hardware (CPUs/GPUs/FPGAs).

Currently, I am working on compiler optimizations for (1) large-scale model training in distributed environments and (2) scalable hardware accelerator design for inference workloads. Feel free to drop me an email if our interests align.

Education

Cornell University, US
Ph.D. in Computer Science
Cumulative GPA: 4.0/4.0
Aug. 2021 - Present
Sun Yat-sen University, China
B.E. in Computer Science
Aug. 2017 - Jun. 2021
Thesis: High-Performance Concurrent Graph Processing System (Outstanding Undergraduate Thesis)
Overall GPA: 3.95/4.00 (Major GPA: 3.99/4.00)
Rank: 1/188

Work Experience

Amazon Web Services (AWS), Santa Clara, CA, US
Applied Scientist Intern, Deep Engine Science Team
Mentors: Cody Hao Yu, Shuai Zheng, and Yida Wang
Aug. 2022 - Apr. 2023
ByteDance AI Lab, Beijing, China
Research Intern, MLSys Team, Applied Machine Learning (AML)
Mentors: Jun He and Yibo Zhu
Aug. 2020 - May 2021

Publications

Decoupled Model Schedule for Deep Learning Training
Hongzheng Chen, Cody Hao Yu, Shuai Zheng, Zhen Zhang, Zhiru Zhang, Yida Wang
arXiv:2302.08005, 2023

BGL: GPU-Efficient GNN Training by Optimizing Graph Data I/O and Preprocessing
Tianfeng Liu, Yangrui Chen, Dan Li, Chuan Wu, Yibo Zhu, Jun He, Yanghua Peng, Hongzheng Chen, Hongzhi Chen, Chuanxiong Guo
NSDI, 2023

Accelerator Design with Decoupled Hardware Customizations: Benefits and Challenges
Debjit Pal, Yi-Hsiang Lai, Shaojie Xiang, Niansong Zhang, Hongzheng Chen, Jeremy Casas, Pasquale Cocchini, Zhenkun Yang, Jin Yang, Louis-Noël Pouchet, Zhiru Zhang
DAC, 2022 (Invited Paper)

HeteroFlow: An Accelerator Programming Model with Decoupled Data Placement for Software-Defined FPGAs
Shaojie Xiang, Yi-Hsiang Lai, Yuan Zhou, Hongzheng Chen, Niansong Zhang, Debjit Pal, Zhiru Zhang
FPGA, 2022

Structured Pruning is All You Need for Pruning CNNs at Initialization
Yaohui Cai, Weizhe Hua, Hongzheng Chen, G. Edward Suh, Christopher De Sa, Zhiru Zhang
arXiv:2203.02549, 2022

Krill: A Compiler and Runtime System for Concurrent Graph Processing
Hongzheng Chen, Minghua Shen, Nong Xiao, Yutong Lu
SC, 2021

FracBNN: Accurate and FPGA-Efficient Binary Neural Networks with Fractional Activations
Yichi Zhang, Junhao Pan, Xinheng Liu, Hongzheng Chen, Deming Chen, Zhiru Zhang
FPGA, 2021 (Best Paper Nominee)

Entropy-Directed Scheduling for FPGA High-Level Synthesis
Minghua Shen, Hongzheng Chen (corresponding author), Nong Xiao
IEEE Transactions on CAD, 2020

A Deep-Reinforcement-Learning-Based Scheduler for FPGA HLS
Hongzheng Chen, Minghua Shen
ICCAD, 2019

Teaching

Professional Service

Awards & Honors

  • USENIX NSDI’23 Student Grant, USENIX, 2023
  • FPGA’21 Best Paper Nominee, FPGA, 2021
  • Outstanding Undergraduate Thesis Award, Sun Yat-sen University, 2021
  • SenseTime Scholarship (21 undergraduates nationwide), SenseTime, 2020
  • CCF Elite Collegiate Award (98 undergraduates nationwide), China Computer Federation (CCF), 2020
  • Chinese National Scholarship × 2 (Top 1%), Ministry of Education of PRC, 2018-2020
  • First-Prize Scholarship × 3 (Top 5%), Sun Yat-sen University, 2017-2020
  • Samsung Scholarship (Top 1%), Samsung Electronics, 2017-2018

Talks