I am a Ph.D. student at the Computer Systems Laboratory, Cornell University, supervised by Prof. Zhiru Zhang. I received my B.E. degree from the School of Computer Science and Engineering, Sun Yat-sen University in 2021.

My research interests lie broadly in domain-specific languages and compilers, efficient runtime systems, and accelerator architectures. In particular, I aim to bridge the productivity and performance gap between emerging machine learning applications and heterogeneous hardware (CPUs/GPUs/FPGAs).

Currently, I am working on compiler optimizations for (1) large-scale model training in distributed environments and (2) scalable hardware accelerator design for inference workloads. Feel free to drop me an email if our interests align.

Education

Cornell University, US
Ph.D. in Computer Science
Cumulative GPA: 4.0/4.0
Aug. 2021 - Present
Sun Yat-sen University, China
B.E. in Computer Science
Aug. 2017 - Jun. 2021
Thesis: High-Performance Concurrent Graph Processing System
(Outstanding Undergraduate Thesis)
Overall GPA: 3.95/4.00 (Major GPA: 3.99/4.00)
Ranking: 1/188

Work Experience

NVIDIA, Redmond, WA, US
Research Intern, Deep Learning Compiler Team
Mentor: Vinod Grover
May 2024 - Aug. 2024
Amazon Web Services (AWS), Santa Clara, CA, US
Applied Scientist Intern, Deep Engine Science Team
Mentors: Cody Hao Yu, Shuai Zheng, and Yida Wang
Aug. 2022 - Apr. 2023
ByteDance/TikTok AI Lab, Beijing, China
Research Intern, MLSys Team, Applied Machine Learning (AML)
Mentors: Jun He and Yibo Zhu
Aug. 2020 - May 2021

News

  • [02/27/24] Our Allo paper has been conditionally accepted to PLDI’24! The code will be open-sourced.
  • [02/21/24] I will be attending FPGA’24 from Mar 2 to Mar 6 in Monterey, CA. Feel free to reach out if you want to chat!
  • [01/15/24] Joined PLDI’24 Artifact Evaluation Committee.
  • [12/27/23] Received an internship offer from NVIDIA! I’ll join the NVIDIA deep learning compiler team in Summer 2024.
  • [12/10/23] Our HLS verification paper has been accepted to FPGA’24. Congrats to all the coauthors!
  • [11/07/23] Our Slapo paper has been accepted to ASPLOS’24! The code is open-sourced.
  • [10/30/23] Joined OOPSLA’24 Artifact Evaluation Committee.
  • [09/12/23] Attended SRC TECHCON in Austin and gave a talk on decoupled model schedules.

Publications

[Preprint] Understanding the Potential of FPGA-Based Spatial Acceleration for Large Language Model Inference
Hongzheng Chen, Jiahao Zhang, Yixiao Du, Shaojie Xiang, Zichao Yue, Niansong Zhang, Yaohui Cai, Zhiru Zhang
arXiv:2312.15159, 2024

Allo: A Programming Model for Composable Accelerator Design
Hongzheng Chen*, Niansong Zhang*, Shaojie Xiang, Zhichen Zeng, Mengjia Dai, Zhiru Zhang
PLDI, 2024 (To appear)

Slapo: A Schedule Language for Progressive Optimization of Large Deep Learning Model Training
Hongzheng Chen, Cody Hao Yu, Shuai Zheng, Zhen Zhang, Zhiru Zhang, Yida Wang
ASPLOS, 2024 (To appear)

Formal Verification of Source-to-Source Transformations for HLS
Louis-Noël Pouchet, Emily Tucker, Niansong Zhang, Hongzheng Chen, Debjit Pal, Gabriel Rodríguez, Zhiru Zhang
FPGA, 2024 (Best Paper Nominee)

BGL: GPU-Efficient GNN Training by Optimizing Graph Data I/O and Preprocessing
Tianfeng Liu*, Yangrui Chen*, Dan Li, Chuan Wu, Yibo Zhu, Jun He, Yanghua Peng, Hongzheng Chen, Hongzhi Chen, Chuanxiong Guo
NSDI, 2023

Accelerator Design with Decoupled Hardware Customizations: Benefits and Challenges
Debjit Pal, Yi-Hsiang Lai, Shaojie Xiang, Niansong Zhang, Hongzheng Chen, Jeremy Casas, Pasquale Cocchini, Zhenkun Yang, Jin Yang, Louis-Noël Pouchet, Zhiru Zhang
DAC, 2022 (Invited Paper)

HeteroFlow: An Accelerator Programming Model with Decoupled Data Placement for Software-Defined FPGAs
Shaojie Xiang, Yi-Hsiang Lai, Yuan Zhou, Hongzheng Chen, Niansong Zhang, Debjit Pal, Zhiru Zhang
FPGA, 2022

[Preprint] Structured Pruning is All You Need for Pruning CNNs at Initialization
Yaohui Cai, Weizhe Hua, Hongzheng Chen, G. Edward Suh, Christopher De Sa, Zhiru Zhang
arXiv:2203.02549, 2022

Krill: A Compiler and Runtime System for Concurrent Graph Processing
Hongzheng Chen, Minghua Shen, Nong Xiao, Yutong Lu
SC, 2021

FracBNN: Accurate and FPGA-Efficient Binary Neural Networks with Fractional Activations
Yichi Zhang, Junhao Pan, Xinheng Liu, Hongzheng Chen, Deming Chen, Zhiru Zhang
FPGA, 2021 (Best Paper Nominee)

Entropy-Directed Scheduling for FPGA High-Level Synthesis
Minghua Shen, Hongzheng Chen (Corresponding author), Nong Xiao
IEEE Transactions on CAD, 2020

A Deep-Reinforcement-Learning-Based Scheduler for FPGA HLS
Hongzheng Chen, Minghua Shen
ICCAD, 2019

Teaching

Professional Service

Awards & Honors

  • FPGA’24 Best Paper Nominee, FPGA, 2024
  • USENIX NSDI’23 Student Grant, USENIX, 2023
  • FPGA’21 Best Paper Nominee, FPGA, 2021
  • Outstanding Undergraduate Thesis Award, Sun Yat-sen University, 2021
  • SenseTime Scholarship (21 undergrads in China), SenseTime, 2020
  • CCF Elite Collegiate Award (98 undergrads in China), China Computer Federation (CCF), 2020
  • Chinese National Scholarship × 2 (Top 1%), Ministry of Education of PRC, 2018-2020
  • First-Prize Scholarship × 3 (Top 5%), Sun Yat-sen University, 2017-2020
  • Samsung Scholarship (Top 1%), Samsung Electronics, 2017-2018

Talks