Qingyu Zhang
Large Language Models
AutoAlign - Automated Alignment Toolkit for LLMs
An open-source toolkit for automated alignment of Large Language Models. I was responsible for adapting and optimizing SFT/DPO algorithms for the Megatron framework.
Paper
Code
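As an illustration of the preference-alignment objective behind toolkits like this one, here is a minimal pure-Python sketch of the DPO loss for a single preference pair. The function name and arguments are my own for illustration, not AutoAlign's API; the real implementation operates on batched token log-probabilities inside the Megatron training loop.

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """DPO loss for one (chosen, rejected) preference pair.

    Each argument is the summed log-probability of a full response
    under the trainable policy or the frozen reference model.
    """
    # Implicit rewards: how much more the policy prefers each response
    # than the reference model does, scaled by beta.
    chosen_reward = beta * (policy_chosen_logp - ref_chosen_logp)
    rejected_reward = beta * (policy_rejected_logp - ref_rejected_logp)
    margin = chosen_reward - rejected_reward
    # -log(sigmoid(margin)): the loss shrinks as the policy ranks the
    # chosen response further above the rejected one.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

When policy and reference agree exactly, the margin is zero and the loss is log 2; it falls below that as soon as the policy favors the chosen response.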
Inference Acceleration for the 70B LLaMA-2 Large Language Model
As a coaching assistant for the ASC24 Student Supercomputer Challenge, I helped the team optimize the inference performance of the LLaMA-2-70B model, serving it with efficient frameworks such as vLLM and designing data-parallel strategies that significantly reduced latency.
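The data-parallel side of this kind of setup can be sketched in a few lines: run several identical model replicas and spread incoming prompts across them, so per-replica batch size (and hence queueing latency) drops roughly by the replica count. The helper below is a hypothetical illustration, not code from the competition entry.

```python
def shard_round_robin(prompts, num_replicas):
    """Assign prompts to inference replicas in round-robin order.

    With R identical replicas, each one serves about len(prompts) / R
    requests; a real serving stack would dispatch each shard to its
    replica's engine (e.g. a separate vLLM instance per GPU group).
    """
    shards = [[] for _ in range(num_replicas)]
    for i, prompt in enumerate(prompts):
        shards[i % num_replicas].append(prompt)
    return shards

# Example: 10 prompts over 4 replicas -> shard sizes 3, 3, 2, 2.
shards = shard_round_robin([f"p{i}" for i in range(10)], 4)
```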
Training the 10-Billion Parameter Yuan-1.0 LLM
As a key member of the ASC23 team, I trained a 10B-parameter large language model using the DeepSpeed-Megatron framework, combining tensor, pipeline, and data parallelism. Our work won the First Prize.
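In Megatron-style 3D parallelism, the three degrees must multiply to the total GPU count: world_size = TP × PP × DP. A small sketch of that bookkeeping (my own helper for illustration, not the framework's API):

```python
def data_parallel_size(world_size, tensor_parallel, pipeline_parallel):
    """Derive the data-parallel degree from the total GPU count.

    The cluster is partitioned so that
        world_size = tensor_parallel * pipeline_parallel * data_parallel;
    the data-parallel size is whatever remains after the tensor and
    pipeline groups are carved out.
    """
    model_parallel = tensor_parallel * pipeline_parallel
    if world_size % model_parallel != 0:
        raise ValueError("world size must be divisible by TP * PP")
    return world_size // model_parallel

# Example: 64 GPUs with TP=8 and PP=2 leave DP=4 replicas of the model.
dp = data_parallel_size(64, 8, 2)
```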