Qingyu Zhang
AutoAlign: Automated Alignment for Large Language Models
An open-source toolkit for automated alignment of Large Language Models with human intentions and values.
Xinyu Lu, Dong Xu, Chunkang Zhang, Xinyan Guan, Junxiang Wang, Qingyu Zhang, Pengbo Wang, Yingzhi Mao, Hao Xiang, Xueru Wen, Zichao Li, Yaojie Lu, Hongyu Lin, Le Sun, Xianpei Han
PDF · Cite · Code
ShortV: Efficient Multimodal Large Language Models by Freezing Visual Tokens in Ineffective Layers
We propose ShortV, a training-free method that reduces the computational cost of MLLMs by freezing visual tokens in ineffective layers.
Qianhao Yuan, Qingyu Zhang, Yanjiang Liu, Jiawei Chen, Yaojie Lu, Hongyu Lin, Jia Zheng, Xianpei Han, Le Sun
PDF · Cite
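The idea behind ShortV can be illustrated with a toy sketch: in layers flagged as ineffective, visual-token positions simply keep their input hidden states instead of being recomputed. The function and layer below are hypothetical stand-ins, not the paper's implementation.

```python
import numpy as np

def apply_layer(hidden, layer_fn, visual_mask, freeze_visual):
    """Apply one transformer layer; optionally freeze visual tokens.

    hidden: (seq_len, dim) hidden states
    layer_fn: the layer's transformation (toy stand-in here)
    visual_mask: boolean (seq_len,) marking visual-token positions
    freeze_visual: if True, visual tokens keep their input states
    """
    out = layer_fn(hidden)
    if freeze_visual:
        # Visual tokens pass through this layer unchanged.
        out[visual_mask] = hidden[visual_mask]
    return out

# Toy demo: a "layer" that adds 1 to every hidden state.
hidden = np.zeros((4, 2))
visual_mask = np.array([True, True, False, False])  # first two tokens are visual
out = apply_layer(hidden, lambda h: h + 1.0, visual_mask, freeze_visual=True)
# Frozen visual tokens stay at 0; text tokens are updated to 1.
```

Because the frozen positions skip the layer's computation, their attention and MLP cost in that layer can be saved entirely in a real implementation.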
ShortGPT: Layers in Large Language Models are More Redundant Than You Expect
We investigate the redundancy within Transformer layers and propose an effective layer-based pruning method.
Xin Men, Mingyu Xu, Qingyu Zhang, Qianhao Yuan, Bingning Wang, Hongyu Lin, Yaojie Lu, Xianpei Han, Weipeng Chen
PDF · Cite
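Layer redundancy of the kind ShortGPT exploits can be scored with a simple metric: how little a layer changes its input, measured by cosine similarity between the hidden states entering and leaving it. The sketch below is a minimal illustration of such a redundancy score, not the paper's exact procedure.

```python
import numpy as np

def block_influence(h_in, h_out, eps=1e-8):
    """Redundancy score for one layer: 1 - mean per-token cosine
    similarity between its input and output hidden states.
    A score near 0 means the layer barely changes its input,
    making it a candidate for pruning."""
    num = (h_in * h_out).sum(axis=-1)
    den = np.linalg.norm(h_in, axis=-1) * np.linalg.norm(h_out, axis=-1) + eps
    return float(1.0 - (num / den).mean())

# Toy demo: an identity-like layer scores near 0 (highly redundant),
# while a layer that flips the sign of its input scores near 2.
h = np.random.default_rng(0).normal(size=(8, 16))
identity_score = block_influence(h, h)    # ~0.0
flip_score = block_influence(h, -h)       # ~2.0
```

Ranking layers by such a score and removing the lowest-scoring ones is the essence of a layer-based pruning method.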
Base of RoPE Bounds Context Length
We derive a lower bound on the base of RoPE as a function of context length, providing a theoretical foundation for the long-context extrapolation of models.
Xin Men*, Mingyu Xu*, Bingning Wang†, Qingyu Zhang, Hongyu Lin, Xianpei Han, Weipeng Chen
PDF · Cite
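For context, a minimal sketch of the standard RoPE formulation the paper analyzes: pairs of dimensions are rotated by angles scaled by per-dimension frequencies `base**(-2i/d)`, so attention logits depend only on relative position. The bound itself is derived in the paper; this is just the standard mechanism.

```python
import numpy as np

def rope_rotate(x, pos, base=10000.0):
    """Apply rotary position embedding to a (d,)-vector at position `pos`.
    Dimension pair i is rotated by angle pos * base**(-2i/d)."""
    d = x.shape[-1]
    i = np.arange(d // 2)
    theta = pos * base ** (-2.0 * i / d)
    cos, sin = np.cos(theta), np.sin(theta)
    x1, x2 = x[0::2], x[1::2]
    out = np.empty_like(x)
    out[0::2] = x1 * cos - x2 * sin
    out[1::2] = x1 * sin + x2 * cos
    return out

# The dot product depends only on relative position:
# <R_m q, R_n k> equals <R_{m-n} q, k> for any offset.
rng = np.random.default_rng(0)
q, k = rng.normal(size=8), rng.normal(size=8)
a = rope_rotate(q, 100) @ rope_rotate(k, 90)
b = rope_rotate(q, 10) @ rope_rotate(k, 0)
```

Here `a` and `b` agree because both encode a relative distance of 10; how small the base can be while positions at long distances remain distinguishable is the question the paper addresses.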