- CPET: Effective Parameter-Efficient Tuning for Compressed Large Language Models. Weilin Zhao, Yuxiang Huang, Xu Han, Zhiyuan Liu, Zhengyan Zhang, Maosong Sun. [Paper]
- ReMax: A Simple, Effective, and Efficient Reinforcement Learning Method for Aligning Large Language Models. Ziniu Li, Tian Xu, Yushun Zhang, Yang Yu, Ruoyu Sun, Zhi-Quan Luo. [Paper][Github]
- TRANSOM: An Efficient Fault-Tolerant System for Training LLMs. Baodong Wu, Lei Xia, Qingping Li, Kangyu Li, Xu Chen, Yongqiang Guo, Tieyao Xiang, Yuheng Chen, Shigang Li. [Paper]
- DEFT: Data Efficient Fine-Tuning for Large Language Models via Unsupervised Core-Set Selection. Devleena Das, Vivek Khetan. [Paper]
- LongQLoRA: Efficient and Effective Method to Extend Context Length of Large Language Models. Jianxin Yang. [Paper][Github]
- Sparse Fine-tuning for Inference Acceleration of Large Language Models. Eldar Kurtic, Denis Kuznedelev, Elias Frantar, Michael Goin, Dan Alistarh. [Paper][Github][Github]
- ComPEFT: Compression for Communicating Parameter Efficient Updates via Sparsification and Quantization. Prateek Yadav, Leshem Choshen, Colin Raffel, Mohit Bansal. [Paper][Github]
- Towards Better Parameter-Efficient Fine-Tuning for Large Language Models: A Position Paper. Chengyu Wang, Junbing Yan, Wei Zhang, Jun Huang. [Paper]
- SPT: Fine-Tuning Transformer-based Language Models Efficiently with Sparsification. Yuntao Gui, Xiao Yan, Peiqi Yin, Han Yang, James Cheng. [Paper][Github]
- LoRA+: Efficient Low Rank Adaptation of Large Models. Soufiane Hayou, Nikhil Ghosh, Bin Yu. [Paper][Github]
- Sparse MeZO: Less Parameters for Better Performance in Zeroth-Order LLM Fine-Tuning. Yong Liu, Zirui Zhu, Chaoyu Gong, Minhao Cheng, Cho-Jui Hsieh, Yang You. [Paper]
- DropBP: Accelerating Fine-Tuning of Large Language Models by Dropping Backward Propagation. Sunghyeon Woo, Baeseong Park, Byeongwook Kim, Minjung Jo, Sejung Kwon, Dongsuk Jeon, Dongsoo Lee. [Paper][Github]
- LoRA-SP: Streamlined Partial Parameter Adaptation for Resource-Efficient Fine-Tuning of Large Language Models. Yichao Wu, Yafei Xiang, Shuning Huo, Yulu Gong, Penghao Liang. [Paper]
- Parameter-Efficient Fine-Tuning for Large Models: A Comprehensive Survey. Zeyu Han, Chao Gao, Jinyang Liu, Jeff (Jun) Zhang, Sai Qian Zhang. [Paper]
- AILS-NTUA at SemEval-2024 Task 6: Efficient model tuning for hallucination detection and analysis. Natalia Griogoriadou, Maria Lymperaiou, Giorgos Filandrianos, Giorgos Stamou. [Paper][Github]
- BAdam: A Memory Efficient Full Parameter Training Method for Large Language Models. Qijun Luo, Hengxu Yu, Xiao Li. [Paper][Github]
- Intuition-aware Mixture-of-Rank-1-Experts for Parameter Efficient Finetuning. Yijiang Liu, Rongyu Zhang, Huanrui Yang, Kurt Keutzer, Yuan Du, Li Du, Shanghang Zhang. [Paper]
- Random Masking Finds Winning Tickets for Parameter Efficient Fine-tuning. Jing Xu, Jingzhao Zhang. [Paper][Github]
- Zeroth-Order Fine-Tuning of LLMs with Extreme Sparsity. Wentao Guo, Jikai Long, Yimeng Zeng, Zirui Liu, Xinyu Yang, Yide Ran, Jacob R. Gardner, Osbert Bastani, Christopher De Sa, Xiaodong Yu, Beidi Chen, Zhaozhuo Xu. [Paper]
- Light-PEFT: Lightening Parameter-Efficient Fine-Tuning via Early Pruning. Naibin Gu, Peng Fu, Xiyu Liu, Bowen Shen, Zheng Lin, Weiping Wang. [Paper][Github]
- BlockLLM: Memory-Efficient Adaptation of LLMs by Selecting and Optimizing the Right Coordinate Blocks. Amrutha Varshini Ramesh, Vignesh Ganapathiraman, Issam H. Laradji, Mark Schmidt. [Paper]
- Compress then Serve: Serving Thousands of LoRA Adapters with Little Overhead. Rickard Brüel-Gabrielsson, Jiacheng Zhu, Onkar Bhardwaj, Leshem Choshen, Kristjan Greenewald, Mikhail Yurochkin, Justin Solomon. [Paper]
- Increasing Model Capacity for Free: A Simple Strategy for Parameter Efficient Fine-tuning. Haobo Song, Hao Zhao, Soumajit Majumder, Tao Lin. [Paper][Github]
- PocketLLM: Enabling On-Device Fine-Tuning for Personalized LLMs. Dan Peng, Zhihui Fu, Jun Wang. [Paper]
- Code Less, Align More: Efficient LLM Fine-tuning for Code Generation with Data Pruning. Yun-Da Tsai, Mingjie Liu, Haoxing Ren. [Paper]
- Tensor Train Low-rank Approximation (TT-LoRA): Democratizing AI with Accelerated LLMs. Afia Anjum, Maksim E. Eren, Ismael Boureima, Boian Alexandrov, Manish Bhattarai. [Paper]
- Enabling Resource-Efficient On-Device Fine-Tuning of LLMs Using Only Inference Engines. Lei Gao, Amir Ziashahabi, Yue Niu, Salman Avestimehr, Murali Annavaram. [Paper]
- Bone: Block Affine Transformation as Parameter Efficient Fine-tuning Methods for Large Language Models. Jiale Kang. [Paper][Github]
- SparseGrad: A Selective Method for Efficient Fine-tuning of MLP Layers. Viktoriia Chekalina, Anna Rudenko, Gleb Mezentsev, Alexander Mikhalev, Alexander Panchenko, Ivan Oseledets. [Paper][Github]
- SpaLLM: Unified Compressive Adaptation of Large Language Models with Sketching. Tianyi Zhang, Junda Su, Oscar Wu, Zhaozhuo Xu, Anshumali Shrivastava. [Paper]
- Layer-wise Importance Matters: Less Memory for Better Performance in Parameter-efficient Fine-tuning of Large Language Models. Kai Yao, Penglei Gao, Lichun Li, Yuan Zhao, Xiaofeng Wang, Wei Wang, Jianke Zhu. [Paper][Github]
- Parameter-Efficient Fine-Tuning of Large Language Models using Semantic Knowledge Tuning. Nusrat Jahan Prottasha, Asif Mahmud, Md. Shohanur Islam Sobuj, Prakash Bhat, Md Kowsher, Niloofar Yousefi, Ozlem Ozmen Garibay. [Paper]
- QEFT: Quantization for Efficient Fine-Tuning of LLMs. Changhun Lee, Jun-gyu Jin, Younghyun Cho, Eunhyeok Park. [Paper][Github]
- BIPEFT: Budget-Guided Iterative Search for Parameter Efficient Fine-Tuning of Large Pretrained Language Models. Aofei Chang, Jiaqi Wang, Han Liu, Parminder Bhatia, Cao Xiao, Ting Wang, Fenglong Ma. [Paper][Github]
- RoCoFT: Efficient Finetuning of Large Language Models with Row-Column Updates. Md Kowsher, Tara Esmaeilbeig, Chun-Nam Yu, Mojtaba Soltanalian, Niloofar Yousefi. [Paper][Github]
- MiLoRA: Efficient Mixture of Low-Rank Adaptation for Large Language Models Fine-tuning. Jingfan Zhang, Yi Zhao, Dan Chen, Xing Tian, Huanran Zheng, Wei Zhu. [Paper]