MODPO: Multi-Objective Direct Preference Optimization

Code release for Beyond One-Preference-Fits-All Alignment: Multi-Objective Direct Preference Optimization.

TL;DR: Compared with the DPO loss, the MODPO loss adds a margin term so that language models can be steered by multiple objectives at once.
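For intuition, the sketch below gives a minimal PyTorch rendering of this objective for the two-objective case. The function and tensor names, the weighting convention (weight w on the objective being optimized, 1 - w on the margin rewards), and the default hyperparameters are illustrative assumptions, not this repository's exact API; see src/trainer for the actual implementation.

import torch.nn.functional as F

def modpo_loss(policy_chosen_logps, policy_rejected_logps,
               ref_chosen_logps, ref_rejected_logps,
               margin_chosen_rewards, margin_rejected_rewards,
               beta=0.1, w=0.5):
    # Implicit reward difference between chosen and rejected responses, as in DPO.
    pi_logratios = policy_chosen_logps - policy_rejected_logps
    ref_logratios = ref_chosen_logps - ref_rejected_logps
    dpo_term = (beta / w) * (pi_logratios - ref_logratios)
    # MODPO's extra margin: the reward difference under the other objective,
    # weighted by (1 - w) / w, so the preference being trained on does not
    # override objectives already captured by the margin reward model.
    margin = ((1 - w) / w) * (margin_chosen_rewards - margin_rejected_rewards)
    # Maximize the margin-adjusted log-odds of preferring the chosen response.
    return -F.logsigmoid(dpo_term - margin).mean()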

Installation

conda create -n modpo python=3.10
conda activate modpo
pip install torch==2.1.0 --index-url https://download.pytorch.org/whl/cu118
pip install -r requirements.txt
# (optional) pip install flash-attn==2.3.2 --no-build-isolation
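Optionally, you can sanity-check that the CUDA build of PyTorch installed correctly; this snippet is a quick check of ours, not part of the official setup:

import torch
print(torch.__version__)          # expect 2.1.0+cu118
print(torch.cuda.is_available())  # expect True on a machine with a CUDA GPU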

Running MODPO

This repository includes two MODPO examples:

Other examples

This repository also contains other off-the-shelf tuning recipes:

To implement a new alignment algorithm, add a new trainer under src/trainer, as sketched below.
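As a starting point, a new trainer might follow this skeleton; the base class and method signature follow the standard Hugging Face transformers.Trainer convention and are assumptions, not the repository's exact interface:

# src/trainer/my_trainer.py (hypothetical file; adapt to the base classes in src/trainer)
from transformers import Trainer

class MyAlignmentTrainer(Trainer):
    """Skeleton for a new alignment algorithm."""

    def compute_loss(self, model, inputs, return_outputs=False, **kwargs):
        # Replace this with the loss of your alignment objective.
        outputs = model(**inputs)
        loss = outputs.loss
        return (loss, outputs) if return_outputs else loss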

Customized datasets

For supported datasets, see REAL_DATASET_CONFIGS in src/data/configs.py. To train on your own datasets, add them under src/data/raw_data and register them in REAL_DATASET_CONFIGS in src/data/configs.py accordingly; see src/data/raw_data/shp for an example.
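Purely as an illustration, registering a new dataset might look like the sketch below; every key and value here is hypothetical, so mirror the schema of the existing entries (e.g., the shp entry) instead:

# src/data/configs.py (illustrative only; the real schema may differ)
REAL_DATASET_CONFIGS = {
    # ... existing entries ...
    "my_dataset": {                              # hypothetical dataset name
        "path": "src/data/raw_data/my_dataset",  # hypothetical: where the raw files live
        "prompt_template": "{raw_prompt}",       # hypothetical formatting field
    },
}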

Reference

@inproceedings{zhou2024beyond,
  title={Beyond one-preference-fits-all alignment: Multi-objective direct preference optimization},
  author={Zhou, Zhanhui and Liu, Jie and Shao, Jing and Yue, Xiangyu and Yang, Chao and Ouyang, Wanli and Qiao, Yu},
  booktitle={Findings of the Association for Computational Linguistics ACL 2024},
  pages={10586--10613},
  year={2024}
}
