From e6b824d4ff000f3478e997182cff25bb576d4d22 Mon Sep 17 00:00:00 2001
From: "Coder.AN"
Date: Thu, 15 Aug 2024 17:44:18 +0800
Subject: [PATCH] update readme

Change-Id: Ic6698970f31cb4ddfc01de618b4e6471e6c444ca
---
 README.md       | 188 ++++++++++++++++++++++++++++++++++++++++++++++--
 README_zh-CN.md |   1 +
 2 files changed, 185 insertions(+), 4 deletions(-)
 create mode 100644 README_zh-CN.md

diff --git a/README.md b/README.md
index 35ee682..8f208d1 100644
--- a/README.md
+++ b/README.md
@@ -21,14 +21,14 @@ Current large language models (LLMs) primarily utilize next-token prediction met

By integrating SentenceVAE into the input and output layers of LLMs, we develop Sentence-level LLMs (SLLMs) that employ a sentence-by-sentence inference method.
Fig. 2. (a) The schematic form of published LLMs. (b) The schematic form of SLLMs, which are embedded with SentenceVAEs.
The SLLMs can maintain the integrity of the original semantic content by segmenting the context into sentences, thereby improving accuracy while boosting inference speed. Moreover, compared to previous LLMs, SLLMs process fewer tokens over an equivalent context length, significantly reducing the memory demands of self-attention computation and facilitating the handling of longer contexts. Extensive experiments on the [Wanjuan dataset](https://github.com/opendatalab/WanJuan1.0/) show that, compared to previous token-by-token methods, the proposed method accelerates inference by 204 ~ 365%, reduces perplexity (PPL) to 46 ~ 75% of its original value, and decreases memory overhead by 86 ~ 91% at an equivalent context length.
@@ -142,6 +142,186 @@ The SLLMs can maintain the integrity of the original semantic content by segment

In addition, by corroborating the Scaling Law, we extrapolate the feasibility of our method to larger-scale models.
Fig. 3. Scaling Law of (a) SLLMs and (b) SVAEs.
# 2. Quick Start
## Installation

Step 1. Install SentenceVAE from source.

```sh
git clone https://github.com/BestAnHongjun/SentenceVAE.git
cd SentenceVAE
pip3 install -e .  # or python3 setup.py develop
```
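To confirm the editable install succeeded, you can try importing the package. This is a minimal sanity check, and the module name `sentence_vae` is an assumption; adjust it if `setup.py` registers a different package name.

```sh
# Minimal post-install check; the module name `sentence_vae` is an
# assumption, adjust it if setup.py exposes a different package.
python3 -c "import sentence_vae; print('SentenceVAE is importable')"
```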
## Prepare OPT models

Step 1. Create a folder named `model_repo` under `SentenceVAE` to store the OPT-series models.

```sh
cd SentenceVAE
mkdir -p model_repo
```

Step 2. Navigate to the `model_repo` directory and initialize [`git-lfs`](https://git-lfs.com).

```sh
cd model_repo
git lfs install
```

Step 3. Download the [OPT-125M](https://huggingface.co/facebook/opt-125m) model for the SentenceVAE-768 and SLLM-125M series.

```sh
git clone https://huggingface.co/facebook/opt-125m
```

Step 4. Download the [OPT-350M](https://huggingface.co/facebook/opt-350m) model for the SentenceVAE-1024 and SLLM-350M series.

```sh
git clone https://huggingface.co/facebook/opt-350m
```

Step 5. Download the [OPT-1.3B](https://huggingface.co/facebook/opt-1.3b) model for the SentenceVAE-2048 and SLLM-1.3B series.

```sh
git clone https://huggingface.co/facebook/opt-1.3b
```
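If you want all three base models at once, Steps 3 ~ 5 can be collapsed into a single loop. This is a convenience sketch; it assumes you are still inside `model_repo` and that `git-lfs` has already been initialized.

```sh
# One-shot equivalent of Steps 3-5; assumes the current directory is
# SentenceVAE/model_repo and that `git lfs install` has already been run.
for m in opt-125m opt-350m opt-1.3b; do
    git clone "https://huggingface.co/facebook/${m}"
done
```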
## SentenceVAE Demo

Step 1. Download a pretrained model from the table below (a command-line example follows the table).
|Model|Hidden Size|Hidden Layers|Loss↓|PPL↓|Download Link|
|:-:|:-:|:-:|:-:|:-:|:-:|
|SVAE-768-H1|768|1|1.339|3.605|[ModelScope](https://modelscope.cn/models/CoderAN/SentenceVAE/resolve/master/SVAE-768-H1.pth)|
|SVAE-768-H2|768|2|1.019|2.588|[ModelScope](https://modelscope.cn/models/CoderAN/SentenceVAE/resolve/master/SVAE-768-H2.pth)|
|SVAE-768-H4|768|4|**0.5598**|**1.649**|[ModelScope](https://modelscope.cn/models/CoderAN/SentenceVAE/resolve/master/SVAE-768-H4.pth)|
|SVAE-1024-H1|1024|1|0.9266|2.406|[ModelScope](https://modelscope.cn/models/CoderAN/SentenceVAE/resolve/master/SVAE-1024-H1.pth)|
|SVAE-1024-H2|1024|2|0.6610|1.845|[ModelScope](https://modelscope.cn/models/CoderAN/SentenceVAE/resolve/master/SVAE-1024-H2.pth)|
|SVAE-1024-H4|1024|4|**0.3704**|**1.384**|[ModelScope](https://modelscope.cn/models/CoderAN/SentenceVAE/resolve/master/SVAE-1024-H4.pth)|
|SVAE-2048-H1|2048|1|0.5165|1.622|[ModelScope](https://modelscope.cn/models/CoderAN/SentenceVAE/resolve/master/SVAE-2048-H1.pth)|
|SVAE-2048-H2|2048|2|0.2845|1.292|[ModelScope](https://modelscope.cn/models/CoderAN/SentenceVAE/resolve/master/SVAE-2048-H2.pth)|
|SVAE-2048-H4|2048|4|**0.1270**|**1.115**|[ModelScope](https://modelscope.cn/models/CoderAN/SentenceVAE/resolve/master/SVAE-2048-H4.pth)|
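For example, Step 1 for the SVAE-768-H4 row can be done from the command line using its ModelScope link from the table. This is a sketch; the `model_repo/svae` destination is only a suggested location, not a path the demo requires.

```sh
# Fetch one checkpoint from its ModelScope link in the table above;
# the destination folder model_repo/svae is an arbitrary choice.
mkdir -p model_repo/svae
wget -P model_repo/svae \
    https://modelscope.cn/models/CoderAN/SentenceVAE/resolve/master/SVAE-768-H4.pth
```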
Step 2. Run the demo script under the `tools/demo` folder. Here's an example:

```sh
cd SentenceVAE

python3 tools/demo/demo_svae.py \
    -c config/SVAE/SVAE-768/svae_768_h4.yaml \
    --checkpoint /path/to/pretrained/checkpoint \
    --input "What's your name?"
```

**Arguments**:
* `-c`, `--config`: path to the corresponding configuration file; see [this folder](config/SVAE/).
* `--checkpoint`: path to the checkpoint file you just downloaded.
* `--input`: the sentence you want to test.
  * It must be a single sentence ending with a punctuation mark such as a comma or period; please refer to the [paper](https://arxiv.org/abs/2408.00655) for the specific reasons.
  * Currently, only English is supported.

The model compresses the input sentence into a single vector, then decodes that vector to reconstruct the sentence. Ideally, the output should match the input exactly.
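Putting the pieces together, a full round trip with the checkpoint downloaded above might look like this. This is a sketch: the checkpoint path matches the suggested download location, and the final comment describes the ideal reconstruction rather than a captured log.

```sh
# Round-trip test: encode a sentence to one vector, then decode it back.
# The checkpoint path assumes the suggested model_repo/svae download folder.
python3 tools/demo/demo_svae.py \
    -c config/SVAE/SVAE-768/svae_768_h4.yaml \
    --checkpoint model_repo/svae/SVAE-768-H4.pth \
    --input "The weather is nice today."
# An ideal run reconstructs the input verbatim: "The weather is nice today."
```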
## SentenceLLM Demo

**Notice**: As SFT datasets are typically commercial secrets and were difficult for us to access, all the models listed below are **pre-trained models**, not general-purpose conversation models. Therefore, please assess model quality with the **PPL** (perplexity) metric, not conversational performance. If you treat them as Q&A models, you are likely to get gibberish outputs (***in fact, even our baseline OPT model will output gibberish***). We recommend fine-tuning these models on private SFT datasets to explore their potential as general-purpose conversation models.

Step 1. Download a pretrained model from the table below (as before, a command-line example follows the table).
|Model|Download Link|
|:-:|:-:|
|SLLM-125M-H1|[ModelScope](https://modelscope.cn/models/CoderAN/SentenceLLM/resolve/master/SLLM-125M-H1.pth)|
|SLLM-125M-H2|[ModelScope](https://modelscope.cn/models/CoderAN/SentenceLLM/resolve/master/SLLM-125M-H2.pth)|
|SLLM-125M-H4|[ModelScope](https://modelscope.cn/models/CoderAN/SentenceLLM/resolve/master/SLLM-125M-H4.pth)|
|SLLM-350M-H1|[ModelScope](https://modelscope.cn/models/CoderAN/SentenceLLM/resolve/master/SLLM-350M-H1.pth)|
|SLLM-350M-H2|[ModelScope](https://modelscope.cn/models/CoderAN/SentenceLLM/resolve/master/SLLM-350M-H2.pth)|
|SLLM-350M-H4|[ModelScope](https://modelscope.cn/models/CoderAN/SentenceLLM/resolve/master/SLLM-350M-H4.pth)|
|SLLM-1.3B-H1|[ModelScope](https://modelscope.cn/models/CoderAN/SentenceLLM/resolve/master/SLLM-1.3B-H1.pth)|
|SLLM-1.3B-H2|[ModelScope](https://modelscope.cn/models/CoderAN/SentenceLLM/resolve/master/SLLM-1.3B-H2.pth)|
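As with the SVAE checkpoints, each link in the table can be fetched directly. This is a sketch; `model_repo/sllm` is again only a suggested destination.

```sh
# Fetch one SLLM checkpoint from its ModelScope link in the table above;
# the destination folder model_repo/sllm is an arbitrary choice.
mkdir -p model_repo/sllm
wget -P model_repo/sllm \
    https://modelscope.cn/models/CoderAN/SentenceLLM/resolve/master/SLLM-125M-H4.pth
```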
Step 2. Run the demo script under the `tools/demo` folder. Here's an example:

```sh
cd SentenceVAE

python3 tools/demo/demo_sllm.py \
    -c config/SLLM/SLLM-125m/sllm_125m_h4_all.yaml \
    --checkpoint /path/to/pretrained/checkpoint \
    --input "What's your name?"
```

**Arguments**:
* `-c`, `--config`: path to the corresponding configuration file; see [this folder](config/SLLM/).
* `--checkpoint`: path to the checkpoint file you just downloaded.
* `--input`: your input sentence.
# 3. Tutorials

Work in progress...
## Train Models

* [Prepare Datasets](#)
* [Train SentenceVAEs](#)
* [Train SentenceLLMs](#)
## Eval Models

* [Eval OPT models (baseline)](#)
* [Eval SentenceVAEs](#)
* [Eval SentenceLLMs](#)
## Test Benchmarks

* [Test benchmarks of SentenceVAEs](#)
* [Test benchmarks of SentenceLLMs](#)
# 4. Cite SentenceVAE

If you use SentenceVAE in your research, please cite our work with the following BibTeX entry:

```bibtex
@article{an2024sentencevae,
  title={SentenceVAE: Enable Next-sentence Prediction for Large Language Models with Faster Speed, Higher Accuracy and Longer Context},
  author={An, Hongjun and Chen, Yifan and Sun, Zhe and Li, Xuelong},
  journal={arXiv preprint arXiv:2408.00655},
  year={2024}
}
```
\ No newline at end of file

diff --git a/README_zh-CN.md b/README_zh-CN.md
new file mode 100644
index 0000000..be0bcfd
--- /dev/null
+++ b/README_zh-CN.md
@@ -0,0 +1 @@
+TODO.
\ No newline at end of file