Commit 967cfda: add reference
awaelchli committed Apr 26, 2024
1 parent 4d51b73 commit 967cfda
Showing 1 changed file with 1 addition and 1 deletion.
config_hub/finetune/README.md (1 addition, 1 deletion)
@@ -3,7 +3,7 @@
 The table below lists the performances you can expect from the provided config files. Note that you can achieve lower memory consumption by lowering the micro batch size as needed. In addition, you can lower the rank (`lora_r`) in the LoRA configuration files and disable LoRA for certain layers (for example, setting `lora_projection` and other LoRA layer-specific parameters to `false`).
 For more information, see the [Dealing with out-of-memory (OOM) errors](../../tutorials/oom.md) on lowering the memory requirements.
 The "Cost" column refers to the on-demand compute cost on [Lightning AI](https://lightning.ai) where these benchmarks were executed.
-All experiments were conducted using bfloat-16 precision on the Alpaca2k dataset. The "Multitask score" refers to MMLU.
+All experiments were conducted using bfloat-16 precision on the Alpaca2k dataset. The "Multitask score" refers to [MMLU](https://arxiv.org/abs/2009.03300).

 

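For context, the paragraph touched by this commit mentions lowering the micro batch size and `lora_r`, and disabling LoRA for specific layers such as `lora_projection`. A minimal sketch of what such settings could look like in one of the `config_hub/finetune` LoRA config files is shown below; the exact file layout and any keys beyond `lora_r` and `lora_projection` (for example `micro_batch_size` and `lora_mlp`) are illustrative assumptions, not taken from this commit.

```yaml
# Hypothetical excerpt from a LoRA finetuning config (illustrative only).
# Lowering micro_batch_size and lora_r reduces peak memory, as the README notes;
# setting a layer-specific switch such as lora_projection to false disables LoRA there.
micro_batch_size: 1      # smaller micro batches lower memory consumption
lora_r: 8                # smaller LoRA rank means fewer trainable parameters and less memory
lora_projection: false   # disable LoRA on the projection weights
lora_mlp: false          # example of another layer-specific LoRA switch
```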
