Merge branch 'huggingface:main' into donut-iobinding
IlyasMoutawwakil authored Aug 20, 2023
2 parents 231ef6c + 0b08a1f commit d081995
Showing 5 changed files with 547 additions and 2 deletions.
2 changes: 1 addition & 1 deletion optimum/version.py
@@ -12,4 +12,4 @@
# See the License for the specific language governing permissions and
# limitations under the License.

-__version__ = "1.11.2.dev0"
+__version__ = "1.11.3.dev0"
2 changes: 1 addition & 1 deletion setup.py
@@ -66,7 +66,7 @@
"nncf": "optimum-intel[nncf]>=1.10.1",
"neural-compressor": "optimum-intel[neural-compressor]>=1.9.2",
"graphcore": "optimum-graphcore",
"habana": ["transformers<4.29.0", "optimum-habana"],
"habana": "optimum-habana",
"neuron": "optimum-neuron[neuron]",
"neuronx": "optimum-neuron[neuronx]",
"furiosa": "optimum-furiosa",
77 changes: 77 additions & 0 deletions tests/benchmark/README.md
@@ -0,0 +1,77 @@
# BetterTransformer benchmark

Please refer to https://medium.com/pytorch/bettertransformer-out-of-the-box-performance-for-huggingface-transformers-3fbe27d50ab2 and https://pytorch.org/blog/out-of-the-box-acceleration/ for reproduction.
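
For quick orientation, here is a minimal sketch of how a model is typically converted with BetterTransformer before benchmarking (the model name is illustrative and not taken from the benchmark scripts):

```python
from transformers import AutoModel
from optimum.bettertransformer import BetterTransformer

# Load any supported Transformers model (model name chosen for illustration).
model = AutoModel.from_pretrained("bert-base-uncased")

# Swap supported modules for their BetterTransformer fastpath equivalents.
model = BetterTransformer.transform(model)
```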

# GPTQ benchmark

Run the fp16 baseline:

```shell
CUDA_VISIBLE_DEVICES=0 python benchmark_gptq.py --model daryl149/llama-2-13b-chat-hf --sweep --num-batches 4 --task text-generation
```

and, after downloading the quantized checkpoint, the GPTQ benchmarks:

```shell
git clone --branch main https://huggingface.co/TheBloke/Llama-2-13B-chat-GPTQ
cd Llama-2-13B-chat-GPTQ
mv gptq_model-4bit-128g.safetensors model.safetensors
mv quantize_config.json quantization_config.json

# benchmark with the exllama kernel
CUDA_VISIBLE_DEVICES=0 python benchmark_gptq.py --model daryl149/llama-2-13b-chat-hf --gptq-model /path/to/Llama-2-13B-chat-GPTQ/ --sweep --num-batches 4 --gptq --task text-generation

# benchmark without the exllama kernel
CUDA_VISIBLE_DEVICES=0 python benchmark_gptq.py --model daryl149/llama-2-13b-chat-hf --gptq-model /path/to/Llama-2-13B-chat-GPTQ/ --sweep --num-batches 4 --gptq --task text-generation --disable-exllama
```
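
For reference, the benchmark script handles model loading itself; the snippet below is only a minimal sketch of loading the pre-quantized checkpoint with and without the exllama kernel through `transformers` (it assumes a transformers version exposing `GPTQConfig` with a `disable_exllama` flag and auto-gptq installed; the path placeholder mirrors the commands above):

```python
import torch
from transformers import AutoModelForCausalLM, GPTQConfig

# With the exllama kernel (the default for 4-bit GPTQ checkpoints).
model = AutoModelForCausalLM.from_pretrained(
    "/path/to/Llama-2-13B-chat-GPTQ",
    device_map="auto",
    torch_dtype=torch.float16,
)

# Without the exllama kernel, falling back to the older autogptq CUDA kernels.
model_no_exllama = AutoModelForCausalLM.from_pretrained(
    "/path/to/Llama-2-13B-chat-GPTQ",
    device_map="auto",
    torch_dtype=torch.float16,
    quantization_config=GPTQConfig(bits=4, disable_exllama=True),
)
```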

## Benchmark results

Here are the results obtained on a single NVIDIA A100-SXM4-80GB GPU, using a prompt length of 512 and generating exactly 512 new tokens. Each generation is repeated over 4 batches, and metrics are averaged over the number of batches and the generation length.

Additional benchmarks could be done in the act-order case.

From the benchmark, it appears that the exllama kernel is best-in-class for GPTQ, although it is rather slow for larger batch sizes. The memory savings are not exactly 4x even though the weights are stored in int4. This can be explained by static buffers possibly allocated by the kernels, by the CUDA context (which is included in the measurements), and by the KV cache, which is still kept in fp16.
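
As a rough sanity check on how the columns below relate (the exact aggregation in the benchmark script may differ), throughput is approximately the batch size divided by the averaged per-token latency:

```python
def throughput_tok_per_s(per_token_latency_ms: float, batch_size: int) -> float:
    """Approximate decoding throughput from the averaged per-token latency."""
    return batch_size * 1000.0 / per_token_latency_ms

# Batch size 1, exllama kernel: 1000 / 33.711 ≈ 29.7 tok/s (table reports 29.663).
print(throughput_tok_per_s(33.711, batch_size=1))

# Batch size 8, exllama kernel: 8 * 1000 / 73.57 ≈ 108.7 tok/s (table reports 108.73).
print(throughput_tok_per_s(73.57, batch_size=8))
```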

### Batch size = 1

|gptq |act_order|bits|group_size|kernel|Load time (s)|Per-token latency (ms)|Throughput (tok/s)|Peak memory (MB)|
|-----|---------|----|----------|------|-------------|----------------------|------------------|----------------|
|False|None |None|None |None |26.0 |36.958 |27.058 |29152.98 |
|True |False |4 |128 |exllama|36.2 |33.711 |29.663 |10484.34 |
|True |False |4 |128 |autogptq-cuda-old|36.2 |46.44 |21.53 |10344.62 |


### Batch size = 2

|gptq |act_order|bits|group_size|kernel|Load time (s)|Per-token latency (ms)|Throughput (tok/s)|Peak memory (MB)|
|-----|---------|----|----------|------|-------------|----------------------|------------------|----------------|
|False|None |None|None |None |26.0 |37.35 |53.53 |30831.09 |
|True |False |4 |128 |exllama|36.2 |37.25 |53.68 |12162.43 |
|True |False |4 |128 |autogptq-cuda-old|36.2 |47.41 |42.18 |12020.34 |

### Batch size = 4

|gptq |act_order|bits|group_size|kernel |Load time (s)|Per-token latency (ms)|Throughput (tok/s)|Peak memory (MB)|
|-----|---------|----|----------|-----------------|-------------|----------------------|------------------|----------------|
|False|None |None|None |None |26.0 |37.89 |105.55 |34187.22 |
|True |False |4 |128 |exllama |36.2 |54.14 |73.87 |15518.55 |
|True |False |4 |128 |autogptq-cuda-old|36.2 |60.98 |65.59 |15374.67 |


### Batch size = 8

|gptq |act_order|bits|group_size|kernel|Load time (s)|Per-token latency (ms)|Throughput (tok/s)|Peak memory (MB)|
|-----|---------|----|----------|------|-------------|----------------------|------------------|----------------|
|False|None |None|None |None |26.0 |47.37 |168.86 |40327.62 |
|True |False |4 |128 |exllama|36.2 |73.57 |108.73 |21864.56 |
|True |False |4 |128 |autogptq-cuda-old|36.2 |104.44 |76.59 |20987.68 |

### Batch size = 16

|gptq |act_order|bits|group_size|kernel|Load time (s)|Per-token latency (ms)|Throughput (tok/s)|Peak memory (MB)|
|-----|---------|----|----------|------|-------------|----------------------|------------------|----------------|
|False|None |None|None |None |26.0 |69.94 |228.76 |53986.51 |
|True |False |4 |128 |exllama|36.2 |95.41 |167.68 |34777.04 |
|True |False |4 |128 |autogptq-cuda-old|36.2 |192.48 |83.12 |35497.62 |