
[BUG] Qwen 2.5 32B returns garbage at certain quantization levels, but not others #628

Closed
3 tasks done
Downtown-Case opened this issue Sep 19, 2024 · 4 comments
Labels
bug Something isn't working

Comments

@Downtown-Case
Contributor

Downtown-Case commented Sep 19, 2024

OS

Linux

GPU Library

CUDA 12.x

Python version

3.12

Pytorch version

2.3, 2.4, 2.6 nightly, flash-attn and xformers built from source, exllama built from master branch

Describe the bug

Qwen 2.5 32B returns garbage output with certain quantizations above 4bpw, but not with ones below 4bpw.

Possibly related to #621 or #627

What's unusual is that lower quantizations work, but higher ones do not.

These two quants work for me:

While this one (and a 4.04bpw quant I had locally) returns garbage:

Here's an example command I used for quantization:

python convert.py --in_dir "/home/down/Models/Raw/Qwen_Qwen2.5-32B" -o "/home/down/FastStorage/scratch2" -m "/home/down/Models/calibration/Q32-base.json" -b 4.0 -hb 6 -cf "/home/down/Models/exllama/Qwen_Qwen2.5-32B-exl2-4.0bpw" -nr --fast_safetensors

Re-doing the calibration from scratch doesn't seem to make a difference, and the same calibration measurements were used for the sub-4bpw quantizations.

I tried quantizing at 4.1/4.04 bpw in multiple PyTorch environments, with different versions of flash-attention installed, remaking the measurements JSON from scratch, and so on. My test is a 75K-context story at Q4 cache quantization, simply continuing it in exui. Again, the sub-4bpw quantizations continue it coherently, while the ones over 4bpw return garbled English, with no errors in the console.

I'm running through more troubleshooting steps now (like trying different levels of cache quantization and making more quantizations), but figured I'd post early since others seem to be having issues with Qwen.
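
For the cache quantization part of that, I'm just swapping the cache class when loading. Roughly like this (a sketch against the 0.2.x exllamav2 API as I understand it; the path and context length are placeholders):

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Cache_Q4

config = ExLlamaV2Config("/path/to/Qwen_Qwen2.5-32B-exl2-4.0bpw")  # placeholder path
config.max_seq_len = 81920  # headroom for the ~75K-token story

model = ExLlamaV2(config)

# FP16 cache:
cache = ExLlamaV2Cache(model, lazy=True)
# ...or the Q4 cache I normally run at this context length:
# cache = ExLlamaV2Cache_Q4(model, lazy=True)

model.load_autosplit(cache)
```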

Acknowledgements

  • I have looked for similar issues before submitting this one.
  • I understand that the developers have lives and my issue will be answered when possible.
  • I understand the developers of this program are human, and I will ask my questions politely.
Downtown-Case added the bug label on Sep 19, 2024
@Downtown-Case
Contributor Author

Downtown-Case commented Sep 19, 2024

This does not appear to be a quantized KV cache issue; the FP16 cache returns the same garbled English.

A brand-new 4bpw quantization also returns the same garbled English.

@turboderp
Owner

Can you tell me more about how you're prompting the model to get garbage? If I try your 4.1bpw version, it seems to be working fine, both in 0.2.2 master and 0.2.2 dev, with FP16 or Q4 cache. Doesn't seem to break either way.

[screenshot: sample generation from the 4.1bpw quant, producing coherent output]

Is it possible you're running low on VRAM or something?

@Downtown-Case
Contributor Author

Downtown-Case commented Sep 19, 2024

I have a few gigabytes of VRAM to spare when loading it at short context.

If I load it into exui, even with the default prompt of "Once upon a time", it just loops garbled English endlessly with the 4.1bpw quant, while the 3.75bpw quant is fine.

...I know, lol. I'm currently trying to reproduce it with a super minimal exllama script, and working my way up from there.
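
Roughly something like this (a sketch from memory against the 0.2.x exllamav2 API, so the path is a placeholder and the exact names may need double-checking):

```python
# Minimal repro sketch: load the 4.1bpw quant with Q4 cache and continue a
# prompt, same as the exui test.
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache_Q4, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config("/path/to/Qwen_Qwen2.5-32B-exl2-4.1bpw")  # placeholder path
model = ExLlamaV2(config)
cache = ExLlamaV2Cache_Q4(model, lazy=True)
model.load_autosplit(cache)
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8

print(generator.generate_simple("Once upon a time", settings, 200))
```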

@Downtown-Case
Contributor Author

...I'm a moron. I had overwritten an ancient test model in exui, and it turns out the RoPE scale was set to 4.0.

I appreciate the quick response anyway!

For reference, Qwen 2.5 doesn't seem to mind Q4 cache the way Qwen 2 did.
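
And in case anyone hits the same symptom from a script rather than exui: I believe the equivalent knob is scale_pos_emb on the config (going from memory of the 0.2.x API, so treat the name as an assumption). A stale linear RoPE scale is exactly what garbled my output:

```python
from exllamav2 import ExLlamaV2Config

config = ExLlamaV2Config("/path/to/Qwen_Qwen2.5-32B-exl2-4.1bpw")  # placeholder path
config.scale_pos_emb = 1.0  # linear RoPE scale; a leftover 4.0 here reproduces the garbage I saw
```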
