
Unable to compile and export Stable Diffusion 2.1 #723

Open
1 of 4 tasks
pinak-p opened this issue Oct 24, 2024 · 2 comments
Assignees
Labels
bug Something isn't working

Comments

pinak-p commented Oct 24, 2024

System Info

aws-neuronx-runtime-discovery     2.9
libneuronxla                      2.0.4115.0
neuronx-cc                        2.14.227.0+2d4f85be
neuronx-distributed               0.8.0
optimum-neuron                    0.0.24
torch-neuronx                     2.1.2.2.2.0
transformers-neuronx              0.11.351

Who can help?

@JingyaHuang

Information

  • The official example scripts
  • My own modified scripts

Tasks

  • An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
  • My own task or dataset (give details below)

Reproduction (minimal, reproducible, runnable)

optimum-cli export neuron --model stabilityai/stable-diffusion-2-1-base --batch_size 1 --height 512 --width 512 --auto_cast matmul --auto_cast_type bf16 --num_images_per_prompt 1 ./sd_neuron/

Running the above command fails with the following error:

Keyword arguments {'subfolder': '', 'use_auth_token': None, 'trust_remote_code': False} are not expected by StableDiffusionPipeline and will be ignored.
Loading pipeline components...: 100%|██████████| 6/6 [00:00<00:00, 18.05it/s]
Applying optimized attention score computation for stable diffusion.
***** Compiling text_encoder *****
Using Neuron: --auto-cast matmul
Using Neuron: --auto-cast-type bf16
2024-10-24 21:17:54.907338: F external/xla/xla/parse_flags_from_env.cc:224] Unknown flags in XLA_FLAGS: --xla_gpu_simplify_all_fp_conversions=false --xla_gpu_force_compilation_parallelism=8
Aborted (core dumped)
Traceback (most recent call last):
File "/home/ubuntu/aws_neuron_venv_pytorch/bin/optimum-cli", line 8, in <module>
sys.exit(main())
File "/home/ubuntu/aws_neuron_venv_pytorch/lib/python3.10/site-packages/optimum/commands/optimum_cli.py", line 163, in main
service.run()
File "/home/ubuntu/aws_neuron_venv_pytorch/lib/python3.10/site-packages/optimum/commands/export/neuronx.py", line 298, in run
subprocess.run(full_command, shell=True, check=True)
File "/usr/lib/python3.10/subprocess.py", line 526, in run
raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command 'python3 -m optimum.exporters.neuron --model stabilityai/stable-diffusion-2-1-base --batch_size 1 --height 512 --width 512 --auto_cast matmul --auto_cast_type bf16 --num_images_per_prompt 1 ./sd_neuron/' returned non-zero exit status 134.
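The abort happens in `parse_flags_from_env.cc`, which means the `XLA_FLAGS` environment variable contains GPU-only options (`--xla_gpu_simplify_all_fp_conversions`, `--xla_gpu_force_compilation_parallelism`) that this torch-xla build does not recognize. A minimal sketch for inspecting and clearing the variable before re-running the export (assuming a POSIX shell; the flag names are taken from the error message above):

```shell
# Check whether stray GPU-only XLA flags are set in the current shell,
# and clear them so the Neuron export does not abort while parsing them.
if [ -n "${XLA_FLAGS:-}" ]; then
  echo "XLA_FLAGS is set: ${XLA_FLAGS}"
  unset XLA_FLAGS
  echo "XLA_FLAGS cleared"
else
  echo "XLA_FLAGS is not set"
fi
```

Clearing the variable is only a diagnostic step; as the discussion below shows, the underlying fix was upgrading mismatched dependencies.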

Expected behavior

The model should successfully be exported.

@pinak-p pinak-p added the bug Something isn't working label Oct 24, 2024
@JingyaHuang
Collaborator

Hi @pinak-p,

Which neuron-sdk and optimum-neuron versions are you using? I just tested with the following setup and compiled the models without any issue:

aws-neuronx-collectives/unknown,now 2.22.26.0-17a033bc8 amd64 [installed]
aws-neuronx-dkms/unknown,now 2.18.12.0 amd64 [installed]
aws-neuronx-runtime-lib/unknown,now 2.22.14.0-6e27b8d5b amd64 [installed]
aws-neuronx-tools/unknown,now 2.19.0.0 amd64 [installed]
aws-neuronx-runtime-discovery 2.9
diffusers                     0.30.3
libneuronxla                  2.0.4115.0
neuronx-cc                    2.15.128.0+56dc5a86
neuronx-distributed           0.9.0
optimum                       1.22.0
optimum-neuron                0.0.25.dev0
sentence-transformers         3.1.0
torch                         2.1.2
torch-neuronx                 2.1.2.2.3.0
torch-xla                     2.1.4
torchvision                   0.16.2
transformers                  4.43.2
transformers-neuronx          0.12.313
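Since the reporter's environment was behind this setup (e.g. neuronx-cc 2.14 vs. 2.15, optimum-neuron 0.0.24 vs. 0.0.25.dev0), a sketch of the upgrade step, assuming the standard AWS Neuron pip repository from the Neuron SDK setup docs; package names are the ones listed above:

```shell
# Upgrade the Neuron Python packages to match the working setup listed above.
# The extra-index-url is the AWS Neuron pip repository documented in the
# Neuron SDK setup guide; exact resolved versions may differ.
python -m pip install --upgrade \
  --extra-index-url https://pip.repos.neuron.amazonaws.com \
  neuronx-cc torch-neuronx neuronx-distributed transformers-neuronx \
  optimum-neuron diffusers
```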

@JingyaHuang JingyaHuang self-assigned this Oct 24, 2024
@pinak-p
Author

pinak-p commented Oct 24, 2024

Thank you, it succeeds after upgrading the dependencies.
