
Add Stable Diffusion ControlNet support #622

Merged (22 commits) on Jun 28, 2024
Conversation

@JingyaHuang (Collaborator) commented Jun 4, 2024

What does this PR do?

Fixes #575

  • Add ControlNet export support
    • Regular SD:
      optimum-cli export neuron -m runwayml/stable-diffusion-v1-5 --task stable-diffusion --batch_size 1 --height 512 --width 512 --controlnet_ids lllyasviel/sd-controlnet-canny --num_images_per_prompt 1 sd_neuron_controlnet/
    • Tiny test:
      optimum-cli export neuron -m hf-internal-testing/tiny-stable-diffusion-torch --task stable-diffusion --batch_size 1 --height 64 --width 64 --controlnet_ids hf-internal-testing/tiny-controlnet --num_images_per_prompt 1 sd_neuron_tiny_controlnet/
  • Add ControlNet pipeline for stable diffusion
    • Compilation:
from optimum.neuron import NeuronStableDiffusionControlNetPipeline

model_id = "runwayml/stable-diffusion-v1-5"
controlnet_id = "lllyasviel/sd-controlnet-canny"
save_directory = "sd_neuron_controlnet"

# [Neuron] pipeline
input_shapes = {"batch_size": 1, "height": 512, "width": 512, "num_images_per_prompt": 1}
compiler_args = {"auto_cast": "matmul", "auto_cast_type": "bf16"}
pipe = NeuronStableDiffusionControlNetPipeline.from_pretrained(
    model_id,
    controlnet_ids=controlnet_id,
    export=True,
    **input_shapes,
    **compiler_args,
)
pipe.save_pretrained(save_directory)
  • Inference
import cv2
import numpy as np
from diffusers import UniPCMultistepScheduler
from diffusers.utils import load_image, make_image_grid
from PIL import Image

from optimum.neuron import NeuronStableDiffusionControlNetPipeline


# prepare canny image
original_image = load_image(
    "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png"
)

image = np.array(original_image)

low_threshold = 100
high_threshold = 200

image = cv2.Canny(image, low_threshold, high_threshold)
image = image[:, :, None]
image = np.concatenate([image, image, image], axis=2)
canny_image = Image.fromarray(image)

# [Neuron] pipeline
save_directory = "sd_neuron_controlnet"
pipe = NeuronStableDiffusionControlNetPipeline.from_pretrained(save_directory)

pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
output = pipe("the mona lisa", image=canny_image).images[0]
compare = make_image_grid([original_image, canny_image, output], rows=1, cols=3)
compare.save("compare.png")
  • Tests
  • Documentation

Next Steps

  • Add ControlNet pipeline for SDXL.
  • Multiple ControlNets may not be well supported yet; this needs further validation.

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you make sure to update the documentation with your changes?
  • Did you write any new necessary tests?

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@Suprhimp

What happened? I hope it will be merged soon, and the SDXL ControlNet model too :)

@JingyaHuang marked this pull request as ready for review June 25, 2024 09:53
@dacorvo (Collaborator) left a comment


Thanks for the pull request. Looks good to me, but I realize I missed the input flattening from the previous SD pipeline pull requests: could you explain a little what happens and when?
Also, perhaps consider raising an exception when someone tries to call the SDXL ControlNet pipeline, which is present but not implemented.
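The suggested guard could look like this minimal sketch (the class body and message here are illustrative, not the actual optimum-neuron implementation):

```python
class NeuronStableDiffusionXLControlNetPipeline:
    """Placeholder pipeline: the class exists but SDXL ControlNet is not implemented yet."""

    def __call__(self, *args, **kwargs):
        # Fail loudly instead of silently producing undefined behavior.
        raise NotImplementedError(
            "ControlNet support for SDXL is not implemented yet in optimum-neuron. "
            "Please use NeuronStableDiffusionControlNetPipeline with a regular SD model instead."
        )
```

This way a user who accidentally instantiates the unsupported pipeline gets a clear error at call time rather than an obscure tracing failure.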

optimum/neuron/modeling_diffusion.py (resolved)
@dacorvo (Collaborator) commented Jun 26, 2024

If you push your pull request again, consider cherry-picking this commit from my branch to fix the TGI Docker build.

@JingyaHuang (Collaborator, Author)

@dacorvo for the tracing, the compiler only accepts tensors, not lists or tuples of tensors, which can occur in transformers models. So we flatten the inputs during tracing (in fact, we directly create non-nested dummy inputs), and at inference time we need to flatten the inputs produced by the preprocessor (or by another model in the pipeline, as in the case of stable diffusion) before feeding them into the compiled model.
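The flattening step described above can be sketched with a small hypothetical helper (names are illustrative, not the actual optimum-neuron code) that recursively unpacks nested lists/tuples into the flat sequence of arguments the compiled model expects:

```python
def flatten_inputs(inputs):
    """Recursively flatten nested lists/tuples into a flat list of leaves.

    Neuron-traced models accept only flat tensor arguments, so nested
    structures produced by a preprocessor (or by an upstream model in the
    pipeline) must be unpacked before calling the compiled graph.
    """
    flat = []
    for item in inputs:
        if isinstance(item, (list, tuple)):
            flat.extend(flatten_inputs(item))
        else:
            flat.append(item)
    return flat
```

At inference, the pipeline would then call something like `compiled_model(*flatten_inputs(raw_inputs))` so that every leaf tensor is passed as a positional argument.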

@michaelbenayoun (Member) left a comment


Left a few nits, otherwise LGTM from what I can understand.

docs/source/tutorials/stable_diffusion.mdx (outdated, resolved)
docs/source/tutorials/stable_diffusion.mdx (outdated, resolved)
optimum/commands/export/neuronx.py (outdated, resolved)
optimum/neuron/utils/input_generators.py (outdated, resolved)
optimum/neuron/utils/input_generators.py (outdated, resolved)
@JingyaHuang merged commit 2c524df into main Jun 28, 2024
11 of 16 checks passed
@JingyaHuang deleted the add-controlnet-support branch June 28, 2024 11:54

Successfully merging this pull request may close these issues.

Add optimum-neuron support for diffusers.StableDiffusionControlNetPipeline
5 participants