
ORTDiffusionPipelines with IO Binding #2056

Draft
wants to merge 11 commits into base: main
Conversation

@IlyasMoutawwakil (Member) commented Oct 13, 2024

What does this PR do?

This is also my attempt to create a generalizable IO binding framework. The idea is to always have `output_shapes = fn(input_shapes, known_shapes)`, where `known_shapes` is mostly information we find in the config. We then use this information at runtime with a simple symbolic resolver, keeping shape-inference time minimal, to allocate output tensors directly in torch and thus accelerate inference without needing to go through ORT values / CuPy / NumPy.
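As a rough illustration of the `output_shapes = fn(input_shapes, known_shapes)` idea, here is a minimal, hypothetical sketch; none of these names (`resolve_output_shapes`, the shape dictionaries) come from the PR itself:

```python
import torch

def resolve_output_shapes(symbolic_shapes, input_shapes, known_shapes):
    """Hypothetical resolver: turn symbolic dims into concrete ints, looking
    them up first in the runtime input shapes, then in config-derived known shapes."""
    resolved = {}
    for name, dims in symbolic_shapes.items():
        resolved[name] = tuple(
            dim if isinstance(dim, int) else input_shapes.get(dim, known_shapes.get(dim))
            for dim in dims
        )
    return resolved

# Allocate output buffers directly in torch, so IO binding can write into them
# without round-tripping through OrtValue / CuPy / NumPy.
symbolic = {"out_sample": ("batch_size", "out_channels", "height", "width")}
inputs = {"batch_size": 2, "height": 64, "width": 64}  # known at call time
known = {"out_channels": 4}  # e.g. read from the UNet config
device = "cuda" if torch.cuda.is_available() else "cpu"
outputs = {
    name: torch.empty(shape, device=device)
    for name, shape in resolve_output_shapes(symbolic, inputs, known).items()
}
```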

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you make sure to update the documentation with your changes?
  • Did you write any new necessary tests?

Who can review?

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

Comment on lines +330 to 331
```python
if self.use_io_binding is False and provider == "CUDAExecutionProvider":
    self.use_io_binding = True
```


This overrides the user's `use_io_binding` choice. What if the user wants to run a performance test with IO binding disabled?

I suggest the following (see the sketch after this list):
  • if `use_io_binding` is None, change it to True;
  • if `use_io_binding` is False and the provider is CUDA, log a warning instead of overriding it.
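A minimal sketch of the suggested resolution logic, assuming `use_io_binding` can be True, False, or None; the helper name is made up for illustration:

```python
import logging

logger = logging.getLogger(__name__)

def resolve_use_io_binding(use_io_binding, provider):
    """Hypothetical helper: resolve the user's use_io_binding choice without overriding it."""
    if use_io_binding is None:
        # no explicit choice: default to IO binding on CUDA, where it avoids host/device copies
        return provider == "CUDAExecutionProvider"
    if not use_io_binding and provider == "CUDAExecutionProvider":
        # respect the explicit opt-out, but warn about the likely performance impact
        logger.warning(
            "use_io_binding=False with CUDAExecutionProvider may slow down inference "
            "due to extra host/device data transfers."
        )
    return use_io_binding
```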

@IlyasMoutawwakil (Member, Author) commented Oct 25, 2024


This is already the default behavior in ORTModels. I kept it for consistency (I'm not a fan of it, tbh) so as not to break things for existing users.

Comment on lines +211 to +224
```python
def providers(self) -> Tuple[str]:
    return self._validate_same_attribute_value_across_components("providers")

@property
def provider(self) -> str:
    return self._validate_same_attribute_value_across_components("provider")

@property
def providers_options(self) -> Dict[str, Dict[str, Any]]:
    return self._validate_same_attribute_value_across_components("providers_options")

@property
def provider_options(self) -> Dict[str, Any]:
    return self._validate_same_attribute_value_across_components("provider_options")
```


It is not necessary to validate that these values are the same across components.

I think it is feasible to use a different provider, and different provider options, per component. For example, we could run the text_encoder with the CPU provider and the unet with the CUDA provider, or enable CUDA graphs in one component's provider options but not in another's.

Maybe add some comments for now and loosen the constraint later.
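To make this concrete, here is what per-component configuration could look like; note that a mapping-valued `provider`/`provider_options` argument does not exist in optimum's current API and is purely an illustration of the loosened constraint:

```python
# Hypothetical per-component provider mapping (not part of the current optimum API).
pipeline = ORTDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    provider={
        "text_encoder": "CPUExecutionProvider",  # small model, fine on CPU
        "unet": "CUDAExecutionProvider",         # the heavy component goes to GPU
        "vae_decoder": "CUDAExecutionProvider",
    },
    provider_options={
        # e.g. enable CUDA graphs only for the unet
        "unet": {"enable_cuda_graph": "1"},
    },
)
```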

@IlyasMoutawwakil (Member, Author) commented Oct 25, 2024


There's a comment in the `_validate_same_attribute_value_across_components` definition explaining the reasoning behind these checks, which is exactly what you said: pipeline attributes can be accessed, but they only make sense when they're consistent across components. For now this is my proposal for multi-model-part pipelines. An alternative would be to return the value of the main component (unet/transformer), or to not support these attributes at all on the main pipeline (replacing them with a `provider_map`, for example, the way `device_map` complements `device`).
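A minimal sketch of what such a validation helper might look like, inferred from its name and this discussion rather than taken from the PR, and assuming the pipeline exposes a `components` mapping as diffusers pipelines do:

```python
def _validate_same_attribute_value_across_components(self, attribute: str):
    """Return the shared value of `attribute` across all ONNX components,
    raising if the components disagree (sketch, not the PR's implementation)."""
    values = {name: getattr(component, attribute) for name, component in self.components.items()}
    unique = {repr(v) for v in values.values()}  # repr so dicts/lists are comparable
    if len(unique) > 1:
        raise ValueError(
            f"Attribute '{attribute}' is inconsistent across pipeline components: {values}. "
            "Access it on an individual component instead."
        )
    return next(iter(values.values()))
```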
