diff --git a/docs/source/onnxruntime/usage_guides/models.mdx b/docs/source/onnxruntime/usage_guides/models.mdx
index f34bd23ac2..1292e755c0 100644
--- a/docs/source/onnxruntime/usage_guides/models.mdx
+++ b/docs/source/onnxruntime/usage_guides/models.mdx
@@ -9,7 +9,7 @@ to run accelerated inference without rewriting your APIs.
 
 ### Transformers models
 
-Once your model was [exported to the ONNX format](https://huggingface.co/docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model), you can load it by replacing the `AutoModelForXxx` class with the corresponding `ORTModelForXxx`.
+Once your model has been [exported to the ONNX format](https://huggingface.co/docs/optimum/exporters/onnx/usage_guides/export_a_model), you can load it by replacing the `AutoModelForXxx` class with the corresponding `ORTModelForXxx`.
 
 ```diff
   from transformers import AutoTokenizer, pipeline
@@ -24,12 +24,12 @@ Once your model was [exported to the ONNX format](https://huggingface.co/docs/op
   result = pipe("He never went out without a book under his arm")
 ```
 
-More information for all the supported `ORTModelForXxx` in our [documentation](https://huggingface.co/docs/optimum/main/en/onnxruntime/package_reference/modeling_ort)
+More information on all the supported `ORTModelForXxx` classes can be found in our [documentation](https://huggingface.co/docs/optimum/onnxruntime/package_reference/modeling_ort).
 
 
 ### Diffusers models
 
-Once your model was [exported to the ONNX format](https://huggingface.co/docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model), you can load it by replacing the `DiffusionPipeline` class with the corresponding `ORTDiffusionPipeline`.
+Once your model has been [exported to the ONNX format](https://huggingface.co/docs/optimum/exporters/onnx/usage_guides/export_a_model), you can load it by replacing the `DiffusionPipeline` class with the corresponding `ORTDiffusionPipeline`.
 
 
 ```diff
@@ -45,7 +45,7 @@ Once your model was [exported to the ONNX format](https://huggingface.co/docs/op
 
 ## Converting your model to ONNX on-the-fly
 
-In case your model wasn't already [converted to ONNX](https://huggingface.co/docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model), [`~optimum.onnxruntime.ORTModel`] includes a method to convert your model to ONNX on-the-fly.
+If your model wasn't already [converted to ONNX](https://huggingface.co/docs/optimum/exporters/onnx/usage_guides/export_a_model), [`~optimum.onnxruntime.ORTModel`] includes a method to convert your model to ONNX on-the-fly.
 Simply pass `export=True` to the [`~optimum.onnxruntime.ORTModel.from_pretrained`] method, and your model will be loaded and converted to ONNX on-the-fly:
 
 ```python
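
For reviewers, a minimal self-contained sketch of the Transformers swap that the first hunk documents. The `optimum/gpt2` checkpoint and the `text-generation` task are illustrative assumptions, not taken from the diff; any task with a matching `ORTModelForXxx` class follows the same pattern:

```python
from transformers import AutoTokenizer, pipeline
from optimum.onnxruntime import ORTModelForCausalLM

# Illustrative checkpoint that already ships ONNX weights (assumed, not from the diff)
model = ORTModelForCausalLM.from_pretrained("optimum/gpt2")
tokenizer = AutoTokenizer.from_pretrained("optimum/gpt2")

# ORTModelForXxx instances plug into transformers pipelines like any AutoModel
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
result = pipe("He never went out without a book under his arm")
```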
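
Likewise, a sketch of the Diffusers swap from the second hunk, assuming a Stable Diffusion checkpoint (`runwayml/stable-diffusion-v1-5` is only an example id, and `export=True` is assumed to behave as it does for the other ORT classes):

```python
from optimum.onnxruntime import ORTDiffusionPipeline

# Illustrative model id (assumed); export=True converts the checkpoint to ONNX on load
pipe = ORTDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", export=True)

prompt = "sailing ship in storm by Leonardo da Vinci"
image = pipe(prompt).images[0]  # diffusers-style output: a list of PIL images
```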
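
And for the on-the-fly conversion that the last hunk covers, a small sketch of the `export=True` path (the checkpoint name is assumed for illustration); calling `save_pretrained` afterwards keeps the exported ONNX so the conversion doesn't rerun on the next load:

```python
from optimum.onnxruntime import ORTModelForSequenceClassification

# export=True loads the PyTorch weights and converts them to ONNX on the fly
model = ORTModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased-finetuned-sst-2-english",  # assumed example checkpoint
    export=True,
)

# Save the exported model so subsequent loads skip the conversion
model.save_pretrained("onnx/")
```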