Commit b180019 — add sentence-transformers and timm example to documentation
echarlaix committed Oct 21, 2024 (1 parent: 59d6f7f)
Showing 1 changed file with 56 additions and 0 deletions: docs/source/onnxruntime/usage_guides/models.mdx
@@ -43,6 +43,62 @@
Once your model was [exported to the ONNX format](https://huggingface.co/docs/op
image = pipeline(prompt).images[0]
```


### Sentence Transformers models

Once your model has been [exported to the ONNX format](https://huggingface.co/docs/optimum/exporters/onnx/usage_guides/export_a_model), you can load it by replacing the `AutoModel` class with the corresponding `ORTModelForFeatureExtraction` class.

```diff
from transformers import AutoTokenizer
- from transformers import AutoModel
+ from optimum.onnxruntime import ORTModelForFeatureExtraction

model_id = "sentence-transformers/all-MiniLM-L6-v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
- model = AutoModel.from_pretrained(model_id)
+ model = ORTModelForFeatureExtraction.from_pretrained(model_id, export=True)
inputs = tokenizer("This is an example sentence", return_tensors="pt")
outputs = model(**inputs)
```
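`ORTModelForFeatureExtraction` returns per-token hidden states (`outputs.last_hidden_state`), not pooled sentence embeddings; sentence-transformers models such as `all-MiniLM-L6-v2` typically apply attention-mask-aware mean pooling on top. A minimal NumPy sketch of that pooling step on dummy tensors (the values are illustrative, not actual model output):

```python
import numpy as np

def mean_pool(token_embeddings, attention_mask):
    """Average token embeddings over the sequence, ignoring padding positions."""
    mask = attention_mask[..., None].astype(float)   # (batch, seq, 1)
    summed = (token_embeddings * mask).sum(axis=1)   # (batch, hidden)
    counts = np.clip(mask.sum(axis=1), 1e-9, None)   # avoid division by zero
    return summed / counts

# One sentence, 4 token slots, hidden size 3; the last slot is padding
emb = np.array([[[1.0, 0.0, 0.0],
                 [0.0, 1.0, 0.0],
                 [0.0, 0.0, 1.0],
                 [9.0, 9.0, 9.0]]])
mask = np.array([[1, 1, 1, 0]])
print(mean_pool(emb, mask))  # padding row is excluded; each component is 1/3
```

With real model outputs you would pass `outputs.last_hidden_state` and the tokenizer's `attention_mask` (converted to arrays) instead of the dummy tensors.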

You can also load your ONNX model directly using the `SentenceTransformer` class; make sure `sentence-transformers>=3.2.1` is installed. If the model hasn't already been converted to ONNX, it will be converted automatically on the fly.

```diff
from sentence_transformers import SentenceTransformer

model_id = "sentence-transformers/all-MiniLM-L6-v2"
- model = SentenceTransformer(model_id)
+ model = SentenceTransformer(model_id, backend="onnx")

sentences = ["This is an example sentence", "Each sentence is converted"]
embeddings = model.encode(sentences)
```
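The embeddings returned by `encode` are typically compared with cosine similarity (recent `sentence-transformers` versions also expose a `similarity` helper on the model). A minimal sketch of the underlying computation on toy vectors:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # identical direction -> 1.0
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # orthogonal -> 0.0
```

In practice you would apply this to pairs of rows from the `embeddings` array produced above.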


### Timm models

Once your model has been [exported to the ONNX format](https://huggingface.co/docs/optimum/exporters/onnx/usage_guides/export_a_model), you can load it by replacing `create_model` with the corresponding `ORTModelForImageClassification` class.


```diff
import requests
from PIL import Image
- from timm import create_model
from timm.data import resolve_data_config, create_transform
+ from optimum.onnxruntime import ORTModelForImageClassification

model_id = "timm/mobilenetv3_large_100.ra_in1k"
- model = create_model(model_id, pretrained=True)
+ model = ORTModelForImageClassification.from_pretrained(model_id, export=True)
transform = create_transform(**resolve_data_config(model.config.pretrained_cfg, model=model))
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png"
image = Image.open(requests.get(url, stream=True).raw)
inputs = transform(image).unsqueeze(0)
outputs = model(inputs)
```
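The `outputs` above hold raw class logits (exposed as `outputs.logits`); turning them into predictions usually means applying a softmax and taking the top-k classes. A hedged sketch on dummy logits (the `top_k` helper is ours, not part of optimum):

```python
import numpy as np

def top_k(logits, k=5):
    """Softmax the logits and return the k most probable (index, probability) pairs."""
    logits = np.asarray(logits, dtype=float)
    exps = np.exp(logits - logits.max())  # shift for numerical stability
    probs = exps / exps.sum()
    idx = np.argsort(probs)[::-1][:k]
    return [(int(i), float(probs[i])) for i in idx]

print(top_k([2.0, 1.0, 0.1], k=2))  # class 0 comes first with the highest probability
```

With the real model you would pass the logits for one image, e.g. `top_k(outputs.logits[0].detach().numpy())`, and map the indices to class names.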



## Converting your model to ONNX on-the-fly

In case your model wasn't already [converted to ONNX](https://huggingface.co/docs/optimum/exporters/onnx/usage_guides/export_a_model), [`~optimum.onnxruntime.ORTModel`] includes a method to convert your model to ONNX on-the-fly.