
loading from local, inference on cpu :- Why does my inference take 10 minutes #160

vijayendra-g opened this issue Jul 18, 2024 · 0 comments

@vijayendra-g
I followed the exact steps from
https://github.com/urchade/GLiNER/blob/main/examples/load_local_model.ipynb
and it takes close to 10 minutes to run.

I read in other threads that the running time should be seconds when the model is loaded from local files, which I believe is exactly what I am doing here:
loaded_model = GLiNER.from_pretrained("gliner_Med", load_tokenizer=True, local_files_only=True)

Can someone point out why it still takes 10 minutes?

import torch
from gliner import GLiNER

# Download the model once, save it locally, then reload it from disk only.
model = GLiNER.from_pretrained("gliner-community/gliner_medium-v2.5")
model.save_pretrained("gliner_Med")
loaded_model = GLiNER.from_pretrained("gliner_Med", load_tokenizer=True, local_files_only=True)

text = """
Libretto by Marius Petipa, based on the 1822 novella ``Trilby, ou Le Lutin d'Argail`` by Charles Nodier, first presented by the Ballet of the Moscow Imperial Bolshoi Theatre on January 25/February 6 (Julian/Gregorian calendar dates), 1870, in Moscow with Polina Karpakova as Trilby and Ludiia Geiten as Miranda and restaged by Petipa for the Imperial Ballet at the Imperial Bolshoi Kamenny Theatre on January 17–29, 1871 in St. Petersburg with Adèle Grantzow as Trilby and Lev Ivanov as Count Leopold."""

labels = ["person", "book", "location", "date", "actor", "character"]

entities = loaded_model.predict_entities(text, labels, threshold=0.4)

for entity in entities:
    print(entity["text"], "=>", entity["label"])
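One way to narrow this down is to time each step separately, since "loading from local" only removes the download, not the CPU inference cost. Below is a minimal stand-alone sketch of a `timed` helper (the helper and its labels are illustrative, not part of gliner); in the script above you would wrap the `from_pretrained` and `predict_entities` calls with it to see which one dominates the 10 minutes.

```python
import time
from contextlib import contextmanager

@contextmanager
def timed(label, results):
    """Record the wall-clock time of the enclosed block under `label`."""
    start = time.perf_counter()
    try:
        yield
    finally:
        results[label] = time.perf_counter() - start

results = {}
# In the issue's script you would wrap the real calls, e.g.:
#   with timed("load", results):
#       loaded_model = GLiNER.from_pretrained("gliner_Med", ...)
#   with timed("predict", results):
#       entities = loaded_model.predict_entities(text, labels, threshold=0.4)
with timed("load", results):
    sum(range(10_000))   # placeholder for model loading
with timed("predict", results):
    sum(range(10_000))   # placeholder for inference

print(sorted(results))   # -> ['load', 'predict']
```

If "load" dominates, the local files are the problem; if "predict" dominates, the slowdown is plain CPU inference and loading locally will not help.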


    