Add ollama #687
Conversation
Thanks for the PR! Unfortunately, I don't think this would work as ollama re-hosts the models themselves. E.g. an example url generated by your PR is https://ollama.com/library/meta-llama/Meta-Llama-3-8B which is a 404 on ollama.
Right so the deeplink doesn’t work but the other fields should be ok.
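For illustration, a minimal sketch (not the PR's actual code) of the kind of deeplink under discussion: it splices the Hub's `model.id` into an ollama.com library URL, but since Ollama re-hosts models under its own names, the resulting page does not exist.

```ts
// Hypothetical sketch of the deeplink approach discussed above.
// Ollama's library uses its own model names, so splicing the Hub
// repo id into the URL yields a 404.
const model = { id: "meta-llama/Meta-Llama-3-8B" };

const deeplink = new URL(`https://ollama.com/library/${model.id}`);
console.log(deeplink.href);
// https://ollama.com/library/meta-llama/Meta-Llama-3-8B  -> 404 on ollama.com
```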
The only goal of this button is to work with deeplinks. The button serves no other purpose.
Hi @boxabirds - Just a heads-up we are thinking about this a bit more in our current sprint. We might have some exciting updates soon! 🤗
I think we could revive this PR but switch to a copy-pasteable command (like llama.cpp) rather than a deeplink.
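A hedged sketch of what that could look like: a snippet-style helper returning a shell command instead of a URL. The helper name and the repo in the example are illustrative, not the PR's final code; the `hf.co/<repo>:<quant>` form is how Ollama pulls GGUF repos from the Hub.

```ts
// Hypothetical helper: return a copy-pasteable command, as the
// llama.cpp entries do, instead of a deeplink.
const ollamaSnippet = (model: { id: string }, quantLabel?: string): string =>
	`ollama run hf.co/${model.id}${quantLabel ? `:${quantLabel}` : ""}`;

console.log(ollamaSnippet({ id: "bartowski/Llama-3.2-1B-Instruct-GGUF" }, "Q4_K_M"));
// -> ollama run hf.co/bartowski/Llama-3.2-1B-Instruct-GGUF:Q4_K_M
```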
@boxabirds ok for you if @mishig25 from our team takes over your PR and completes it?
Very popular local LLM inference server.
Ready to be reviewed.
```diff
@@ -389,6 +399,13 @@ export const LOCAL_APPS = {
 		displayOnModelPage: (model) => model.library_name === "diffusers" && model.pipeline_tag === "text-to-image",
 		deeplink: (model) => new URL(`https://models.invoke.ai/huggingface/${model.id}`),
 	},
+	ollama: {
+		prettyLabel: "Ollama",
+		docsUrl: "https://ollama.com",
```
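The diff is truncated after `docsUrl`. For orientation, a hedged sketch of how the rest of the entry might look, reusing the field shapes visible in the surrounding diff; the GGUF check and the command format are assumptions, not the PR's final code.

```ts
// Hedged sketch only — the condition and snippet format are assumptions.
type ModelData = { id: string; library_name?: string };

const ollamaEntry = {
	prettyLabel: "Ollama",
	docsUrl: "https://ollama.com",
	// Assumption: only show the button for GGUF models, which Ollama can run.
	displayOnModelPage: (model: ModelData) => model.library_name === "gguf",
	// Assumption: a copy-pasteable command rather than a deeplink,
	// per the discussion above.
	snippet: (model: ModelData) => `ollama run hf.co/${model.id}`,
};
```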
We should probably have dedicated docs on the Hub for this, but we can make a follow-up PR later on.
Moving this function from moon to hf.js/gguf as I need it for #687

```ts
const quantLabel = parseGGUFQuantLabel("abc-Q4.gguf")
console.log(quantLabel) // Q4
```

### Order of operations

- [ ] merge #967 (this PR) & deploy `@hf.js/gguf`
- [ ] merge #687 & deploy `@hf.js/tasks`
- [ ] merge moon

---------

Co-authored-by: Xuan Son Nguyen <[email protected]>
Co-authored-by: Julien Chaumond <[email protected]>
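For intuition, a minimal re-implementation sketch of such a parser. The regex below is an assumption; the real `parseGGUFQuantLabel` in the gguf package may use a stricter list of known quant types.

```ts
// Hypothetical re-implementation for illustration only.
function parseGGUFQuantLabel(filename: string): string | undefined {
	// Capture a quant-looking token (e.g. Q4, Q4_K_M, IQ2_XS) right
	// before the .gguf extension.
	const match = filename.match(/(i?q\d+(?:_[a-z\d]+)*)\.gguf$/i);
	return match ? match[1].toUpperCase() : undefined;
}

console.log(parseGGUFQuantLabel("abc-Q4.gguf")); // Q4
console.log(parseGGUFQuantLabel("model-Q4_K_M.gguf")); // Q4_K_M
```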
@coyotte508 could you help me with correctly importing hfjs/gguf inside hfjs/tasks?
However, the tests are failing.
You can add it (but I'm not thrilled to add a dep to tasks). Also #887 would have fixed it, maybe (no need for "prepare" anymore).
I'm guessing we could merge tasks and gguf...
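Either way, the usage under discussion boils down to a cross-package import like the sketch below, assuming the gguf package is published as `@huggingface/gguf` and wired up as a dependency of the tasks package (or the two packages are merged — exactly the open question here).

```ts
// Sketch of the import in question; how the dependency is wired up
// (workspace dep vs. merged package) is the open question in this thread.
import { parseGGUFQuantLabel } from "@huggingface/gguf";

const quant = parseGGUFQuantLabel("model-Q4_K_M.gguf"); // "Q4_K_M"
console.log(quant);
```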
As suggested in #687 (comment)