Multiple GPUs on different nodes #427

I found a discussion in the vLLM community about deploying a vLLM model across multiple nodes in Kubernetes: vllm-project/vllm#1363
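For context, vLLM can shard a model across nodes when its tensor parallel degree exceeds a single node's GPU count, using Ray as the distributed executor. A minimal sketch of the Python side, assuming a Ray cluster already spans the GPU nodes (e.g. `ray start --head` on one node, `ray start --address=<head-ip>:6379` on the rest); the model name and parallel degree here are illustrative, not from the issue:

```python
# Minimal sketch: run a large model with vLLM over a Ray cluster.
# Assumes this process joins an existing multi-node Ray cluster.
from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Llama-2-70b-hf",  # hypothetical model choice
    tensor_parallel_size=8,             # e.g. 2 nodes x 4 GPUs each;
                                        # a degree above one node's GPU
                                        # count requires the Ray backend
)

sampling = SamplingParams(temperature=0.8, max_tokens=128)
for out in llm.generate(["What is distributed inference?"], sampling):
    print(out.outputs[0].text)
```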
We may try: use Ray as a distributed runtime to support a GPU resource pool across nodes, so we can run big LLM models using multiple GPUs.
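To make the "GPU resource pool" idea concrete, here is a minimal sketch of how Ray presents GPUs from several nodes as one schedulable pool; the cluster address is hypothetical and `gpu_task` is just an illustrative probe:

```python
# Minimal sketch: GPUs from every node in a Ray cluster appear
# as one pooled resource. Address and task are illustrative.
import ray
import socket

ray.init(address="ray://head-node:10001")  # hypothetical head address

# All GPUs in the cluster, regardless of which node they live on:
print(ray.cluster_resources())  # e.g. {'GPU': 8.0, 'CPU': 64.0, ...}

@ray.remote(num_gpus=1)
def gpu_task() -> str:
    # Ray assigns one GPU to this task; report which machine
    # the task actually landed on.
    return socket.gethostname()

# Ray places each task on any node with a free GPU, so these
# may run on different machines.
print(ray.get([gpu_task.remote() for _ in range(8)]))
```

Once the pool exists, vLLM (or any Ray-aware runtime) can request multiple GPUs without caring about node boundaries.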
nkwangleiGIT added commits to nkwangleiGIT/arcadia and to this repository that referenced this issue on Jan 5, 2024:

feat: #427 support to run models using ray cluster
Fixed by #500