I am successfully using the build against the newer Ubuntu 24.04 and the latest CUDA 12.6:

```dockerfile
ARG UBUNTU_VERSION=24.04
ARG CUDA_VERSION=12.6.0
```
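For reference, those `ARG` values can be overridden at build time without editing the Dockerfile. This is a sketch, assuming the image is built from a CUDA Dockerfile in the whisper.cpp checkout (the Dockerfile path and image tag here are assumptions; adjust them to your setup):

```shell
# Build the CUDA image with newer base versions passed as build args.
# NOTE: the Dockerfile path below is an assumption; point -f at your actual CUDA Dockerfile.
docker build \
  --build-arg UBUNTU_VERSION=24.04 \
  --build-arg CUDA_VERSION=12.6.0 \
  -t whisper-corncuda:latest \
  -f Dockerfile.cuda .
```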
It seems to work fine, with the exception that I had to set `LD_LIBRARY_PATH=''` due to #2032 (comment).
Here's my compose definition for completeness:
```yaml
whisper_cpp:
  # image: ghcr.io/ggerganov/whisper.cpp:main-cuda
  image: whisper-corncuda:latest
  container_name: whisper-cpp
  ports:
    - "7777:7777"
  volumes:
    - /home/user/whisper/:/models
  restart: unless-stopped
  environment:
    LD_LIBRARY_PATH: ""
  command: "'./server -l en -m /models/ggml-large-v3-q5_0.bin --host 0.0.0.0 --port 7777'"
  deploy:
    resources:
      reservations:
        devices:
          - driver: nvidia
            count: all
            capabilities: [gpu]
```
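With that compose definition in place, the service can be brought up and smoke-tested roughly like this. This is a sketch: the service name matches the compose file above, but the `sample.wav` file and the `/inference` endpoint parameters are assumptions based on the whisper.cpp server example, so verify them against your server version:

```shell
# Start the whisper.cpp server container in the background.
docker compose up -d whisper_cpp

# Send an audio file for transcription.
# NOTE: sample.wav is a placeholder; the form fields are assumptions
# based on the whisper.cpp server example.
curl http://localhost:7777/inference \
  -F file=@sample.wav \
  -F response_format=json
```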