
How to use it offline? #313

Open · dredre-42 opened this issue Sep 17, 2024 · 2 comments

@dredre-42

I'm a beginner and I can't use it offline. I've followed the whole process but it isn't working. Can someone explain to me how to do it, please?

I downloaded Whisper yesterday, but I needed the dictation feature, so I downloaded the Whispering desktop app today. When I open the software, there are only three options in the settings:

  • OpenAI
  • Groq
  • faster-whisper-server

Since I had managed to make Whisper use the GPU, I copied and pasted this GPU command into the Docker terminal:

docker run -e ALLOW_ORIGINS='["https://whispering.bradenwong.com"]' --gpus=all --publish 8000:8000 --volume ~/.cache/huggingface:/root/.cache/huggingface fedirz/faster-whisper-server:latest-cuda

Here is what is displayed.

Unable to find image 'fedirz/faster-whisper-server:latest-cuda' locally
latest-cuda: Pulling from fedirz/faster-whisper-server
e647d7a9d601: Download complete
545c22d07ad0: Download complete
8a5d5207a265: Download complete
0a05696cb0bc: Download complete
ec99efa2e874: Download complete
52c9231e5de8: Download complete
aece8493d397: Download complete
8c66a4da03d6: Download complete
44a795514b94: Download complete
84a7aba84081: Download complete
21f78659429e: Download complete
9c479c80976d: Download complete
f73ff9d3e11f: Download complete
eb68083c7fb9: Download complete
59f997fe33fa: Download complete
8aa4f26a8c03: Download complete
Digest: sha256:0b1dbf77c1a85fbc7af2ac92e443faed4782e2f15220b1e9fd18aaf1be378077
Status: Downloaded newer image for fedirz/faster-whisper-server:latest-cuda

==========
== CUDA ==
==========

CUDA Version 12.2.2

Container image Copyright (c) 2016-2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.

This container image and its contents are governed by the NVIDIA Deep Learning Container License.        
By pulling and using the container, you accept the terms and conditions of this license:
https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license

A copy of this license is made available in this container at /NGC-DL-CONTAINER-LICENSE for your convenience.

Traceback (most recent call last):
  File "/root/faster-whisper-server/.venv/bin/uvicorn", line 8, in <module>
    sys.exit(main())
             ^^^^^^
  File "/root/faster-whisper-server/.venv/lib/python3.12/site-packages/click/core.py", line 1157, in __call__
    return self.main(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/faster-whisper-server/.venv/lib/python3.12/site-packages/click/core.py", line 1078, in main
    rv = self.invoke(ctx)
         ^^^^^^^^^^^^^^^^
  File "/root/faster-whisper-server/.venv/lib/python3.12/site-packages/click/core.py", line 1434, in invoke
    return ctx.invoke(self.callback, **ctx.params)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/faster-whisper-server/.venv/lib/python3.12/site-packages/click/core.py", line 783, in invoke
    return __callback(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/faster-whisper-server/.venv/lib/python3.12/site-packages/uvicorn/main.py", line 410, in main
    run(
  File "/root/faster-whisper-server/.venv/lib/python3.12/site-packages/uvicorn/main.py", line 577, in run
    server.run()
  File "/root/faster-whisper-server/.venv/lib/python3.12/site-packages/uvicorn/server.py", line 65, in run
    return asyncio.run(self.serve(sockets=sockets))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/asyncio/runners.py", line 194, in run
    return runner.run(main)
           ^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/asyncio/runners.py", line 118, in run
    return self._loop.run_until_complete(task)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/asyncio/base_events.py", line 687, in run_until_complete
    return future.result()
           ^^^^^^^^^^^^^^^
  File "/root/faster-whisper-server/.venv/lib/python3.12/site-packages/uvicorn/server.py", line 69, in serve
    await self._serve(sockets)
  File "/root/faster-whisper-server/.venv/lib/python3.12/site-packages/uvicorn/server.py", line 76, in _serve
    config.load()
  File "/root/faster-whisper-server/.venv/lib/python3.12/site-packages/uvicorn/config.py", line 434, in load
    self.loaded_app = import_from_string(self.app)
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/faster-whisper-server/.venv/lib/python3.12/site-packages/uvicorn/importer.py", line 19, in import_from_string
    module = importlib.import_module(module_str)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/importlib/__init__.py", line 90, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "<frozen importlib._bootstrap>", line 1387, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1360, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1331, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 935, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 995, in exec_module
  File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed
  File "/root/faster-whisper-server/faster_whisper_server/main.py", line 31, in <module>
    from faster_whisper_server import hf_utils
  File "/root/faster-whisper-server/faster_whisper_server/hf_utils.py", line 7, in <module>
    from faster_whisper_server.logger import logger
  File "/root/faster-whisper-server/faster_whisper_server/logger.py", line 3, in <module>
    from faster_whisper_server.config import config
  File "/root/faster-whisper-server/faster_whisper_server/config.py", line 243, in <module>
    config = Config()
             ^^^^^^^^
  File "/root/faster-whisper-server/.venv/lib/python3.12/site-packages/pydantic_settings/main.py", line 144, in __init__
    super().__init__(
  File "/root/faster-whisper-server/.venv/lib/python3.12/site-packages/pydantic/main.py", line 211, in __init__
    validated_self = self.__pydantic_validator__.validate_python(data, self_instance=self)
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pydantic_core._pydantic_core.ValidationError: 1 validation error for Config
allow_origins
  Input should be a valid list [type=list_type, input_value='[https://whispering.bradenwong.com]', input_type=str]
    For further information visit https://errors.pydantic.dev/2.9/v/list_type
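The key line is the validation error at the bottom: the value that reached pydantic was the string `[https://whispering.bradenwong.com]`, meaning the inner double quotes were stripped by the shell before Docker saw them, and pydantic-settings expects a JSON array for a list field. A minimal sketch of the difference, using plain `json` rather than the server's actual config class:

```python
import json

# What the server actually received: the shell dropped the inner quotes,
# so this is not valid JSON and cannot become a list of origins.
broken = "[https://whispering.bradenwong.com]"
try:
    json.loads(broken)
except json.JSONDecodeError:
    print("not a valid JSON list")

# What ALLOW_ORIGINS needs to contain: a JSON array of strings.
valid = '["https://whispering.bradenwong.com"]'
print(json.loads(valid))  # ['https://whispering.bradenwong.com']
```

So the container itself is fine; only the quoting of the `ALLOW_ORIGINS` value on the command line needs fixing.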

It is complete gibberish to me. But I notice that it downloads Whisper again. I would have liked to reuse the one I already downloaded. Is it possible to use the Whisper I downloaded yesterday so I don't have it twice? What is the procedure to link Whispering to Whisper?

And so Whispering doesn't work.
There are also these options that I don't understand. What are they for?

faster-whisper-server URL: http://localhost:8000
faster-whisper-server Model: Systran/faster-whisper-medium.en
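For what it's worth, those two settings are how Whispering finds the local server: the URL is where the Docker container listens, and the model name is the faster-whisper checkpoint the server should load. A sketch of the OpenAI-compatible endpoint a client would call from them; this is my assumption about how the pieces fit, not the actual Whispering source:

```python
# Hypothetical sketch: how the two Whispering settings combine into a request
# URL. faster-whisper-server exposes an OpenAI-compatible transcription API.
base_url = "http://localhost:8000"          # "faster-whisper-server URL" setting
model = "Systran/faster-whisper-medium.en"  # "faster-whisper-server Model" setting

# The audio file plus this model name would go in a multipart POST to:
endpoint = f"{base_url}/v1/audio/transcriptions"
print(endpoint)  # http://localhost:8000/v1/audio/transcriptions
```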

How do I make Whispering work offline?

@yoloswagazn

Yeah, I'm also having the same issue.

@homelab-00

homelab-00 commented Nov 2, 2024

Try running:
docker run -e ALLOW_ORIGINS="[\"https://whispering.bradenwong.com\"]" --gpus=all --publish 8000:8000 --volume ~/.cache/huggingface:/root/.cache/huggingface fedirz/faster-whisper-server:latest-cuda

This created the Docker container fine for me. It runs fine, it is accessible at http://localhost:8000/, and transcription does work.

edit: Also remember that this command creates the Docker container, so you only run it once. To run the container again, either use the Docker Desktop app or run docker start -a -i <name_of_your_faster-whisper-server_container> in cmd.
You can run that cmd command anywhere, since installing Docker Desktop adds the docker command to PATH.
For reference, my system is Windows 10.
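The only difference from the original command is the quoting: Windows cmd does not treat single quotes as quoting characters, so the inner double quotes around the URL get stripped before the value reaches the container, while escaping them keeps the value as valid JSON. In a POSIX shell both forms yield the same string; a quick sketch using plain variables instead of docker:

```shell
# Single-quoted form (works in bash/zsh; the inner double quotes survive):
single='["https://whispering.bradenwong.com"]'
# Escaped form from the command above (also works in Windows cmd):
escaped="[\"https://whispering.bradenwong.com\"]"

echo "$single"
echo "$escaped"
# Both print: ["https://whispering.bradenwong.com"]
test "$single" = "$escaped" && echo "identical"
```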


However, I can't get the Whispering desktop or web app to work with the faster-whisper-server that's running in the Docker container.
I'm getting the error: An error occurred while sending the request to the transcription server. Please try again.
