OpenAI new model: large v3 turbo - 8x faster! #2438
Replies: 3 comments 1 reply
-
+1 this. I think it now runs a little differently, which will complicate things. But a new fast model in whisper.cpp could be awesome!
-
I don't see much improvement compared to ggml-medium. Am I missing something? I also compared it to the tiny model, and it's much slower than that.
-
If it really is a whisper-large-v3 model variant, then it is smoking fast compared to regular whisper-large-v3. At a quick glance it seems to be as accurate, possibly more so.
-
openai/whisper#2361
How can we convert the .pt file to ggml and quantize it?
I tried the script from the models folder; it converted and quantized, but the resulting model failed to load.
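For reference, the usual whisper.cpp route is the `models/convert-pt-to-ggml.py` script followed by the `quantize` tool. A minimal sketch, assuming a whisper.cpp checkout with the `quantize` binary built; the checkpoint path, OpenAI whisper repo path, and output directory below are placeholders, and a brand-new checkpoint like large-v3-turbo may need an updated conversion script before it loads:

```shell
# Convert an OpenAI Whisper .pt checkpoint to ggml format.
# Usage: convert-pt-to-ggml.py <checkpoint.pt> <path-to-openai-whisper-repo> <output-dir>
python models/convert-pt-to-ggml.py ~/.cache/whisper/large-v3-turbo.pt /path/to/openai-whisper ./models

# Quantize the resulting ggml model, e.g. to 5-bit (q5_0).
./quantize ./models/ggml-model.bin ./models/ggml-model-q5_0.bin q5_0
```

If the converted model fails to load, it may be that the loader does not yet know the turbo architecture's dimensions, rather than a problem with the conversion itself.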