Getting model not found when using Completion with prompt attribute? #2230
Hey! That model is outdated and no longer current. Please use this list: https://github.com/xtekky/gpt4free/blob/main/g4f/models.py#L622
Hey @unical1988, the models you're trying to use are legacy models:

'gpt-4', 'gpt-4-0613', 'gpt-4-32k', 'gpt-4-0314', 'gpt-4-32k-0314',
'gpt-3.5-turbo', 'gpt-3.5-turbo-16k', 'gpt-3.5-turbo-0613', 'gpt-3.5-turbo-16k-0613', 'gpt-3.5-turbo-0301',
'gpt-3', 'text-davinci-003', 'text-davinci-002', 'code-davinci-002',
'text-curie-001', 'text-babbage-001', 'text-ada-001',
'davinci', 'curie', 'babbage', 'ada', 'babbage-002', 'davinci-002'

I also noticed that you're using the old g4f interaction syntax. You can try the new OpenAI-style client syntax instead. Here's an example of interacting with the older models through the new client:

```python
import g4f
import g4f.debug

# Enable debug logging and skip the version check
g4f.debug.logging = True
g4f.debug.version_check = False

from g4f.client import Client

# Legacy model names that may still be reachable through some providers
allowed_models = [
    'gpt-4', 'gpt-4-0613', 'gpt-4-32k', 'gpt-4-0314', 'gpt-4-32k-0314',
    'gpt-3.5-turbo', 'gpt-3.5-turbo-16k', 'gpt-3.5-turbo-0613',
    'gpt-3.5-turbo-16k-0613', 'gpt-3.5-turbo-0301',
    'gpt-3', 'text-davinci-003', 'text-davinci-002', 'code-davinci-002',
    'text-curie-001', 'text-babbage-001', 'text-ada-001',
    'davinci', 'curie', 'babbage', 'ada', 'babbage-002', 'davinci-002',
]

client = Client()
response = client.chat.completions.create(
    model="code-davinci-002",
    provider=g4f.Provider.Nexra,
    messages=[{"role": "user", "content": "say this is a test"}],
)
print(response.choices[0].message.content)
```

While the older GPT models are still available, they are combined into the …
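Not part of the original thread, but one way to fail fast on this class of error is to check the requested model name against the allowed list client-side, before making any network call. This is a minimal sketch; the `validate_model` helper is hypothetical and not part of the g4f API:

```python
# Hypothetical client-side guard against "Model not found" errors:
# reject unknown model names before they ever reach a provider.
allowed_models = [
    'gpt-4', 'gpt-3.5-turbo',
    'code-davinci-002', 'text-davinci-003',
]

def validate_model(name, allowed):
    """Return `name` unchanged, or raise ValueError if it is not allowed."""
    if name not in allowed:
        raise ValueError(f"Model not found: {name}")
    return name

# Passes silently for a known model, raises for an unknown one.
validate_model('code-davinci-002', allowed_models)
```

A guard like this turns a provider-side failure into an immediate, readable exception, which makes it obvious whether the problem is the model name itself or the provider's support for it.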
The exception is: Model not found: code-davinci-002
The code I use, which is from here (https://github.com/xtekky/gpt4free/blob/cc80f2d3159ca0b6f6bfa2c36c4be87bc96209b2/docs/legacy.md), is:
```python
import g4f

allowed_models = [
    'code-davinci-002',
    'text-ada-001',
    'text-babbage-001',
    'text-curie-001',
    'text-davinci-002',
    'text-davinci-003'
]

response = g4f.Completion.create(
    model='text-davinci-003',
    prompt='say this is a test'
)
print(response)
```
Any idea why?