
Is it reliable? #2218

Open
AlirezaAbavi opened this issue Sep 11, 2024 Discussed in #2217 · 5 comments

Comments

@AlirezaAbavi

Discussed in #2217

Originally posted by AlirezaAbavi September 11, 2024
Hello.
I just found this repository, so I have a few questions.

  1. Is the GPT-4 model completely free? I mean, is it just a demo, or can't I rely on it? Because you can't do this with OpenAI.
  2. Suppose I just need this block of code:
```python
from g4f.client import Client

client = Client()
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Does this text make any sense? Just say True or False: 'fkvrkueirughierughieurgiuehrg'"}],
)
print(response.choices[0].message.content)
```

The question is: can I use it in production?
I'm afraid to use it in production and find out after a short time that it isn't free and has limitations.

Thank you for answering.

@TheFirstNoob

Hi. gpt4free uses providers that offer these models, mostly for free. Problems can only come from the providers themselves and how they operate. I recommend looking through model.py to see all the providers.
In general, you can use pizzagpt, which works quite stably, or ddg, but ddg does not support conversation history. Other providers, for example liaobots, may have quota problems. The library itself will switch providers, where possible, through RetryProvider to one that is available and free.

Example providers:

```python
providers = {
    "gpt-3.5-turbo": RetryProvider([FreeChatgpt, FreeNetfly, Bixin123, Nexra, TwitterBio], shuffle=False),
    "gpt-4": RetryProvider([Chatgpt4Online, Nexra, Binjie, FreeNetfly, AiChats, You, Liaobots], shuffle=False),
    "gpt-4-turbo": RetryProvider([Nexra, Bixin123, You, Liaobots], shuffle=False),
    "gpt-4o-mini": RetryProvider([Pizzagpt, AiChatOnline, ChatgptFree, CodeNews, You, FreeNetfly, Koala, MagickPen, DDG, Liaobots], shuffle=False),
    "gpt-4o": RetryProvider([Chatgpt4o, LiteIcoding, AiChatOnline, You, Liaobots], shuffle=False),
    "claude-3-haiku": RetryProvider([DDG, Liaobots], shuffle=False),
    "blackbox": RetryProvider([Blackbox], shuffle=False),
    "gemini-pro": RetryProvider([ChatGot, Liaobots], shuffle=False),
    "gemini-flash": RetryProvider([Blackbox, Liaobots], shuffle=False),
    "gemma-2b": RetryProvider([ReplicateHome], shuffle=False),
    "command-r-plus": RetryProvider([HuggingChat, HuggingFace], shuffle=False),
    "llama-3.1-70b": RetryProvider([HuggingChat, HuggingFace, Blackbox, DeepInfra, FreeGpt, TeachAnything, Free2GPT, Snova, DDG], shuffle=False),
    "llama-3.1-405b": RetryProvider([HuggingChat, HuggingFace, Blackbox, Snova], shuffle=False),
    "llama-3.1-sonar-large-128k-online": RetryProvider([PerplexityLabs], shuffle=False),
    "llama-3.1-sonar-large-128k-chat": RetryProvider([PerplexityLabs], shuffle=False),
    "pi": RetryProvider([Pi], shuffle=False),
    "mixtral-8x7b": RetryProvider([HuggingChat, HuggingFace, ReplicateHome, TwitterBio, DeepInfra, DDG], shuffle=False),
    "mistral-7b": RetryProvider([HuggingChat, HuggingFace, DeepInfra], shuffle=False),
    "microsoft/Phi-3-mini-4k-instruct": RetryProvider([HuggingChat], shuffle=False),
    "yi-1.5-34b": RetryProvider([HuggingChat, HuggingFace], shuffle=False),
    "SparkDesk-v1.1": RetryProvider([FreeChatgpt], shuffle=False),
}
```
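
The fallback idea behind RetryProvider can be illustrated with a small self-contained sketch. This is plain Python with stub functions standing in for real g4f providers, not g4f's actual implementation:

```python
# Illustration of the retry-over-providers idea: try each provider in
# order and return the first successful answer. The "providers" here
# are stand-in functions, not real g4f providers.

def flaky_provider(prompt: str) -> str:
    # Simulates a provider that is currently down.
    raise RuntimeError("provider is down")

def working_provider(prompt: str) -> str:
    # Simulates a provider that answers normally.
    return f"answer to: {prompt}"

def ask_with_fallback(providers, prompt):
    errors = []
    for provider in providers:
        try:
            return provider(prompt)
        except Exception as exc:  # a real client would log and move on
            errors.append(exc)
    raise RuntimeError(f"all {len(providers)} providers failed: {errors}")

print(ask_with_fallback([flaky_provider, working_provider], "hello"))
# prints: answer to: hello
```

The real RetryProvider does essentially this over the provider classes listed above, optionally shuffling the order.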


AlirezaAbavi commented Sep 11, 2024

Wow, that's a long list. Thanks for the answer.


AlirezaAbavi commented Sep 11, 2024

Let's say I've used this in a web app, and I expect 50 users to use it simultaneously. How can I know that a given provider will respond in this situation? It could block my server's IP, or it could freeze. If anyone has any idea or solution other than RetryProvider, I'm happy to hear it. RetryProvider works and is great, but I need to know if there is another way.


TheFirstNoob commented Sep 11, 2024

Most providers are designed to serve many users. It depends on your code correctly separating each user's "memory" and handling each user's requests and responses at the right speed. My Discord bot successfully handles 10+ users through Python and async functions.
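
A minimal sketch of serving many users concurrently with asyncio: a semaphore caps in-flight requests so a single provider isn't hammered. The `call_provider` coroutine and the concurrency limit of 10 are assumptions standing in for a real g4f request:

```python
import asyncio

MAX_CONCURRENT = 10  # assumed cap on simultaneous provider requests

async def call_provider(user_id: int, prompt: str) -> str:
    # Stub standing in for a real (network) provider call.
    await asyncio.sleep(0.01)
    return f"user {user_id}: reply to {prompt!r}"

async def handle_user(sem: asyncio.Semaphore, user_id: int) -> str:
    # Excess users wait on the semaphore instead of overloading the provider.
    async with sem:
        return await call_provider(user_id, "hello")

async def main() -> list[str]:
    sem = asyncio.Semaphore(MAX_CONCURRENT)
    # Simulate 50 users hitting the web app at once.
    return await asyncio.gather(*(handle_user(sem, i) for i in range(50)))

if __name__ == "__main__":
    replies = asyncio.run(main())
    print(len(replies))  # 50
```

The same pattern works with g4f's async client: only the body of `call_provider` changes, while the semaphore keeps the request rate within whatever the chosen provider tolerates.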

I can only say which providers are stable: HuggingChat, HuggingFace, and Blackbox.

The g4f library is always updating its working providers, so you should regularly check for updates and adjust your code accordingly.
I think you should test all providers with 10+ users and see how it works for you.


Allamaris0 commented Sep 11, 2024

It's free and it's enough for me 😎 It's reverse engineering, so it's not stable: if a provider changes something in their API, there will be an error, so you shouldn't rely on a single provider. I think mixtral-8x7b is the most stable, but it's worse than gpt-4.
