Implement additional gemini backend #84

Open
andreashappe opened this issue Aug 28, 2024 · 4 comments
Labels: enhancement (New feature or request), help wanted (Extra attention is needed)

Comments

@andreashappe (Member)

See https://ai.google.dev/gemini-api/docs/models/gemini for the available Gemini models.

andreashappe added the enhancement and help wanted labels on Aug 28, 2024
@lloydchang (Contributor)

Workaround: use a proxy such as https://github.com/zhu327/gemini-openai-proxy.
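A minimal sketch of that workaround, using only the Python standard library. It assumes the proxy is reachable at `localhost:8080` (the port used in gemini-openai-proxy's README; adjust for your deployment), and that the proxy maps OpenAI model names such as `gpt-4` to a Gemini model behind the scenes, so existing OpenAI-style client code keeps working:

```python
import json
import urllib.request

# Assumed local endpoint for a running gemini-openai-proxy container.
PROXY_BASE_URL = "http://localhost:8080"

def build_chat_request(prompt: str, model: str = "gpt-4") -> urllib.request.Request:
    """Build an OpenAI-style chat completion request aimed at the proxy.

    The proxy translates the OpenAI model name to a Gemini model and
    forwards the bearer token as the Gemini API key.
    """
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        url=f"{PROXY_BASE_URL}/v1/chat/completions",
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer YOUR_GEMINI_API_KEY",  # placeholder
        },
        method="POST",
    )

req = build_chat_request("Say hello")
# urllib.request.urlopen(req)  # uncomment once the proxy is running
```

Because the proxy speaks the OpenAI wire format, no changes to the calling code are needed beyond the base URL and key.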

@lloydchang (Contributor)

Regarding https://developers.googleblog.com/en/gemini-is-now-accessible-from-the-openai-library/:

Content safety settings are not supported via Gemini's OpenAI-compatible chat completion API, as shown in the example at https://www.reddit.com/r/GoogleGeminiAI/comments/1ffb4by/is_there_a_way_to_change_the_safety_settings/
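To make the gap concrete, here is a hedged sketch contrasting the two request schemas. The `safetySettings` field and category names follow the public Gemini REST API docs; the builder functions themselves (and the `gemini-1.5-flash` model name) are illustrative, not from any project in this thread:

```python
def build_native_payload(prompt: str) -> dict:
    """Native Gemini generateContent payload: safety settings are expressible."""
    return {
        "contents": [{"parts": [{"text": prompt}]}],
        "safetySettings": [
            {"category": "HARM_CATEGORY_HARASSMENT", "threshold": "BLOCK_NONE"},
            {"category": "HARM_CATEGORY_HATE_SPEECH", "threshold": "BLOCK_NONE"},
            {"category": "HARM_CATEGORY_SEXUALLY_EXPLICIT", "threshold": "BLOCK_NONE"},
            {"category": "HARM_CATEGORY_DANGEROUS_CONTENT", "threshold": "BLOCK_NONE"},
        ],
    }

def build_openai_compat_payload(prompt: str) -> dict:
    """OpenAI-style chat completion payload: no field carries safety settings."""
    return {
        "model": "gemini-1.5-flash",
        "messages": [{"role": "user", "content": prompt}],
    }
```

The OpenAI-compatible schema simply has no slot for `safetySettings`, which is why the limitation above cannot be worked around client-side.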

@lloydchang (Contributor)

Consider a workaround, such as a proxy, that turns off the content safety settings by default.

For example:
- gemini-openai-proxy at https://github.com/zhu327/gemini-openai-proxy/blob/559085101f0ce5e8c98a94fb75fefd6c7a63d26d/pkg/adapter/chat.go#L230-L246
- litellm at https://github.com/BerriAI/litellm/blob/70aa85af1fc01d30934ee71a703b8d3982420d05/litellm/llms/vertex_ai_and_google_ai_studio/vertex_ai_non_gemini.py#L203-L213

The above nuances are better suited for projects that act as proxies, such as gemini-openai-proxy, litellm, etc.
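A rough Python sketch of what such proxy adapters do: translate an OpenAI-style body into a Gemini one and inject permissive safety settings by default, so the caller never has to express them in the OpenAI schema. `openai_to_gemini` is a hypothetical illustration written for this comment, not code taken from either project; field names follow the Gemini REST API:

```python
# Permissive defaults, mirroring what the linked adapters inject.
_DEFAULT_SAFETY = [
    {"category": c, "threshold": "BLOCK_NONE"}
    for c in (
        "HARM_CATEGORY_HARASSMENT",
        "HARM_CATEGORY_HATE_SPEECH",
        "HARM_CATEGORY_SEXUALLY_EXPLICIT",
        "HARM_CATEGORY_DANGEROUS_CONTENT",
    )
]

def openai_to_gemini(openai_body: dict) -> dict:
    """Translate an OpenAI chat body to a Gemini generateContent body."""
    contents = []
    for msg in openai_body["messages"]:
        # Gemini uses the role "model" where OpenAI uses "assistant".
        role = "model" if msg["role"] == "assistant" else "user"
        contents.append({"role": role, "parts": [{"text": msg["content"]}]})
    return {"contents": contents, "safetySettings": _DEFAULT_SAFETY}
```

Keeping this translation layer in a dedicated proxy means hackingBuddyGPT itself never has to track Gemini schema changes.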

Adding a Gemini backend to hackingBuddyGPT would require ongoing maintenance to keep up with the continuously evolving Gemini API, which is still in beta (v1beta), unlike OpenAI's stable v1 API.

For instance, maintainers and contributors might spend time trying to understand and justify why Gemini v1beta offers an OpenAI chat completion API yet does not fully support content safety settings.

In conclusion, using proxies is a more sensible approach, given the current state of the Gemini v1beta API, as it helps avoid the overhead of adapting to its ongoing changes.

@lloydchang (Contributor)

lloydchang commented Nov 14, 2024

Relates to:
- Mac target localhost container via gemini openai proxy #94
- create onboarding/quickstart blog post as well as video #36
- docs(README.md): add Mac use case #95
