I guess the feature request is about supporting more direct LLM consumption.
I'm not sure whether we will reuse the same AzureOpenAiChatClient for that, though.
Are you aware of our orchestration package, @sap-ai-sdk/orchestration, and the orchestration service?
If you want to consume other LLMs, you can use the harmonised API offered by the orchestration service today.
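The harmonised API mentioned above can be sketched roughly as follows. This is only an illustration, not an endorsed snippet: it assumes an AI Core service binding is available in the environment and that an orchestration deployment exists, and the model name is just an example of a non-OpenAI model.

```typescript
// Sketch: consuming a non-OpenAI model via the orchestration service,
// instead of the LangChain AzureOpenAiChatClient.
// Assumptions: AI Core credentials are bound in the environment, an
// orchestration deployment is running, and the model name is an example.
import { OrchestrationClient } from '@sap-ai-sdk/orchestration';

const client = new OrchestrationClient({
  llm: {
    // any model available through orchestration, not only azure-openai ones
    model_name: 'meta--llama3.1-70b-instruct',
    model_params: {}
  },
  templating: {
    // '{{?question}}' is a placeholder filled in at request time
    template: [{ role: 'user', content: '{{?question}}' }]
  }
});

const response = await client.chatCompletion({
  inputParams: { question: 'What is SAP AI Core?' }
});
console.log(response.getContent());
```

The point is that the model is selected by name in the orchestration config, so no executableId filtering applies on the client side.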
Describe the Problem
Hi there,
i am struggling to get LLMS running that arent from OpenAI with your langchain module. Per default you seem to filter out LLMS which are in a different "excecutableId" than "azure-openai"
This makes using different Models seemingly impossible (for now)
Propose a Solution
I would suggest allowing an "executableId" to be passed when initializing a LangChain chat client.
This should allow other models to be used quite easily without having to rewrite much.
The chat-completion API (for example when using Llama or Mixtral) seems to accept payloads identical to those that work with the OpenAI models.
POST {baseurl}/v2/inference/deployments/{deploymentID}/chat/completions
Body for GPT-4o:
Body for Llama 3.1 Instruct:
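The original request bodies were not preserved in this copy of the issue. As an illustrative sketch only, a minimal OpenAI-style chat-completions payload of the kind described above (one that works unchanged across both model families) might look like:

```json
{
  "messages": [
    { "role": "user", "content": "Hello!" }
  ],
  "max_tokens": 100
}
```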
Describe Alternatives
No response
Affected Development Phase
Development
Impact
Inconvenience
Timeline
No response
Additional Context
No response