
Support for More LLMs / configurable executableId #243

Open
MichaelSchmid-AGI opened this issue Oct 25, 2024 · 1 comment
Labels: author action, feature request

MichaelSchmid-AGI commented Oct 25, 2024

Describe the Problem

Hi there,
I am struggling to get LLMs that aren't from OpenAI running with your LangChain module. By default you seem to filter out LLMs that have a different "executableId" than "azure-openai".

This makes using other models seemingly impossible (for now).

Propose a Solution

I would suggest allowing an executableId to be passed when initializing a LangChain chat client:

const chatClient = new AzureOpenAiChatClient({
  modelName: 'meta--llama3.1-70b-instruct',
  executableId: 'aicore-opensource'
});

This should make other models usable quite easily without having to rewrite much.
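For illustration, such a client could then be used through LangChain's standard interface (note that the executableId option is the proposal here and does not exist yet):

// Hypothetical usage of the proposed option; `invoke` is the standard
// LangChain chat-model method the client already exposes.
const response = await chatClient.invoke('test');
console.log(response.content);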

The chat-completions API (for example when using Llama or Mixtral) accepts payloads identical to those that work with OpenAI models.

POST {baseurl}/v2/inference/deployments/{deploymentID}/chat/completions
Body for GPT-4o:

{
    "messages": [
        {
            "role": "user",
            "content": "test"
        }
    ],
    "model": "gpt-4o", 
    "max_tokens": 100,
    "temperature": 0.0,
    "frequency_penalty": 0,
    "presence_penalty": 0,
    "stop": "null"
}

Body for Llama 3.1 Instruct:

{
  "messages": [
    {
      "role": "user",
      "content": "test"
    }
  ],
  "model": "meta--llama3.1-70b-instruct", 
  "max_tokens": 100,
  "temperature": 0.0,
  "frequency_penalty": 0,
  "presence_penalty": 0,
  "stop": "null"
}
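As a point of reference, a raw call against such a deployment could look roughly like this (a minimal sketch; `baseUrl`, `deploymentId`, and `token` are assumed to be obtained elsewhere, e.g. from the AI Core service key and OAuth flow, and the resource group value is an assumption):

// Minimal sketch of calling the deployment's chat-completions
// endpoint directly with Node's built-in fetch (Node 18+).
const res = await fetch(
  `${baseUrl}/v2/inference/deployments/${deploymentId}/chat/completions`,
  {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${token}`,
      'AI-Resource-Group': 'default', // assumption: default resource group
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      messages: [{ role: 'user', content: 'test' }],
      model: 'meta--llama3.1-70b-instruct',
      max_tokens: 100,
      temperature: 0.0
    })
  }
);
const data = await res.json();
console.log(data.choices[0].message.content);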

Describe Alternatives

No response

Affected Development Phase

Development

Impact

Inconvenience

Timeline

No response

Additional Context

No response

MichaelSchmid-AGI added the feature request label on Oct 25, 2024.
jjtang1985 (Contributor) commented:
Hi @MichaelSchmid-AGI,
Thanks for reaching out.

I guess the feature request is about supporting more direct LLM consumption. I'm not sure whether we would reuse the same AzureOpenAiChatClient for that, though.

Are you aware of our orchestration package, @sap-ai-sdk/orchestration, and the orchestration service? If you want to consume other LLMs, you can already use the harmonised API offered by the orchestration service today.
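For reference, consuming a non-OpenAI model through the orchestration service looks roughly like this (a minimal sketch based on the package documentation; exact options may differ between SDK versions, and the model name is taken from this issue):

import { OrchestrationClient } from '@sap-ai-sdk/orchestration';

// The orchestration service harmonises chat completion across model
// providers, so the model is just configuration.
const orchestrationClient = new OrchestrationClient({
  llm: {
    model_name: 'meta--llama3.1-70b-instruct',
    model_params: { max_tokens: 100, temperature: 0.0 }
  },
  templating: {
    template: [{ role: 'user', content: '{{?question}}' }]
  }
});

const response = await orchestrationClient.chatCompletion({
  inputParams: { question: 'test' }
});
console.log(response.getContent());

This avoids the azure-openai executableId restriction entirely, since the orchestration deployment fronts all supported models.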
