
Support tool calling in Langchain OpenAI client #231

Open
alexbau01 opened this issue Oct 21, 2024 · 1 comment
Labels: author action, feature request (New feature or request)

Comments

alexbau01 commented Oct 21, 2024

Describe the Problem

Native tool calling (or function calling) provides several advantages: it makes it easier to build AI agents that request data from external systems, and it helps generate structured output from an LLM.

In my testing, the Python SDK supports tool calling, while the JavaScript SDK does not.

I am happy to close the issue if there is anything that I'm missing here.

Propose a Solution

Looking into the implementation in this SDK, it appears to be a raw BaseChatModel implementation written from scratch that only implements the _generate() method. In the Python SDK, the standard Langchain ChatOpenAI class is used as the base, and only the GenAI Hub specific aspects are overridden.

I am sure there is a good reason for this approach; however, it means that any new features (such as the upcoming native JSON Schema output from OpenAI) have to be implemented manually instead of relying on the existing Langchain implementation. A rough sketch of what the Python-style approach could look like in TypeScript is shown below.
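As an illustration only: the sketch below subclasses the standard ChatOpenAI class and injects just the deployment-specific transport, mirroring the Python SDK's approach conceptually. The class name GenAiHubChatOpenAI, the deployment URL handling, and the auth header are assumptions made up for this sketch and are not part of this SDK.

```ts
import { ChatOpenAI } from "@langchain/openai";

// Hypothetical subclass -- the class name, deployment URL handling, and auth
// header below are illustrative placeholders, not part of the actual SDK.
class GenAiHubChatOpenAI extends ChatOpenAI {
  constructor(opts: { deploymentUrl: string; authToken: string; model?: string }) {
    super({
      model: opts.model ?? "gpt-4o",
      apiKey: "unused", // real auth is assumed to happen via the header below
      // Only the transport is swapped: requests go to the (assumed) GenAI Hub
      // deployment endpoint instead of api.openai.com, so inherited features
      // such as bindTools() and withStructuredOutput() keep working.
      configuration: {
        baseURL: opts.deploymentUrl,
        defaultHeaders: { Authorization: `Bearer ${opts.authToken}` },
      },
    });
  }
}
```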

As described in the Langchain docs (https://js.langchain.com/docs/how_to/tool_calling/), Langchain classes for LLMs that support tool calling should implement the .bindTools() method, which converts Langchain tools into the provider's format (OpenAI in this case). The Langchain .withStructuredOutput() method also fails, since .bindTools() is not implemented. A minimal sketch of the expected consumer-side usage follows below.
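For reference, this is roughly how tool calling is consumed through a Langchain chat model that implements .bindTools(). ChatOpenAI from @langchain/openai stands in for this SDK's client here, purely to show the expected interface; the model name and the example tool are made up.

```ts
import { ChatOpenAI } from "@langchain/openai";
import { tool } from "@langchain/core/tools";
import { z } from "zod";

// A simple Langchain tool; bindTools() converts its zod schema into the
// OpenAI tool/function JSON format behind the scenes.
const getWeather = tool(async ({ city }) => `It is sunny in ${city}.`, {
  name: "get_weather",
  description: "Look up the current weather for a city",
  schema: z.object({ city: z.string() }),
});

// ChatOpenAI is only a stand-in for the SDK's Langchain client.
const model = new ChatOpenAI({ model: "gpt-4o" });
const modelWithTools = model.bindTools([getWeather]);

const response = await modelWithTools.invoke("What is the weather in Berlin?");
console.log(response.tool_calls); // e.g. [{ name: "get_weather", args: { city: "Berlin" } }]

// withStructuredOutput() builds on the same mechanism.
const extractor = model.withStructuredOutput(
  z.object({ city: z.string(), country: z.string() })
);
const structured = await extractor.invoke("Berlin is the capital of Germany.");
console.log(structured);
```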

Describe Alternatives

It is possible to work around this by using other wrappers provided by Langchain, which rely on plain prompting to achieve a similar result (see the sketch below). However, this is not as reliable, as noted by OpenAI in the release blog for Structured Outputs: https://openai.com/index/introducing-structured-outputs-in-the-api/. The strict=true mode mentioned in that blog is already supported by the native Langchain OpenAI implementation; however, the models available in the GenAI hub currently don't support it anyway.
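For completeness, a minimal sketch of such a prompt-based workaround using Langchain's StructuredOutputParser, assuming the SDK's chat client can be used wherever a Langchain chat model is expected (ChatOpenAI is a stand-in below). The schema is injected as plain-text format instructions rather than enforced via native tool calling, which is why it is less reliable.

```ts
import { ChatOpenAI } from "@langchain/openai";
import { PromptTemplate } from "@langchain/core/prompts";
import { StructuredOutputParser } from "langchain/output_parsers";
import { z } from "zod";

// Prompt-based workaround: the schema is turned into plain-text format
// instructions and the raw completion is parsed afterwards. Nothing forces
// the model to comply, which is why this is less reliable than native tools.
const parser = StructuredOutputParser.fromZodSchema(
  z.object({ city: z.string(), country: z.string() })
);

const prompt = PromptTemplate.fromTemplate(
  "Extract the requested fields.\n{format_instructions}\n\nText: {text}"
);

// Stand-in for the SDK's Langchain chat client; it only needs to be invokable.
const model = new ChatOpenAI({ model: "gpt-4o" });

const chain = prompt.pipe(model).pipe(parser);
const result = await chain.invoke({
  format_instructions: parser.getFormatInstructions(),
  text: "Berlin is the capital of Germany.",
});
console.log(result); // e.g. { city: "Berlin", country: "Germany" }
```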

Affected Development Phase

Development

Impact

Impaired

Timeline

No response

Additional Context

No response

alexbau01 added the feature request (New feature or request) label on Oct 21, 2024
jjtang1985 (Contributor) commented

Thanks for raising this!
I've created a backlog item.
