Describe the Problem
Native tool calling (or function calling) provides several advantages: it makes it easier to build AI agents that request data from external systems, and it enables generating structured output from an LLM.
In my testing, the Python SDK supports tool calling, while the JavaScript SDK does not.
I am happy to close the issue if there is anything that I'm missing here.
Propose a Solution
Looking into the implementation in this SDK, it appears to be a raw BaseChatModel implementation written from scratch that only implements the _generate() method. The Python SDK, by contrast, uses the standard LangChain ChatOpenAI class as a base and only overrides the GenAI Hub-specific aspects.
I am sure there is a good reason for this approach; however, it means that new features (such as the upcoming native JSON Schema output from OpenAI) have to be implemented manually instead of being inherited from the existing LangChain implementation.
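For illustration, here is a minimal sketch of what that approach could look like in the JavaScript SDK, assuming a resolved GenAI Hub deployment URL and auth token are already available. The class name, constructor options, and header handling below are assumptions for illustration only, not existing SDK APIs:

```ts
import { ChatOpenAI } from "@langchain/openai";

// Hypothetical sketch: reuse LangChain's ChatOpenAI and only swap the transport so
// that requests go to the GenAI Hub deployment instead of api.openai.com.
// bindTools() and withStructuredOutput() would then be inherited from LangChain.
class GenAiHubChatOpenAI extends ChatOpenAI {
  constructor(options: { deploymentUrl: string; token: string; model?: string }) {
    super({
      model: options.model ?? "gpt-4o",
      apiKey: "unused", // authentication is handled via the Authorization header below
      configuration: {
        baseURL: options.deploymentUrl,
        defaultHeaders: { Authorization: `Bearer ${options.token}` },
      },
    });
  }
}
```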
As described in the LangChain docs (https://js.langchain.com/docs/how_to/tool_calling/), LangChain classes for LLMs that support tool calling should implement the .bindTools() method, which converts LangChain tools into the provider's format (in this case OpenAI). The LangChain .withStructuredOutput() method also fails, since it relies on .bindTools(). The snippet below shows the usage pattern this enables.
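For reference, this is roughly the usage pattern that currently fails; the @sap-ai-sdk/langchain import and constructor options are my assumption of the SDK's LangChain client and may differ from the actual export:

```ts
import { z } from "zod";
import { tool } from "@langchain/core/tools";
// Assumed import; adjust to the LangChain chat client actually exported by this SDK.
import { AzureOpenAiChatClient } from "@sap-ai-sdk/langchain";

// A simple LangChain tool the model should be able to call.
const getWeather = tool(async ({ city }) => `Sunny in ${city}`, {
  name: "get_weather",
  description: "Get the current weather for a city",
  schema: z.object({ city: z.string() }),
});

const client = new AzureOpenAiChatClient({ modelName: "gpt-4o" });

// Expected to work once bindTools() is implemented; today this throws,
// and withStructuredOutput() fails for the same reason.
const withTools = client.bindTools([getWeather]);
const response = await withTools.invoke("What is the weather in Berlin?");
console.log(response.tool_calls);
```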
Describe Alternatives
It is possible to work around this by using other wrappers provided by LangChain, which use plain prompting to achieve a similar result (see the sketch below). However, this is not as reliable, as OpenAI notes in its Structured Outputs release blog post: https://openai.com/index/introducing-structured-outputs-in-the-api/. The strict=true mode mentioned in that post is already supported by the native LangChain OpenAI implementation, although the models currently available in the GenAI Hub don't support it anyway.
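As a concrete sketch of that workaround, the snippet below uses LangChain's prompt-based StructuredOutputParser (again assuming the SDK's LangChain chat client mentioned above). The schema is only described in the prompt, not enforced by the model, which is why this is less reliable than native structured outputs:

```ts
import { z } from "zod";
import { StructuredOutputParser } from "langchain/output_parsers";
import { ChatPromptTemplate } from "@langchain/core/prompts";
// Assumed import; adjust to the LangChain chat client actually exported by this SDK.
import { AzureOpenAiChatClient } from "@sap-ai-sdk/langchain";

// Describe the desired output shape and derive textual format instructions from it.
const parser = StructuredOutputParser.fromZodSchema(
  z.object({
    name: z.string().describe("name of the person"),
    age: z.number().describe("age in years"),
  })
);

const prompt = ChatPromptTemplate.fromTemplate(
  "Extract the requested fields from the text.\n{format_instructions}\n{input}"
);

const chain = prompt
  .pipe(new AzureOpenAiChatClient({ modelName: "gpt-4o" }))
  .pipe(parser);

const result = await chain.invoke({
  format_instructions: parser.getFormatInstructions(),
  input: "Alice is 42 years old.",
});
console.log(result); // e.g. { name: "Alice", age: 42 } -- not guaranteed, nothing enforces the schema
```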
Affected Development Phase
Development
Impact
Impaired
Timeline
No response
Additional Context
No response