How can I disable streaming output for tools/steps with a provider not supporting streaming? #7109
Unanswered
crazybullet28 asked this question in Help
Has this been solved? I'm encountering the same issue.
Self Checks
1. Is this request related to a challenge you're experiencing? Tell me about your story.
I'm trying to add a new model provider, but that provider doesn't support streaming output. If I directly return an LLMResult from the _invoke function, it fails with the error

Run failed: Node LLM run failed: 'tuple' object has no attribute 'delta'

when the _handle_invoke_result function in api/core/workflow/nodes/llm/llm_node.py is called, because the returned LLMResult object doesn't include the 'delta' field. Here is the returned LLMResult object:
LLMResult(model='gpt-4', prompt_messages=[UserPromptMessage(role=<PromptMessageRole.USER: 'user'>, content='hi', name=None)], message=AssistantPromptMessage(role=<PromptMessageRole.ASSISTANT: 'assistant'>, content='Hello! How can I help you today?', name=None, tool_calls=[]), usage=LLMUsage(prompt_tokens=8, prompt_unit_price=Decimal('0.0'), prompt_price_unit=Decimal('0.0'), prompt_price=Decimal('0.0'), completion_tokens=9, completion_unit_price=Decimal('0.0'), completion_price_unit=Decimal('0.0'), completion_price=Decimal('0.0'), total_tokens=17, total_price=Decimal('0.0'), currency='USD', latency=21.528332874993794), system_fingerprint=None)
Is there any way I can disable the streaming output for certain workflows / tools / steps? Or how should I modify my provider model code?
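One workaround (a sketch only, not confirmed by this thread) is to keep the provider's _invoke blocking internally, but when the caller requests streaming, wrap the finished LLMResult in a generator that yields it as a single final chunk, so downstream code that reads .delta still works. The classes below are simplified stand-ins modeled on Dify's LLMResult / LLMResultChunk / LLMResultChunkDelta entities, not the real imports, and to_stream is a hypothetical helper name:

```python
# Hedged sketch: adapting a blocking (non-streaming) provider result into the
# stream of delta chunks a streaming consumer expects. All classes here are
# simplified stand-ins for Dify's entities, defined locally for illustration.
from dataclasses import dataclass
from typing import Generator, Optional


@dataclass
class AssistantMessage:
    content: str


@dataclass
class LLMResult:  # stand-in for the blocking result the provider returns
    model: str
    message: AssistantMessage
    usage: Optional[dict] = None


@dataclass
class LLMResultChunkDelta:  # stand-in for the per-chunk delta payload
    index: int
    message: AssistantMessage
    usage: Optional[dict] = None
    finish_reason: Optional[str] = None


@dataclass
class LLMResultChunk:  # stand-in for the streaming chunk wrapper
    model: str
    delta: LLMResultChunkDelta


def to_stream(result: LLMResult) -> Generator[LLMResultChunk, None, None]:
    """Yield the whole blocking answer as one final chunk.

    The consumer sees a valid (if degenerate) stream: a single chunk whose
    delta carries the complete message, usage, and a finish reason.
    """
    yield LLMResultChunk(
        model=result.model,
        delta=LLMResultChunkDelta(
            index=0,
            message=result.message,
            usage=result.usage,
            finish_reason="stop",
        ),
    )


result = LLMResult(
    model="gpt-4",
    message=AssistantMessage("Hello! How can I help you today?"),
    usage={"total_tokens": 17},
)
chunks = list(to_stream(result))
print(chunks[0].delta.message.content)  # the full answer arrives in one chunk
```

In the real provider code the equivalent branch would live inside _invoke: return the LLMResult directly when stream is False, and yield chunk objects (here, one chunk) when stream is True.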
2. Additional context or comments
No response