Common Approach for Chat Memory on Workflow #4058
Replies: 4 comments 2 replies
-
Yes, the design of this chat memory is very strange: it treats the whole workflow (or a whole agent) as a single unit and stores only that unit's input and output. This is completely different from chat history as it is usually understood, i.e. the history feature of chat clients such as Lobe Chat. As a result, we are unable to automate our chat flow with Dify.
-
Could reading and writing conversation variables be used to solve this problem?
-
The chatflow in Dify only records the Q&A pairs in the chat window, which already covers the usual chat-history scenarios. I understand that "the history from llm1's answer plus llm2's own answer" constitutes the complete context for the llm2 node. Could you provide more detailed scenarios to help me further understand the purpose behind your idea? @skywolf123 @daryadifoo If there is a delay in my response, my email is [email protected]. Thank you again for your suggestions and feedback.
-
I want to know why Chat Memory stores only the first query and the last answer of the entire workflow. This approach makes chat history wrong when we use more than one LLM node. Say we use 3 LLM nodes in one workflow: llm2 then receives, as history, the query sent to llm1 paired with the answer from llm3, whereas llm2 should receive llm1's answer together with llm2's own answer.
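To make the mismatch concrete, here is a minimal, purely illustrative Python sketch (not Dify's actual implementation; `fake_llm` and the node names are hypothetical) contrasting the current workflow-level memory, which pairs the first query with the last answer, against per-node memory, where each node keeps its own input/output:

```python
# Hypothetical sketch only: "fake_llm" stands in for a real LLM call,
# and the three node ids model a chain of LLM nodes in one workflow.

def fake_llm(node_id: str, prompt: str) -> str:
    # Deterministic stand-in for an LLM response.
    return f"{node_id}-answer({prompt})"

def run_workflow(query: str, workflow_memory: list, node_memories: dict) -> str:
    out = query
    for node_id in ("llm1", "llm2", "llm3"):
        out = fake_llm(node_id, out)
        # Per-node memory (desired behavior): each node records its own
        # output, so llm2's history would be llm1's answer plus llm2's
        # own answer on the next turn.
        node_memories.setdefault(node_id, []).append(out)
    # Workflow-level memory (behavior described above): only the first
    # query and the LAST node's answer are stored, so every node later
    # sees the same (query, final_answer) pair as "history".
    workflow_memory.append((query, out))
    return out

workflow_memory, node_memories = [], {}
run_workflow("hello", workflow_memory, node_memories)

print(workflow_memory[0])       # first query paired with llm3's final answer
print(node_memories["llm2"])    # llm2's own output, lost under workflow-level memory
```

On a second turn, workflow-level memory would feed llm2 the `("hello", llm3_answer)` pair, while per-node memory would give it its own prior output, which is the behavior being requested.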