
feat: introducing configurable retrieval workflows #3227

Open · wants to merge 84 commits into base: main
Conversation

@jacopo-chevallard (Collaborator) commented on Sep 18, 2024:

Description

A major PR which, among other things, introduces the ability to easily customize retrieval workflows. Workflows are based on LangGraph and can be customized through a yaml configuration file, with the node logic implemented in quivr_rag_langgraph.py.
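
As a rough sketch of the pattern (the yaml layout, node names, and state fields below are illustrative assumptions, not the PR's actual schema), a yaml-described workflow can be compiled into a LangGraph graph along these lines:

from typing import TypedDict

import yaml
from langgraph.graph import END, START, StateGraph

class RAGState(TypedDict, total=False):
    question: str
    documents: list[str]
    answer: str

def retrieve(state: RAGState) -> RAGState:
    # Placeholder node: in the PR, the real logic lives in quivr_rag_langgraph.py.
    return {"documents": ["..."]}

def generate(state: RAGState) -> RAGState:
    return {"answer": "..."}

NODES = {"retrieve": retrieve, "generate": generate}

# Illustrative yaml layout, not the PR's actual schema.
WORKFLOW_YAML = """
workflow:
  - name: retrieve
    edges: [generate]
  - name: generate
    edges: [END]
"""

def build_graph(config_text: str):
    spec = yaml.safe_load(config_text)["workflow"]
    builder = StateGraph(RAGState)
    for node in spec:
        builder.add_node(node["name"], NODES[node["name"]])
    builder.add_edge(START, spec[0]["name"])
    for node in spec:
        for target in node["edges"]:
            builder.add_edge(node["name"], END if target == "END" else target)
    return builder.compile()

graph = build_graph(WORKFLOW_YAML)
print(graph.invoke({"question": "What is Quivr?"}))

Keeping the node implementations in Python while the topology lives in yaml is what lets users reshape the workflow without touching code.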

This is a first, simple implementation that will evolve significantly in the coming weeks to enable more complex workflows (for instance, with conditional nodes). We also plan to adopt a similar approach for ingestion, i.e. to enable users to easily customize the ingestion pipeline.

Closes CORE-195, CORE-203, CORE-204

Checklist before requesting a review

Please delete options that are not relevant.

  • My code follows the style guidelines of this project
  • I have performed a self-review of my code
  • I have commented hard-to-understand areas
  • I have ideally added tests that prove my fix is effective or that my feature works
  • New and existing unit tests pass locally with my changes
  • Any dependent changes have been merged

Screenshots (if appropriate):

Commits (messages truncated by GitHub):

  • …, you need to set the env variables COHERE_API_KEY or JINA_API_KEY. If both are present, the Cohere reranker (rerank-multilingual-v3.0) is used. (A sketch of this selection rule follows the list.)
  • …configuration fields of RAGConfig into RetrievalConfig
  • …itioning from QuivrQARAG to QuivrQARAGLangGraph
  • … by the front for the chat-with-llm modality
  • …yaml configuration with the configuration pulled from the front
  • …erging it with the user-made configuration setup in the front
  • …or models (e.g. Mistral, Groq) which don't have a specific interface
  • …ion and merging it with the configuration coming from the front
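
Taken literally, the first commit message above implies a selection rule like the following sketch (the class names and defaults are assumptions, not the PR's actual code):

import os

from langchain_cohere import CohereRerank
from langchain_community.document_compressors import JinaRerank

def get_reranker():
    # Cohere takes precedence when both API keys are present.
    if os.getenv("COHERE_API_KEY"):
        return CohereRerank(model="rerank-multilingual-v3.0")
    if os.getenv("JINA_API_KEY"):
        return JinaRerank()
    raise RuntimeError("Set COHERE_API_KEY or JINA_API_KEY to enable reranking")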
@dosubot added the size:XXL label (This PR changes 1000+ lines, ignoring generated files.) on Sep 18, 2024
vercel bot commented Sep 18, 2024:

The latest updates on your projects:

Name: quivrapp · Status: ✅ Ready · Updated (UTC): Sep 19, 2024 8:30am

@dosubot added the area: backend label (Related to backend functionality or under the /backend directory) on Sep 18, 2024
@jacopo-chevallard (Collaborator, Author) commented:

Tomorrow morning I'll merge the latest changes from main to avoid conflicts

@jacopo-chevallard (Collaborator, Author) commented:

@AmineDiro cc @StanGirard I merged main and fixed a bug; I think the PR is ready for review.

@AmineDiro (Collaborator) left a review:

Great work!

  • Had some questions on interface design for quivr-core
  • Comments on Knowledge entities
  • Comments on the use of pydantic to validate data and the use of env variables

chat_history = ChatHistory(uuid4(), uuid4())
-rag_pipeline = QuivrQARAG(
    rag_config=rag_config, llm=llm, vector_store=mem_vector_store
+rag_pipeline = QuivrQARAGLangGraph(

Not a major point, but the QuivrQA interface is a bit odd: we are passing the retrieval by config and the llm by value. Either construct the objects before passing them to QuivrQA, or pass only the config and add build_llm & build_retriever steps. Nothing major 👍
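
To make the second option concrete, one hypothetical shape (the names here are illustrative, not from the PR):

# Sketch of "pass the config and have build_llm & build_retriever steps":
# the pipeline receives only configuration and builds its own collaborators.
class QuivrQARAGLangGraph:
    def __init__(self, retrieval_config, vector_store):
        self.retrieval_config = retrieval_config
        self.llm = self.build_llm(retrieval_config)
        self.retriever = self.build_retriever(vector_store)

    def build_llm(self, retrieval_config):
        raise NotImplementedError  # e.g. instantiate a chat model from the config

    def build_retriever(self, vector_store):
        raise NotImplementedError  # e.g. vector_store.as_retriever()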

@@ -16,6 +16,8 @@ dependencies = [
     "aiofiles>=23.1.0",
     "langchain-community>=0.2.12",
     "langchain-anthropic>=0.1.23",
+    "types-pyyaml>=6.0.12.20240808",
+    "transformers[sentencepiece]>=4.44.2",

transformers depends on torch, if I am not mistaken, and that is a heavy dependency. It should probably be added to the optional dependencies.

@@ -40,7 +41,7 @@ dev-dependencies = [
 ]

 [tool.rye.workspace]
-members = [".", "core", "worker", "api", "docs", "core/examples/chatbot"]
+members = [".", "core", "worker", "api", "docs", "core/examples/chatbot", "core/MegaParse"]

I don't really know whether MegaParse should live under core or at the same level as worker, api, and core 🤔 @jacopo-chevallard @StanGirard


Are those example configs, or are they used in the tests?

Comment on lines +38 to 39
    megaparse_config: MegaparseConfig = MegaparseConfig(),
) -> None:

Much cleaner


Refactor: put the yaml configs in config/.

@@ -271,6 +282,8 @@ async def create_stream_question_handler(
    for model in models:
        if brain_id == generate_uuid_from_string(model.name):

I don't get this function; this seems complicated just to get the model?


        return retrieval_config

    async def _build_retrieval_config(self) -> RetrievalConfig:
        model = await self.model_service.get_model(self.model_to_use)  # type: ignore
        api_key = os.getenv(model.env_variable_name, "not-defined")

Pydantic has a SecretStr type for encoding keys, and you can define your model with parameters that are loaded from environment variables:

https://docs.pydantic.dev/latest/concepts/pydantic_settings/#environment-variable-names
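
For instance, a minimal sketch of that suggestion (the field name is an assumption, not the PR's actual config):

from pydantic import SecretStr
from pydantic_settings import BaseSettings

class LLMSettings(BaseSettings):
    # Populated from the COHERE_API_KEY environment variable (field names
    # match env vars case-insensitively); SecretStr keeps the value out of
    # reprs and logs.
    cohere_api_key: SecretStr

settings = LLMSettings()  # raises a validation error if the variable is unset
key = settings.cohere_api_key.get_secret_value()  # explicit access to the raw key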

answer=full_answer,
metadata=RAGResponseMetadata.model_validate(
streamed_chat_history.metadata
if self.brain_service:

brain_service should probably be required.


# Save the answer to db
new_chat_entry = self.save_answer(question, parsed_response)
if self.brain_service:

Why do we need the brain_service to save a chat answer? Shouldn't this depend on the chat_service?

CHAT_LLM_CONFIG_PATH=chat_llm_config.yaml

# LangSmith
LANGCHAIN_TRACING_V2=true

Careful: once this variable is set, LangChain picks it up and enables tracing, which can cause issues.
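
For context: LangChain turns on LangSmith tracing whenever LANGCHAIN_TRACING_V2=true is present in the environment, so a committed .env template can switch it on silently. One defensive sketch (not from the PR):

import os

# Fail fast if tracing was enabled without the credentials it needs,
# rather than letting every chain call error out at runtime.
if (
    os.getenv("LANGCHAIN_TRACING_V2", "").lower() == "true"
    and not os.getenv("LANGCHAIN_API_KEY")
):
    raise RuntimeError("LANGCHAIN_TRACING_V2=true but LANGCHAIN_API_KEY is not set")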

chat_id,
chat_service,
model_service,
if not os.getenv("CHAT_LLM_CONFIG_PATH"):

Not a big fan of this. I'd use a function that returns the llm config instead, so that we can easily change how it is loaded later.
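
A minimal sketch of that suggestion (the helper name and the plain-dict return type are placeholders; the real code would presumably parse the yaml into a pydantic config):

import os

import yaml

DEFAULT_CHAT_LLM_CONFIG = "chat_llm_config.yaml"  # default taken from the .env above

def load_chat_llm_config() -> dict:
    # One place that decides where the chat-llm config comes from, so the
    # loading strategy can change later without touching the call sites.
    path = os.getenv("CHAT_LLM_CONFIG_PATH", DEFAULT_CHAT_LLM_CONFIG)
    with open(path) as f:
        return yaml.safe_load(f)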

@@ -283,27 +296,43 @@ async def create_stream_question_handler(
    assert model is not None
    brain.model = model.name
    validate_authorization(user_id=current_user.id, brain_id=brain_id)
    if not os.getenv("BRAIN_CONFIG_PATH"):

Same comment here: use a function?

chat_id,
chat_service,
model_service,
if not os.getenv("CHAT_LLM_CONFIG_PATH"):

Same comment: use a function?

Labels: area: backend · size:XXL
Projects: none yet
3 participants