diff --git a/docs/docs/concepts/agentic_concepts.md b/docs/docs/concepts/agentic_concepts.md index e03f347eb..448853e9e 100644 --- a/docs/docs/concepts/agentic_concepts.md +++ b/docs/docs/concepts/agentic_concepts.md @@ -24,7 +24,7 @@ Structured outputs with LLMs work by providing a specific format or schema that 2. Output parsers: Using post-processing to extract structured data from LLM responses. 3. Tool calling: Leveraging built-in tool calling capabilities of some LLMs to generate structured outputs. -Structured outputs are crucial for routing as they ensure the LLM's decision can be reliably interpreted and acted upon by the system. Learn more about [structured outputs in this how-to guide](https://python.langchain.com/v0.2/docs/how_to/structured_output/). +Structured outputs are crucial for routing as they ensure the LLM's decision can be reliably interpreted and acted upon by the system. Learn more about [structured outputs in this how-to guide](https://python.langchain.com/docs/how_to/structured_output/). ## Tool calling agent diff --git a/docs/docs/concepts/low_level.md b/docs/docs/concepts/low_level.md index 47043bcdc..c8a01879f 100644 --- a/docs/docs/concepts/low_level.md +++ b/docs/docs/concepts/low_level.md @@ -114,7 +114,7 @@ You can use `Context` channels to define shared resources (such as database conn #### Why use messages? -Most modern LLM providers have a chat model interface that accepts a list of messages as input. LangChain's [`ChatModel`](https://python.langchain.com/v0.2/docs/concepts/#chat-models) in particular accepts a list of `Message` objects as inputs. These messages come in a variety of forms such as `HumanMessage` (user input) or `AIMessage` (LLM response). To read more about what message objects are, please refer to [this](https://python.langchain.com/v0.2/docs/concepts/#messages) conceptual guide. +Most modern LLM providers have a chat model interface that accepts a list of messages as input. 
LangChain's [`ChatModel`](https://python.langchain.com/docs/concepts/#chat-models) in particular accepts a list of `Message` objects as inputs. These messages come in a variety of forms such as `HumanMessage` (user input) or `AIMessage` (LLM response). To read more about what message objects are, please refer to [this](https://python.langchain.com/docs/concepts/#messages) conceptual guide. #### Using Messages in your Graph @@ -124,7 +124,7 @@ However, you might also want to manually update messages in your graph state (e. #### Serialization -In addition to keeping track of message IDs, the `add_messages` function will also try to deserialize messages into LangChain `Message` objects whenever a state update is received on the `messages` channel. See more information on LangChain serialization/deserialization [here](https://python.langchain.com/v0.2/docs/how_to/serialization/). This allows sending graph inputs / state updates in the following format: +In addition to keeping track of message IDs, the `add_messages` function will also try to deserialize messages into LangChain `Message` objects whenever a state update is received on the `messages` channel. See more information on LangChain serialization/deserialization [here](https://python.langchain.com/docs/how_to/serialization/). This allows sending graph inputs / state updates in the following format: ```python # this is supported diff --git a/docs/docs/concepts/streaming.md b/docs/docs/concepts/streaming.md index 564144e25..8c557558d 100644 --- a/docs/docs/concepts/streaming.md +++ b/docs/docs/concepts/streaming.md @@ -20,11 +20,11 @@ The below visualization shows the difference between the `values` and `updates` In addition, you can use the [`astream_events`](../how-tos/streaming-events-from-within-tools.ipynb) method to stream back events that happen _inside_ nodes. This is useful for [streaming tokens of LLM calls](../how-tos/streaming-tokens.ipynb). 
-This is a standard method on all [LangChain objects](https://python.langchain.com/v0.2/docs/concepts/#runnable-interface). This means that as the graph is executed, certain events are emitted along the way and can be seen if you run the graph using `.astream_events`. +This is a standard method on all [LangChain objects](https://python.langchain.com/docs/concepts/#runnable-interface). This means that as the graph is executed, certain events are emitted along the way and can be seen if you run the graph using `.astream_events`. All events have (among other things) `event`, `name`, and `data` fields. What do these mean? -- `event`: This is the type of event that is being emitted. You can find a detailed table of all callback events and triggers [here](https://python.langchain.com/v0.2/docs/concepts/#callback-events). +- `event`: This is the type of event that is being emitted. You can find a detailed table of all callback events and triggers [here](https://python.langchain.com/docs/concepts/#callback-events). - `name`: This is the name of event. - `data`: This is the data associated with the event. diff --git a/docs/docs/how-tos/many-tools.ipynb b/docs/docs/how-tos/many-tools.ipynb index 0016092f0..b01d88fd1 100644 --- a/docs/docs/how-tos/many-tools.ipynb +++ b/docs/docs/how-tos/many-tools.ipynb @@ -7,9 +7,9 @@ "source": [ "# How to handle large numbers of tools\n", "\n", - "The subset of available tools to call is generally at the discretion of the model (although many providers also enable the user to [specify or constrain the choice of tool](https://python.langchain.com/v0.2/docs/how_to/tool_choice/)). 
As the number of available tools grows, you may want to limit the scope of the LLM's selection, to decrease token consumption and to help manage sources of error in LLM reasoning.\n", + "The subset of available tools to call is generally at the discretion of the model (although many providers also enable the user to [specify or constrain the choice of tool](https://python.langchain.com/docs/how_to/tool_choice/)). As the number of available tools grows, you may want to limit the scope of the LLM's selection, to decrease token consumption and to help manage sources of error in LLM reasoning.\n", "\n", - "Here we will demonstrate how to dynamically adjust the tools available to a model. Bottom line up front: like [RAG](https://python.langchain.com/v0.2/docs/concepts/#retrieval) and similar methods, we prefix the model invocation by retrieving over available tools. Although we demonstrate one implementation that searches over tool descriptions, the details of the tool selection can be customized as needed.\n", + "Here we will demonstrate how to dynamically adjust the tools available to a model. Bottom line up front: like [RAG](https://python.langchain.com/docs/concepts/#retrieval) and similar methods, we prefix the model invocation by retrieving over available tools. Although we demonstrate one implementation that searches over tool descriptions, the details of the tool selection can be customized as needed.\n", "\n", "## Setup\n", "\n", @@ -142,7 +142,7 @@ "id": "8798a0d2-ea93-45bc-ab55-071ab975f2c2", "metadata": {}, "source": [ - "We will construct a node that retrieves a subset of available tools given the information in the state-- such as a recent user message. In general, the full scope of [retrieval solutions](https://python.langchain.com/v0.2/docs/concepts/#retrieval) are available for this step. As a simple solution, we index embeddings of tool descriptions in a vector store, and associate user queries to tools via semantic search." 
+ "We will construct a node that retrieves a subset of available tools given the information in the state-- such as a recent user message. In general, the full scope of [retrieval solutions](https://python.langchain.com/docs/concepts/#retrieval) are available for this step. As a simple solution, we index embeddings of tool descriptions in a vector store, and associate user queries to tools via semantic search." ] }, { @@ -517,7 +517,7 @@ "This guide provides a minimal implementation for dynamically selecting tools. There is a host of possible improvements and optimizations:\n", "\n", "- **Repeating tool selection**: Here, we repeated tool selection by modifying the `select_tools` node. Another option is to equip the agent with a `reselect_tools` tool, allowing it to re-select tools at its discretion.\n", - "- **Optimizing tool selection**: In general, the full scope of [retrieval solutions](https://python.langchain.com/v0.2/docs/concepts/#retrieval) are available for tool selection. Additional options include:\n", + "- **Optimizing tool selection**: In general, the full scope of [retrieval solutions](https://python.langchain.com/docs/concepts/#retrieval) are available for tool selection. Additional options include:\n", " - Group tools and retrieve over groups;\n", " - Use a chat model to select tools or groups of tool." ] diff --git a/docs/docs/how-tos/memory/manage-conversation-history.ipynb b/docs/docs/how-tos/memory/manage-conversation-history.ipynb index 08f415ef2..efa26872c 100644 --- a/docs/docs/how-tos/memory/manage-conversation-history.ipynb +++ b/docs/docs/how-tos/memory/manage-conversation-history.ipynb @@ -11,8 +11,8 @@ "\n", "Note: this guide focuses on how to do this in LangGraph, where you can fully customize how this is done. 
If you want a more off-the-shelf solution, you can look into functionality provided in LangChain:\n", "\n", - "- [How to filter messages](https://python.langchain.com/v0.2/docs/how_to/filter_messages/)\n", - "- [How to trim messages](https://python.langchain.com/v0.2/docs/how_to/trim_messages/)" + "- [How to filter messages](https://python.langchain.com/docs/how_to/filter_messages/)\n", + "- [How to trim messages](https://python.langchain.com/docs/how_to/trim_messages/)" ] }, { @@ -347,8 +347,8 @@ "source": [ "In the above example we defined the `filter_messages` function ourselves. We also provide off-the-shelf ways to trim and filter messages in LangChain. \n", "\n", - "- [How to filter messages](https://python.langchain.com/v0.2/docs/how_to/filter_messages/)\n", - "- [How to trim messages](https://python.langchain.com/v0.2/docs/how_to/trim_messages/)" + "- [How to filter messages](https://python.langchain.com/docs/how_to/filter_messages/)\n", + "- [How to trim messages](https://python.langchain.com/docs/how_to/trim_messages/)" ] } ], diff --git a/docs/docs/how-tos/pass-run-time-values-to-tools.ipynb b/docs/docs/how-tos/pass-run-time-values-to-tools.ipynb index 0710309de..401cb8787 100644 --- a/docs/docs/how-tos/pass-run-time-values-to-tools.ipynb +++ b/docs/docs/how-tos/pass-run-time-values-to-tools.ipynb @@ -11,7 +11,7 @@ "\n", "In this guide we'll demonstrate how to create tools that take agent state as input.\n", "\n", - "This is a special case of [passing runtime arguments to tools](https://python.langchain.com/v0.2/docs/how_to/tool_runtime/), which you can learn about in the LangChain docs." + "This is a special case of [passing runtime arguments to tools](https://python.langchain.com/docs/how_to/tool_runtime/), which you can learn about in the LangChain docs." 
] }, { @@ -288,7 +288,7 @@ "## Define the nodes\n", "\n", "We now need to define a few different nodes in our graph.\n", - "In `langgraph`, a node can be either a function or a [runnable](https://python.langchain.com/v0.2/docs/concepts/#langchain-expression-language-lcel).\n", + "In `langgraph`, a node can be either a function or a [runnable](https://python.langchain.com/docs/concepts/#langchain-expression-language-lcel).\n", "There are two main nodes we need for this:\n", "\n", "1. The agent: responsible for deciding what (if any) actions to take.\n", @@ -445,7 +445,7 @@ "## Use it!\n", "\n", "We can now use it!\n", - "This now exposes the [same interface](https://python.langchain.com/v0.2/docs/concepts/#langchain-expression-language-lcel) as all other LangChain runnables." + "This now exposes the [same interface](https://python.langchain.com/docs/concepts/#langchain-expression-language-lcel) as all other LangChain runnables." ] }, { diff --git a/docs/docs/how-tos/persistence.ipynb b/docs/docs/how-tos/persistence.ipynb index ce8524ddf..148f59384 100644 --- a/docs/docs/how-tos/persistence.ipynb +++ b/docs/docs/how-tos/persistence.ipynb @@ -39,7 +39,7 @@ "
\n", "

Note

\n", "

\n", - " In this how-to, we will create our agent from scratch to be transparent (but verbose). You can accomplish similar functionality using the create_react_agent(model, tools=tool, checkpointer=checkpointer) (API doc) constructor. This may be more appropriate if you are used to LangChain’s AgentExecutor class.\n", + " In this how-to, we will create our agent from scratch to be transparent (but verbose). You can accomplish similar functionality using the create_react_agent(model, tools=tool, checkpointer=checkpointer) (API doc) constructor. This may be more appropriate if you are used to LangChain’s AgentExecutor class.\n", "

\n", "
" ] @@ -147,7 +147,7 @@ "\n", "We will first define the tools we want to use.\n", "For this simple example, we will use create a placeholder search engine.\n", - "However, it is really easy to create your own tools - see documentation [here](https://python.langchain.com/v0.2/docs/how_to/custom_tools) on how to do that.\n" + "However, it is really easy to create your own tools - see documentation [here](https://python.langchain.com/docs/how_to/custom_tools) on how to do that.\n" ] }, { @@ -198,11 +198,11 @@ "source": [ "## Define the model\n", "\n", - "Now we need to load the [chat model](https://python.langchain.com/v0.2/docs/concepts/#chat-models) to power our agent.\n", + "Now we need to load the [chat model](https://python.langchain.com/docs/concepts/#chat-models) to power our agent.\n", "For the design below, it must satisfy two criteria:\n", "\n", "1. It should work with **messages** (since our state contains a list of chat messages)\n", - "2. It should work with [**tool calling**](https://python.langchain.com/v0.2/docs/concepts/#functiontool-calling).\n", + "2. It should work with [**tool calling**](https://python.langchain.com/docs/concepts/#functiontool-calling).\n", "\n", "
\n", "

Note

\n", @@ -255,7 +255,7 @@ "## Define nodes and edges \n", "\n", "We now need to define a few different nodes in our graph.\n", - "In `langgraph`, a node can be either a function or a [runnable](https://python.langchain.com/v0.2/docs/concepts/#langchain-expression-language-lcel).\n", + "In `langgraph`, a node can be either a function or a [runnable](https://python.langchain.com/docs/concepts/#langchain-expression-language-lcel).\n", "There are two main nodes we need for this:\n", "\n", "1. The agent: responsible for deciding what (if any) actions to take.\n", diff --git a/docs/docs/how-tos/react-agent-structured-output.ipynb b/docs/docs/how-tos/react-agent-structured-output.ipynb index b816dc49c..79b09d751 100644 --- a/docs/docs/how-tos/react-agent-structured-output.ipynb +++ b/docs/docs/how-tos/react-agent-structured-output.ipynb @@ -100,7 +100,7 @@ "\n", "Now we can define how we want to structure our output, define our graph state, and also our tools and the models we are going to use.\n", "\n", - "To use structured output, we will use the `with_structured_output` method from LangChain, which you can read more about [here](https://python.langchain.com/v0.2/docs/how_to/structured_output/).\n", + "To use structured output, we will use the `with_structured_output` method from LangChain, which you can read more about [here](https://python.langchain.com/docs/how_to/structured_output/).\n", "\n", "We are going to use a single tool in this example for finding the weather, and will return a structured weather response to the user." 
] diff --git a/docs/docs/how-tos/state-context-key.ipynb b/docs/docs/how-tos/state-context-key.ipynb index a6e983b31..9cca2dab1 100644 --- a/docs/docs/how-tos/state-context-key.ipynb +++ b/docs/docs/how-tos/state-context-key.ipynb @@ -87,7 +87,7 @@ "\n", "We will first define the tools we want to use.\n", "For this simple example, we will use create a placeholder search engine.\n", - "However, it is really easy to create your own tools - see documentation [here](https://python.langchain.com/v0.2/docs/how_to/custom_tools) on how to do that.\n" + "However, it is really easy to create your own tools - see documentation [here](https://python.langchain.com/docs/how_to/custom_tools) on how to do that.\n" ] }, { @@ -280,7 +280,7 @@ "## Define the nodes\n", "\n", "We now need to define a few different nodes in our graph.\n", - "In `langgraph`, a node can be either a function or a [runnable](https://python.langchain.com/v0.2/docs/concepts/#langchain-expression-language-lcel).\n", + "In `langgraph`, a node can be either a function or a [runnable](https://python.langchain.com/docs/concepts/#langchain-expression-language-lcel).\n", "There are two main nodes we need for this:\n", "\n", "1. The agent: responsible for deciding what (if any) actions to take.\n", @@ -426,7 +426,7 @@ "## Use it!\n", "\n", "We can now use it!\n", - "This now exposes the [same interface](https://python.langchain.com/v0.2/docs/concepts/#langchain-expression-language-lcel) as all other LangChain runnables." + "This now exposes the [same interface](https://python.langchain.com/docs/concepts/#langchain-expression-language-lcel) as all other LangChain runnables." 
] }, { diff --git a/docs/docs/how-tos/state-model.ipynb b/docs/docs/how-tos/state-model.ipynb index fe02507df..addaaae08 100644 --- a/docs/docs/how-tos/state-model.ipynb +++ b/docs/docs/how-tos/state-model.ipynb @@ -94,7 +94,7 @@ "\n", "We will first define the tools we want to use.\n", "For this simple example, we will use create a placeholder search engine.\n", - "However, it is really easy to create your own tools - see documentation [here](https://python.langchain.com/v0.2/docs/how_to/custom_tools) on how to do that.\n" + "However, it is really easy to create your own tools - see documentation [here](https://python.langchain.com/docs/how_to/custom_tools) on how to do that.\n" ] }, { @@ -243,7 +243,7 @@ "## Define the nodes\n", "\n", "We now need to define a few different nodes in our graph.\n", - "In `langgraph`, a node can be either a function or a [runnable](https://python.langchain.com/v0.2/docs/concepts/#langchain-expression-language-lcel).\n", + "In `langgraph`, a node can be either a function or a [runnable](https://python.langchain.com/docs/concepts/#langchain-expression-language-lcel).\n", "There are two main nodes we need for this:\n", "\n", "1. The agent: responsible for deciding what (if any) actions to take.\n", @@ -385,7 +385,7 @@ "## Use it!\n", "\n", "We can now use it!\n", - "This now exposes the [same interface](https://python.langchain.com/v0.2/docs/concepts/#langchain-expression-language-lcel) as all other LangChain runnables." + "This now exposes the [same interface](https://python.langchain.com/docs/concepts/#langchain-expression-language-lcel) as all other LangChain runnables." ] }, { diff --git a/docs/docs/how-tos/streaming-tokens.ipynb b/docs/docs/how-tos/streaming-tokens.ipynb index e358e4f6c..170373cd1 100644 --- a/docs/docs/how-tos/streaming-tokens.ipynb +++ b/docs/docs/how-tos/streaming-tokens.ipynb @@ -14,7 +14,7 @@ "
\n", "

Note

\n", "

\n", - " In this how-to, we will create our agent from scratch to be transparent (but verbose). You can accomplish similar functionality using the create_react_agent(model, tools=tool) (API doc) constructor. This may be more appropriate if you are used to LangChain’s AgentExecutor class.\n", + " In this how-to, we will create our agent from scratch to be transparent (but verbose). You can accomplish similar functionality using the create_react_agent(model, tools=tool) (API doc) constructor. This may be more appropriate if you are used to LangChain’s AgentExecutor class.\n", "

\n", "
\n", "\n", @@ -138,7 +138,7 @@ "\n", "We will first define the tools we want to use.\n", "For this simple example, we will use create a placeholder search engine.\n", - "It is really easy to create your own tools - see documentation [here](https://python.langchain.com/v0.2/docs/how_to/custom_tools) on how to do that.\n" + "It is really easy to create your own tools - see documentation [here](https://python.langchain.com/docs/how_to/custom_tools) on how to do that.\n" ] }, { @@ -238,7 +238,7 @@ "## Define the nodes\n", "\n", "We now need to define a few different nodes in our graph.\n", - "In `langgraph`, a node can be either a function or a [runnable](https://python.langchain.com/v0.2/docs/concepts/#langchain-expression-language-lcel).\n", + "In `langgraph`, a node can be either a function or a [runnable](https://python.langchain.com/docs/concepts/#langchain-expression-language-lcel).\n", "There are two main nodes we need for this:\n", "\n", "1. The agent: responsible for deciding what (if any) actions to take.\n", diff --git a/docs/docs/tutorials/code_assistant/langgraph_code_assistant.ipynb b/docs/docs/tutorials/code_assistant/langgraph_code_assistant.ipynb index f17447a73..9db776443 100644 --- a/docs/docs/tutorials/code_assistant/langgraph_code_assistant.ipynb +++ b/docs/docs/tutorials/code_assistant/langgraph_code_assistant.ipynb @@ -88,7 +88,7 @@ "source": [ "## Docs\n", "\n", - "Load [LangChain Expression Language](https://python.langchain.com/v0.2/docs/concepts/#langchain-expression-language-lcel) (LCEL) docs as an example." + "Load [LangChain Expression Language](https://python.langchain.com/docs/concepts/#langchain-expression-language-lcel) (LCEL) docs as an example." 
] }, { @@ -102,7 +102,7 @@ "from langchain_community.document_loaders.recursive_url_loader import RecursiveUrlLoader\n", "\n", "# LCEL docs\n", - "url = \"https://python.langchain.com/v0.2/docs/concepts/#langchain-expression-language-lcel\"\n", + "url = \"https://python.langchain.com/docs/concepts/#langchain-expression-language-lcel\"\n", "loader = RecursiveUrlLoader(\n", " url=url, max_depth=20, extractor=lambda x: Soup(x, \"html.parser\").text\n", ")\n", diff --git a/docs/docs/tutorials/customer-support/customer-support.ipynb b/docs/docs/tutorials/customer-support/customer-support.ipynb index 2db7586a9..0e4ab07b7 100644 --- a/docs/docs/tutorials/customer-support/customer-support.ipynb +++ b/docs/docs/tutorials/customer-support/customer-support.ipynb @@ -238,7 +238,7 @@ "\n", "Define the (`fetch_user_flight_information`) tool to let the agent see the current user's flight information. Then define tools to search for flights and manage the passenger's bookings stored in the SQL database.\n", "\n", - "We the can [access the RunnableConfig](https://python.langchain.com/v0.2/docs/how_to/tool_configure/#inferring-by-parameter-type) for a given run to check the `passenger_id` of the user accessing this application. The LLM never has to provide these explicitly, they are provided for a given invocation of the graph so that each user cannot access other passengers' booking information.\n", + "We the can [access the RunnableConfig](https://python.langchain.com/docs/how_to/tool_configure/#inferring-by-parameter-type) for a given run to check the `passenger_id` of the user accessing this application. The LLM never has to provide these explicitly, they are provided for a given invocation of the graph so that each user cannot access other passengers' booking information.\n", "\n", "
\n", "

Compatibility

\n", diff --git a/docs/docs/tutorials/extraction/retries.ipynb b/docs/docs/tutorials/extraction/retries.ipynb index 2603b790b..7cc088469 100644 --- a/docs/docs/tutorials/extraction/retries.ipynb +++ b/docs/docs/tutorials/extraction/retries.ipynb @@ -435,7 +435,7 @@ "from langchain_core.prompts import ChatPromptTemplate\n", "\n", "# Or you can use ChatGroq, ChatOpenAI, ChatGoogleGemini, ChatCohere, etc.\n", - "# See https://python.langchain.com/v0.2/docs/integrations/chat/ for more info on tool calling\n", + "# See https://python.langchain.com/docs/integrations/chat/ for more info on tool calling\n", "llm = ChatAnthropic(model=\"claude-3-haiku-20240307\")\n", "bound_llm = bind_validator_with_retries(llm, tools=tools)\n", "prompt = ChatPromptTemplate.from_messages(\n", diff --git a/docs/docs/tutorials/introduction.ipynb b/docs/docs/tutorials/introduction.ipynb index 63063a9d5..579f9dbaf 100644 --- a/docs/docs/tutorials/introduction.ipynb +++ b/docs/docs/tutorials/introduction.ipynb @@ -404,7 +404,7 @@ "\n", "Before we start, make sure you have the necessary packages installed and API keys set up:\n", "\n", - "First, install the requirements to use the [Tavily Search Engine](https://python.langchain.com/v0.2/docs/integrations/tools/tavily_search/), and set your [TAVILY_API_KEY](https://tavily.com/)." + "First, install the requirements to use the [Tavily Search Engine](https://python.langchain.com/docs/integrations/tools/tavily_search/), and set your [TAVILY_API_KEY](https://tavily.com/)." ] }, { diff --git a/docs/docs/tutorials/llm-compiler/LLMCompiler.ipynb b/docs/docs/tutorials/llm-compiler/LLMCompiler.ipynb index 688d2ff68..9c51ccda5 100644 --- a/docs/docs/tutorials/llm-compiler/LLMCompiler.ipynb +++ b/docs/docs/tutorials/llm-compiler/LLMCompiler.ipynb @@ -454,7 +454,7 @@ "\n", "We'll first define the tools for the agent to use in our demo. 
We'll give it the class search engine + calculator combo.\n", "\n", - "If you don't want to sign up for tavily, you can replace it with the free [DuckDuckGo](https://python.langchain.com/v0.2/docs/integrations/tools/ddg/)." + "If you don't want to sign up for Tavily, you can replace it with the free [DuckDuckGo](https://python.langchain.com/docs/integrations/tools/ddg/)." ] }, { diff --git a/docs/docs/tutorials/plan-and-execute/plan-and-execute.ipynb b/docs/docs/tutorials/plan-and-execute/plan-and-execute.ipynb index ae626ba97..bb383477e 100644 --- a/docs/docs/tutorials/plan-and-execute/plan-and-execute.ipynb +++ b/docs/docs/tutorials/plan-and-execute/plan-and-execute.ipynb @@ -102,7 +102,7 @@ "source": [ "## Define Tools\n", "\n", - "We will first define the tools we want to use. For this simple example, we will use a built-in search tool via Tavily. However, it is really easy to create your own tools - see documentation [here](https://python.langchain.com/v0.2/docs/how_to/custom_tools) on how to do that." + "We will first define the tools we want to use. For this simple example, we will use a built-in search tool via Tavily. However, it is really easy to create your own tools - see documentation [here](https://python.langchain.com/docs/how_to/custom_tools) on how to do that." 
] }, { diff --git a/docs/docs/tutorials/rag/langgraph_agentic_rag.ipynb b/docs/docs/tutorials/rag/langgraph_agentic_rag.ipynb index 5802ee040..060c230ef 100644 --- a/docs/docs/tutorials/rag/langgraph_agentic_rag.ipynb +++ b/docs/docs/tutorials/rag/langgraph_agentic_rag.ipynb @@ -7,7 +7,7 @@ "source": [ "# Agentic RAG\n", "\n", - "[Retrieval Agents](https://python.langchain.com/v0.2/docs/tutorials/qa_chat_history/#agents) are useful when we want to make decisions about whether to retrieve from an index.\n", + "[Retrieval Agents](https://python.langchain.com/docs/tutorials/qa_chat_history/#agents) are useful when we want to make decisions about whether to retrieve from an index.\n", "\n", "To implement a retrieval agent, we simple need to give an LLM access to a retriever tool.\n", "\n", diff --git a/docs/docs/tutorials/rag/langgraph_crag.ipynb b/docs/docs/tutorials/rag/langgraph_crag.ipynb index caa2e3f78..74b5190c9 100644 --- a/docs/docs/tutorials/rag/langgraph_crag.ipynb +++ b/docs/docs/tutorials/rag/langgraph_crag.ipynb @@ -27,7 +27,7 @@ "\n", "* Let's skip the knowledge refinement phase as a first pass. This can be added back as a node, if desired. \n", "* If *any* documents are irrelevant, let's opt to supplement retrieval with web search. 
\n", - "* We'll use [Tavily Search](https://python.langchain.com/v0.2/docs/integrations/tools/tavily_search/) for web search.\n", + "* We'll use [Tavily Search](https://python.langchain.com/docs/integrations/tools/tavily_search/) for web search.\n", "* Let's use query re-writing to optimize the query for web search.\n", "\n", "![Screenshot 2024-04-01 at 9.28.30 AM.png](attachment:683fae34-980f-43f0-a9c2-9894bebd9157.png)" diff --git a/docs/docs/tutorials/rag/langgraph_crag_local.ipynb b/docs/docs/tutorials/rag/langgraph_crag_local.ipynb index 906e5a8d2..e93ad7337 100644 --- a/docs/docs/tutorials/rag/langgraph_crag_local.ipynb +++ b/docs/docs/tutorials/rag/langgraph_crag_local.ipynb @@ -26,7 +26,7 @@ "\n", "* If *any* documents are irrelevant, we'll supplement retrieval with web search. \n", "* We'll skip the knowledge refinement, but this can be added back as a node if desired. \n", - "* We'll use [Tavily Search](https://python.langchain.com/v0.2/docs/integrations/tools/tavily_search/) for web search.\n", + "* We'll use [Tavily Search](https://python.langchain.com/docs/integrations/tools/tavily_search/) for web search.\n", "\n", "![Screenshot 2024-06-24 at 3.03.16 PM.png](attachment:b77a7d3b-b28a-4dcf-9f1a-861f2f2c5f6c.png)" ] @@ -43,7 +43,7 @@ "* Download [Ollama app](https://ollama.ai/).\n", "* Pull your model of choice, e.g.: `ollama pull llama3`\n", "\n", - "We'll use [Tavily](https://python.langchain.com/v0.2/docs/integrations/tools/tavily_search/) for web search.\n", + "We'll use [Tavily](https://python.langchain.com/docs/integrations/tools/tavily_search/) for web search.\n", "\n", "We'll use a vectorstore with [Nomic local embeddings](https://blog.nomic.ai/posts/nomic-embed-text-v1) or, optionally, OpenAI embeddings.\n", "\n", diff --git a/docs/docs/tutorials/rewoo/rewoo.ipynb b/docs/docs/tutorials/rewoo/rewoo.ipynb index 8c68a3460..88dffde26 100644 --- a/docs/docs/tutorials/rewoo/rewoo.ipynb +++ b/docs/docs/tutorials/rewoo/rewoo.ipynb @@ -41,7 +41,7 @@ 
"\n", "## Setup\n", "\n", - "For this example, we will provide the agent with a Tavily search engine tool. You can get an API key [here](https://app.tavily.com/sign-in) or replace with a free tool option (e.g., [duck duck go search](https://python.langchain.com/v0.2/docs/integrations/tools/ddg/)).\n", + "For this example, we will provide the agent with a Tavily search engine tool. You can get an API key [here](https://app.tavily.com/sign-in) or replace with a free tool option (e.g., [duck duck go search](https://python.langchain.com/docs/integrations/tools/ddg/)).\n", "\n", "Let's install the required packages and set our API keys" ] diff --git a/docs/docs/tutorials/usaco/usaco.ipynb b/docs/docs/tutorials/usaco/usaco.ipynb index 490e29240..49b3a6e72 100644 --- a/docs/docs/tutorials/usaco/usaco.ipynb +++ b/docs/docs/tutorials/usaco/usaco.ipynb @@ -336,7 +336,7 @@ "source": [ "#### Node 1: Solver\n", "\n", - "Create a `solver` node that prompts an LLM \"agent\" to use a [writePython tool](https://python.langchain.com/v0.2/docs/integrations/chat/anthropic/#integration-details) to generate the submitted code." + "Create a `solver` node that prompts an LLM \"agent\" to use a [writePython tool](https://python.langchain.com/docs/integrations/chat/anthropic/#integration-details) to generate the submitted code." ] }, {