
Commit

Update all URLs to langchain docunotebooks (#1745)
eyurtsev authored Sep 17, 2024
1 parent 36c757e commit b11552a
Showing 22 changed files with 44 additions and 44 deletions.
2 changes: 1 addition & 1 deletion docs/docs/concepts/agentic_concepts.md
@@ -24,7 +24,7 @@ Structured outputs with LLMs work by providing a specific format or schema that
2. Output parsers: Using post-processing to extract structured data from LLM responses.
3. Tool calling: Leveraging built-in tool calling capabilities of some LLMs to generate structured outputs.

-Structured outputs are crucial for routing as they ensure the LLM's decision can be reliably interpreted and acted upon by the system. Learn more about [structured outputs in this how-to guide](https://python.langchain.com/v0.2/docs/how_to/structured_output/).
+Structured outputs are crucial for routing as they ensure the LLM's decision can be reliably interpreted and acted upon by the system. Learn more about [structured outputs in this how-to guide](https://python.langchain.com/docs/how_to/structured_output/).

## Tool calling agent

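
For reference, the tool-calling approach to structured output described in the hunk above can be sketched roughly as follows (an illustrative example only, assuming `langchain-openai` is installed; the schema and model name are invented):

```python
from pydantic import BaseModel, Field
from langchain_openai import ChatOpenAI


class Route(BaseModel):
    """Routing decision produced by the LLM."""

    destination: str = Field(description="Node to route to, e.g. 'search' or 'respond'")


llm = ChatOpenAI(model="gpt-4o-mini")  # hypothetical model choice
router = llm.with_structured_output(Route)  # relies on the model's tool-calling support

decision = router.invoke("Find the latest LangGraph release notes")
print(decision.destination)  # a validated Route instance the system can branch on
```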
4 changes: 2 additions & 2 deletions docs/docs/concepts/low_level.md
@@ -114,7 +114,7 @@ You can use `Context` channels to define shared resources (such as database conn

#### Why use messages?

-Most modern LLM providers have a chat model interface that accepts a list of messages as input. LangChain's [`ChatModel`](https://python.langchain.com/v0.2/docs/concepts/#chat-models) in particular accepts a list of `Message` objects as inputs. These messages come in a variety of forms such as `HumanMessage` (user input) or `AIMessage` (LLM response). To read more about what message objects are, please refer to [this](https://python.langchain.com/v0.2/docs/concepts/#messages) conceptual guide.
+Most modern LLM providers have a chat model interface that accepts a list of messages as input. LangChain's [`ChatModel`](https://python.langchain.com/docs/concepts/#chat-models) in particular accepts a list of `Message` objects as inputs. These messages come in a variety of forms such as `HumanMessage` (user input) or `AIMessage` (LLM response). To read more about what message objects are, please refer to [this](https://python.langchain.com/docs/concepts/#messages) conceptual guide.

#### Using Messages in your Graph

@@ -124,7 +124,7 @@ However, you might also want to manually update messages in your graph state (e.

#### Serialization

-In addition to keeping track of message IDs, the `add_messages` function will also try to deserialize messages into LangChain `Message` objects whenever a state update is received on the `messages` channel. See more information on LangChain serialization/deserialization [here](https://python.langchain.com/v0.2/docs/how_to/serialization/). This allows sending graph inputs / state updates in the following format:
+In addition to keeping track of message IDs, the `add_messages` function will also try to deserialize messages into LangChain `Message` objects whenever a state update is received on the `messages` channel. See more information on LangChain serialization/deserialization [here](https://python.langchain.com/docs/how_to/serialization/). This allows sending graph inputs / state updates in the following format:

```python
# this is supported
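# Hypothetical illustration (not taken from the diffed file): a plain dict like
# this is coerced into a HumanMessage by `add_messages` when the update is applied.
{"messages": [{"role": "user", "content": "hi"}]}
```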
4 changes: 2 additions & 2 deletions docs/docs/concepts/streaming.md
@@ -20,11 +20,11 @@ The below visualization shows the difference between the `values` and `updates`

In addition, you can use the [`astream_events`](../how-tos/streaming-events-from-within-tools.ipynb) method to stream back events that happen _inside_ nodes. This is useful for [streaming tokens of LLM calls](../how-tos/streaming-tokens.ipynb).

-This is a standard method on all [LangChain objects](https://python.langchain.com/v0.2/docs/concepts/#runnable-interface). This means that as the graph is executed, certain events are emitted along the way and can be seen if you run the graph using `.astream_events`.
+This is a standard method on all [LangChain objects](https://python.langchain.com/docs/concepts/#runnable-interface). This means that as the graph is executed, certain events are emitted along the way and can be seen if you run the graph using `.astream_events`.

All events have (among other things) `event`, `name`, and `data` fields. What do these mean?

-- `event`: This is the type of event that is being emitted. You can find a detailed table of all callback events and triggers [here](https://python.langchain.com/v0.2/docs/concepts/#callback-events).
+- `event`: This is the type of event that is being emitted. You can find a detailed table of all callback events and triggers [here](https://python.langchain.com/docs/concepts/#callback-events).
- `name`: This is the name of the event.
- `data`: This is the data associated with the event.

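
As a rough usage sketch of the event fields above (illustrative only; it assumes a compiled LangGraph `graph` whose nodes call a chat model):

```python
import asyncio


async def show_events(graph):
    # `graph` is assumed to be a compiled LangGraph graph (e.g. builder.compile()).
    async for event in graph.astream_events({"messages": [("user", "hi")]}, version="v2"):
        kind = event["event"]  # e.g. "on_chain_start", "on_chat_model_stream", ...
        name = event["name"]   # name of the runnable that emitted the event
        if kind == "on_chat_model_stream":
            # token-by-token chunks from an LLM call inside a node
            print(event["data"]["chunk"].content, end="", flush=True)


# asyncio.run(show_events(graph))  # given a compiled `graph`
```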
8 changes: 4 additions & 4 deletions docs/docs/how-tos/many-tools.ipynb
@@ -7,9 +7,9 @@
"source": [
"# How to handle large numbers of tools\n",
"\n",
"The subset of available tools to call is generally at the discretion of the model (although many providers also enable the user to [specify or constrain the choice of tool](https://python.langchain.com/v0.2/docs/how_to/tool_choice/)). As the number of available tools grows, you may want to limit the scope of the LLM's selection, to decrease token consumption and to help manage sources of error in LLM reasoning.\n",
"The subset of available tools to call is generally at the discretion of the model (although many providers also enable the user to [specify or constrain the choice of tool](https://python.langchain.com/docs/how_to/tool_choice/)). As the number of available tools grows, you may want to limit the scope of the LLM's selection, to decrease token consumption and to help manage sources of error in LLM reasoning.\n",
"\n",
"Here we will demonstrate how to dynamically adjust the tools available to a model. Bottom line up front: like [RAG](https://python.langchain.com/v0.2/docs/concepts/#retrieval) and similar methods, we prefix the model invocation by retrieving over available tools. Although we demonstrate one implementation that searches over tool descriptions, the details of the tool selection can be customized as needed.\n",
"Here we will demonstrate how to dynamically adjust the tools available to a model. Bottom line up front: like [RAG](https://python.langchain.com/docs/concepts/#retrieval) and similar methods, we prefix the model invocation by retrieving over available tools. Although we demonstrate one implementation that searches over tool descriptions, the details of the tool selection can be customized as needed.\n",
"\n",
"## Setup\n",
"\n",
@@ -142,7 +142,7 @@
"id": "8798a0d2-ea93-45bc-ab55-071ab975f2c2",
"metadata": {},
"source": [
"We will construct a node that retrieves a subset of available tools given the information in the state-- such as a recent user message. In general, the full scope of [retrieval solutions](https://python.langchain.com/v0.2/docs/concepts/#retrieval) are available for this step. As a simple solution, we index embeddings of tool descriptions in a vector store, and associate user queries to tools via semantic search."
"We will construct a node that retrieves a subset of available tools given the information in the state-- such as a recent user message. In general, the full scope of [retrieval solutions](https://python.langchain.com/docs/concepts/#retrieval) are available for this step. As a simple solution, we index embeddings of tool descriptions in a vector store, and associate user queries to tools via semantic search."
]
},
{
@@ -517,7 +517,7 @@
"This guide provides a minimal implementation for dynamically selecting tools. There is a host of possible improvements and optimizations:\n",
"\n",
"- **Repeating tool selection**: Here, we repeated tool selection by modifying the `select_tools` node. Another option is to equip the agent with a `reselect_tools` tool, allowing it to re-select tools at its discretion.\n",
"- **Optimizing tool selection**: In general, the full scope of [retrieval solutions](https://python.langchain.com/v0.2/docs/concepts/#retrieval) are available for tool selection. Additional options include:\n",
"- **Optimizing tool selection**: In general, the full scope of [retrieval solutions](https://python.langchain.com/docs/concepts/#retrieval) are available for tool selection. Additional options include:\n",
" - Group tools and retrieve over groups;\n",
" - Use a chat model to select tools or groups of tool."
]
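
As a rough, illustrative sketch of the retrieval step this notebook describes (it assumes `langchain-openai` and a recent `langchain-core`; the tool names and descriptions are invented):

```python
from langchain_core.documents import Document
from langchain_core.vectorstores import InMemoryVectorStore
from langchain_openai import OpenAIEmbeddings

# Hypothetical registry of many tools: name -> description
tool_registry = {
    "get_weather": "Look up the current weather for a city.",
    "get_stock_price": "Fetch the latest stock price for a ticker symbol.",
    "search_wikipedia": "Search Wikipedia articles on a topic.",
}

# Index one document per tool; the tool name is stored as the document id.
tool_docs = [
    Document(page_content=description, id=name)
    for name, description in tool_registry.items()
]
vector_store = InMemoryVectorStore(embedding=OpenAIEmbeddings())
vector_store.add_documents(tool_docs)


def select_tools(query: str, k: int = 2) -> list[str]:
    """Return the names of the k tools most relevant to the query."""
    return [doc.id for doc in vector_store.similarity_search(query, k=k)]


print(select_tools("What's AAPL trading at?"))  # likely ['get_stock_price', ...]
```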
8 changes: 4 additions & 4 deletions docs/docs/how-tos/memory/manage-conversation-history.ipynb
@@ -11,8 +11,8 @@
"\n",
"Note: this guide focuses on how to do this in LangGraph, where you can fully customize how this is done. If you want a more off-the-shelf solution, you can look into functionality provided in LangChain:\n",
"\n",
"- [How to filter messages](https://python.langchain.com/v0.2/docs/how_to/filter_messages/)\n",
"- [How to trim messages](https://python.langchain.com/v0.2/docs/how_to/trim_messages/)"
"- [How to filter messages](https://python.langchain.com/docs/how_to/filter_messages/)\n",
"- [How to trim messages](https://python.langchain.com/docs/how_to/trim_messages/)"
]
},
{
@@ -347,8 +347,8 @@
"source": [
"In the above example we defined the `filter_messages` function ourselves. We also provide off-the-shelf ways to trim and filter messages in LangChain. \n",
"\n",
"- [How to filter messages](https://python.langchain.com/v0.2/docs/how_to/filter_messages/)\n",
"- [How to trim messages](https://python.langchain.com/v0.2/docs/how_to/trim_messages/)"
"- [How to filter messages](https://python.langchain.com/docs/how_to/filter_messages/)\n",
"- [How to trim messages](https://python.langchain.com/docs/how_to/trim_messages/)"
]
}
],
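
For reference, the off-the-shelf `trim_messages` helper linked above can be used roughly like this (a sketch; the conversation history is invented):

```python
from langchain_core.messages import AIMessage, HumanMessage, SystemMessage, trim_messages

history = [
    SystemMessage("You are a helpful assistant."),
    HumanMessage("Hi, I'm Bob."),
    AIMessage("Hello Bob! How can I help?"),
    HumanMessage("What's my name?"),
]

# Keep only the most recent messages. With token_counter=len, each message
# counts as 1, so max_tokens=3 means "keep at most three messages".
trimmed = trim_messages(
    history,
    strategy="last",
    token_counter=len,
    max_tokens=3,
    include_system=True,  # keep the system message if there is one
    start_on="human",     # the kept history should begin with a human turn
)
```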
6 changes: 3 additions & 3 deletions docs/docs/how-tos/pass-run-time-values-to-tools.ipynb
@@ -11,7 +11,7 @@
"\n",
"In this guide we'll demonstrate how to create tools that take agent state as input.\n",
"\n",
"This is a special case of [passing runtime arguments to tools](https://python.langchain.com/v0.2/docs/how_to/tool_runtime/), which you can learn about in the LangChain docs."
"This is a special case of [passing runtime arguments to tools](https://python.langchain.com/docs/how_to/tool_runtime/), which you can learn about in the LangChain docs."
]
},
{
@@ -288,7 +288,7 @@
"## Define the nodes\n",
"\n",
"We now need to define a few different nodes in our graph.\n",
"In `langgraph`, a node can be either a function or a [runnable](https://python.langchain.com/v0.2/docs/concepts/#langchain-expression-language-lcel).\n",
"In `langgraph`, a node can be either a function or a [runnable](https://python.langchain.com/docs/concepts/#langchain-expression-language-lcel).\n",
"There are two main nodes we need for this:\n",
"\n",
"1. The agent: responsible for deciding what (if any) actions to take.\n",
@@ -445,7 +445,7 @@
"## Use it!\n",
"\n",
"We can now use it!\n",
"This now exposes the [same interface](https://python.langchain.com/v0.2/docs/concepts/#langchain-expression-language-lcel) as all other LangChain runnables."
"This now exposes the [same interface](https://python.langchain.com/docs/concepts/#langchain-expression-language-lcel) as all other LangChain runnables."
]
},
{
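
A minimal sketch of a tool that reads graph state, in the spirit of this notebook (illustrative; the state shape and tool are invented):

```python
from typing import Annotated

from langchain_core.tools import tool
from langgraph.prebuilt import InjectedState


@tool
def get_matching_docs(query: str, state: Annotated[dict, InjectedState]) -> list[str]:
    """Search only the documents already stored in the agent's state."""
    # `state` is injected at runtime by the tool-executing node; the model never
    # sees it as an argument it has to fill in.
    docs = state.get("docs", [])
    return [d for d in docs if query.lower() in d.lower()]
```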
10 changes: 5 additions & 5 deletions docs/docs/how-tos/persistence.ipynb
@@ -39,7 +39,7 @@
"<div class=\"admonition tip\">\n",
" <p class=\"admonition-title\">Note</p>\n",
" <p>\n",
" In this how-to, we will create our agent from scratch to be transparent (but verbose). You can accomplish similar functionality using the <code>create_react_agent(model, tools=tool, checkpointer=checkpointer)</code> (<a href=\"https://langchain-ai.github.io/langgraph/reference/prebuilt/#create_react_agent\">API doc</a>) constructor. This may be more appropriate if you are used to LangChain’s <a href=\"https://python.langchain.com/v0.2/docs/how_to/agent_executor/#concepts\">AgentExecutor</a> class.\n",
" In this how-to, we will create our agent from scratch to be transparent (but verbose). You can accomplish similar functionality using the <code>create_react_agent(model, tools=tool, checkpointer=checkpointer)</code> (<a href=\"https://langchain-ai.github.io/langgraph/reference/prebuilt/#create_react_agent\">API doc</a>) constructor. This may be more appropriate if you are used to LangChain’s <a href=\"https://python.langchain.com/docs/how_to/agent_executor/#concepts\">AgentExecutor</a> class.\n",
" </p>\n",
"</div> "
]
@@ -147,7 +147,7 @@
"\n",
"We will first define the tools we want to use.\n",
"For this simple example, we will use create a placeholder search engine.\n",
"However, it is really easy to create your own tools - see documentation [here](https://python.langchain.com/v0.2/docs/how_to/custom_tools) on how to do that.\n"
"However, it is really easy to create your own tools - see documentation [here](https://python.langchain.com/docs/how_to/custom_tools) on how to do that.\n"
]
},
{
@@ -198,11 +198,11 @@
"source": [
"## Define the model\n",
"\n",
"Now we need to load the [chat model](https://python.langchain.com/v0.2/docs/concepts/#chat-models) to power our agent.\n",
"Now we need to load the [chat model](https://python.langchain.com/docs/concepts/#chat-models) to power our agent.\n",
"For the design below, it must satisfy two criteria:\n",
"\n",
"1. It should work with **messages** (since our state contains a list of chat messages)\n",
"2. It should work with [**tool calling**](https://python.langchain.com/v0.2/docs/concepts/#functiontool-calling).\n",
"2. It should work with [**tool calling**](https://python.langchain.com/docs/concepts/#functiontool-calling).\n",
"\n",
"<div class=\"admonition tip\">\n",
" <p class=\"admonition-title\">Note</p>\n",
@@ -255,7 +255,7 @@
"## Define nodes and edges \n",
"\n",
"We now need to define a few different nodes in our graph.\n",
"In `langgraph`, a node can be either a function or a [runnable](https://python.langchain.com/v0.2/docs/concepts/#langchain-expression-language-lcel).\n",
"In `langgraph`, a node can be either a function or a [runnable](https://python.langchain.com/docs/concepts/#langchain-expression-language-lcel).\n",
"There are two main nodes we need for this:\n",
"\n",
"1. The agent: responsible for deciding what (if any) actions to take.\n",
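
To make the checkpointer idea concrete, a compact sketch using the prebuilt agent mentioned in the note above (illustrative; it assumes `langchain-openai`, and the tool and thread id are invented):

```python
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.checkpoint.memory import MemorySaver
from langgraph.prebuilt import create_react_agent


@tool
def search(query: str) -> str:
    """Placeholder search engine."""
    return "It's sunny in San Francisco."


checkpointer = MemorySaver()  # in-memory persistence; a database-backed saver would be used in production
agent = create_react_agent(ChatOpenAI(model="gpt-4o-mini"), [search], checkpointer=checkpointer)

config = {"configurable": {"thread_id": "example-thread"}}
agent.invoke({"messages": [("user", "Hi, I'm Bob. How's the weather?")]}, config)
# A later call with the same thread_id resumes from the saved conversation state.
agent.invoke({"messages": [("user", "What's my name?")]}, config)
```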
2 changes: 1 addition & 1 deletion docs/docs/how-tos/react-agent-structured-output.ipynb
@@ -100,7 +100,7 @@
"\n",
"Now we can define how we want to structure our output, define our graph state, and also our tools and the models we are going to use.\n",
"\n",
"To use structured output, we will use the `with_structured_output` method from LangChain, which you can read more about [here](https://python.langchain.com/v0.2/docs/how_to/structured_output/).\n",
"To use structured output, we will use the `with_structured_output` method from LangChain, which you can read more about [here](https://python.langchain.com/docs/how_to/structured_output/).\n",
"\n",
"We are going to use a single tool in this example for finding the weather, and will return a structured weather response to the user."
]
6 changes: 3 additions & 3 deletions docs/docs/how-tos/state-context-key.ipynb
@@ -87,7 +87,7 @@
"\n",
"We will first define the tools we want to use.\n",
"For this simple example, we will use create a placeholder search engine.\n",
"However, it is really easy to create your own tools - see documentation [here](https://python.langchain.com/v0.2/docs/how_to/custom_tools) on how to do that.\n"
"However, it is really easy to create your own tools - see documentation [here](https://python.langchain.com/docs/how_to/custom_tools) on how to do that.\n"
]
},
{
@@ -280,7 +280,7 @@
"## Define the nodes\n",
"\n",
"We now need to define a few different nodes in our graph.\n",
"In `langgraph`, a node can be either a function or a [runnable](https://python.langchain.com/v0.2/docs/concepts/#langchain-expression-language-lcel).\n",
"In `langgraph`, a node can be either a function or a [runnable](https://python.langchain.com/docs/concepts/#langchain-expression-language-lcel).\n",
"There are two main nodes we need for this:\n",
"\n",
"1. The agent: responsible for deciding what (if any) actions to take.\n",
@@ -426,7 +426,7 @@
"## Use it!\n",
"\n",
"We can now use it!\n",
"This now exposes the [same interface](https://python.langchain.com/v0.2/docs/concepts/#langchain-expression-language-lcel) as all other LangChain runnables."
"This now exposes the [same interface](https://python.langchain.com/docs/concepts/#langchain-expression-language-lcel) as all other LangChain runnables."
]
},
{
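
Because the compiled graph exposes the standard runnable interface referenced above, it can be driven like any other LangChain runnable (a sketch; `app` stands in for the compiled graph built in this notebook):

```python
# `app` is assumed to be the result of builder.compile() from this notebook.
inputs = {"messages": [("user", "what is the weather in sf")]}

final_state = app.invoke(inputs)       # run to completion and return the final state

for update in app.stream(inputs):      # stream state updates node by node
    print(update)

results = app.batch([inputs, inputs])  # run several inputs concurrently
```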
6 changes: 3 additions & 3 deletions docs/docs/how-tos/state-model.ipynb
@@ -94,7 +94,7 @@
"\n",
"We will first define the tools we want to use.\n",
"For this simple example, we will use create a placeholder search engine.\n",
"However, it is really easy to create your own tools - see documentation [here](https://python.langchain.com/v0.2/docs/how_to/custom_tools) on how to do that.\n"
"However, it is really easy to create your own tools - see documentation [here](https://python.langchain.com/docs/how_to/custom_tools) on how to do that.\n"
]
},
{
@@ -243,7 +243,7 @@
"## Define the nodes\n",
"\n",
"We now need to define a few different nodes in our graph.\n",
"In `langgraph`, a node can be either a function or a [runnable](https://python.langchain.com/v0.2/docs/concepts/#langchain-expression-language-lcel).\n",
"In `langgraph`, a node can be either a function or a [runnable](https://python.langchain.com/docs/concepts/#langchain-expression-language-lcel).\n",
"There are two main nodes we need for this:\n",
"\n",
"1. The agent: responsible for deciding what (if any) actions to take.\n",
@@ -385,7 +385,7 @@
"## Use it!\n",
"\n",
"We can now use it!\n",
"This now exposes the [same interface](https://python.langchain.com/v0.2/docs/concepts/#langchain-expression-language-lcel) as all other LangChain runnables."
"This now exposes the [same interface](https://python.langchain.com/docs/concepts/#langchain-expression-language-lcel) as all other LangChain runnables."
]
},
{
6 changes: 3 additions & 3 deletions docs/docs/how-tos/streaming-tokens.ipynb
@@ -14,7 +14,7 @@
"<div class=\"admonition tip\">\n",
" <p class=\"admonition-title\">Note</p>\n",
" <p>\n",
" In this how-to, we will create our agent from scratch to be transparent (but verbose). You can accomplish similar functionality using the <code>create_react_agent(model, tools=tool)</code> (<a href=\"https://langchain-ai.github.io/langgraph/reference/prebuilt/#create_react_agent\">API doc</a>) constructor. This may be more appropriate if you are used to LangChain’s <a href=\"https://python.langchain.com/v0.2/docs/how_to/agent_executor/#concepts\">AgentExecutor</a> class.\n",
" In this how-to, we will create our agent from scratch to be transparent (but verbose). You can accomplish similar functionality using the <code>create_react_agent(model, tools=tool)</code> (<a href=\"https://langchain-ai.github.io/langgraph/reference/prebuilt/#create_react_agent\">API doc</a>) constructor. This may be more appropriate if you are used to LangChain’s <a href=\"https://python.langchain.com/docs/how_to/agent_executor/#concepts\">AgentExecutor</a> class.\n",
" </p>\n",
"</div> \n",
"\n",
@@ -138,7 +138,7 @@
"\n",
"We will first define the tools we want to use.\n",
"For this simple example, we will use create a placeholder search engine.\n",
"It is really easy to create your own tools - see documentation [here](https://python.langchain.com/v0.2/docs/how_to/custom_tools) on how to do that.\n"
"It is really easy to create your own tools - see documentation [here](https://python.langchain.com/docs/how_to/custom_tools) on how to do that.\n"
]
},
{
@@ -238,7 +238,7 @@
"## Define the nodes\n",
"\n",
"We now need to define a few different nodes in our graph.\n",
"In `langgraph`, a node can be either a function or a [runnable](https://python.langchain.com/v0.2/docs/concepts/#langchain-expression-language-lcel).\n",
"In `langgraph`, a node can be either a function or a [runnable](https://python.langchain.com/docs/concepts/#langchain-expression-language-lcel).\n",
"There are two main nodes we need for this:\n",
"\n",
"1. The agent: responsible for deciding what (if any) actions to take.\n",