
feat: support multi agent for ts #300

Merged Sep 26, 2024 · 32 commits

Changes from 4 commits

Commits
413593b
feat: update question to ask creating multiagent in express
thucpn Sep 18, 2024
622b84b
feat: add express simple multiagent
thucpn Sep 18, 2024
f464b40
fix: import from agent
thucpn Sep 18, 2024
0ebcb9f
Create yellow-jokes-protect.md
marcusschiesser Sep 19, 2024
f43f00a
create workflow with example agents
thucpn Sep 19, 2024
6c05872
remove unused files
thucpn Sep 19, 2024
2c7a538
update doc
thucpn Sep 19, 2024
5daf519
feat: streaming event
thucpn Sep 19, 2024
b875618
fix: streaming final result
thucpn Sep 19, 2024
b030a3d
fix: pipe final streaming result
thucpn Sep 19, 2024
33ce593
feat: functional calling agent
thucpn Sep 20, 2024
de5ba29
fix: set default max attempts to 2
thucpn Sep 20, 2024
aff4f0c
fix lint
thucpn Sep 20, 2024
c4041e2
refactor: move workflow folder to src
thucpn Sep 20, 2024
f659721
refactor: share settings file for ts templates
thucpn Sep 20, 2024
54d74f8
fix: move settings.ts to setting folder
thucpn Sep 20, 2024
d69cd42
refactor: move workflow to components
thucpn Sep 20, 2024
054ee5b
Update templates/components/multiagent/typescript/workflow/index.ts
marcusschiesser Sep 23, 2024
7297edf
create ts multi agent from streaming template
thucpn Sep 23, 2024
3ebc3ec
remove copy express template
thucpn Sep 23, 2024
8cfabc5
enhance streaming and add handle tool call step
thucpn Sep 23, 2024
305296b
update changeset
thucpn Sep 23, 2024
ea3bbcf
refactor: code review
thucpn Sep 25, 2024
325c7ca
fix: coderabbit comment
thucpn Sep 25, 2024
45f7529
enable multiagent ts test
thucpn Sep 25, 2024
234b15e
fix: e2e apptype for nextjs
thucpn Sep 25, 2024
32c3d89
refactor: use context write event instead of append data annotation d…
thucpn Sep 25, 2024
7079b68
fix streaming
marcusschiesser Sep 25, 2024
6ecd5f8
Merge branch 'main' into feat/support-multi-agent-for-ts
marcusschiesser Sep 26, 2024
0679c37
fix: writer is just streaming
marcusschiesser Sep 26, 2024
fa45102
fix: clearly separate streaming events and content and use workflowEv…
marcusschiesser Sep 26, 2024
2fb502e
fix: add correct tool calls for tool messages
marcusschiesser Sep 26, 2024
5 changes: 5 additions & 0 deletions .changeset/yellow-jokes-protect.md
@@ -0,0 +1,5 @@
---
"create-llama": patch
---

Add multi-agent template for Express
6 changes: 5 additions & 1 deletion helpers/typescript.ts
@@ -33,7 +33,11 @@ export const installTSTemplate = async ({
* Copy the template files to the target directory.
*/
console.log("\nInitializing project with template:", template, "\n");
-  const type = template === "multiagent" ? "streaming" : template; // use nextjs streaming template for multiagent
+  let type = "streaming";
+  if (template === "multiagent" && framework === "express") {
+    // Express has its own multiagent template; other frameworks use the streaming template
+    type = "multiagent";
+  }
const templatePath = path.join(templatesDir, "types", type, framework);
const copySource = ["**"];

9 changes: 4 additions & 5 deletions questions.ts
@@ -410,10 +410,7 @@ export const askQuestions = async (
return; // early return - no further questions needed for llamapack projects
}

-  if (program.template === "multiagent") {
-    // TODO: multi-agents currently only supports FastAPI
-    program.framework = preferences.framework = "fastapi";
-  } else if (program.template === "extractor") {
+  if (program.template === "extractor") {
// Extractor template only supports FastAPI, empty data sources, and llamacloud
// So we just use example file for extractor template, this allows user to choose vector database later
program.dataSources = [EXAMPLE_FILE];
@@ -424,7 +421,9 @@
program.framework = getPrefOrDefault("framework");
} else {
const choices = [
{ title: "NextJS", value: "nextjs" },
...(program.template === "multiagent"
? []
: [{ title: "NextJS", value: "nextjs" }]), // Not supported nextjs for multiagent for now
{ title: "Express", value: "express" },
{ title: "FastAPI (Python)", value: "fastapi" },
];
103 changes: 103 additions & 0 deletions templates/types/multiagent/express/README-template.md
@@ -0,0 +1,103 @@
This is a [LlamaIndex](https://www.llamaindex.ai/) project using [Express](https://expressjs.com/) bootstrapped with [`create-llama`](https://github.com/run-llama/LlamaIndexTS/tree/main/packages/create-llama).

## Getting Started

First, install the dependencies:

```
npm install
```

Second, generate the embeddings of the documents in the `./data` directory (if this folder exists - otherwise, skip this step):

```
npm run generate
```

Third, run the development server:

```
npm run dev
```

The example provides two different API endpoints:

1. `/api/chat` - a streaming chat endpoint (found in `src/controllers/chat.controller.ts`)
2. `/api/chat/request` - a non-streaming chat endpoint (found in `src/controllers/chat-request.controller.ts`)

You can test the streaming endpoint with the following curl request:

```
curl --location 'localhost:8000/api/chat' \
--header 'Content-Type: application/json' \
--data '{ "messages": [{ "role": "user", "content": "Hello" }] }'
```

And for the non-streaming endpoint run:

```
curl --location 'localhost:8000/api/chat/request' \
--header 'Content-Type: application/json' \
--data '{ "messages": [{ "role": "user", "content": "Hello" }] }'
```

You can start editing the API by modifying `src/controllers/chat.controller.ts` or `src/controllers/chat-request.controller.ts`. The endpoint auto-updates as you save the file.
You can delete the endpoint that you're not using.
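
For orientation, here's a minimal sketch of what a streaming controller along these lines could look like. It assumes LlamaIndexTS's `SimpleChatEngine` and writes raw token deltas; the actual controller in this template runs the multi-agent workflow and uses a richer streaming format.

```
import express, { Request, Response } from "express";
import { SimpleChatEngine } from "llamaindex";

const app = express();
app.use(express.json());

// Hypothetical simplified controller: take the chat history from the
// request body and stream the LLM's token deltas back to the client.
app.post("/api/chat", async (req: Request, res: Response) => {
  const { messages } = req.body;
  const lastMessage = messages[messages.length - 1];
  const chatEngine = new SimpleChatEngine();
  const stream = await chatEngine.chat({
    message: lastMessage.content,
    chatHistory: messages.slice(0, -1),
    stream: true,
  });
  for await (const chunk of stream) {
    res.write(chunk.delta); // each chunk carries the next token delta
  }
  res.end();
});

app.listen(8000);
```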

## Production

First, build the project:

```
npm run build
```

You can then run the production server:

```
NODE_ENV=production npm run start
```

> Note that the `NODE_ENV` environment variable is set to `production`. This disables CORS for all origins.
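
A minimal sketch of how such a `NODE_ENV` guard is typically wired in an Express app, assuming the `cors` middleware package (the template's actual setup may differ):

```
import cors from "cors";
import express from "express";

const app = express();

// In development, allow requests from any origin; in production, register
// no CORS middleware, so cross-origin browser requests are rejected.
if (process.env.NODE_ENV !== "production") {
  app.use(cors());
}
```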

## Using Docker

1. Build an image for the Express API:

```
docker build -t <your_backend_image_name> .
```

2. Generate embeddings:

Parse the data and generate the vector embeddings if the `./data` folder exists - otherwise, skip this step:

```
# Mount .env and config to use ENV variables and configuration from your
# file system; mount data and cache so the vector database is stored on
# your file system.
docker run --rm \
  -v $(pwd)/.env:/app/.env \
  -v $(pwd)/config:/app/config \
  -v $(pwd)/data:/app/data \
  -v $(pwd)/cache:/app/cache \
  <your_backend_image_name> \
  npm run generate
```

3. Start the API:

```
# Mount .env and config to use ENV variables and configuration from your
# file system; mount cache to reuse the stored vector database.
docker run \
  -v $(pwd)/.env:/app/.env \
  -v $(pwd)/config:/app/config \
  -v $(pwd)/cache:/app/cache \
-p 8000:8000 \
<your_backend_image_name>
```

## Learn More

To learn more about LlamaIndex, take a look at the following resources:

- [LlamaIndex Documentation](https://docs.llamaindex.ai) - learn about LlamaIndex (Python features).
- [LlamaIndexTS Documentation](https://ts.llamaindex.ai) - learn about LlamaIndex (TypeScript features).

You can check out [the LlamaIndexTS GitHub repository](https://github.com/run-llama/LlamaIndexTS) - your feedback and contributions are welcome!
122 changes: 122 additions & 0 deletions templates/types/multiagent/express/agents/single.ts
@@ -0,0 +1,122 @@
import {
Context,
StartEvent,
StopEvent,
Workflow,
WorkflowEvent,
} from "@llamaindex/core/workflow";
import {
BaseTool,
ChatMemoryBuffer,
ChatMessage,
ChatResponse,
LLM,
Settings,
ToolCall,
ToolOutput,
} from "llamaindex";

export class InputEvent extends WorkflowEvent<{
input: ChatMessage[];
}> {}

export class ToolCallEvent extends WorkflowEvent<{
tool_calls: ToolCall[];
}> {}

export class AgentRunEvent extends WorkflowEvent<{
name: string;
msg: string;
}> {}

export class MessageEvent extends WorkflowEvent<{ msg: string }> {}

export class AgentRunResult {
constructor(
public response: ChatResponse,
public sources: ToolOutput[],
) {}
}

export class FunctionCallingAgent extends Workflow {
tools: BaseTool[];
name: string;
writeEvents: boolean;
role?: string;
llm: LLM;
systemPrompt?: string;
memory: ChatMemoryBuffer;
sources: ToolOutput[];

constructor(options: {
name: string;
llm?: LLM;
chatHistory?: ChatMessage[];
tools?: BaseTool[];
systemPrompt?: string;
verbose?: boolean;
timeout?: number;
writeEvents?: boolean;
role?: string;
}) {
super({
verbose: options?.verbose ?? false,
timeout: options?.timeout ?? 360,
});
this.tools = options?.tools ?? [];
this.name = options?.name;
this.writeEvents = options?.writeEvents ?? true;
this.role = options?.role;
this.llm = options.llm ?? Settings.llm;
this.systemPrompt = options.systemPrompt;
this.memory = new ChatMemoryBuffer({
llm: this.llm,
chatHistory: options.chatHistory,
});
this.sources = [];
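    // Wire up the agent loop: StartEvent -> prepare chat history -> LLM step;
    // the LLM step either finishes with a StopEvent or emits tool calls,
    // whose outputs are fed back into the LLM step.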
this.addStep(StartEvent, this.prepareChatHistory, {
outputs: InputEvent,
});
this.addStep(InputEvent, this.handleLLMInput, {
outputs: [ToolCallEvent, StopEvent],
});
this.addStep(ToolCallEvent, this.handleToolCalls, {
outputs: InputEvent,
});
}

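  // Expose the step definitions so that another workflow can re-register them
  // (see examples/workflow.ts, which composes agents by copying their steps).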
get steps() {
return [
{
step: StartEvent,
handler: this.prepareChatHistory,
params: { outputs: InputEvent },
},
{
step: InputEvent,
handler: this.handleLLMInput,
params: { outputs: [ToolCallEvent, StopEvent] },
},
{
step: ToolCallEvent,
handler: this.handleToolCalls,
params: { outputs: InputEvent },
},
];
}

async prepareChatHistory(ctx: Context, ev: StartEvent): Promise<InputEvent> {
throw new Error("Method not implemented.");
}

async handleLLMInput(
ctx: Context,
ev: InputEvent,
): Promise<ToolCallEvent | StopEvent> {
throw new Error("Method not implemented.");
}

async handleToolCalls(ctx: Context, ev: ToolCallEvent): Promise<InputEvent> {
throw new Error("Method not implemented.");
}
}
10 changes: 10 additions & 0 deletions templates/types/multiagent/express/eslintrc.json
@@ -0,0 +1,10 @@
{
"extends": ["eslint:recommended", "prettier"],
"rules": {
"max-params": ["error", 4],
"prefer-const": "error"
},
"parserOptions": {
"sourceType": "module"
}
}
32 changes: 32 additions & 0 deletions templates/types/multiagent/express/examples/researcher.ts
@@ -0,0 +1,32 @@
import { ChatMessage, QueryEngineTool } from "llamaindex";
import { FunctionCallingAgent } from "../agents/single";
import { getDataSource } from "../controllers/engine";

const getQueryEngineTool = async () => {
const index = await getDataSource();
if (!index) {
throw new Error("Index not found. Please create an index first.");
}

const topK = process.env.TOP_K ? parseInt(process.env.TOP_K) : undefined;
return new QueryEngineTool({
queryEngine: index.asQueryEngine({
similarityTopK: topK,
}),
metadata: {
name: "query_index",
description: `Use this tool to retrieve information about the text corpus from the index.`,
},
});
};

export const createResearcher = async (chatHistory: ChatMessage[]) => {
return new FunctionCallingAgent({
name: "researcher",
tools: [await getQueryEngineTool()],
role: "expert in retrieving any unknown content",
systemPrompt:
"You are a researcher agent. You are given a researching task. You must use your tools to complete the research.",
chatHistory,
});
};
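
A hypothetical usage sketch for running the researcher on its own, assuming `Workflow#run` accepts the start input and resolves with a `StopEvent` as in the LlamaIndexTS workflow examples:

```
import { createResearcher } from "./researcher";

async function main() {
  // The run contract below is an assumption for illustration.
  const researcher = await createResearcher([]);
  const result = await researcher.run(
    "Summarize what the indexed documents say about pricing.",
  );
  console.log(result.data); // the final response carried by the StopEvent
}

main().catch(console.error);
```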
35 changes: 35 additions & 0 deletions templates/types/multiagent/express/examples/workflow.ts
@@ -0,0 +1,35 @@
import { StepFunction, Workflow } from "@llamaindex/core/workflow";
import { ChatMessage } from "llamaindex";
import { FunctionCallingAgent } from "../agents/single";
import { createResearcher } from "./researcher";

// TODO: implement
class BlogPostWorkflow extends Workflow {}

export async function createWorkflow(chatHistory: ChatMessage[]) {
const researcher = await createResearcher(chatHistory);
const writer = new FunctionCallingAgent({
name: "writer",
role: "expert in writing blog posts",
systemPrompt:
"You are an expert in writing blog posts. You are given a task to write a blog post. Don't make up any information yourself.",
chatHistory: chatHistory,
});
const reviewer = new FunctionCallingAgent({
name: "reviewer",
role: "expert in reviewing blog posts",
systemPrompt:
"You are an expert in reviewing blog posts. You are given a task to review a blog post. Review the post for logical inconsistencies, ask critical questions, and provide suggestions for improvement. Furthermore, proofread the post for grammar and spelling errors. Only if the post is good enough for publishing, then you MUST return 'The post is good.'. In all other cases return your review.",
chatHistory: chatHistory,
});
const workflow = new BlogPostWorkflow();

// FIXME: Workflow in LITS doesn't support adding workflows
// like in Python (workflow.addWorkflows), so we have to add steps manually
const steps = [...researcher.steps, ...writer.steps, ...reviewer.steps];
steps.forEach((step) =>
workflow.addStep(step.step, step.handler as StepFunction, step.params),
);

return workflow;
}
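
Once the steps are copied over, the composed workflow can be driven like any other workflow. A sketch under the same assumed `run` contract as above:

```
import { createWorkflow } from "./workflow";

async function main() {
  // Hypothetical driver for the composed researcher/writer/reviewer workflow.
  const workflow = await createWorkflow([]);
  const result = await workflow.run(
    "Write a blog post about the LlamaIndexTS workflow API.",
  );
  console.log(result.data);
}

main().catch(console.error);
```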
5 changes: 5 additions & 0 deletions templates/types/multiagent/express/gitignore
@@ -0,0 +1,5 @@
# local env files
.env
node_modules/

output/