
feat: support multi agent for ts #300

Merged: 32 commits, Sep 26, 2024

Conversation

thucpn
Collaborator

@thucpn thucpn commented Sep 18, 2024

Summary by CodeRabbit

Release Notes

  • New Features

    • Introduced a multi-agent template for Express, enhancing modularity in agent management.
    • Added comprehensive documentation with a new README-template.md for projects using LlamaIndex with Express.
    • Implemented new API endpoints for chat functionalities.
    • Added functionality for creating specialized agents: researcher, writer, and reviewer.
    • Introduced a multi-agent workflow for generating and reviewing blog posts.
    • Enhanced asynchronous data stream handling within the workflow system.
  • Bug Fixes

    • Improved logic for template selection to ensure correct framework compatibility.
  • Chores

    • Added new dependencies for enhanced functionality in projects.


changeset-bot bot commented Sep 18, 2024

🦋 Changeset detected

Latest commit: 2fb502e

The changes in this PR will be included in the next version bump.

This PR includes changesets to release 1 package:

| Name | Type |
| --- | --- |
| create-llama | Patch |



coderabbitai bot commented Sep 18, 2024

📝 Walkthrough

A new multi-agent template has been introduced for the Express framework, facilitating the management of multiple agents within applications. This update includes comprehensive documentation, enhancements to existing functions for template handling, and the implementation of various agent functionalities, such as chat processing and agent creation. The changes also encompass new files for project setup, including ESLint and Prettier configurations, and modifications to existing files for improved functionality.

Changes

| File Path | Change Summary |
| --- | --- |
| `.changeset/yellow-jokes-protect.md` | Added multi-agent template for Express and new README documentation. |
| `helpers/typescript.ts` | Modified `installTSTemplate` logic for template type selection and added settings file copy functionality. |
| `templates/types/multiagent/express/src/controllers/chat.controller.ts` | Added chat controller for processing messages and managing chat interactions. |
| `templates/types/multiagent/express/src/workflow/agents.ts` | Implemented agent creation functions for researcher, writer, and reviewer roles. |
| `templates/types/multiagent/express/src/workflow/factory.ts` | Added workflow management for research, writing, and reviewing processes. |
| `templates/types/multiagent/express/src/workflow/index.ts` | Defined multi-agent workflow for generating and reviewing blog posts. |
| `templates/types/multiagent/express/src/workflow/stream.ts` | Introduced functions for handling asynchronous data streams in workflows. |
| `templates/types/multiagent/express/README-template.md` | Provided documentation for LlamaIndex with Express, including API endpoints and setup instructions. |
| `templates/types/multiagent/express/eslintrc.json` | Introduced ESLint configuration with custom rules. |
| `templates/types/multiagent/express/gitignore` | Added `.gitignore` file to exclude unnecessary files from version control. |
| `templates/types/multiagent/express/index.ts` | Implemented Express server for multi-agent API handling. |
| `templates/types/multiagent/express/npmrc` | Added `.npmrc` configuration for package management. |
| `templates/types/multiagent/express/package.json` | Created `package.json` with project metadata and dependencies. |
| `templates/types/multiagent/express/prettier.config.cjs` | Added Prettier configuration for code formatting. |
| `templates/types/multiagent/express/tsconfig.json` | Created TypeScript configuration for project compilation settings. |
| `templates/types/streaming/nextjs/app/api/chat/engine/settings.ts` | Deleted settings file related to AI model provider initialization. |

Possibly related PRs

Suggested reviewers

  • marcusschiesser

🐇 In the meadow, agents play,
Crafting words both night and day.
With chat and flow, they weave their art,
A multi-agent work of heart.
New templates bloom, like flowers bright,
Guiding code with sheer delight! 🌼



@thucpn thucpn marked this pull request as ready for review September 20, 2024 08:44

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 18

Outside diff range and nitpick comments (18)
templates/types/multiagent/express/tsconfig.json (1)

7-7: Consider removing the skipLibCheck option.

The skipLibCheck option skips type checking of declaration files. While this can speed up compilation, it may cause type errors in dependencies to be missed.

Consider removing the `skipLibCheck` option to ensure type errors in dependencies are caught:

```diff
-    "skipLibCheck": true,
```
templates/types/multiagent/express/README-template.md (10)

`7-9`: **Specify the language for the code block.**

To improve the rendering of the code block in Markdown, specify the language as `bash`.

Apply this diff:

````diff
-```
+```bash
 npm install
````

---

`13-15`: **Specify the language for the code block.**

To improve the rendering of the code block in Markdown, specify the language as `bash`.

Apply this diff:

````diff
-```
+```bash
 npm run generate
````

---

`19-21`: **Specify the language for the code block.**

To improve the rendering of the code block in Markdown, specify the language as `bash`.

Apply this diff:

````diff
-```
+```bash
 npm run dev
````

---

`30-34`: **Specify the language for the code block.**

To improve the rendering of the code block in Markdown, specify the language as `bash`.

Apply this diff:

````diff
-```
+```bash
 curl --location 'localhost:8000/api/chat' \
 --header 'Content-Type: application/json' \
 --data '{ "messages": [{ "role": "user", "content": "Hello" }] }'
````

---

`38-42`: **Specify the language for the code block.**

To improve the rendering of the code block in Markdown, specify the language as `bash`.

Apply this diff:

````diff
-```
+```bash
 curl --location 'localhost:8000/api/chat/request' \
 --header 'Content-Type: application/json' \
 --data '{ "messages": [{ "role": "user", "content": "Hello" }] }'
````

---

`51-53`: **Specify the language for the code block.**

To improve the rendering of the code block in Markdown, specify the language as `bash`.

Apply this diff:

````diff
-```
+```bash
 npm run build
````

---

`57-59`: **Specify the language for the code block.**

To improve the rendering of the code block in Markdown, specify the language as `bash`.

Apply this diff:

````diff
-```
+```bash
 NODE_ENV=production npm run start
````

---

`67-69`: **Specify the language for the code block.**

To improve the rendering of the code block in Markdown, specify the language as `bash`.

Apply this diff:

````diff
-```
+```bash
 docker build -t <your_backend_image_name> .
````

---

`75-83`: **Specify the language for the code block.**

To improve the rendering of the code block in Markdown, specify the language as `bash`.

Apply this diff:

````diff
-```
+```bash
 docker run --rm \
   -v $(pwd)/.env:/app/.env \ # Use ENV variables and configuration from your file-system
   -v $(pwd)/config:/app/config \
   -v $(pwd)/data:/app/data \
   -v $(pwd)/cache:/app/cache \ # Use your file system to store the vector database
   <your_backend_image_name>
   npm run generate
````

---

`87-94`: **Specify the language for the code block.**

To improve the rendering of the code block in Markdown, specify the language as `bash`.

Apply this diff:

````diff
-```
+```bash
 docker run \
   -v $(pwd)/.env:/app/.env \ # Use ENV variables and configuration from your file-system
   -v $(pwd)/config:/app/config \
   -v $(pwd)/cache:/app/cache \ # Use your file system to store the vector database
   -p 8000:8000 \
   <your_backend_image_name>
````
helpers/typescript.ts (1)

`36-40`: **LGTM!**

The changes to the logic for determining the `type` variable based on the `template` and `framework` parameters look good. The default value is explicitly set to "streaming", and the condition to set it to "multiagent" is more specific now.


Consider adding a comment to clarify the logic and the specific case when "multiagent" is used. For example:

```typescript
// Set the default type to "streaming"
let type = "streaming";

// Use the "multiagent" type only when the template is "multiagent" and the framework is "express"
if (template === "multiagent" && framework === "express") {
  type = "multiagent";
}
```
templates/types/multiagent/express/src/controllers/chat.controller.ts (1)

9-9: Consider renaming vercelStreamData for clarity

The variable name vercelStreamData may imply specificity to Vercel. If the code is intended to be platform-agnostic, consider renaming it to streamData for better clarity.

Apply the following diff and update all references accordingly:

```diff
-  const vercelStreamData = new StreamData();
+  const streamData = new StreamData();
```
templates/types/multiagent/express/src/workflow/stream.ts (2)

31-33: Add error handling for unexpected value.data types.

If value.data is not an instance of AgentRunResult, the code currently does nothing. Adding error handling or logging can help identify issues during runtime and improve the robustness of your stream processing.

You could log a warning or throw an error:

```diff
 if (value.data instanceof AgentRunResult) {
   // existing code
+} else {
+  console.warn("Received unexpected data type:", value.data);
+  controller.error(new Error("Unexpected data type in generator output"));
 }
```

35-53: Add error handling for unexpected event.data types.

Within the for await loop, if event.data is not an instance of FunctionCallingStreamResult, it is ignored. Logging unexpected event types can help with debugging and ensure that all data is correctly processed.

For example:

```diff
 if (event.data instanceof FunctionCallingStreamResult) {
   // existing code
+} else {
+  console.warn("Received unexpected event data type:", event.data);
 }
```
templates/types/multiagent/express/src/workflow/agents.ts (2)

44-45: Enhance the system prompt for clarity and emphasis on factual accuracy

Consider rephrasing the system prompt to more clearly instruct the writer agent to use accurate information and avoid fabrication.

Suggested change:

```diff
-      "You are an expert in writing blog posts. You are given a task to write a blog post. Don't make up any information yourself.",
+      "You are an expert in writing blog posts. You are tasked with writing a blog post. Ensure all information is accurate and based on verified sources; do not fabricate any content.",
```

57-58: Clarify the system prompt for the reviewer agent to avoid ambiguity

The current system prompt may be ambiguous regarding the expected response. Clarify that the reviewer should return 'The post is good.' only if the post meets all publishing standards without any issues.

Suggested change:

```diff
-      "You are an expert in reviewing blog posts. You are given a task to review a blog post. Review the post for logical inconsistencies, ask critical questions, and provide suggestions for improvement. Furthermore, proofread the post for grammar and spelling errors. Only if the post is good enough for publishing, then you MUST return 'The post is good.'. In all other cases return your review.",
+      "You are an expert in reviewing blog posts. Your task is to thoroughly review a blog post. If the post is flawless and ready for publishing without any changes, you MUST return exactly: 'The post is good.'. Otherwise, provide a detailed review highlighting any logical inconsistencies, posing critical questions, suggesting improvements, and noting any grammar or spelling errors.",
```
templates/types/multiagent/express/src/workflow/factory.ts (1)

36-47: Document required parameters in constructor options.

The stream parameter is essential but isn't explicitly marked as required in the constructor options. To enhance code clarity, consider updating the parameter definition to indicate that it is required.

Apply this diff to adjust the parameter definition:

```diff
 constructor(options: {
   name: string;
   llm?: LLM;
   chatHistory?: ChatMessage[];
   tools?: BaseToolWithCall[];
   systemPrompt?: string;
   writeEvents?: boolean;
   role?: string;
   verbose?: boolean;
   timeout?: number;
-  stream: StreamData;
+  stream: StreamData; // Note: This parameter is required
 }) {
```
Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

Commits

Files that changed from the base of the PR and between 8c1087f and f659721.

Files selected for processing (21)
  • .changeset/yellow-jokes-protect.md (1 hunks)
  • helpers/typescript.ts (2 hunks)
  • questions.ts (2 hunks)
  • templates/types/multiagent/express/README-template.md (1 hunks)
  • templates/types/multiagent/express/eslintrc.json (1 hunks)
  • templates/types/multiagent/express/gitignore (1 hunks)
  • templates/types/multiagent/express/index.ts (1 hunks)
  • templates/types/multiagent/express/npmrc (1 hunks)
  • templates/types/multiagent/express/package.json (1 hunks)
  • templates/types/multiagent/express/prettier.config.cjs (1 hunks)
  • templates/types/multiagent/express/src/controllers/chat-config.controller.ts (1 hunks)
  • templates/types/multiagent/express/src/controllers/chat.controller.ts (1 hunks)
  • templates/types/multiagent/express/src/observability/index.ts (1 hunks)
  • templates/types/multiagent/express/src/routes/chat.route.ts (1 hunks)
  • templates/types/multiagent/express/src/workflow/agents.ts (1 hunks)
  • templates/types/multiagent/express/src/workflow/factory.ts (1 hunks)
  • templates/types/multiagent/express/src/workflow/index.ts (1 hunks)
  • templates/types/multiagent/express/src/workflow/stream.ts (1 hunks)
  • templates/types/multiagent/express/src/workflow/type.ts (1 hunks)
  • templates/types/multiagent/express/tsconfig.json (1 hunks)
  • templates/types/streaming/nextjs/app/api/chat/engine/settings.ts (0 hunks)
Files not reviewed due to no reviewable changes (1)
  • templates/types/streaming/nextjs/app/api/chat/engine/settings.ts
Files skipped from review due to trivial changes (1)
  • templates/types/multiagent/express/src/observability/index.ts
Additional context used
Markdownlint
templates/types/multiagent/express/README-template.md

7-7, 13-13, 19-19, 30-30, 38-38, 51-51, 57-57, 67-67, 75-75, 87-87: Fenced code blocks should have a language specified (MD040, fenced-code-language)

Biome
templates/types/multiagent/express/src/controllers/chat-config.controller.ts

[error] 7-8: Change to an optional chain.

Unsafe fix: Change to an optional chain.

(lint/complexity/useOptionalChain)

Additional comments not posted (28)
templates/types/multiagent/express/npmrc (1)

1-1: LGTM!

The node-linker=hoisted configuration is valid and commonly used to ensure compatibility with older npm versions (before v7) or to avoid issues with certain packages that have not been updated to work with the new npm behavior.

This configuration maximizes compatibility by replicating the behavior of npm versions prior to v7. It's a good choice if you need to ensure that your project works consistently across different npm versions or if you are using dependencies that have not yet been updated to support the new npm behavior.
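For reference, the reviewed file consists of the single setting quoted above:

```
node-linker=hoisted
```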

templates/types/multiagent/express/gitignore (1)

1-5: LGTM!

The .gitignore file is set up correctly to ignore common files and directories that should not be tracked by Git. This includes:

  • .env file for storing environment variables
  • node_modules/ directory for Node.js dependencies
  • output/ directory for generated files

Ignoring these files and directories is a best practice and helps keep the repository clean.
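Taken together, the entries listed above amount to a file along these lines (the template's exact contents may differ):

```
# environment variables
.env

# dependencies
node_modules/

# generated files
output/
```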

templates/types/multiagent/express/prettier.config.cjs (1)

1-3: LGTM!

The Prettier configuration file is set up correctly with the prettier-plugin-organize-imports plugin. This plugin will help maintain a consistent import order across the codebase, improving readability and maintainability.
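A minimal config matching this description might look like the following; the exact file contents are not quoted in the review:

```js
// prettier.config.cjs
module.exports = {
  plugins: ["prettier-plugin-organize-imports"],
};
```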

.changeset/yellow-jokes-protect.md (1)

1-5: LGTM!

The changeset follows the correct format and the content looks good:

  • The package name "create-llama" is specified correctly.
  • The version bump type is set to "patch", which is appropriate for adding a new feature that doesn't break existing functionality.
  • The message provides a clear and concise description of the change.
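A changeset file pairs YAML front matter (package name and bump type) with a description. Based on the points above, the file plausibly reads like this; the description text here is illustrative, not a quote:

```md
---
"create-llama": patch
---

Add multi-agent template for Express
```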
templates/types/multiagent/express/eslintrc.json (1)

1-10: LGTM!

The ESLint configuration file is set up correctly:

  • It extends the recommended ESLint rules and Prettier rules.
  • It sets a reasonable maximum of 4 parameters for functions.
  • It enforces the use of const for variables that are never reassigned.
  • It sets the sourceType to module for a project that uses ES modules.

This configuration will help enforce consistent code style and best practices across the project.
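A configuration matching these four points might look like the following; the exact rule spellings are inferred, not quoted from the PR:

```json
{
  "extends": ["eslint:recommended", "prettier"],
  "rules": {
    "max-params": ["error", 4],
    "prefer-const": "error"
  },
  "parserOptions": {
    "sourceType": "module"
  }
}
```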

templates/types/multiagent/express/src/routes/chat.route.ts (1)

1-12: LGTM!

The Express router for chat-related routes is defined correctly:

  • Necessary dependencies and controllers are imported.
  • Routes are mapped to the appropriate controllers.
  • The router is exported as the default export.

The code follows a standard structure and there are no apparent issues.

templates/types/multiagent/express/src/workflow/type.ts (4)

4-7: LGTM!

The AgentInput type definition is clear and concise. The properties are well-defined with appropriate types and optionality.


9-12: LGTM!

The AgentRunEvent class definition is clear and concise. It extends WorkflowEvent with a well-defined generic type parameter.


14-16: LGTM!

The AgentRunResult class definition is clear and concise. The constructor takes a response parameter of type AsyncGenerator<WorkflowEvent> and assigns it to a public property.


18-20: LGTM!

The FunctionCallingStreamResult class definition is clear and concise. The constructor takes a response parameter of type ReadableStream<EngineResponse> and assigns it to a public property.

templates/types/multiagent/express/src/controllers/chat-config.controller.ts (2)

4-15: LGTM!

The function logic is correct, and the implementation is accurate. It correctly checks for the existence and non-empty value of the CONVERSATION_STARTERS environment variable, splits it by newline characters, and returns the resulting array in the response.

Tools
Biome

[error] 7-8: Change to an optional chain.

Unsafe fix: Change to an optional chain.

(lint/complexity/useOptionalChain)
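The behavior described above (check `CONVERSATION_STARTERS`, split it by newline characters, return the array) can be sketched as a standalone helper; the function name and the blank-line filtering are illustrative assumptions, not the PR's actual code:

```typescript
// Illustrative helper: parse a CONVERSATION_STARTERS env var value.
// Name and blank-line filtering are assumptions, not the PR's actual code.
function parseConversationStarters(raw: string | undefined): string[] {
  if (!raw || raw.trim() === "") {
    return []; // variable missing or empty: no starter questions
  }
  // one starter question per line; drop empty lines
  return raw
    .split("\n")
    .map((line) => line.trim())
    .filter((line) => line !== "");
}

console.log(parseConversationStarters("Hello\nWhat can you do?"));
// prints [ 'Hello', 'What can you do?' ]
```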


17-31: LGTM!

The function logic is correct, and the implementation is accurate. It correctly checks for the existence of the LLAMA_CLOUD_API_KEY environment variable, retrieves all projects with pipelines, and returns the configuration in the response.

The static analysis hint suggests using optional chaining for the environment variable check, but it's not applicable in this case as the check is already correctly implemented using an if statement.

templates/types/multiagent/express/package.json (2)

17-30: Verify compatibility of "ai" and "llamaindex" dependencies.

Using both "ai" and "llamaindex" libraries together could potentially lead to dependency conflicts, as they are separate AI libraries. Please verify that these dependencies are compatible and being used for distinct purposes to avoid any issues.


1-45: Dependencies and scripts look good!

The overall structure of the package.json file looks great. The dependencies seem reasonable for a multi-agent AI project built with Express.js, with a good mix of AI libraries, web frameworks, and utility packages. The dev dependencies and scripts are also standard for a TypeScript project.

Just verify the compatibility of the AI libraries as mentioned in the other comment. Otherwise, this looks ready to go!

templates/types/multiagent/express/index.ts (5)

1-7: LGTM!

The code segment imports necessary dependencies and modules. Disabling the ESLint rule for undeclared environment variables is acceptable in this case.


8-15: LGTM!

The code segment sets up the Express app instance, port number, environment determination, and initializes observability correctly.


17-32: LGTM!

The CORS configuration logic is implemented correctly based on the environment. It allows CORS for all origins in development mode and restricts it to a specific domain in production mode. Defaulting to no CORS if PROD_CORS_ORIGIN is not set in production is a safe fallback.
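The policy described above reduces to a small decision function; the function name and return convention here are illustrative assumptions, not the template's actual code:

```typescript
// Sketch of the CORS policy from the review: development allows all
// origins; production restricts to PROD_CORS_ORIGIN, defaulting to
// no CORS when it is unset. Names are assumptions.
function corsOrigin(nodeEnv: string, prodOrigin?: string): string | undefined {
  if (nodeEnv !== "production") {
    return "*"; // development: allow all origins
  }
  return prodOrigin; // production: the configured origin, or undefined (no CORS)
}

console.log(corsOrigin("development")); // prints *
console.log(corsOrigin("production", "https://example.com")); // prints https://example.com
```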


34-42: LGTM!

The code segment sets up static file serving, request body parsing, a root route, and mounts the chatRouter correctly.


44-46: LGTM!

The code segment starts the Express server and listens on the specified port correctly. Logging a message to the console when the server starts is a good practice for visibility.

helpers/typescript.ts (1)

152-159: Looks good!

The new functionality to copy the settings file (settings.ts) to the engine folder is implemented correctly. The path for the settings file is constructed based on the compPath variable, and the file is copied to the appropriate location within the enginePath.

questions.ts (2)

Line range hint 413-417: LGTM!

The logic for the "extractor" template is clear and concise. It explicitly states the supported framework (FastAPI), data sources (empty), and llamacloud. Using a predefined example file as the data source for the extractor template allows users to select a vector database later.


424-426: LGTM!

The conditional inclusion of the "NextJS" option based on the template type ensures that unsupported combinations are not presented to the user. If the template is "multiagent", the "NextJS" option is correctly omitted from the choices.
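The filtering described above can be sketched as follows; the function shape and the set of base choices are assumptions for illustration, not the actual `questions.ts` code:

```typescript
// Illustrative sketch: NextJS is only offered when the template
// is not "multiagent". Base choices are assumed for the example.
function frameworkChoices(template: string): string[] {
  const choices = ["Express", "FastAPI"];
  if (template !== "multiagent") {
    choices.push("NextJS"); // omitted for the multiagent template
  }
  return choices;
}

console.log(frameworkChoices("multiagent")); // prints [ 'Express', 'FastAPI' ]
```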

templates/types/multiagent/express/src/controllers/chat.controller.ts (2)

23-23: Consider awaiting agent.run(userMessage.content) if it's asynchronous

If agent.run() is an asynchronous function, it should be awaited to ensure that it completes before proceeding. Not awaiting it may lead to unexpected behavior or unhandled promises.

Apply the following diff to await the function call:

```diff
-    agent.run(userMessage.content);
+    await agent.run(userMessage.content);
```

Please verify if agent.run() returns a promise and should be awaited.


21-21: Verify the type casting from Message[] to ChatMessage[]

Casting messages from Message[] to ChatMessage[] may not be safe if Message and ChatMessage have different structures. Ensure that the properties align or transform messages appropriately.

To confirm compatibility, you can verify the structures using the following script:

templates/types/multiagent/express/src/workflow/stream.ts (2)

46-49: Verify if data.close() is necessary and correctly implemented.

In the close() method of the WritableStream, you call data.close();. Ensure that the data object has a close() method and that invoking it here is appropriate. If data is of type StreamData from the ai library, confirm that it includes a close() method and that it behaves as expected in this context.

Run the following script to check the `StreamData` type definition (note: ripgrep's built-in type name is `ts`, not `typescript`):

```bash
#!/bin/bash
# Description: Check if `StreamData` has a `close` method.

# Search for the definition of `StreamData` in the codebase.
rg --type=ts 'interface StreamData' -A 10

# If not found locally, check the `ai` package for the `StreamData` definition.
rg --type=ts 'export.*StreamData' -A 10 node_modules/ai
```

If `data.close()` is not necessary or does not exist, remove it:

```diff
 close() {
   controller.close();
-  data.close();
 },
```

40-45: Ensure chunk.delta is always defined before processing.

In the write(chunk) method of the WritableStream, you access chunk.delta using the nullish coalescing operator ?? "". Ensure that chunk always has a delta property, or handle cases where it might be undefined or null to prevent runtime errors.

Run the following script to verify that `chunk.delta` is always defined.

Alternatively, add a type guard or validation:

```diff
 write(chunk) {
+  if (!("delta" in chunk)) {
+    throw new Error("chunk.delta is undefined");
+  }
   const text = trimStartOfStream(chunk.delta ?? "");
   if (text) {
     controller.enqueue(text);
   }
 },
```
Verification successful

Nullish coalescing operator handles potential undefined delta, but type definition could improve safety.

The current implementation already uses the nullish coalescing operator (??) to handle cases where chunk.delta might be undefined or null. This approach is consistent across the codebase, as seen in both templates/types/multiagent/express/src/workflow/stream.ts and templates/components/llamaindex/typescript/streaming/stream.ts.

While this handles the runtime safety, consider adding a type definition for the chunk object to improve type safety:

```typescript
interface Chunk {
  delta?: string;
  // Add other properties if needed
}

write(chunk: Chunk) {
  const text = trimStartOfStream(chunk.delta ?? "");
  // ... rest of the method
}
```

This would provide better type checking while maintaining the current runtime safety with the nullish coalescing operator.

Scripts executed

The following scripts were executed for the analysis:

Script:

```bash
#!/bin/bash
# Description: Verify that `chunk.delta` is always defined in the stream.

# Search for all places where chunks are created and ensure they have a `delta` property.
rg --type=ts 'new WritableStream' -A 20

# Check the type definitions of `chunk` to confirm it includes `delta`.
rg --type=ts 'interface Chunk' -A 10
```

Length of output: 281


Script:

#!/bin/bash

# Find and display the content of the file containing the 'WritableStream' implementation
echo "=== WritableStream Implementation ==="
fd -e ts -e js | xargs grep -n "new WritableStream" -A 20 || echo "No WritableStream found"

echo "\n=== Type Definitions or Interfaces ==="
fd -e ts -e js | xargs grep -n "interface.*Stream\|type.*Stream\|interface.*Chunk\|type.*Chunk" -A 10 || echo "No relevant type definitions found"

echo "\n=== Usage of 'delta' property ==="
fd -e ts -e js | xargs grep -n "\.delta" -B 5 -A 5 || echo "No usage of 'delta' property found"

Length of output: 6246

templates/types/multiagent/express/src/workflow/agents.ts (2)

1-4: Import statements are well-organized

The import statements are properly structured and follow best practices.


24-36: createResearcher function implementation is correct

The createResearcher function is correctly implemented and makes appropriate use of getQueryEngineTool().

Resolved (outdated) review threads:
  • templates/types/multiagent/express/tsconfig.json
  • templates/types/multiagent/express/src/workflow/stream.ts (2 threads)
  • templates/types/multiagent/express/src/workflow/index.ts (5 threads)
@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 14

Outside diff range and nitpick comments (3)
templates/components/multiagent/typescript/workflow/stream.ts (1)

28-28: Re-evaluate the necessity of enqueuing an empty string

In the start method, you enqueue an empty string with controller.enqueue(""); to kickstart the stream. If this isn't necessary for the stream's functionality, consider removing it to avoid sending unnecessary data to the stream consumers.
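The point can be checked against plain web-standard streams (no llamaindex types; all names below are illustrative): a `ReadableStream` delivers its first real chunk to consumers without any empty-string "kickstart" enqueue.

```typescript
// Minimal sketch: start() emits only real data; no controller.enqueue("")
// is needed for consumers to begin reading.
function makeStream(chunks: string[]): ReadableStream<string> {
  return new ReadableStream<string>({
    start(controller) {
      for (const c of chunks) controller.enqueue(c); // only real data
      controller.close();
    },
  });
}

// Drain a stream into an array so the emitted chunks can be inspected.
async function collect(stream: ReadableStream<string>): Promise<string[]> {
  const out: string[] = [];
  const reader = stream.getReader();
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    if (value !== undefined) out.push(value);
  }
  return out;
}
```

If the empty enqueue turns out to be needed by a particular consumer (e.g. to force an early flush), that dependency is worth a comment at the call site.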

templates/components/multiagent/typescript/workflow/factory.ts (2)

49-50: Unnecessary optional chaining on the 'options' object

The options parameter is required in the constructor, so using optional chaining (options?.) when accessing its properties is unnecessary and may imply that options could be undefined.

Remove the optional chaining for clearer code:

- verbose: options?.verbose ?? false,
- timeout: options?.timeout ?? 360,
+ verbose: options.verbose ?? false,
+ timeout: options.timeout ?? 360,

- this.name = options?.name;
+ this.name = options.name;

- this.tools = options?.tools ?? [];
+ this.tools = options.tools ?? [];

- this.writeEvents = options?.writeEvents ?? true;
+ this.writeEvents = options.writeEvents ?? true;

Also applies to: 52-52, 58-58, 60-60


37-46: Consider marking 'stream' as optional in constructor options

The stream property in the constructor options is currently required, but there may be cases where it's not provided. To increase flexibility, consider marking it as optional.

- stream: StreamData;
+ stream?: StreamData;

And adjust the assignment accordingly, ensuring you handle the possibility of undefined where this.stream is used.

Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

Commits

Files that changed from the base of the PR and between 54d74f8 and d69cd42.

Files selected for processing (7)
  • helpers/typescript.ts (3 hunks)
  • templates/components/multiagent/typescript/workflow/agents.ts (1 hunks)
  • templates/components/multiagent/typescript/workflow/factory.ts (1 hunks)
  • templates/components/multiagent/typescript/workflow/index.ts (1 hunks)
  • templates/components/multiagent/typescript/workflow/stream.ts (1 hunks)
  • templates/components/multiagent/typescript/workflow/type.ts (1 hunks)
  • templates/types/multiagent/express/src/controllers/chat.controller.ts (1 hunks)
Files skipped from review as they are similar to previous changes (2)
  • helpers/typescript.ts
  • templates/types/multiagent/express/src/controllers/chat.controller.ts
Additional comments not posted (5)
templates/components/multiagent/typescript/workflow/type.ts (4)

4-7: LGTM!

The AgentInput type definition is clear and concise. The message property is required, while streaming is optional, which makes sense for an agent input.


9-12: LGTM!

The AgentRunEvent class definition is clear and concise. Extending the WorkflowEvent class is appropriate for an agent run event, and the generic type parameter properties are well-defined.


14-16: LGTM!

The AgentRunResult class definition is clear and concise. Using an AsyncGenerator for the response property is appropriate for streaming workflow events.


18-20: LGTM!

The FunctionCallingStreamResult class definition is clear and concise. Using a ReadableStream for the response property is appropriate for streaming engine responses.

templates/components/multiagent/typescript/workflow/factory.ts (1)

145-151: Potential PII exposure in event messages

The writeEvent method logs messages that may include user inputs or other sensitive information. This could lead to unintentional exposure of PII in logs or streams.

Please ensure that the messages logged do not contain sensitive information. If necessary, sanitize or exclude such data from the logs.

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 20

Outside diff range and nitpick comments (6)
templates/components/multiagent/typescript/express/chat.controller.ts (2)

29-29: Consistent error response keys

Ensure that the error responses use consistent keys. Earlier, a 400 error uses the key error, while here it uses detail.

Use the same key for consistency:

    return res.status(500).json({
-     error: "An unexpected error occurred. Please try again later.",
+     message: "An unexpected error occurred. Please try again later.",
    });

Alternatively, adjust the 400 error to match.


32-32: Remove unnecessary clearTimeout if not needed

If streamTimeout is not set elsewhere, calling clearTimeout(streamTimeout); may not be necessary.

Verify if streamTimeout needs to be cleared.

templates/components/multiagent/typescript/workflow/factory.ts (1)

111-111: Correct the typo in the message template

In the message, "You're blog post is ready for publication...", the word "You're" should be "Your".

Apply this diff to fix the typo:

-        input: `You're blog post is ready for publication. Please respond with just the blog post. Blog post: \`\`\`${oldContent}\`\`\``,
+        input: `Your blog post is ready for publication. Please respond with just the blog post. Blog post: \`\`\`${oldContent}\`\`\``,
templates/components/multiagent/typescript/workflow/single-agent.ts (3)

104-107: Use consistent parameter naming for 'ctx' and 'context'

The methods prepareChatHistory (lines 90-92) and handleLLMInput (lines 104-107) use ctx as the parameter name for the context object, whereas handleLLMInputStream (lines 125-128) uses context. For consistency and readability, consider using the same parameter name throughout these methods.

Also applies to: 125-128


236-237: Offer assistance in removing the 'toolCalled' flag

The TODO comment (lines 236-237) suggests that if the LLM had a method to get tool calls from the response, the toolCalled flag wouldn't be necessary. I can help implement a solution that leverages a method in the LLM to retrieve tool calls directly from the response, simplifying the code and eliminating the need for the flag.

Would you like me to generate the code to implement this functionality or open a GitHub issue to track this task?


181-215: Handle cases where tool execution fails gracefully

In the handleToolCalls method (lines 181-215), if a tool throws an exception during execution, it is caught, and an error message is added to toolMsgs. Ensure that this approach aligns with the desired error-handling strategy and consider whether additional actions (e.g., retries, alternative flows) are needed when tool execution fails.

Consider implementing a more robust error-handling mechanism, possibly including retries or fallback tools, to improve the resilience of the workflow.

Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

Commits

Files that changed from the base of the PR and between 054ee5b and 8cfabc5.

Files selected for processing (9)
  • helpers/typescript.ts (3 hunks)
  • questions.ts (1 hunks)
  • templates/components/multiagent/typescript/express/chat.controller.ts (1 hunks)
  • templates/components/multiagent/typescript/nextjs/route.ts (1 hunks)
  • templates/components/multiagent/typescript/workflow/agents.ts (1 hunks)
  • templates/components/multiagent/typescript/workflow/factory.ts (1 hunks)
  • templates/components/multiagent/typescript/workflow/single-agent.ts (1 hunks)
  • templates/components/multiagent/typescript/workflow/stream.ts (1 hunks)
  • templates/components/multiagent/typescript/workflow/type.ts (1 hunks)
Files skipped from review as they are similar to previous changes (4)
  • questions.ts
  • templates/components/multiagent/typescript/workflow/agents.ts
  • templates/components/multiagent/typescript/workflow/stream.ts
  • templates/components/multiagent/typescript/workflow/type.ts
Additional comments not posted (6)
helpers/typescript.ts (2)

Line range hint 1-324: Overall, the changes align well with the PR objective.

The modifications to helpers/typescript.ts successfully introduce support for multi-agent functionality in TypeScript. Key improvements include:

  1. Simplified template handling
  2. Addition of multiagent template support
  3. Copying of necessary settings to the engine folder

These changes enhance the flexibility and functionality of the TypeScript setup process. The implementation is generally sound, with only minor suggestions for improvement in terms of code organization and error handling.


36-36: Verify the impact of removing template conditional logic.

The template path is now set to "streaming" for all cases, removing previous conditional logic. While this simplifies the code, it may reduce flexibility.

Please confirm if this change is intentional and doesn't negatively impact other use cases. Run the following script to check for any remaining references to other template types:

Verification successful

Template conditional logic removal verified.

No references to other template types (e.g., "chat" or "rag") were found in the TypeScript files. The change to set the template path to "streaming" for all cases does not negatively impact other use cases.

Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for references to other template types
rg --type typescript 'template(?!s).*(?!streaming)(?:chat|rag)' .

Length of output: 319


Script:

#!/bin/bash
# Search for specific references to other template types without using look-around
rg --type typescript 'template\s*(?:chat|rag)' .

Length of output: 85


Script:

#!/bin/bash
# Search for specific references to other template types in TypeScript files using the correct file type
rg --type ts 'template\s*(?:chat|rag)' .

Length of output: 42

templates/components/multiagent/typescript/express/chat.controller.ts (1)

24-25: Handle streaming properly with asynchronous operations

If agent.run is awaited, ensure that the streaming logic remains consistent and that the response is properly handled.

Verify that the stream correctly processes and sends data to the client after awaiting agent.run.

templates/components/multiagent/typescript/nextjs/route.ts (1)

35-35: Verify type compatibility between 'Message' and 'ChatMessage'

Ensure that casting messages to ChatMessage[] is safe. Verify that the properties of Message are compatible with ChatMessage to prevent potential runtime errors due to type mismatches.

Run the following script to compare the type definitions:

This will help confirm that the interfaces are compatible for casting.

templates/components/multiagent/typescript/workflow/single-agent.ts (2)

67-75: Validate the workflow step configurations

In the constructor (lines 67-75), when adding steps to the workflow, ensure that the outputs are correctly specified and that all possible outputs are accounted for. Verify that the steps transition appropriately based on the events produced.

Run the following script to check the workflow step configurations:

#!/bin/bash
# Description: Verify that the workflow steps have correct outputs configured.

# Test: Search for 'addStep' usages and inspect the 'outputs' parameter.
rg --type typescript 'this\.addStep' -A 3

This script helps to ensure that the workflow steps are correctly set up and that event transitions are properly handled.


153-159: Verify that 'fullResponse.delta' contains the expected content

In the handleLLMInputStream method (lines 153-159), fullResponse.delta is used to update the memory. Ensure that fullResponse.delta contains the expected content before adding it to the memory. If delta might be undefined or does not contain the complete response, consider using fullResponse.message.content or adjusting the logic to construct the full message.

Run the following script to check if ChatResponseChunk includes delta and whether it contains the full response:

This script searches for the definition of ChatResponseChunk in the codebase to verify if the delta property exists and behaves as expected.

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 6

Outside diff range and nitpick comments (5)
templates/components/multiagent/typescript/workflow/factory.ts (2)

1-15: Consider adding documentation for constants

The TIMEOUT and MAX_ATTEMPTS constants are defined without explanation. Adding comments to clarify their purpose and why these specific values were chosen would improve code readability and maintainability.

Consider adding explanatory comments:

// Maximum time allowed for the entire workflow execution (in milliseconds)
const TIMEOUT = 360 * 1000;

// Maximum number of writing attempts before proceeding with the current version
const MAX_ATTEMPTS = 2;

1-138: Overall implementation is solid with room for improvements

The multi-agent workflow for blog post creation and review is well-structured and implements the required functionality. The use of typed events and clear step definitions contributes to a maintainable codebase.

However, there are several areas where the implementation could be enhanced:

  1. Resolve the context.writeEventToStream issue for consistent event handling.
  2. Improve type safety, particularly in the runAgent function and its usage.
  3. Enhance error handling throughout the workflow, especially in agent interactions.
  4. Implement a more robust post evaluation mechanism in the review step.
  5. Consider adding more detailed documentation, particularly for complex logic and configuration constants.

Addressing these points will further improve the reliability, maintainability, and type safety of the codebase.

For future iterations, consider:

  • Implementing a more modular structure for easier testing of individual workflow steps.
  • Adding telemetry or logging for better observability of the workflow execution.
  • Exploring the use of more advanced TypeScript features like conditional types or branded types to enhance type safety.
templates/components/multiagent/typescript/express/chat.controller.ts (1)

9-9: Rename vercelStreamData for clarity

The variable name vercelStreamData suggests a specific association with Vercel, which might not be accurate or could cause confusion if the code is used in different contexts. Consider renaming it to something more general like streamData to improve clarity.

Apply this diff:

-  const vercelStreamData = new StreamData();
+  const streamData = new StreamData();

Ensure all references to vercelStreamData are updated accordingly.

templates/components/multiagent/typescript/workflow/single-agent.ts (2)

111-114: Use 'llmArgs' in non-streaming LLM call for consistency

In the streaming version, llmArgs is used when calling llm.chat, but in the non-streaming version, arguments are passed directly. For consistency and maintainability, consider using llmArgs in the non-streaming version as well.

Apply this diff for consistency:

       const result = await this.llm.chat({
-        messages: this.chatHistory,
-        tools: this.tools,
+        ...llmArgs,
       });

234-235: Address the TODO: Implement 'getToolCallsFromResponse' in LLM

The TODO comment suggests that the LLM should have a method getToolCallsFromResponse. Implementing this method would eliminate the need for the toolCalled flag and simplify the workflow.

Would you like assistance in implementing the getToolCallsFromResponse method in the LLM interface or opening a GitHub issue to track this task?

Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

Commits

Files that changed from the base of the PR and between 305296b and ea3bbcf.

Files selected for processing (6)
  • templates/components/multiagent/typescript/express/chat.controller.ts (1 hunks)
  • templates/components/multiagent/typescript/nextjs/route.ts (1 hunks)
  • templates/components/multiagent/typescript/workflow/factory.ts (1 hunks)
  • templates/components/multiagent/typescript/workflow/single-agent.ts (1 hunks)
  • templates/components/multiagent/typescript/workflow/stream.ts (1 hunks)
  • templates/components/multiagent/typescript/workflow/type.ts (1 hunks)
Files skipped from review as they are similar to previous changes (2)
  • templates/components/multiagent/typescript/workflow/stream.ts
  • templates/components/multiagent/typescript/workflow/type.ts
Additional comments not posted (10)
templates/components/multiagent/typescript/workflow/factory.ts (3)

16-21: Event classes look good

The ResearchEvent, WriteEvent, and ReviewEvent classes are well-defined and extend WorkflowEvent with appropriate input types. This structure provides a clear foundation for the workflow implementation.


131-138: Workflow creation and step addition look good

The creation of the Workflow instance and the addition of steps using addStep are implemented correctly. The use of the TIMEOUT constant and the validate option enhances the robustness of the workflow.


23-42: Investigate and resolve the context.writeEventToStream issue

There's a TODO comment indicating that context.writeEventToStream should be used instead of directly appending to the stream. However, it's noted that this fails after the third event.

Please investigate why context.writeEventToStream fails after the third event. This could be a bug in the LITS code or an issue with how events are being processed. Consider running the following script to gather more information:

Once the issue is resolved, update the code to use context.writeEventToStream for consistent event handling within the workflow.

templates/components/multiagent/typescript/express/chat.controller.ts (1)

24-25: Ensure proper streaming response handling

While streaming the response using streamToResponse, make sure that the stream is correctly managed to handle backpressure and errors within the stream. This ensures that resource usage is optimized and unexpected issues are handled gracefully during data transmission.

To confirm that the stream handles errors appropriately, consider testing with a script that simulates stream errors:

Observe the server logs and the client output to ensure that errors are logged and the client receives appropriate responses without resource leaks.

templates/components/multiagent/typescript/nextjs/route.ts (1)

1-53: Overall, the implementation is solid

The code effectively handles the POST request, processes chat messages, and returns a streaming response. Initialization and cleanup are properly managed.

templates/components/multiagent/typescript/workflow/single-agent.ts (5)

54-55: Ensure 'llm' is defined to prevent potential errors

In the constructor, this.llm is assigned using options.llm ?? (Settings.llm as ToolCallLLM). If both options.llm and Settings.llm are undefined, this.llm would be undefined, leading to errors when methods like checkToolCallSupport() are called. Consider adding a check to ensure this.llm is defined and throw an informative error if it is not.
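A hypothetical sketch of the suggested guard follows; `ToolCallLLM` and `Settings` here are minimal stand-ins, not llamaindex's actual types.

```typescript
// Stand-in for the library's tool-calling LLM interface.
interface ToolCallLLM {
  chat(input: unknown): Promise<unknown>;
}

// Stand-in for the global Settings object, where a default LLM may be set.
const Settings: { llm?: ToolCallLLM } = {};

// Resolve the LLM from options or global settings, failing fast with an
// informative error instead of deferring an undefined access to later calls.
function resolveLLM(optionLLM?: ToolCallLLM): ToolCallLLM {
  const llm = optionLLM ?? Settings.llm;
  if (!llm) {
    throw new Error(
      "FunctionCallingAgent requires an LLM: pass options.llm or set Settings.llm",
    );
  }
  return llm;
}
```

Failing in the constructor keeps the error close to the misconfiguration, rather than surfacing later as an opaque "cannot read property of undefined" inside `checkToolCallSupport()`.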


77-79: Consider caching 'chatHistory' to improve performance

The getter chatHistory retrieves messages from memory each time it's accessed. If chatHistory is accessed multiple times without changes to the memory, consider caching the result during a workflow step and invalidating it only when new messages are added. This could improve performance.


97-98: Use 'system' role when adding the system prompt to memory

When adding the system prompt to memory, it's more appropriate to use the role 'system' instead of 'assistant' to accurately reflect the nature of the message.
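A small illustrative sketch of the fix (`SimpleMemory` stands in for the agent's chat memory buffer; types are not the library's actual API):

```typescript
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

// Minimal in-memory stand-in for the agent's memory.
class SimpleMemory {
  private messages: ChatMessage[] = [];
  put(msg: ChatMessage) {
    this.messages.push(msg);
  }
  getMessages(): ChatMessage[] {
    return this.messages;
  }
}

// The system prompt is stored under the "system" role (not "assistant"),
// so downstream LLM calls treat it as an instruction rather than a reply.
function prepareChatHistory(
  memory: SimpleMemory,
  systemPrompt: string | undefined,
  userInput: string,
) {
  if (systemPrompt) {
    memory.put({ role: "system", content: systemPrompt });
  }
  memory.put({ role: "user", content: userInput });
}
```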


116-116: Questioning the necessity of 'toolCalled' flag

Is the toolCalled flag necessary? It seems to limit ToolCallEvent to be triggered only once per agent. In Python implementations, the LLM has a method to determine whether to call the tool again or not, which might eliminate the need for this flag.


206-206: Use a logging mechanism instead of 'console.error'

Directly using console.error(e) may not be appropriate for production code, as it can be less flexible and harder to manage in different environments. Consider using a logging library that allows for configurable log levels and outputs.
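As a sketch of what "configurable log levels" means here (illustrative only; in practice a library such as pino or winston would fill this role):

```typescript
type Level = "debug" | "info" | "error";
const LEVEL_ORDER: Record<Level, number> = { debug: 0, info: 1, error: 2 };

// A minimal leveled logger: entries below minLevel are dropped, and the sink
// can be swapped (console, file, transport) per environment.
function createLogger(minLevel: Level) {
  const entries: string[] = []; // swap for a real sink in production
  return {
    log(level: Level, message: string) {
      if (LEVEL_ORDER[level] >= LEVEL_ORDER[minLevel]) {
        entries.push(`[${level}] ${message}`);
      }
    },
    entries: () => entries,
  };
}
```

With this shape, the workflow can log at `error` level in production while keeping `debug` output available during development, instead of hardcoding `console.error`.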

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

Outside diff range and nitpick comments (1)
e2e/multiagent_template.spec.ts (1)

13-15: LGTM! Consider enhancing readability.

The dynamic assignment of templateFramework based on the FRAMEWORK environment variable is a good improvement. It adds flexibility to the tests while maintaining backwards compatibility.

For slightly improved readability, consider using nullish coalescing:

const templateFramework: TemplateFramework = process.env.FRAMEWORK as TemplateFramework ?? "fastapi";

This change would make the code more concise while maintaining the same functionality.

Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

Commits

Files that changed from the base of the PR and between 325c7ca and 45f7529.

Files selected for processing (1)
  • e2e/multiagent_template.spec.ts (2 hunks)
Additional comments not posted (1)
e2e/multiagent_template.spec.ts (1)

24-26: ⚠️ Potential issue

Review and update the test skip condition and its comment.

The updated skip condition has been simplified, but it introduces potential issues:

  1. The condition no longer checks for the FRAMEWORK, which might lead to unexpected test executions on non-FastAPI frameworks.
  2. The comment below the skip condition is now inconsistent with the actual condition.

Consider the following improvements:

  1. Update the skip condition to include the FRAMEWORK check:
process.platform !== "linux" || process.env.DATASOURCE === "--no-files" || process.env.FRAMEWORK !== "fastapi",
  2. Update the comment to accurately reflect the skip condition:
"This test is designed for Linux platforms with FastAPI framework and file-based datasources. Skipped for other configurations to optimize test execution.",

To ensure we're not missing any important context, let's check for other usages of FRAMEWORK in the test file:

This will help us understand if there are any other places where FRAMEWORK is used that might affect our decision.

Verification successful

The FRAMEWORK check is not required in the test skip condition.

  • The FRAMEWORK environment variable is only used in test descriptions and does not influence test execution logic.
  • Therefore, adding a FRAMEWORK check to the skip condition is unnecessary.
Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
rg --type typescript 'process\.env\.FRAMEWORK' e2e/multiagent_template.spec.ts

Length of output: 115


Script:

#!/bin/bash
rg --type ts 'process\.env\.FRAMEWORK' e2e/multiagent_template.spec.ts

Length of output: 188


Script:

#!/bin/bash
rg --type ts 'templateFramework' e2e/multiagent_template.spec.ts

Length of output: 301

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 5

🧹 Outside diff range and nitpick comments (1)
templates/components/multiagent/typescript/workflow/factory.ts (1)

1-134: Summary of review for factory.ts

This file implements a multi-agent workflow for blog post creation and review. The overall structure and implementation are sound, but there are several areas where improvements can be made:

  1. Make TIMEOUT and MAX_ATTEMPTS configurable for greater flexibility.
  2. Improve type safety in the runAgent function to avoid unnecessary type assertions.
  3. Add error handling to the research function to gracefully handle potential failures.
  4. Enhance type safety and error handling in the write function.
  5. Implement a more sophisticated post evaluation logic in the review function.

Addressing these points will significantly improve the robustness, flexibility, and maintainability of the code. The suggested changes will make the workflow more resilient to errors and easier to adapt to different use cases.

Consider breaking down this file into smaller, more focused modules. For example, you could separate the event definitions, agent runners, and workflow steps into different files. This would improve the overall structure and make the code easier to maintain and test.
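One way to make the constants configurable, as suggested in point 1, is an options object with defaults (names below are assumptions, not the template's actual API):

```typescript
// Configurable replacements for the hardcoded TIMEOUT and MAX_ATTEMPTS.
interface BlogWorkflowOptions {
  timeoutMs?: number; // max total workflow run time
  maxAttempts?: number; // writing retries before accepting the current draft
}

const WORKFLOW_DEFAULTS = { timeoutMs: 360_000, maxAttempts: 2 };

// Merge caller overrides over the defaults; omitted fields keep their defaults.
function resolveWorkflowOptions(opts: BlogWorkflowOptions = {}) {
  return { ...WORKFLOW_DEFAULTS, ...opts };
}
```

The workflow factory could then accept a `BlogWorkflowOptions` argument and pass the resolved values to the `Workflow` constructor instead of referencing module-level constants.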

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

Commits

Files that changed from the base of the PR and between 234b15e and 32c3d89.

Files selected for processing (4)
  • templates/components/multiagent/typescript/express/chat.controller.ts (1 hunks)
  • templates/components/multiagent/typescript/nextjs/route.ts (1 hunks)
  • templates/components/multiagent/typescript/workflow/factory.ts (1 hunks)
  • templates/components/multiagent/typescript/workflow/stream.ts (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (3)
  • templates/components/multiagent/typescript/express/chat.controller.ts
  • templates/components/multiagent/typescript/nextjs/route.ts
  • templates/components/multiagent/typescript/workflow/stream.ts
🔇 Additional comments not posted (2)
templates/components/multiagent/typescript/workflow/factory.ts (2)

15-20: Event classes are well-defined

The ResearchEvent, WriteEvent, and ReviewEvent classes are correctly implemented, extending WorkflowEvent with appropriate data structures for each event type. This design provides a clear structure for the workflow events.


37-42: Start function is well-implemented

The start function correctly initializes the workflow by setting the task in the context and creating a ResearchEvent. This implementation provides a clear entry point for the workflow.

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 6

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL

📥 Commits

Files that changed from the base of the PR and between fa45102 and 2fb502e.

📒 Files selected for processing (1)
  • templates/components/multiagent/typescript/workflow/single-agent.ts (1 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
templates/components/multiagent/typescript/workflow/single-agent.ts (1)

Pattern templates/**: For files under the templates folder, do not report 'Missing Dependencies Detected' errors.

🔇 Additional comments (2)
templates/components/multiagent/typescript/workflow/single-agent.ts (2)

1-28: LGTM: Imports and type definitions are well-structured.

The imports from "@llamaindex/core/workflow" and "llamaindex" are appropriate for the functionality of the FunctionCallingAgent class. The custom event classes InputEvent and ToolCallEvent are correctly defined, extending WorkflowEvent. This structure supports the workflow-based approach of the agent.


82-94: LGTM: prepareChatHistory method is well-implemented

The prepareChatHistory method correctly prepares the chat history for the agent. It appropriately handles the system prompt (if provided) and the user message, storing them in memory. The method also sets the streaming flag in the context and writes an event to indicate the start of work.

Comment on lines +78 to +80
private get chatHistory() {
return this.memory.getMessages();
}

🛠️ Refactor suggestion

Consider caching chatHistory for performance

The chatHistory getter retrieves messages from memory each time it's accessed. If chatHistory is accessed frequently without changes to the memory, this could lead to unnecessary computations.

Consider caching the chat history and invalidating the cache when new messages are added to memory:

private _cachedChatHistory: ChatMessage[] | null = null;

private get chatHistory() {
  if (!this._cachedChatHistory) {
    this._cachedChatHistory = this.memory.getMessages();
  }
  return this._cachedChatHistory;
}

// In methods that modify memory (e.g., prepareChatHistory):
private invalidateChatHistoryCache() {
  this._cachedChatHistory = null;
}

This approach would improve performance for repeated accesses to chatHistory within a single workflow step.

Comment on lines +96 to +116
private async handleLLMInput(
ctx: Context,
ev: InputEvent,
): Promise<StopEvent<string | AsyncGenerator> | ToolCallEvent> {
if (ctx.get("streaming")) {
return await this.handleLLMInputStream(ctx, ev);
}

const result = await this.llm.chat({
messages: this.chatHistory,
tools: this.tools,
});
this.memory.put(result.message);

const toolCalls = this.getToolCallsFromResponse(result);
if (toolCalls.length) {
return new ToolCallEvent({ toolCalls });
}
this.writeEvent("Finished task", ctx);
return new StopEvent({ result: result.message.content.toString() });
}

🛠️ Refactor suggestion

Improve error handling in handleLLMInput

The handleLLMInput method processes the LLM response correctly, but it lacks error handling for the LLM chat call.

Consider adding a try-catch block to handle potential errors from the LLM chat call:

```diff
 private async handleLLMInput(
   ctx: Context,
   ev: InputEvent,
 ): Promise<StopEvent<string | AsyncGenerator> | ToolCallEvent> {
   if (ctx.get("streaming")) {
     return await this.handleLLMInputStream(ctx, ev);
   }

+  try {
     const result = await this.llm.chat({
       messages: this.chatHistory,
       tools: this.tools,
     });
     this.memory.put(result.message);

     const toolCalls = this.getToolCallsFromResponse(result);
     if (toolCalls.length) {
       return new ToolCallEvent({ toolCalls });
     }
     this.writeEvent("Finished task", ctx);
     return new StopEvent({ result: result.message.content.toString() });
+  } catch (error) {
+    this.writeEvent(`Error in LLM chat: ${error.message}`, ctx);
+    return new StopEvent({ result: `Error: ${error.message}` });
+  }
 }
```

This change will ensure that any errors during the LLM chat call are properly handled and reported.


Comment on lines +208 to +235
```typescript
private writeEvent(msg: string, context: Context) {
  if (!this.writeEvents) return;
  context.writeEventToStream({
    data: new AgentRunEvent({ name: this.name, msg }),
  });
}

private checkToolCallSupport() {
  const { supportToolCall } = this.llm as ToolCallLLM;
  if (!supportToolCall) throw new Error("LLM does not support tool calls");
}

private getToolCallsFromResponse(
  response:
    | ChatResponse<ToolCallLLMMessageOptions>
    | ChatResponseChunk<ToolCallLLMMessageOptions>,
): ToolCall[] {
  let options;
  if ("message" in response) {
    options = response.message.options;
  } else {
    options = response.options;
  }
  if (options && "toolCall" in options) {
    return options.toolCall as ToolCall[];
  }
  return [];
}
```

🛠️ Refactor suggestion

Improve type checking in getToolCallsFromResponse

The getToolCallsFromResponse method extracts tool calls from the LLM response, but it could benefit from more robust type checking.

Consider improving the type checking in the getToolCallsFromResponse method:

```diff
 private getToolCallsFromResponse(
   response:
     | ChatResponse<ToolCallLLMMessageOptions>
     | ChatResponseChunk<ToolCallLLMMessageOptions>,
 ): ToolCall[] {
   let options;
   if ("message" in response) {
     options = response.message.options;
   } else {
     options = response.options;
   }
-  if (options && "toolCall" in options) {
-    return options.toolCall as ToolCall[];
+  if (options && "toolCall" in options && options.toolCall) {
+    return Array.isArray(options.toolCall) ? options.toolCall : [options.toolCall];
   }
   return [];
 }
```

This change ensures that toolCall is defined before accessing it and handles both single tool call and array of tool calls correctly.
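The normalization can be exercised in isolation. In this sketch, `ToolCall` and `Options` are deliberately simplified stand-ins for the framework types, so only the single-vs-array handling is shown:

```typescript
// Normalizes a response's options into an array of tool calls,
// handling undefined options, a missing toolCall, a single call,
// and an array of calls. Types are simplified for illustration.
type ToolCall = { id: string; name: string };
type Options = { toolCall?: ToolCall | ToolCall[] } | undefined;

function getToolCalls(options: Options): ToolCall[] {
  if (options && "toolCall" in options && options.toolCall) {
    return Array.isArray(options.toolCall)
      ? options.toolCall
      : [options.toolCall];
  }
  return [];
}
```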


Comment on lines +170 to +206
```typescript
private async handleToolCalls(
  ctx: Context,
  ev: ToolCallEvent,
): Promise<InputEvent> {
  const { toolCalls } = ev.data;

  const toolMsgs: ChatMessage[] = [];

  for (const call of toolCalls) {
    const targetTool = this.tools.find(
      (tool) => tool.metadata.name === call.name,
    );
    // TODO: make logger optional in callTool in framework
    const toolOutput = await callTool(targetTool, call, {
      log: () => {},
      error: console.error.bind(console),
      warn: () => {},
    });
    toolMsgs.push({
      content: JSON.stringify(toolOutput.output),
      role: "user",
      options: {
        toolResult: {
          result: toolOutput.output,
          isError: toolOutput.isError,
          id: call.id,
        },
      },
    });
  }

  for (const msg of toolMsgs) {
    this.memory.put(msg);
  }

  return new InputEvent({ input: this.memory.getMessages() });
}
```

⚠️ Potential issue

Address TODO and improve error handling in handleToolCalls

The handleToolCalls method processes tool calls correctly, but there's a TODO comment about making the logger optional in callTool.

Consider addressing the TODO by implementing a more flexible logging approach:

```diff
-  // TODO: make logger optional in callTool in framework
-  const toolOutput = await callTool(targetTool, call, {
-    log: () => {},
-    error: console.error.bind(console),
-    warn: () => {},
-  });
+  const logger = this.options.logger || {
+    log: () => {},
+    error: console.error.bind(console),
+    warn: () => {},
+  };
+  const toolOutput = await callTool(targetTool, call, logger);
```

Also, consider adding error handling for individual tool calls:

```diff
 for (const call of toolCalls) {
+  try {
     const targetTool = this.tools.find(
       (tool) => tool.metadata.name === call.name,
     );
+    if (!targetTool) {
+      throw new Error(`Tool ${call.name} not found`);
+    }
     // ... existing code ...
+  } catch (error) {
+    this.writeEvent(`Error in tool call ${call.name}: ${error.message}`, ctx);
+    toolMsgs.push({
+      content: JSON.stringify({ error: error.message }),
+      role: "system",
+      options: {
+        toolResult: {
+          result: { error: error.message },
+          isError: true,
+          id: call.id,
+        },
+      },
+    });
+  }
 }
```

These changes will improve error handling and logging flexibility in the handleToolCalls method.
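The per-call isolation this suggests can be shown standalone. The sketch below uses a hypothetical `ToolFn` registry instead of the framework's `callTool`, so it only demonstrates the error-containment shape, not the real API:

```typescript
// Sketch: run each tool call independently, converting failures
// (including "tool not found") into error results instead of letting
// one bad call abort the whole batch. Types are illustrative.
type Call = { id: string; name: string };
type ToolFn = (call: Call) => Promise<unknown>;
type ToolResult = { id: string; result: unknown; isError: boolean };

async function runToolCalls(
  calls: Call[],
  tools: Map<string, ToolFn>,
): Promise<ToolResult[]> {
  const results: ToolResult[] = [];
  for (const call of calls) {
    try {
      const tool = tools.get(call.name);
      if (!tool) throw new Error(`Tool ${call.name} not found`);
      results.push({ id: call.id, result: await tool(call), isError: false });
    } catch (error) {
      results.push({
        id: call.id,
        result: { error: (error as Error).message },
        isError: true,
      });
    }
  }
  return results;
}
```

Each result carries `isError`, mirroring the `toolResult.isError` flag in the reviewed code, so the LLM can still see that a particular call failed.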

Committable suggestion was skipped due to low confidence.

Comment on lines +118 to +168
```typescript
private async handleLLMInputStream(
  context: Context,
  ev: InputEvent,
): Promise<StopEvent<AsyncGenerator> | ToolCallEvent> {
  const { llm, tools, memory } = this;
  const llmArgs = { messages: this.chatHistory, tools };

  const responseGenerator = async function* () {
    const responseStream = await llm.chat({ ...llmArgs, stream: true });

    let fullResponse = null;
    let yieldedIndicator = false;
    for await (const chunk of responseStream) {
      const hasToolCalls = chunk.options && "toolCall" in chunk.options;
      if (!hasToolCalls) {
        if (!yieldedIndicator) {
          yield false;
          yieldedIndicator = true;
        }
        yield chunk;
      } else if (!yieldedIndicator) {
        yield true;
        yieldedIndicator = true;
      }

      fullResponse = chunk;
    }

    if (fullResponse) {
      memory.put({
        role: "assistant",
        content: "",
        options: fullResponse.options,
      });
      yield fullResponse;
    }
  };

  const generator = responseGenerator();
  const isToolCall = await generator.next();
  if (isToolCall.value) {
    const fullResponse = await generator.next();
    const toolCalls = this.getToolCallsFromResponse(
      fullResponse.value as ChatResponseChunk<ToolCallLLMMessageOptions>,
    );
    return new ToolCallEvent({ toolCalls });
  }

  this.writeEvent("Finished task", context);
  return new StopEvent({ result: generator });
}
```

🛠️ Refactor suggestion

Enhance error handling in handleLLMInputStream

The handleLLMInputStream method processes streaming LLM inputs, but it lacks error handling for potential issues during stream processing.

Consider adding try-catch blocks to handle potential errors in the stream processing:

```diff
 private async handleLLMInputStream(
   context: Context,
   ev: InputEvent,
 ): Promise<StopEvent<AsyncGenerator> | ToolCallEvent> {
   const { llm, tools, memory } = this;
   const llmArgs = { messages: this.chatHistory, tools };

   const responseGenerator = async function* () {
+    try {
       const responseStream = await llm.chat({ ...llmArgs, stream: true });

       let fullResponse = null;
       let yieldedIndicator = false;
       for await (const chunk of responseStream) {
         // ... existing code ...
       }

       if (fullResponse) {
         memory.put({
           role: "assistant",
           content: "",
           options: fullResponse.options,
         });
         yield fullResponse;
       }
+    } catch (error) {
+      yield { error: `Error in stream processing: ${error.message}` };
+    }
   };

   const generator = responseGenerator();
+  try {
     const isToolCall = await generator.next();
     if (isToolCall.value) {
       const fullResponse = await generator.next();
       const toolCalls = this.getToolCallsFromResponse(
         fullResponse.value as ChatResponseChunk<ToolCallLLMMessageOptions>,
       );
       return new ToolCallEvent({ toolCalls });
     }

     this.writeEvent("Finished task", context);
     return new StopEvent({ result: generator });
+  } catch (error) {
+    this.writeEvent(`Error in stream processing: ${error.message}`, context);
+    return new StopEvent({ result: `Error: ${error.message}` });
+  }
 }
```

This change will ensure that any errors during stream processing are properly handled and reported.


Comment on lines +30 to +76
```typescript
export class FunctionCallingAgent extends Workflow {
  name: string;
  llm: ToolCallLLM;
  memory: ChatMemoryBuffer;
  tools: BaseToolWithCall[];
  systemPrompt?: string;
  writeEvents: boolean;
  role?: string;

  constructor(options: {
    name: string;
    llm?: ToolCallLLM;
    chatHistory?: ChatMessage[];
    tools?: BaseToolWithCall[];
    systemPrompt?: string;
    writeEvents?: boolean;
    role?: string;
    verbose?: boolean;
    timeout?: number;
  }) {
    super({
      verbose: options?.verbose ?? false,
      timeout: options?.timeout ?? 360,
    });
    this.name = options?.name;
    this.llm = options.llm ?? (Settings.llm as ToolCallLLM);
    this.checkToolCallSupport();
    this.memory = new ChatMemoryBuffer({
      llm: this.llm,
      chatHistory: options.chatHistory,
    });
    this.tools = options?.tools ?? [];
    this.systemPrompt = options.systemPrompt;
    this.writeEvents = options?.writeEvents ?? true;
    this.role = options?.role;

    // add steps
    this.addStep(StartEvent<AgentInput>, this.prepareChatHistory, {
      outputs: InputEvent,
    });
    this.addStep(InputEvent, this.handleLLMInput, {
      outputs: [ToolCallEvent, StopEvent],
    });
    this.addStep(ToolCallEvent, this.handleToolCalls, {
      outputs: InputEvent,
    });
  }
```

⚠️ Potential issue

Ensure llm is always initialized with a ToolCallLLM

The constructor initializes this.llm with options.llm ?? (Settings.llm as ToolCallLLM). This approach might lead to runtime errors if Settings.llm is not a ToolCallLLM.

Consider modifying the constructor to ensure this.llm is always a ToolCallLLM. Note that `checkToolCallSupport` above reads `supportToolCall` as a boolean flag, so the guard should test the flag's value rather than `typeof ... === 'function'`:

```diff
-  this.llm = options.llm ?? (Settings.llm as ToolCallLLM);
+  const llm = options.llm ?? Settings.llm;
+  if (!this.isToolCallLLM(llm)) {
+    throw new Error("LLM must be a ToolCallLLM");
+  }
+  this.llm = llm;

   // Add this method to the class
+  private isToolCallLLM(llm: any): llm is ToolCallLLM {
+    return (llm as ToolCallLLM).supportToolCall === true;
+  }
```

This change ensures that this.llm is always a ToolCallLLM and provides a clear error message if it's not.
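The guard pattern is easy to check in isolation. The sketch below uses hypothetical `LLMLike`/`ToolCallLLMLike` interfaces with a boolean `supportToolCall` flag (one plausible shape for this property, not the framework's actual definition):

```typescript
// Sketch of a user-defined type guard that narrows an LLM-like object
// to one declaring tool-call support. Interface names are illustrative.
interface LLMLike {
  chat(prompt: string): Promise<string>;
}

interface ToolCallLLMLike extends LLMLike {
  supportToolCall: boolean;
}

function isToolCallLLM(llm: LLMLike): llm is ToolCallLLMLike {
  return (llm as ToolCallLLMLike).supportToolCall === true;
}
```

After `if (!isToolCallLLM(llm)) throw ...`, the compiler narrows `llm` to `ToolCallLLMLike`, so no `as ToolCallLLM` cast is needed at the assignment.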

Committable suggestion was skipped due to low confidence.

Comment on lines +34 to +36
```typescript
const result = agent.run<AsyncGenerator<ChatResponseChunk>>(
  userMessage.content,
) as unknown as Promise<StopEvent<AsyncGenerator<ChatResponseChunk>>>;
```

@himself65 I saw you changing the workflow.run method. Can you also add a type for the StopEvent returned by run? (Here StopEvent<AsyncGenerator<ChatResponseChunk>>)
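A generically typed `run` would let the type parameter flow through to the `StopEvent` payload and make the double cast above unnecessary. A minimal sketch of that shape (hypothetical `WorkflowLike`/`StopEvent` classes, not the actual framework API):

```typescript
// Sketch: if run() is typed as Promise<StopEvent<T>>, callers get a
// typed result without `as unknown as` casts. Names are illustrative.
class StopEvent<T> {
  constructor(public readonly data: { result: T }) {}
}

class WorkflowLike<T> {
  constructor(private handler: (input: string) => Promise<T>) {}

  async run(input: string): Promise<StopEvent<T>> {
    return new StopEvent({ result: await this.handler(input) });
  }
}
```

With this shape, `new WorkflowLike(async (s) => s.length).run("hi")` resolves to `StopEvent<number>` with no casting at the call site.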

Comment on lines +183 to +187
```typescript
const toolOutput = await callTool(targetTool, call, {
  log: () => {},
  error: console.error.bind(console),
  warn: () => {},
});
```

@himself65 using callTool from the framework is very helpful, but would be nice if the logger object would be optional
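One way such an optional logger could look, as a sketch only (`callToolSketch` and `ToolLogger` are hypothetical; the framework's real `callTool` signature may differ):

```typescript
// Sketch: give the logger parameter a no-op default so callers can
// omit it, instead of every caller passing empty functions.
interface ToolLogger {
  log: (...args: unknown[]) => void;
  error: (...args: unknown[]) => void;
  warn: (...args: unknown[]) => void;
}

const noopLogger: ToolLogger = {
  log: () => {},
  error: console.error.bind(console),
  warn: () => {},
};

async function callToolSketch(
  tool: (input: string) => Promise<string>,
  input: string,
  logger: ToolLogger = noopLogger, // optional: defaults to the no-op logger
): Promise<string> {
  logger.log(`calling tool with ${input}`);
  return tool(input);
}
```

Callers who want real logging pass their own `ToolLogger`; everyone else just drops the argument.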

```typescript
});
toolMsgs.push({
  content: JSON.stringify(toolOutput.output),
  role: "user",
```

Is `role: "user"` correct? It looks to me that the role should be `tool` or `assistant`?

yes, it will be transformed by LITS (LlamaIndexTS)


was also confusing me

@marcusschiesser marcusschiesser merged commit ef070c0 into main Sep 26, 2024
37 checks passed
@marcusschiesser marcusschiesser deleted the feat/support-multi-agent-for-ts branch September 26, 2024 10:11