
[BUG] Model id and memory are required for all agent types #396

Closed
joshuali925 opened this issue Jan 10, 2024 · 0 comments
Labels: bug, v2.12.0


What is the bug?

I tried to create a flow agent that calls a query generator tool. The tool itself uses a model to generate queries, and the agent should not persist conversation history. The agent itself does not use any model, but it cannot be created unless it is given a model id and a memory config.
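
For comparison, ml-commons itself lets a flow agent be registered directly with no llm and no memory block at the agent level. A minimal sketch of such a request body for POST /_plugins/_ml/agents/_register (the tool's model_id is a placeholder, and the field names follow the ml-commons agent API as I understand it):

{
  "name": "PPL agent",
  "type": "flow",
  "description": "this is the PPL agent",
  "tools": [
    {
      "type": "PPLTool",
      "name": "PPLTool",
      "parameters": {
        "model_id": "<query generator model id>",
        "response_filter": "$.completion"
      },
      "include_output_in_agent_response": true
    }
  ]
}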

How can one reproduce the bug?

  1. Build OpenSearch 3.0.0 and install the ml-commons, ai-flow, and skills plugins from their latest commits.
  2. Start OpenSearch with plugins.flow_framework.enabled: true.
  3. Save the following as flow.json:
{
  "name": "query-register-agent",
  "description": "",
  "use_case": "REGISTER_AGENT",
  "version": {
    "template": "1.0.0",
    "compatibility": ["2.12.0", "3.0.0"]
  },
  "workflows": {
    "provision": {
      "user_params": {},
      "nodes": [
        {
          "id": "create_custom_connector",
          "type": "create_connector",
          "previous_node_inputs": {},
          "user_inputs": {
            "version": "1",
            "name": "sagemaker: custom",
            "protocol": "aws_sigv4",
            "description": "",
            "parameters": { "region": "us-west-2", "service_name": "sagemaker" },
            "credential": { "access_key": "1", "secret_key": "2" },
            "actions": [
              {
                "action_type": "predict",
                "method": "POST",
                "headers": { "content-type": "application/json" },
                "url": "https://runtime.sagemaker.us-west-2.amazonaws.com/endpoints/my-generator/invocations",
                "request_body": "{\"prompt\":\"${parameters.prompt}\"}"
              }
            ]
          }
        },
        {
          "id": "register_custom_model",
          "type": "register_remote_model",
          "previous_node_inputs": { "create_custom_connector": "connector_id" },
          "user_inputs": { "description": "", "deploy": true, "name": "custom fine-tuned model" }
        },
        {
          "id": "PPLTool",
          "type": "create_tool",
          "previous_node_inputs": { "register_custom_model": "model_id" },
          "user_inputs": {
            "type": "PPLTool",
            "name": "PPLTool",
            "description": "",
            "parameters": { "response_filter": "$.completion" },
            "include_output_in_agent_response": true
          }
        },
        {
          "id": "ppl_agent",
          "type": "register_agent",
          "previous_node_inputs": { "PPLTool": "tools" },
          "user_inputs": {
            "parameters": {},
            "app_type": "query_assist",
            "name": "PPL agent",
            "description": "this is the PPL agent",
            "type": "flow"
          }
        }
      ]
    }
  }
}
  4. Run the script below:
#!/usr/bin/env bash

endpoint=http://localhost:9200

curl -s -k "${endpoint}/_cluster/settings" -XPUT -H 'Content-Type: application/json' -d '{
  "persistent" : {
    "plugins.ml_commons.only_run_on_ml_node":"false",
    "plugins.ml_commons.memory_feature_enabled":"true",
    "plugins.ml_commons.trusted_connector_endpoints_regex":
    [ "^https://runtime\\.sagemaker\\..*[a-z0-9-]\\.amazonaws\\.com/.*$",
      "^https://api\\.openai\\.com/.*$",
      "^https://api\\.cohere\\.ai/.*$",
      "^https://bedrock-runtime\\.[a-z0-9-]+\\.amazonaws\\.com/.*$"
    ]
  }
}' | jq

output=$(curl -s -X POST -k -H 'Content-Type: application/json' ${endpoint}/_plugins/_flow_framework/workflow --data-binary @flow.json) && echo "$output" && id=$(echo "$output" | jq -r '.workflow_id') && echo "${id}"

curl -s -X POST -k -H 'Content-Type: application/json' ${endpoint}/_plugins/_flow_framework/workflow/"$id"/_provision
curl -s -k -H 'Content-Type: application/json' "${endpoint}/_plugins/_flow_framework/workflow/$id/_status?all=true" | jq
  5. See the error:
  "error": "llm model id is not provided for workflow: s3qG9YwBZL__J6J0DJZK on node: ppl_agent",
  "state": "FAILED",
  "provisioning_progress": "FAILED",
  6. Try adding a model to the agent node:
          "previous_node_inputs": {
            "register_custom_model": "model_id",
            "PPLTool": "tools"
          },
  7. Provision again and see a new error:
  "error": "Cannot invoke \"java.util.Map.get(Object)\" because \"map\" is null",
  "state": "FAILED",
  "provisioning_progress": "FAILED",
  8. Add "memory": { "type": "conversation_index" } to the agent node; provisioning then succeeds (the resulting node is sketched below).
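
For reference, after the two changes above the ppl_agent node ends up roughly like this (my reconstruction — I'm assuming the memory block belongs in user_inputs):

{
  "id": "ppl_agent",
  "type": "register_agent",
  "previous_node_inputs": {
    "register_custom_model": "model_id",
    "PPLTool": "tools"
  },
  "user_inputs": {
    "parameters": {},
    "app_type": "query_assist",
    "name": "PPL agent",
    "description": "this is the PPL agent",
    "type": "flow",
    "memory": { "type": "conversation_index" }
  }
}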

But my agent doesn't use a model, so it should not require a model_id. And I don't want to persist memory for this agent; it should not throw an NPE when memory is missing.

What is the expected behavior?

Memory and model id should be optional for agents. If memory is required, the plugin should return a more descriptive error message than Cannot invoke \"java.util.Map.get(Object)\" because \"map\" is null.
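
In other words, the ppl_agent node exactly as written in flow.json above, with tools only and no model_id or memory, should be enough to provision a flow agent:

{
  "id": "ppl_agent",
  "type": "register_agent",
  "previous_node_inputs": { "PPLTool": "tools" },
  "user_inputs": {
    "parameters": {},
    "app_type": "query_assist",
    "name": "PPL agent",
    "description": "this is the PPL agent",
    "type": "flow"
  }
}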

What is your host/environment?

Amazon Linux 2 x64

Do you have any screenshots?

If applicable, add screenshots to help explain your problem.

Do you have any additional context?

Add any other context about the problem.
