
[Bug]: tools.0.function.description is too long #905

Open
fengyunzaidushi opened this issue Jan 27, 2024 · 0 comments
Labels
bug, triage

Comments

@fengyunzaidushi

Bug Description

{
	"name": "BadRequestError",
	"message": "Error code: 400 - {'error': {'message': '\\'\\\
        Use this tool to load data from the following function. It must then be read from\\\
        the corresponding read_search_and_retrieve_documents function.\\\
\\\
        search_and_retrieve_documents(query: str, num_results: Union[int, NoneType] = 10, include_domains: Union[List[str], NoneType] = None, exclude_domains: Union[List[str], NoneType] = None, start_published_date: Union[str, NoneType] = None, end_published_date: Union[str, NoneType] = None) -> List[llama_index.schema.Document]\\\
\\\
        Combines the functionality of `search` and `retrieve_documents`\\\
\\\
        Args:\\\
            query (str): the natural language query\\\
            num_results (Optional[int]): Number of results. Defaults to 10.\\\
            include_domains (Optional[List(str)]): A list of top level domains to search, like [\"wsj.com\"]\\\
            exclude_domains (Optional[List(str)]): Top level domains to exclude.\\\
            start_published_date (Optional[str]): A date string like \"2020-06-15\".\\\
            end_published_date (Optional[str]): End date string\\\
        \\\
    \\' is too long - \\'tools.0.function.description\\'', 'type': 'invalid_request_error', 'param': None, 'code': None}}",
	"stack": "---------------------------------------------------------------------------
BadRequestError                           Traceback (most recent call last)
/mnt/sda/github/01yue/llama-hub/llama_hub/tools/notebooks/exa.ipynb Cell 26 line 2
      1 print(
----> 2     agent.chat(
      3         "Can you summarize everything published in the last month regarding news on"
      4         " research augment generation"
      5     )
      6 )

File /mnt/sda/github/12yue/llama_index/llama_index/callbacks/utils.py:41, in trace_method.<locals>.decorator.<locals>.wrapper(self, *args, **kwargs)
     39 callback_manager = cast(CallbackManager, callback_manager)
     40 with callback_manager.as_trace(trace_id):
---> 41     return func(self, *args, **kwargs)

File /mnt/sda/github/12yue/llama_index/llama_index/agent/runner/base.py:497, in AgentRunner.chat(self, message, chat_history, tool_choice)
    492     tool_choice = self.default_tool_choice
    493 with self.callback_manager.event(
    494     CBEventType.AGENT_STEP,
    495     payload={EventPayload.MESSAGES: [message]},
    496 ) as e:
--> 497     chat_response = self._chat(
    498         message, chat_history, tool_choice, mode=ChatResponseMode.WAIT
    499     )
    500     assert isinstance(chat_response, AgentChatResponse)
    501     e.on_end(payload={EventPayload.RESPONSE: chat_response})

File /mnt/sda/github/12yue/llama_index/llama_index/agent/runner/base.py:442, in AgentRunner._chat(self, message, chat_history, tool_choice, mode)
    439 result_output = None
    440 while True:
    441     # pass step queue in as argument, assume step executor is stateless
--> 442     cur_step_output = self._run_step(
    443         task.task_id, mode=mode, tool_choice=tool_choice
    444     )
    446     if cur_step_output.is_last:
    447         result_output = cur_step_output

File /mnt/sda/github/12yue/llama_index/llama_index/agent/runner/base.py:304, in AgentRunner._run_step(self, task_id, step, mode, **kwargs)
    300 # TODO: figure out if you can dynamically swap in different step executors
    301 # not clear when you would do that by theoretically possible
    303 if mode == ChatResponseMode.WAIT:
--> 304     cur_step_output = self.agent_worker.run_step(step, task, **kwargs)
    305 elif mode == ChatResponseMode.STREAM:
    306     cur_step_output = self.agent_worker.stream_step(step, task, **kwargs)

File /mnt/sda/github/12yue/llama_index/llama_index/callbacks/utils.py:41, in trace_method.<locals>.decorator.<locals>.wrapper(self, *args, **kwargs)
     39 callback_manager = cast(CallbackManager, callback_manager)
     40 with callback_manager.as_trace(trace_id):
---> 41     return func(self, *args, **kwargs)

File /mnt/sda/github/12yue/llama_index/llama_index/agent/openai/step.py:573, in OpenAIAgentWorker.run_step(self, step, task, **kwargs)
    571 \"\"\"Run step.\"\"\"
    572 tool_choice = kwargs.get(\"tool_choice\", \"auto\")
--> 573 return self._run_step(
    574     step, task, mode=ChatResponseMode.WAIT, tool_choice=tool_choice
    575 )

File /mnt/sda/github/12yue/llama_index/llama_index/agent/openai/step.py:448, in OpenAIAgentWorker._run_step(self, step, task, mode, tool_choice)
    444 openai_tools = [tool.metadata.to_openai_tool() for tool in tools]
    446 llm_chat_kwargs = self._get_llm_chat_kwargs(task, openai_tools, tool_choice)
--> 448 agent_chat_response = self._get_agent_response(
    449     task, mode=mode, **llm_chat_kwargs
    450 )
    452 # TODO: implement _should_continue
    453 latest_tool_calls = self.get_latest_tool_calls(task) or []

File /mnt/sda/github/12yue/llama_index/llama_index/agent/openai/step.py:322, in OpenAIAgentWorker._get_agent_response(self, task, mode, **llm_chat_kwargs)
    318 def _get_agent_response(
    319     self, task: Task, mode: ChatResponseMode, **llm_chat_kwargs: Any
    320 ) -> AGENT_CHAT_RESPONSE_TYPE:
    321     if mode == ChatResponseMode.WAIT:
--> 322         chat_response: ChatResponse = self._llm.chat(**llm_chat_kwargs)
    323         return self._process_message(task, chat_response)
    324     elif mode == ChatResponseMode.STREAM:

File /mnt/sda/github/12yue/llama_index/llama_index/llms/base.py:100, in llm_chat_callback.<locals>.wrap.<locals>.wrapped_llm_chat(_self, messages, **kwargs)
     91 with wrapper_logic(_self) as callback_manager:
     92     event_id = callback_manager.on_event_start(
     93         CBEventType.LLM,
     94         payload={
   (...)
     98         },
     99     )
--> 100     f_return_val = f(_self, messages, **kwargs)
    102     if isinstance(f_return_val, Generator):
    103         # intercept the generator and add a callback to the end
    104         def wrapped_gen() -> ChatResponseGen:

File /mnt/sda/github/12yue/llama_index/llama_index/llms/openai.py:237, in OpenAI.chat(self, messages, **kwargs)
    235 else:
    236     chat_fn = completion_to_chat_decorator(self._complete)
--> 237 return chat_fn(messages, **kwargs)

File /mnt/sda/github/12yue/llama_index/llama_index/llms/openai.py:296, in OpenAI._chat(self, messages, **kwargs)
    294 client = self._get_client()
    295 message_dicts = to_openai_message_dicts(messages)
--> 296 response = client.chat.completions.create(
    297     messages=message_dicts,
    298     stream=False,
    299     **self._get_model_kwargs(**kwargs),
    300 )
    301 openai_message = response.choices[0].message
    302 message = from_openai_message(openai_message)

File /opt/miniconda3/envs/llamapy38/lib/python3.8/site-packages/openai/_utils/_utils.py:271, in required_args.<locals>.inner.<locals>.wrapper(*args, **kwargs)
    269             msg = f"Missing required argument: {quote(missing[0])}"
    270     raise TypeError(msg)
--> 271 return func(*args, **kwargs)

File /opt/miniconda3/envs/llamapy38/lib/python3.8/site-packages/openai/resources/chat/completions.py:648, in Completions.create(self, messages, model, frequency_penalty, function_call, functions, logit_bias, logprobs, max_tokens, n, presence_penalty, response_format, seed, stop, stream, temperature, tool_choice, tools, top_logprobs, top_p, user, extra_headers, extra_query, extra_body, timeout)
    599 @required_args(["messages", "model"], ["messages", "model", "stream"])
    600 def create(
    601     self,
   (...)
    646     timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
    647 ) -> ChatCompletion | Stream[ChatCompletionChunk]:
--> 648     return self._post(
    649         "/chat/completions",
    650         body=maybe_transform(
    651             {
    652                 "messages": messages,
    653                 "model": model,
    654                 "frequency_penalty": frequency_penalty,
    655                 "function_call": function_call,
    656                 "functions": functions,
    657                 "logit_bias": logit_bias,
    658                 "logprobs": logprobs,
    659                 "max_tokens": max_tokens,
    660                 "n": n,
    661                 "presence_penalty": presence_penalty,
    662                 "response_format": response_format,
    663                 "seed": seed,
    664                 "stop": stop,
    665                 "stream": stream,
    666                 "temperature": temperature,
    667                 "tool_choice": tool_choice,
    668                 "tools": tools,
    669                 "top_logprobs": top_logprobs,
    670                 "top_p": top_p,
    671                 "user": user,
    672             },
    673             completion_create_params.CompletionCreateParams,
    674         ),
    675         options=make_request_options(
    676             extra_headers=extra_headers, extra_query=extra_query, extra_body=extra_body, timeout=timeout
    677         ),
    678         cast_to=ChatCompletion,
    679         stream=stream or False,
    680         stream_cls=Stream[ChatCompletionChunk],
    681     )

File /opt/miniconda3/envs/llamapy38/lib/python3.8/site-packages/openai/_base_client.py:1179, in SyncAPIClient.post(self, path, cast_to, body, options, files, stream, stream_cls)
   1165 def post(
   1166     self,
   1167     path: str,
   (...)
   1174     stream_cls: type[_StreamT] | None = None,
   1175 ) -> ResponseT | _StreamT:
   1176     opts = FinalRequestOptions.construct(
   1177         method="post", url=path, json_data=body, files=to_httpx_files(files), **options
   1178     )
-> 1179     return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))

File /opt/miniconda3/envs/llamapy38/lib/python3.8/site-packages/openai/_base_client.py:868, in SyncAPIClient.request(self, cast_to, options, remaining_retries, stream, stream_cls)
    859 def request(
    860     self,
    861     cast_to: Type[ResponseT],
   (...)
    866     stream_cls: type[_StreamT] | None = None,
    867 ) -> ResponseT | _StreamT:
--> 868     return self._request(
    869         cast_to=cast_to,
    870         options=options,
    871         stream=stream,
    872         stream_cls=stream_cls,
    873         remaining_retries=remaining_retries,
    874     )

File /opt/miniconda3/envs/llamapy38/lib/python3.8/site-packages/openai/_base_client.py:959, in SyncAPIClient._request(self, cast_to, options, remaining_retries, stream, stream_cls)
    956         err.response.read()
    958     log.debug("Re-raising status error")
--> 959     raise self._make_status_error_from_response(err.response) from None
    961 return self._process_response(
    962     cast_to=cast_to,
    963     options=options,
   (...)
    966     stream_cls=stream_cls,
    967 )

BadRequestError: Error code: 400 - {'error': {'message': '
        Use this tool to load data from the following function. It must then be read from
        the corresponding read_search_and_retrieve_documents function.

        search_and_retrieve_documents(query: str, num_results: Union[int, NoneType] = 10, include_domains: Union[List[str], NoneType] = None, exclude_domains: Union[List[str], NoneType] = None, start_published_date: Union[str, NoneType] = None, end_published_date: Union[str, NoneType] = None) -> List[llama_index.schema.Document]

        Combines the functionality of `search` and `retrieve_documents`

        Args:
            query (str): the natural language query
            num_results (Optional[int]): Number of results. Defaults to 10.
            include_domains (Optional[List(str)]): A list of top level domains to search, like ["wsj.com"]
            exclude_domains (Optional[List(str)]): Top level domains to exclude.
            start_published_date (Optional[str]): A date string like "2020-06-15".
            end_published_date (Optional[str]): End date string
        ' is too long - 'tools.0.function.description'', 'type': 'invalid_request_error', 'param': None, 'code': None}}"
}
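
For context, the text that exceeds the limit is the description llama_index auto-generates for the wrapped search_and_retrieve_documents tool (the signature plus the full docstring quoted in the error), which is sent as tools.0.function.description. A small sketch to inspect what gets sent, assuming `tools` is the tool list built in exa.ipynb and passed to the agent; metadata.to_openai_tool() is the same call that appears in the traceback:

# Hypothetical sketch: print the name and description length of each tool spec
# that will be submitted to the OpenAI chat completions API.
for tool in tools:
    spec = tool.metadata.to_openai_tool()
    fn = spec["function"]
    print(fn["name"], len(fn["description"] or ""))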

Version

Version: 0.9.39

Steps to Reproduce

https://github.com/run-llama/llama-hub/blob/main/llama_hub/tools/notebooks/exa.ipynb
When I run the last line of code:

print(
    agent.chat(
        "Can you summarize everything published in the last month regarding news on"
        " superconductors"
    )
)
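
A possible workaround, as an unverified sketch: overwrite the long auto-generated description with a short manual one before building the agent. This assumes `tools` is the tool list created earlier in the notebook, that ToolMetadata.description can be reassigned, and that OpenAIAgent.from_tools is used as in exa.ipynb.

from llama_index.agent import OpenAIAgent

# Unverified workaround sketch: replace the auto-generated description
# (signature + full docstring) with a short summary so that
# tools.0.function.description stays within the API's length limit.
for tool in tools:
    if tool.metadata.name == "search_and_retrieve_documents":
        tool.metadata.description = (
            "Search the web and load matching documents. Results must then be read "
            "with the read_search_and_retrieve_documents function."
        )

agent = OpenAIAgent.from_tools(tools, verbose=True)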

Relevant Logs/Tracebacks

No response

@fengyunzaidushi added the bug and triage labels on Jan 27, 2024