
Pin litellm version 1.50.0 while 1.50.2 is getting fixed #1494

Closed
blakkd opened this issue Oct 24, 2024 · 3 comments
Labels
External This Issue is related to an external dependency and not Open Interpreter's core

Comments

@blakkd

blakkd commented Oct 24, 2024

Describe the bug

Instead of rendering code blocks, it shows the raw tool-call markup:

<tool_call>       
  {"name": "execute", "arguments": {"code": "import os\n# List all .pdf files in the current       
  directory\npdf_files = [f for f in os.listdir('.') if f.endswith('.pdf')]\npdf_files",           
  "language": "python"}} </tool_call> 

for example
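For what it's worth, the raw blob is still machine-parseable. A hypothetical client-side workaround (a sketch, not Open Interpreter's or litellm's actual code) could recover the tool call like this:

```python
import json
import re

def extract_tool_call(text: str):
    """Pull the first <tool_call> JSON blob out of raw model output.

    Workaround sketch for when the tool call arrives as literal
    <tool_call>...</tool_call> text instead of a parsed code block.
    """
    match = re.search(r"<tool_call>\s*(\{.*\})\s*</tool_call>", text, re.DOTALL)
    return json.loads(match.group(1)) if match else None

raw = ('<tool_call> {"name": "execute", "arguments": '
       '{"code": "print(1)", "language": "python"}} </tool_call>')
call = extract_tool_call(raw)
print(call["arguments"]["language"])  # python
```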

Reproduce

(i1) user@user-PC:~/workspaces/interpreter/i1$ interpreter --disable_telemetry --context_window 11000 --api_base "http://localhost:11434" --no-llm_supports_vision --model ollama_chat/qwen2.5:32b-instruct-q4_K_M_11k_fullgpu --max_output 60000 -y

▌ A new version of Open Interpreter is available.                                                

▌ Please run: pip install --upgrade open-interpreter                                             

───────────────────────────────────────────────────────────────────────────────────────────────────
> cat a part of the pdf, I don't remember what it is
                                                                                                   
  To accomplish this task, we will use a Python library called PyPDF2 to read and extract text     
  from a PDF file. Since you're not sure which PDF you want to cat (view) a part of, let's start   
  by listing all the PDF files in your current directory.                                          
                                                                                                   
  First, we'll need to install the PyPDF2 package if it's not already installed:                   
                                                                                                   
                                                                                                   
   pip install PyPDF2                                                                              
                                                                                                   
                                                                                                   
  Then, we can list out the PDF files for you to choose from. Let's do that now. <tool_call>       
  {"name": "execute", "arguments": {"code": "import os\n# List all .pdf files in the current       
  directory\npdf_files = [f for f in os.listdir('.') if f.endswith('.pdf')]\npdf_files",           
  "language": "python"}} </tool_call>                                                              

Expected behavior

(i1) user@user-PC:~/workspaces/interpreter/i1$ uv pip install -U litellm==1.50
Using Python 3.11.10 environment at /home/user/miniconda3/envs/i1
Resolved 45 packages in 195ms
Prepared 1 package in 0.34ms
Uninstalled 1 package in 16ms
Installed 1 package in 14ms
 - litellm==1.50.2
 + litellm==1.50.0

(i1) user@user-PC:~/workspaces/interpreter/i1$ interpreter --disable_telemetry --context_window 11000 --api_base "http://localhost:11434" --no-llm_supports_vision --model ollama_chat/qwen2.5:32b-instruct-q4_K_M_11k_fullgpu --max_output 60000 -y
> cat a part of the pdf, I don't remember what it is
                                                                                                   
  To accomplish this task, we need to determine which PDF you're referring to and then extract a   
  portion of its content.                                                                          
                                                                                                   
  Let's start by searching for PDF files in your current directory:                                
                                                                                                   

                                                                                                   
  import os                                                                                        
                                                                                                   
  # List all .pdf files in the current directory.                                                  
  pdf_files = [f for f in os.listdir() if f.endswith('.pdf')]                                      
                                                                                                   
  for i, pdf_file in enumerate(pdf_files):                                                         
      print(f"{i + 1}: {pdf_file}")                                                                
                                                                                                   
  if not pdf_files:                                                                                
      print("No PDF files found in the current directory.")                                        
                                                                                                   
                                                                                                   
  1: TA142.pdf 

Screenshots

No response

Open Interpreter version

0.4.0

Python version

3.11.10

Operating System name and version

Ubuntu

Additional context

As seen, downgrading litellm from 1.50.2 to 1.50.0 fixes the issue.
I only see this behavior with an ollama instance, not with groq for example.

I guess it's on litellm's side, but it might be worth pinning the litellm version for now, because in its current state it makes Open Interpreter unusable for local ollama users.
That's just a suggestion; I don't know how you usually handle situations like this.
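One way to express such a pin (a hypothetical requirements-style fragment; where the constraint actually lives in Open Interpreter's packaging may differ) is to exclude only the broken release rather than freezing the version:

```
# requirements.txt / pyproject constraint (hypothetical)
litellm>=1.50.0,!=1.50.2
```

This keeps future litellm fixes installable while skipping 1.50.2.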

Cheers mates

EDIT: Sorry, the copy-paste was from an earlier version of OI; I might have mixed up my windows or something. But the behavior of course remains on 0.4.0.

@MikeBirdTech
Collaborator

@blakkd Is this still an issue on the latest version (0.4.3) when you run interpreter --local to select the ollama model?

@blakkd
Author

blakkd commented Oct 28, 2024

I realize I never tried the interactive option before!
That said, I don't know what's going on, as I currently can't make it work with it.
It correctly calls the API to retrieve the model tags, but then exits with the following error:

Invalid URL '0.0.0.0/api/tags': No scheme supplied. Perhaps you meant https://0.0.0.0/api/tags?

Here are the logs:

litellm

Installed 1 package in 176ms
 - litellm==1.50.0
 + litellm==1.51.0

without --local

~ ❯❯❯ interpreter --disable_telemetry --context_window 11000 --api_base "http://localhost:11434" --no-llm_supports_vision --model ollama_chat/qwen2.5:32b-instruct-q4_K_M_11k_fullgpu --max_output 20000 -y
> What time is it in Seattle?
                                                                                                                 
  To find out the current time in Seattle, we can use Python's datetime module along with the pytz library to    
  get the timezone information for Seattle.                                                                      
                                                                                                                 
  Let's start by getting the current time in Seattle. <tool_call> {"name": "execute", "arguments": {"code":      
  "import datetime\nimport pytz\n\nseattle_tz = pytz.timezone('America/Los_Angeles')\nseattle_time =             
  datetime.datetime.now(seattle_tz)\nseattle_time.strftime('%Y-%m-%d %H:%M:%S')", "language": "python"}}         
  </tool_call>                                                                                                   
                                                                                                                 
> 

Exiting...                                                                                                       


with --local

~ ❯❯❯ interpreter --local --disable_telemetry --context_window 11000 --no-llm_supports_vision --max_output 20000 -y

Open Interpreter supports multiple local model providers.                                                        

[?] Select a provider: 
 > Ollama
   Llamafile
   LM Studio
   Jan

[?] Select a model: 
   deepseek-coder-v2:16b-lite-instruct-q6_K_29k_fullgpu_miro2
   deepseek-coder-v2:16b-lite-instruct-q6_K
   qwen2.5:32b-instruct-q4_K_M_24k_miro2
   qwen2.5:32b-instruct-q4_K_M_11k_fullgpu_miro2
 > qwen2.5:32b-instruct-q4_K_M_11k_fullgpu
   mistral-nemo:12b-instruct-2407-q8_0_32k
   minicpm-v:8b-2.6-fp16
   mistral-small:22b-instruct-2409-q6_K_32k
   bge-m3_gpu
   bge-m3_cpu
   nomic-embed-text_cpu

Invalid URL '0.0.0.0/api/tags': No scheme supplied. Perhaps you meant https://0.0.0.0/api/tags?

▌ Ollama not found                                                                                             

Please download Ollama from ollama.com to use qwen2.5:32b-instruct-q4_K_M_11k_fullgpu.                           

ollama serve

[GIN] 2024/10/28 - 22:05:00 | 200 |      20.799µs |       127.0.0.1 | HEAD     "/"
[GIN] 2024/10/28 - 22:05:00 | 200 |    3.081969ms |       127.0.0.1 | GET      "/api/tags"
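For context, the "No scheme supplied" error above is what Python HTTP clients raise when the api_base lacks an `http://` prefix, which is consistent with `--local` passing `0.0.0.0` through bare. A minimal guard sketch (hypothetical helper, not Open Interpreter's actual code):

```python
def ensure_scheme(base: str, default: str = "http") -> str:
    """Prepend a URL scheme when none is present.

    requests rejects URLs like "0.0.0.0/api/tags" with
    "Invalid URL ...: No scheme supplied" before any request is made,
    so a client-side guard like this avoids the crash.
    """
    if base.startswith(("http://", "https://")):
        return base
    return f"{default}://{base}"

print(ensure_scheme("0.0.0.0:11434"))  # http://0.0.0.0:11434
```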

@MikeBirdTech MikeBirdTech added the External This Issue is related to an external dependency and not Open Interpreter's core label Nov 4, 2024
@blakkd
Author

blakkd commented Nov 7, 2024

Fixed with the litellm bump to 1.52.0 :)

@blakkd blakkd closed this as completed Nov 7, 2024