Replies: 7 comments 1 reply
-
EDIT: Guide posted (08a2dfb)
This assumes that you have already installed Ollama and LiteLLM in a Conda environment and that you have installed OpenDevin properly.
* Once you install OpenDevin and run the make setup-config command, you can edit the generated config.toml file to look something like this:
LLM_API_KEY="0"
LLM_BASE_URL="http://0.0.0.0:8000"
WORKSPACE_DIR="./workspace"
* You will need to manually run the LLM of your choice using LiteLLM in a separate terminal, like so:
conda activate <your_env_name>
litellm --model ollama/llama2 --port 8000
The above example will work on your machine if you have run this before:
ollama run llama2
Note: I have also found that I need to run the backend and frontend separately to get this to work properly. I'm not sure why, but it does work the way I have outlined above.
-
Hello,
I have Ollama installed on my machine and I regularly use it with the Open WebUI interface. I also have LiteLLM installed. I have done the configuration as you suggested, but I get a stream of messages, and at the end of the log there is an error indicating that a file is missing.
Here is an excerpt from the log when I start the backend with your configuration:
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "C:\Users\Christian Bwanakawa\.virtualenvs\OpenDevin-ah_1KZiD\Scripts\uvicorn.exe\__main__.py", line 7, in <module>
File "C:\Users\Christian Bwanakawa\.virtualenvs\OpenDevin-ah_1KZiD\Lib\site-packages\click\core.py", line 1157, in __call__
return self.main(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Christian Bwanakawa\.virtualenvs\OpenDevin-ah_1KZiD\Lib\site-packages\click\core.py", line 1078, in main
rv = self.invoke(ctx)
^^^^^^^^^^^^^^^^
File "C:\Users\Christian Bwanakawa\.virtualenvs\OpenDevin-ah_1KZiD\Lib\site-packages\click\core.py", line 1434, in invoke
return ctx.invoke(self.callback, **ctx.params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Christian Bwanakawa\.virtualenvs\OpenDevin-ah_1KZiD\Lib\site-packages\click\core.py", line 783, in invoke
return __callback(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Christian Bwanakawa\.virtualenvs\OpenDevin-ah_1KZiD\Lib\site-packages\uvicorn\main.py", line 409, in main
run(
File "C:\Users\Christian Bwanakawa\.virtualenvs\OpenDevin-ah_1KZiD\Lib\site-packages\uvicorn\main.py", line 575, in run
server.run()
File "C:\Users\Christian Bwanakawa\.virtualenvs\OpenDevin-ah_1KZiD\Lib\site-packages\uvicorn\server.py", line 65, in run
return asyncio.run(self.serve(sockets=sockets))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Christian Bwanakawa\AppData\Local\Programs\Python\Python311\Lib\asyncio\runners.py", line 190, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "C:\Users\Christian Bwanakawa\AppData\Local\Programs\Python\Python311\Lib\asyncio\runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Christian Bwanakawa\AppData\Local\Programs\Python\Python311\Lib\asyncio\base_events.py", line 653, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "C:\Users\Christian Bwanakawa\.virtualenvs\OpenDevin-ah_1KZiD\Lib\site-packages\uvicorn\server.py", line 69, in serve
await self._serve(sockets)
File "C:\Users\Christian Bwanakawa\.virtualenvs\OpenDevin-ah_1KZiD\Lib\site-packages\uvicorn\server.py", line 76, in _serve
config.load()
File "C:\Users\Christian Bwanakawa\.virtualenvs\OpenDevin-ah_1KZiD\Lib\site-packages\uvicorn\config.py", line 433, in load
self.loaded_app = import_from_string(self.app)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Christian Bwanakawa\.virtualenvs\OpenDevin-ah_1KZiD\Lib\site-packages\uvicorn\importer.py", line 19, in import_from_string
module = importlib.import_module(module_str)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Christian Bwanakawa\AppData\Local\Programs\Python\Python311\Lib\importlib\__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib._bootstrap>", line 1204, in _gcd_import
File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 940, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "C:\Dos\IA\Chat\OpenDevin\opendevin\server\listen.py", line 4, in <module>
import agenthub # noqa F401 (we import this to get the agents registered)
^^^^^^^^^^^^^^^
File "C:\Dos\IA\Chat\OpenDevin\agenthub\__init__.py", line 5, in <module>
from . import langchains_agent # noqa: E402
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Dos\IA\Chat\OpenDevin\agenthub\langchains_agent\__init__.py", line 2, in <module>
from .langchains_agent import LangchainsAgent
File "C:\Dos\IA\Chat\OpenDevin\agenthub\langchains_agent\langchains_agent.py", line 8, in <module>
from agenthub.langchains_agent.utils.memory import LongTermMemory
File "C:\Dos\IA\Chat\OpenDevin\agenthub\langchains_agent\utils\memory.py", line 37, in <module>
embed_model = HuggingFaceEmbedding(
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Christian Bwanakawa\.virtualenvs\OpenDevin-ah_1KZiD\Lib\site-packages\llama_index\embeddings\huggingface\base.py", line 86, in __init__
self._model = SentenceTransformer(
^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Christian Bwanakawa\.virtualenvs\OpenDevin-ah_1KZiD\Lib\site-packages\sentence_transformers\SentenceTransformer.py", line 191, in __init__
modules = self._load_sbert_model(
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Christian Bwanakawa\.virtualenvs\OpenDevin-ah_1KZiD\Lib\site-packages\sentence_transformers\SentenceTransformer.py", line 1246, in _load_sbert_model
module = module_class.load(module_path)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Christian Bwanakawa\.virtualenvs\OpenDevin-ah_1KZiD\Lib\site-packages\sentence_transformers\models\Pooling.py", line 227, in load
with open(os.path.join(input_path, "config.json")) as fIn:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: 'C:\\Users\\Christian Bwanakawa\\AppData\\Local\\llama_index\\models--BAAI--bge-small-en-v1.5\\snapshots\\5c38ec7c405ec4b44b94cc5a9bb96e735b38267a\\1_Pooling\\config.json'
Voilà
-
The issue you are having seems to be related to the model config file location.
This part of the output tells us that there is a problem loading the embedding model: sentence-transformers cannot find 1_Pooling/config.json in the cached BAAI/bge-small-en-v1.5 snapshot.
Make sure that you can see the model you are trying to use. This could be an issue with OpenDevin or with the way you have configured your files. Are you trying to run this in WSL? Does LiteLLM work on its own? Have you tried running the model directly with Ollama to confirm it works?
The frontend and backend need to run in separate terminals.
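Concretely, the FileNotFoundError above means the cached snapshot of BAAI/bge-small-en-v1.5 is incomplete: the 1_Pooling/config.json that sentence-transformers expects was never downloaded, which usually points to an interrupted download. A small diagnostic sketch (the cache layout assumed here is the one visible in the traceback):

```python
import os

def missing_pooling_configs(cache_root: str) -> list[str]:
    """Walk a model cache and report every 1_Pooling directory
    that lacks the config.json sentence-transformers expects."""
    missing = []
    for dirpath, dirnames, filenames in os.walk(cache_root):
        if os.path.basename(dirpath) == "1_Pooling" and "config.json" not in filenames:
            missing.append(dirpath)
    return missing
```

If it reports the 1_Pooling directory from the traceback, deleting the whole models--BAAI--bge-small-en-v1.5 folder and letting the library re-download it on the next start is usually the simplest fix.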
-
I am having the same problem:
==============
STEP 0
PLAN:
🔵 0 please make a simple python script to test that you are working correctly. The script should just print "hello world" in the simplest way possible.
ACTION:
FileReadAction(path='./', action='read')
ACTION RUN ERROR:
[Errno 21] Is a directory: 'new_workspace/./'
Traceback (most recent call last):
File "/home/user/opendevin/OpenDevin/opendevin/controller/agent_controller.py", line 123, in step
observation = action.run(self)
^^^^^^^^^^^^^^^^
File "/home/user/opendevin/OpenDevin/opendevin/action/fileop.py", line 24, in run
with open(path, 'r', encoding='utf-8') as file:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
IsADirectoryError: [Errno 21] Is a directory: 'new_workspace/./'
OBSERVATION:
[Errno 21] Is a directory: 'new_workspace/./'
==============
Here is my config.toml:
LLM_API_KEY="none"
WORKSPACE_DIR="./new_workspace/"
LLM_BASE_URL="http://192.168.1.40:11435"
LLM_MODEL="ollama/mistral"
I am running Ollama on Windows, and Devin is running (front and back) on a separate Ubuntu VM on the LAN.
I have some questions:
1. What is this "workspace" directory and what is it used for? It gets created if it is not present, in the opendevin directory, which is where I am running the backend from.
2. I copied the Ollama config.json file into this directory and, on the next test, I don't see the config.json-not-found error any more. Whether it is a valid config.json file, I don't know at the moment.
3. I'm still getting this error in the OpenDevin CLI and web UI: [Errno 21] Is a directory: 'new_workspace/./'
This error is really confusing and I am not sure how to proceed. It is indeed a directory: it's the workspace directory, automatically created by OpenDevin, so I am unsure why OpenDevin errors out because of it.
I came to this discussions section looking for answers, but I'll go and try to find other docs after posting this.
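For what it's worth, the [Errno 21] error happens because the agent emitted FileReadAction(path='./') and the file-read code passed a directory straight to open(). A hedged sketch of the kind of guard that avoids it; the read_file helper here is hypothetical, not OpenDevin's actual implementation:

```python
import os

def read_file(base_dir: str, path: str) -> str:
    """Read a file inside the workspace, rejecting directories
    up front instead of letting open() raise IsADirectoryError."""
    full_path = os.path.join(base_dir, path)
    if os.path.isdir(full_path):
        raise ValueError(f"{path!r} is a directory, not a file; refusing to read it")
    with open(full_path, "r", encoding="utf-8") as f:
        return f.read()
```

With a check like this, the agent would get a readable error message ("'./' is a directory...") instead of a raw IsADirectoryError traceback.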
-
For me too.
Things were going well:
plan :: Project Name: Hello World in Python
Your Reply to the Human Prompter: As an AI Software Engineer, I understand that you want me to create a plan for writing "Hello World" program in Python. I will guide you through the process of implementing this task step-by-step, and provide necessary details to ensure a successful outcome.
Current Focus: Implementing the "Hello World" program in Python.
Plan:
1. Open a new Python script using a text editor or IDE.
2. Write the code for the "Hello World" program as follows:
```python
print("Hello, world!")
```
3. Save the script with a meaningful name, such as "hello_world.py".
4. Run the script in the terminal using the Python interpreter.
5. Observe the output of the script, which should be "Hello, world!"
6. Test the program to ensure it is working correctly and produce no errors.
7. If necessary, troubleshoot any issues or bugs that may arise during implementation.
8. Report any findings or conclusions based on the testing process.
Summary: This plan outlines the steps required to write a simple "Hello World" program in Python using a text editor or IDE. The first step is to open a new script, followed by writing the code for the program and saving it with a meaningful name. The script is then run in the terminal using the Python interpreter, which outputs the expected result of "Hello, world!". The final step is to test the program to ensure it is working correctly and produce no errors. Any potential issues or bugs that may arise during implementation can be troubleshot, and conclusions can be drawn based on the testing process.
And then:
Link :: https://www.programiz.com/python-programming/examples/hello-world
TimeoutError: Timeout 20000ms exceeded. when trying to navigate to https://www.programiz.com/python-programming/examples/hello-world
Exception in thread Thread-7 (<lambda>):
Traceback (most recent call last):
File "C:\Python312\Lib\threading.py", line 1073, in _bootstrap_inner
self.run()
File "C:\Python312\Lib\threading.py", line 1010, in run
self._target(*self._args, **self._kwargs)
File "C:\Dos\IA\Chat\devika\terminal\devika\devika.py", line 94, in <lambda>
thread = Thread(target=lambda: agent.execute(message, project_name, search_engine))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Dos\IA\Chat\devika\terminal\devika\src\agents\agent.py", line 336, in execute
search_results = self.search_queries(queries, project_name, engine)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Dos\IA\Chat\devika\terminal\devika\src\agents\agent.py", line 96, in search_queries
browser.screenshot(project_name)
File "C:\Dos\IA\Chat\devika\terminal\devika\src\browser\browser.py", line 40, in screenshot
self.page.screenshot(path=path_to_save)
File "C:\Python312\Lib\site-packages\playwright\sync_api\_generated.py", line 9327, in screenshot
self._sync(
File "C:\Python312\Lib\site-packages\playwright\_impl\_sync_base.py", line 113, in _sync
return task.result()
^^^^^^^^^^^^^
File "C:\Python312\Lib\site-packages\playwright\_impl\_page.py", line 715, in screenshot
encoded_binary = await self._channel.send("screenshot", params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python312\Lib\site-packages\playwright\_impl\_connection.py", line 59, in send
return await self._connection.wrap_api_call(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python312\Lib\site-packages\playwright\_impl\_connection.py", line 509, in wrap_api_call
return await cb()
^^^^^^^^^^
File "C:\Python312\Lib\site-packages\playwright\_impl\_connection.py", line 97, in inner_send
result = next(iter(done)).result()
^^^^^^^^^^^^^^^^^^^^^^^^^
playwright._impl._errors.TimeoutError: Timeout 30000ms exceeded.
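The Playwright TimeoutError at the end just means the page never finished loading (or the screenshot call did not complete) within the configured timeout. As a generic illustration, not devika's or Playwright's actual API, a small retry wrapper for flaky browser calls might look like this:

```python
import time

def retry(func, attempts=3, delay=0.0, exceptions=(TimeoutError,)):
    """Call func(); on one of the given exceptions, wait `delay`
    seconds and retry, re-raising the last error once attempts
    are exhausted."""
    last_error = None
    for _ in range(attempts):
        try:
            return func()
        except exceptions as exc:
            last_error = exc
            time.sleep(delay)
    raise last_error
```

Wrapping the screenshot/navigation call in something like `retry(lambda: page.screenshot(path=path), delay=5.0)` would give slow pages a few chances before the whole agent thread dies.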
-
Would like to see an easy setup for using Ollama models in OpenDevin.
-
FWIW, I got OpenDevin connected to Ollama very easily. Here's the docker command I used: #2088 (comment). One issue I see with the setups shown above is that the workspace path isn't an absolute path.
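On the absolute-path point: a relative WORKSPACE_DIR such as ./workspace is resolved against whatever directory the backend happens to be started from, which is fragile, and especially so when the path has to be mounted into Docker. A minimal sketch of normalizing it (the absolute_workspace helper is illustrative, not part of OpenDevin):

```python
import os

def absolute_workspace(path: str) -> str:
    """Expand ~ and resolve a possibly-relative workspace path
    against the current working directory."""
    return os.path.abspath(os.path.expanduser(path))
```

Putting the result of absolute_workspace("./workspace") into the config (or a docker -v mount) removes the dependence on where the process was launched.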
-
I am a newbie. I want to use OpenDevin with my local Ollama LLM server. What should I do?