Help: openapi_base and proxy problems #1

Open
ahuohuo78 opened this issue Jan 7, 2024 · 0 comments
@ahuohuo78
1. The openapi_key can be found in the user interface. For openapi_base, is this the link I should fill in: https://api.openai.com/v1/chat/completions ?
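For context on this question: in the legacy openai-python v0.x SDK (which the traceback below appears to use), the base setting is expected to be the API root, https://api.openai.com/v1, not a full endpoint URL, because the SDK appends each endpoint's path to it. A minimal sketch of that URL construction (`build_url` is a hypothetical helper for illustration, not part of the SDK):

```python
# Sketch of how the legacy openai-python v0.x SDK builds request URLs:
# it concatenates the configured api_base with the per-endpoint path.
# build_url is a hypothetical helper, not part of the SDK.

def build_url(api_base: str, endpoint_path: str) -> str:
    """Join the configured API base with an endpoint path."""
    return api_base.rstrip("/") + endpoint_path

# With the API root as the base, the embeddings endpoint comes out right:
print(build_url("https://api.openai.com/v1", "/embeddings"))
# -> https://api.openai.com/v1/embeddings

# With the full chat-completions URL as the base, the path is appended
# on top of it, yielding the malformed URL seen in the traceback below:
print(build_url("https://api.openai.com/v1/chat/completions", "/embeddings"))
# -> https://api.openai.com/v1/chat/completions/embeddings
```

Note that the error log below does show requests going to `/v1/chat/completions/embeddings`, which is consistent with the full chat URL having been used as the base.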

2. To use gpt-3.5 I have to run a proxy; mine forwards to port 10080. With the proxy on, when I open the test web page, crawling a URL fails with no response, reporting a timeout and several related errors. With the proxy off, crawling a page does return output, but the "Add text" button does nothing and nothing is synced to the data-management page; the error again says the page cannot be reached, and the baseurl is also reported as unreachable. Yet with the proxy on, my computer can use GPT normally and can open the URL I pasted into the test web text box. Finally, with or without the proxy, the chat feature never responds. (Can the chat only answer from the content in data management? Is there no reply because data management has no data?)
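One way to narrow down the "Cannot connect to proxy" errors described above is to check the raw TCP connection to the proxy port first, before involving TLS or the SDK. A minimal sketch (the 127.0.0.1 host and port 10080 are assumptions taken from the description):

```python
# Hedged sketch: quick connectivity check for the local proxy,
# assumed to listen on 127.0.0.1:10080 per the description above.
import socket

def proxy_reachable(host: str = "127.0.0.1", port: int = 10080,
                    timeout: float = 2.0) -> bool:
    """Return True if a plain TCP connection to the proxy port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# The OSError(0, 'Error') during do_handshake() in the traceback suggests
# the TLS handshake with the proxy failed, e.g. an https:// proxy URL
# pointing at a plain-HTTP or SOCKS proxy, so checking the raw TCP layer
# first helps separate "proxy down" from "wrong proxy scheme".
print(proxy_reachable())
```

If the TCP connection succeeds but the handshake still fails, the proxy URL scheme (http:// vs https:// vs socks5://) is the next thing to check.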

The log recorded after running is as follows:
127.0.0.1 - - [07/Jan/2024 20:54:34] "GET / HTTP/1.1" 200 -
127.0.0.1 - - [07/Jan/2024 20:54:34] "GET /static/avatar.jpg HTTP/1.1" 304 -
127.0.0.1 - - [07/Jan/2024 20:54:35] "GET /db/get?page=1&limit=10 HTTP/1.1" 200 -
127.0.0.1 - - [07/Jan/2024 20:54:52] "GET /static/momo.jpg HTTP/1.1" 304 -
[2024-01-07 20:54:52,251] ERROR in app: Exception on /chat/ask [POST]
Traceback (most recent call last):
File "E:\AI_Helper\llm_knowledge-master\venv\lib\site-packages\urllib3\connectionpool.py", line 712, in urlopen
self._prepare_proxy(conn)
File "E:\AI_Helper\llm_knowledge-master\venv\lib\site-packages\urllib3\connectionpool.py", line 1012, in _prepare_proxy
conn.connect()
File "E:\AI_Helper\llm_knowledge-master\venv\lib\site-packages\urllib3\connection.py", line 369, in connect
self.sock = conn = self._connect_tls_proxy(hostname, conn)
File "E:\AI_Helper\llm_knowledge-master\venv\lib\site-packages\urllib3\connection.py", line 504, in _connect_tls_proxy
socket = ssl_wrap_socket(
File "E:\AI_Helper\llm_knowledge-master\venv\lib\site-packages\urllib3\util\ssl_.py", line 453, in ssl_wrap_socket
ssl_sock = ssl_wrap_socket_impl(sock, context, tls_in_tls)
File "E:\AI_Helper\llm_knowledge-master\venv\lib\site-packages\urllib3\util\ssl_.py", line 495, in _ssl_wrap_socket_impl
return ssl_context.wrap_socket(sock)
File "E:\Python\lib\ssl.py", line 500, in wrap_socket
return self.sslsocket_class._create(
File "E:\Python\lib\ssl.py", line 1040, in _create
self.do_handshake()
File "E:\Python\lib\ssl.py", line 1309, in do_handshake
self._sslobj.do_handshake()
OSError: [Errno 0] Error

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "E:\AI_Helper\llm_knowledge-master\venv\lib\site-packages\requests\adapters.py", line 486, in send
resp = conn.urlopen(
File "E:\AI_Helper\llm_knowledge-master\venv\lib\site-packages\urllib3\connectionpool.py", line 827, in urlopen
return self.urlopen(
File "E:\AI_Helper\llm_knowledge-master\venv\lib\site-packages\urllib3\connectionpool.py", line 827, in urlopen
return self.urlopen(
File "E:\AI_Helper\llm_knowledge-master\venv\lib\site-packages\urllib3\connectionpool.py", line 799, in urlopen
retries = retries.increment(
File "E:\AI_Helper\llm_knowledge-master\venv\lib\site-packages\urllib3\util\retry.py", line 592, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='api.openai.com', port=443): Max retries exceeded with url: /v1/chat/completions/embeddings (Caused by ProxyError('Cannot connect to proxy.', OSError(0, 'Error')))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "E:\AI_Helper\llm_knowledge-master\venv\lib\site-packages\openai\api_requestor.py", line 606, in request_raw
result = _thread_context.session.request(
File "E:\AI_Helper\llm_knowledge-master\venv\lib\site-packages\requests\sessions.py", line 589, in request
resp = self.send(prep, **send_kwargs)
File "E:\AI_Helper\llm_knowledge-master\venv\lib\site-packages\requests\sessions.py", line 703, in send
r = adapter.send(request, **kwargs)
File "E:\AI_Helper\llm_knowledge-master\venv\lib\site-packages\requests\adapters.py", line 513, in send
raise ProxyError(e, request=request)
requests.exceptions.ProxyError: HTTPSConnectionPool(host='api.openai.com', port=443): Max retries exceeded with url: /v1/chat/completions/embeddings (Caused by ProxyError('Cannot connect to proxy.', OSError(0, 'Error')))

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "E:\AI_Helper\llm_knowledge-master\venv\lib\site-packages\flask\app.py", line 2190, in wsgi_app
response = self.full_dispatch_request()
File "E:\AI_Helper\llm_knowledge-master\venv\lib\site-packages\flask\app.py", line 1486, in full_dispatch_request
rv = self.handle_user_exception(e)
File "E:\AI_Helper\llm_knowledge-master\venv\lib\site-packages\flask\app.py", line 1484, in full_dispatch_request
rv = self.dispatch_request()
File "E:\AI_Helper\llm_knowledge-master\venv\lib\site-packages\flask\app.py", line 1469, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
File "E:\AI_Helper\llm_knowledge-master\web\__init__.py", line 90, in ask_question
"answer": embedding.ask_question(data["question"])
File "E:\AI_Helper\llm_knowledge-master\embedding\__init__.py", line 42, in ask_question
content = self.__db.query_text(text, 1)
File "E:\AI_Helper\llm_knowledge-master\embedding\vectordb\chromadb.py", line 29, in query_text
results = self.__collection.query(
File "E:\AI_Helper\llm_knowledge-master\venv\lib\site-packages\chromadb\api\models\Collection.py", line 327, in query
valid_query_embeddings = self._embed(input=valid_query_texts)
File "E:\AI_Helper\llm_knowledge-master\venv\lib\site-packages\chromadb\api\models\Collection.py", line 633, in _embed
return self._embedding_function(input=input)
File "E:\AI_Helper\llm_knowledge-master\venv\lib\site-packages\chromadb\utils\embedding_functions.py", line 194, in __call__
embeddings = self._client.create(input=input, model=self._model_name)[
File "E:\AI_Helper\llm_knowledge-master\venv\lib\site-packages\openai\api_resources\embedding.py", line 33, in create
response = super().create(*args, **kwargs)
File "E:\AI_Helper\llm_knowledge-master\venv\lib\site-packages\openai\api_resources\abstract\engine_api_resource.py", line 155, in create
response, _, api_key = requestor.request(
File "E:\AI_Helper\llm_knowledge-master\venv\lib\site-packages\openai\api_requestor.py", line 289, in request
result = self.request_raw(
File "E:\AI_Helper\llm_knowledge-master\venv\lib\site-packages\openai\api_requestor.py", line 619, in request_raw
raise error.APIConnectionError(
openai.error.APIConnectionError: Error communicating with OpenAI: HTTPSConnectionPool(host='api.openai.com', port=443): Max retries exceeded with url: /v1/chat/completions/embeddings (Caused by ProxyError('Cannot connect to proxy.', OSError(0, 'Error')))
127.0.0.1 - - [07/Jan/2024 20:54:52] "POST /chat/ask HTTP/1.1" 500 -

If I want to add a proxy in the code, which .py files need to be modified? If too many places would have to change, would it be better to add the proxy only where the API is called? Does the "Add text" button also use the API?
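On where to add the proxy: with the legacy v0.x SDK it is usually not necessary to edit many files, because both the SDK and typical page-crawling code go through the requests library, which honors the standard proxy environment variables. A sketch of two centralized options (the host/port and the assumption of an HTTP-style proxy are taken from the description; a SOCKS proxy would instead need a socks5:// URL and the requests[socks] extra):

```python
# Hedged sketch: route all HTTP(S) traffic through a local proxy without
# editing every .py file. Host and port are assumptions from the issue text.
import os

def configure_proxy(port: int = 10080, host: str = "127.0.0.1") -> dict:
    """Set the standard proxy environment variables, which the requests
    library (used by the SDK and by page crawling) picks up automatically."""
    proxy_url = f"http://{host}:{port}"
    os.environ["HTTP_PROXY"] = proxy_url
    os.environ["HTTPS_PROXY"] = proxy_url
    return {"http": proxy_url, "https": proxy_url}

proxies = configure_proxy()

# Alternative: the v0.x SDK also exposes a module-level setting that
# applies only to its own API calls:
#   import openai
#   openai.proxy = proxies
print(proxies["https"])
# -> http://127.0.0.1:10080
```

If crawling only works without the proxy while the API needs it, the `openai.proxy` route may be the better fit, since it confines the proxy to the API calls, and it would also cover the "Add text" path if that button triggers an embeddings call.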
