fix: Fix Local Server Issue by Adding Provider Configuration Check and Exception Handling #7522
Conversation
Added a check to ensure that only configured providers are queried for available chat models. Added exception handling to log warnings when a provider fails to return models, improving robustness and debugging capabilities.
PR Description updated to latest commit (0dea1f3)
✅ Deploy Preview for auto-gpt-docs canceled.
Codecov Report: all modified and coverable lines are covered by tests ✅

Additional details and impacted files:

```diff
@@            Coverage Diff             @@
##           master    #7522      +/-   ##
==========================================
+ Coverage   53.79%   58.12%   +4.33%
==========================================
  Files         124      106      -18
  Lines        7027     5760    -1267
  Branches      911      719     -192
==========================================
- Hits         3780     3348     -432
+ Misses       3114     2307     -807
+ Partials      133      105      -28
```

Flags with carried forward coverage won't be shown. ☔ View full report in Codecov by Sentry.
@kcze Can you take a look at this, please?
I've tested this fix and it seems to partially work. I'm able to run the agent, but there still seem to be issues with the checks for which LLMs are available: it reports that I don't have access to gpt-3.5-turbo, yet it sets fast_llm to gpt-3.5-turbo. It also reports that I don't have access to gpt-4-turbo. I have checked and ensured that both gpt-4-turbo and gpt-3.5-turbo are configured as Allowed models in my OpenAI project configuration --> Limits --> Model Usage.
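(Aside: a quick way to cross-check which models an API key can actually use is to list them against the API directly. This is a minimal sketch, assuming the OpenAI Python SDK v1+ and `OPENAI_API_KEY` set in the environment; it is not part of this PR.)

```python
# Minimal sketch: list the models this OpenAI API key can actually access,
# to cross-check the agent's availability warnings.
# Assumes the OpenAI Python SDK v1+ and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

available = {model.id for model in client.models.list()}
for wanted in ("gpt-3.5-turbo", "gpt-4-turbo"):
    status = "available" if wanted in available else "NOT available"
    print(f"{wanted}: {status}")
```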
Thanks for the contribution! Have you pushed all your changes?
@Pwuts you may want to have a look at this.
forge/forge/llm/providers/multi.py
Outdated
```diff
- models.extend(await provider.get_available_chat_models())
+ if hasattr(provider, 'is_configured') and provider.is_configured():
+     try:
+         models.extend(await provider.get_available_chat_models())
+     except Exception as e:
+         logger.warning(f"Failed to get models from {provider.__class__.__name__}: {e}")
```
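For readers skimming the diff, here is a self-contained sketch of the pattern it applies: skip providers that don't report themselves as configured, and log per-provider failures instead of letting them abort the whole lookup. `FakeProvider` and its methods are illustrative stand-ins, not the actual forge classes.

```python
# Illustrative sketch of the guarded aggregation pattern in this diff;
# FakeProvider is a stand-in, not a real forge provider class.
import asyncio
import logging

logger = logging.getLogger(__name__)

class FakeProvider:
    def __init__(self, name: str, configured: bool, models: list[str]):
        self.name = name
        self._configured = configured
        self._models = models

    def is_configured(self) -> bool:
        return self._configured

    async def get_available_chat_models(self) -> list[str]:
        if not self._configured:
            raise RuntimeError(f"{self.name} has no credentials")
        return self._models

async def get_all_models(providers: list[FakeProvider]) -> list[str]:
    models: list[str] = []
    for provider in providers:
        # Only query providers that claim to be configured...
        if hasattr(provider, "is_configured") and provider.is_configured():
            try:
                models.extend(await provider.get_available_chat_models())
            # ...and degrade gracefully if one of them still fails.
            except Exception as e:
                logger.warning(f"Failed to get models from {provider.name}: {e}")
    return models

print(asyncio.run(get_all_models([
    FakeProvider("openai", True, ["gpt-4-turbo"]),
    FakeProvider("anthropic", False, ["claude-3"]),
])))  # -> ['gpt-4-turbo']
```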
This isn't going to fix the issue:
- It seems there's no `is_configured` in any of the providers, so no models will be added.
- `get_available_chat_models()` just lists model names from a constant and has nothing to do with the provider being ready or not.
- `create_chat_completion` calls `get_model_provider` and then `_get_provider`, which triggers initialization if needed.
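To illustrate the reviewer's point, a hedged sketch of the lazy-initialization shape being described: listing chat models reads from a constant and always succeeds, while credentials are only checked when `create_chat_completion` resolves a provider on first use. All names here are illustrative, not forge's actual implementation.

```python
# Illustrative sketch (not forge's real code): model listing is static,
# provider initialization is lazy and only happens on first real use.
CHAT_MODELS = ["gpt-3.5-turbo", "gpt-4-turbo"]  # a constant, not a live query

class LazyMultiProvider:
    def __init__(self):
        self._providers: dict[str, object] = {}

    def get_available_chat_models(self) -> list[str]:
        # Static: succeeds whether or not any provider is configured.
        return list(CHAT_MODELS)

    def _get_provider(self, name: str):
        # Lazy: credentials are checked here, on first use.
        if name not in self._providers:
            self._providers[name] = self._init_provider(name)  # may raise
        return self._providers[name]

    def _init_provider(self, name: str):
        raise RuntimeError(f"no credentials configured for {name}")

    def create_chat_completion(self, model: str, messages: list[dict]):
        # This is where configuration failures actually surface.
        provider = self._get_provider("openai")
        ...
```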
This pull request has conflicts with the base branch; please resolve those so we can evaluate the pull request.
…t-Gravitas#7558) Return a mutated copy rather than the in-place mutated `flowRuns` in `refreshFlowRuns(..)`. Fixes Significant-Gravitas#7507
* replace SQLite with Postgres
* make sqlite default
* add migrations for sqlite and postgres
* update readme
* fix formatting
…urce of input (Significant-Gravitas#7539)

### Background
Input from the input pin is consumed only once, while the default input can always be used. So when an input pin overrides the default input, its value is used only once and every following run falls back to the default input. This can mislead the user. Expected behaviour: the node should NOT RUN, making connected pins use only their connection(s) as sources of data. A minimal sketch of this rule follows the changes list below.

### Changes 🏗️
* Make the pin connection the mandatory source of input, with no fallback to the default value.
* Fix the type flakiness on block input & output: unify the typing for BlockInput & BlockOutput using the right alias to avoid wrong typing.
* Add comment on alias.
* Add an automated test for the new behaviour.
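As referenced above, a minimal sketch of the intended input-resolution rule; `Pin` and `resolve_input` are hypothetical names, not the server's real API.

```python
# Illustrative sketch: a connected pin is the mandatory source of input;
# defaults apply only to unconnected pins. Names here are hypothetical.
from dataclasses import dataclass
from typing import Any

_MISSING = object()

@dataclass
class Pin:
    name: str
    connected: bool
    default: Any = None
    incoming: Any = _MISSING  # value delivered by a connection, consumed once

def resolve_input(pin: Pin) -> tuple[bool, Any]:
    """Return (ready, value). A connected pin without a fresh value is not ready."""
    if pin.connected:
        if pin.incoming is _MISSING:
            return False, None  # no fallback to default: the node must not run
        value, pin.incoming = pin.incoming, _MISSING  # consume once
        return True, value
    return True, pin.default  # unconnected pins may use their default

print(resolve_input(Pin("url", connected=True)))                # (False, None)
print(resolve_input(Pin("url", connected=False, default="x")))  # (True, 'x')
```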
- fix(builder/monitor): Export `Graph` rather than `GraphMeta`
  - Fixes Significant-Gravitas#7557
- refactor(builder): Split up `lib/autogpt_server_api` into a multi-file module
  - Resolves Significant-Gravitas#7555
  - Rename `lib/autogpt_server_api` to `lib/autogpt-server-api`
  - Split up `lib/autogpt-server-api` into `/client`, `/types`
  - Move `ObjectSchema` from `lib/types` to `lib/autogpt-server-api/types`
  - Make the definition of `Node['metadata']['position']` independent of `reactflow.XYPosition`
- fix(builder/monitor): Strip secrets from graph on export
  - Resolves Significant-Gravitas#7492
  - Add `safeCopyGraph` function in `lib/autogpt-server-api/utils`
  - Use `safeCopyGraph` to strip secrets from the graph on export in `/monitor` > `FlowInfo`
- Handles:
  - Add `NodeHandle` to draw input and output handles
  - Position handles relatively
  - Make the entire handle label clickable/connectable
  - Add input/output types below labels
  - Change color on hover and when connected
  - "Connected" no longer shows up when connected
- Edges:
  - Draw the edge above the node when connecting to the same node
  - Add custom `ConnectionLine`, drawn while making a connection
  - Add `CustomEdge`, drawn for existing connections
  - Add an arrow to the edge end
  - Colorize depending on type
- Input field modal:
  - Select all text when opened
  - Disable node dragging
- CSS:
  - Remove unneeded styling
  - Use Tailwind classes instead of CSS for some components
  - Minor style changes
- Add shadcn switch
- Change bottom node buttons (for properties and advanced) to switches
- Format code
…ant-Gravitas#7531) feat: Add ForEachBlock for iterating over a list.

Co-authored-by: Swifty <[email protected]>
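A hedged sketch of what a for-each style block looks like: it yields one output per list item so downstream nodes run once per element. The class and yield protocol are simplified stand-ins for the actual AutoGPT server block interface.

```python
# Illustrative sketch of a for-each block; the real ForEachBlock's schema
# and base class live in the repo and are not reproduced here.
from typing import Any, Iterator

class ForEachBlockSketch:
    def run(self, items: list[Any]) -> Iterator[tuple[str, Any]]:
        # Emit (output_name, value) pairs, one per element, so that
        # downstream nodes execute once per item.
        for index, item in enumerate(items):
            yield "item", (index, item)

for name, value in ForEachBlockSketch().run(["a", "b"]):
    print(name, value)  # item (0, 'a') / item (1, 'b')
```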
…-Gravitas#7533) feat: Add RSSReaderBlock for reading RSS feeds

This adds a new `RSSReaderBlock` class in the `rss-reader-block.py` file. The block lets users read RSS feeds by providing the feed URL, a start datetime, a polling rate, and a flag to run the block continuously. It fetches the feed with the `feedparser` library and returns the title, link, description, publication date, author, and categories of each RSS item (see the sketch below). The `feedparser` dependency is added to `pyproject.toml`.

* fix(server): update lock file
* updated poetry lock
* fixed rss reader testing
* Updated error message in test to include check info
* Set start time to 1 day ago
* Changed start time to a time period

Co-authored-by: Swifty <[email protected]>
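The sketch referenced above: a minimal example of reading a feed with `feedparser` and extracting the fields the commit message lists. The URL is a placeholder, and the exact field mapping of the real block is an assumption.

```python
# Minimal feedparser sketch; field names follow the commit message, and the
# URL is a placeholder, not a real endpoint from this PR.
import feedparser  # pip install feedparser

feed = feedparser.parse("https://example.com/rss.xml")  # placeholder URL
for entry in feed.entries:
    print({
        "title": entry.get("title"),
        "link": entry.get("link"),
        "description": entry.get("summary"),
        "pubdate": entry.get("published"),
        "author": entry.get("author"),
        "categories": [t.get("term") for t in entry.get("tags", [])],
    })
```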
* update CI
* add poetry run
* schema prisma
…ficant-Gravitas#7529) feat(blocks): Add MathsBlock for performing mathematical operations

Adds a new block, MathsBlock, that performs mathematical operations such as addition, subtraction, multiplication, division, and exponentiation. The block takes input parameters for the operation type and two numbers, plus an option to round the result, and returns the result of the calculation along with an explanation of the performed operation.

Co-authored-by: Swifty <[email protected]>
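A minimal sketch of the operations described, using a plain operator dispatch; the real MathsBlock's input/output schema lives in the repo and is not reproduced here.

```python
# Illustrative dispatch for the five operations the commit describes,
# with the optional rounding flag. Not the block's actual code.
import operator

OPS = {
    "add": operator.add,
    "subtract": operator.sub,
    "multiply": operator.mul,
    "divide": operator.truediv,
    "power": operator.pow,
}

def calculate(op: str, a: float, b: float, round_result: bool = False) -> float:
    result = OPS[op](a, b)
    return round(result) if round_result else result

print(calculate("power", 2, 10))        # 1024
print(calculate("divide", 7, 2, True))  # 4
```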
…icant-Gravitas#7567) feat: Add support for new Groq models

Adds support for new Groq models, including LLAMA3_1_405B, LLAMA3_1_70B, and LLAMA3_1_8B. These models are part of Groq's preview release and offer enhanced reasoning and versatility.
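For illustration, adding model support typically amounts to new enum members mapping member names to provider model IDs. A sketch under that assumption; the enum name and the exact ID strings are assumptions, not confirmed by this commit message.

```python
# Hypothetical sketch of the enum additions; member names come from the
# commit message, the value strings and enum name are assumptions.
from enum import Enum

class GroqModelName(str, Enum):
    LLAMA3_1_405B = "llama-3.1-405b-reasoning"
    LLAMA3_1_70B = "llama-3.1-70b-versatile"
    LLAMA3_1_8B = "llama-3.1-8b-instant"
```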
…#7564) Co-authored-by: Swifty <[email protected]>
* replace SQLite with Postgres
* dockerfiles and optional docker compose set up
* Update rnd/autogpt_builder/Dockerfile
* address feedback
* Update .dockerignore
* Remove example files folder
* remove backend and frontend from docker compose

Co-authored-by: Reinier van der Leer <[email protected]>
* fix schema file
* remove invalid migrations
* hardcode the sqlite url
* ci(server): add sqlite processing
* ci(server): try setting DATABASE_URL based on db platform
* fix(server): swap default back to sqlite
* ci(server): go back to database url

Co-authored-by: Aarushi <[email protected]>
…Numeric Results Only (Significant-Gravitas#7582)

* refactor(MathsBlock): Simplify output to return the numeric result directly
  - Remove the MathsResult class and explanation field
  - Update the Output schema to use the float type
  - Simplify the run method to yield only the numeric result
  - Adjust error handling to return inf or nan for errors (see the sketch below)
  - Update test cases to reflect the new output structure
* run format
* refactor(CounterBlock): Simplify output to return the count as an integer
  - Remove the CounterResult class
  - Update the Output schema to use the int type directly
  - Simplify the run method to yield the count without an explanation
  - Modify error handling to return -1 for any error
  - Update the test case to reflect the new output structure
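The error-handling sketch referenced above: a minimal illustration of the new convention of returning a plain number, with `inf`/`nan` standing in for failures. The function shape is illustrative, not the block's actual code.

```python
# Illustrative sketch of the numeric-only error convention: return a plain
# float, using inf/nan for failures instead of a structured result object.
import math

def run_maths(op: str, a: float, b: float) -> float:
    try:
        if op == "divide":
            return a / b  # ZeroDivisionError handled below
        if op == "add":
            return a + b
        raise ValueError(f"unknown operation: {op}")
    except ZeroDivisionError:
        return math.inf
    except Exception:
        return math.nan

print(run_maths("divide", 1, 0))  # inf
print(run_maths("modulo", 1, 2))  # nan
```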
Conflicts have been resolved! 🎉 A maintainer will review the pull request shortly.
User description
Added a check to ensure that only configured providers are queried for available chat models. Added exception handling to log warnings when a provider fails to return models.
Background
Fixes a local server issue caused by incorrect provider querying for chat models. Irrelevant, unconfigured providers were being checked, leading to confusing errors even though port 8080 was not actually occupied.
Reported issue
Changes 🏗️
Provider Configuration Check: only providers that report themselves as configured are queried for available chat models.
Exception Handling: a provider that fails to return models now logs a warning instead of breaking the whole lookup.
PR Quality Scorecard ✨
- Have you used the PR description template? +2 pts
- Is your pull request atomic, focusing on a single change? +5 pts
- Have you linked the GitHub issue(s) that this PR addresses? +5 pts
- Have you documented your changes clearly and comprehensively? +5 pts
- Have you changed or added a feature? -4 pts
  - Have you added/updated corresponding documentation? +4 pts
  - Have you added/updated corresponding integration tests? +5 pts
- Have you changed the behavior of AutoGPT? -5 pts
  - Have you also run `agbenchmark` to verify that these changes do not regress performance? +10 pts
PR Type
Bug fix, Enhancement
Description
- Added a check to ensure that only configured providers are queried for available chat models.
- Added exception handling to log warnings when a provider fails to return models.

Changes walkthrough 📝
- `multi.py` (`forge/forge/llm/providers/multi.py`): add provider configuration check and exception handling
  - Added a check so that only configured providers are queried for available chat models.
  - Added exception handling to log a warning when a provider fails to return models.