
fix: Fix Local Server Issue by Adding Provider Configuration Check and Exception Handling #7522

Conversation


@elvinmahmudov elvinmahmudov commented Jul 21, 2024

User description

Added a check to ensure that only configured providers are queried for available chat models. Added exception handling to log warnings when a provider fails to return models.

Background

Fixes a local server issue related to incorrect provider querying for chat models. The problem was caused by irrelevant provider configurations being checked, which led to confusing errors even though port 8080 was not occupied.
Reported issue

Changes 🏗️

Provider Configuration Check:

  • Only query configured providers for available chat models.

Exception Handling:

  • Log warnings when a provider fails to return models, improving debugging (both changes are sketched below).
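
For reference, a minimal sketch of the pattern this PR introduces (the loop shape and provider objects are assumed here; the real change lives in forge/forge/llm/providers/multi.py and is shown in the review diff further down):

    import logging

    logger = logging.getLogger(__name__)


    async def collect_available_chat_models(providers: list) -> list:
        """Sketch of the PR's behavior: query only providers that report
        themselves as configured, and keep going if one of them fails."""
        models: list = []
        for provider in providers:
            # Only query providers that expose and pass an is_configured() check.
            if hasattr(provider, "is_configured") and provider.is_configured():
                try:
                    models.extend(await provider.get_available_chat_models())
                except Exception as e:
                    # A failing provider is logged and skipped rather than
                    # aborting the whole lookup.
                    logger.warning(
                        f"Failed to get models from {provider.__class__.__name__}: {e}"
                    )
        return models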

PR Quality Scorecard ✨

  • Have you used the PR description template?   +2 pts
  • Is your pull request atomic, focusing on a single change?   +5 pts
  • Have you linked the GitHub issue(s) that this PR addresses?   +5 pts
  • Have you documented your changes clearly and comprehensively?   +5 pts
  • Have you changed or added a feature?   -4 pts
    • Have you added/updated corresponding documentation?   +4 pts
    • Have you added/updated corresponding integration tests?   +5 pts
  • Have you changed the behavior of AutoGPT?   -5 pts
    • Have you also run agbenchmark to verify that these changes do not regress performance?   +10 pts

PR Type

Bug fix, Enhancement


Description

  • Added a check to ensure that only configured providers are queried for available chat models.
  • Implemented exception handling to log warnings when a provider fails to return models, improving robustness and debugging capabilities.

Changes walkthrough 📝

Relevant files (Bug fix)

multi.py (forge/forge/llm/providers/multi.py): Add provider configuration check and exception handling (+5/-1)

  • Added a check to ensure only configured providers are queried for available chat models.
  • Implemented exception handling to log warnings when a provider fails to return models.

    💡 PR-Agent usage:
    Comment /help on the PR to get a list of all available PR-Agent tools and their descriptions

    Added a check to ensure that only configured providers are queried for available chat models. Added exception handling to log warnings when a provider fails to return models, improving robustness and debugging capabilities.
    @codiumai-pr-agent-pro codiumai-pr-agent-pro bot added enhancement New feature or request Bug fix labels Jul 21, 2024

    PR Description updated to latest commit (0dea1f3)


    netlify bot commented Jul 21, 2024

    Deploy Preview for auto-gpt-docs canceled.

    🔨 Latest commit: 3b4b537
    🔍 Latest deploy log: https://app.netlify.com/sites/auto-gpt-docs/deploys/66a16c158207c2000850e9d8


    PR Reviewer Guide 🔍

    ⏱️ Estimated effort to review: 2 🔵🔵⚪⚪⚪
    🧪 No relevant tests
    🔒 No security concerns identified
    ⚡ Key issues to review

    Error Handling
    The exception handling in the new code logs a warning but does not handle the exception further or re-raise it. This could potentially allow the function to proceed with incomplete data, which might not be the intended behavior. Consider how critical this failure is and if the function should fail, return an error, or handle this case differently.
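
    For illustration only (not part of this PR): one way to make that trade-off explicit is to track failures and raise only when no provider returned anything, so callers never silently proceed with an empty model list. The helper below is a hypothetical sketch.

        import logging

        logger = logging.getLogger(__name__)


        async def collect_models_or_fail(providers: list) -> list:
            """Hypothetical variant: tolerate individual provider failures,
            but raise if every provider fails."""
            models = []
            failures = []
            for provider in providers:
                try:
                    models.extend(await provider.get_available_chat_models())
                except Exception as e:
                    failures.append(provider.__class__.__name__)
                    logger.warning(
                        f"Failed to get models from {provider.__class__.__name__}: {e}"
                    )
            if failures and not models:
                # Nothing usable came back; surface the problem to the caller.
                raise RuntimeError(f"All providers failed to return models: {failures}")
            return models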


    PR Code Suggestions ✨

    Category: Best practice
    Rethrow exceptions after logging to ensure they are not swallowed

    To avoid catching and logging exceptions silently, consider re-raising the exception
    after logging or handling it in a way that the calling function can also be aware of
    it.

    forge/forge/llm/providers/multi.py [75-76]

     except Exception as e:
       logger.warning(f"Failed to get models from {provider.__class__.__name__}: {e}")
    +  raise
     
    Suggestion importance[1-10]: 9

    Why: Rethrowing exceptions after logging ensures that errors are not silently ignored, which is crucial for debugging and maintaining the integrity of the application. This is a best practice for error handling.

    Use specific exceptions in the except block for better error handling

    Replace the generic Exception type in the except block with more specific exceptions
    that you expect could occur during the operation. This will help in better error
    handling and debugging.

    forge/forge/llm/providers/multi.py [75]

    -except Exception as e:
    +except (SpecificExceptionType1, SpecificExceptionType2) as e:
     
    Suggestion importance[1-10]: 8

    Why: Using specific exceptions improves error handling and debugging by making it clear which errors are expected and handled. This is a best practice for robust code.

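    The SpecificExceptionType1/2 placeholders above are illustrative. With stdlib-only exception types, the narrowed handler could look like the sketch below; the exact exceptions to catch depend on the provider SDKs in use.

        import logging

        logger = logging.getLogger(__name__)


        async def extend_models_safely(models: list, provider) -> None:
            """Append one provider's chat models, catching only anticipated failures."""
            try:
                models.extend(await provider.get_available_chat_models())
            except (ConnectionError, TimeoutError, ValueError) as e:
                # Catch only the error classes we expect (network trouble, bad
                # responses); genuine bugs still raise and surface a traceback.
                logger.warning(
                    f"Failed to get models from {provider.__class__.__name__}: {e}"
                )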


    codecov bot commented Jul 21, 2024

    Codecov Report

    All modified and coverable lines are covered by tests ✅

    Project coverage is 58.12%. Comparing base (e7885f9) to head (3b4b537).
    Report is 13 commits behind head on master.

    Additional details and impacted files
    @@            Coverage Diff             @@
    ##           master    #7522      +/-   ##
    ==========================================
    + Coverage   53.79%   58.12%   +4.33%     
    ==========================================
      Files         124      106      -18     
      Lines        7027     5760    -1267     
      Branches      911      719     -192     
    ==========================================
    - Hits         3780     3348     -432     
    + Misses       3114     2307     -807     
    + Partials      133      105      -28     
    Flag Coverage Δ
    Linux 57.82% <ø> (+4.27%) ⬆️
    Windows 54.00% <ø> (+3.60%) ⬆️
    autogpt-agent ?
    forge 58.00% <ø> (ø)
    macOS 56.99% <ø> (+4.12%) ⬆️

    Flags with carried forward coverage won't be shown.

    ☔ View full report in Codecov by Sentry.

    @Swiftyos Swiftyos requested a review from kcze July 22, 2024 10:14
    @Swiftyos
    Contributor

    @kcze Can you take a look at this, please?

    @ntindle ntindle self-assigned this Jul 22, 2024
    @QuinRiva

    I've tested this fix and it seems to partially work. I'm able to run the agent, but it seems like there are still issues with the checks for which LLMs are available:

    Note that it is reporting that I don't have access to gpt-3.5-turbo, but it sets the fast_llm to gpt-3.5-turbo. It also reports that I don't have access to gpt-4-turbo. I have checked and ensured that both gpt-4-turbo and gpt-3.5-turbo are configured as Allowed models in my OpenAI project configuration (Limits -> Model Usage).

    2024-07-22 22:11:55,926 INFO  HTTP Request: GET https://api.openai.com/v1/models "HTTP/1.1 200 OK"
    2024-07-22 22:11:56,044 WARNING  You don't have access to gpt-3.5-turbo. Setting fast_llm to gpt-3.5-turbo.
    2024-07-22 22:11:56,163 WARNING  You don't have access to gpt-4-turbo. Setting smart_llm to gpt-3.5-turbo.
    2024-07-22 22:11:57,105 INFO  NEWS: Welcome to AutoGPT!
    2024-07-22 22:11:57,105 INFO  NEWS: Below you'll find the latest AutoGPT News and feature updates!
    2024-07-22 22:11:57,105 INFO  NEWS: If you don't wish to see this message, you can run AutoGPT with the --skip-news flag.
    2024-07-22 22:11:57,105 INFO  NEWS: 
    2024-07-22 22:11:57,106 INFO  NEWS: QUICK LINKS 🔗
    2024-07-22 22:11:57,106 INFO  NEWS: --------------
    2024-07-22 22:11:57,106 INFO  NEWS: 🌎 Official Website: https://agpt.co.
    2024-07-22 22:11:57,106 INFO  NEWS: 📖 User Guide: https://docs.agpt.co/autogpt.
    2024-07-22 22:11:57,106 INFO  NEWS: 👩 Contributors Wiki: https://github.com/Significant-Gravitas/AutoGPT/wiki/Contributing.
    2024-07-22 22:11:57,106 INFO  NEWS: 
    2024-07-22 22:11:57,106 INFO  NEWS: v0.5.0 RELEASE HIGHLIGHTS! 🚀🚀
    2024-07-22 22:11:57,107 INFO  NEWS: -------------------------------
    2024-07-22 22:11:57,107 INFO  NEWS: Cloud-readiness, a new UI, support for the newest Agent Protocol version, and much more:
    2024-07-22 22:11:57,107 INFO  NEWS: v0.5.0 is our biggest release yet!
    2024-07-22 22:11:57,107 INFO  NEWS: 
    2024-07-22 22:11:57,107 INFO  NEWS: Take a look at the Release Notes on Github for the full changelog:
    2024-07-22 22:11:57,107 INFO  NEWS: https://github.com/Significant-Gravitas/AutoGPT/releases.
    2024-07-22 22:11:57,107 INFO  NEWS: 
    2024-07-22 22:11:57,108 WARNING  You are running on `fix-crash-on-uncofigured-providers` branch - this is not a supported branch.
    2024-07-22 22:11:57,108 INFO  Smart LLM: gpt-3.5-turbo
    2024-07-22 22:11:57,108 INFO  Fast LLM: gpt-3.5-turbo
    2024-07-22 22:11:57,177 INFO  Code Execution: ENABLED
    

    Contributor

    @kcze kcze left a comment


    Thanks for the contribution, have you pushed all your changes?
    @Pwuts you may want to have a look at this.

    Comment on lines 72 to 76
    -models.extend(await provider.get_available_chat_models())
    +if hasattr(provider, 'is_configured') and provider.is_configured():
    +    try:
    +        models.extend(await provider.get_available_chat_models())
    +    except Exception as e:
    +        logger.warning(f"Failed to get models from {provider.__class__.__name__}: {e}")
    Contributor


    This isn't going to fix the issue:

    1. It seems there's no is_configured in any of the providers, so no models will be added.
    2. get_available_chat_models() just lists model names from a constant and has nothing to do with the provider being ready or not.

    create_chat_completion calls get_model_provider and then _get_provider, which triggers initialization if needed.
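
    A rough sketch of the lazy-initialization flow described above; the class shape, factory mapping, and model-to-provider routing here are assumptions for illustration, not the actual forge API.

        class MultiProviderSketch:
            """Hypothetical registry that initializes a provider only when a
            model belonging to it is actually requested."""

            def __init__(self, provider_factories: dict):
                self._factories = provider_factories  # provider name -> zero-arg factory
                self._providers: dict = {}            # providers initialized so far

            def get_model_provider(self, model_name: str):
                # Route the model to its provider name, then fetch (and lazily
                # initialize) that provider.
                return self._get_provider(self._provider_name_for(model_name))

            def _get_provider(self, name: str):
                if name not in self._providers:
                    # Credential checks / client setup happen here, on first use.
                    self._providers[name] = self._factories[name]()
                return self._providers[name]

            def _provider_name_for(self, model_name: str) -> str:
                # Placeholder routing rule for the sketch.
                return "openai" if model_name.startswith("gpt-") else "anthropic"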

    @github-actions github-actions bot added the conflicts Automatically applied to PRs with merge conflicts label Jul 23, 2024
    Contributor

    This pull request has conflicts with the base branch, please resolve those so we can evaluate the pull request.

    @ntindle ntindle removed their assignment Jul 24, 2024
    Pwuts and others added 10 commits July 24, 2024 22:59
    …t-Gravitas#7558)
    
    Return mutated copy rather than in-place mutated `flowRuns` in `refreshFlowRuns(..)`
    
    Fixes Significant-Gravitas#7507
    * replace SQLite with Postgres
    
    * make sqlite default
    
    * add migrations for sqlite and postgres
    
    * update readme
    
    * fix formatting
    …urce of input (Significant-Gravitas#7539)
    
    ### Background
    
    Input from the input pin is consumed only once, and the default input can always be used. So when you have an input pin overriding the default input, the value will be used only once and the following run will always fall back to the default input. This can mislead the user.
    
    Expected behaviour: the node should NOT RUN, making connected pins only use their connection(s) for sources of data.
    
    ### Changes 🏗️
    
    * Make pin connection the mandatory source of input and not falling back to default value.
    * Fix the type flakiness on block input & output. Unify the typing for BlockInput & BlockOutput using the right alias to avoid wrong typing.
    * Add comment on alias
    * automated test on the new behaviour.
    - fix(builder/monitor): Export `Graph` rather than `GraphMeta`
      - Fixes Significant-Gravitas#7557
    
    - refactor(builder): Split up `lib/autogpt_server_api` into multi-file module
      - Resolves Significant-Gravitas#7555
      - Rename `lib/autogpt_server_api` to `lib/autogpt-server-api`
      - Split up `lib/autogpt-server-api` into `/client`, `/types`
      - Move `ObjectSchema` from `lib/types` to `lib/autogpt-server-api/types`
      - Make definition of `Node['metadata']['position']` independent of `reactflow.XYPosition`
    
    - fix(builder/monitor): Strip secrets from graph on export
      - Resolves Significant-Gravitas#7492
      - Add `safeCopyGraph` function in `lib/autogpt-server-api/utils`
      - Use `safeCopyGraph` to strip secrets from graph on export in `/monitor` > `FlowInfo`
    - Handles:
      - Add `NodeHandle` to draw input and output handles
      - Position handles relatively
      - Make entire handle label clickable/connectable
      - Add input/output types below labels
      - Change color on hover and when connected
      - "Connected" no longer shows up when connected
    - Edges:
      - Draw edge above node when connecting to the same node
      - Add custom `ConnectionLine`; drawn when making a connection
      - Add `CustomEdge`; drawn for existing connections
      - Add arrow to the edge end
      - Colorize depending on type
    - Input field modal:
      - Select all text when opened
      - Disable node dragging
    - CSS:
      - Remove not needed styling
      - Use tailwind classes instead of css for some components
      - Minor style changes
    - Add shadcn switch
    - Change bottom node buttons (for properties and advanced) to switches
    - Format code
    …ant-Gravitas#7531)
    
    * feat: Add ForEachBlock for iterating over a List.
    
    ---------
    
    Co-authored-by: Swifty <[email protected]>
    …-Gravitas#7533)
    
    * feat: Add RSSReaderBlock for reading RSS feeds
    
    The commit adds a new `RSSReaderBlock` class in the `rss-reader-block.py` file. This block allows users to read RSS feeds by providing the URL of the feed, start datetime, polling rate, and a flag to run the block continuously. The block fetches the feed using the `feedparser` library and returns the title, link, description, publication date, author, and categories of each RSS item.
    
    This commit also includes the addition of the `feedparser` dependency in the `pyproject.toml` file.
    
    * fix(server): update lock file
    
    * updated poetry lock
    
    * fixed rss reader testing
    
    * Updated error message in test to include check info
    
    * Set starttime as 1 day ago
    
    * Changed start time to time period
    
    ---------
    
    Co-authored-by: Swifty <[email protected]>
    * update CI
    
    * add poetry run
    
    * schema prisma
    …ficant-Gravitas#7529)
    
    * feat(blocks): Add MathsBlock for performing mathematical operations
    
    The commit adds a new block called MathsBlock to perform various mathematical operations such as addition, subtraction, multiplication, division, and exponentiation. The block takes input parameters for the operation type, two numbers, and an option to round the result. It returns the result of the calculation along with an explanation of the performed operation.
    
    ---------
    
    Co-authored-by: Swifty <[email protected]>
    …icant-Gravitas#7567)
    
    feat: Add support for new Groq models
    
    The commit adds support for new Groq models, including LLAMA3_1_405B, LLAMA3_1_70B, and LLAMA3_1_8B. These models are part of the preview release and offer enhanced reasoning and versatility capabilities.
    Bentlybro and others added 10 commits July 24, 2024 23:00
    * replace SQLite with Postgres
    
    * dockerfiles and optional docker compose set up
    
    * Update rnd/autogpt_builder/Dockerfile
    
    Co-authored-by: Reinier van der Leer <[email protected]>
    
    * address feedback
    
    * Update .dockerignore
    
    Co-authored-by: Reinier van der Leer <[email protected]>
    
    * Remove example files folder
    
    * remove backend and frontend from docker compose
    
    ---------
    
    Co-authored-by: Reinier van der Leer <[email protected]>
    * fix schema file
    
    * remove invalid migrations
    
    * make sqlite url hardcode
    * ci(server): add sqlite processing
    
    * ci(server): try setting DATABASE_URL based on db platform
    
    * fix(server): swap default back to sqlite
    
    * ci(server): go back to database url
    
    ---------
    
    Co-authored-by: Aarushi <[email protected]>
    …Numeric Results Only (Significant-Gravitas#7582)
    
    * refactor(MathsBlock): Simplify output to return numeric result directly
    
    - Remove MathsResult class and explanation field
    - Update Output schema to use float type
    - Simplify run method to yield numeric result only
    - Adjust error handling to return inf or nan for errors
    - Update test cases to reflect new output structure
    
    * run format
    
    * refactor(CounterBlock): Simplify output to return count as integer
    
    - Remove CounterResult class
    - Update Output schema to use int type directly
    - Simplify run method to yield count without explanation
    - Modify error handling to return -1 for any errors
    - Update test case to reflect new output structure
    )
    
    * fix(Builder): fixes to copy and pasted blocks
    
    * clear the blocks state on paste
    Added a check to ensure that only configured providers are queried for available chat models. Added exception handling to log warnings when a provider fails to return models, improving robustness and debugging capabilities.
    @elvinmahmudov elvinmahmudov requested review from a team as code owners July 24, 2024 21:00
    @elvinmahmudov elvinmahmudov requested review from ntindle and kcze and removed request for a team July 24, 2024 21:00
    @github-actions github-actions bot added Classic AutoGPT Agent platform/frontend AutoGPT Platform - Front end and removed conflicts Automatically applied to PRs with merge conflicts labels Jul 24, 2024
    Contributor

    Conflicts have been resolved! 🎉 A maintainer will review the pull request shortly.

    @github-actions github-actions bot added platform/backend AutoGPT Platform - Back end size/xl and removed size/s labels Jul 24, 2024
    @github-actions github-actions bot added size/xs and removed Classic AutoGPT Agent platform/frontend AutoGPT Platform - Front end platform/backend AutoGPT Platform - Back end size/xl labels Jul 24, 2024
    @elvinmahmudov elvinmahmudov deleted the fix-crash-on-uncofigured-providers branch July 24, 2024 21:14