    Repositories list

    • Docker compose stub of Guardrails as a Service
      Python
      Other
      Updated Nov 1, 2024
    • Adding guardrails to large language models.
      Python
      Apache License 2.0
      Updated Nov 1, 2024
    • Guardrails AI: PII Filter - Validates that text does not contain any PII
      Python
      Apache License 2.0
      Updated Oct 31, 2024
    • Python
      Apache License 2.0
      Updated Oct 31, 2024
    • Python
      Apache License 2.0
      Updated Oct 30, 2024
    • Guardrails AI: QA Relevance LLM eval - Validates that an answer is relevant to the question asked by asking the LLM to self-evaluate
      Python
      Apache License 2.0
      Updated Oct 30, 2024
    • Python
      Apache License 2.0
      Updated Oct 30, 2024
    • A validator which ensures that a generated output answers the prompt given.
      Python
      Apache License 2.0
      Updated Oct 30, 2024
    • Prototype Jailbreak Detection Guard
      Python
      Apache License 2.0
      Updated Oct 30, 2024
    • Python
      Apache License 2.0
      Updated Oct 29, 2024
    • Guardrails AI: Provenance LLM - Validates that the LLM-generated text is supported by the provided contexts.
      Python
      Apache License 2.0
      Updated Oct 25, 2024
    • Python
      Apache License 2.0
      Updated Oct 23, 2024
    • Validator for GuardrailsHub that checks whether a text is related to a topic.
      Python
      Apache License 2.0
      Updated Oct 22, 2024
    • OpenAPI Specifications and scripts for generating SDKs for the various Guardrails services
      JavaScript
      Updated Oct 18, 2024
    • Shared interfaces defined in JSON Schema.
      JavaScript
      Updated Oct 18, 2024
    • NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based conversational systems.
      Python
      Other
      Updated Oct 17, 2024
    • Guardrails AI: Toxic language - Validates that the generated text does not contain toxic language
      Jupyter Notebook
      Apache License 2.0
      Updated Oct 17, 2024
    • Python
      Apache License 2.0
      Updated Oct 1, 2024
    • Guardrails AI: Competitor Check - Validates that LLM-generated text is not naming any competitors from a given list
      Jupyter Notebook
      Apache License 2.0
      Updated Sep 17, 2024
    • nsfw_text
      A Guardrails AI validator that detects inappropriate / Not Safe For Work (NSFW) text during validation
      Python
      Apache License 2.0
      Updated Sep 17, 2024
    • Fork of BrainLogic AI's validator
      Python
      Apache License 2.0
      Updated Sep 16, 2024
    • Python
      Apache License 2.0
      Updated Sep 16, 2024
    • Scans LLM outputs for code, code fragments, and keys
      Python
      Apache License 2.0
      Updated Sep 16, 2024
    • valid_url
      Guardrails AI: Valid url - Validates that a value is a valid URL
      Python
      Apache License 2.0
      Updated Sep 16, 2024
    • Guardrails AI: Valid range - Validates that a value is within a range
      Python
      Apache License 2.0
      Updated Sep 16, 2024
    • Guardrails AI: Valid JSON - Validates that a value is parseable as valid JSON.
      Python
      Apache License 2.0
      Updated Sep 16, 2024
    • A Guardrails AI validator that validates whether a given address is valid
      Python
      Apache License 2.0
      Updated Sep 16, 2024
    • uppercase
      Guardrails AI: Upper case - Validates that a value is upper case
      Python
      Apache License 2.0
      Updated Sep 16, 2024
    • A Guardrails AI input validator that detects whether the user is trying to jailbreak an LLM using unusual prompting techniques meant to trick the model
      Python
      Apache License 2.0
      Updated Sep 16, 2024
    • A Guardrails AI validator that validates LLM responses by re-prompting the LLM to self-evaluate
      Python
      Apache License 2.0
      Updated Sep 16, 2024
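Several of the validators listed above (valid JSON, valid URL, valid range) reduce to a simple check on a single value. A minimal plain-Python sketch of what the valid-JSON check verifies — this is an illustration only, not the Guardrails validator interface, and `is_valid_json` is a hypothetical helper name:

```python
import json


def is_valid_json(value: str) -> bool:
    """Return True if `value` parses as JSON.

    Illustrative analogue of the valid-JSON validator above; the real
    validator plugs into the Guardrails validation pipeline instead of
    being a free-standing function.
    """
    try:
        json.loads(value)
        return True
    except (ValueError, TypeError):
        # json.JSONDecodeError is a subclass of ValueError
        return False


print(is_valid_json('{"name": "guardrails"}'))  # well-formed object -> True
print(is_valid_json("{not json"))               # malformed input -> False
```

In the actual hub packages, a failed check would trigger the validator's configured on-fail behavior (for example, raising or re-asking the LLM) rather than just returning a boolean.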