| Developed by | Jonathan Bennion |
| --- | --- |
| Date of development | Mar 29, 2024 |
| Validator type | Format |
| License | Apache 2 |
| Input/Output | Output |
This bias check format validator ensures that textual outputs do not contain biased language targeting specific demographics, such as race, gender, sex, religion, or ethnicity. It can be used to help ensure fairness of model output across demographic groups.
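Conceptually, the validator scores each output with a HuggingFace text-classification model (hence the `transformers` dependency below) and compares the model's confidence that the text is biased against the configured threshold. The sketch below illustrates that decision step only; the label names and the threshold semantics are assumptions for illustration, not the validator's exact internals.

```python
# Sketch of the threshold decision applied to a classifier result.
# The result shape mirrors what a transformers text-classification
# pipeline returns, e.g. {"label": "Biased", "score": 0.97}; the label
# names and the pass/fail rule here are illustrative assumptions.
def passes_bias_check(result: dict, threshold: float = 0.9) -> bool:
    """Fail only when the classifier is confident (score above threshold)
    that the text is biased."""
    is_biased = result["label"].lower() == "biased"
    return not (is_biased and result["score"] > threshold)


print(passes_bias_check({"label": "Biased", "score": 0.97}))      # False -> would fail validation
print(passes_bias_check({"label": "Non-biased", "score": 0.99}))  # True  -> would pass
```

Under that assumption, lowering the threshold makes the check stricter, since less classifier confidence is needed to flag a sentence.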
Dependencies:

- guardrails-ai>=0.5.0
- transformers>=4.40

Dev Dependencies:

- pytest
- pyright
- ruff

Foundation model access keys:

- None (basic HuggingFace hub access)
```bash
$ guardrails hub install hub://guardrails/bias_check
```
In this example, we apply the validator to a string output generated by an LLM.
```python
# Import Guard and Validator
from guardrails.hub import BiasCheck
from guardrails import Guard

# Setup Guard with the BiasCheck validator
guard = Guard().use(
    BiasCheck(threshold=0.9, on_fail="exception")
)

guard.validate("The movie was great!")  # Validator passes
guard.validate("Why do men always think the movie was great?")  # Validator fails
```