feat(test-data-generation): first pass at generating deck configurations #14962
Conversation
This all looks reasonable to me so far. Thank you!
```python
class EvaluationFunction(typing.Protocol):
    """A method that validates a deck configuration."""

    def __call__(self, deck: DeckConfiguration) -> bool:
        """Evaluates the deck configuration and returns True if it is valid."""
        ...
```
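For illustration, here is a hedged sketch of how this callback Protocol gets satisfied in practice; the `DeckConfiguration` stand-in and the twelve-cutout rule below are assumptions for the example, not the real datashape:

```python
import typing
from dataclasses import dataclass, field


@dataclass
class DeckConfiguration:
    """Stand-in for the real datashape in datashapes.py."""

    fixtures_by_cutout: typing.Dict[str, str] = field(default_factory=dict)


class EvaluationFunction(typing.Protocol):
    """A method that validates a deck configuration."""

    def __call__(self, deck: DeckConfiguration) -> bool:
        """Evaluates the deck configuration and returns True if it is valid."""
        ...


def fills_all_twelve_cutouts(deck: DeckConfiguration) -> bool:
    """One possible evaluation: a configuration must assign a fixture to all 12 cutouts."""
    return len(deck.fixtures_by_cutout) == 12


# Because EvaluationFunction is a Protocol, any callable with a matching
# signature satisfies it structurally; no subclassing is required.
evaluator: EvaluationFunction = fills_all_twelve_cutouts
```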
What is the best way to determine whether the valid and invalid deck configurations I create actually meet our internal definitions of valid and invalid? I am thinking of just starting to throw them at the analysis engine and the deck_configuration endpoint and seeing what happens.
Our "official" validation lives in robot_server/deck_configuration/validation.py and robot_server/deck_configuration/validation_mapping.py. I don't have an informed opinion about whether it's a good idea to just import
that directly, or go through python -m opentrons.cli analyze
, or what.
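As a rough illustration of the "throw them at analysis" option, something like the sketch below could work, assuming each generated configuration is first rendered into a Python protocol file. The file handling and success criterion are assumptions; only the `python -m opentrons.cli analyze` entry point comes from the discussion above:

```python
import subprocess
import sys


def protocol_analyzes_successfully(protocol_path: str) -> bool:
    """Run a generated protocol through analysis and report whether it passed.

    A sketch only: the exact CLI flags and output handling may differ.
    """
    result = subprocess.run(
        [sys.executable, "-m", "opentrons.cli", "analyze", protocol_path],
        capture_output=True,
        text=True,
    )
    return result.returncode == 0
```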
As far as the machine-readable schema is concerned, it's currently pretty weakly defined. For example, the fixture names are typed as `str`s instead of `enum`s, and we validate them "manually," in our application layer, instead of in our model layer. The idea was to avoid duplicating things from the deck definition. A downside to this is that you can't just feed Hypothesis our Pydantic models and have it automatically generate all the valid deck configurations, because it will generate bogus values for those strings.
On the other hand, maybe that's a good thing? Maybe we should be generating a bunch of invalid deck configurations to test with, as long as they don't drown out the valid ones? I'm curious what you think about this, philosophically.
There's also https://pypi.org/project/hypothesis-jsonschema/. I suppose we could use that to generate deck configurations from robot-server's OpenAPI spec. Or add a deck configuration schema to shared-data, and generate it from that. Those options would have the same `str` vs. `enum` problem, but we might have more options for dealing with that at the JSON schema level, because JSON schemas can reference and include each other.
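To make the `str` vs. `enum` point concrete, here's a minimal sketch: with fixture names typed as an enum, Hypothesis can generate only valid names straight from the Pydantic model. The enum members and field names below are illustrative assumptions, not the real schema:

```python
import enum

from hypothesis import given, strategies as st
from pydantic import BaseModel


class FixtureName(enum.Enum):
    """Hypothetical enum; the real names live in the deck definition."""

    SINGLE_RIGHT_SLOT = "singleRightSlot"
    STAGING_AREA_RIGHT_SLOT = "stagingAreaRightSlot"


class CutoutFixture(BaseModel):
    """Illustrative model: one cutout and the fixture mounted in it."""

    cutout_id: str  # still weakly typed; a Literal or enum would constrain this too
    cutout_fixture_id: FixtureName  # enum-typed, so Hypothesis can't produce bogus names


@given(
    st.builds(
        CutoutFixture,
        cutout_id=st.just("cutoutD3"),
        cutout_fixture_id=st.sampled_from(FixtureName),
    )
)
def test_generated_fixture_names_are_valid(fixture: CutoutFixture) -> None:
    assert fixture.cutout_fixture_id in FixtureName
```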
Overall looks like a good start, but I have some questions below regarding a few validations. Also, just to clear up some of the theory behind this PR: it's important not to mix up the notion of literal "areas" (or addressable areas, as it were) on the deck versus the "fixtures" in a deck's configuration file. The deck configuration represents 12 exact physical cutouts in the deck, and the physical hardware mounted within them. The engine interprets that configuration file's information and loads the definitions for the fixtures provided in those 12 cutouts. From those definitions we then find out what "spaces" on the deck are provided by each of our fixtures. A generic `singleRightSlot` fixture may only provide a "slot" called "D3" on the deck, but a `stagingAreaRightSlot` fixture would provide a "D3" slot and a "D4" slot. The deck config doesn't see 4 columns; it only sees that some fixture was loaded into cutout D3. What that fixture "creates" is an engine concern.
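A tiny sketch of that cutout-versus-fixture split, using made-up mapping data; deriving the provided areas from the fixture definition is the engine's job, not the deck config's:

```python
# Hypothetical data for illustration; the real definitions live in shared-data.
FIXTURE_PROVIDED_AREAS: dict[str, dict[str, list[str]]] = {
    "singleRightSlot": {"cutoutD3": ["D3"]},
    "stagingAreaRightSlot": {"cutoutD3": ["D3", "D4"]},
}


def addressable_areas(cutout_id: str, fixture_name: str) -> list[str]:
    """Derive which "slots" a fixture creates, as the engine would.

    The deck config itself only records that the fixture sits in the cutout.
    """
    return FIXTURE_PROVIDED_AREAS[fixture_name][cutout_id]


assert addressable_areas("cutoutD3", "stagingAreaRightSlot") == ["D3", "D4"]
```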
This looks great and the changes so far seem comprehensive. I've got two comments below regarding some possible future behavior to keep in mind, but nothing blocking.
This looks great, love the explicit hypotheses and the approach towards generating the data.
let me have the protocols
```python
    suppress_health_check=[HealthCheck.filter_too_much, HealthCheck.too_slow],
)
def test_above_below_heater_shaker(deck_config: DeckConfiguration) -> None:
    """I hypothesize that any deck configuration with a non-labware slot fixture above or below a heater-shaker is invalid."""
```
helpful
```python
    suppress_health_check=[HealthCheck.filter_too_much, HealthCheck.too_slow],
)
def test_invalid_fixture_in_col_2(deck_config: DeckConfiguration) -> None:
    """I hypothesize that any deck configuration containing at least one Heater-Shaker, Trash Bin, or Temperature Module in column 2 is invalid."""
```
❤️ 😊
Overview
With the large code change in #14684 we want to test it as thoroughly as possible. This PR adds generation of test cases with [hypothesis](https://hypothesis.readthedocs.io/en/latest/index.html) for deck configuration.
The idea is that hypothesis will generate `DeckConfiguration` objects that represent what a user defines in their protocol or their deck configuration in the UI. These objects will then be used to auto-generate Python protocols to pipe through analysis, exercising our deck configuration validation logic.
Test Plan
- I went through some of the generated deck configurations to verify they were being created correctly.
Changelog
- Create `datashapes.py`: defines a simplified deck configuration model and all of its contents.
- Create `helper_strategies.py`: provides the building-block strategies that are used to make the final strategy producing a `DeckConfiguration` object.
- Create `final_strategies.py`: contains the logic for generating the final `DeckConfiguration` objects.
Review requests
- Should I add some tests to confirm that `DeckConfiguration` objects are being generated as expected?
Risk assessment
None... yet.
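To round out the Changelog structure above, here is a hedged sketch of how building-block strategies might compose into the final one; every name here (`Fixture`, `DeckConfiguration`, the cutout IDs) is a simplified stand-in for the real code in `datashapes.py`, `helper_strategies.py`, and `final_strategies.py`:

```python
import enum
from dataclasses import dataclass
from typing import Dict

from hypothesis import strategies as st


class Fixture(enum.Enum):
    """Simplified stand-in for the fixtures modeled in datashapes.py."""

    SINGLE_SLOT = "singleRightSlot"
    STAGING_AREA = "stagingAreaRightSlot"
    TRASH_BIN = "trashBin"


@dataclass
class DeckConfiguration:
    """Simplified stand-in: one fixture per physical cutout."""

    fixtures_by_cutout: Dict[str, Fixture]


# The Flex deck has 12 cutouts: rows A-D by columns 1-3.
CUTOUT_IDS = [f"cutout{row}{col}" for row in "ABCD" for col in "123"]

# Building-block strategy (helper_strategies.py territory): one fixture.
a_fixture = st.sampled_from(Fixture)

# Final strategy (final_strategies.py territory): a fixture for all 12 cutouts.
deck_configurations = st.builds(
    DeckConfiguration,
    fixtures_by_cutout=st.fixed_dictionaries({c: a_fixture for c in CUTOUT_IDS}),
)
```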