
feat(test-data-generation): first pass at generating deck configurations #14962

Merged
merged 14 commits into edge on May 1, 2024

Conversation

@DerekMaggio DerekMaggio commented Apr 19, 2024

Overview

With the large code change in #14684, we want to test it as thoroughly as possible. This PR adds generation of test cases for deck configuration with [hypothesis](https://hypothesis.readthedocs.io/en/latest/index.html).

The idea is that hypothesis will generate DeckConfiguration objects representing what a user defines in their protocol or in their deck configuration in the UI. These objects will then be used to auto-generate Python protocols, piped through analysis, to exercise our deck configuration validation logic.
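As a sketch of the approach (with hypothetical, simplified models; the real ones live in `datashapes.py`), a Hypothesis strategy for such objects might look like:

```python
from dataclasses import dataclass
from typing import List

from hypothesis import strategies as st

# Hypothetical simplified data; the real models are defined in datashapes.py.
CUTOUT_IDS = [f"cutout{row}{col}" for row in "ABCD" for col in "123"]
FIXTURE_IDS = ["singleRightSlot", "stagingAreaRightSlot", "trashBinAdapter"]

@dataclass
class CutoutFixture:
    cutout_id: str
    fixture_id: str

@dataclass
class DeckConfiguration:
    fixtures: List[CutoutFixture]

def fixture_in(cutout_id: str) -> st.SearchStrategy:
    """Strategy for one cutout: that cutout paired with any fixture id."""
    return st.builds(
        CutoutFixture,
        cutout_id=st.just(cutout_id),
        fixture_id=st.sampled_from(FIXTURE_IDS),
    )

# One fixture per cutout, so every generated deck covers all 12 cutouts.
deck_configurations: st.SearchStrategy = st.builds(
    DeckConfiguration,
    fixtures=st.tuples(*(fixture_in(c) for c in CUTOUT_IDS)).map(list),
)
```

Each generated `DeckConfiguration` can then be fed to a protocol generator or a validator inside an `@given`-decorated test.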

Test Plan

  • I went through some of the generated deck configurations to verify they were being created correctly

Changelog

  • Create `datashapes.py`
    • Defines a simplified deck configuration model and all of its contents
  • Create `helper_strategies.py`
    • Provides the building-block strategies used to compose the final strategy that produces a `DeckConfiguration` object
  • Create `final_strategies.py`
    • Contains the logic for generating the final `DeckConfiguration` objects

Review requests

  • Should I add some tests to confirm that DeckConfiguration objects are being generated as expected?

Risk assessment

None.... yet

@DerekMaggio DerekMaggio changed the base branch from edge to RQA-2612-create-test-data-generation-project April 19, 2024 18:27
@DerekMaggio DerekMaggio changed the title Deck configuration testing rough draft feat(test-data-generation): first pass at generating deck configurations Apr 19, 2024
@DerekMaggio DerekMaggio requested a review from a team April 22, 2024 13:21
@DerekMaggio DerekMaggio marked this pull request as ready for review April 22, 2024 16:36
@DerekMaggio DerekMaggio requested a review from a team as a code owner April 22, 2024 16:36

@SyntaxColoring SyntaxColoring left a comment


This all looks reasonable to me so far. Thank you!

Comment on lines 16 to 21
class EvaluationFunction(typing.Protocol):
"""A method that validates a deck configuration."""

def __call__(self, deck: DeckConfiguration) -> bool:
"""Evaluates the deck configuration and returns True if it is valid."""
...
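Since `EvaluationFunction` is a structural (`typing.Protocol`) type, any plain function with a matching signature satisfies it; no subclassing is needed. A minimal sketch with a hypothetical rule and a stand-in model:

```python
import typing
from dataclasses import dataclass
from typing import List

@dataclass
class DeckConfiguration:  # stand-in for the real model in datashapes.py
    fixture_ids: List[str]

class EvaluationFunction(typing.Protocol):
    """A method that validates a deck configuration."""

    def __call__(self, deck: DeckConfiguration) -> bool:
        """Evaluates the deck configuration and returns True if it is valid."""
        ...

def has_at_most_one_trash(deck: DeckConfiguration) -> bool:
    """Hypothetical rule: a deck may hold at most one trash bin."""
    return deck.fixture_ids.count("trashBinAdapter") <= 1

# Plain functions type-check as EvaluationFunction and can be run in bulk:
checks: List[EvaluationFunction] = [has_at_most_one_trash]

def is_valid(deck: DeckConfiguration) -> bool:
    return all(check(deck) for check in checks)
```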

@SyntaxColoring SyntaxColoring Apr 23, 2024


> What is the best way to determine if my created valid and invalid deck configurations meet our internal requirements of being valid and invalid? I am thinking of just starting to throw them at the analysis engine and the deck_configuration endpoint and see what happens.

Our "official" validation lives in robot_server/deck_configuration/validation.py and robot_server/deck_configuration/validation_mapping.py. I don't have an informed opinion about whether it's a good idea to just import that directly, or go through python -m opentrons.cli analyze, or what.

As far as the machine-readable schema is concerned, it's currently pretty weakly defined. For example, the fixture names are typed as `str`s instead of enums, and we validate them "manually," in our application layer, instead of in our model layer. The idea was to avoid duplicating things from the deck definition. A downside to this is that you can't just feed Hypothesis our Pydantic models and have it automatically generate all the valid deck configurations, because it will generate bogus values for those strings.

On the other hand, maybe that's a good thing? Maybe we should be generating a bunch of invalid deck configurations to test with, as long as they don't drown out the valid ones? I'm curious what you think about this, philosophically.

There's also https://pypi.org/project/hypothesis-jsonschema/. I suppose we could use that to generate deck configurations from robot-server's OpenAPI spec. Or add a deck configuration schema to shared-data, and generate it from that. Those options would have the same str vs. enum problem, but we might have more options for dealing with that at the JSON schema level because JSON schemas can reference and include each other.
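The `str`-vs-enum point can be made concrete. If fixture names were an enum (hypothetical here; today the schema types them as plain `str`s), a strategy derived from the type would only ever yield legal members, whereas a `str`-typed field admits arbitrary text:

```python
import enum

from hypothesis import strategies as st

class FixtureName(enum.Enum):
    """Hypothetical enum; the real schema types fixture names as str."""
    SINGLE_RIGHT_SLOT = "singleRightSlot"
    STAGING_AREA_RIGHT_SLOT = "stagingAreaRightSlot"
    TRASH_BIN_ADAPTER = "trashBinAdapter"

# Derived from the enum, this strategy can only produce legal values:
legal_names = st.sampled_from(FixtureName)

# Derived from str (as in the current schema), this admits bogus values,
# so invalid configurations must be filtered out -- or generated on
# purpose, if mixing in some invalid decks is desirable:
bogus_names = st.text().filter(
    lambda s: s not in {member.value for member in FixtureName}
)
```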


@CaseyBatten CaseyBatten left a comment


Overall this looks like a good start, but I have some questions below regarding a few validations. Also, just to clear up some of the theory behind this PR: it's important not to mix up the notion of literal "areas" (or addressable areas, as it were) on the deck with the "fixtures" in a deck's configuration file. The deck configuration represents 12 exact physical cutouts in the deck, and the physical hardware mounted within them. The engine interprets that configuration file's information and loads the definitions for the fixtures provided in those 12 cutouts. From those definitions we then find out what "spaces" on the deck are provided by each of our fixtures. A generic singleRightSlot fixture may only provide a "slot" called "D3" on the deck, but a stagingAreaRightSlot fixture would provide a "D3" slot and a "D4" slot. The deck config doesn't see 4 columns; it only sees that some fixture was loaded into cutout D3, and what that fixture "creates" is an engine concern.
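A small sketch of that distinction (the mapping below is hypothetical; the real provided-area data comes from the fixture definitions in shared-data):

```python
from typing import Dict, List

# Hypothetical mapping from fixture id to the addressable areas ("slots")
# it provides; in reality this comes from the fixture definitions.
PROVIDED_AREA_TEMPLATES: Dict[str, List[str]] = {
    "singleRightSlot": ["{slot}"],                 # provides only its own slot
    "stagingAreaRightSlot": ["{slot}", "{row}4"],  # also provides a column-4 slot
}

def areas_for(cutout_id: str, fixture_id: str) -> List[str]:
    """Expand a (cutout, fixture) pair into the slots the engine would see."""
    slot = cutout_id.removeprefix("cutout")  # e.g. "cutoutD3" -> "D3"
    row = slot[0]                            # "D3" -> "D"
    return [
        template.format(slot=slot, row=row)
        for template in PROVIDED_AREA_TEMPLATES[fixture_id]
    ]

print(areas_for("cutoutD3", "singleRightSlot"))       # ['D3']
print(areas_for("cutoutD3", "stagingAreaRightSlot"))  # ['D3', 'D4']
```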

@DerekMaggio DerekMaggio force-pushed the deck-configuration-testing-rough-draft branch from ca287d9 to 008d82d Compare April 30, 2024 16:54
@DerekMaggio DerekMaggio requested a review from a team as a code owner April 30, 2024 16:54
@DerekMaggio DerekMaggio force-pushed the deck-configuration-testing-rough-draft branch from 008d82d to 32ee716 Compare April 30, 2024 16:58
@DerekMaggio DerekMaggio changed the base branch from RQA-2612-create-test-data-generation-project to edge April 30, 2024 17:05

@CaseyBatten CaseyBatten left a comment


This looks great and the changes so far seem comprehensive. I've got two comments below regarding some possible future behavior to keep in mind, but nothing blocking.

@DerekMaggio DerekMaggio force-pushed the deck-configuration-testing-rough-draft branch from 4e65279 to 949cfd0 Compare May 1, 2024 17:28

@sfoster1 sfoster1 left a comment


This looks great, love the explicit hypotheses and the approach towards generating the data.


@y3rsh y3rsh left a comment


let me have the protocols :shipit:

suppress_health_check=[HealthCheck.filter_too_much, HealthCheck.too_slow],
)
def test_above_below_heater_shaker(deck_config: DeckConfiguration) -> None:
    """I hypothesize that any deck configuration with a non-labware-slot fixture above or below a heater-shaker is invalid."""


helpful
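For context, the snippet above is only the tail of the test; the general shape of such a Hypothesis property test (with a stand-in strategy and validator, since the real ones are not shown here) is roughly:

```python
from dataclasses import dataclass
from typing import List

from hypothesis import HealthCheck, given, settings, strategies as st

@dataclass
class DeckConfiguration:  # stand-in for the real model in datashapes.py
    fixture_ids: List[str]

# Stand-in strategy: decks that always pair a heater-shaker with a
# non-labware-slot fixture in an adjacent cutout (details elided here).
invalid_decks = st.builds(
    DeckConfiguration,
    fixture_ids=st.just(["heaterShakerModuleV1", "trashBinAdapter"]),
)

def validate(deck: DeckConfiguration) -> bool:
    """Stand-in validator: reject a heater-shaker next to a non-slot fixture."""
    return not (
        "heaterShakerModuleV1" in deck.fixture_ids
        and "trashBinAdapter" in deck.fixture_ids
    )

@settings(
    suppress_health_check=[HealthCheck.filter_too_much, HealthCheck.too_slow],
)
@given(deck_config=invalid_decks)
def test_above_below_heater_shaker(deck_config: DeckConfiguration) -> None:
    """Every deck the strategy produces should be rejected by the validator."""
    assert not validate(deck_config)
```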

suppress_health_check=[HealthCheck.filter_too_much, HealthCheck.too_slow],
)
def test_invalid_fixture_in_col_2(deck_config: DeckConfiguration) -> None:
    """I hypothesize that any deck configuration containing at least one Heater-Shaker, Trash Bin, or Temperature Module in column 2 is invalid."""


❤️ 😊

@DerekMaggio DerekMaggio merged commit b15af5e into edge May 1, 2024
29 checks passed
Carlos-fernandez pushed a commit that referenced this pull request May 20, 2024