User Story
In order to make the integration tests of the dcat_usmetadata app more consistent, predictable, and reliable, datagov wants to rewrite the existing flaky tests.
Acceptance Criteria
GIVEN the current integration tests
WHEN new tests are written and executed
THEN the tests will pass or fail without the need to retry
AND the results will accurately reflect the app's actual behavior.
Background
Many of the integration tests are known for being unreliable, requiring multiple re-runs to pass. The result of a test should be reliable and reflect how the app actually works.
Examples
- user-flow: "Displays confirmation page when clicked"
- resource-upload: "Resource radio buttons work", "Works when editing a resource during dataset creation"
- additional-metadata: "Goes back to previous page", "Saves dataset using "Save draft" button"
Sketch
- Identify the complete list of tests that need to be rewritten.
- Determine the new tests to create.