@pfertyk, some thoughts on the TESTS.md setup.
First, we want to explain the commonalities of both workflows: the structure of the tests, what's being passed in as `solution_script`, and what's being returned by each test as `[expected, ...some code...]`.

We should also highlight that all tests are currently executed. Other tracks support setting tests to pending when you're working offline, so you enable each test only after the previous one passes, until all of them pass. That's not relevant here: both workflows use the test runner, so students should be aware that all tests will be run in both cases.

We should also highlight how to pull down the test runner image for offline testing. To that end, maybe the test runner should have an optional mode for students: they navigate to their solution folder and run `docker test runner whatever --offline`. The test runner picks up the current folder, parses the exercise slug out of it, and then uses that same folder as the output directory for the `results.json`. Then we just need to explain what the `results.json` is and how to read it (preferably with examples).
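To make the proposed offline mode concrete, here is a rough sketch of the slug-derivation rule it would use. The folder layout, the idea that the slug equals the solution folder's name, and the output location are all assumptions about a mode that doesn't exist yet, not a description of an implemented interface:

```shell
# Hypothetical solution folder; "two-fer" stands in for any exercise slug.
solution_dir="$HOME/exercism/python/two-fer"

# The offline mode would derive the slug from the folder name...
slug="$(basename "$solution_dir")"

# ...and write results.json back into that same folder.
output_dir="$solution_dir"

echo "$slug"   # prints: two-fer
```

The appeal of this convention is that the student never has to type the slug or an output path; the standard workspace layout supplies both.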
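For the `results.json` walkthrough, an illustrative example may help. The shape below follows the general Exercism test runner interface (a version number, an overall status, and a per-test list); the specific test names and messages are made up for illustration, and the exact fields can vary between interface versions:

```shell
# Write a sample results.json like the one the test runner would produce.
cat > results.json <<'EOF'
{
  "version": 2,
  "status": "fail",
  "tests": [
    { "name": "test_one", "status": "pass" },
    { "name": "test_two", "status": "fail", "message": "expected 2, got 3" }
  ]
}
EOF

# One portable way to read it: list the failing tests and their messages.
python3 -c 'import json; [print(t["name"], "-", t.get("message", "")) for t in json.load(open("results.json"))["tests"] if t["status"] == "fail"]'
```

Students could either eyeball the file directly or script-read it like this; either way, TESTS.md should spell out what `status` and `message` mean per test.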