
Try finding an "after all" method that verifies the relevant test cases are hit during fuzz tests. #29

Open
a-t-0 opened this issue Jul 13, 2024 · 0 comments

Currently, a counter for each test scenario is tracked and exported during fuzz testing.

For example, some random initialisations may not allow one to initialise a valid dim contract, even though one may want to test something for which a valid dim contract is required. One can overcome this in multiple ways:

  1. Write an expectRevert or a validity assertion for each configuration of random values.
  2. Check whether the random test parameters yield a valid configuration, and if so, test the case you want to test.
  3. ... Other ways.

Option 1 would lead to large, complicated test files.
Option 2 was chosen, yet it can lead to false positives: one assumes a fuzz test covers some (edge) case even though the actual test code is never reached, because the random initialisations never satisfy the conditions required to reach that case.

To manage these potential false positives, a tracker was built that logs to an output file, per test run, how often each case is reached, so that one can see how often the fuzz test actually exercised the case it is meant to test.
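Below is a minimal sketch of this pattern, assuming a Foundry test using forge-std. The log path, the `isValidDimConfig` check, and the final assertion are hypothetical placeholders. Note that Foundry resets contract state between fuzz runs, so an in-memory counter would not accumulate; appending to a file with the `vm.writeLine` cheatcode survives the runs (it requires an `fs_permissions` entry in foundry.toml).

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.13;

import {Test} from "forge-std/Test.sol";

contract DimFuzzTest is Test {
    // Hypothetical log path; must be writable via fs_permissions.
    string internal constant LOG_PATH = "out/fuzz_case_hits.log";

    function testFuzz_SomeDimProperty(uint256 a, uint256 b) public {
        if (!isValidDimConfig(a, b)) {
            // Invalid random initialisation: skip this run (option 2).
            return;
        }
        // Record that the relevant case was actually reached in this run.
        vm.writeLine(LOG_PATH, "testFuzz_SomeDimProperty:valid_case_hit");

        // ... the actual check that requires a valid dim contract goes here.
        assertGt(b, a);
    }

    // Placeholder validity condition; the real one depends on the contract.
    function isValidDimConfig(uint256 a, uint256 b) internal pure returns (bool) {
        return a > 0 && b > a;
    }
}
```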

Current state

Currently, one can manually inspect the logs to verify that the fuzz test has reached the actual test cases.

Risky state

One could add a requirement for the fuzz tests to run until each (relevant) test case is hit n times. However, that is risky because a test developer may erroneously expect the test cases to be hit, leading to an infinite wait without any signal that there is a problem.

Ideal state

After the fuzz tests are completed, an additional test is performed on the output logs, which throws a failure if the required test cases are not hit (often enough).
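A minimal sketch of such a verification step, assuming the fuzz tests append one line per hit to a log file as above; the path, the threshold, and the contract name are hypothetical:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.13;

import {Test} from "forge-std/Test.sol";

contract VerifyFuzzCoverageTest is Test {
    // Hypothetical path and threshold; must match what the fuzz tests write.
    string internal constant LOG_PATH = "out/fuzz_case_hits.log";
    uint256 internal constant MIN_HITS = 10;

    function test_RelevantCaseHitOftenEnough() public {
        string memory log = vm.readFile(LOG_PATH);
        uint256 hits = countLines(log);
        assertGe(hits, MIN_HITS, "fuzz test rarely or never reached the relevant case");
    }

    // Counts newline characters, i.e. one logged hit per line.
    function countLines(string memory s) internal pure returns (uint256 n) {
        bytes memory b = bytes(s);
        for (uint256 i = 0; i < b.length; i++) {
            if (b[i] == "\n") n++;
        }
    }
}
```

Since Foundry does not guarantee test ordering, this would in practice have to run as a second step, e.g. a separate `forge test --match-contract VerifyFuzzCoverageTest` invocation after the fuzz run; this ties into the difficulties below.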

Difficulties

  • I have not yet found an afterAll (fuzz runs/test files) method in Solidity Foundry.
  • (Fuzz) tests should always be runnable in random order, but if something/an additional test needs to run at the end, that would break the random order.