Add interceptors that catch assumption errors #191
base: master
Conversation
Branch force-pushed from aa38b04 to 9348a6b.

@renatoathaydes
A test that does not run due to an assumption not being met should not be treated as a failure, but as an ignored test.
That's the general idea behind having assumptions rather than just assertions.
For example, this is how IntelliJ shows a JUnit test that throws AssumptionViolatedException: [screenshot: the test is shown as ignored, not as failed]
So just using the existing machinery for handling skipped tests would be the correct solution.
Using an interceptor to catch the exception seems to me to be a valid solution, given that the Spock team has not shown interest in helping us fix this problem. And now that Spock is adding mechanisms for skipping individual iterations, not only whole specifications, the problem will become even more visible to Spock users.
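For illustration, here is a minimal Spock feature that aborts via a JUnit 4 assumption. This is only a sketch: the spec name and the environment check are made up, and Assume.assumeTrue is one standard way to raise AssumptionViolatedException.

```groovy
import org.junit.Assume
import spock.lang.Specification

class DatabaseSpec extends Specification {

    def "runs only when the integration DB is configured"() {
        given: 'abort (do not fail) the feature when the precondition is unmet'
        // throws AssumptionViolatedException when the condition is false,
        // which IDEs render as an ignored test rather than a failure
        Assume.assumeTrue( 'integration DB not configured',
                System.getenv( 'DB_URL' ) != null )

        expect:
        true // real assertions against the DB would go here
    }
}
```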
```diff
@@ -45,6 +54,19 @@ class ProblemBlockWriter {
         }
     }

+    void skipsContainer( MarkupBuilder builder, Runnable createProblemList ) {
```
I think it would be much better to just treat skipped tests as we treat ignored tests. I don't see why we would want to show skipped tests differently.
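As a rough sketch of that idea, assuming (hypothetically) that ProblemBlockWriter already has a renderer for ignored tests, the new method could simply delegate to it rather than introduce a distinct presentation:

```groovy
// Sketch only: 'ignoredContainer' is a hypothetical name for an existing
// ignored-test renderer; skipped tests reuse it instead of getting
// their own look in the report.
void skipsContainer( MarkupBuilder builder, Runnable createProblemList ) {
    ignoredContainer( builder, createProblemList )
}
```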
Can you point me in the right direction, please? Do you expect this to be reported inside writeFeatureDescription?
I mean that you don't need to change anything other than add a new method, iterationSkipped, to the spock-reports extension. Also, just consider any test that throws AssumptionViolatedException as "skipped". No changes are required to the methods that build the XML.
Is it fine to treat AssumptionException as a 'SpecProblem'? That looked like the path of least resistance to me. I can instead call a new 'iterationSkipped' method in the listener, add 'ignoresByIteration' in …
```groovy
    invocation.proceed()
} catch( Throwable t ) {
    if( t in AssumptionViolatedException ) {
        listener.error new ErrorInfo( invocation.method, t )
```
Here is where I mean we should be calling iterationSkipped or specSkipped, depending on the level which is currently running. This method of intercepting the spec run is very smart! If you don't mind, I can make the changes using this strategy... would you want to submit a much smaller PR that only does the following?

- add this interceptor.
- invoke either featureSkipped or iterationSkipped, depending on "what" was skipped.

Leave the actual report changes to me, as they should be minimal (but I think they will require some more thought about how to represent them).
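A minimal sketch of that strategy, assuming Spock's IMethodInterceptor/IMethodInvocation extension API and JUnit's AssumptionViolatedException. The class name is made up, the listener field's type is left open, and iterationSkipped is the new callback proposed above, not an existing API:

```groovy
import org.junit.internal.AssumptionViolatedException
import org.spockframework.runtime.extension.IMethodInterceptor
import org.spockframework.runtime.extension.IMethodInvocation

class AssumptionInterceptor implements IMethodInterceptor {

    private final Object listener // the spock-reports listener (type left open in this sketch)

    AssumptionInterceptor( Object listener ) { this.listener = listener }

    @Override
    void intercept( IMethodInvocation invocation ) throws Throwable {
        try {
            invocation.proceed()
        } catch ( AssumptionViolatedException e ) {
            if ( invocation.feature?.parameterized ) {
                // only this iteration was aborted
                listener.iterationSkipped( invocation.iteration, e ) // proposed new callback
            } else {
                // an un-parameterized feature is effectively a single-iteration test
                listener.featureSkipped( invocation.feature ) // existing-style skip callback
            }
        }
        // any other Throwable propagates unchanged and is reported as an error
    }
}
```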
I can check whether we are inside an 'unrolled iteration' or a 'not parameterized feature' and call the relevant listener method, and even mark the feature as 'skipped', but the drawback in calling … Now, while writing this, I realize that I propose to consider any assumption error from inside the test as a separate case. A not-parameterized feature can be considered a test with 1 iteration, while a parameterized feature is considered a test with N iterations. We need to pass down the actual error that caused the test abortion in order to present it in the report.
Sorry for the delay... been busy with other projects.
As I mentioned before, aborting a test due to an assumption violation is not an error, and it should not be treated as such. You can check other report systems: they all treat aborted tests as ignored tests, not errors. If you want to see the "error" message, you should fail the test, not abort it. I'm sorry, but I can only accept this change if this is the behaviour introduced. It's fine if you don't want to change the code to do that, but as I said above, please let me know if you don't mind me using some of it to implement the behaviour I am describing.
I don't mind if you use any ideas from this PR, I'd actually appreciate that!
I never wanted to turn skipped tests into errored tests; they should always be marked and reported as ignored/skipped, and I don't dispute this. In the current implementation I pass it to the listener as an 'ErrorInfo', but the exception in that ErrorInfo is always a TestAbortedException or AssumptionException. The test will still be marked as 'skipped'; the ErrorInfo is just an object that holds the executed method and the exception with the abortion reason.
Related to #89