
Two tests fail when warnings are present in the test run #7

Open
koobs opened this issue Jun 18, 2019 · 1 comment

koobs commented Jun 18, 2019

The following two tests fail if there are any warnings in the test run:

test_shows_tests_nested_under_classes_without_files
test_tests_are_colorized_by_test_result

with failure output:

>           assert "== 1 failed, 4 passed, 1 skipped in " in output
E           AssertionError: assert '== 1 failed, 4 passed, 1 skipped in ' in '============================= test session starts ==============================\nplatform freebsd13 -- Python.../en/latest/warnings.html\n========== 1 failed, 4 passed, 1 skipped, 1 warnings in 0.28 seconds ==========='
=================================== FAILURES ===================================
_________________________ OtherBehaviors.behavior_four _________________________

self = <other_behaviors.OtherBehaviors instance at 0x80467f9e0>

    def behavior_four(self):
>       assert False
E       AssertionError

other_behaviors.py:13: AssertionError
=============================== warnings summary ===============================
behaviors.py::Behaviors::behavior_one
  /usr/local/lib/python2.7/site-packages/pytest_relaxed/reporter.py:79: UserWarning: Argument(s) ('config',) which are declared in the hookspec can not be found in this hook call
    cat, letter, word = status_getter(report=report)

-- Docs: https://docs.pytest.org/en/latest/warnings.html
========== 1 failed, 4 passed, 1 skipped, 1 warnings in 0.28 seconds ===========

I tried --disable-warnings for the test run, but it seems this does not apply to the captured output.

The tests should either account for the possibility of X warnings in the output, or ignore warnings by removing the trailing " in " from the asserted summary string (a sketch of the former is below).
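
For the "account for warnings" option, a regex-based assertion along these lines might work instead (an untested sketch, assuming the same output variable used in _expect_regular_output):

    import re

    # Tolerate an optional ", N warnings" segment in the summary line while
    # still checking for the trailing " in ".
    assert re.search(
        r"== 1 failed, 4 passed, 1 skipped(, \d+ warnings?)? in ", output
    )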


koobs commented Jun 18, 2019

The following patch fixes the issue for me, if you decide you want to go that way (ignoring warnings, rather than accounting for them):

--- tests/test_display.py.orig  2019-06-14 18:05:29 UTC
+++ tests/test_display.py
@@ -18,7 +18,7 @@ def _expect_regular_output(testdir):
     assert "== FAILURES ==" in output
     assert "AssertionError" in output
     # Summary
-    assert "== 1 failed, 4 passed, 1 skipped in " in output
+    assert "== 1 failed, 4 passed, 1 skipped" in output


 class TestRegularFunctions:
@@ -170,7 +170,7 @@ OtherBehaviors
         assert "== FAILURES ==" in output
         assert "AssertionError" in output
         # Summary
-        assert "== 1 failed, 4 passed, 1 skipped in " in output
+        assert "== 1 failed, 4 passed, 1 skipped" in output

     def test_tests_are_colorized_by_test_result(  # noqa: F811,E501
         self, testdir, environ
@@ -225,7 +225,7 @@ OtherBehaviors
         assert "== FAILURES ==" in output
         assert "AssertionError" in output
         # Summary
-        assert "== 1 failed, 4 passed, 1 skipped in " in output
+        assert "== 1 failed, 4 passed, 1 skipped" in output

     def test_nests_many_levels_deep_no_problem(self, testdir):
         testdir.makepyfile(
