PyTest
- pytest is one of the most widely used unit/integration testing platforms for Python.
- pytest does not break previously written tests. By default, it also runs native unittest tests.
- unittest's CLI discovery feature lacks the capability to find test code in namespace-style packages, so a pivot to pytest is natural given that trait alone.
- pytest has an easy, less involved API. This cuts down on the boilerplate code that the unittest package requires.
- pytest is to testing as pandas is to DataFrames. The package is actively developed and has a huge community -- it's a safe dependency to add.
- pytest comes with the ability to make mock and stub objects easily through its built-in monkeypatch fixture.
- Tests can be run in parallel using pytest-xdist, something unittest lacks (a short example follows this list).
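For instance, parallel execution only requires installing the plugin and passing one extra flag. A minimal sketch, using the test file from the examples below:

foo@bar:~$ pip install pytest-xdist
foo@bar:~$ # -n auto spreads the collected tests across all available CPU cores
foo@bar:~$ pytest test_listoperation.py -n auto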
This is a very high-level comparison. The expectation is that the reader has some knowledge of and experience with the unittest package.
Python's unittest was introduced to the standard library in version 2.1. It is a robust framework heavily inspired by JUnit. The framework is used by many and is not a bad choice as a testing framework.
pytest makes it easy to write small tests, yet scales to support complex functional testing for applications and libraries.
The framework offers everything that unittest can do in a more concisely typed manner, including running unittest tests.
| unittest pros | pytest pros | unittest cons | pytest cons |
| --- | --- | --- | --- |
| Native support | Compact syntax that simplifies the process | Uses non-pythonic camelCase | Something new to learn |
| Will be most familiar to newcomers from academia | Simplified implementation of fixtures that don't require creating a class | Overly complex assert API implementation | Added project dependency |
| | Excellent test discovery | Excess boilerplate code | |
| | Easy mock/stubs through its monkeypatches | Not easy to read | |
| | Feature rich, but balanced for all users | Color output is not supported | |
| | Color output | | Sometimes struggles to integrate with IDEs |
| | Mark tests and change when/if they run | | |
| | Lower entry barrier for TDD and writing tests in general | | |
Here is the example code that we want to test: a simple class that takes in a list of numbers and performs operations on that list.
# listoperation.py
from functools import reduce


class NumericalListOperation:
    def __init__(self, list_of_numbers):
        self.list_of_numbers = list_of_numbers

    def sum(self):
        """ Return the instance list's sum """
        return reduce(lambda x, y: x + y, self.list_of_numbers)

    def mean(self):
        """ Return the instance list's mean """
        return self.sum() / len(self.list_of_numbers)
Writing a few tests using pytest.
# test_listoperation.py
import pytest

# local import
from listoperation import NumericalListOperation


# Notice, construction of NumericalListOperation is handled
# in each test func -- in a later example, I'll show how to use fixtures to do this
def test_sum():
    """ Test sum of list is computed properly """
    test_list = [1, 2, 3]
    numerical_list_operation = NumericalListOperation(test_list)
    assert numerical_list_operation.sum() == 6


def test_mean():
    """ Test mean of list is computed properly """
    test_list = [1, 2, 3]
    numerical_list_operation = NumericalListOperation(test_list)
    assert numerical_list_operation.mean() == 2
Running the tests using pytest from the command line
foo@bar:~$ # -v supplies a verbose output. Meaning it prints the PASSED tests
foo@bar:~$ # To output `print` statements in tests use -s (for all streams)
foo@bar:~$ pytest test_listoperation.py -v
collected 2 items
test_listoperation.py::test_sum PASSED [ 50%]
test_listoperation.py::test_mean PASSED [100%]
=================================== 2 passed in 0.01s ===================================
Writing the same tests using unittest.
# test_listoperation.py
import unittest

# local import
from listoperation import NumericalListOperation


class TestNumericalListOperation(unittest.TestCase):
    def test_sum(self):
        """ Test sum of list is computed properly """
        test_list = [1, 2, 3]
        numerical_list_operation = NumericalListOperation(test_list)
        self.assertEqual(numerical_list_operation.sum(), 6)

    def test_mean(self):
        """ Test mean of list is computed properly """
        test_list = [1, 2, 3]
        numerical_list_operation = NumericalListOperation(test_list)
        self.assertEqual(numerical_list_operation.mean(), 2)


if __name__ == "__main__":
    unittest.main()
Running the tests using unittest from the command line
foo@bar:~$ # -v supplies a verbose output. Meaning it prints the tests which ran, both passed and failed
foo@bar:~$ python -m unittest test_listoperation.py -v
test_mean (test_listoperation.TestNumericalListOperation)
Test mean of list is computed properly ... ok
test_sum (test_listoperation.TestNumericalListOperation)
Test sum of list is computed properly ... ok
----------------------------------------------------------------------
Ran 2 tests in 0.000s
OK
Using pytest's fixture decorator to set up and tear down a class instance.
@pytest.fixture
def numerical_list_operation_fixture_simple():
    test_list = [1, 2, 3]
    class_instance = NumericalListOperation(test_list)
    return class_instance  # Give class instance object to test


def test_sum_with_simple(numerical_list_operation_fixture_simple):
    """ Test sum of list is computed properly """
    assert numerical_list_operation_fixture_simple.sum() == 6


# Fixture with setup and tear down
# Everything before `yield` is the setup, everything after is tear down
@pytest.fixture
def numerical_list_operation_fixture():
    test_list = [1, 2, 3]
    class_instance = NumericalListOperation(test_list)
    # Setup fixture
    print("\nSetting up `numerical_list_operation_fixture`")
    yield class_instance  # Give class instance object to test
    # Get back from test and tear down
    print("\nTearing down `numerical_list_operation_fixture`")
    class_instance.list_of_numbers = None
    del class_instance


def test_sum_with_setup_and_tear_down_fixture(numerical_list_operation_fixture):
    """ Test sum of list is computed properly """
    assert numerical_list_operation_fixture.sum() == 6
Run each test individually
foo@bar:~$ pytest test_listoperation.py::test_sum_with_simple -v -s
collected 1 item
test_listoperation.py::test_sum_with_simple PASSED
=================================== 1 passed in 0.03s ===================================
foo@bar:~$ pytest test_listoperation.py::test_sum_with_setup_and_tear_down_fixture -v -s
collected 1 item
test_listoperation.py::test_sum_with_setup_and_tear_down_fixture
Setting up `numerical_list_operation_fixture`
PASSED
Tearing down `numerical_list_operation_fixture`
=================================== 1 passed in 0.03s ===================================
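Fixtures can also be given a broader scope so that a single instance is shared by every test in a module or session instead of being rebuilt per test. Here is a minimal sketch using pytest's standard scope argument (the fixture and test names are just illustrative):

# The fixture is built once per test module rather than once per test function
@pytest.fixture(scope="module")
def shared_numerical_list_operation():
    return NumericalListOperation([1, 2, 3])


def test_sum_with_module_scope(shared_numerical_list_operation):
    """ Test sum of list is computed properly using the shared, module-scoped fixture """
    assert shared_numerical_list_operation.sum() == 6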
Using unittest's setUp and tearDown methods.
class TestNumericalListOperationWithSetup(unittest.TestCase):
    def setUp(self):
        test_list = [1, 2, 3]
        self.numerical_list_operation = NumericalListOperation(test_list)

    def tearDown(self):
        self.numerical_list_operation.list_of_numbers = None
        del self.numerical_list_operation

    def test_sum(self):
        """ Test sum of list is computed properly """
        self.assertEqual(self.numerical_list_operation.sum(), 6)

    def test_mean(self):
        """ Test mean of list is computed properly """
        self.assertEqual(self.numerical_list_operation.mean(), 2)
Running the setup and tear down unittest tests using pytest
foo@bar:~$ pytest test_listoperation.py::TestNumericalListOperationWithSetup -v -s
collected 2 items
test_listoperation.py::TestNumericalListOperationWithSetup::test_mean PASSED
test_listoperation.py::TestNumericalListOperationWithSetup::test_sum PASSED
=================================== 2 passed in 0.03s ===================================
As shown, there is added complexity when using unittest. The more complex API renders tests less readable and requires a lot of boilerplate code. Those who are not used to Python's decorator syntax may be thrown by the pytest example. I would suggest further reading on that topic given the prevalence of decorators in Python 3.5+ code.
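For those unfamiliar, a decorator is simply a function applied to the function defined directly below it; the @ syntax is shorthand. A minimal sketch (the fixture names here are purely illustrative):

# The decorator syntax...
@pytest.fixture
def my_fixture():
    return [1, 2, 3]


# ...is shorthand for applying the decorating function by hand
def my_other_fixture():
    return [1, 2, 3]

my_other_fixture = pytest.fixture(my_other_fixture)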
- Marks: a pytest feature to tag tests with user-defined metadata. This allows the tester to alter test behavior. One example is a slow mark, which can be used to blacklist certain tests by default.
- Monkeypatches: a pytest object that can mock or stub an object in your test easily.
- Parametrize: a default pytest mark for super-charging tests with different inputs.
By using the pytest.mark helper you can easily set metadata on your test functions.
Note, only a few example use cases are shown here. However, the possibilities for pytest.mark implementations are vast. It's highly encouraged to check out the pytest.mark API docs if interested in wider use cases.
To use any of the below, simply decorate the test with @pytest.mark.{ the-default-mark }
- xfail - produce an "expected failure". Should be used when a test's functionality has not been fully implemented.
- skipif - skip a test if a condition is met. Example:

  import sys

  @pytest.mark.skipif(sys.platform == "win32", reason="does not run on windows")
  def test_function():
      # do some test things
      pass

- parametrize - this is a must know; I've devoted a section to it below. Allows the user to pass parameters or a parameter combination to a test (a short sketch follows this list).
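As a taste of parametrize, a single decorated test is run once per parameter set. A minimal sketch reusing the NumericalListOperation class from the earlier examples (the argument names are just illustrative):

# Each tuple becomes its own test case, reported separately by pytest
@pytest.mark.parametrize(
    "numbers, expected_sum",
    [
        ([1, 2, 3], 6),
        ([10, 20], 30),
        ([-1, 1], 0),
    ],
)
def test_sum_parametrized(numbers, expected_sum):
    assert NumericalListOperation(numbers).sum() == expected_sum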
As previously mentioned, marks allow you to add user-defined metadata to a test. Among other things, these marks can be filtered on to run only the tests with a given mark, or all tests that don't have a given mark.
Things to know ahead of time:

- pytest uses 2 main setup files; they both live at the repo's root:
  - pytest.ini - Where you define marks so pytest knows about them
  - conftest.py - Where you can tell pytest how to change its default behavior
So, if you want to use a custom mark, you must at least define it in pytest.ini
Example:
[pytest]
markers =
slow: marks tests as slow (deselect with '-m "not slow"')
For example, one might write an integration test that hits a REST API over the wire, or a test that takes a really long time. It's not bad practice to have these sorts of tests; however, running them all the time would be a poor use of time. Using a user-defined mark can resolve this problem.
Reminder: slow must be added to pytest.ini. This marker is used in this project already, so don't worry about it.
# test_slow.py
import pytest
import requests

# User defined mark
# This mark *IS* used in this repo and runs on the CI every Monday at 10 UTC.
@pytest.mark.slow
def test_rest_endpoint():
    requests.get('http://my-rest-endpoint.com/test')
To run a specific mark or its negation in the cli:
foo@bar:~$ pytest test_slow.py -m "slow"
foo@bar:~$ pytest test_slow.py -m "not slow"
We want to change the behavior of this mark: tests marked slow should run only when a user passes a flag to the pytest CLI. To do this, the feature needs to be added in conftest.py. The parser object in the example below is an argparse-like object, if that helps. Here the flag --run-slow, or -S for short, is added. In pytest_configure the behavior of the mark is changed. In short, the CLI behaves as if pytest test_slow.py -m "not slow" had been run, unless --run-slow | -S is passed.
# conftest.py
import pytest


def pytest_addoption(parser):
    # Add flag to `pytest` to run tests decorated with:
    #   @pytest.mark.slow
    # Mark was defined in pytest.ini
    parser.addoption(
        "-S",
        "--run-slow",
        action="store_true",
        default=False,
        dest="runslow",
        help="run slow tests",
    )


def pytest_configure(config):
    if config.option.runslow is False:
        # If --run-slow not specified, do not run slow marked tests by default
        setattr(config.option, "markexpr", "not slow")
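With that conftest.py in place, the slow-marked tests stay excluded unless the flag is supplied (assuming the pytest.ini and test_slow.py shown above):

foo@bar:~$ # Slow tests are deselected by default now
foo@bar:~$ pytest test_slow.py
foo@bar:~$ # Opt back in to the slow tests using the flag added in conftest.py
foo@bar:~$ pytest test_slow.py --run-slow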
Monkeypatches, pytest's unittest.mock wrapper
Sometimes tests need to invoke functionality which depends on global settings or which invokes code which cannot be easily tested such as network access. The monkeypatch fixture helps you to safely set/delete an attribute, dictionary item or environment variable, or to modify sys.path for importing.
This functionality allows the user to modify the behavior of a function or the property of a class for a test -- for example, a web API call or a database connection. For more information I highly recommend checking out the pytest docs on this topic. Here I will just be showing one usage of monkeypatches. We use requests heavily in this project, so for scope's sake, we will monkeypatch a requests.get call in a simple example.
simple_request.py
import requests


def get_request_to_json(url: str) -> dict:
    req = requests.get(url)
    if req.status_code < 202:
        return req.json()
    else:
        raise requests.ConnectionError(f"Received non 200 or 201 reply.\nStatus code: {req.status_code}")
So the question is, how can we write a fake requests.get response to use in our test?
import pytest

# local imports
from simple_request import get_request_to_json


# monkeypatch object is magically injected by pytest.
def test_get_request_to_json_pass(monkeypatch):
    # import requests for monkeypatch.
    # This does not have to be done in this scope
    import requests

    def wrapper_func(*args, **kwargs):
        # This isn't used in this example, but for general knowledge:
        # args[0] is the url passed by the get_request_to_json call made below -> "http://fake.response"
        return FakeRequests()

    # monkeypatch will look for the `requests.get` call and replace `requests.get` with the passed function
    monkeypatch.setattr(requests, "get", wrapper_func)

    # call function that uses the `requests.get` call
    json_res = get_request_to_json("http://fake.response")
    assert json_res == {"fake": "response"}


class FakeRequests:
    """ Mocked requests object """
    def __init__(self, status_code: int = 200):
        # Set default status code to 200
        self.status_code = status_code

    @staticmethod
    def json():
        # requests.json call back message after monkeypatch
        return {"fake": "response"}
Understandably, this example may be a bit hard to follow. What is happening, in a crude sense, is that the monkeypatch intercepts the requests.get call made in get_request_to_json. By the time that call runs, it behaves something like this:

# inside `get_request_to_json`, after the monkeypatch is applied
requests.get = wrapper_func
# req now has the attributes of the `FakeRequests` object returned by `wrapper_func`
req = requests.get(url)
So, the requests object's get attribute is set to our wrapper, which returns an instance of our FakeRequests class. Fancy things are done in the background so that the real requests.get is not lost: monkeypatch undoes the patch once the test finishes, so only this test sees the overridden implementation.
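To round out the example, the failure path can be covered the same way. Below is a sketch that reuses the illustrative FakeRequests helper from above, forces a failing status code, and checks the exception with pytest.raises:

def test_get_request_to_json_fail(monkeypatch):
    import requests

    def wrapper_func(*args, **kwargs):
        # Return a fake response with a failing status code
        return FakeRequests(status_code=500)

    monkeypatch.setattr(requests, "get", wrapper_func)

    # pytest.raises asserts that the expected exception is raised
    with pytest.raises(requests.ConnectionError):
        get_request_to_json("http://fake.response")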