# Zenith test runner
This directory contains integration tests.
## Prerequisites

- Python 3.6 or later
- Dependencies: install them via `pipenv install`. Note that Debian/Ubuntu
  packages are stale, as commonly happens, so manual installation is not
  recommended. Run `pipenv shell` to activate the venv.
- Zenith and Postgres binaries
  - See the root README.md for build directions
  - Tests can be run from the git tree; or see the environment variables
    below to run them from other directories.
- The zenith git repo, including the postgres submodule (needed by some tests, e.g. pg_regress)
## Test Organization
The tests are divided into a few batches, such that each batch takes roughly the same amount of time. The batches can be run in parallel, to minimize total runtime. Currently, there are only two batches:
- `test_batch_pg_regress`: Runs the PostgreSQL regression tests
- `test_others`: All other tests
## Running the tests
Because pytest will search all subdirectories for tests, it's easiest to run the tests from within the `test_runner` directory.

Test state (postgres data, pageserver state, and log files) will be stored under a directory called `test_output`.

You can run all the tests with:

`pytest`

If you want to run all the tests in a particular file:

`pytest test_pgbench.py`

If you want to run all tests that have the string "bench" in their names:

`pytest -k bench`
Useful environment variables:

- `ZENITH_BIN`: The directory where the zenith binaries can be found.
- `POSTGRES_DISTRIB_DIR`: The directory where the postgres distribution can be found.
- `TEST_OUTPUT`: The directory where test state and test output files should go.
- `TEST_SHARED_FIXTURES`: Try to re-use a single pageserver for all the tests.
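To give a sense of how these variables are consumed, a fixture might resolve them roughly like this. This is only a sketch; the default paths are assumptions, not the values used by the real fixtures:

```python
import os
from pathlib import Path


# Hypothetical helpers illustrating how the environment variables above
# could be read; the fallback paths here are made up for illustration.
def zenith_bin_dir() -> Path:
    return Path(os.environ.get("ZENITH_BIN", "../target/debug"))


def test_output_dir() -> Path:
    path = Path(os.environ.get("TEST_OUTPUT", "test_output"))
    path.mkdir(parents=True, exist_ok=True)  # make sure the directory exists
    return path
```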
Let stdout and stderr go to the terminal instead of capturing them:

`pytest -s ...`

(Note that many tests capture subprocess outputs separately, so this may not show much.)

Exit after the first test failure:

`pytest -x ...`

(There are many more pytest options; run `pytest -h` to see them.)
## Building new tests
The tests make heavy use of pytest fixtures. You can read about how they work here: https://docs.pytest.org/en/stable/fixture.html
Essentially, this means that whenever a test function declares a parameter whose name matches a fixture, pytest runs that fixture and passes its result to the test as that parameter.
So this code:

```python
def test_something(zenith_cli, pg_bin):
    pass
```

... will run the fixtures called `zenith_cli` and `pg_bin` and deliver those results to the test function.
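For orientation, a fixture definition is just a decorated function. A minimal sketch (the name and return value below are illustrative; the real `pg_bin` fixture in `fixtures/zenith_fixtures.py` returns a richer helper object):

```python
import pytest


@pytest.fixture
def pg_bin():
    # Whatever this returns is what the test receives as its
    # `pg_bin` parameter.
    return "/usr/local/pgsql/bin"
```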
Fixtures can't be imported using the normal python syntax. Instead, use this:

```python
pytest_plugins = ("fixtures.something")
```

That will make all the fixtures in the `fixtures/something.py` file available.
Anything that's likely to be used in multiple tests should be built into a fixture.
Note that fixtures can clean up after themselves if they use the `yield` syntax.
Cleanup will happen even if the test fails (raises an unhandled exception).
Python destructors, e.g. `__del__()`, aren't recommended for cleanup.
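As a sketch of that pattern (the fixture name and the temporary-directory contents are made up for illustration):

```python
import shutil
import tempfile

import pytest


@pytest.fixture
def scratch_dir():
    # Setup: runs before the test body.
    path = tempfile.mkdtemp()

    # The test receives `path`; control returns here when the test
    # finishes, whether it passed or raised.
    yield path

    # Teardown: always runs, which is why this is preferred over __del__().
    shutil.rmtree(path, ignore_errors=True)
```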
## Code quality
Before submitting a patch, please consider:
- Writing a couple of docstrings to clarify the reasoning behind a new test.
- Running `flake8` (or a linter of your choice, e.g. `pycodestyle`) and fixing possible defects, if any.
- Formatting the code with `yapf -r -i .` (TODO: implement an opt-in pre-commit hook for that).
- (Optional) Typechecking the code with `mypy .`. Currently this mostly affects `fixtures/zenith_fixtures.py`.

The tools can be installed with `pipenv install --dev`.

It's not realistic to enable full-blown type checks within the `test_runner` codebase, since the number of warnings revealed by mypy is overwhelming. Tests are supposed to be easy to use, so we can't cripple everybody's workflow for the sake of an imaginary benefit. Ultimately, the purpose of typechecking here is three-fold:

- Facilitate code navigation when paired with python-language-server.
- Make method signatures apparent to a fellow programmer.
- Occasionally catch some obvious type errors.
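As an illustration of that payoff, even a lightly annotated signature documents itself and lets mypy flag obvious misuse. The helper below is hypothetical, not from the codebase:

```python
from pathlib import Path


def run_psql_file(pg_bin_dir: Path, sql_file: Path, timeout: int = 60) -> str:
    """Hypothetical helper: run a SQL file and return its stdout."""
    raise NotImplementedError


# mypy would reject this call: argument 2 is an int, but a Path is expected.
# run_psql_file(Path("/usr/local/pgsql/bin"), 30)
```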