neon/test_runner/fixtures/reruns.py
Alexander Bayandin e04dd3be0b test_runner: rerun all failed tests (#9917)
## Problem

Currently, we rerun only known flaky tests. This approach was chosen to
reduce the number of tests that go unnoticed (by forcing people to take
a look at failed tests and rerun the job manually), but it has some
drawbacks:
- In PRs, people tend to push new changes without checking the failed tests
(that's ok)
- On the main branch, tests are simply restarted without anyone checking them
(understandable)
- Parametrised tests become flaky one by one, i.e. if `test[1]` is flaky,
`test[2]` is not marked as flaky automatically (which may or may not
be correct).

I suggest rerunning all failed tests to increase the stability of GitHub
jobs, and using the Grafana dashboard of flaky tests for deeper
analysis.

## Summary of changes
- Rerun all failed tests, at most twice each (see the sketch below)
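
As a rough illustration (the CI wiring itself is not part of this change, and the test path here is hypothetical): `--reruns` is the option provided by the pytest-rerunfailures plugin, and the hook below only activates when it is set.

```python
# A minimal sketch, not from this commit: running pytest with reruns
# programmatically. In CI this would normally be the command line
#     pytest --reruns 2 ...
import pytest

# --reruns comes from pytest-rerunfailures; with it set, each failed
# test is retried up to 2 times before being reported as failed.
exit_code = pytest.main(["--reruns", "2", "test_runner/"])
```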
2024-11-28 19:02:57 +00:00


from __future__ import annotations

from typing import TYPE_CHECKING, cast

import pytest

if TYPE_CHECKING:
    from collections.abc import MutableMapping
    from typing import Any

    from _pytest.config import Config


def pytest_collection_modifyitems(config: Config, items: list[pytest.Item]):
    # pytest-rerunfailures is not compatible with pytest-timeout (the timeout is not set for reruns);
    # we can work around it by setting `timeout_func_only` to True[1].
    # Unfortunately, setting `timeout_func_only = True` globally in pytest.ini is broken[2],
    # but we can still do it using the pytest marker.
    #
    # - [1] https://github.com/pytest-dev/pytest-rerunfailures/issues/99
    # - [2] https://github.com/pytest-dev/pytest-timeout/issues/142

    if not config.getoption("--reruns"):
        return

    for item in items:
        timeout_marker = item.get_closest_marker("timeout")
        if timeout_marker is not None:
            kwargs = cast("MutableMapping[str, Any]", timeout_marker.kwargs)
            kwargs["func_only"] = True
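
For context, here is a hypothetical test (not part of this file) showing the effect of the hook: with `--reruns` set, a `timeout` marker is rewritten as if `func_only=True` had been passed explicitly, so the limit applies to the test function itself rather than the whole setup, and keeps working across reruns.

```python
# Hypothetical test illustrating the hook above.
import pytest


@pytest.mark.timeout(120)
def test_example():
    # When pytest runs with --reruns N, pytest_collection_modifyitems
    # mutates this marker's kwargs so it behaves like
    #     @pytest.mark.timeout(120, func_only=True)
    # i.e. the 120s limit covers the test function (not fixture
    # setup/teardown) on every run, including reruns.
    ...
```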