moved sanitizers into their own workflow
merged all jobs into one
cleaned up failing job
cleaned up failing job
running just tests
fixing build
reverting changes
fixing linter error and build error
cleaning up job
added wal and extension builds
fixing build
fixing build
fixing build
added use sanitizer patch
testing if sanitizers work in the main workflow
fixed format issue
fixing format issue
fixing format issue
added flags
disabled flags
enabling flags
enabling flags
added more options to flag
fixing build
fixing build
testing the regression run
added asan and ubsan flags for regression tests
commented out unit test and release build
fixing build
fix neon for sanitizers
enabled unit test
updated branch to test the fix
updated branch to test the fix
updated the commit id
fixing build
restoring the submodules to main
updated git modules and revision of commit
updated postgres 16 vendor dir
removed test
## Problem
For PRs with the `run-benchmarks` label, we don't upload results to the
database, which makes such tests harder to debug. The only way to see any
numbers is by examining the GitHub Actions output, which is quite
inconvenient.
This PR adds zenbenchmark metrics to Allure reports.
## Summary of changes
- Create a JSON file with zenbenchmark results and attach it to the Allure
report
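A minimal sketch of the idea, assuming a hypothetical `metrics` list and file name (these are illustrative, not the actual implementation): serialize the zenbenchmark results to a JSON file, which the test harness can then attach to the Allure report.

```python
import json

# Hypothetical zenbenchmark results; the real harness would collect these
# from the benchmark fixtures.
metrics = [
    {"name": "pageserver_writes", "value": 1024, "unit": "ops"},
    {"name": "compaction_time", "value": 3.5, "unit": "s"},
]

report = {"revision": "abcdef0", "results": metrics}

# Write the results to a JSON file next to the test output.
with open("zenbenchmark_results.json", "w") as f:
    json.dump(report, f, indent=2)

# In the test harness this file would then be attached to the report,
# e.g. via allure-pytest:
#   allure.attach.file("zenbenchmark_results.json",
#                      attachment_type=allure.attachment_type.JSON)
```

Attaching a machine-readable JSON file (rather than only printing numbers to the job log) lets the results be inspected directly from the Allure report.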
In
7f828890cf
we changed the logic for persisting control files. Previously, the control
file was updated when `peer_horizon_lsn` jumped by more than one segment,
which meant `peer_horizon_lsn` was initialized on disk as soon as the
safekeeper received its first `AppendRequest`.
This caused an issue with `truncateLsn`, which could now sometimes be
zero. This PR fixes that: `truncateLsn`/`peer_horizon_lsn` can never be
zero once `timeline_start_lsn` is known.
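The invariant can be sketched as a simple clamp. This is a hypothetical illustration (the function name and types are not the actual safekeeper code, which is Rust): once `timeline_start_lsn` is known, it acts as a floor for `peer_horizon_lsn`, so neither it nor `truncateLsn` can be zero.

```python
def effective_peer_horizon_lsn(peer_horizon_lsn: int,
                               timeline_start_lsn: int) -> int:
    """Return peer_horizon_lsn, floored at timeline_start_lsn.

    Before the fix, an uninitialized peer_horizon_lsn of 0 could leak
    into truncateLsn; with the clamp it can never fall below the
    timeline's start LSN.
    """
    return max(peer_horizon_lsn, timeline_start_lsn)
```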
Closes https://github.com/neondatabase/neon/issues/6248