Prepare pg 15 support (build system and submodules) (#2337)

* Add submodule postgres-15

* Support pg_15 in pgxn/neon

* Rename zenith -> neon in Makefile

* Fix name of codestyle check

* Refactor build system to prepare for building multiple Postgres versions.

Rename "vendor/postgres" to "vendor/postgres-v14"

Change Postgres build and install directory paths to be version-specific:

- tmp_install/build -> pg_install/build/v14
- tmp_install/* -> pg_install/v14/*

And Makefile targets:

- "make postgres" -> "make postgres-v14"
- "make postgres-headers" -> "make postgres-v14-headers"
- etc.

Add Makefile aliases:

- "make postgres" to build "postgres-v14" and in future, "postgres-v15"
- similarly for "make postgres-headers"

Fix POSTGRES_DISTRIB_DIR path in pytest scripts

* Make postgres version a variable in codestyle workflow

* Support vendor/postgres-v15 in codestyle check workflow

* Support postgres-v15 building in Makefile

* Fix pg version in Dockerfile.compute-node

* Fix kaniko path

* Build neon extensions in version-specific directories

* Fix obsolete mentions of vendor/postgres

* Use vendor/postgres-v14 in Dockerfile.compute-node.legacy

* Use PG_VERSION_NUM to gate version-specific dependencies in inmem_smgr.c (a sketch of the gating pattern follows this list)

* Use versioned ECR repositories and image names for compute-node.
The image name format is compute-node-vXX, where XX is the Postgres major version number.
For now, only v14 is supported.
The old unversioned name (compute-node) is kept because the cloud repo still depends on it.

* Update vendor/postgres submodule URL (zenith -> neondatabase rename)

* Fix postgres path in python tests after rebase

* Fix path in regress test

* Use separate Dockerfiles to build compute-node:
Dockerfile.compute-node-v15 should be identical to Dockerfile.compute-node-v14 except for the version number.
This is a hack because Kaniko doesn't support build ARGs properly.

* Bump vendor/postgres-v14 and vendor/postgres-v15

* Don't use Kaniko cache for v14 and v15 compute-node images

* Build compute-node images for different versions in different jobs
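
A minimal sketch of the PG_VERSION_NUM gating pattern used throughout the pgxn/neon hunks below, assuming only the APIs visible in this diff (the helper name get_flush_lsn is hypothetical): PostgreSQL 15 moved recovery-related declarations into access/xlogrecovery.h and added a TimeLineID out-parameter to GetFlushRecPtr(), so the extension picks the right call at compile time.

#include "postgres.h"
#include "access/xlog.h"	/* GetFlushRecPtr() */
#if PG_VERSION_NUM >= 150000
#include "access/xlogrecovery.h"	/* recovery declarations moved here in v15 */
#endif

/* hypothetical helper: return the last flushed WAL position on any version */
static XLogRecPtr
get_flush_lsn(void)
{
#if PG_VERSION_NUM >= 150000
	/* v15+: pass NULL when the caller doesn't need the insert timeline */
	return GetFlushRecPtr(NULL);
#else
	return GetFlushRecPtr();
#endif
}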

Co-authored-by: Heikki Linnakangas <heikki@neon.tech>

Author: Anastasia Lubennikova
Date: 2022-09-05 18:30:54 +03:00 (committed by GitHub)
Parent: ee0071e90d
Commit: 05e263d0d3
30 changed files with 593 additions and 133 deletions

@@ -13,6 +13,7 @@
 !pgxn/
 !proxy/
 !safekeeper/
-!vendor/postgres/
+!vendor/postgres-v14/
+!vendor/postgres-v15/
 !workspace_hack/
 !neon_local/

@@ -88,7 +88,7 @@ runs:
       # PLATFORM will be embedded in the perf test report
       # and it is needed to distinguish different environments
       export PLATFORM=${PLATFORM:-github-actions-selfhosted}
-      export POSTGRES_DISTRIB_DIR=${POSTGRES_DISTRIB_DIR:-/tmp/neon/pg_install}
+      export POSTGRES_DISTRIB_DIR=${POSTGRES_DISTRIB_DIR:-/tmp/neon/pg_install/v14}
       if [ "${BUILD_TYPE}" = "remote" ]; then
         export REMOTE_ENV=1

@@ -78,8 +78,8 @@ jobs:
           fetch-depth: 1
       - name: Set pg revision for caching
-        id: pg_ver
-        run: echo ::set-output name=pg_rev::$(git rev-parse HEAD:vendor/postgres)
+        id: pg_v14_rev
+        run: echo ::set-output name=pg_rev::$(git rev-parse HEAD:vendor/postgres-v14)
         shell: bash -euxo pipefail {0}

       # Set some environment variables used by all the steps.
@@ -124,12 +124,12 @@ jobs:
             v7-${{ runner.os }}-${{ matrix.build_type }}-cargo-${{ matrix.rust_toolchain }}-${{ hashFiles('Cargo.lock') }}
            v7-${{ runner.os }}-${{ matrix.build_type }}-cargo-${{ matrix.rust_toolchain }}-
-      - name: Cache postgres build
+      - name: Cache postgres v14 build
         id: cache_pg
         uses: actions/cache@v3
         with:
-          path: tmp_install/
-          key: v1-${{ runner.os }}-${{ matrix.build_type }}-pg-${{ steps.pg_ver.outputs.pg_rev }}-${{ hashFiles('Makefile') }}
+          path: pg_install/v14
+          key: v1-${{ runner.os }}-${{ matrix.build_type }}-pg-${{ steps.pg_v14_rev.outputs.pg_rev }}-${{ hashFiles('Makefile') }}

       - name: Build postgres
         if: steps.cache_pg.outputs.cache-hit != 'true'
@@ -192,7 +192,7 @@ jobs:
         shell: bash -euxo pipefail {0}

       - name: Install postgres binaries
-        run: cp -a tmp_install /tmp/neon/pg_install
+        run: cp -a pg_install /tmp/neon/pg_install
         shell: bash -euxo pipefail {0}

       - name: Upload Neon artifact
@@ -447,7 +447,6 @@ jobs:
   compute-node-image:
     runs-on: dev
     container: gcr.io/kaniko-project/executor:v1.9.0-debug
     steps:
       - name: Checkout
         uses: actions/checkout@v1 # v3 won't work with kaniko
@@ -458,18 +457,57 @@ jobs:
       - name: Configure ECR login
         run: echo "{\"credsStore\":\"ecr-login\"}" > /kaniko/.docker/config.json
-      - name: Kaniko build compute node with extensions
-        run: /kaniko/executor --skip-unused-stages --snapshotMode=redo --cache=true --cache-repo 369495373322.dkr.ecr.eu-central-1.amazonaws.com/cache --snapshotMode=redo --context . --dockerfile Dockerfile.compute-node --destination 369495373322.dkr.ecr.eu-central-1.amazonaws.com/compute-node:$GITHUB_RUN_ID
+      # compute-node uses postgres 14, which is default now
+      # cloud repo depends on this image name, thus duplicating it
+      # remove compute-node when cloud repo is updated
+      - name: Kaniko build compute node with extensions v14 (compatibility)
+        run: /kaniko/executor --skip-unused-stages --snapshotMode=redo --cache=true --cache-repo 369495373322.dkr.ecr.eu-central-1.amazonaws.com/cache --snapshotMode=redo --context . --dockerfile Dockerfile.compute-node-v14 --destination 369495373322.dkr.ecr.eu-central-1.amazonaws.com/compute-node:$GITHUB_RUN_ID
+
+  compute-node-image-v14:
+    runs-on: dev
+    container: gcr.io/kaniko-project/executor:v1.9.0-debug
+    steps:
+      - name: Checkout
+        uses: actions/checkout@v1 # v3 won't work with kaniko
+        with:
+          submodules: true
+          fetch-depth: 0
+      - name: Configure ECR login
+        run: echo "{\"credsStore\":\"ecr-login\"}" > /kaniko/.docker/config.json
+      - name: Kaniko build compute node with extensions v14
+        run: /kaniko/executor --skip-unused-stages --snapshotMode=redo --cache=true --cache-repo 369495373322.dkr.ecr.eu-central-1.amazonaws.com/cache --context . --dockerfile Dockerfile.compute-node-v14 --destination 369495373322.dkr.ecr.eu-central-1.amazonaws.com/compute-node-v14:$GITHUB_RUN_ID
+
+  compute-node-image-v15:
+    runs-on: dev
+    container: gcr.io/kaniko-project/executor:v1.9.0-debug
+    steps:
+      - name: Checkout
+        uses: actions/checkout@v1 # v3 won't work with kaniko
+        with:
+          submodules: true
+          fetch-depth: 0
+      - name: Configure ECR login
+        run: echo "{\"credsStore\":\"ecr-login\"}" > /kaniko/.docker/config.json
+      - name: Kaniko build compute node with extensions v15
+        run: /kaniko/executor --skip-unused-stages --snapshotMode=redo --cache=true --cache-repo 369495373322.dkr.ecr.eu-central-1.amazonaws.com/cache --context . --dockerfile Dockerfile.compute-node-v15 --destination 369495373322.dkr.ecr.eu-central-1.amazonaws.com/compute-node-v15:$GITHUB_RUN_ID

   promote-images:
     runs-on: dev
-    needs: [ neon-image, compute-node-image, compute-tools-image ]
+    needs: [ neon-image, compute-node-image, compute-node-image-v14, compute-tools-image ]
     if: github.event_name != 'workflow_dispatch'
     container: amazon/aws-cli
     strategy:
       fail-fast: false
       matrix:
-        name: [ neon, compute-node, compute-tools ]
+        # compute-node uses postgres 14, which is default now
+        # cloud repo depends on this image name, thus duplicating it
+        # remove compute-node when cloud repo is updated
+        name: [ neon, compute-node, compute-node-v14, compute-tools ]
     steps:
       - name: Promote image to latest
@@ -501,6 +539,9 @@ jobs:
       - name: Pull compute node image from ECR
         run: crane pull 369495373322.dkr.ecr.eu-central-1.amazonaws.com/compute-node:latest compute-node
+      - name: Pull compute node v14 image from ECR
+        run: crane pull 369495373322.dkr.ecr.eu-central-1.amazonaws.com/compute-node-v14:latest compute-node-v14
       - name: Pull rust image from ECR
         run: crane pull 369495373322.dkr.ecr.eu-central-1.amazonaws.com/rust:pinned rust
@@ -519,6 +560,9 @@ jobs:
       - name: Push compute node image to Docker Hub
         run: crane push compute-node neondatabase/compute-node:${{needs.tag.outputs.build-tag}}
+      - name: Push compute node v14 image to Docker Hub
+        run: crane push compute-node-v14 neondatabase/compute-node-v14:${{needs.tag.outputs.build-tag}}
       - name: Push rust image to Docker Hub
         run: crane push rust neondatabase/rust:pinned
@@ -530,6 +574,7 @@ jobs:
           crane tag neondatabase/neon:${{needs.tag.outputs.build-tag}} latest
           crane tag neondatabase/compute-tools:${{needs.tag.outputs.build-tag}} latest
           crane tag neondatabase/compute-node:${{needs.tag.outputs.build-tag}} latest
+          crane tag neondatabase/compute-node-v14:${{needs.tag.outputs.build-tag}} latest

   calculate-deploy-targets:
     runs-on: [ self-hosted, Linux, k8s-runner ]

@@ -27,8 +27,10 @@ jobs:
       # Rust toolchains (e.g. nightly or 1.37.0), add them here.
       rust_toolchain: [1.58]
       os: [ubuntu-latest, macos-latest]
+      # To support several Postgres versions, add them here.
+      postgres_version: [v14, v15]
     timeout-minutes: 60
-    name: run regression test suite
+    name: check codestyle rust and postgres
     runs-on: ${{ matrix.os }}
     steps:
@@ -61,14 +63,14 @@ jobs:
     - name: Set pg revision for caching
       id: pg_ver
-      run: echo ::set-output name=pg_rev::$(git rev-parse HEAD:vendor/postgres)
+      run: echo ::set-output name=pg_rev::$(git rev-parse HEAD:vendor/postgres-${{matrix.postgres_version}})

-    - name: Cache postgres build
+    - name: Cache postgres ${{matrix.postgres_version}} build
       id: cache_pg
       uses: actions/cache@v3
       with:
         path: |
-          tmp_install/
+          pg_install/${{matrix.postgres_version}}
         key: ${{ runner.os }}-pg-${{ steps.pg_ver.outputs.pg_rev }}

     - name: Set extra env for macOS
@@ -90,10 +92,10 @@ jobs:
       if: failure()
       continue-on-error: true
       run: |
-        echo '' && echo '=== config.log ===' && echo ''
-        cat tmp_install/build/config.log
-        echo '' && echo '=== configure.log ===' && echo ''
-        cat tmp_install/build/configure.log
+        echo '' && echo '=== Postgres ${{matrix.postgres_version}} config.log ===' && echo ''
+        cat pg_install/build/${{matrix.postgres_version}}/config.log
+        echo '' && echo '=== Postgres ${{matrix.postgres_version}} configure.log ===' && echo ''
+        cat pg_install/build/${{matrix.postgres_version}}/configure.log

     - name: Cache cargo deps
       id: cache_cargo

@@ -52,7 +52,7 @@ jobs:
           REMOTE_ENV: 1
           BENCHMARK_CONNSTR: "${{ secrets.BENCHMARK_STAGING_CONNSTR }}"
-          POSTGRES_DISTRIB_DIR: /tmp/neon/pg_install
+          POSTGRES_DISTRIB_DIR: /tmp/neon/pg_install/v14
         shell: bash -euxo pipefail {0}
         run: |
           # Test framework expects we have psql binary;

.gitignore

@@ -1,6 +1,6 @@
+/pg_install
 /target
 /tmp_check
-/tmp_install
 /tmp_check_cli
 __pycache__/
 test_output/

.gitmodules

@@ -1,4 +1,8 @@
-[submodule "vendor/postgres"]
-	path = vendor/postgres
-	url = https://github.com/zenithdb/postgres
+[submodule "vendor/postgres-v14"]
+	path = vendor/postgres-v14
+	url = https://github.com/neondatabase/postgres.git
 	branch = main
+[submodule "vendor/postgres-v15"]
+	path = vendor/postgres-v15
+	url = https://github.com/neondatabase/postgres.git
+	branch = REL_15_STABLE_neon

@@ -5,20 +5,24 @@
 ARG REPOSITORY=369495373322.dkr.ecr.eu-central-1.amazonaws.com
 ARG IMAGE=rust
 ARG TAG=pinned
+# ARGs don't get replaced in RUN commands in Kaniko
+# so use hardcoded value below
+# ARG PG_VERSION=v14

 # Build Postgres
 FROM $REPOSITORY/$IMAGE:$TAG AS pg-build
 WORKDIR /home/nonroot
-COPY --chown=nonroot vendor/postgres vendor/postgres
+ARG PG_VERSION=v14
+COPY --chown=nonroot vendor/postgres-v14 vendor/postgres-v14
 COPY --chown=nonroot pgxn pgxn
 COPY --chown=nonroot Makefile Makefile
 ENV BUILD_TYPE release

 RUN set -e \
-    && mold -run make -j $(nproc) -s neon-pg-ext \
-    && rm -rf tmp_install/build \
-    && tar -C tmp_install -czf /home/nonroot/postgres_install.tar.gz .
+    && mold -run make -j $(nproc) -s neon-pg-ext-v14 \
+    && rm -rf pg_install/v14/build \
+    && tar -C pg_install/v14 -czf /home/nonroot/postgres_install.tar.gz .

 # Build zenith binaries
 FROM $REPOSITORY/$IMAGE:$TAG AS build
@@ -35,7 +39,8 @@ ARG CACHEPOT_BUCKET=neon-github-dev
 #ARG AWS_ACCESS_KEY_ID
 #ARG AWS_SECRET_ACCESS_KEY

-COPY --from=pg-build /home/nonroot/tmp_install/include/postgresql/server tmp_install/include/postgresql/server
+ARG PG_VERSION=v14
+COPY --from=pg-build /home/nonroot/pg_install/v14/include/postgresql/server pg_install/v14/include/postgresql/server
 COPY . .

 # Show build caching stats to check if it was used in the end.
@@ -64,7 +69,9 @@ COPY --from=build --chown=zenith:zenith /home/nonroot/target/release/pageserver
 COPY --from=build --chown=zenith:zenith /home/nonroot/target/release/safekeeper /usr/local/bin
 COPY --from=build --chown=zenith:zenith /home/nonroot/target/release/proxy /usr/local/bin
-COPY --from=pg-build /home/nonroot/tmp_install/ /usr/local/
+# v14 is default for now
+ARG PG_VERSION=v14
+COPY --from=pg-build /home/nonroot/pg_install/v14 /usr/local/
 COPY --from=pg-build /home/nonroot/postgres_install.tar.gz /data/

 # By default, pageserver uses `.neon/` working directory in WORKDIR, so create one and fill it with the dummy config.

@@ -2,6 +2,7 @@ ARG TAG=pinned
 # apparently, ARGs don't get replaced in RUN commands in kaniko
 # ARG POSTGIS_VERSION=3.3.0
 # ARG PLV8_VERSION=3.1.4
+# ARG PG_VERSION=v14

 #
 # Layer "build-deps"
@@ -16,7 +17,7 @@ RUN apt update && \
 # Build Postgres from the neon postgres repository.
 #
 FROM build-deps AS pg-build
-COPY vendor/postgres postgres
+COPY vendor/postgres-v14 postgres
 RUN cd postgres && \
     ./configure CFLAGS='-O2 -g3' --enable-debug --with-uuid=ossp && \
     make MAKELEVEL=0 -j $(getconf _NPROCESSORS_ONLN) -s install && \
@@ -28,8 +29,8 @@ RUN cd postgres && \
 #
 # Layer "postgis-build"
 # Build PostGIS from the upstream PostGIS mirror.
 #
 # PostGIS compiles against neon postgres sources without changes. Perhaps we
 # could even use the upstream binaries, compiled against vanilla Postgres, but
 # it would require some investigation to check that it works, and also keeps
 # working in the future. So for now, we compile our own binaries.

Dockerfile.compute-node-v15 (new file)

@@ -0,0 +1,172 @@
#
# This file is identical to the Dockerfile.compute-node-v14 file
# except for the version of Postgres that is built.
#
ARG TAG=pinned
# apparently, ARGs don't get replaced in RUN commands in kaniko
# ARG POSTGIS_VERSION=3.3.0
# ARG PLV8_VERSION=3.1.4
# ARG PG_VERSION=v15
#
# Layer "build-deps"
#
FROM debian:bullseye-slim AS build-deps
RUN apt update && \
apt install -y git autoconf automake libtool build-essential bison flex libreadline-dev zlib1g-dev libxml2-dev \
libcurl4-openssl-dev libossp-uuid-dev
#
# Layer "pg-build"
# Build Postgres from the neon postgres repository.
#
FROM build-deps AS pg-build
COPY vendor/postgres-v15 postgres
RUN cd postgres && \
./configure CFLAGS='-O2 -g3' --enable-debug --with-uuid=ossp && \
make MAKELEVEL=0 -j $(getconf _NPROCESSORS_ONLN) -s install && \
make MAKELEVEL=0 -j $(getconf _NPROCESSORS_ONLN) -s -C contrib/ install && \
# Install headers
make MAKELEVEL=0 -j $(getconf _NPROCESSORS_ONLN) -s -C src/include install && \
make MAKELEVEL=0 -j $(getconf _NPROCESSORS_ONLN) -s -C src/interfaces/libpq install
#
# Layer "postgis-build"
# Build PostGIS from the upstream PostGIS mirror.
#
# PostGIS compiles against neon postgres sources without changes. Perhaps we
# could even use the upstream binaries, compiled against vanilla Postgres, but
# it would require some investigation to check that it works, and also keeps
# working in the future. So for now, we compile our own binaries.
FROM build-deps AS postgis-build
COPY --from=pg-build /usr/local/pgsql/ /usr/local/pgsql/
RUN apt update && \
apt install -y gdal-bin libgdal-dev libprotobuf-c-dev protobuf-c-compiler xsltproc wget
RUN wget https://download.osgeo.org/postgis/source/postgis-3.3.0.tar.gz && \
tar xvzf postgis-3.3.0.tar.gz && \
cd postgis-3.3.0 && \
./autogen.sh && \
export PATH="/usr/local/pgsql/bin:$PATH" && \
./configure && \
make -j $(getconf _NPROCESSORS_ONLN) install && \
cd extensions/postgis && \
make clean && \
make -j $(getconf _NPROCESSORS_ONLN) install && \
echo 'trusted = true' >> /usr/local/pgsql/share/extension/postgis.control && \
echo 'trusted = true' >> /usr/local/pgsql/share/extension/postgis_raster.control && \
echo 'trusted = true' >> /usr/local/pgsql/share/extension/postgis_tiger_geocoder.control && \
echo 'trusted = true' >> /usr/local/pgsql/share/extension/postgis_topology.control
#
# Layer "plv8-build"
# Build plv8
#
FROM build-deps AS plv8-build
COPY --from=postgis-build /usr/local/pgsql/ /usr/local/pgsql/
RUN apt update && \
apt install -y git curl wget make ninja-build build-essential libncurses5 python3-dev pkg-config libc++-dev libc++abi-dev libglib2.0-dev
# https://github.com/plv8/plv8/issues/475
# Debian bullseye provides binutils 2.35 when >= 2.38 is necessary
RUN echo "deb http://ftp.debian.org/debian testing main" >> /etc/apt/sources.list && \
echo "APT::Default-Release \"stable\";" > /etc/apt/apt.conf.d/default-release && \
apt update && \
apt install -y --no-install-recommends -t testing binutils
RUN wget https://github.com/plv8/plv8/archive/refs/tags/v3.1.4.tar.gz && \
tar xvzf v3.1.4.tar.gz && \
cd plv8-3.1.4 && \
export PATH="/usr/local/pgsql/bin:$PATH" && \
make -j $(getconf _NPROCESSORS_ONLN) && \
make -j $(getconf _NPROCESSORS_ONLN) install && \
rm -rf /plv8-* && \
echo 'trusted = true' >> /usr/local/pgsql/share/extension/plv8.control
#
# Layer "neon-pg-ext-build"
# compile neon extensions
#
FROM build-deps AS neon-pg-ext-build
COPY --from=postgis-build /usr/local/pgsql/ /usr/local/pgsql/
COPY pgxn/ pgxn/
RUN make -j $(getconf _NPROCESSORS_ONLN) \
PG_CONFIG=/usr/local/pgsql/bin/pg_config \
-C pgxn/neon \
-s install
# Compile and run the Neon-specific `compute_ctl` binary
FROM 369495373322.dkr.ecr.eu-central-1.amazonaws.com/rust:$TAG AS compute-tools
USER nonroot
# Copy entire project to get Cargo.* files with proper dependencies for the whole project
COPY --chown=nonroot . .
RUN cd compute_tools && cargo build --locked --profile release-line-debug-size-lto
#
# Clean up postgres folder before inclusion
#
FROM neon-pg-ext-build AS postgres-cleanup-layer
COPY --from=neon-pg-ext-build /usr/local/pgsql /usr/local/pgsql
# Remove binaries from /bin/ that we won't use (or would manually copy & install otherwise)
RUN cd /usr/local/pgsql/bin && rm ecpg raster2pgsql shp2pgsql pgtopo_export pgtopo_import pgsql2shp
# Remove headers that we won't need anymore - we've completed installation of all extensions
RUN rm -r /usr/local/pgsql/include
# Remove now-useless PGXS src infrastructure
RUN rm -r /usr/local/pgsql/lib/pgxs/src
# Remove static postgresql libraries - all compilation is finished, so we
# can now remove these files - they must be included in other binaries by now
# if they were to be used by other libraries.
RUN rm /usr/local/pgsql/lib/lib*.a
#
# Final layer
# Put it all together into the final image
#
FROM debian:bullseye-slim
# Add user postgres
RUN mkdir /var/db && useradd -m -d /var/db/postgres postgres && \
echo "postgres:test_console_pass" | chpasswd && \
mkdir /var/db/postgres/compute && mkdir /var/db/postgres/specs && \
chown -R postgres:postgres /var/db/postgres && \
chmod 0750 /var/db/postgres/compute && \
echo '/usr/local/lib' >> /etc/ld.so.conf && /sbin/ldconfig
# TODO: Check if we can make the extension setup more modular versus a linear build
# currently plv8-build copies the output /usr/local/pgsql from postgis-build, etc#
COPY --from=postgres-cleanup-layer --chown=postgres /usr/local/pgsql /usr/local
COPY --from=compute-tools --chown=postgres /home/nonroot/target/release-line-debug-size-lto/compute_ctl /usr/local/bin/compute_ctl
# Install:
# libreadline8 for psql
# libossp-uuid16 for extension ossp-uuid
# libgeos, libgdal, libproj and libprotobuf-c1 for PostGIS
# GLIBC 2.34 for plv8.
# Debian bullseye provides GLIBC 2.31, so we install the library from testing
#
# Lastly, link compute_ctl into zenith_ctl while we're at it,
# so that we don't need to put this in another layer.
RUN apt update && \
apt install --no-install-recommends -y \
libreadline8 \
libossp-uuid16 \
libgeos-c1v5 \
libgdal28 \
libproj19 \
libprotobuf-c1 && \
rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/* && \
echo "Installing GLIBC 2.34" && \
echo "deb http://ftp.debian.org/debian testing main" >> /etc/apt/sources.list && \
echo "APT::Default-Release \"stable\";" > /etc/apt/apt.conf.d/default-release && \
apt update && \
apt install -y --no-install-recommends -t testing libc6 && \
rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/* && \
ln /usr/local/bin/compute_ctl /usr/local/bin/zenith_ctl
USER postgres
ENTRYPOINT ["/usr/local/bin/compute_ctl"]

@@ -37,7 +37,8 @@ RUN adduser postgres
 RUN mkdir /pg && chown postgres:postgres /pg

 # Copy source files
-COPY ./vendor/postgres /pg/
+# version 14 is default for now
+COPY ./vendor/postgres-v14 /pg/
 COPY ./pgxn /pg/

 # Build and install Postgres locally

Makefile

@@ -1,15 +1,7 @@
 ROOT_PROJECT_DIR := $(dir $(abspath $(lastword $(MAKEFILE_LIST))))

-# Where to install Postgres, default is ./tmp_install, maybe useful for package managers
-POSTGRES_INSTALL_DIR ?= $(ROOT_PROJECT_DIR)/tmp_install
-
-# Seccomp BPF is only available for Linux
-UNAME_S := $(shell uname -s)
-ifeq ($(UNAME_S),Linux)
-	SECCOMP = --with-libseccomp
-else
-	SECCOMP =
-endif
+# Where to install Postgres, default is ./pg_install, maybe useful for package managers
+POSTGRES_INSTALL_DIR ?= $(ROOT_PROJECT_DIR)/pg_install/

 #
 # We differentiate between release / debug build types using the BUILD_TYPE
@@ -28,6 +20,13 @@ else
 	$(error Bad build type '$(BUILD_TYPE)', see Makefile for options)
 endif

+# Seccomp BPF is only available for Linux
+UNAME_S := $(shell uname -s)
+ifeq ($(UNAME_S),Linux)
+	PG_CONFIGURE_OPTS += --with-libseccomp
+endif
+
 # macOS with brew-installed openssl requires explicit paths
 # It can be configured with OPENSSL_PREFIX variable
 UNAME_S := $(shell uname -s)
@@ -48,75 +47,136 @@ CARGO_CMD_PREFIX += $(if $(filter n,$(MAKEFLAGS)),,+)
 CARGO_CMD_PREFIX += CARGO_TERM_PROGRESS_WHEN=never CI=1

 #
-# Top level Makefile to build Zenith and PostgreSQL
+# Top level Makefile to build Neon and PostgreSQL
 #
 .PHONY: all
-all: zenith postgres neon-pg-ext
+all: neon postgres neon-pg-ext

-### Zenith Rust bits
+### Neon Rust bits
 #
 # The 'postgres_ffi' depends on the Postgres headers.
-.PHONY: zenith
-zenith: postgres-headers
-	+@echo "Compiling Zenith"
+.PHONY: neon
+neon: postgres-v14-headers
+	+@echo "Compiling Neon"
 	$(CARGO_CMD_PREFIX) cargo build $(CARGO_BUILD_FLAGS)

 ### PostgreSQL parts
-$(POSTGRES_INSTALL_DIR)/build/config.status:
-	+@echo "Configuring postgres build"
-	mkdir -p $(POSTGRES_INSTALL_DIR)/build
-	(cd $(POSTGRES_INSTALL_DIR)/build && \
-	$(ROOT_PROJECT_DIR)/vendor/postgres/configure CFLAGS='$(PG_CFLAGS)' \
-	$(PG_CONFIGURE_OPTS) \
-	$(SECCOMP) \
-	--prefix=$(abspath $(POSTGRES_INSTALL_DIR)) > configure.log)
+# The rules are duplicated for Postgres v14 and 15. We may want to refactor
+# to avoid the duplication in the future, but it's tolerable for now.
+#
+$(POSTGRES_INSTALL_DIR)/build/v14/config.status:
+	+@echo "Configuring Postgres v14 build"
+	mkdir -p $(POSTGRES_INSTALL_DIR)/build/v14
+	(cd $(POSTGRES_INSTALL_DIR)/build/v14 && \
+	$(ROOT_PROJECT_DIR)/vendor/postgres-v14/configure CFLAGS='$(PG_CFLAGS)' \
+	$(PG_CONFIGURE_OPTS) \
+	--prefix=$(abspath $(POSTGRES_INSTALL_DIR))/v14 > configure.log)
+
+$(POSTGRES_INSTALL_DIR)/build/v15/config.status:
+	+@echo "Configuring Postgres v15 build"
+	mkdir -p $(POSTGRES_INSTALL_DIR)/build/v15
+	(cd $(POSTGRES_INSTALL_DIR)/build/v15 && \
+	$(ROOT_PROJECT_DIR)/vendor/postgres-v15/configure CFLAGS='$(PG_CFLAGS)' \
+	$(PG_CONFIGURE_OPTS) \
+	--prefix=$(abspath $(POSTGRES_INSTALL_DIR))/v15 > configure.log)

-# nicer alias for running 'configure'
-.PHONY: postgres-configure
-postgres-configure: $(POSTGRES_INSTALL_DIR)/build/config.status
+# nicer alias to run 'configure'
+.PHONY: postgres-v14-configure
+postgres-v14-configure: $(POSTGRES_INSTALL_DIR)/build/v14/config.status
+
+.PHONY: postgres-v15-configure
+postgres-v15-configure: $(POSTGRES_INSTALL_DIR)/build/v15/config.status

-# Install the PostgreSQL header files into $(POSTGRES_INSTALL_DIR)/include
-.PHONY: postgres-headers
-postgres-headers: postgres-configure
-	+@echo "Installing PostgreSQL headers"
-	$(MAKE) -C $(POSTGRES_INSTALL_DIR)/build/src/include MAKELEVEL=0 install
+# Install the PostgreSQL header files into $(POSTGRES_INSTALL_DIR)/<version>/include
+.PHONY: postgres-v14-headers
+postgres-v14-headers: postgres-v14-configure
+	+@echo "Installing PostgreSQL v14 headers"
+	$(MAKE) -C $(POSTGRES_INSTALL_DIR)/build/v14/src/include MAKELEVEL=0 install
+
+.PHONY: postgres-v15-headers
+postgres-v15-headers: postgres-v15-configure
+	+@echo "Installing PostgreSQL v15 headers"
+	$(MAKE) -C $(POSTGRES_INSTALL_DIR)/build/v15/src/include MAKELEVEL=0 install

-# Compile and install PostgreSQL and contrib/neon
-.PHONY: postgres
-postgres: postgres-configure \
-	postgres-headers # to prevent `make install` conflicts with zenith's `postgres-headers`
-	+@echo "Compiling PostgreSQL"
-	$(MAKE) -C $(POSTGRES_INSTALL_DIR)/build MAKELEVEL=0 install
-	+@echo "Compiling libpq"
-	$(MAKE) -C $(POSTGRES_INSTALL_DIR)/build/src/interfaces/libpq install
-	+@echo "Compiling pg_buffercache"
-	$(MAKE) -C $(POSTGRES_INSTALL_DIR)/build/contrib/pg_buffercache install
-	+@echo "Compiling pageinspect"
-	$(MAKE) -C $(POSTGRES_INSTALL_DIR)/build/contrib/pageinspect install
+# Compile and install PostgreSQL
+.PHONY: postgres-v14
+postgres-v14: postgres-v14-configure \
+	postgres-v14-headers # to prevent `make install` conflicts with neon's `postgres-headers`
+	+@echo "Compiling PostgreSQL v14"
+	$(MAKE) -C $(POSTGRES_INSTALL_DIR)/build/v14 MAKELEVEL=0 install
+	+@echo "Compiling libpq v14"
+	$(MAKE) -C $(POSTGRES_INSTALL_DIR)/build/v14/src/interfaces/libpq install
+	+@echo "Compiling pg_buffercache v14"
+	$(MAKE) -C $(POSTGRES_INSTALL_DIR)/build/v14/contrib/pg_buffercache install
+	+@echo "Compiling pageinspect v14"
+	$(MAKE) -C $(POSTGRES_INSTALL_DIR)/build/v14/contrib/pageinspect install
+
+.PHONY: postgres-v15
+postgres-v15: postgres-v15-configure \
+	postgres-v15-headers # to prevent `make install` conflicts with neon's `postgres-headers`
+	+@echo "Compiling PostgreSQL v15"
+	$(MAKE) -C $(POSTGRES_INSTALL_DIR)/build/v15 MAKELEVEL=0 install
+	+@echo "Compiling libpq v15"
+	$(MAKE) -C $(POSTGRES_INSTALL_DIR)/build/v15/src/interfaces/libpq install
+	+@echo "Compiling pg_buffercache v15"
+	$(MAKE) -C $(POSTGRES_INSTALL_DIR)/build/v15/contrib/pg_buffercache install
+	+@echo "Compiling pageinspect v15"
+	$(MAKE) -C $(POSTGRES_INSTALL_DIR)/build/v15/contrib/pageinspect install
+
+# shorthand to build all Postgres versions
+postgres: postgres-v14 postgres-v15

-.PHONY: postgres-clean
-postgres-clean:
-	$(MAKE) -C $(POSTGRES_INSTALL_DIR)/build MAKELEVEL=0 clean
-	$(MAKE) -C $(POSTGRES_INSTALL_DIR)/build/contrib/pg_buffercache clean
-	$(MAKE) -C $(POSTGRES_INSTALL_DIR)/build/contrib/pageinspect clean
-	$(MAKE) -C $(POSTGRES_INSTALL_DIR)/build/src/interfaces/libpq clean
+.PHONY: postgres-v14-clean
+postgres-v14-clean:
+	$(MAKE) -C $(POSTGRES_INSTALL_DIR)/build/v14 MAKELEVEL=0 clean
+	$(MAKE) -C $(POSTGRES_INSTALL_DIR)/build/v14/contrib/pg_buffercache clean
+	$(MAKE) -C $(POSTGRES_INSTALL_DIR)/build/v14/contrib/pageinspect clean
+	$(MAKE) -C $(POSTGRES_INSTALL_DIR)/build/v14/src/interfaces/libpq clean
+
+.PHONY: postgres-v15-clean
+postgres-v15-clean:
+	$(MAKE) -C $(POSTGRES_INSTALL_DIR)/build/v15 MAKELEVEL=0 clean
+	$(MAKE) -C $(POSTGRES_INSTALL_DIR)/build/v15/contrib/pg_buffercache clean
+	$(MAKE) -C $(POSTGRES_INSTALL_DIR)/build/v15/contrib/pageinspect clean
+	$(MAKE) -C $(POSTGRES_INSTALL_DIR)/build/v15/src/interfaces/libpq clean

-neon-pg-ext: postgres
-	+@echo "Compiling neon"
-	$(MAKE) PG_CONFIG=$(POSTGRES_INSTALL_DIR)/bin/pg_config \
-		-C $(ROOT_PROJECT_DIR)/pgxn/neon install
-	+@echo "Compiling neon_test_utils"
-	$(MAKE) PG_CONFIG=$(POSTGRES_INSTALL_DIR)/bin/pg_config \
-		-C $(ROOT_PROJECT_DIR)/pgxn/neon_test_utils install
+neon-pg-ext-v14: postgres-v14
+	+@echo "Compiling neon v14"
+	mkdir -p $(POSTGRES_INSTALL_DIR)/build/neon-v14
+	(cd $(POSTGRES_INSTALL_DIR)/build/neon-v14 && \
+	$(MAKE) PG_CONFIG=$(POSTGRES_INSTALL_DIR)/v14/bin/pg_config \
+		-f $(ROOT_PROJECT_DIR)/pgxn/neon/Makefile install)
+	+@echo "Compiling neon_test_utils" v14
+	mkdir -p $(POSTGRES_INSTALL_DIR)/build/neon-test-utils-v14
+	(cd $(POSTGRES_INSTALL_DIR)/build/neon-test-utils-v14 && \
+	$(MAKE) PG_CONFIG=$(POSTGRES_INSTALL_DIR)/v14/bin/pg_config \
+		-f $(ROOT_PROJECT_DIR)/pgxn/neon_test_utils/Makefile install)
+
+neon-pg-ext-v15: postgres-v15
+	+@echo "Compiling neon v15"
+	mkdir -p $(POSTGRES_INSTALL_DIR)/build/neon-v15
+	(cd $(POSTGRES_INSTALL_DIR)/build/neon-v15 && \
+	$(MAKE) PG_CONFIG=$(POSTGRES_INSTALL_DIR)/v15/bin/pg_config \
+		-f $(ROOT_PROJECT_DIR)/pgxn/neon/Makefile install)
+	+@echo "Compiling neon_test_utils" v15
+	mkdir -p $(POSTGRES_INSTALL_DIR)/build/neon-test-utils-v15
+	(cd $(POSTGRES_INSTALL_DIR)/build/neon-test-utils-v15 && \
+	$(MAKE) PG_CONFIG=$(POSTGRES_INSTALL_DIR)/v15/bin/pg_config \
+		-f $(ROOT_PROJECT_DIR)/pgxn/neon_test_utils/Makefile install)

 .PHONY: neon-pg-ext-clean
 	$(MAKE) -C $(ROOT_PROJECT_DIR)/pgxn/neon clean
 	$(MAKE) -C $(ROOT_PROJECT_DIR)/pgxn/neon_test_utils clean

+neon-pg-ext: neon-pg-ext-v14 neon-pg-ext-v15
+postgres-headers: postgres-v14-headers postgres-v15-headers
+postgres-clean: postgres-v14-clean postgres-v15-clean

 # This doesn't remove the effects of 'configure'.
 .PHONY: clean
 clean:
-	cd $(POSTGRES_INSTALL_DIR)/build && $(MAKE) clean
+	cd $(POSTGRES_INSTALL_DIR)/build/v14 && $(MAKE) clean
+	cd $(POSTGRES_INSTALL_DIR)/build/v15 && $(MAKE) clean
 	$(CARGO_CMD_PREFIX) cargo clean
 	cd pgxn/neon && $(MAKE) clean
 	cd pgxn/neon_test_utils && $(MAKE) clean

NOTICE

@@ -1,5 +1,5 @@
 Neon
 Copyright 2022 Neon Inc.

-The PostgreSQL submodule in vendor/postgres is licensed under the
-PostgreSQL license. See vendor/postgres/COPYRIGHT.
+The PostgreSQL submodules in vendor/postgres-v14 and vendor/postgres-v15 are licensed under the
+PostgreSQL license. See vendor/postgres-v14/COPYRIGHT and vendor/postgres-v15/COPYRIGHT.

@@ -25,6 +25,7 @@ Pageserver consists of:
 - WAL receiver - service that receives WAL from WAL service and stores it in the repository.
 - Page service - service that communicates with compute nodes and responds with pages from the repository.
 - WAL redo - service that builds pages from base images and WAL records on Page service request

 ## Running local installation
@@ -101,7 +102,7 @@ make -j`sysctl -n hw.logicalcpu`
 ```

 #### Dependency installation notes
-To run the `psql` client, install the `postgresql-client` package or modify `PATH` and `LD_LIBRARY_PATH` to include `tmp_install/bin` and `tmp_install/lib`, respectively.
+To run the `psql` client, install the `postgresql-client` package or modify `PATH` and `LD_LIBRARY_PATH` to include `pg_install/bin` and `pg_install/lib`, respectively.

 To run the integration tests or Python scripts (not required to use the code), install
 Python (3.9 or higher), and install python3 packages using `./scripts/pysync` (requires [poetry](https://python-poetry.org/)) in the project directory.
@@ -208,7 +209,7 @@ Ensure your dependencies are installed as described [here](https://github.com/ne
 ```sh
 git clone --recursive https://github.com/neondatabase/neon.git
-make # builds also postgres and installs it to ./tmp_install
+make # builds also postgres and installs it to ./pg_install
 ./scripts/pytest
 ```

@@ -289,13 +289,13 @@ impl LocalEnv {
         let mut env: LocalEnv = toml::from_str(toml)?;

         // Find postgres binaries.
-        // Follow POSTGRES_DISTRIB_DIR if set, otherwise look in "tmp_install".
+        // Follow POSTGRES_DISTRIB_DIR if set, otherwise look in "pg_install/v14".
         if env.pg_distrib_dir == Path::new("") {
             if let Some(postgres_bin) = env::var_os("POSTGRES_DISTRIB_DIR") {
                 env.pg_distrib_dir = postgres_bin.into();
             } else {
                 let cwd = env::current_dir()?;
-                env.pg_distrib_dir = cwd.join("tmp_install")
+                env.pg_distrib_dir = cwd.join("pg_install/v14")
             }
         }

@@ -157,7 +157,7 @@ for other files and for sockets for incoming connections.
 A directory with Postgres installation to use during pageserver activities.
 Inside that dir, a `bin/postgres` binary should be present.

-The default distrib dir is `./tmp_install/`.
+The default distrib dir is `./pg_install/`.

 #### workdir (-D)

@@ -40,15 +40,15 @@ and create new databases and accounts (control plane API in our case).
 Integration tests, written in Python using the `pytest` framework.

-`/vendor/postgres`:
+`/vendor/postgres-v14`:

 PostgreSQL source tree, with the modifications needed for Neon.

-`/vendor/postgres/contrib/neon`:
+`/pgxn/neon`:

 PostgreSQL extension that implements storage manager API and network communications with remote page server.

-`/vendor/postgres/contrib/neon_test_utils`:
+`/pgxn/neon_test_utils`:

 PostgreSQL extension that contains functions needed for testing and debugging.

@@ -47,14 +47,17 @@ fn main() {
     println!("cargo:rerun-if-changed=bindgen_deps.h");

     // Finding the location of C headers for the Postgres server:
-    // - if POSTGRES_INSTALL_DIR is set look into it, otherwise look into `<project_root>/tmp_install`
-    // - if there's a `bin/pg_config` file use it for getting include server, otherwise use `<project_root>/tmp_install/include/postgresql/server`
+    // - if POSTGRES_INSTALL_DIR is set look into it, otherwise look into `<project_root>/pg_install`
+    // - if there's a `bin/pg_config` file use it for getting include server, otherwise use `<project_root>/pg_install/v14/include/postgresql/server`
     let mut pg_install_dir = if let Some(postgres_install_dir) = env::var_os("POSTGRES_INSTALL_DIR")
     {
         postgres_install_dir.into()
     } else {
-        PathBuf::from("tmp_install")
+        PathBuf::from("pg_install")
     };
+    // Currently, we only expect to find PostgreSQL v14 sources, in "pg_install/v14". In the
+    // future, we will run this for all supported PostgreSQL versions.
+    pg_install_dir.push("v14");
     if pg_install_dir.is_relative() {
         let cwd = env::current_dir().unwrap();
@@ -449,7 +449,7 @@ mod tests {
             .join("..")
             .join("..");
         let cfg = Conf {
-            pg_distrib_dir: top_path.join("tmp_install"),
+            pg_distrib_dir: top_path.join("pg_install/v14"),
             datadir: top_path.join(format!("test_output/{}", test_name)),
         };
         if cfg.datadir.exists() {

@@ -37,7 +37,7 @@ fn main() -> Result<()> {
             Arg::new("pg-distrib-dir")
                 .long("pg-distrib-dir")
                 .takes_value(true)
-                .help("Directory with Postgres distribution (bin and lib directories, e.g. tmp_install)")
+                .help("Directory with Postgres distribution (bin and lib directories, e.g. pg_install/v14)")
                 .default_value("/usr/local")
             )
         )

@@ -205,7 +205,7 @@ impl Default for PageServerConfigBuilder {
             workdir: Set(PathBuf::new()),
             pg_distrib_dir: Set(env::current_dir()
                 .expect("cannot access current directory")
-                .join("tmp_install")),
+                .join("pg_install/v14")),
             auth_type: Set(AuthType::Trust),
             auth_validation_public_key_path: Set(None),
             remote_storage_config: Set(None),

@@ -29,6 +29,10 @@
 #include "storage/relfilenode.h"
 #include "storage/smgr.h"

+#if PG_VERSION_NUM >= 150000
+#include "access/xlogutils.h"
+#endif
+
 /* Size of the in-memory smgr */
 #define MAX_PAGES 64

@@ -64,6 +64,11 @@
 #include "catalog/pg_tablespace_d.h"
 #include "postmaster/autovacuum.h"

+#if PG_VERSION_NUM >= 150000
+#include "access/xlogutils.h"
+#include "access/xlogrecovery.h"
+#endif
+
 /*
  * If DEBUG_COMPARE_LOCAL is defined, we pass through all the SMGR API
  * calls to md.c, and *also* do the calls to the Page Server. On every
@@ -645,7 +650,11 @@ zenith_get_request_lsn(bool *latest, RelFileNode rnode, ForkNumber forknum, Bloc
		 * _bt_blwritepage logs the full page without flushing WAL before
		 * smgrextend (files are fsynced before build ends).
		 */
+#if PG_VERSION_NUM >= 150000
+		flushlsn = GetFlushRecPtr(NULL);
+#else
		flushlsn = GetFlushRecPtr();
+#endif

		if (lsn > flushlsn)
		{
			elog(DEBUG5, "last-written LSN %X/%X is ahead of last flushed LSN %X/%X",

@@ -24,6 +24,9 @@
 #include "utils/dynahash.h"
 #include "utils/guc.h"

+#if PG_VERSION_NUM >= 150000
+#include "miscadmin.h"
+#endif

 typedef struct
 {
@@ -41,6 +44,10 @@ static HTAB *relsize_hash;
 static LWLockId relsize_lock;
 static int	relsize_hash_size;
 static shmem_startup_hook_type prev_shmem_startup_hook = NULL;
+#if PG_VERSION_NUM >= 150000
+static shmem_request_hook_type prev_shmem_request_hook = NULL;
+static void relsize_shmem_request(void);
+#endif

 /*
  * Size of a cache entry is 20 bytes. So this default will take about 1.2 MB,
@@ -158,10 +165,31 @@ relsize_hash_init(void)
	if (relsize_hash_size > 0)
	{
+#if PG_VERSION_NUM >= 150000
+		prev_shmem_request_hook = shmem_request_hook;
+		shmem_request_hook = relsize_shmem_request;
+#else
		RequestAddinShmemSpace(hash_estimate_size(relsize_hash_size, sizeof(RelSizeEntry)));
		RequestNamedLWLockTranche("neon_relsize", 1);
+#endif

		prev_shmem_startup_hook = shmem_startup_hook;
		shmem_startup_hook = zenith_smgr_shmem_startup;
	}
 }

+#if PG_VERSION_NUM >= 150000
+/*
+ * shmem_request hook: request additional shared resources.  We'll allocate or
+ * attach to the shared resources in zenith_smgr_shmem_startup().
+ */
+static void
+relsize_shmem_request(void)
+{
+	if (prev_shmem_request_hook)
+		prev_shmem_request_hook();
+
+	RequestAddinShmemSpace(hash_estimate_size(relsize_hash_size, sizeof(RelSizeEntry)));
+	RequestNamedLWLockTranche("neon_relsize", 1);
+}
+#endif
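
The two relsize-cache hunks above follow the PostgreSQL 15 shared-memory convention: from v15 on, RequestAddinShmemSpace() and RequestNamedLWLockTranche() must be called from a shmem_request_hook rather than at library-load time. A condensed sketch of the complete pattern, with hypothetical names (my_init, my_shmem_request) and a placeholder size:

#include "postgres.h"
#include "miscadmin.h"		/* shmem_request_hook (v15+) */
#include "storage/ipc.h"	/* RequestAddinShmemSpace() */
#include "storage/lwlock.h"	/* RequestNamedLWLockTranche() */

#if PG_VERSION_NUM >= 150000
static shmem_request_hook_type prev_shmem_request_hook = NULL;

static void
my_shmem_request(void)
{
	/* chain to any previously installed hook first */
	if (prev_shmem_request_hook)
		prev_shmem_request_hook();
	RequestAddinShmemSpace(1024 * 1024);	/* placeholder size */
	RequestNamedLWLockTranche("my_tranche", 1);
}
#endif

void
my_init(void)
{
#if PG_VERSION_NUM >= 150000
	/* v15+: defer the requests to the dedicated hook */
	prev_shmem_request_hook = shmem_request_hook;
	shmem_request_hook = my_shmem_request;
#else
	/* pre-15: request directly while shared memory is being sized */
	RequestAddinShmemSpace(1024 * 1024);
	RequestNamedLWLockTranche("my_tranche", 1);
#endif
}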

@@ -39,6 +39,10 @@
 #include "access/xact.h"
 #include "access/xlogdefs.h"
 #include "access/xlogutils.h"
+#include "access/xloginsert.h"
+#if PG_VERSION_NUM >= 150000
+#include "access/xlogrecovery.h"
+#endif
 #include "storage/latch.h"
 #include "miscadmin.h"
 #include "pgstat.h"
@@ -165,7 +169,10 @@ static bool backpressure_throttling_impl(void);
 static process_interrupts_callback_t PrevProcessInterruptsCallback;

 static shmem_startup_hook_type prev_shmem_startup_hook_type;
+#if PG_VERSION_NUM >= 150000
+static shmem_request_hook_type prev_shmem_request_hook = NULL;
+static void walproposer_shmem_request(void);
+#endif

 void		pg_init_walproposer(void)
@@ -221,19 +228,38 @@ static void nwp_register_gucs(void)
			GUC_UNIT_MS,
			NULL, NULL, NULL
	);
 }

 /* shmem handling */
 static void nwp_prepare_shmem(void)
 {
+#if PG_VERSION_NUM >= 150000
+	prev_shmem_request_hook = shmem_request_hook;
+	shmem_request_hook = walproposer_shmem_request;
+#else
	RequestAddinShmemSpace(WalproposerShmemSize());
+#endif
	prev_shmem_startup_hook_type = shmem_startup_hook;
	shmem_startup_hook = nwp_shmem_startup_hook;
 }

+#if PG_VERSION_NUM >= 150000
+/*
+ * shmem_request hook: request additional shared resources.  We'll allocate or
+ * attach to the shared resources in nwp_shmem_startup_hook().
+ */
+static void
+walproposer_shmem_request(void)
+{
+	if (prev_shmem_request_hook)
+		prev_shmem_request_hook();
+
+	RequestAddinShmemSpace(WalproposerShmemSize());
+}
+#endif
+
 static void nwp_shmem_startup_hook(void)
 {
	if (prev_shmem_startup_hook_type)
@@ -248,6 +274,10 @@ static void nwp_shmem_startup_hook(void)
 void
 WalProposerMain(Datum main_arg)
 {
+#if PG_VERSION_NUM >= 150000
+	TimeLineID	tli;
+#endif
+
	/* Establish signal handlers. */
	pqsignal(SIGUSR1, procsignal_sigusr1_handler);
	pqsignal(SIGHUP, SignalHandlerForConfigReload);
@@ -255,9 +285,14 @@ WalProposerMain(Datum main_arg)
	BackgroundWorkerUnblockSignals();

+#if PG_VERSION_NUM >= 150000
+	// FIXME pass proper tli to WalProposerInit ?
+	GetXLogReplayRecPtr(&tli);
+	WalProposerInit(GetFlushRecPtr(NULL), GetSystemIdentifier());
+#else
	GetXLogReplayRecPtr(&ThisTimeLineID);
	WalProposerInit(GetFlushRecPtr(), GetSystemIdentifier());
+#endif

	last_reconnect_attempt = GetCurrentTimestamp();
@@ -468,7 +503,12 @@ WalProposerInitImpl(XLogRecPtr flushRecPtr, uint64 systemId)
		!HexDecodeString(greetRequest.ztenantid, zenith_tenant_walproposer, 16))
		elog(FATAL, "Could not parse neon.tenant_id, %s", zenith_tenant_walproposer);

+#if PG_VERSION_NUM >= 150000
+	// FIXME don't use hardcoded timeline id
+	greetRequest.timeline = 1;
+#else
	greetRequest.timeline = ThisTimeLineID;
+#endif
	greetRequest.walSegSize = wal_segment_size;

	InitEventSet();
@@ -1702,7 +1742,12 @@ SendAppendRequests(Safekeeper *sk)
						  &sk->outbuf.data[sk->outbuf.len],
						  req->beginLsn,
						  req->endLsn - req->beginLsn,
+#if PG_VERSION_NUM >= 150000
+						  // FIXME don't use hardcoded timelineid here
+						  1,
+#else
						  ThisTimeLineID,
+#endif
						  &errinfo))
		{
			WALReadRaiseError(&errinfo);
@@ -2373,8 +2418,11 @@ backpressure_lag_impl(void)
	XLogRecPtr	writePtr;
	XLogRecPtr	flushPtr;
	XLogRecPtr	applyPtr;
+#if PG_VERSION_NUM >= 150000
+	XLogRecPtr	myFlushLsn = GetFlushRecPtr(NULL);
+#else
	XLogRecPtr	myFlushLsn = GetFlushRecPtr();
+#endif

	replication_feedback_get_lsns(&writePtr, &flushPtr, &applyPtr);
 #define MB ((XLogRecPtr)1024*1024)

@@ -21,6 +21,11 @@
 #include <netinet/tcp.h>
 #include <unistd.h>

+#if PG_VERSION_NUM >= 150000
+#include "access/xlogutils.h"
+#include "access/xlogrecovery.h"
+#endif
+
 /*
  * These variables are used similarly to openLogFile/SegNo,
  * but for walproposer to write the XLOG during recovery. walpropFileTLI is the TimeLineID
@@ -85,7 +90,11 @@ static volatile sig_atomic_t replication_active = false;
 typedef void (*WalSndSendDataCallback) (void);
 static void WalSndLoop(WalSndSendDataCallback send_data);
 static void XLogSendPhysical(void);
+#if PG_VERSION_NUM >= 150000
+static XLogRecPtr GetStandbyFlushRecPtr(TimeLineID *tli);
+#else
 static XLogRecPtr GetStandbyFlushRecPtr(void);
+#endif
 static void WalSndSegmentOpen(XLogReaderState *state, XLogSegNo nextSegNo,
							  TimeLineID *tli_p);
@@ -222,10 +231,10 @@ SafekeeperStateDesiredEvents(SafekeeperState state)
			result = WL_SOCKET_READABLE;
			break;

		/*
		 * Flush states require write-ready for flushing.
		 * Active state does both reading and writing.
		 *
		 * TODO: SS_ACTIVE sometimes doesn't need to be write-ready. We should
		 * check sk->flushWrite here to set WL_SOCKET_WRITEABLE.
		 */
@@ -398,12 +407,21 @@ XLogWalPropWrite(char *buf, Size nbytes, XLogRecPtr recptr)
	if (walpropFile < 0)
	{
+#if PG_VERSION_NUM >= 150000
+		// FIXME Is it ok to use hardcoded value here?
+		TimeLineID	tli = 1;
+#else
		bool		use_existent = true;
+#endif

		/* Create/use new log file */
		XLByteToSeg(recptr, walpropSegNo, wal_segment_size);
+#if PG_VERSION_NUM >= 150000
+		walpropFile = XLogFileInit(walpropSegNo, tli);
+		walpropFileTLI = tli;
+#else
		walpropFile = XLogFileInit(walpropSegNo, &use_existent, false);
		walpropFileTLI = ThisTimeLineID;
+#endif
	}

	/* Calculate the start offset of the received logs */
@@ -488,11 +506,14 @@ void
 StartProposerReplication(StartReplicationCmd *cmd)
 {
	XLogRecPtr	FlushPtr;
+	TimeLineID	currTLI;

+#if PG_VERSION_NUM < 150000
	if (ThisTimeLineID == 0)
		ereport(ERROR,
				(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
				 errmsg("IDENTIFY_SYSTEM has not been run before START_REPLICATION")));
+#endif

	/* create xlogreader for physical replication */
	xlogreader =
@@ -534,10 +555,19 @@ StartProposerReplication(StartReplicationCmd *cmd)
	 * Select the timeline. If it was given explicitly by the client, use
	 * that. Otherwise use the timeline of the last replayed record, which is
	 * kept in ThisTimeLineID.
	 *
	 * Neon doesn't currently use PG Timelines, but it may in the future, so
	 * we keep this code around to lighten the load for when we need it.
	 */
+#if PG_VERSION_NUM >= 150000
+	if (am_cascading_walsender)
+	{
+		/* this also updates ThisTimeLineID */
+		FlushPtr = GetStandbyFlushRecPtr(&currTLI);
+	}
+	else
+		FlushPtr = GetFlushRecPtr(&currTLI);
+#else
	if (am_cascading_walsender)
	{
		/* this also updates ThisTimeLineID */
@@ -546,12 +576,16 @@ StartProposerReplication(StartReplicationCmd *cmd)
	else
		FlushPtr = GetFlushRecPtr();
+	currTLI = ThisTimeLineID;
+#endif

	if (cmd->timeline != 0)
	{
		XLogRecPtr	switchpoint;

		sendTimeLine = cmd->timeline;
-		if (sendTimeLine == ThisTimeLineID)
+		if (sendTimeLine == currTLI)
		{
			sendTimeLineIsHistoric = false;
			sendTimeLineValidUpto = InvalidXLogRecPtr;
@@ -566,7 +600,7 @@ StartProposerReplication(StartReplicationCmd *cmd)
		 * Check that the timeline the client requested exists, and the
		 * requested start location is on that timeline.
		 */
-		timeLineHistory = readTimeLineHistory(ThisTimeLineID);
+		timeLineHistory = readTimeLineHistory(currTLI);
		switchpoint = tliSwitchPoint(cmd->timeline, timeLineHistory,
									 &sendTimeLineNextTLI);
		list_free_deep(timeLineHistory);
@@ -605,7 +639,7 @@ StartProposerReplication(StartReplicationCmd *cmd)
	}
	else
	{
-		sendTimeLine = ThisTimeLineID;
+		sendTimeLine = currTLI;
		sendTimeLineValidUpto = InvalidXLogRecPtr;
		sendTimeLineIsHistoric = false;
	}
@@ -710,6 +744,34 @@ StartProposerReplication(StartReplicationCmd *cmd)
	EndReplicationCommand("START_STREAMING");
 }

+#if PG_VERSION_NUM >= 150000
+static XLogRecPtr
+GetStandbyFlushRecPtr(TimeLineID *tli)
+{
+	XLogRecPtr	replayPtr;
+	TimeLineID	replayTLI;
+	XLogRecPtr	receivePtr;
+	TimeLineID	receiveTLI;
+	XLogRecPtr	result;
+
+	/*
+	 * We can safely send what's already been replayed. Also, if walreceiver
+	 * is streaming WAL from the same timeline, we can send anything that it
+	 * has streamed, but hasn't been replayed yet.
+	 */
+	receivePtr = GetWalRcvFlushRecPtr(NULL, &receiveTLI);
+	replayPtr = GetXLogReplayRecPtr(&replayTLI);
+
+	*tli = replayTLI;
+
+	result = replayPtr;
+	if (receiveTLI == replayTLI && receivePtr > replayPtr)
+		result = receivePtr;
+
+	return result;
+}
+#else
 /*
  * Returns the latest point in WAL that has been safely flushed to disk, and
  * can be sent to the standby. This should only be called when in recovery,
@@ -744,6 +806,9 @@ GetStandbyFlushRecPtr(void)
	return result;
 }
+#endif

 /* XLogReaderRoutine->segment_open callback */
 static void
@@ -878,6 +943,7 @@ XLogSendPhysical(void)
	XLogRecPtr	startptr;
	XLogRecPtr	endptr;
	Size		nbytes PG_USED_FOR_ASSERTS_ONLY;
+	TimeLineID	currTLI;

	/* If requested switch the WAL sender to the stopping state. */
	if (got_STOPPING)
@@ -919,9 +985,12 @@ XLogSendPhysical(void)
		 * FlushPtr that was calculated before it became historic.
		 */
		bool		becameHistoric = false;
+#if PG_VERSION_NUM >= 150000
+		SendRqstPtr = GetStandbyFlushRecPtr(&currTLI);
+#else
		SendRqstPtr = GetStandbyFlushRecPtr();
+		currTLI = ThisTimeLineID;
+#endif

		if (!RecoveryInProgress())
		{
			/*
@@ -935,10 +1004,10 @@ XLogSendPhysical(void)
		{
			/*
			 * Still a cascading standby. But is the timeline we're sending
-			 * still the one recovery is recovering from? ThisTimeLineID was
+			 * still the one recovery is recovering from? currTLI was
			 * updated by the GetStandbyFlushRecPtr() call above.
			 */
-			if (sendTimeLine != ThisTimeLineID)
+			if (sendTimeLine != currTLI)
				becameHistoric = true;
		}
@@ -951,7 +1020,7 @@ XLogSendPhysical(void)
			 */
			List	   *history;

-			history = readTimeLineHistory(ThisTimeLineID);
+			history = readTimeLineHistory(currTLI);
			sendTimeLineValidUpto = tliSwitchPoint(sendTimeLine, history, &sendTimeLineNextTLI);

			Assert(sendTimeLine < sendTimeLineNextTLI);
@@ -974,7 +1043,11 @@ XLogSendPhysical(void)
		 * primary: if the primary subsequently crashes and restarts, standbys
		 * must not have applied any WAL that got lost on the primary.
		 */
+#if PG_VERSION_NUM >= 150000
+		SendRqstPtr = GetFlushRecPtr(NULL);
+#else
		SendRqstPtr = GetFlushRecPtr();
+#endif
	}

	/*

@@ -59,7 +59,7 @@ Env = Dict[str, str]
 Fn = TypeVar("Fn", bound=Callable[..., Any])

 DEFAULT_OUTPUT_DIR = "test_output"
-DEFAULT_POSTGRES_DIR = "tmp_install"
+DEFAULT_POSTGRES_DIR = "pg_install/v14"
 DEFAULT_BRANCH_NAME = "main"

 BASE_PORT = 15000
@@ -188,7 +188,7 @@ def can_bind(host: str, port: int) -> bool:
     Check whether a host:port is available to bind for listening

     Inspired by the can_bind() perl function used in Postgres tests, in
-    vendor/postgres/src/test/perl/PostgresNode.pm
+    vendor/postgres-v14/src/test/perl/PostgresNode.pm
     """
     with closing(socket.socket(socket.AF_INET, socket.SOCK_STREAM)) as sock:
         # TODO: The pageserver and safekeepers don't use SO_REUSEADDR at the

@@ -26,8 +26,8 @@ def test_pg_regress(neon_simple_env: NeonEnv, test_output_dir: Path, pg_bin, cap
     (runpath / "testtablespace").mkdir(parents=True)

     # Compute all the file locations that pg_regress will need.
-    build_path = os.path.join(pg_distrib_dir, "build/src/test/regress")
-    src_path = os.path.join(base_dir, "vendor/postgres/src/test/regress")
+    build_path = os.path.join(pg_distrib_dir, "../build/v14/src/test/regress")
+    src_path = os.path.join(base_dir, "vendor/postgres-v14/src/test/regress")
     bindir = os.path.join(pg_distrib_dir, "bin")
     schedule = os.path.join(src_path, "parallel_schedule")
     pg_regress = os.path.join(build_path, "pg_regress")
@@ -80,8 +80,8 @@ def test_isolation(neon_simple_env: NeonEnv, test_output_dir: Path, pg_bin, caps
     (runpath / "testtablespace").mkdir(parents=True)

     # Compute all the file locations that pg_isolation_regress will need.
-    build_path = os.path.join(pg_distrib_dir, "build/src/test/isolation")
-    src_path = os.path.join(base_dir, "vendor/postgres/src/test/isolation")
+    build_path = os.path.join(pg_distrib_dir, "../build/v14/src/test/isolation")
+    src_path = os.path.join(base_dir, "vendor/postgres-v14/src/test/isolation")
     bindir = os.path.join(pg_distrib_dir, "bin")
     schedule = os.path.join(src_path, "isolation_schedule")
     pg_isolation_regress = os.path.join(build_path, "pg_isolation_regress")
@@ -124,7 +124,7 @@ def test_sql_regress(neon_simple_env: NeonEnv, test_output_dir: Path, pg_bin, ca
     # Compute all the file locations that pg_regress will need.
     # This test runs neon specific tests
-    build_path = os.path.join(pg_distrib_dir, "build/src/test/regress")
+    build_path = os.path.join(pg_distrib_dir, "../build/v14/src/test/regress")
     src_path = os.path.join(base_dir, "test_runner/sql_regress")
     bindir = os.path.join(pg_distrib_dir, "bin")
     schedule = os.path.join(src_path, "parallel_schedule")

Submodule vendor/postgres-v15 added at 26c6466873