Update ghcr.io/cloudnative-pg/cloudnative-pg Docker tag to v1.26.0
This MR contains the following updates:
| Package | Update | Change |
|---|---|---|
| ghcr.io/cloudnative-pg/cloudnative-pg | minor | `1.25.1` -> `1.26.0` |
## Release Notes

**cloudnative-pg/cloudnative-pg** (ghcr.io/cloudnative-pg/cloudnative-pg)

### v1.26.0

Release date: May 23, 2025

#### Important Changes
- **CloudNativePG is now officially a CNCF project**: CloudNativePG has been accepted into the Cloud Native Computing Foundation (CNCF), marking a significant milestone in its evolution. As part of this transition, the project is now governed under CloudNativePG, a Series of LF Projects, LLC, ensuring long-term sustainability and community-driven innovation. (#7203)
- **Deprecation of Native Barman Cloud Support**: Native support for Barman Cloud backups and recovery is now deprecated and will be fully removed in CloudNativePG version 1.28.0. Although still available in the current release, users are strongly encouraged to begin migrating their existing clusters to the new Barman Cloud Plugin to ensure a smooth and seamless transition. The plugin should also be used for all new deployments. This change marks the first step toward making CloudNativePG a backup-agnostic solution, a goal that will be fully realized when volume snapshot support is also moved to a plugin-based architecture. (#6876)
- **End of Support for Barman 3.4 and Earlier**: CloudNativePG no longer supports Barman versions 3.4 and earlier, including the capability detection framework. Users running older operand versions (from before April 2023) must update their operand before upgrading the operator to avoid compatibility issues. (#7220)
- **Hibernation Command Changes**: The `hibernate on` and `hibernate off` commands in the `cnpg` plugin for `kubectl` now serve as shortcuts for declarative hibernation. The previous imperative approach has been removed in favor of this method. Additionally, the `hibernate status` command has been removed, as its functionality is now covered by the standard `status` command. **Warning**: Do not upgrade to version 1.26 of both the plugin and the operator unless you are prepared to migrate to the declarative hibernation method. (#7155)
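For context, declarative hibernation is driven by an annotation on the `Cluster` resource, which the plugin shortcuts now set for you. A minimal sketch (the cluster name and spec are illustrative, not from these notes):

```yaml
# Declarative hibernation: `kubectl cnpg hibernate on <cluster>` now simply
# sets this annotation; `hibernate off` flips it back to "off".
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cluster-example        # illustrative name
  annotations:
    cnpg.io/hibernation: "on"  # "on" shuts the cluster down, keeping PVCs
spec:
  instances: 3
  storage:
    size: 1Gi
```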
#### Features

- **Declarative Offline In-Place Major Upgrades of PostgreSQL**: Introduced support for offline in-place major upgrades when a new operand container image with a higher PostgreSQL major version is applied to a cluster. During the upgrade, all cluster pods are shut down to ensure data consistency. A new job is created to validate upgrade conditions, run `pg_upgrade`, and set up new directories for `PGDATA`, WAL files, and tablespaces as needed. Once the upgrade is complete, replicas are re-created. Failed upgrades can be rolled back declaratively. (#6664)
- **Improved Startup and Readiness Probes for Replicas**: Enhanced support for Kubernetes startup and readiness probes in PostgreSQL instances, providing greater control over replicas based on the streaming lag. (#6623)
- **Declarative management of extensions and schemas**: Introduced the `extensions` and `schemas` stanzas in the `Database` resource to declaratively create, modify, and drop PostgreSQL extensions and schemas within a database. (#7062)
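The new stanzas slot into the existing `Database` resource; a hedged sketch (resource names, the chosen extension, and schema are illustrative):

```yaml
# Declarative extension and schema management via the Database resource
apiVersion: postgresql.cnpg.io/v1
kind: Database
metadata:
  name: app-database          # illustrative name
spec:
  name: app
  owner: app
  cluster:
    name: cluster-example     # illustrative cluster reference
  extensions:
    - name: bloom             # created in the database if missing
  schemas:
    - name: reporting         # created in the database if missing
      owner: app
```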
#### Enhancements

- Introduced an opt-in experimental feature to enhance the liveness probe with network isolation detection for primary instances. This feature can be activated via the `alpha.cnpg.io/livenessPinger` annotation. (#7466)
- Introduced the `STANDBY_TCP_USER_TIMEOUT` operator configuration setting, allowing users to specify the `tcp_user_timeout` parameter on all standby instances managed by the operator. (#7036)
- Introduced the `DRAIN_TAINTS` operator configuration option, enabling users to customize which node taints indicate a node is being drained. This replaces the previous fixed behavior of only recognizing `node.kubernetes.io/unschedulable` as a drain signal. (#6928)
- Added a new field in the `status` of the `Cluster` resource to track the latest known Pod IP. (#7546)
- Added the `pg_extensions` metric, providing information about installed PostgreSQL extensions and their latest available versions. (#7195)
- Added the `KUBERNETES_CLUSTER_DOMAIN` configuration option to the operator, allowing users to specify the domain suffix for fully qualified domain names (FQDNs) generated within the Kubernetes cluster. If not set, it defaults to `cluster.local`. (#6989)
- Implemented the `cnpg.io/validation` annotation, enabling users to disable the validation webhook on CloudNativePG-managed resources. Use with caution, as this allows unrestricted changes. (#7196)
- Added support for patching PostgreSQL instance pods using the `cnpg.io/podPatch` annotation with a JSON Patch. This may introduce discrepancies between the operator's expectations and Kubernetes behavior, so it should be used with caution. (#6323)
- Added support for collecting `pg_stat_wal` metrics in PostgreSQL 18. (#7005)
- Removed the `ENABLE_AZURE_PVC_UPDATES` configuration, as it is no longer required to resize Azure volumes correctly. The Azure CSI driver includes the necessary fix as of version 1.11.0. (#7297)
- The `.spec.backup.barmanObjectStore` and `.spec.backup.retentionPolicy` fields are now deprecated in favor of the external Barman Cloud Plugin, and a warning is now emitted by the admission webhook when these fields are used in the `Cluster` specification. (#7500)
- Added support for LZ4, XZ, and Zstandard compression methods when archiving WAL files via Barman Cloud (deprecated). (#7151)
- CloudNativePG Interface (CNPG-I):
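The operator-level settings above (`STANDBY_TCP_USER_TIMEOUT`, `DRAIN_TAINTS`, `KUBERNETES_CLUSTER_DOMAIN`) are supplied through the operator's configuration ConfigMap. A sketch, assuming the default deployment's ConfigMap name and namespace; the values themselves are illustrative:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cnpg-controller-manager-config  # default operator config ConfigMap
  namespace: cnpg-system
data:
  KUBERNETES_CLUSTER_DOMAIN: cluster.local        # FQDN suffix (this is the default)
  STANDBY_TCP_USER_TIMEOUT: "5s"                  # illustrative tcp_user_timeout for standbys
  DRAIN_TAINTS: node.kubernetes.io/unschedulable  # comma-separated taints treated as drain signals
```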
#### Security

- Set `imagePullPolicy` to `Always` for the operator deployment to ensure that images are always pulled from the registry, reducing the risk of using outdated or potentially unsafe local images. (#7250)
#### Fixes

- Fixed native replication slot synchronization and logical replication failover for PostgreSQL 17 by appending the `dbname` parameter to `primary_conninfo` in replica configurations. (#7298)
- Fixed a regression in WAL restore operations that prevented fallback to the in-tree `barmanObjectStore` configuration defined in the `externalCluster` source when a plugin failed to locate a WAL file. (#7507)
- Improved backup efficiency by introducing a fail-fast mechanism in WAL archiving, allowing quicker detection of unexpected primary demotion and avoiding unnecessary retries. (#7483)
- Fixed an off-by-one error in parallel WAL archiving that could cause one extra worker process to be spawned beyond the requested number. (#7389)
- Resolved a race condition that caused the operator to perform two switchovers when updating the PostgreSQL configuration. (#6991)
- Corrected the `PodMonitor` configuration by adjusting the `matchLabels` scope for the targeted pooler and cluster pods. Previously, the `matchLabels` were too broad, inadvertently inheriting labels from the cluster and leading to data collection from unintended targets. (#7063)
- Added a webhook warning for clusters with a missing unit (e.g., MB, GB) in the `shared_buffers` configuration. This will become an error in future releases. Users should update their configurations to include explicit units (e.g., `512MB` instead of `512`). (#7160)
- Treated timeout errors during volume snapshot creation as retryable to prevent unnecessary backup failures. (#7010)
- Moved the defaulting logic for `.spec.postgresql.synchronous.dataDurability` from the CRD to the webhook to avoid UI issues with OLM. (#7600)
- CloudNativePG Interface (CNPG-I):
  - Implemented automatic reloading of TLS certificates for plugins when they change. (#7029)
  - Ensured the operator properly closes the plugin connection when performing a backup using the plugin. (#7095, #7096)
  - Fixed an issue that prevented WALs from being archived on a former primary node when using a plugin. (#6964)
  - Improved performance and resilience of CNPG-I by removing timeouts for local plugin operations, avoiding failures during longer backup or WAL archiving executions. (#7496)
- `cnpg` plugin:
  - Increased the buffer size in the `logs pretty` command to better handle larger log output. (#7281)
  - Ensured the `plugin-name` parameter is required for plugin-based backups and disallowed for non-plugin backup methods. (#7506)
  - Ensured that the primary Pod is recreated during an imperative restart when `primaryUpdateMethod` is set to `restart`, aligning its definition with the replicas. (#7122)
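To preempt the `shared_buffers` unit warning (#7160), declare the parameter with an explicit unit in the `Cluster` spec. A minimal fragment; the value is illustrative:

```yaml
spec:
  postgresql:
    parameters:
      # "512MB" (explicit unit) instead of a bare "512", which the
      # webhook now warns about and will reject in future releases
      shared_buffers: "512MB"
```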
#### Changes

- Updated the default PostgreSQL version to 17.5 for new cluster definitions. (#7556)
- Updated the default PgBouncer version to 1.24.1 for new `Pooler` deployments. (#7399)
#### Supported versions
- Kubernetes 1.33, 1.32, 1.31, and 1.30
- PostgreSQL 17, 16, 15, 14, and 13
- PostgreSQL 17.5 is the default image
- PostgreSQL 13 support ends on November 12, 2025
### v1.25.2

Release date: May 23, 2025

#### Important Changes

- **CloudNativePG is now officially a CNCF project**: CloudNativePG has been accepted into the Cloud Native Computing Foundation (CNCF), marking a significant milestone in its evolution. As part of this transition, the project is now governed under CloudNativePG, a Series of LF Projects, LLC, ensuring long-term sustainability and community-driven innovation. (#7203)
#### Enhancements

- Added the `KUBERNETES_CLUSTER_DOMAIN` configuration option to the operator, allowing users to specify the domain suffix for fully qualified domain names (FQDNs) generated within the Kubernetes cluster. If not set, it defaults to `cluster.local`. (#6989)
- Implemented the `cnpg.io/validation` annotation, enabling users to disable the validation webhook on CloudNativePG-managed resources. Use with caution, as this allows unrestricted changes. (#7196)
- Added support for collecting `pg_stat_wal` metrics in PostgreSQL 18. (#7005)
- Added support for LZ4, XZ, and Zstandard compression methods when archiving WAL files via Barman Cloud (deprecated). (#7151)
- CloudNativePG Interface (CNPG-I):
#### Security

- Set `imagePullPolicy` to `Always` for the operator deployment to ensure that images are always pulled from the registry, reducing the risk of using outdated or potentially unsafe local images. (#7250)
#### Fixes

- Fixed native replication slot synchronization and logical replication failover for PostgreSQL 17 by appending the `dbname` parameter to `primary_conninfo` in replica configurations. (#7298)
- Fixed a regression in WAL restore operations that prevented fallback to the in-tree `barmanObjectStore` configuration defined in the `externalCluster` source when a plugin failed to locate a WAL file. (#7507)
- Improved backup efficiency by introducing a fail-fast mechanism in WAL archiving, allowing quicker detection of unexpected primary demotion and avoiding unnecessary retries. (#7483)
- Fixed an off-by-one error in parallel WAL archiving that could cause one extra worker process to be spawned beyond the requested number. (#7389)
- Resolved a race condition that caused the operator to perform two switchovers when updating the PostgreSQL configuration. (#6991)
- Corrected the `PodMonitor` configuration by adjusting the `matchLabels` scope for the targeted pooler and cluster pods. Previously, the `matchLabels` were too broad, inadvertently inheriting labels from the cluster and leading to data collection from unintended targets. (#7063)
- Added a webhook warning for clusters with a missing unit (e.g., MB, GB) in the `shared_buffers` configuration. This will become an error in future releases. Users should update their configurations to include explicit units (e.g., `512MB` instead of `512`). (#7160)
- Treated timeout errors during volume snapshot creation as retryable to prevent unnecessary backup failures. (#7010)
- Moved the defaulting logic for `.spec.postgresql.synchronous.dataDurability` from the CRD to the webhook to avoid UI issues with OLM. (#7600)
- CloudNativePG Interface (CNPG-I):
  - Implemented automatic reloading of TLS certificates for plugins when they change. (#7029)
  - Ensured the operator properly closes the plugin connection when performing a backup using the plugin. (#7095, #7096)
  - Improved performance and resilience of CNPG-I by removing timeouts for local plugin operations, avoiding failures during longer backup or WAL archiving executions. (#7496)
- `cnpg` plugin:
  - Increased the buffer size in the `logs pretty` command to better handle larger log output. (#7281)
  - Ensured the `plugin-name` parameter is required for plugin-based backups and disallowed for non-plugin backup methods. (#7506)
  - Ensured that the primary Pod is recreated during an imperative restart when `primaryUpdateMethod` is set to `restart`, aligning its definition with the replicas. (#7122)