Update Helm release longhorn to v1.11.0

Photo by Tony Garcia / Unsplash

No problems deploying to my Proxmox VE K3s Kubernetes cluster via Helm chart and Flux v2 reconciliation in a GitOps approach.
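In a Flux-managed setup like this, the Renovate change boils down to a one-line version bump in the HelmRelease. A minimal sketch, assuming typical resource names (a `longhorn` release and a `HelmRepository` in `flux-system`), not taken from the actual repo:

```yaml
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: longhorn
  namespace: longhorn-system
spec:
  interval: 30m
  chart:
    spec:
      chart: longhorn
      version: 1.11.0   # bumped from 1.10.0 by Renovate
      sourceRef:
        kind: HelmRepository
        name: longhorn
        namespace: flux-system
```

Flux then reconciles the release to the new chart version on its next interval.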

This MR contains the following updates:

Package | Update | Change
longhorn (source) | minor | 1.10.0 → 1.11.0

Release Notes

longhorn/longhorn (longhorn)

v1.11.0: Longhorn v1.11.0

Compare Source

Longhorn v1.11.0 Release Notes

The Longhorn team is excited to announce the release of Longhorn v1.11.0. This release marks a major milestone, with the V2 Data Engine officially entering the Technical Preview stage following significant stability improvements.

Additionally, this version optimizes the stability of the whole system and introduces critical improvements in resource observability, scheduling, and utilization.

For terminology and background on Longhorn releases, see Releases.

Deprecation

V2 Backing Image Deprecation

The Backing Image feature for the V2 Data Engine is now deprecated in v1.11.0 and is scheduled for removal in v1.12.0.

Users who rely on V2 volumes for virtual machines are encouraged to adopt the Containerized Data Importer (CDI) for volume population instead.

GitHub Issue #​12237
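For orientation, a hypothetical CDI DataVolume for populating a VM disk might look like the following; the names, source URL, and StorageClass are placeholders, and the `cdi.kubevirt.io/v1beta1` API belongs to CDI, not Longhorn:

```yaml
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: vm-disk
spec:
  source:
    http:
      url: https://example.com/images/disk.qcow2   # placeholder image URL
  storage:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 10Gi
    storageClassName: longhorn-v2   # assumed V2 Data Engine StorageClass
```

CDI imports the image into the PVC, replacing the deprecated backing-image workflow.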

Primary Highlights

V2 Data Engine
Now in Technical Preview Stage

We are pleased to announce that the V2 Data Engine has officially graduated to the Technical Preview stage. This indicates increased stability and feature maturity as we move toward General Availability.

Limitation: While the engine is in Technical Preview, live upgrade is not supported yet. V2 volumes must be detached (offline) before engine upgrade.
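Until live upgrade lands, taking a V2 volume offline means scaling down the workload that uses it and waiting for the volume to detach. A rough sketch with placeholder names:

```shell
# Scale down the workload using the V2 volume (names are placeholders)
kubectl -n my-app scale deployment my-workload --replicas=0

# Watch until the corresponding Longhorn volume reports "detached"
kubectl -n longhorn-system get volumes.longhorn.io \
  -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.status.state}{"\n"}{end}'
```

Only after all V2 volumes show `detached` should the engine upgrade proceed.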

Support for ublk Frontend

Users can now configure ublk (Userspace Block Device) as the frontend for V2 Data Engine volumes. This provides a high-performance alternative to the NVMe-oF frontend for environments running Kernel v6.0+.

GitHub Issue #​11039

V1 Data Engine
Faster Replica Rebuilding from Multiple Sources

The V1 Data Engine now supports parallel rebuilding. When a replica needs to be rebuilt, the engine can stream data from multiple healthy replicas simultaneously rather than from a single source, significantly reducing the time required to restore redundancy for volumes with large amounts of scattered data.

GitHub Issue #​11331

General
Balance-Aware Algorithm Disk Selection For Replica Scheduling

Longhorn improves disk selection for replica scheduling by introducing an intelligent balance-aware scheduling algorithm, reducing uneven storage usage across nodes and disks.

GitHub Issue #​10512

Node Disk Health Monitoring

Longhorn now actively monitors the physical health of the underlying storage disks using S.M.A.R.T. data. This allows administrators to identify issues and receive alerts when abnormal S.M.A.R.T. metrics are detected, helping prevent volume failures.

GitHub Issue #​12016

Share Manager Networking

Users can now configure an extra network interface for the Share Manager to support complex network segmentation requirements.

GitHub Issue #​10269

ReadWriteOncePod (RWOP) Support

Full support for the Kubernetes ReadWriteOncePod access mode has been added.

GitHub Issue #​9727
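For reference, requesting this mode uses the standard Kubernetes PVC API; claim name, class, and size below are illustrative:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: single-pod-pvc
spec:
  accessModes:
    - ReadWriteOncePod   # only one pod may use the volume at a time
  storageClassName: longhorn
  resources:
    requests:
      storage: 5Gi
```

Unlike ReadWriteOnce, which restricts access per node, ReadWriteOncePod guarantees a single pod across the whole cluster.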

StorageClass allowedTopologies Support

Administrators can now use the allowedTopologies field in Longhorn StorageClasses to restrict volume provisioning to specific zones, regions, or nodes within the cluster.

GitHub Issue #​12261
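A sketch of such a StorageClass, assuming nodes carry the standard `topology.kubernetes.io/zone` label; the class name and zone value are placeholders:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-zone-a
provisioner: driver.longhorn.io
allowVolumeExpansion: true
parameters:
  numberOfReplicas: "2"
allowedTopologies:
  - matchLabelExpressions:
      - key: topology.kubernetes.io/zone
        values:
          - zone-a
```

Volumes provisioned from this class are restricted to nodes in `zone-a`.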

Installation

[!IMPORTANT]
Ensure that your cluster is running Kubernetes v1.25 or later before installing Longhorn v1.11.0.

You can install Longhorn using a variety of tools, including Rancher, Kubectl, and Helm. For more information about installation methods and requirements, see Quick Installation in the Longhorn documentation.
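As a minimal example, a Helm installation (chart repo URL as published by the Longhorn project) might look like:

```shell
# Add the Longhorn chart repository and install v1.11.0
helm repo add longhorn https://charts.longhorn.io
helm repo update
helm install longhorn longhorn/longhorn \
  --namespace longhorn-system --create-namespace \
  --version 1.11.0
```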

Upgrade

[!IMPORTANT]
Ensure that your cluster is running Kubernetes v1.25 or later before upgrading from Longhorn v1.10.x to v1.11.0.

Longhorn only allows upgrades from supported versions. For more information about upgrade paths and procedures, see Upgrade in the Longhorn documentation.

Post-Release Known Issues

For information about issues identified after this release, see Release-Known-Issues.

Resolved Issues in this release

Highlight
Feature
Improvement
Bug
Misc

New Contributors

Contributors

Thank you to the following contributors who made this release possible.

Note: Starting from v1.11.0, the authors of any GitHub issue resolved in the current release are listed among the contributors as well. If anyone is still missing, please contact the Longhorn team for an update.

v1.10.2: Longhorn v1.10.2

Compare Source

Longhorn v1.10.2 Release Notes

Longhorn 1.10.2 introduces several improvements and bug fixes that are intended to improve system quality, resilience, stability and security.

We welcome feedback and contributions to help continuously improve Longhorn.

For terminology and context on Longhorn releases, see Releases.

Important Fixes

This release includes several critical stability fixes.

RWX Volume Unavailable After Node Drain

Fixed a race condition where ReadWriteMany (RWX) volumes could remain in the attaching state after node drains, causing workloads to become unavailable.

For more details, see Issue #​12231.

Encrypted Volume Cannot Be Expanded Online

Fixed an issue where online expansion of encrypted volumes did not propagate the new size to the dm-crypt device.

For more details, see Issue #​12368.

Cloned Volume Cannot Be Attached to Workload

Fixed a bug where cloned volumes could fail to reach a healthy state, preventing attachment to workloads.

For more details, see Issue #​12208.

Block Mode Volume Migration Stuck

Fixed a regression in block-mode volume migrations where newly created replicas could incorrectly inherit the lastFailedAt timestamp from source replicas, causing repeated deletion and blocking migration completion.

For more details, see Issue #​12312.

Replica Auto Balance Disk Pressure Threshold Stalled

Fixed an issue where replica auto-balance under disk pressure could be blocked if stopped volumes were present on the disk.

For more details, see Issue #​12334.

Replicas Accumulate During Engine Upgrade

Fixed a bug where temporary replicas could accumulate during engine upgrade. High etcd latency could cause new replicas to fail verification, leading to accumulation over multiple reconciliation cycles.

For more details, see Issue #​12115.

Potential Client Connection and Context Leak

Fixed potential context leaks in the instance manager client and backing image manager client, improving stability and preventing resource exhaustion.

For more details, see Issue #​12200 and Issue #​12195.

Replica Node Level Soft Anti-Affinity Ignored

Fixed a bug in the replica scheduling loop where replicas could be scheduled onto nodes that already host a replica, even when Replica Node-Level Soft Anti-Affinity was disabled.

For more details, see Issue #​12251.

Installation

[!IMPORTANT]
Ensure that your cluster is running Kubernetes v1.25 or later before installing Longhorn v1.10.2.

You can install Longhorn using a variety of tools, including Rancher, Kubectl, and Helm. For more information about installation methods and requirements, see Quick Installation in the Longhorn documentation.

Upgrade

[!IMPORTANT]
Ensure that your cluster is running Kubernetes v1.25 or later before upgrading from Longhorn v1.9.x to v1.10.2.

Longhorn only allows upgrades from supported versions. For more information about upgrade paths and procedures, see Upgrade in the Longhorn documentation.

Post-Release Known Issues

For information about issues identified after this release, see Release-Known-Issues.

Resolved Issues

Feature
  • [BACKPORT][v1.10.2][FEATURE] Inherit namespace for longhorn-share-manager in FastFailover mode 12245 - @​yangchiu
  • [BACKPORT][v1.10.2][FEATURE] [Dependency] aws-sdk-go v1.55.7 is EOL as of 2025-07-31 — plan to migrate to v2? 12181 - @​mantissahz @​roger-ryao
Improvement
Bug

Contributors

v1.10.1: Longhorn v1.10.1

Compare Source

Longhorn v1.10.1 Release Notes

Longhorn 1.10.1 introduces several improvements and bug fixes that are intended to improve system quality, resilience, stability and security.

We welcome feedback and contributions to help continuously improve Longhorn.

For terminology and context on Longhorn releases, see Releases.

[!WARNING]

HotFix

The longhorn-manager:v1.10.1 image is affected by known issues.

To mitigate the issues, replace longhorn-manager:v1.10.1 with the hotfixed image longhorn-manager:v1.10.1-hotfix-2.

Follow these steps to apply the update:

  1. Disable the upgrade version check
    • Helm users: Set upgradeVersionCheck to false in the values.yaml file.
    • Manifest users: Remove the --upgrade-version-check flag from the deployment manifest.
  2. Update the longhorn-manager image
    • Change the image tag from v1.10.1 to v1.10.1-hotfix-2 in the appropriate file:
      • For Helm: Update values.yaml
      • For manifests: Update the deployment manifest directly.
  3. Proceed with the upgrade
    • Apply the changes using your standard Helm upgrade command or reapply the updated manifest.
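For a Helm deployment, the three steps above can be sketched as a single upgrade. The value paths used below (`preUpgradeChecker.upgradeVersionCheck`, `image.longhorn.manager.tag`) are assumptions based on the chart's documented values, so verify them against your chart version:

```shell
helm upgrade longhorn longhorn/longhorn \
  --namespace longhorn-system \
  --reuse-values \
  --set preUpgradeChecker.upgradeVersionCheck=false \
  --set image.longhorn.manager.tag=v1.10.1-hotfix-2
```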

Upgrade

If your Longhorn cluster was initially deployed with a version earlier than v1.3.0, the Custom Resources (CRs) were created using the v1beta1 APIs. While the upgrade from Longhorn v1.8 to v1.9 automatically migrates all CRs to the new v1beta2 version, a manual CR migration is strongly advised before upgrading from Longhorn v1.9 to v1.10.

Certain operations, such as an etcd or CRD restore, may leave behind v1beta1 data. Manually migrating your CRs ensures that all Longhorn data is properly updated to the v1beta2 API, preventing potential compatibility issues and unexpected behavior with the new Longhorn version.

Following the manual migration, verify that v1beta1 has been removed from the CRD stored versions to ensure completion and a successful upgrade.

For more details, see Kubernetes official document for CRD storage version, and Issue #​11886.

Migration Requirement Before Longhorn v1.10 Upgrade

Before upgrading from Longhorn v1.9 to v1.10, perform the following manual CRD storage version migration.

Note: If your Longhorn installation uses a namespace other than longhorn-system, replace longhorn-system with your custom namespace throughout the commands.

# Temporarily disable the CR validation webhook to allow updating read-only settings CRs.
kubectl patch validatingwebhookconfiguration longhorn-webhook-validator \
  --type=merge \
  -p "$(kubectl get validatingwebhookconfiguration longhorn-webhook-validator -o json | \
  jq '.webhooks[0].rules |= map(if .apiGroups == ["longhorn.io"] and .resources == ["settings"] then
    .operations |= map(select(. != "UPDATE")) else . end)')"

# Migrate CRDs that ever stored v1beta1 resources
migration_time="$(date +%Y-%m-%dT%H:%M:%S)"
crds=($(kubectl get crd -l app.kubernetes.io/name=longhorn -o json | jq -r '.items[] | select(.status.storedVersions | index("v1beta1")) | .metadata.name'))
for crd in "${crds[@]}"; do
  echo "Migrating ${crd} ..."
  for name in $(kubectl -n longhorn-system get "$crd" -o jsonpath='{.items[*].metadata.name}'); do
    # Attach additional annotations to trigger v1beta1 resource updating in the latest storage version.
    kubectl patch "${crd}" "${name}" -n longhorn-system --type=merge -p='{"metadata":{"annotations":{"migration-time":"'"${migration_time}"'"}}}'
  done
  # Clean up the stored version in CRD status
  kubectl patch crd "${crd}" --type=merge -p '{"status":{"storedVersions":["v1beta2"]}}' --subresource=status
done

# Re-enable the CR validation webhook.
kubectl patch validatingwebhookconfiguration longhorn-webhook-validator \
  --type=merge \
  -p "$(kubectl get validatingwebhookconfiguration longhorn-webhook-validator -o json | \
  jq '.webhooks[0].rules |= map(if .apiGroups == ["longhorn.io"] and .resources == ["settings"] then
    .operations |= (. + ["UPDATE"] | unique) else . end)')"

Migration Verification

After running the script, verify the CRD stored versions using this command:

kubectl get crd -l app.kubernetes.io/name=longhorn -o=jsonpath='{range .items[*]}{.metadata.name}{": "}{.status.storedVersions}{"\n"}{end}'

Crucially, all Longhorn CRDs MUST have only "v1beta2" listed in storedVersions (i.e., "v1beta1" must be completely absent) before proceeding to the v1.10 upgrade.

Example of successful output:

backingimagedatasources.longhorn.io: ["v1beta2"]
backingimagemanagers.longhorn.io: ["v1beta2"]
backingimages.longhorn.io: ["v1beta2"]
backupbackingimages.longhorn.io: ["v1beta2"]
backups.longhorn.io: ["v1beta2"]
backuptargets.longhorn.io: ["v1beta2"]
backupvolumes.longhorn.io: ["v1beta2"]
engineimages.longhorn.io: ["v1beta2"]
engines.longhorn.io: ["v1beta2"]
instancemanagers.longhorn.io: ["v1beta2"]
nodes.longhorn.io: ["v1beta2"]
orphans.longhorn.io: ["v1beta2"]
recurringjobs.longhorn.io: ["v1beta2"]
replicas.longhorn.io: ["v1beta2"]
settings.longhorn.io: ["v1beta2"]
sharemanagers.longhorn.io: ["v1beta2"]
snapshots.longhorn.io: ["v1beta2"]
supportbundles.longhorn.io: ["v1beta2"]
systembackups.longhorn.io: ["v1beta2"]
systemrestores.longhorn.io: ["v1beta2"]
volumeattachments.longhorn.io: ["v1beta2"]
volumes.longhorn.io: ["v1beta2"]

With these steps completed, the Longhorn upgrade to v1.10 should now proceed without issues.
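To script this check, the same jq test can be wrapped in a small helper that reads the `kubectl get crd ... -o json` output from stdin; the function name is my own, not from the Longhorn docs:

```shell
# Print OK when no Longhorn CRD still stores v1beta1, else FAIL.
# Expects 'kubectl get crd -l app.kubernetes.io/name=longhorn -o json' on stdin.
check_stored_versions() {
  if jq -e 'any(.items[]; .status.storedVersions | index("v1beta1"))' >/dev/null; then
    echo "FAIL: v1beta1 still present"
  else
    echo "OK: migration complete"
  fi
}

# Usage against a live cluster:
#   kubectl get crd -l app.kubernetes.io/name=longhorn -o json | check_stored_versions
```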

Troubleshooting CRD Upgrade Failures During Upgrade to Longhorn v1.10

If you did not apply the required pre-upgrade migration steps and the CRs are not fully migrated to v1beta2, the longhorn-manager Pods may fail to operate correctly. A common error message for this issue is:

Upgrade failed: cannot patch "backingimagedatasources.longhorn.io" with kind CustomResourceDefinition: CustomResourceDefinition.apiextensions.k8s.io "backingimagedatasources.longhorn.io" is invalid: status.storedVersions[0]: Invalid value: "v1beta1": missing from spec.versions; v1beta1 was previously a storage version, and must remain in spec.versions until a storage migration ensures no data remains persisted in v1beta1 and removes v1beta1 from status.storedVersions

To fix this issue, you must perform a forced downgrade back to the exact Longhorn v1.9.x version that was running before the failed upgrade attempt.

Downgrade Procedure (kubectl Installation)

If Longhorn was installed using kubectl, you must patch the current-longhorn-version setting before downgrading. Replace v1.9.x with the original version before upgrade in the following commands.

# Attaching annotation to allow patching current-longhorn-version.
kubectl patch settings.longhorn.io current-longhorn-version -n longhorn-system --type=merge -p='{"metadata":{"annotations":{"longhorn.io/update-setting-from-longhorn":""}}}'
# Temporarily override current version to allow old version installation
# Replace the value "v1.9.x" with the original version used before the upgrade.
kubectl patch settings.longhorn.io current-longhorn-version -n longhorn-system --type=merge -p='{"value":"v1.9.x"}'

After modifying current-longhorn-version, you can proceed to downgrade to the original Longhorn v1.9.x deployment.

Downgrade Procedure (Helm Installation)

If Longhorn was installed using Helm, you can allow the downgrade by setting the preUpgradeChecker.upgradeVersionCheck value to false.
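A sketch of that forced downgrade, assuming the chart was installed as release `longhorn` in `longhorn-system`; substitute the exact 1.9.x chart version you were running:

```shell
helm upgrade longhorn longhorn/longhorn \
  --namespace longhorn-system \
  --version 1.9.x \
  --set preUpgradeChecker.upgradeVersionCheck=false
```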

Post-Downgrade

Once the downgrade is complete and the Longhorn system is stable on the v1.9.x version, you must immediately follow the steps outlined in the Migration Requirement Before Longhorn v1.10 Upgrade. This step is crucial to migrate all remaining v1beta1 CRs to v1beta2 before attempting the Longhorn v1.10 upgrade again.

Important Fixes

This release includes several critical stability and performance improvements:

Goroutine Leak in Instance Manager (V2 Data Engine)

Fixed a goroutine leak in the instance manager when using the V2 data engine. This issue could lead to increased memory usage and potential stability problems over time.

For more details, see Issue #​11962.

V2 Volume Attachment Failure in Interrupt Mode

Fixed an issue where V2 volumes using interrupt mode with NVMe disks could fail to complete the attachment process, causing volumes to remain stuck in the attaching state indefinitely.

In Longhorn v1.10.0, interrupt mode supports only AIO disks. Interrupt mode for NVMe disks is supported starting in v1.10.1.

For more details, see Issue #​11816.

UI Deployment Failure on IPv4-Only Nodes

Fixed a bug introduced in v1.10.0 where the Longhorn UI failed to deploy on nodes with only IPv4 enabled. The UI now correctly supports IPv4-only configurations without requiring IPv6.

For more details, see Issue #​11875.

Share Manager Excessive Memory Usage

Fixed excessive memory consumption in the share manager for RWX (ReadWriteMany) volumes. The component now maintains stable memory usage under normal operation.

For more details, see Issue #​12043.

Installation

[!IMPORTANT]
Ensure that your cluster is running Kubernetes v1.25 or later before installing Longhorn v1.10.1.

You can install Longhorn using a variety of tools, including Rancher, Kubectl, and Helm. For more information about installation methods and requirements, see Quick Installation in the Longhorn documentation.

Upgrade

[!IMPORTANT]
Ensure that your cluster is running Kubernetes v1.25 or later before upgrading from Longhorn v1.9.x to v1.10.1.

Longhorn only allows upgrades from supported versions. For more information about upgrade paths and procedures, see Upgrade in the Longhorn documentation.

Post-Release Known Issues

For information about issues identified after this release, see Release-Known-Issues.

Resolved Issues

Improvement
  • [BACKPORT][v1.10.1][IMPROVEMENT] The auto-delete-pod-when-volume-detached-unexpectedly should only focus on the kubernetes builtin workload. 12125 - @​derekbit @​chriscchien
  • [BACKPORT][v1.10.1][IMPROVEMENT] CSIStorageCapacity objects must show schedulable (allocatable) capacity 12036 - @​chriscchien @​bachmanity1
  • [BACKPORT][v1.10.1][IMPROVEMENT] improve error logging for failed mounting during node publish volume 12033 - @​COLDTURNIP @​roger-ryao
  • [BACKPORT][v1.10.1][IMPROVEMENT] Improve Helm Chart defaultSettings handling with automatic quoting and multi-type support 12020 - @​derekbit @​chriscchien
  • [BACKPORT][v1.10.1][IMPROVEMENT] Avoid repeat engine restart when there are replica unavailable during migration 11945 - @​yangchiu @​shuo-wu
  • [BACKPORT][v1.10.1][IMPROVEMENT] Adjust maximum of GuaranteedInstanceManagerCPU to a big value 11968 - @​mantissahz
  • [BACKPORT][v1.10.1][IMPROVEMENT] Add usage metrics for Longhorn installation variant 11795 - @​derekbit
Bug
Misc

Contributors
