Kubera Cloud v2.2.0
Release Summary
The Kubera Cloud v2.2.0 release emphasizes improvements in Kubera SaaS. The Kubera SaaS onboarding and landing user experience has been completely revamped, the cluster connection experience has been enhanced by making Kubera's footprint as light as possible, and the Kubera SaaS subscription plans are now more user-friendly. OpenEBS ZFS Local PV adds support for incremental Backup and Restore through enhancements to the OpenEBS Velero Plugin. The Kubernetes custom resources for managing cStor Backup and Restore have been promoted to v1. The SPC to CSPC migration feature has been enhanced with multiple usability fixes, such as support for migrating multiple volumes in parallel.
Kubera Director
-- New features and enhancements:
- Supports installation of the latest version of OpenEBS Enterprise Edition (v2.2.0).
- Supports upgrades to the latest version of OpenEBS Enterprise Edition (v2.2.0).
- Kubera SaaS subscription plans have become more user-friendly.
- All feature restrictions in the Basic plan have now been removed.
- Users of the Basic plan (free-forever tier) will now be able to upgrade and manage their OpenEBS installation, provision cStor pools, and create storage classes.
- Basic plan users will also be able to take backups and perform restores, enable the topology view, and use application analytics.
- The active cluster connection limit in the Basic plan has been reduced from 3 to 2.
- Features of Standard and Enterprise plans remain unchanged.
- The Kubera SaaS onboarding and landing user experience has been completely revamped.
- Users now get an intuitive feel for the product as part of their onboarding.
- Project and cluster landing dashboards now guide users on their journey in managing stateful workloads on Kubernetes.
- Users begin their journey by connecting and activating their Kubernetes cluster, then navigate through installing and managing OpenEBS on the cluster, discovering block devices, creating pools and storage classes, and finally viewing analytics by installing a demo MinIO application that uses OpenEBS persistent storage.
- The cluster connection experience has been enhanced by making Kubera's footprint as light as possible.
- When a user connects a cluster, only the bare minimum set of agent components will now be installed in the maya-system namespace.
- These components are needed to discover the various Kubernetes resources and manage OpenEBS.
- All the monitoring and logging agents will now be installed upon user selection.
- Users can now enable or disable monitoring for volumes or pools.
- Users can now enable or disable off-cluster log collection for a namespace, application, or kubelet.
- Users can now enable or disable the cluster topology view.
-- Key bug fixes:
- Fixed an issue due to which topology view was not working for the latest schema-based CStor pools and their instances.
- Fixed an issue due to which topology view was not working for the CStor CSI volumes.
- Fixed an issue due to which snapshots of cStor CSI volumes could not be taken.
- Fixed an issue due to which a user got highly restricted access after canceling the Standard subscription.
- Fixed an issue due to which the CSI drivers were not getting installed in the openebs namespace on GKE clusters with master version 1.17 or above.
OpenEBS Enterprise Edition
-- New features and enhancements:
- OpenEBS ZFS Local PV adds support for Incremental Backup and Restore by enhancing the OpenEBS Velero Plugin. For detailed instructions to try this feature, please refer to this doc.
- OpenEBS Mayastor instances now expose a gRPC API, which is used to enumerate block disk devices attached to the host node, as an aid to the identification of suitable candidates for inclusion within storage Pools during configuration. This functionality is also accessible within the mayastor-client diagnostic utility. For further details on enhancements and bug fixes in Mayastor, please refer to Mayastor release notes.
- Enhanced the Velero Plugin to restore OpenEBS ZFS Local PV into a different cluster or a different node in the cluster. This feature depends on the Velero velero.io/change-pvc-node: RestoreItemAction feature. openebs/velero-plugin#118.
- The Kubernetes custom resources for managing cStor Backup and Restore have been promoted to v1. This change is backward compatible with earlier resources and transparent to users. When the SPC resources are migrated to CSPC, the related Backup/Restore resources on older volumes are also upgraded to v1. openebs/upgrade#59.
- Enhanced the SPC to CSPC migration feature with multiple usability fixes, such as support for migrating multiple volumes in parallel (openebs/upgrade#52) and the ability to detect changes in the underlying virtual disk resources (BDs) and automatically update them in the CSPC (openebs/upgrade#53). Prior to this release, the user needed to manually update the BDs when migrating to CSPC.
- Enhanced the Velero Plugin to use custom certificates for S3 object storage. openebs/velero-plugin#115.
- Enhanced cStor Operators to allow users to specify the name of the new node for a previously configured cStor Pool. This helps in scenarios where a Kubernetes node is replaced with a new node to which the block devices from the old node, containing the cStor Pool and volume data, can be attached. openebs/cstor-operators#167.
- Enhanced NDM OS discovery logic for nodes that use /dev/root as the root filesystem. openebs/node-disk-manager#492.
- Enhanced NDM OS discovery logic to support excluding multiple devices that could be mounted as host filesystem directories (a sample filter configuration is sketched below). openebs/node-disk-manager#224.
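The exclusion in the last item is driven by NDM's filter configuration. The following is a minimal sketch, assuming the default ConfigMap name (openebs-ndm-config) and namespace from a standard OpenEBS install; the exclude field takes a comma-separated list of mount points whose backing devices NDM should ignore.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: openebs-ndm-config        # default name in the OpenEBS chart; may differ in your install
  namespace: openebs
data:
  node-disk-manager.config: |
    filterconfigs:
      - key: os-disk-exclude-filter
        name: os disk exclude filter
        state: true
        # comma-separated mount points; devices backing these paths are skipped by NDM
        exclude: "/,/etc/hosts,/boot"
```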
-- Key bug fixes:
- Fixed an issue where NDM could cause data loss by creating a partition table on an uninitialized iSCSI volume. This can happen after a node reboot, due to a race condition between the NDM pod initializing and the iSCSI volume initializing, if the iSCSI volume is not fully initialized when NDM probes for device details. This issue has been observed with NDM 0.8.0 released with OpenEBS 2.0 and has been fixed in the OpenEBS 2.1.1 and OpenEBS 2.2.0 (latest) releases.
-- Major Limitations and Notes:
- For a more comprehensive list of open issues uncovered through e2e and community testing, please refer to open issues. If you are using the cStor Storage Engine, please review the following before upgrading to this release.
- The recommended approach for deploying cStor Pools is using the new custom resource called cStorPoolCluster (CSPC). Even though the provisioning of cStor Pools using StoragePoolClaim(SPC) is supported, it will be deprecated in future releases. The pools provisioned using SPC can be easily migrated to CSPC.
- When using cStor Pools, make sure that raw block devices are available on the nodes. If a block device is formatted with a filesystem or mounted, then a cStor Pool will not be created on it. In the current release, there are manual steps that can be followed to clear the filesystem or use partitions for creating cStor Pools; please reach out to the community (#openebs) at https://slack.k8s.io.
- If you are using cStor pools with ephemeral devices, then starting with 1.2, the cStor Pool will not be automatically re-created on the new devices upon node restart. This check has been put in place to make sure that nodes are not accidentally restarted with new disks. The steps to recover from such a situation are provided here, and involve changing the status of the corresponding CSP to Init.
- Capacity over-provisioning is enabled by default on the cStor pools. If you don’t have alerts set up for monitoring the usage of the cStor pools, the pools can become fully utilized and the volumes can get into a read-only state. To avoid this, set up resource quotas as described in #2855 (a sample ResourceQuota is sketched below).
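One way to keep over-provisioned pools from filling up is a namespace-level ResourceQuota scoped to the cStor StorageClass. The sketch below is illustrative and assumes a hypothetical StorageClass name (openebs-cstor-csi) and namespace; refer to #2855 for the project's recommended settings.

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: cstor-capacity-quota
  namespace: demo                  # namespace where the application PVCs are created
spec:
  hard:
    # total capacity that PVCs using this StorageClass may request in the namespace
    openebs-cstor-csi.storageclass.storage.k8s.io/requests.storage: 100Gi
    # optionally cap the number of PVCs against this StorageClass
    openebs-cstor-csi.storageclass.storage.k8s.io/persistentvolumeclaims: "10"
```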
Kubera v2.1.0
Release Summary
The Kubera v2.1.0-ee release emphasizes improvements in CSI-based cStor storage engines. It includes the provisioning of Storage Class for cStor and upgrading OpenEBS from the community to enterprise edition through Kubera. The enhanced supportability features of Mayastor and ZFS Local PV(Beta) are remarkable. Significant improvements have been made in NDM for supporting (and better handling) of partitions and virtual block devices across reboots.
Kubera Director
-- New features and enhancements:
- Supports installation of the latest version of OpenEBS Enterprise Edition (v2.1.0).
- Supports upgrades to the latest version of OpenEBS Enterprise Edition (v2.1.0).
- Supports installation of iSCSI client modules alongside OpenEBS Enterprise Edition installation or upgrade. Supported platforms are Ubuntu, CentOS, RHEL, and Amazon Linux.
- Added support to stream the outputs of “kubectl describe” for pods, applications, storage-classes, block-devices and PVs, and “kubectl logs” for all the OpenEBS and Kubera agent pods. This will enhance supportability.
- Added support to perform “ssh” into Kubernetes nodes.
- Cassandra is now a supported application whose monitoring metrics can be collected and visualized.
- Added support to discover and display all kinds of persistent volumes (PVs).
- Added support for deletion of a Storage Class of CAS type CStor.
- Enhanced the in-product tour and quick-start tutorial.
- Logging and monitoring resiliency enhancements.
- Agent lifecycle controller enhancements for on-demand updates instead of polling for the next update interval.
-- Key bug fixes:
- Fixed an issue due to which the count of healthy and unhealthy components was showing incorrect values on the Project and Cluster landing pages.
- Fixed an issue which was limiting the display to the first hundred applications and volumes.
- Fixed an issue due to which a duplicate storage class was getting created.
- Fixed an issue related to database constraint which was resulting in volume details not getting stored in the database.
OpenEBS Enterprise Edition
-- New features and enhancements:
- OpenEBS ZFS Local PV adds support for Backup and Restore by enhancing the OpenEBS Velero Plugin. For detailed instructions to try this feature, please refer to this doc.
- OpenEBS Mayastor continues its momentum by enhancing support for Rebuild and other fixes. For detailed instructions on how to get started with Mayastor please refer to this Quickstart guide.
- Enhanced the Velero Plugin to perform Backup of a volume and Restore of another volume to run simultaneously.
- Added a validation to restrict OpenEBS Namespace deletion if there are pools or volumes configured. The validation is added via Kubernetes admission webhook.
- Added support to restrict the creation of cStor Pools (via CSPC) on Block Devices that are tagged (or reserved); a sketch of a tagged BlockDevice follows this list.
- Enhanced NDM to automatically create a block device tag on the discovered device if the device matches a certain path name pattern.
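Both the tagging restriction and the automatic tagging above revolve around the openebs.io/block-device-tag label on BlockDevice resources. A minimal sketch is shown below; the BlockDevice name and tag value are hypothetical, and in practice the label is usually applied with kubectl to an existing BlockDevice or added automatically by NDM when the device path matches a configured pattern.

```yaml
apiVersion: openebs.io/v1alpha1
kind: BlockDevice
metadata:
  name: blockdevice-0123456789abcdef     # hypothetical BlockDevice name
  namespace: openebs
  labels:
    # tagged (reserved) devices are skipped during CSPC pool creation
    openebs.io/block-device-tag: reserved
```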
-- Key bug fixes:
- Fixes an issue where local backup and restore of cStor volumes provisioned via CSI were failing.
- Fixes an issue where a cStor CSI Volume remount would fail intermittently when the application pod is restarted or after recovering from a network loss between the application pod and the target node.
- Fixes an issue where BDC cleanup by NDM would cause a panic if the bound BD was manually deleted.
-- Major Limitations and Notes:
- Capacity over-provisioning is enabled by default on the cStor pools. If you don’t have alerts set up for monitoring the usage of the cStor pools, the pools can be fully utilized and the volumes can get into a read-only state. To avoid this, set up resource quotas as described here.
- Provisioning of cStor Pools using StoragePoolClaim (SPC) is supported, but it will be deprecated in future releases. The pools provisioned using SPC can be easily migrated to cStorPoolCluster (CSPC); a sample migration Job is sketched after this list.
- When using cStor Pools, make sure that raw block devices are available on the nodes. If the block devices are formatted with a filesystem or mounted, then cStor Pool will not be created on the block device. In the current release, there are manual steps that could be followed to clear the filesystem or use partitions for creating cStor Pools.
- If you are using cStor pools with ephemeral devices, starting with 1.2 - upon node restart, cStor Pool will not be automatically re-created on the new devices. The steps to recover from such a situation are provided here.
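The SPC to CSPC migration mentioned above is typically run as a Kubernetes Job that invokes the migrate tool from the openebs/upgrade project. The following is a minimal sketch assuming the community image name, the default openebs namespace and service account, and a hypothetical SPC name; the Enterprise Edition image and your resource names will differ.

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: migrate-cstor-spc
  namespace: openebs
spec:
  backoffLimit: 4
  template:
    spec:
      serviceAccountName: openebs-maya-operator
      restartPolicy: OnFailure
      containers:
      - name: migrate
        image: openebs/migrate:2.1.0          # match the installed OpenEBS version
        args:
        - "cstor-spc"
        - "--spc-name=cstor-disk-pool"        # hypothetical SPC to migrate
        - "--v=4"                             # verbose logging
        env:
        - name: OPENEBS_NAMESPACE
          value: openebs
```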
Kubera v2.0.0
Release Summary
The Kubera v2.0.0-ee release emphasizes improvements in CSI-based cStor storage engines. It includes the provisioning of Storage Class for cStor and upgrading OpenEBS from the community to enterprise edition through Kubera. The enhanced supportability features of Mayastor and ZFS Local PV(Beta) are remarkable. Significant improvements have been made in NDM for supporting (and better handling) of partitions and virtual block devices across reboots.
Kubera Director
-- New features and enhancements:
- Supports installation of the latest version of OpenEBS Enterprise Edition (v2.0.0).
- Supports upgrade to the latest version of OpenEBS Enterprise Edition (v2.0.0).
- Supports provisioning a Storage Class for CAS type cStor. With this, a Kubernetes SRE running OpenEBS can perform end-to-end Day-0 operations, starting with the installation of OpenEBS, through provisioning cStor pools, to creating Storage Classes (a sample Storage Class is sketched after this list).
- Supports adopting a pre-installed OpenEBS configuration in the following scenarios:
- A user’s cluster is running the community edition of OpenEBS 2.0.0 and prior. Kubera provides a UI workflow to upgrade to the latest enterprise edition i.e. OpenEBS 2.0.0-ee for management and future upgrades.
- A user’s cluster is running an older enterprise edition of OpenEBS (1.12.0-ee and prior). Kubera provides a UI workflow to upgrade to the latest enterprise edition i.e. OpenEBS 2.0.0-ee for management and future upgrades.
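For reference, a Storage Class of the kind this workflow produces for CAS type cStor looks roughly like the sketch below; the CSPC name is a placeholder, and replicaCount should not exceed the number of pool instances in that CSPC.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cstor-csi-sc
provisioner: cstor.csi.openebs.io
allowVolumeExpansion: true
parameters:
  cas-type: cstor
  cstorPoolCluster: cstor-pool-cluster     # hypothetical CSPC backing the volumes
  replicaCount: "3"
```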
-- Key bug fixes:
- Fixed an issue with DMaaS where restore was being allowed for a partially failed backup.
- An enhancement fix to support namespace selection for OpenEBS CSI driver installation based on the Kubernetes version. Starting with OpenEBS 2.0.0-ee, if K8s version is < 1.17, the CSI drivers will be installed in the kube-system namespace. If K8s version >= 1.17, the CSI drivers will be installed in the same namespace as the other OpenEBS components.
- Fixed an issue with DMaaS where the backup transfer size was being reported as 0 bytes.
- An enhancement fix to display all DMaaS restores at a project level. This addresses the issue of missing restores in UI if the associated schedule was deleted by the user.
- Fixed user-audit issues in the UI.
- Fixed an issue with the loading state of buttons in the UI.
- Fixed a bug in updating the usage records for the Standard subscription.
OpenEBS Enterprise Edition
-- New features and enhancements:
- OpenEBS cStor provisioning with the new schema and CSI drivers has been declared as a beta version. For detailed instructions on how to get started with new cStor Operators please refer to the Quickstart Guide.
- Significant improvements to NDM in supporting (and better handling) of partitions and virtual block devices across reboots.
- OpenEBS Mayastor continues its momentum by adding support for Rebuild, NVMe-oF Support, enhanced supportability, and several other fixes. To know more please refer to this Quickstart Guide.
- Enhanced the Jiva target controller to track internal snapshots and reclaim space.
- Support for enabling/disabling the leader election mechanism, which involves interacting with the kube-apiserver.
- Continued focus on additional integration and e2e tests for all engines, along with more documentation.
- Note that existing StoragePoolClaim (SPC) pools will continue to function as-is, and support is available to migrate from the SPC schema to the new CSPC schema. In addition to supporting all the features of SPC-based cStor pools, CSPC (cStorPoolCluster) enables the following (a sample CSPC is sketched after this list):
- cStor Pool expansion by adding block devices to CSPC YAML.
- Replace a block device used within the cStor pool by editing the CSPC YAML.
- Scale up or scale down the cStor volume replicas by editing the cStor Volume Config YAML.
- Expand Volume by updating the PVC YAML.
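All of the operations above are expressed by editing the CSPC YAML (or, for the last two, the cStor Volume Config and PVC YAML). A minimal CSPC sketch with a single striped pool instance follows; the node hostname and BlockDevice name are hypothetical.

```yaml
apiVersion: cstor.openebs.io/v1
kind: CStorPoolCluster
metadata:
  name: cstor-pool-cluster
  namespace: openebs
spec:
  pools:
  - nodeSelector:
      kubernetes.io/hostname: worker-node-1            # node hosting this pool instance
    dataRaidGroups:
    - blockDevices:
      - blockDeviceName: blockdevice-0123456789abcdef  # add more entries here to expand the pool
    poolConfig:
      dataRaidGroupType: stripe
```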
-- Key bug fixes:
- Fixes an issue where NDM would fail to wipe the filesystem of the released sparse block device.
- Fixes an issue with the mounting of the XFS cloned volume.
- Fixes an issue where a PV with fsType: ZFS would fail if the capacity was not a multiple of the record size specified in the StorageClass.
-- Major Limitations and Notes:
- Provisioning of cStor Pools using StoragePoolClaim(SPC) is supported but it will be deprecated in future releases. The pools provisioned using SPC can be easily migrated to cStorPoolCluster (CSPC).
- When using cStor Pools, make sure that raw block devices are available on the nodes. If the block devices are formatted with a filesystem or mounted, then cStor Pool will not be created on the block device. For manual steps, please refer to this link.
- If you are using cStor pools with ephemeral devices, starting with 1.2 - upon node restart, cStor Pool will not be automatically re-created on the new devices. The steps to recover from such a situation are provided here.
- Capacity over-provisioning is enabled by default on the cStor pools. If you don’t have alerts set up for monitoring the usage of the cStor pools, the pools can be fully utilized and the volumes can get into a read-only state. To avoid this, set up resource quotas as described here.
Kubera v1.12.0
Release Summary
The Kubera v1.12.0 release emphasizes improvements in CSI-based cStor storage engines and DMaaS backups. It includes a new onboarding demo feature and a Premium Enterprise Subscription tier for users. The enhanced supportability features of Mayastor and ZFS Local PV (Beta) are remarkable. Support for upgrading CSI-based cStor pools and volumes makes the OpenEBS Enterprise Edition well-grounded and steadfast.
Kubera Director
-- New features and enhancements:
- Supports installation of the latest version of OpenEBS Enterprise Edition (v1.12.0).
- Supports upgrades to the latest version of OpenEBS Enterprise Edition (v1.12.0).
- Supports enabling the premium Enterprise subscription on Kubera. Users need to get in touch with sales/support to avail of the custom Enterprise subscription.
- Enabled a new onboarding demo feature wherein a user, after sign-up, is given the option to either connect a cluster or view a demo cluster.
- Supports listing of Storage Classes of different storage engines.
- Supports upgrade of cStor pools of kind CSPC/CSPI.
- Supports upgrade of cStor CSI-based volumes.
- Supports complete air-gapped installation of Kubera OnPrem.
- Kubera integration with new CRM - HubSpot.
- Rebranded the existing ‘Data-Motion schedules’ as ‘Backup Strategy’.
- Supports DMaaS for CSI-based cStor Volumes.
- Enhanced user experience of DMaaS to show:
- List of strategies and their configuration.
- Schedules created under a strategy, their retention duration, backup count under a schedule, and the last successful backup.
- Backups under a schedule, volume backup progress, the K8s resources that are part of a backup, backup expiry time, and a few other details.
-- Key bug fixes:
- Fixed an issue where PVC name was missing in the cluster volumes monitoring graphs.
- Fixed an issue where the Volumes link was pointing to the Application page in the Project Home dashboard.
- Fixed a multithreading issue due to which users couldn't log in.
- Fixed user-audit UI issues.
OpenEBS Enterprise Edition
-- New features and enhancements:
- Refactored and added multi-arch image generation support in the NDM repository.
- Enhanced NDM Operator to attach events to Block-device CR while processing BDC operations.
- Added support for btrfs as an additional FS Type.
- Added support for a shared mount on ZFS Volumes to support RWX use cases (see the StorageClass sketch after this list).
- Added support for Rebuild, NVMe-oF, and enhanced supportability for Mayastor.
- Declared ZFS Local PV as beta.
- Marked cStor CSI support as a complete feature.
- Specified the webhook validation policy to fail/ignore via ENV on admission server deployment.
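The shared-mount support mentioned above is driven by a StorageClass parameter on the ZFS Local PV provisioner. A minimal sketch with a hypothetical pool name follows; note that the volume remains node-local, so the sharing applies to pods scheduled on the same node.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-zfspv-shared
provisioner: zfs.csi.openebs.io
parameters:
  poolname: "zfspv-pool"       # hypothetical ZFS pool that must already exist on the node
  fstype: "zfs"
  shared: "yes"                # allows multiple pods on the same node to mount the volume
```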
-- Key bug fixes:
- Fixes a panic on maya-apiserver caused due to PVC names longer than 63 chars.
- Fixes an issue where the upgrade was failing due to some pre-flight checks when the maya-apiserver was deployed in HA mode.
- Fixes an issue where the upgrade was failing if the deployment rollout was taking longer than 5 minutes.
-- Major Limitations and Notes:
- Automatic selection of block-devices for deploying cStor has very limited support.
- cStor Pool will not be created on the block-device if it is formatted with a filesystem, partitioned, or mounted.
- cStor Pool will not be automatically re-created on the new devices upon node restart when using ephemeral devices.
- Capacity over-provisioning is enabled by default on the cStor pools.
- Migration features to easily migrate clusters to the new cStor schema are a work in progress.
Kubera v1.11.0
Release Summary
The Kubera v1.11.0 release features improvements in ZFS Local PV (now in beta), Mayastor (now in alpha), CSI-based cStor storage engines, and DMaaS backups. We have built on top of Kubera’s existing foundation in data agility. A notable highlight of this release is support for the installation of Mayastor using Kubera Director, which gives you quick access to Mayastor, an SPDK-based storage engine for performance-intensive workloads.
Kubera Director
-- New features and enhancements:
- Supports installation of the latest version of OpenEBS Enterprise Edition (v1.11.0).
- Status and replica-count fields added for CSI-based cStor volumes.
- Workload monitoring is now supported for the latest MinIO versions.
- Kubera Director UI now supports the latest version of Ember.js (v3.4).
- Pagination quantity is now available across all screens.
- Added subscription enhancements for adding webhook handler support in Kubera Director Online.
- Added detailed extended logging support for Github OAuth.
- Enhanced stability of subscription feature.
- Support for ‘Retention count’ feature to delete older backups at backup location.
- Added support for ‘schedule intervals’ for taking full backups of cStor based volumes.
- Added Cloudian as a supported backup-provider for DMaaS.
- Added protection for stateless and stateful applications that are using any kind of Kubernetes persistent volume (DMaaS).
-- Key bug fixes:
- Fixed issues related to OpenEBS Enterprise Edition installation workflow.
- Fixed an issue related to missing Google OAuth refresh token.
- Fixed an issue with UI table filters.
- Fixed an issue with OpenEBS Enterprise Edition components naming in Kubera OnPrem helm chart.
- Fixed a memory leak in the server.
-- Alpha Features:
- Added support for installation of Mayastor using the Kubera Director UI along with OpenEBS Enterprise Edition (advanced installation).
OpenEBS Enterprise Edition
-- New features and enhancements:
- Added support for Rebuild, NVMe-oF, enhanced supportability to Mayastor (alpha). Click here to read more.
- ZFS based Local PV storage engine moves to beta. Click here to read more.
- CSI plugin based cStor storage engine marked as feature-complete.
- Made enhancements to NDM filters to exclude unusable blockdevices.
- Enhanced readability for BlockDevice resources in kubectl commands by adding filesystem information.
- Add support to mount ZFS datasets using legacy mount property to allow for multiple mounts on a single node.
- Add additional automation tests for validating ZFS Local PV and cStor Backup/Restore.
-- Key bug fixes:
- Fixes an issue where volumes meant to be filesystem datasets got created as zvols due to the wrong letter case in a StorageClass parameter. The fix makes the StorageClass parameters case-insensitive.
- Fixes an issue where the read-only option was not being set on ZFS volumes.
- Fixes an issue where incorrect pool name or other parameters in Storage Class would result in stale ZFS Volume CRs being created.
- Fixes an issue where the user configured ENV variable for MAX_CHAIN_LENGTH was not being read by Jiva.
- Fixes an issue where cStor Pool was being deleted forcefully before the replicas on cStor Pool were deleted. This can cause data loss in situations where SPCs are incorrectly edited by the user, and a cStor Pool deletion is attempted.
- Fixes an issue where a failure to delete the cStor Pool on the first attempt will leave an orphaned cStor custom resource (CSP) in the cluster.
-- Major Limitations and Notes:
- Cloning feature is not available for XFS filesystem on CentOS for ZFS Local PV storage.
- Labels in CSPI pods show incorrect versions.
Kubera v1.10.0
Release Summary
This first-ever release of Kubera includes services and enterprise support plans from MayaData to improve the operations of Kubernetes as a data layer. Kubera includes a free-forever individual tier, as well as paid tiers. Kubera introduces the OpenEBS Enterprise Edition, a platform-validated, long-term support distribution of the open-source CNCF project OpenEBS and related components. The Enterprise Edition of OpenEBS brings automated life-cycle management of data-layer components in addition to all of the features of OpenEBS 1.10.0, along with rigorous e2e testing, enterprise-grade platform validations, and hardening. Kubera Director introduces a large number of usability improvements and application-focused workflows, including off-cluster logging and alerting and enhanced backups and disaster recovery. Kubera includes all of these features under a single license.
Highlights
Kubera’s key features:
- Enterprise-grade dataplane on Kubernetes with OpenEBS Enterprise
- Free-Forever tier so you can always manage your clusters and apps
- Live Chat with Kubera Engineers: 5 minute SLAs available
- Monitoring, Logging, and Reporting via Kubera Director
- Improvements to the Kubera Director and Enterprise Edition components
Kubera Director
-- New features and enhancements:
- Added basic and advanced UI-based provisioner for OpenEBS Enterprise Edition based dataplane using the Kubera Director GUI. Install OpenEBS simply with one click.
- Added GUI-based upgrade support for OpenEBS control plane components and cStor (OpenEBS storage engine) pool components, for assisted upgrades to existing pools.
- Added support for the creation of cStor Pool Clusters (CSPC) and deletion of SPC and CSPC pools using Kubera Director UI. Provision your storage through Kubera Director.
- Subscription and Billing management: Start using Kubera instantly
- Added support for OpenEBS Local PV (local disk and hostdir based storage engine) volume metrics.
- Added data protection feature for applications using all types of OpenEBS persistent volumes (DMaaS).
- Kubera OnPrem now comes with an easy-to-use integrated helm3-based installer for both OpenEBS Enterprise Edition 1.10.0-ee and Kubera Director OnPrem 1.10.0-ee. Click here to know more.
-- Key bug fixes:
- Fixed a UI bug where the ‘Upgrade’ button remained in view during an ongoing upgrade.
- Added check for specific components in install jobs.
- Optimized API calls for labelling nodes.
- Added check to wait until label values are reflected in the API.
OpenEBS Enterprise Edition
-- New features and enhancements:
- Replaced cStor orchestration components with new and improved cStorPoolCluster (CSPC) cStor pool and CSI driver based cStor volume provisioner.
- Added Grafana dashboard to monitor ZFS based Local Persistent Volumes (PV).
- Added increased granular control over the installation of OpenEBS custom resource components (CRD) using Helm v3.
- Improved upgrade framework for OpenEBS components.
- Improved cStor Volume Config operator to support reconciliation of changes for the volume-target deployment.
- Added refinements to OpenEBS Node Disk Manager’s (NDM) filter to optimize disk detection.
- Optimized ZFS based Local PV operations.
- Added flexibility in cStor (OpenEBS storage engine) backup and restore locations.
- Added support for Chaos testing with Litmus Chaos framework experiments (litmusbook) to validate Jiva’s (OpenEBS storage engine) successful logging.
- Command line tool ‘jivactl’ can now fetch volume rebuild time estimates when using Jiva storage engine.
- Added an advanced option to increase the cStor volume-replica timeout to optimize rebuild attempts in unstable network environments and on low-cost cloud vendors.
- Kubera E2E testing focuses on all aspects of the product instead of just functionality testing.
- End-to-end platform validation on Amazon Elastic Kubernetes Service (Amazon EKS), Google Kubernetes Engine (GKE), Azure Kubernetes Service (AKS), D2iQ Konvoy, Red Hat OpenShift, and Rancher.
- Added support for installation via Kubera Director.
- Critical fixes will be backported to the last two monthly releases and the last major stable release.
-- Key bug fixes:
- OpenEBS components have been assigned tighter RBAC permissions. (#2850)
- Fixed issue where the third Jiva replica takes longer to come to RW state. (#2999)
- Pool size increases when underlying disk capacity is increased. (#3001)
- Fixed issue where ‘jivactl info’ showed two children for a metafile. (#2965)
-- Major Limitations and Notes:
- Stale BlockDeviceClaim resource issue where OpenEBS storageClass openebs-device (Local PV storage engine) could not bind to a BlockDevice resource.
- OpenEBS cStor storage engine does not support NVMe disks with Gravitational Gravity clusters.
- Detachment and reattachment of virtual disks (e.g. VMware .vmdk files) is not supported for cStor and Local PV storage engines.
- Migration of cStor pools from one node to another is not supported.
-- Alpha Features:
- Added upgrade support for CSPC components.
- Added webhook validation for CSPC expansions.
- Added upgrade support for cStor CSI.
- Removed finalizer on Block Device Claim (BDC) CRs once the corresponding Block Device (BD) has been replaced from the CSPC pool.