Mayastor is a cloud-native, declarative data plane targeting the performance requirements of IO-intensive workloads. It is currently an Alpha feature, intended for early adopters and advanced users who want to experiment with new features before they are production-ready.
NOTE: Mayastor is not recommended for production workloads, as you may encounter breaking changes when the product moves from Alpha to a stable version.
- 2 x86-64 CPU cores with SSE4.2 instruction support:
Intel Nehalem processor (march=nehalem) and newer
AMD Bulldozer processor and newer
- 4GB memory
- The Mayastor DaemonSet must run with privileged mode enabled.
- The Mayastor DaemonSet (MDS) requires 2MB huge page support.
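The CPU requirement can be checked from a shell on each node. This is a minimal sketch that reads the feature flags the kernel exposes in /proc/cpuinfo; the `sse4_2` flag name is how the kernel reports SSE4.2 support:

```shell
# SSE4.2 appears as the "sse4_2" flag in /proc/cpuinfo
if grep -q -m1 'sse4_2' /proc/cpuinfo; then
    echo "SSE4.2: supported"
else
    echo "SSE4.2: NOT supported - Mayastor will not run on this node"
fi

# Number of logical CPUs (at least 2 are required)
nproc
```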
2MB Huge Pages must be supported and enabled on each storage node, with a minimum of 512 such pages (1 GiB in total) available per node.
To verify HugePage availability, execute:
```
grep HugePages /proc/meminfo
```

Sample output:

```
AnonHugePages:         0 kB
ShmemHugePages:        0 kB
HugePages_Total:    1024
HugePages_Free:      671
HugePages_Rsvd:        0
HugePages_Surp:        0
```

If the number of available pages is less than 512, configure the page count based on the requirements of other co-resident workloads. For example:

```
echo 512 | sudo tee /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
```
To make the change persistent across reboots, add the corresponding vm.nr_hugepages setting to /etc/sysctl.conf.
After modifying the huge page configuration, restart the kubelet or reboot the node so that the new capacity is picked up.
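To confirm the reservation took effect after the restart, the values can be read back without elevated privileges. This is a minimal sketch using only the /proc and /sys interfaces:

```shell
# Number of 2MB huge pages currently reserved by the kernel
# (the file is absent if the kernel lacks 2MB huge page support)
cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 2>/dev/null \
    || echo "2MB huge pages not supported on this kernel"

# Total vs. free pages as seen by meminfo; Total should be at least 512
grep -E 'HugePages_(Total|Free)' /proc/meminfo
```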
- Creation of Mayastor pools from block devices attached to a node.
- Mayastor Volume management using CSI drivers.
- Support for accessing Mayastor Volumes over iSCSI and NBD.
- Kubernetes Custom Resources for Mayastor Pools and Volumes; the Mayastor Storage Volume (MSV) resource reports the status of each volume.
- Support for Prometheus metrics and a sample Grafana dashboard.
- Workload protection via n-way synchronous replication.
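The pool creation feature listed above is driven by a custom resource. The sketch below assumes the alpha-era MayastorPool CRD from the Mayastor docs; the pool name, node name, and device path are placeholders:

```yaml
apiVersion: openebs.io/v1alpha1
kind: MayastorPool
metadata:
  name: pool-on-node-1        # placeholder pool name
  namespace: mayastor
spec:
  node: node-1                # Kubernetes node the block device is attached to
  disks: ["/dev/sdb"]         # placeholder device path
```

Applying such a manifest with kubectl creates the pool; its state is then reflected in the status of the corresponding custom resource.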
Targeted use cases:
- Low latency workloads for converged and segregated storage by leveraging NVMe/NVMe over Fabrics (NVMe-oF)
- Micro-VM based containers, such as Firecracker microVMs and Kata Containers, by providing storage over vhost-user
- Programmatic storage access, i.e. writing to block devices from within your application instead of making system calls
- Storage unification, lifting barriers so that you can deploy cloud-native apps on your existing storage without the data gravity issues that block progress and innovation