Michael Mattsson

Introducing Kubernetes CSI Sidecar Containers from HPE

August 25, 2020

With the release of the upcoming HPE CSI Driver for Kubernetes version 1.3.0, Hewlett Packard Enterprise (HPE) introduces the concept of Container Storage Interface (CSI) extensions to the CSI driver using Kubernetes CSI sidecar containers. This concept is not foreign to anyone familiar with the CSI architecture, as most major new features are implemented as sidecars in a true microservice architecture. Services are loosely coupled and communicate over a UNIX socket using a high-speed Remote Procedure Call (RPC) interface, gRPC, for secure and reliable communication.

The interface allows third parties to write extensions to their drivers to expose a particular storage platform's differentiating features where it's difficult to conceive a broad-stroke feature in a vendor-neutral manner. It also makes it possible to leapfrog SIG Storage (the Kubernetes Special Interest Group for storage) for features currently in the discovery or design phase, if customer demand is prioritized over standardization.
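
To make the pattern concrete, below is a minimal, hypothetical sketch of how such a sidecar is typically packaged: it runs in the same pod as the CSI driver and reaches the driver's gRPC endpoint through a UNIX socket on a shared volume. The image names, container names and socket path are placeholders for illustration only, not the actual HPE deployment manifests.

---
# Hypothetical illustration: a CSI driver and an extension sidecar in the
# same pod, sharing the driver's gRPC endpoint over a UNIX socket on a
# shared emptyDir volume. All names and paths are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: csi-controller-example
spec:
  containers:
  - name: csi-driver
    # Placeholder image; the driver serves gRPC on a socket in socket-dir.
    image: example.com/csi-driver:latest
    volumeMounts:
    - name: socket-dir
      mountPath: /var/lib/csi/sockets/pluginproxy
  - name: csi-extension-sidecar
    # Placeholder image; the sidecar talks to the driver over the same
    # socket rather than over the network.
    image: example.com/csi-extension-sidecar:latest
    args:
    - "--csi-address=/var/lib/csi/sockets/pluginproxy/csi.sock"
    volumeMounts:
    - name: socket-dir
      mountPath: /var/lib/csi/sockets/pluginproxy
  volumes:
  - name: socket-dir
    emptyDir: {}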


The first (yes, there are quite a few in the works) CSI sidecar is a volume mutator. It allows end-users to alter their PersistentVolumeClaims (PVCs) at runtime, even while the PersistentVolume (PV) is mounted and serving a workload. Which attributes are mutable depends on the backend Container Storage Provider (CSP) being used, and which attributes an end-user is allowed to alter is controlled by the Kubernetes cluster administrator through the StorageClass.

Let's go through an example of how you could put the volume mutator to work using the HPE Nimble Storage CSP.

Mutating persistent volume claims

With the CSI driver deployed and an HPE Nimble Storage backend configured, it's good to understand which attributes are mutable. You'll find the supported parameters for the respective CSP on the HPE Storage Container Orchestrator Documentation (SCOD) portal. For reference, the table below lists the currently mutable attributes.

| Attribute | Type | Description |
|---|---|---|
| destroyOnDelete | Boolean | Used to control deletion of the volume in the backend after PV removal |
| description | Text | Volume description |
| folder | Text | Place volume into an existing folder |
| limitIops | Integer | Change IOPS limits on volume |
| limitMbps | Integer | Change throughput limits on volume |
| performancePolicy | Text | Change performance policy for volume (within the same block size) |
| dedupeEnabled | Boolean | Enable/disable deduplication on volume |
| thick | Boolean | Thick/thin provisioning of volume |
| syncOnDetach | Boolean | Controls whether a snapshot of the volume is synced to the replication partner each time it is detached from a node |

For the purposes of this example, let’s assume we want to allow users to be in control of a few storage attributes. We will also allow them to override the parameters during creation of the PVC. Overriding parameters during creation is a cornerstone feature that has been part of the HPE primary storage solution since the FlexVolume days.

Create a default StorageClass with the allowOverrides and allowMutations parameters set to allow certain performance tuning.

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  name: hpe-standard
provisioner: csi.hpe.com
parameters:
  csi.storage.k8s.io/fstype: xfs
  csi.storage.k8s.io/controller-publish-secret-name: hpe-backend
  csi.storage.k8s.io/controller-publish-secret-namespace: kube-system
  csi.storage.k8s.io/node-publish-secret-name: hpe-backend
  csi.storage.k8s.io/node-publish-secret-namespace: kube-system
  csi.storage.k8s.io/node-stage-secret-name: hpe-backend
  csi.storage.k8s.io/node-stage-secret-namespace: kube-system
  csi.storage.k8s.io/provisioner-secret-name: hpe-backend
  csi.storage.k8s.io/provisioner-secret-namespace: kube-system
  description: "Volume created by the HPE CSI Driver for Kubernetes"
  allowOverrides: description,limitIops,limitMbps,performancePolicy
  allowMutations: description,limitIops,limitMbps,performancePolicy

Note: The volume mutator sidecar depends on the "csi.storage.k8s.io/provisioner-secret-name" and "csi.storage.k8s.io/provisioner-secret-namespace" parameters to mutate volumes.

Next, create a PVC with the following .metadata.annotations:

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-data
  annotations:
    csi.hpe.com/description: This is my volume description
    csi.hpe.com/limitIops: "10000"
    csi.hpe.com/limitMbps: "200"
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 32Gi

Switching over to the backend array, you can see that the volume was created with the desired overrides.

Nimble OS $ vol --info pvc-2d1795ec-7bce-4af8-b841-437a435f29e1 | egrep -iw 'description|iops|throughput|performance'
Description: This is my volume description
Performance policy: default
IOPS Limit: 10000
Throughput Limit (MiB/s): 200

Note: The volume name may be retrieved with kubectl get pvc/my-data.
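
For instance, the PV name can be read directly from the claim's .spec.volumeName field with a JSONPath query:

kubectl get pvc/my-data -o jsonpath='{.spec.volumeName}'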

Let's edit the object definition. This can be done with kubectl edit, or you can create a YAML file (saved here as my-data-boost.yaml) and subsequently patch the PVC.

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-data
  annotations:
    csi.hpe.com/description: Need more oomph!
    csi.hpe.com/performancePolicy: double-down
    csi.hpe.com/limitIops: "50000"
    csi.hpe.com/limitMbps: "1000"
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 32Gi

Patch the PVC.

kubectl patch pvc/my-data --patch "$(cat my-data-boost.yaml)"

Back on the array, you can see that the attributes have changed.

Nimble OS $ vol --info pvc-2d1795ec-7bce-4af8-b841-437a435f29e1 | egrep -iw 'description|iops|throughput|performance'
Description: Need more oomph!
Performance policy: double-down
IOPS Limit: 50000
Throughput Limit (MiB/s): 1000

Since the .spec.csi.volumeAttributes the PV was created with are immutable, the latest successful mutations are instead recorded as annotations on the PV.

apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    csi.hpe.com/description: Need more oomph!
    csi.hpe.com/limitIops: "50000"
    csi.hpe.com/limitMbps: "1000"
    csi.hpe.com/performancePolicy: double-down
    pv.kubernetes.io/provisioned-by: csi.hpe.com
...
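
The recorded annotations can also be inspected directly with a JSONPath query against the PV from this example:

kubectl get pv pvc-2d1795ec-7bce-4af8-b841-437a435f29e1 -o jsonpath='{.metadata.annotations}'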

Further adjustments may be performed at any time, at any stage of the PV's lifecycle.

Use cases

Given the gamut of options for the HPE Nimble Storage CSP, there are a number of creative ways to accelerate certain use cases that require runtime tuning of storage characteristics.

Performance management

As in the example above, throttling volumes to adhere to certain performance characteristics is by far the most common use case, especially if there's a cost associated with the performance limits. The use case can be extended further by allowing users to move volumes between folders on the Nimble array, such as Gold, Silver and Bronze, each with different performance caps; a sketch follows below. Certain restrictions apply; see the documentation for more information.
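
As an illustration only: assuming the cluster administrator has added folder to allowMutations in the StorageClass and a folder named "gold" already exists on the array (both assumptions for this sketch), a user could request the move with a single annotation change:

kubectl annotate pvc/my-data csi.hpe.com/folder=gold --overwrite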

Data reduction changes

Compression and deduplication may be desirable for the initial ingest of a dataset, but future churn could affect workload performance, so data reduction capabilities may be toggled at will. The need might also arise during runtime to prioritize space reservation; toggling thin provisioning with the thick parameter may be used to control the reservation.
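
For illustration, assuming dedupeEnabled and thick have been added to allowMutations in the StorageClass, deduplication could be switched off and the volume converted to thick provisioning after the initial ingest:

kubectl annotate pvc/my-data csi.hpe.com/dedupeEnabled=false csi.hpe.com/thick=true --overwrite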

Data migration control

When you need to transition a workload between clusters, it's practical to apply destroyOnDelete: "false" and syncOnDetach: "true" to the backend volume. This ensures the replica destination gets updated with the latest data from the source when the workload is destaged. Retaining the volume on the array when the Kubernetes objects are cleaned out of the source namespace is also necessary if the replica destination is configured to reverse the replication after the transition.
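
A minimal sketch, assuming both attributes are included in allowMutations in the StorageClass, of marking the source PVC before the transition:

kubectl annotate pvc/my-data csi.hpe.com/destroyOnDelete=false csi.hpe.com/syncOnDetach=true --overwrite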

It will be exciting to see what other use cases will surface from the installed base with this new capability!

Next steps

The HPE CSI Driver for Kubernetes version 1.3.0 will become available in the next few weeks. StorageClasses may then be created with the allowMutations parameter and the CSI volume mutator may be used without any further tweaks.

Watch the HPE Developer Community for future exciting updates to the HPE CSI Driver for Kubernetes!
