Kubernetes 1.9.0, including CSI support, has been released! This allows Kubernetes to use CSI drivers based on the upcoming Container Storage Interface 0.1.0 specification. CSI is currently behind an alpha Kubernetes feature gate, so it isn’t something I recommend testing in production just yet. To help make things easier, I’ve put together a Vagrant environment that builds a 3-node Kubernetes cluster running v1.9.0 with all of the required CSI components enabled. The environment also includes a ScaleIO cluster to provide a storage target to test with. Feel free to follow along below in your own demo environment if you’re up for it (you’ll need Git, Vagrant, and VirtualBox installed to do so).

In Kubernetes’ CSI implementation, the in-tree CSI volume plugin is designed to be minimal, so that creating and maintaining the CSI integration stays mostly separate from Kubernetes itself. To help achieve this goal, external Kubernetes CSI attacher and provisioner components have been introduced. This allows those components to be updated incrementally as CSI matures without having to upgrade your Kubernetes binaries. From a CSI perspective, Kubernetes has implemented the centralized model as described by the CSI specification. The implementation involves the Kubernetes controller and the new external CSI attacher and provisioner communicating with the CSI identity, node, and controller endpoints.
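To make the split concrete, here is a rough sketch of what a CSI-backed PersistentVolume object looks like under the 1.9 alpha API. The driver name and volume handle are illustrative placeholders; in the walkthrough below you never write this object yourself, because the external provisioner creates it for you.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-csi-pv
spec:
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  csi:
    driver: csi-scaleio        # placeholder: the name the CSI driver registers with
    volumeHandle: vol-12345    # placeholder: the opaque ID the driver uses to find the volume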

There isn’t an easy button yet for getting Kubernetes up and running with a CSI driver. For now we have built an example using Vagrant to deploy and integrate both Kubernetes and Dell EMC ScaleIO.

In this example, we will set up a demo environment with Vagrant and VirtualBox. We will provision the attacher, provisioner, and driver components, and finally we’ll put them to use with a simple Redis deployment. Do note that a lot of the CSI components are new, so while everything should work as expected, it’s good to expect some bugs here and there. Over the coming months as we journey on toward Kubernetes 1.10, the CSI implementation should become easier to work with and much more stable. We look forward to having others in the community help with this process. Feel free to drop into our Slack community and join the #project-csi, #kubernetes, and #scaleio channels to discuss your thoughts and questions.

Warning: This environment takes significant resources to operate successfully – you will need at least 8GB of memory. If you have a bunch of windows open, you may need 12-16GB.

Step 1:

Clone our Vagrant repo and create the Kubernetes and ScaleIO cluster

git clone https://github.com/thecodeteam/vagrant

cd vagrant/kubernetes

vagrant up

Note: Bringing up the environment should take about 10-30 minutes depending on your system performance and internet speed.

Once complete, VirtualBox should show the three cluster VMs up and running (a master and two workers):

Step 2:

  • SSH into the Kubernetes master by running vagrant ssh master.
  • Get a list of the Kubernetes nodes by running kubectl get nodes; you should see 2 workers.
  • You will also see kube-dns running if you get a list of pods by running kubectl get pods --all-namespaces (or -n kube-system).

Step 3:

  • Install the csi-scaleio components (the attacher, provisioner, and node drivers) by running kubectl create -f csi-scaleio. This will create everything defined in the yaml files within the csi-scaleio directory.
  • Feel free to view them here.
  • You can check the status via kubectl get pods -o wide.
  • You will notice that the node drivers run on both worker nodes; this is because they are deployed with a DaemonSet spec, which schedules a pod on every node (sketched below).
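For reference, the node driver piece of those manifests is, roughly, a DaemonSet along these lines. The names, image, and paths here are illustrative placeholders rather than the exact contents of the csi-scaleio yaml files:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: csi-scaleio-node                 # placeholder name
spec:
  selector:
    matchLabels:
      app: csi-scaleio-node
  template:
    metadata:
      labels:
        app: csi-scaleio-node
    spec:
      containers:
        - name: driver
          image: example/csi-scaleio:latest              # placeholder image
          volumeMounts:
            - name: plugin-dir
              mountPath: /var/lib/kubelet/plugins/csi-scaleio
      volumes:
        - name: plugin-dir
          hostPath:
            path: /var/lib/kubelet/plugins/csi-scaleio   # conventional location for the driver’s socket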

Step 4:

  • Get a list of Storage Classes by running kubectl get sc.
  • Note that a StorageClass was created for you in the last step. You can find the yaml definition here.
  • Create a Persistent Volume Claim (PVC) by running ./pvc-create.sh vol01.
  • This script creates an 8GiB PVC with the name you provide as an argument. This is your request for storage resources (both the StorageClass and the resulting claim are sketched below).

The CSI provisioner will take care of creating a Persistent Volume (PV) and binding it to the PVC you created.
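If you don’t want to dig through the repo just yet, the two objects involved look roughly like this. The StorageClass and provisioner names are assumptions on my part, so check the actual yaml files for the exact values; the claim name and size match what pvc-create.sh vol01 produces:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: csi-scaleio              # assumed name; see the repo for the real one
provisioner: csi-scaleio         # must match the driver the external provisioner serves
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: vol01                    # the argument you passed to pvc-create.sh
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: csi-scaleio  # assumed; must reference the StorageClass above
  resources:
    requests:
      storage: 8Gi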

Step 5:

  • Create a redis deployment by running kubectl create -f redis01.yaml (a rough sketch of this manifest appears after this list).
  • You can check the status of the redis pod by running kubectl get pods -o wide -l app=redis.
  • This will show any pods with a label app=redis, and will also display which node each is running on.
  • You can also see the attachment request by opening http://192.168.50.11:8080/apis/storage.k8s.io/v1alpha1/volumeattachments
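For reference, a redis01.yaml along these lines would produce what the steps above describe. The container image and most of the names are assumptions; the deployment name, the app=redis label, the claim name, and the /data mount path come from this walkthrough:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis01
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
        - name: redis
          image: redis:alpine            # assumed image
          volumeMounts:
            - name: data
              mountPath: /data           # where the ScaleIO volume shows up in Step 6
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: vol01             # the PVC created in Step 4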

Step 6:

  • Exec into the redis container by running kubectl exec <pod_name> -it bash.
  • Once in the pod, run lsblk 2>/dev/null.
  • You will notice that you have a device named scinia mounted at the /data directory. This is your ScaleIO volume, which was created, mapped, and mounted thanks to the csi-scaleio driver you deployed.

Step 7:

  • Run redis-cli, check for keys by running keys *, set a key via set this_data is_persistent, force a database save via save, and exit via exit.

Step 8:

  • Delete the pod via kubectl delete pod <pod_name>.
  • If you get the status of the pods again via kubectl get pods -o wide -l app=redis, you might be able to catch both the old pod being deleted and the new one in the ContainerCreating phase. Once the status of the new pod is “Running”, continue to the next step.
  • You can also see the updated attachment request by opening http://192.168.50.11:8080/apis/storage.k8s.io/v1alpha1/volumeattachments
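The objects behind that URL are VolumeAttachment resources from the storage.k8s.io/v1alpha1 API. Each one looks roughly like the sketch below; the attacher, node, and PV names are placeholders:

apiVersion: storage.k8s.io/v1alpha1
kind: VolumeAttachment
metadata:
  name: csi-0123456789abcdef            # placeholder; the real names are generated
spec:
  attacher: csi-scaleio                 # driver expected to handle the attach
  nodeName: k8s-node-1                  # placeholder: node the volume should attach to
  source:
    persistentVolumeName: pvc-0a1b2c3d  # placeholder: the PV bound to your claim
status:
  attached: true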

Step 9:

  • Run kubectl exec -it <pod_name> redis-cli.
  • If you get a list of keys via keys *, you should see the key you set.
  • You can also get that key and see the value you set for it.
  • You’ve now launched a new pod with the same persistent volume created and managed by the csi-scaleio driver.
  • Exit out via exit.

Step 10:

  • Clean up the deployment and pvc by running kubectl delete deploy redis01 and kubectl delete pvc vol01.
  • Note that if you didn’t delete the pvc, you could re-use it with a new deployment or any other pod for that matter.

Step 11:

  • Feel free to browse the environment, and when you’re ready to clean up, simply exit from the vagrant ssh session and run vagrant destroy -f.

Excellent, you’ve made it! Thank you to the CSI team and the Kubernetes CSI implementers from (in alphabetical order) {code}, Dell EMC, Diamanti, Docker, Google, Mesosphere, Portworx, and Red Hat.