Today is a big day for cloud native storage. The community has been busy defining the Container Storage Interface (CSI), a new specification that universally enables volume functionality across the storage ecosystem. The initiative is critical to cloud native because it cements the role of storage in the space and halts the fragmentation that has occurred. But making it successful requires much more, and a bit of patience, from the community.

For this, we have some exciting news to report. See Josh’s blog discussing CSI and the release of a couple of key on-premise CSI drivers: a ScaleIO driver for bare metal and a vSphere driver for virtualized cloud native environments. These CSI drivers are 0.1 releases, but they are necessary for practical application of the specification. Another announcement for the community is the availability of the GoCSI package, which aims to accelerate CSI adoption and provide a foundation for building CSI drivers.

Demonstrating them in action is easy thanks to the GoCSI client tool called csc. This tool lets us simulate the calls that a container platform would make for volume orchestration and lifecycle management. For example, with a running ScaleIO cluster we can issue GetCapacity (c get-capacity) and CreateVolume (c create) calls to create a new volume from an established pool with specific properties. Following this, we can issue a combination of ControllerPublishVolume (c publish) and NodePublishVolume (n publish) calls to make the volume available on a host.
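A minimal session along these lines shows the flow; the endpoint, volume name, and flag syntax are illustrative and depend on the csc release:

```
# Point csc at the ScaleIO driver's gRPC endpoint (socket path illustrative).
export CSI_ENDPOINT=unix:///var/run/csi/csi-scaleio.sock

# How much capacity remains in the storage pool?
csc controller get-capacity

# Carve out a new volume from the pool ("vol-01" is an illustrative name).
csc controller create vol-01

# Attach the volume to this node, then expose it at a target path.
csc controller publish vol-01
csc node publish --target-path /mnt/vol-01 vol-01
```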

Notice the separation of Controller and Node calls. This ensures that a storage platform that supports centralized management (as ScaleIO does) can consolidate these privileged calls into a single endpoint. The following figure illustrates the two modes of operation, Headless and Centralized:
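In command form, the difference is simply where each class of call is sent; the endpoints below are assumptions for illustration:

```
# Centralized: privileged controller calls go to one management endpoint,
CSI_ENDPOINT=tcp://scaleio-gateway:7979 csc controller publish vol-01

# while each node handles only its own NodePublishVolume.
CSI_ENDPOINT=unix:///var/run/csi/csi.sock csc node publish --target-path /mnt/vol-01 vol-01

# Headless: every node runs the full driver, so controller and node calls
# share the same local endpoint.
CSI_ENDPOINT=unix:///var/run/csi/csi.sock csc controller publish vol-01
CSI_ENDPOINT=unix:///var/run/csi/csi.sock csc node publish --target-path /mnt/vol-01 vol-01
```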

The vSphere driver enables stateful cloud native applications on the entire vSphere-supported storage vendor ecosystem, including VMware vSAN, VMware’s HCI offering. In the near future, VMware’s Project Hatchway plans to support all leading container orchestrators by leveraging this CSI driver. This has many benefits, including the ability to bring a wealth of vSphere storage capabilities directly to cloud native applications. As container orchestrators adopt CSI, Project Hatchway will also expand to support both Cloud Foundry and Mesos.

We can run through an identical scenario using the vSphere driver, again driven entirely by csc. In this case the driver operates in a headless manner, making controller requests locally at each node, straight to the hypervisor through the VMCI (Virtual Machine Communication Interface).
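A hedged sketch of that flow on a single VM follows; the binary name and socket path are assumptions:

```
# Run the driver locally on each VM; it reaches the hypervisor over VMCI.
CSI_ENDPOINT=unix:///var/run/csi/csi-vsphere.sock ./csi-vsphere &

# With no central controller, both call classes resolve locally.
export CSI_ENDPOINT=unix:///var/run/csi/csi-vsphere.sock
csc controller create vol-01
csc controller publish vol-01
csc node publish --target-path /mnt/vol-01 vol-01
```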

CSI also enables containerized workloads that were not previously possible by allowing raw block devices to be attached directly to a container. This means that a volume, perhaps exposed at /dev/xvda, can be handed to a container without first being mounted as a filesystem, deferring management of the disk and/or filesystem to the container itself. As of today, no container orchestrators support this capability, but it is being implemented, and both csi-vsphere and csi-scaleio already support it.
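On the wire this is just a different access type in the volume’s capabilities. With a later csc release the request looks roughly like this (the --cap flag syntax is an assumption for 0.1):

```
# Mount access type: a filesystem is created and mounted for the workload.
csc controller create --cap SINGLE_NODE_WRITER,mount,xfs vol-fs

# Block access type: the raw device is handed straight to the container,
# which manages the disk and/or filesystem itself.
csc controller create --cap SINGLE_NODE_WRITER,block vol-raw
```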

Both of these drivers are great for the ecosystem, enabling more choice when building cloud native environments on-premise. To take them for a spin, check out each project’s README for csi-scaleio and csi-vsphere.

If you’re interested in building your own driver, GoCSI is for you! The package serves to validate and test the spec, but it also includes a usable library, client, and other helpful Go utilities that aid in the development of CSI drivers. It captures not only the code but also the experience and lessons learned by the team, providing a great starting point. Below is a screenshot where we invoke make csi-sp to create a working driver template from scratch. Easy enough?
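To try it yourself, the flow is roughly the following; the repository import path and the generated binary’s location are assumptions:

```
# Fetch GoCSI and generate the sample storage plugin from its template.
go get -d github.com/codedellemc/gocsi
cd $GOPATH/src/github.com/codedellemc/gocsi
make csi-sp

# Run the generated driver and poke it with csc.
CSI_ENDPOINT=unix:///tmp/csi-sp.sock ./csi-sp &
CSI_ENDPOINT=unix:///tmp/csi-sp.sock csc controller get-capacity
```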

If you are unfamiliar with CSI, check out earlier blog posts on why CSI is important and a quick analysis of the spec itself.

What does the future hold? It is going to take some time for these implementations to mature. With time, it is expected that CSI will generally supersede other extensibility options such as the Docker Volume Driver interface, Kubernetes’ internal volume plugins, and Kubernetes FlexVolume.

Keep your eyes on the {code} team in the near future, as we will soon be introducing more CSI drivers through REX-Ray. The 0.12 release will bring CSI 0.1 support for all of its drivers. With CSI, REX-Ray’s role focuses on providing features and functionality not declared by, but complementary to, the CSI specification. This includes streamlined packaging, value-added features, and interoperability with previous interfaces, including Docker Volume Drivers, Kubernetes FlexVolume, and the CLI.