The Container Storage Interface (CSI) is making steady progress on mapping out how it will eventually look. If this is the first time you’ve heard about CSI, we recommend that you read The Container Storage Interface – according to Josh to understand the need behind it and why the initiative is happening. To sum it all up, the spec is meant “to define an industry standard ‘Container Storage Interface’ (CSI) that will enable storage vendors (SP) to develop a plugin once and have it work across a number of container orchestration (CO) systems.”


Let’s look at the technical details of the spec and what’s happening.


Storage providers for clouds and on-premises datacenters must provide two plugins, each with associated services. The Node Plugin must run on the worker nodes within a cluster where the provisioned volume will be used, while the Controller Plugin can run anywhere. Each of these plugins has specific roles and responsibilities that are defined as “services”.


A complete CSI driver can look a few different ways. The Node and Controller Plugins can be baked into the same binary, or broken out into separate binaries that each live in their respective places.

The plugins are divided into several services. The first is the Identity Service, which is unique because it must be implemented by both the Node and Controller Plugins. The Identity Service is really the “information” service: it responds to calls that ask for the supported spec versions as well as generic information about the plugin.
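To make this split concrete, here is a minimal, hypothetical Go sketch of the Identity Service and the single-binary layout described above. The interface and method names are illustrative stand-ins only; the real CSI services are gRPC services generated from the spec’s protobuf definitions.

```go
package main

import "fmt"

// IdentityServer is a hypothetical stand-in for the CSI Identity Service.
// Both the Node Plugin and the Controller Plugin must expose it so a CO can
// ask any plugin endpoint which spec versions it supports and for generic
// information about the plugin.
type IdentityServer interface {
	GetSupportedVersions() []string
	GetPluginInfo() (name, vendorVersion string)
}

// exampleDriver illustrates the single-binary layout: the same type could
// also implement the Controller and Node services sketched further below.
type exampleDriver struct{}

func (d *exampleDriver) GetSupportedVersions() []string  { return []string{"0.1.0"} } // assumed spec version
func (d *exampleDriver) GetPluginInfo() (string, string) { return "example.csi.driver", "0.0.1" } // hypothetical names

func main() {
	var id IdentityServer = &exampleDriver{}
	name, version := id.GetPluginInfo()
	fmt.Println(id.GetSupportedVersions(), name, version)
}
```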


The next two services require a more in-depth look, and each is unique to its particular plugin type.


The Controller Service is part of the Controller Plugin and is responsible for the API calls to the storage provider that perform volume lifecycle operations (a sketch of this service follows the list below).

  • CreateVolume
    • This will provision a new volume to be consumed as either a block device or a mounted filesystem.
    • The name of the volume is a string and must be unique. If the name already exists, the spec’s error recovery process takes place.
    • Capacity is an optional parameter for the size of the volume to be created; if omitted, it defaults to a standard capacity defined by the plugin.
    • Volume capabilities can be passed as well, indicating whether the volume will be accessed as block or file, along with mount options for file.
    • If you’re familiar with Kubernetes Access Modes, the same logic applies to define whether a volume can be accessed in a read/write or read-only mode by a single host or by multiple hosts.
  • DeleteVolume
    • This will deprovision a volume and completely delete it, making all data in the volume inaccessible.
  • ControllerPublishVolume
    • When a volume needs to be accessed by a particular node, this call performs any modifications needed to make the volume available to that node.
  • ControllerUnpublishVolume
    • When a volume is no longer needed by a particular node, this performs the inverse of ControllerPublishVolume, making the volume no longer available to the given node.
  • ValidateVolumeCapabilities
    • Before a volume is used, this checks whether a pre-provisioned volume has all the capabilities that the CO wants. Every requested volume capability must be supported for the call to succeed.
  • ListVolumes
    • This will return information on all the volumes that the plugin knows about.
  • GetCapacity
    • The CO can query the capacity of the storage pool from which the controller provisions volumes.
  • ControllerProbe
    • This allows the Controller Plugin to verify that it has the right configuration, devices, dependencies, and drivers in order to run the Controller Service.
  • ControllerGetCapabilities
    • Volumes have capabilities, of course, but the Controller Service has its own capabilities, provided by the plugin. For example, some Controller Plugins may not support create/delete. This call queries those capabilities and returns the information available.
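As a rough illustration of the surface area above, the sketch below mirrors the Controller Service RPCs as a plain Go interface. The types and signatures are simplified assumptions for readability, not the generated gRPC API from the spec.

```go
package csisketch

// VolumeCapability loosely models the block-vs-file and access-mode choices
// described above (illustrative only).
type VolumeCapability struct {
	Block      bool     // access the volume as a raw block device
	MountFlags []string // mount options when accessed as a filesystem
	AccessMode string   // e.g. single-node read/write, multi-node read-only
}

// VolumeInfo is a minimal stand-in for the information a plugin returns
// about a provisioned volume.
type VolumeInfo struct {
	ID            string
	CapacityBytes int64
}

// ControllerServer mirrors the volume lifecycle RPCs of the Controller
// Service; a Controller Plugin would implement something equivalent and
// translate each call into its storage provider's API.
type ControllerServer interface {
	CreateVolume(name string, capacityBytes int64, caps []VolumeCapability) (VolumeInfo, error)
	DeleteVolume(volumeID string) error
	ControllerPublishVolume(volumeID, nodeID string, readonly bool) error
	ControllerUnpublishVolume(volumeID, nodeID string) error
	ValidateVolumeCapabilities(volumeID string, caps []VolumeCapability) (supported bool, err error)
	ListVolumes() ([]VolumeInfo, error)
	GetCapacity() (availableBytes int64, err error)
	ControllerProbe() error
	ControllerGetCapabilities() ([]string, error)
}
```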


The Node Service is part of the Node Plugin and is responsible for mounting and unmounting volumes. This is why the Node Plugin must reside on each host that needs direct access to the volumes. Each of these RPC calls is executed on the node where the volume will be used (a sketch of this service follows the list below).

  • NodePublishVolume
    • When a container is scheduled on a host and wants to use a specific volume, this RPC is executed on that node. It could potentially be called multiple times on the same node for the same volume with a different target_path or different auth credentials. For the node to have access, it is common for ControllerPublishVolume to be called first.
  • NodeUnpublishVolume
    • This is the reverse of NodePublishVolume and also runs on the node where the volume is being used. This RPC undoes the work done by NodePublishVolume. If a volume is accessed by multiple nodes for reads or writes, this will be called at least once for each target_path that was successfully set up via NodePublishVolume on each node. As seen before, the ControllerUnpublishVolume RPC is used after the volume has been unmounted from the host.
  • GetNodeID
    • This will return the node ID of the host where the volume is going to be used. The result of this call is used by ControllerPublishVolume.
  • NodeProbe
    • This RPC call is used to examine whether the plugin has everything it needs on the node, such as binaries, kernel modules, drivers, etc. The result is success or failure depending on whether the node is ready to accept containers that use this particular CSI vendor plugin. This allows a container orchestrator to schedule workloads based on the availability of the plugin requirements.
  • NodeGetCapabilities
    • This will check the available capabilities of a node. Today, the spec defines no capability types; however, parameters can be returned based on plugin requirements.
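For symmetry, here is the same kind of hypothetical sketch for the Node Service, continuing the csisketch package from the Controller Service example (so it reuses the VolumeCapability type defined there). The trailing comment captures the call ordering implied above: ControllerPublishVolume before NodePublishVolume, and the reverse on teardown.

```go
package csisketch

// NodeServer mirrors the Node Service RPCs; every worker node that needs
// direct access to volumes runs a plugin implementing something like this.
type NodeServer interface {
	// NodePublishVolume mounts (or otherwise exposes) the volume at
	// targetPath on this node; it may be called multiple times for the
	// same volume with different target paths or credentials.
	NodePublishVolume(volumeID, targetPath string, capability VolumeCapability, readonly bool) error
	// NodeUnpublishVolume undoes NodePublishVolume for one target path.
	NodeUnpublishVolume(volumeID, targetPath string) error
	// GetNodeID returns the identifier the CO passes to
	// ControllerPublishVolume and ControllerUnpublishVolume.
	GetNodeID() (string, error)
	// NodeProbe reports whether the node has the binaries, kernel modules,
	// and drivers the plugin needs.
	NodeProbe() error
	// NodeGetCapabilities reports optional node-level capabilities.
	NodeGetCapabilities() ([]string, error)
}

// Assumed typical lifecycle, as described in this post:
//   CreateVolume -> ControllerPublishVolume(node) -> NodePublishVolume(target_path)
//   ... container uses the volume ...
//   NodeUnpublishVolume(target_path) -> ControllerUnpublishVolume(node) -> DeleteVolume
```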


This in-depth look at the CSI spec boils down the details of what is expected from a CSI driver and the architectural options available to it. If you’re interested in seeing what {code} is doing with CSI, REX-Ray 0.11 has been validated against the spec, and be sure to check out our repos to see the plugins and command line tools we’ve developed so far!


Want to know more about CSI? Check out the latest video from MesosCon 2017.