Deploying an application across a variety of environments can be a daunting task, though containers are arguably proving to be a big part of the solution. Containers still have to be deployed somewhere, though. We believe, as do many others, that infrastructure as code is the right way to approach this architectural challenge… but what if you want to deploy your applications to AWS and OpenStack and vSphere, and you don’t want to maintain a separate codebase for each environment?

Canonical’s ‘Juju’ system attempts to answer exactly that question. Juju packages your applications as a series of charms, or small self-contained services, connects those charms together into a bundle, and then deploys that bundle into an environment, which could be Vagrant on your local machine, an OpenStack cloud, AWS, Azure, or even VMware vSphere or Joyent.

Given EMC {code}’s current focus on storage persistence, I dug in to see how Juju’s storage subsystem works from an operations perspective, and was pleased to discover that Juju does indeed have built-in methods for storage abstraction. This is important because it lets us see how other groups are approaching common storage use cases, and because it lets us explore opportunities for future expansion of our own storage abstraction projects. Unfortunately, I found that Juju does not yet have the tools in place to manage persistent volumes in a modern and manageable fashion.

Getting started with Juju was relatively painless – I used AWS as my environment substrate, and ran ‘juju generate-config’ to create ~/.juju/environments.yaml with my AWS access and secret keys. Once that was done, I was ready to bootstrap my environment using ‘juju bootstrap’.

Bootstrapping Juju
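
For anyone following along, the AWS-relevant piece of that generated file looks roughly like the snippet below. The region and the placeholder credentials are illustrative; ‘juju generate-config’ actually produces a much larger, heavily commented file covering every provider:

```yaml
# ~/.juju/environments.yaml (sketch; only the AWS-relevant keys are shown)
default: amazon

environments:
  amazon:
    type: ec2
    region: us-east-1            # illustrative region
    access-key: YOUR-ACCESS-KEY  # placeholder
    secret-key: YOUR-SECRET-KEY  # placeholder
```

With the credentials filled in, bootstrapping is a matter of:

```sh
juju switch amazon   # if 'amazon' isn't already the default environment
juju bootstrap
```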

The bootstrap process creates a virtual machine in AWS and installs the Juju substrate where charms can be deployed. Here we’re going to deploy MongoDB:

[Screenshot: deploying the MongoDB charm]
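
For readers without the screenshot, the command being run there is, at its core, the standard Charm Store deploy:

```sh
juju deploy mongodb
```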

…and that’s it!

The bootstrap brought up an AWS instance to act as “machine 0,” the master control node for the environment, as well as a second instance to host the MongoDB service. Running ‘juju status’ returns a list of the machines (the AWS instances) and a list of the services running in the environment. From here we should be able to use Juju to provision and attach storage to the instances… but this is where things began to fall apart.

Up until early 2015, storage was handled by a storage broker charm called, unsurprisingly, ‘storage’. Using this charm, attaching storage to a service meant writing a configuration file defining the parameters of the storage, deploying the storage charm into your environment, and then creating a relationship between the service charm and the storage charm with ‘juju add-relation’. Scripting hooks within the charm would pick up on the change and perform any steps needed for the service to use the new storage – stopping and restarting a daemon, for instance.
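
As a rough sketch of that older workflow (the ‘juju deploy’ and ‘juju add-relation’ commands are standard Juju, but the configuration keys shown are illustrative assumptions rather than the storage charm’s exact schema):

```sh
# Sketch of the deprecated storage-broker workflow; the config keys are
# illustrative assumptions, not the charm's exact schema.
cat > storage-config.yaml <<EOF
storage:
  provider: ebs      # block-storage backend (assumption)
  volume_size: 10    # size in GB (assumption)
EOF

juju deploy storage --config storage-config.yaml
juju add-relation mongodb storage
```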

The storage charm and the methods around it were deprecated with Juju 1.25, the latest major release. As of 1.25, the way to manage storage is Juju’s built-in storage support:

[Screenshot: Juju’s built-in storage commands]
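
The screenshot showed the ‘juju storage’ family of commands. As a quick sketch (subcommand names have shifted between Juju releases, so treat these as illustrative):

```sh
juju storage list        # storage instances known to the environment
juju storage pool list   # the configured storage pools
```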

Juju can now provision and attach storage to charms – with a big caveat: the charms have to be modified to use the new storage format, and as far as I could determine the vast majority of charms in the Charm Store have not yet been updated. This brings to light one major point about Juju and charms: while Juju has a great many pre-existing charms in the Charm Store, if you’re planning to use Juju to deploy your own applications, sooner or later you’re going to have to learn to build (or at least modify) charms.

Fortunately, that’s actually pretty simple – to test it out I downloaded the source for the MongoDB charm and modified it by adding a “storage” definition to the bottom of the charm’s ‘metadata.yaml’ file, with a couple of subsections called “mongodata” and “mongologs”:

[Screenshot: the storage stanza added to metadata.yaml]
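
The stanza I added looked something like the following. The mount locations are my own choices for this test, and the exact set of supported fields depends on the Juju release, so treat this as a sketch rather than a canonical schema:

```yaml
# Appended to the MongoDB charm's metadata.yaml (sketch)
storage:
  mongodata:
    type: filesystem
    location: /srv/mongodb/data   # mount point chosen for this test
  mongologs:
    type: filesystem
    location: /srv/mongodb/logs   # mount point chosen for this test
```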

With that done, I can now deploy the charm from the local copy, using a slightly-modified deploy command, instructing Juju to provision two 10G filesystems in the default storage pool and attach them to the MongoDB instance:

[Screenshot: deploying the modified MongoDB charm with storage]
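
Without the screenshot, the deploy command looked roughly like this. The repository path and charm series are assumptions from my setup, and the ‘name=10G’ form asks for a 10G filesystem from the default pool:

```sh
# Deploy the locally modified charm with two 10G filesystems from the
# default storage pool (repository path and series are assumptions).
juju deploy --repository=~/charms local:trusty/mongodb \
    --storage mongodata=10G --storage mongologs=10G
```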

…and after waiting a minute or two for Juju to bring up a new AWS instance to deploy MongoDB onto, we can check the status of that instance with ‘juju status.’ This will show us the instance up and running:

[Screenshot: ‘juju status’ showing the MongoDB service up and running]

We can also see the storage that Juju provisioned in AWS: two 8G volumes acting as root volumes for the Juju machines, plus the two 10G volumes that were just provisioned by the ‘juju deploy’ command.

[Screenshot: the EBS volumes in the AWS console]
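
If you prefer the command line to the AWS console, something like this lists the same volumes, assuming the AWS CLI is installed and configured for the same account and region:

```sh
# List EBS volume IDs, sizes and states (requires a configured AWS CLI)
aws ec2 describe-volumes \
    --query 'Volumes[].[VolumeId,Size,State]' --output table
```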

However, when I removed the MongoDB service from operation using ‘juju remove-service mongodb,’ the newly-created volumes were also removed, with no warning and no option to preserve them.  Of course, I could manually snapshot the volumes in AWS, but this seems like a place where human error could easily be an issue.
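
For completeness, the manual workaround would be a snapshot taken before running ‘juju remove-service’; the volume ID below is a placeholder:

```sh
# Snapshot a data volume before tearing the service down (the volume ID
# is a placeholder; look it up with 'aws ec2 describe-volumes' first).
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 \
    --description "mongodata before juju remove-service"
```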

As of the current version, Juju’s storage abstraction layer can configure and populate storage pools, and deploy volumes with a good amount of control over size, type and pool. Using hooks written into a charm, Juju can perform scripted actions when the storage is attached or detached, and the charmed instance can retrieve information about the attached storage, such as the name and location of the filesystem.
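
In charm terms, that hook support looks roughly like the sketch below. The hook naming pattern (‘<storage-name>-storage-attached’) and the ‘storage-get’ tool are part of Juju’s charm storage interface; the body of the script is purely illustrative:

```sh
#!/bin/sh
# hooks/mongodata-storage-attached (illustrative sketch)
# Juju runs this hook when the 'mongodata' storage is attached; the
# storage-get tool reports details such as the mount location.
set -e

MOUNT_POINT=$(storage-get location)
juju-log "mongodata storage attached at ${MOUNT_POINT}"

# Point MongoDB at the new filesystem and bounce the daemon, i.e. the
# "stop, reconfigure, restart" pattern described above.
sed -i "s|^dbpath=.*|dbpath=${MOUNT_POINT}|" /etc/mongodb.conf
service mongodb restart
```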

Juju cannot currently detach a volume from one service and reattach it to another, nor can a volume persist after the service is destroyed; this functionality is planned for the future, but as of today it is still under development.