Mesos is an important platform to consider if you’re interested in running containers in a highly available manner, operating an enterprise-friendly container platform, or building application platforms to operate complex distributed applications. It should be thought of as collaborative and complementary to the container ecosystem. For some, it will sit at the scheduling layer only; for others it will span from scheduling down to the container runtime. Mesos represents a new way of thinking about how we operate and consume data center resources.

The Mesos platform is often adopted when a data center is moving towards the following key points:

  • A homogeneous operating environment where all compute resources can run all workloads
    • Data center silos for workloads that aren’t virtualization friendly (e.g., Hadoop and Cassandra) can now be scheduled alongside other workloads
    • IaaS and virtualization are no longer needed to pool resources
  • Providing simple but highly available applications
    • Basic capabilities include schedulers for running container workloads and ensuring availability
  • Simplifying operating complex distributed applications
    • Applications that typically require operational runbooks can be dramatically simplified by encapsulating that logic in frameworks
  • Building advanced application platforms, without focusing on infrastructure
    • Frameworks provide a means to develop a custom application platform for persistent and complex micro-service based applications

If you are interested and looking for an easy way to get your hands on Mesos, Mesosphere and our external volume support, we’ve got you covered!

Mesosphere maintains a Vagrant package for creating a simple Mesos environment on your local laptop through playa-mesos.  EMC {code} has created a fork (PR in the works) and adapted it to support external volumes with VirtualBox.  The following image is a diagram of the configuration:


Get started by going to the EMC {code} playa-mesos forked repo. We’ll give you a quick run-through of the Quick Start listed in the repo. The first step is a `vagrant up` to start the necessary instances. It’s up to you to decide which instances get created through the config.json file.

[Screenshot: vagrant up output]

Once this operation completes, you have the nodes up and running. The number of instances, their size, and their roles are all configurable. For this demo we created five nodes: one dedicated to the master and framework role, and four slaves/agents with external volume support enabled. This screenshot shows the agent/slave nodes running:

[Screenshot: agent/slave nodes running in VirtualBox]
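For illustration, the instance layout above could be declared in config.json along the lines of the sketch below. The key names here are hypothetical; check the fork’s README for the actual schema:

```json
{
  "box_name": "playa_mesos_ubuntu",
  "instances": [
    { "name": "master", "role": "master", "memory": 2048 },
    { "name": "slave1", "role": "slave",  "memory": 1024 },
    { "name": "slave2", "role": "slave",  "memory": 1024 },
    { "name": "slave3", "role": "slave",  "memory": 1024 },
    { "name": "slave4", "role": "slave",  "memory": 1024 }
  ]
}
```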


The next step is to start a persistent application. Go to the Mesosphere UI (Marathon) in your browser.

This first example shows PostgreSQL as a persistent Docker container where volume support is provided through Docker natively. This is similar to running `docker run -ti --volume-driver=docker -v postgresdata:/var/lib/postgresql/data postgres`.

Within the “Docker container settings”, specify “postgres” as the Docker image (the official package from Docker Hub) and use “Parameters” to specify the external volume attributes. We’ve specified the following parameters:

  • volume-driver: docker
  • volume: postgresdata:/var/lib/postgresql/data

[Screenshot: PostgreSQL app configuration in Marathon]
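The same settings can also be submitted to Marathon as a JSON app definition instead of using the UI. This is a sketch; the id, cpus, and mem values are illustrative:

```json
{
  "id": "postgres",
  "cpus": 0.5,
  "mem": 512,
  "instances": 1,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "postgres",
      "parameters": [
        { "key": "volume-driver", "value": "docker" },
        { "key": "volume", "value": "postgresdata:/var/lib/postgresql/data" }
      ]
    }
  }
}
```

You could submit a definition like this with a POST to Marathon’s /v2/apps endpoint.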

Mesos also has a native container runtime through what it calls custom executors. Custom executors leverage Mesos’s own containerization capabilities and run applications through downloadable packages.
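As a sketch of that model, a Marathon app definition for the native containerizer can fetch a package via uris and launch it with a command. Everything here, including the URL and file names, is illustrative:

```json
{
  "id": "native-app",
  "cmd": "cd myapp && ./start.sh",
  "cpus": 0.5,
  "mem": 256,
  "instances": 1,
  "uris": [ "https://example.com/myapp.tar.gz" ]
}
```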

This next example demonstrates the support of external volumes through our mesos-module-dvdi project. Volumes are defined implicitly via environment variables, meaning volumes are created on a case-by-case basis and are not managed by Mesos. We’ve specified the following parameters:

  • ID: hello-play
  • Command: while [ true ] ; do touch /var/lib/rexray/volumes/test12345/hello ; sleep 5 ; done
  • DVDI_VOLUME_NAME: test12345
  • DVDI_VOLUME_OPTS: size=5

[Screenshot: hello-play app configuration in Marathon]
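Expressed as a Marathon JSON app definition, these settings would look roughly like the following (the cpus and mem values are illustrative):

```json
{
  "id": "hello-play",
  "cmd": "while [ true ] ; do touch /var/lib/rexray/volumes/test12345/hello ; sleep 5 ; done",
  "cpus": 0.1,
  "mem": 32,
  "instances": 1,
  "env": {
    "DVDI_VOLUME_NAME": "test12345",
    "DVDI_VOLUME_OPTS": "size=5"
  }
}
```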

Once those two applications launch successfully, you will see a Running status in Marathon. As a general note, the postgres application may take longer to start the first time on each host, since the postgres image must be downloaded.

[Screenshot: applications showing Running status in Marathon]

You can also check VirtualBox to see that the media was attached properly to the instances.

[Screenshots: volumes attached to the instances in VirtualBox]

The next step is to look at how we can provide availability for the persistent applications. There are two primary areas of focus to achieve this:

  • Marathon detects failures of applications and slaves, and is responsible for rescheduling applications
  • REX-Ray not only delivers volumes to containers, but also forcefully detaches and reattaches volumes between hosts during failure scenarios, which we call “pre-emption”
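Pre-emption is a REX-Ray configuration option. As a sketch, a config.yml enabling it might look like the following; the exact key layout varies by REX-Ray version, so treat this as an assumption and consult the REX-Ray docs:

```yaml
rexray:
  storageDrivers:
  - virtualbox
  volume:
    mount:
      # forcefully detach a volume from a failed host before attaching it elsewhere
      preempt: true
```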

Take a look at the following depiction of this scenario:


To kick this off, we can simulate an agent/slave failure. Take a look at the postgres application details to determine which node it is running on. The example screenshot below shows it running on slave1.

[Screenshot: postgres application details in Marathon]

Next, go to VirtualBox and Close -> Power Off the instance. After about sixty seconds, you should see the node removed from the Mesos agent list.

[Screenshot: Mesos agent list after node removal]

In Marathon, you should also see that the application status has changed to Staged.

[Screenshot: application status in Marathon showing Staged]

Note: Running docker pull postgres ahead of time on all of the nodes will reduce staging time when failing over; it should then take only seconds.

Following this, the application should return to Running once the failover has completed.

External volume support makes Marathon and containers a great fit for traditional persistent applications. Have you heard of VMware High Availability (HA)? This is essentially HA for persistent containers: instead of rebooting VMs, we restart containers and dynamically attach their volumes to new hosts.

[Screenshot: failed-over application running in Marathon]

If you are interested in learning more about Mesos, Mesosphere, and Marathon, here are some online resources:

  • Online Mesosphere test drive
  • Our overview and bootcamp material and video
  • Available frameworks are listed by Apache here
  • The fork of playa-mesos is available here