As prominent as microservices have become, there is a need to make sure network services communicate properly and that changes can be made safely and effectively. Techniques such as traffic analysis, blue/green testing, load balancing, circuit breaking, and more can be boiled down into this new networking model called a “service mesh”. For an introduction to it all, read William Morgan’s article What’s a service mesh? And why do I need one? from Buoyant. Now, as everyday conversation becomes filled with words like Linkerd, Istio, and Envoy, you won’t be as puzzled.

Many companies run internal load balancers, like HAProxy or F5, that assume responsibility for routing traffic between microservices. This can become cumbersome because those tools weren’t designed to handle inter-app communication, especially in cloud native applications at large scale. Linkerd replaces this with something tailored to the job.

As of the published date of this article, the Cloud Native Computing Foundation sponsors the Linkerd project at the “inception” stage, which means it has the potential to move up to the incubation stage. Envoy is currently at the incubation stage, so it’s worth noting there are two routes (no pun intended) to get started with service meshes.

To get a jumpstart on the overview, architecture, and technical jargon of Linkerd, consider watching Alex Leong give a Linkerd 101 talk and take a few minutes to peruse the documentation. As with learning any piece of software, keep in mind what the end state should look like. My goal in learning Linkerd is to eventually run it in a full mesh mode within Kubernetes. The docs show how to do this very easily; however, they don’t necessarily explain what’s happening. So this blog starts on a much smaller scale, using a single host with Docker installed, which simplifies the network and gives a basic understanding of packet flow.

A single host removes cumbersome networking that could involve things like CNI and port forwarding with Kubernetes. If you know your machine’s IP address, the subnet of the docker0 bridge, and how to expose a port with Docker, then the traffic path becomes very easy to understand.
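If you need to look these up, commands along these lines should work on a typical Linux host with the iproute2 tools installed (the exact output format varies by distribution):

# the host's primary IP address appears after "src" in the output
ip route get 1.1.1.1

# the docker0 bridge subnet (commonly 172.17.0.0/16)
ip -4 addr show docker0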

This simple demonstration uses NGINX in its most basic form, with Linkerd as a proxy to intercept and forward requests. Wrapping everything in containers keeps things clean, and no binaries need to be installed on your machine. Deploy the NGINX container with the docker command line, exposing port 8888 on the host and forwarding it to port 80 in the container:

docker run -d --name webtest -p 8888:80 nginx

Verify NGINX is accessible through the configured port of 8888 on localhost.
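A quick check from the command line (assuming curl is installed) should return the default NGINX welcome page:

curl -s http://localhost:8888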

The Docker examples miss some key configuration steps and don’t necessarily explain how the router works. If you found this article because Linkerd didn’t work as shown in those examples, it’s likely because an IP address is not specified in all the necessary places. Let’s take a look at the Linkerd configuration and what each component accomplishes.

The admin section is where the Admin UI is configured. The standard default is to specify port 9990 and an ip of 0.0.0.0 so it listens on all interfaces. routers is where the magic happens in this example. Linkerd can work with the http, http/2, thrift, and mux protocols. Since NGINX serves standard HTTP/1.1, the http protocol is noted in the config. It’s possible to have multiple routers all using the http protocol, so a label differentiates the routers and their landing pages within the Admin UI. The biggest learning curve is dtabs (delegation tables), which define the logical path of how traffic is forwarded. In this example, the default /svc prefix that the router assigns to incoming requests is rewritten into the /$/inet form, which Linkerd’s built-in inet namer resolves to a specific IP address and port for the deployed NGINX service.

In some cases you could use the local address, but since this is encapsulated with Docker, the machine’s IP needs to be used. This can be abstracted further using something like Consul or ZooKeeper, which could track multiple instances of NGINX for proper load balancing. Dtabs become increasingly advanced as prefix matching, namers with service discovery, and wildcards are introduced. The servers section defines how the router accepts incoming requests: this is the port being advertised as the proxy, and requests arriving on it are routed through the dtab. For this demo, requests are accepted on port 8080 and sent to NGINX on port 8888. This file will be called webtest.yml:

admin:
  port: 9990
  ip: 0.0.0.0

routers:
- protocol: http
  label: webtest
  dtab: /svc => /$/inet/<machine-ip>/8888;  # substitute your machine's IP address
  servers:
  - port: 8080
    ip: 0.0.0.0

Now it’s time to deploy Linkerd! Using the configuration file, the ports for the Admin UI and the new NGINX proxy service need to be exposed with Docker. In addition, the container needs access to the webtest.yml file, which is passed as an argument to the Linkerd entrypoint.

docker run --name linkerd -d -p 9990:9990 -p 8080:8080 -v `pwd`/webtest.yml:/config.yml buoyantio/linkerd:latest /config.yml
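Before opening a browser, it’s worth confirming the container stayed running and that Linkerd picked up the config; the log output should reference the admin port and the webtest router (exact log lines vary by Linkerd version):

docker ps --filter name=linkerd

docker logs linkerd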

Verify the Admin UI is available at http://localhost:9990.
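If you prefer the command line, a quick status check with curl should report a successful response from the admin server:

# expect a 200 after following any redirect
curl -sL -o /dev/null -w "%{http_code}\n" http://localhost:9990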

Now open a tab to http://localhost:8080 and the NGINX welcome screen will appear. Put this tab alongside the Linkerd Admin UI and verify there is 1 total connection. Repeatedly hit the refresh button to see the requests spike in real time.
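To generate a burst of traffic without hammering the browser’s refresh button, a quick shell loop works as well (each request hits Linkerd on port 8080, which forwards it to NGINX on port 8888):

for i in $(seq 1 100); do curl -s -o /dev/null http://localhost:8080/; done

The request counters in the Admin UI should climb as the loop runs.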

You just deployed your first Linkerd service functioning as a proxy for NGINX! This simple example sets a baseline for understanding traffic flow, port allocation, and more. It’s an integral step before diving into more advanced setups that involve additional routers, configuring namers and namerd, and a full mesh within Kubernetes.

Before going too far, be aware of HTTPS and TLS. Linkerd has the ability to use TLS, but certificates and untrusted domains can make it very difficult to troubleshoot. After discovering the linkerd-tcp project, I found it showed promise for proxying traffic like MySQL communication and HTTPS (not just HTTP). I was able to successfully use a combination of Linkerd, linkerd-tcp, and namerd to create another service mesh for proxying Dell EMC ScaleIO gateway traffic. Unfortunately, the linkerd-tcp project looks to be stalled and no longer getting attention: the Docker image is out of date, building from source is broken, and metrics can only be viewed by forwarding them to Prometheus. The next phase of Buoyant’s journey is a new service mesh called Conduit that is tailor-made for Kubernetes. Learn more about Conduit from The New Stack’s podcast at KubeCon 2017. Welcome to the fast-paced and grueling ecosystem of service meshes!

 

–EDIT–

Linkerd-tcp has recently been updated to improve parity.