Using Docker to run HashiCorp Vault and Consul locally, together with Packer and Terraform, so you can practice without paying Amazon for the 100-hour learning curve.

In the real world our applications will be deployed to multiple environments. Developers usually test their applications locally using some homespun mechanism. Then (in larger organisations) the application will be released to an integration or system-test environment. Finally the application is made live.

Again, in larger organisations there will be secrets that the company does not want developers to know, but which are vital to the running of the application: passwords, certificates, tokens and so on.

So how do you provide your application with these secrets at runtime? And are those secrets the same on all environments?

There are many solutions, from the release team manually editing properties and config files on the live server, to complicated database solutions.

In the DevOps world we have tools like Salt. Often this substitution of secrets per environment is part of the release team's automation and features a lot of sed commands *shudders*.

But HashiCorp provide two technologies for storing configuration. These are servers which are available in every environment (test, integration, live, and build). The servers store data, from simple key-value pairs to whole files, and this data can be accessed via a REST API. Admins can edit the stored data using a lovely web-based front end. The idea is that whenever you need a secret or some config data, at build time, during live runtime, or wherever, you call the relevant REST API and fetch it.
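As a taste of that REST API: Consul's KV endpoint returns values base64-encoded. Here is a minimal sketch of decoding one; the response body below is a hand-written sample of the shape `GET /v1/kv/some-name` returns, not a live call.

```shell
#!/bin/bash
#sample response body, in the shape Consul's GET /v1/kv/<key> endpoint returns
#(in real use: response="$(curl -s http://localhost:8500/v1/kv/some-name)")
response='[{"Key":"some-name","Value":"dGFiaXRoYQ=="}]'
#the Value field is base64 encoded, so extract and decode it
value="$(echo "$response" | grep -o '"Value":"[^"]*"' | cut -d '"' -f 4 | base64 -d)"
echo "$value"   #prints: tabitha
```

In real use you would point curl at whichever host the Consul agent is running on; the key names here match the `consul kv put some-name tabitha` example later in this post.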

Consul is intended to hold non-sensitive config values, such as URLs or port numbers. Vault is intended to hold secrets, such as passwords.

Both applications are essentially single executable files (I believe they're written in Go) which function as either a server (when configured as one) or a client. So you install Consul on a server and configure it as a server, then install it on your build or live environment as a client.

I’m going to give you some bits of Bash and Packer which show how we do this on localhost using Docker. With any luck they’ll give you enough of an example to do this yourself.

We’re also creating our own Docker registry on localhost so that we can store completed images for later amendment, without using multi-stage Docker builds or Docker Compose (e.g. FROM ubuntu:16.04 AS jetty).

Bash script to configure Docker Registry

#!/bin/bash
#test if docker has the given image and is running a container with the same name
function isDockerContainerRunning() {
   if [ -z "$(docker images | grep $1)" ]; then
      echo "the docker image $1 is not available locally"
      return 1
   fi
   if [ -z "$(docker container ls | grep $1)" ]; then
      echo "the docker container $1 is not running"
      return 1
   fi
   return 0
}

#is our local docker registry running?
if ! isDockerContainerRunning registry; then
   #create a docker registry
   echo "creating local docker registry http://localhost:5000/v2/_catalog"
   docker run -d -p 5000:5000 --restart=always -e REGISTRY_STORAGE_DELETE_ENABLED=true --name registry registry:2
   #to stop the registry :
   #docker container stop registry && docker container rm -v registry
   #curl http://localhost:5000/v2/_catalog
fi
if ! isDockerContainerRunning registry; then
   echo "failed to find or start a docker registry on localhost"
   exit 1
fi

Run and configure Consul

#!/bin/bash
#script to create configure and run HashiCorp Consul on localhost using docker
#test if docker has the given image and is running a container with the same name
function isDockerContainerRunning() {
   if [ -z "$(docker images | grep $1)" ]; then
      echo "the docker image $1 is not available locally"
      return 1
   fi
   if [ -z "$(docker container ls | grep $1)" ]; then
      echo "the docker container $1 is not running"
      return 1
   fi
   return 0
}

if ! isDockerContainerRunning consul; then
   docker run -d -p 8500:8500 -p 8600:8600/udp --name=consul consul
   #the consul CLI reads the server address from CONSUL_HTTP_ADDR
   export CONSUL_HTTP_ADDR=127.0.0.1:8500
   #give the container a moment to start before writing a value
   sleep 5
   consul kv put some-name tabitha
fi
if ! isDockerContainerRunning consul; then
   echo "failed to find and start HashiCorp consul image on localhost"
   exit 1
fi
#the consul CLI installed on the machine running this script needs to
#know where to look for the consul server, so use an environment variable
export CONSUL_HTTP_ADDR=127.0.0.1:8500

Run and configure Vault

#!/bin/bash
#run initialise and configure HashiCorp Vault locally using Docker
#test if docker has the given image and is running a container with the same name
function isDockerContainerRunning() {
   if [ -z "$(docker images | grep $1)" ]; then
      echo "the docker image $1 is not available locally"
      return 1
   fi
   if [ -z "$(docker container ls | grep $1)" ]; then
      echo "the docker container $1 is not running"
      return 1
   fi
   return 0
}
#extract the token from a file placed in /tmp by an earlier process
#and export it as an environment variable in this shell
function exportVaultToken() {
  if [ ! -f /tmp/vault.login ]; then
    echo "failed to find the vault.login file in /tmp"
    exit 1
  else
    localtoken="$(cat /tmp/vault.login | grep '^token  ' | awk '{print $2}' | xargs)"
    export VAULT_TOKEN="$localtoken"
  fi
}

#test whether vault has already been initialised with our secrets
function isVaultInitialised() {
  readmsg="$(vault read secrets/myee | grep 'redis.url')"
  if [ ! "$readmsg" ]; then
    return 1
  fi
  return 0
}

function initVault() {
  #init vault and write output to disk
  (docker exec vault vault operator init) > /tmp/vault.init
  #unseal the vault using the keys created during initialisation
  cat /tmp/vault.init | grep '^Unseal' | awk '{print $4}' | for key in $(cat -); do
     (docker exec vault vault operator unseal $key) > /dev/null
  done
  #get a copy of the initial root token
  token="$(cat /tmp/vault.init | grep '^Initial' | awk '{print $4}')"
  #attempt to log into vault and store our credentials
  (vault login $token) > /tmp/vault.login
  #set env var locally in this script allowing us to manipulate the vault
  exportVaultToken
  #tell vault where we want to put our secrets
  vault secrets enable -path secrets kv
  #store some secrets
  vault write secrets/myee \
        redis.url=redis://redis:6379
}

#do we need to do anything?
if ! isDockerContainerRunning vault; then
  #docker pull vault
  docker run -d --restart=always --cap-add=IPC_LOCK --name vault -p 8200:8200  -e 'VAULT_ADDR=http://127.0.0.1:8200' -e 'VAULT_LOCAL_CONFIG={"backend": {"file": {"path": "/vault/file"}}, "default_lease_ttl": "168h", "max_lease_ttl": "720h", "listener" : {"tcp":{"address":"0.0.0.0:8200", "tls_disable":"1"}}}' vault server
  #tell the local vault CLI where to find the server
  export VAULT_ADDR='http://127.0.0.1:8200'
  #sleep here, give docker chance to spin up the image
  sleep 10
fi
if ! isDockerContainerRunning vault; then
   echo "failed to find and start HashiCorp vault image on localhost"
   exit 1
fi

#is vault properly initialised?
if ! isVaultInitialised; then
  #start by removing any files that this script may have written previously
  rm -f /tmp/vault.init /tmp/vault.login
  echo "initialising vault with some default values. You should change these!"
  initVault
fi
exportVaultToken
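The grep/awk extraction in initVault depends on the textual layout of `vault operator init` output ('Unseal Key N: …' and 'Initial Root Token: …' lines). Here is a self-contained sketch of that parsing against a hand-written sample; the keys and token below are made up.

```shell
#!/bin/bash
#made-up sample in the shape of 'vault operator init' output
init_output='Unseal Key 1: key-one
Unseal Key 2: key-two
Unseal Key 3: key-three
Initial Root Token: s.example-root-token'
#collect the unseal keys, one per line, exactly as the script above does
keys="$(echo "$init_output" | grep '^Unseal' | awk '{print $4}')"
#and the initial root token
token="$(echo "$init_output" | grep '^Initial' | awk '{print $4}')"
echo "$token"   #prints: s.example-root-token
```

Being able to test this parsing offline is useful, because if a vault release ever reformats the init output, the unseal loop above will silently do nothing.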

How to use these scripts, and some useful functions

#!/bin/bash
#some useful functions etc.
#test to see if our local docker registry contains the named image
function isRegistryContains() {
   if [ -z "$(curl http://localhost:5000/v2/_catalog | grep $1)" ]; then
     return 1
   else
     return 0
   fi
}
#test for installation of required applications
function assertApplicationInstalled() {
   if [ -z "$(command -v $1)" ]; then
     echo "$1 not installed"
     exit 1
   fi
}

#how to check some pre-requisites
assertApplicationInstalled docker
assertApplicationInstalled packer
assertApplicationInstalled terraform
assertApplicationInstalled vault
assertApplicationInstalled consul

#how to init registry, vault, consul..
#see scripts above
#note we need to 'source' these because both the consul
#and vault scripts set environment variables
source $(pwd)/scripts/initRegistry.sh
#init Consul server if not already initialised
source $(pwd)/scripts/initConsul.sh
#init Vault server if not already initialised
source $(pwd)/scripts/initVault.sh

#does our registry contain vanilla ubuntu image?
if ! isRegistryContains myee-ubuntu-base; then
   docker pull ubuntu:16.04
   docker tag ubuntu:16.04 localhost:5000/myee-ubuntu-base/latest
   #how to push images to our registry
   docker push localhost:5000/myee-ubuntu-base/latest
   #how to see contents of registry
   #echo $(curl http://localhost:5000/v2/_catalog)
fi

if ! isRegistryContains myee-ubuntu-base; then
   echo "failed to push ubuntu to local registry"
   exit 1
fi

#how to call packer
if ! isRegistryContains example-packer; then
  packer build ./example-packer.json
fi

On the machine where we have run the Docker containers, we need to install the consul and vault commands. These will act as clients to the Docker servers we have running. They need no configuration because our scripts have already set the relevant environment variables, so just install consul and vault (via apt-get). Once this is done we can use consul from the command line (or a bash script) like: consul kv get foo

In Packer we must first declare a variable:


"variables": {
  "redis_url":"{{ vault `/secrets/myee` `redis.url`}}"
},

then we can use the variable in Packer shell provisioners like this:


{
  "type": "shell",
  "environment_vars": [
    "redis_url={{ user `redis_url`}}"
  ],
  "inline": [
    "/home/jetty/replace_line.sh 'redis.url' \"redis.url=$redis_url\""
  ]
},
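The inline command above assumes a replace_line.sh helper already baked into the image. The post doesn't show it, so here is a hypothetical sketch of what it might do, assuming its job is to replace any line starting with a given key in a properties file:

```shell
#!/bin/bash
#hypothetical sketch of replace_line.sh, not the author's actual script:
#replace any line starting with a given key in a properties file
function replace_line() {
  local key="$1" replacement="$2" file="$3"
  #use '|' as the sed delimiter because replacement values contain '/'
  sed -i "s|^${key}.*|${replacement}|" "$file"
}

#demonstrate against a throwaway properties file
echo 'redis.url=redis://old-host:6379' > /tmp/demo.properties
replace_line 'redis.url' 'redis.url=redis://redis:6379' /tmp/demo.properties
cat /tmp/demo.properties   #prints: redis.url=redis://redis:6379
```

Note the key is used as a raw sed pattern here, so the '.' in 'redis.url' matches any character; good enough for a sketch, but a production version would escape it.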

Packer publishing to Docker registry, example

{
    "builders": [{
          "type": "docker",
          "image": "localhost:5000/myee-ubuntu-base/latest",
          "commit": true,
          "changes": [
            "USER redis",
            "WORKDIR /user/redis",
            "EXPOSE 6379"
          ]
    }],
    "post-processors": [
        [
          {
           "type": "docker-tag",
           "repository": "localhost:5000/myee-redis",
           "tag" : "latest"
          },
           "docker-push"
        ]
    ],

    "provisioners": [
        {
            "type": "shell",
            "inline" : [
              "groupadd redis && useradd -N -m -g redis redis",
              "mkdir -p /home/redis/ && chown -R redis:redis /home/redis/",
              "touch /home/redis/.profile",
              "echo 'PATH=$PATH:/home/redis/' >> /home/redis/.profile",
              "apt-get update && apt-get upgrade -y && apt-get install wget -y && apt-get install grep -y",
              "apt-get install redis-server -y"
            ]
        },
        {
            "type": "shell",
            "inline" : [
              "echo '#!/bin/bash' >> /home/redis/start.sh",
              "echo 'redis-server' >> /home/redis/start.sh",
              "chown redis:redis /home/redis/start.sh && chmod +x /home/redis/start.sh"
            ]
        }
    ]
 }

Deleting stuff

One problem you’ll face regularly when building Docker images with the HashiCorp stack is removing images from the local Docker registry. Here is a script that almost works. Note that you have to start the registry image with -e REGISTRY_STORAGE_DELETE_ENABLED=true so that the registry API will handle DELETE requests.

#!/bin/bash
imagename="myee-proxy"
blah="$(curl -XGET http://localhost:5000/v2/$imagename/manifests/latest)"
#echo "$blah"
#for each manifest
REGEX='sha256:([^"]*)(.*)'
while [[ ${blah} =~ ${REGEX} ]]; do
    sha="${BASH_REMATCH[1]}"
    res="$(curl -i -XDELETE http://localhost:5000/v2/$imagename/manifests/sha256:$sha)"
    echo "$res"
    blah="${BASH_REMATCH[2]}"
done
docker exec registry /bin/registry garbage-collect /etc/docker/registry/config.yml
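One reason the script above only almost works: a plain GET of /v2/<name>/manifests/latest returns the older v1 manifest, whose layer digests are not what the DELETE endpoint wants. The registry returns the canonical manifest digest in the Docker-Content-Digest response header when you request the v2 media type. A sketch of extracting it; the header dump here is a hand-written sample and the digest is made up:

```shell
#!/bin/bash
#in real use, fetch the headers with:
#headers="$(curl -sI -H 'Accept: application/vnd.docker.distribution.manifest.v2+json' \
#    http://localhost:5000/v2/myee-proxy/manifests/latest)"
#sample headers, with a made-up digest
headers='HTTP/1.1 200 OK
Docker-Content-Digest: sha256:0123456789abcdef
Content-Type: application/vnd.docker.distribution.manifest.v2+json'
#pull the digest out of the Docker-Content-Digest header
digest="$(echo "$headers" | grep -i '^Docker-Content-Digest' | awk '{print $2}')"
echo "$digest"   #prints: sha256:0123456789abcdef
#then delete with: curl -XDELETE http://localhost:5000/v2/myee-proxy/manifests/$digest
```

Deleting that one digest removes the manifest the 'latest' tag points at, after which the garbage-collect command reclaims the blobs.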

Panic Script

At some point it will all go horribly wrong and you’ll need to start again. Here is a script which will burn it all with fire.

#!/bin/bash
terraform destroy -auto-approve
docker swarm leave --force
docker stop $(docker ps -a -q)
docker rm $(docker ps -a -q)
docker system prune --all --volumes --force
rm ./terraform.tfstate
rm ./terraform.tfstate.*

Wrapping up

A quick note: Packer only supports the consul and vault template functions from around version 1.4.0 onwards.

Hopefully at some point I’ll put an example Terraform script in here to show you how to tie up all the orchestration.