sebgoa on digitalocean, docker, kmachine, kubernetes

Kubernetes on-ramp with kmachine

Docker has transformed the world of containers and IT in general, shifting the emphasis to application design rather than infrastructure provisioning. With the many cloud providers out there, it is fair to say that provisioning machines is now easy. Packaging individual application services into container images and composing them into an application is becoming more straightforward. Docker has also developed a set of tools that developers have embraced: Docker Machine, Docker Compose and Docker Swarm.

At skippbox we love Docker and we also love Kubernetes. Kubernetes is a container orchestration system that was open sourced by Google and represents a rewrite of their internal Borg system. We think that developers should be able to use the Docker tools that they love and also experience, learn and use Kubernetes the same way. To that end we have started to develop a few tools. The first one is kmachine, aimed at being an on-ramp to Kubernetes.

To put it plainly, it is a fork of docker-machine which, in addition to setting up Docker on a cloud instance, also sets up a standalone Kubernetes installation. This means that with the same commands that you have come to love with docker-machine, you now get the benefit of a working Kubernetes system, ready for development and testing of your applications.

Let’s see how this works:

Getting Started

First grab the kmachine binary for your operating system, for example on OS X:

$ curl -L https://github.com/skippbox/machine/releases/download/v1.2.0/kmachine_darwin-amd64.tar.gz > kmachine.tar.gz && \
  tar xzf kmachine.tar.gz && \
  rm kmachine.tar.gz && \
  sudo mv -f kmachine_darwin-amd64/* /usr/local/bin

If you do not have it already, grab the Kubernetes CLI called kubectl and make it executable:

$ wget https://storage.googleapis.com/kubernetes-release/release/v1.2.0/bin/darwin/amd64/kubectl
$ chmod +x kubectl

Then pick your cloud provider of choice. AWS, DigitalOcean, Exoscale and local VirtualBox have been tested so far.

On DigitalOcean for instance, set up your access token the same way that you do with docker-machine. If you have already set it up, there is nothing more to do.

$ export DIGITALOCEAN_ACCESS_TOKEN=a8bd38345hkg235hgb7bf82ca321e4212a0e2cd38475902834hslkdf43857

You are now ready to create a kmachine:

$ kmachine create -d digitalocean skippbox
Running pre-create checks...  
Creating machine...  
Waiting for machine to be running, this may take a few minutes...  
Machine is running, waiting for SSH to be available...  
Detecting operating system of created instance...  
Provisioning created instance...  
Copying certs to the local machine directory...  
Copying certs to the remote machine...  
Setting Docker configuration on the remote daemon... Configuring kubernetes...  
Copying certs to the remote system...  
To see how to connect Docker to this machine, run:  
kmachine env skippbox  

Once the machine is created, just like with docker-machine, you can get some environment variables that will allow you to use it easily. Note that with kmachine, we also return the instructions that kubectl can use to define a new k8s context (i.e. a Kubernetes API endpoint).

$ kmachine env skippbox
kubectl config set-cluster skippbox --server=https://159.203.140.251:6443 --insecure-skip-tls-verify=false  
kubectl config set-cluster skippbox --server=https://159.203.140.251:6443 --certificate-authority=/Users/sebgoa/.docker/machine/machines/skippbox/ca.pem  
kubectl config set-credentials kuser --token=IHqC9JMhWOHnFFlr2cO3tBpGGAXzDqYx  
kubectl config set-context skippbox --user=kuser --cluster=skippbox  
kubectl config use-context skippbox  
export DOCKER_TLS_VERIFY="1"  
export DOCKER_HOST="tcp://159.203.140.251:2376"  
export DOCKER_CERT_PATH="/Users/sebgoa/.kube/machine/machines/skippbox"  
export DOCKER_MACHINE_NAME="skippbox"  
# Run this command to configure your shell:
# eval "$(kmachine env skippbox)"

The authentication token is auto-generated, and the certificates are put in place for proper TLS communication with the k8s API server. Once this new context is set, you can see it with kubectl config view:

$ eval "$(kmachine env skippbox)"
$ kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /Users/sebgoa/.kube/machine/machines/skippbox/ca.pem
    server: https://159.203.140.251:6443
  name: skippbox
contexts:
- context:
    cluster: skippbox
    user: kuser
  name: skippbox
current-context: skippbox
kind: Config
preferences: {}
users:
- name: skippbox
  user:
    token: IHqC9JMhWOHnFFlr2cO3tBpGGAXzDqYx

Note that since the functionality of docker-machine is preserved, you still get a working remote Docker host and an easy SSH path into your kmachine, and you are able to list all your kmachines and remove them. Try the following commands:

$ docker ps
$ kmachine ssh skippbox
$ kmachine ls

When listing the containers, you will see that a few containers are already running. These containers run the various Kubernetes components. Yes, that means that we started Kubernetes using Docker! It is a very handy deployment mechanism. It also means that we could have used docker-compose to start Kubernetes, but we wanted to have a single command, hence Kubernetes is deployed as part of the kmachine provisioning.
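For example, to see at a glance which images those system containers were started from, you can use the Docker client's formatting support (the --format flag requires a reasonably recent Docker client, and the exact container names depend on the Kubernetes version kmachine installs):

$ docker ps --format '{{.Names}}\t{{.Image}}'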

Starting a sample application

With a working Kubernetes endpoint, you can now try to deploy an application. Let's deploy the famous game 2048. To do this we first create a replication controller, which is easily done with the kubectl run command.

$ kubectl run game --image=cpk1224/docker-2048
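You can verify that the pod came up before going further; kubectl run labels the pods it creates with run=game, so you can select on that label:

$ kubectl get pods -l run=game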

And now we create a service which will expose the matching Kubernetes pods to the outside internet. Since we are doing this for development and testing, we expose the service to the internet using a type called NodePort (more on that in another post!).

$ cat s.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    name: game
  name: game
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
  selector:
    run: game
$ kubectl create -f s.yaml
You have exposed your service on an external port on all nodes in your cluster. If you want to expose this service to the external internet, you may need to set up firewall rules for the service port(s) (tcp:30023) to serve traffic.  
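If you missed the port in the message above, the assigned node port also appears in the service description, so you can retrieve it at any time:

$ kubectl describe service game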

Check the IP of your kmachine and open your browser at that IP, using the port assigned to the service above, et voilà!

$ kmachine ls
NAME       ACTIVE   DRIVER         STATE     URL                        SWARM
skippbox   *        digitalocean   Running   tcp://104.131.38.84:2376
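Before switching to the browser you can also hit the service from the command line, using the IP reported by kmachine ls and the node port reported when the service was created (30023 in this example):

$ curl http://104.131.38.84:30023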

Don’t forget to remove your machine when you are done.

$ kmachine rm skippbox