sebgoa on AWS, kubernetes, packer, solo

Pack8s, packer templates for Kubernetes solo

Ramping up on a distributed application orchestration system like Kubernetes can be challenging. Thankfully, Kubernetes can be deployed standalone on a single node, which makes learning it much easier. With Packer, which helps you create a ready-to-use image, this becomes a breeze.

Pack8s is a set of Packer templates used to create AWS AMIs that run Kubernetes on a single node. Once you have built your AMI, starting an instance from it gives you an easy-to-use, single-node Kubernetes setup for dev and test.

You will need Packer installed on your local machine and an AWS account. Currently these templates only support AWS, but they could be extended to support other cloud providers and local hypervisors. PRs welcome.
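
If you do not have Packer yet, install it for your platform and make sure your AWS credentials are available; for example, the amazon-ebs builder can typically pick them up from the standard AWS environment variables (how credentials are passed ultimately depends on the template):

$ packer version
$ export AWS_ACCESS_KEY_ID=<your access key>
$ export AWS_SECRET_ACCESS_KEY=<your secret key>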

So far, only Ubuntu and Atomic are supported. On Ubuntu a Docker-based deployment is used, while on Atomic the default systemd units are used with minimal configuration. The configuration is indeed minimal (e.g. no TLS), and the resulting AMI should only be used for development and testing.

To give it a try, clone the pack8s repo and select your OS (i.e. Ubuntu or Atomic):

$ git clone https://github.com/skippbox/pack8s.git
$ cd pack8s
$ cd ubuntu
$ packer build k8s.json
...
==> amazon-ebs: Creating the AMI: k8s_atomic_single_1446562839
    amazon-ebs: AMI: ami-913fe1e2
==> amazon-ebs: Waiting for AMI to become ready...
==> amazon-ebs: Terminating the source AWS instance...
==> amazon-ebs: Deleting temporary security group...
==> amazon-ebs: Deleting temporary keypair...
Build 'amazon-ebs' finished.

==> Builds finished. The artifacts of successful builds are:
--> amazon-ebs: AMIs were created:
eu-west-1: ami-913fe1e2
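
Before launching anything, you can double check that the AMI is available in your region; the AMI ID is the one printed at the end of the build (here the eu-west-1 example from above):

$ aws ec2 describe-images --image-ids ami-913fe1e2 --region eu-west-1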

Now you can start an instance using the AMI that was built. For example, using the AWS CLI:

$ aws ec2 run-instances --image-id ami-913fe1e2 --instance-type t2.micro --key-name k8s
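
The run-instances output contains the new instance ID. Once the instance is running, you can look up its public IP with the AWS CLI; the instance ID below is a placeholder, use the one returned to you:

$ aws ec2 describe-instances --instance-ids i-xxxxxxxx --query 'Reservations[0].Instances[0].PublicIpAddress' --output text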

SSH to the running instance and enjoy Kubernetes on a single node:

$ ssh -i ~/.ssh/id_rsa_k8s ubuntu@52.31.45.173
$ ./kubectl get pods
NAME                             READY     STATUS    RESTARTS   AGE
kube-controller-ip-172-31-32-7   5/5       Running   0          1m
$ ./kubectl get nodes
NAME             LABELS                                    STATUS
ip-172-31-32-7   kubernetes.io/hostname=ip-172-31-32-7     Ready
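
As mentioned above, the Kubernetes components run as Docker containers on Ubuntu and as systemd units on Atomic. You can check this from inside the instance; the unit names below are typical for Atomic hosts and may differ on your image:

$ sudo docker ps                                   # Ubuntu: components run as containers
$ sudo systemctl status kubelet kube-apiserver     # Atomic: components run as systemd units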

To start experimenting, you can create a replication controller to launch nginx. Once the Docker image for nginx has been downloaded, the pod will enter the Running state:

$ ./kubectl run nginx --image=nginx
$ ./kubectl get pods
NAME          READY     STATUS    RESTARTS   AGE
nginx-vkqte   1/1       Running   0          1m
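
While the nginx image is being pulled, the pod will not be Running yet; you can inspect its events (including the image pull) with describe, using the pod name from the output above:

$ ./kubectl describe pod nginx-vkqte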

To access the application, you need to expose this replication controller to the outside world using a service (another Kubernetes primitive). For testing you can use a NodePort type of service; the default security group will need to have the service port opened. For production you will most likely want a LoadBalancer type of service, but for testing NodePort is a good start.

Here is a basic service definition:

apiVersion: v1
kind: Service
metadata:
  labels:
    name: nginx
  name: nginx
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
  selector:
    run: nginx

Create the service with the kubectl CLI:

$ ./kubectl create -f s.yaml
You have exposed your service on an external port on all nodes in your cluster. If you want to expose this service to the external internet, you may need to set up firewall rules for the service port(s) (tcp:30606) to serve traffic.  
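
The allocated node port also shows up in the service description. Since the security group of the instance must allow inbound traffic on that port, you can open it with the AWS CLI. This assumes the instance uses the default security group (in a VPC you may need --group-id instead of --group-name), and your port will differ from the 30606 of this example:

$ ./kubectl describe service nginx
$ aws ec2 authorize-security-group-ingress --group-name default --protocol tcp --port 30606 --cidr 0.0.0.0/0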

If you open your browser on the IP of the instance you started on AWS and the port of the service (30606 in the example above; your port will be different), you will see the nginx homepage.
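
You can also check from the command line, using the example IP and port from above (replace them with yours):

$ curl http://52.31.45.173:30606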

Enjoy discovering Kubernetes with pack8s.