sebgoa on atomic, AWS, kubernetes, terraform

Terraform plan for Atomic based k8s cluster

Once you have started experimenting with Kubernetes on a single node with Pack8s, you will want to move to a real cluster. Several methods are described in the official documentation. If you do not want to use GKE, you can choose your favorite cloud provider and configuration management system (e.g. SaltStack, Ansible, Chef).

Terraform, however, has emerged in the DevOps community as a very useful tool to create and manage virtual infrastructure. Terraform makes it easy to define your infrastructure as code and to manage changes to it. We think it is one of the easiest ways to create your Kubernetes cluster on a cloud provider. Hence we have started working on formk8s, a Terraform plan to create a Kubernetes cluster on AWS based on Atomic. Support for other cloud providers could be added, as could other operating systems. Atomic makes this extremely easy because it is a Docker-optimized operating system that ships the Kubernetes binaries and the systemd unit files needed to run all the required services. This work was inspired by Kubestack, which uses CoreOS.

You will need an AWS account and Terraform installed on your system.

A single master is created, which runs a one-node etcd; flannel is used to create an overlay network for Kubernetes.
Configuration of the systemd units is done by copying files to the instances or using templates. Nodes auto-register with the master.
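For illustration only, the auto-registration boils down to pointing each worker's kubelet at the master's API server. On an Atomic host that configuration lives in a sysconfig-style file along these lines (the file path and flag names below are assumptions based on Kubernetes 1.0-era defaults, not a copy of the formk8s templates):

```ini
# /etc/kubernetes/kubelet -- illustrative sketch, not the actual formk8s template
KUBELET_ADDRESS="--address=0.0.0.0"
# An empty hostname override makes the kubelet register under its own hostname
KUBELET_HOSTNAME=""
# Point the kubelet at the master's (currently insecure) API endpoint
KUBELET_ARGS="--api_servers=http://<master-private-ip>:8080"
```

Once the kubelet service starts with these flags, the node shows up in kubectl get nodes with no extra steps.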

Currently, the API server is not secured by TLS and authorization is not implemented, so this setup should be used for dev and test only until the configuration needed for production is added.
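As a sketch of where that hardening could start, a tighter ingress rule in the module's security group might look like the following once TLS is enabled on the API server (the port and CIDR below are placeholder assumptions, not values from formk8s):

```hcl
# Illustrative only: restrict API traffic instead of opening every port
ingress {
  from_port   = 443
  to_port     = 443
  protocol    = "tcp"
  cidr_blocks = ["203.0.113.0/24"] # placeholder: your admin network
}
```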

Getting Started

Start by cloning the repo.

$ git clone https://github.com/skippbox/formk8s.git
$ cd formk8s

The k8s plan is inside a Terraform module; load it with:

$ terraform get

And now apply the plan (this will take a minute):

$ terraform apply
module.k8s.aws_security_group.k8s: Creating...
  description: "" => "Kubernetes traffic"
...

Outputs:

  master  = ec2-52-17-199-53.eu-west-1.compute.amazonaws.com
  workers = ec2-52-48-45-230.eu-west-1.compute.amazonaws.com,ec2-52-49-117-239.eu-west-1.compute.amazonaws.com
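Note that the workers output is a single comma-separated string. If you want to loop over the workers (to ssh into each one, say), a small shell split does the trick (the hostnames below are just the ones from the run above):

```shell
# Split the comma-separated "workers" output into one hostname per line
workers="ec2-52-48-45-230.eu-west-1.compute.amazonaws.com,ec2-52-49-117-239.eu-west-1.compute.amazonaws.com"
echo "$workers" | tr ',' '\n'
```

Piping the result into a while read loop then lets you run the same command on every worker.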

Connect to the head node, which has the kubectl CLI in its path, and check that all nodes have joined the Kubernetes cluster:

$ ssh -i ~/.ssh/id_rsa_k8s centos@ec2-52-17-199-53.eu-west-1.compute.amazonaws.com
[centos@ip-172-31-40-236 ~]$ kubectl get nodes
NAME                                          LABELS                                                                STATUS
127.0.0.1                                     kubernetes.io/hostname=127.0.0.1                                      Ready
ip-172-31-36-209.eu-west-1.compute.internal   kubernetes.io/hostname=ip-172-31-36-209.eu-west-1.compute.internal   Ready
ip-172-31-42-27.eu-west-1.compute.internal    kubernetes.io/hostname=ip-172-31-42-27.eu-west-1.compute.internal    Ready

The IP addresses of the instances you create will be different.

Testing

You can log into each node to check that the flannel overlay is properly set up, and try creating a replication controller.
For instance, to run an nginx replication controller:

[centos@ip-172-31-40-236 ~]$ kubectl run nginx --image=nginx
[centos@ip-172-31-40-236 ~]$ kubectl get rc
CONTROLLER   CONTAINER(S)   IMAGE(S)   SELECTOR    REPLICAS
nginx        nginx          nginx      run=nginx   1

Once the image is downloaded, the pod will enter the running state:

[centos@ip-172-31-40-236 ~]$ kubectl get pods
NAME          READY   STATUS    RESTARTS   AGE
nginx-vkqte   1/1     Running   0          1m

You can now expose this replication controller to the outside world using a service. For testing you can use a NodePort type of service, since the security group currently opens every port (not recommended for production, though).

[centos@ip-172-31-40-236 ~]$ cat s.yaml
apiVersion: v1  
kind: Service  
metadata:  
  labels:
    name: nginx
  name: nginx
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
  selector:
    run: nginx

Create the service with the kubectl CLI:

[centos@ip-172-31-40-236 ~]$ kubectl create -f s.yaml
You have exposed your service on an external port on all nodes in your cluster. If you want to expose this service to the external internet, you may need to set up firewall rules for the service port(s) (tcp:30606) to serve traffic.  

If you point your browser at the IP of the node running the pod, on port 30606 (the port from the example above; yours will be different), you will see the nginx welcome page.

Tuning

The main k8s plan is k8s.tf and looks like this:

module "k8s" {
  source               = "./k8s"
  key_name             = "k8s"
  key_path             = "~/.ssh/id_rsa_k8s"
  region               = "eu-west-1"
  servers              = "2"
  instance_type        = "t2.micro"
  master_instance_type = "t2.micro"
}

output "master" {
  value = "${module.k8s.master_address}"
}

output "workers" {
  value = "${module.k8s.worker_addresses}"
}

Change the key name, path, region, number of servers, etc. as you see fit in this file. Additional defaults are set in the module file k8s/variables.tf.
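To give an idea of the shape of those defaults (the variable names below are inferred from k8s.tf, not copied from the real file), a Terraform variables file looks like:

```hcl
# Illustrative sketch of k8s/variables.tf -- names and defaults are assumptions
variable "region" {
  default = "eu-west-1"
}

variable "servers" {
  default = "2"
}

variable "instance_type" {
  default = "t2.micro"
}
```

Any variable without a default must be set in the calling module block, which is exactly what k8s.tf does.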

Managing

The strength of Terraform shows when you want to make changes to your infrastructure. Say we want to add two nodes to this cluster. We can just edit the k8s.tf file and change the number of servers to 4. Terraform will detect that two are already running and will start an additional two, without changing any of the existing resources.
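Concretely, the change is a single line in the module block of k8s.tf:

```hcl
module "k8s" {
  # ... other settings unchanged ...
  servers = "4" # was "2": Terraform adds two instances, leaves the rest alone
}
```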

$ terraform plan
Refreshing Terraform state prior to plan...
...
+ module.k8s
    2 resource(s)

Plan: 2 to add, 0 to change, 0 to destroy.

Apply the plan and see the two new nodes being started and configured. After they reboot they will join the Kubernetes cluster automatically.

[centos@ip-172-31-40-236 ~]$ kubectl get nodes
NAME                                          LABELS                                                                STATUS
127.0.0.1                                     kubernetes.io/hostname=127.0.0.1                                      Ready
ip-172-31-32-24.eu-west-1.compute.internal    kubernetes.io/hostname=ip-172-31-32-24.eu-west-1.compute.internal    Ready
ip-172-31-34-50.eu-west-1.compute.internal    kubernetes.io/hostname=ip-172-31-34-50.eu-west-1.compute.internal    Ready
ip-172-31-36-209.eu-west-1.compute.internal   kubernetes.io/hostname=ip-172-31-36-209.eu-west-1.compute.internal   Ready
ip-172-31-42-27.eu-west-1.compute.internal    kubernetes.io/hostname=ip-172-31-42-27.eu-west-1.compute.internal    Ready

Nodes can be removed in a similar fashion, by setting the servers variable to a lower value.

Formk8s is still under development and PRs are welcome. But a single command gets you a working Kubernetes cluster on AWS with Atomic. What could be easier?

PS: Don’t forget to destroy your infrastructure, or your instances will keep running and you will get charged for them!

$ terraform destroy
Do you really want to destroy?
  Terraform will delete all your managed infrastructure. There is no undo.
  Only 'yes' will be accepted to confirm.

  Enter a value: yes