How I Installed Kubernetes From Scratch on a Bytemark VM

I run Luzme, an ebook search system, which runs 24x7 on about 30 virtual machines in 7 countries on 6 continents.

This article explains how I set up Kubernetes from scratch on a set of VMs from Bytemark, to improve the infrastructure behind Luzme by using the new container-based technologies, such as Docker and Kubernetes, to provide a better, more maintainable, more scalable system. Luzme uses a diverse stack of technologies, including Percona (a MySQL-compatible database), Firebase (real-time NoSQL), Redis, Celery, Django, Solr, AngularJS and Python.

The majority of the system runs on VMs provided by the excellent Bytemark, a UK hosting company, with a few other VMs running in other countries where necessary.

I develop and support this system by myself. So I need the system to just work. I need it to be self-healing, where possible. And I would really like it to scale gracefully so that the next time I find Luzme on the front page of CNET or Lifehacker, the system scales to cope, rather than falling whimpering to its knees.

I liked the idea of using Kubernetes and Docker to containerise some aspects of this infrastructure. But although the documentation at the time made it very easy to get a single node up and running, it proved extremely hard to get a multi-node system up and running.

So this is my documentation of what I needed to do, to help others avoid the pain of discovery. Where possible, I have included listings of the network configuration at each stage, since this was the cause of almost all the problems I encountered.

Please note that this was done in January 2016; current documentation and procedures may be different now.

Getting Started

I followed this documentation.

http://kubernetes.io/v1.1/docs/getting-started-guides/docker-multinode.html

The first step was to create 2 Ubuntu Trusty nodes, one master and one worker. I did this using the usual Bytemark setup process.

To add more worker nodes, I would just replicate the procedure for the first one.

Note that the documentation says: “Please install Docker 1.6.2 or Docker 1.7.1.”

Setup the Master node

So I created a standard Bytemark VM, using their base configuration of 1 GB memory, 25 GB disk space, choosing Ubuntu 14.04 LTS (Trusty) as the OS to install.

I saved the root password.

I prefer to use less powerful accounts where possible, so I configured the machine to let me in with an SSH key.

# mkdir ~/.ssh && chmod 700 ~/.ssh
# touch ~/.ssh/authorized_keys
# chmod 600 ~/.ssh/authorized_keys

I then copied my public key over to the new machine, appending it to ~/.ssh/authorized_keys.
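For example, from the local machine (assuming a standard ~/.ssh/id_rsa.pub key and the master's address shown in the listings below), something like this does the job:

$ ssh-copy-id root@46.43.2.130

or, where ssh-copy-id isn't available:

$ cat ~/.ssh/id_rsa.pub | ssh root@46.43.2.130 'cat >> ~/.ssh/authorized_keys'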

Remove IPv6

Many of my problems were caused by confusion in the install script between the different results returned when it asked programmatically for an IP address or hostname.

Bytemark VMs come as standard with both IPv6 and IPv4.

I didn’t have sufficient time to understand why these different results were being returned; there was no obvious reason for it. So I decided simply to disable IPv6.

Not ideal, and with more time, I would have investigated this further.

# nano /etc/sysctl.conf

and add these lines to the sysctl.conf file:

net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1

# sysctl -p
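To confirm the change has taken effect, this should now print 1 (a quick check, not part of the original steps):

# cat /proc/sys/net/ipv6/conf/all/disable_ipv6
1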

Install Docker

# apt-get install docker.io

Note that in Ubuntu this package is NOT called docker; the package with that name is something else entirely.
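Since the documentation expects Docker 1.6.2 or 1.7.1, it is worth checking which version the Ubuntu package actually installed:

# docker --version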

So now my network config looked like this.

# ifconfig -a

docker0   Link encap:Ethernet  HWaddr 56:84:7a:fe:97:99 
          inet addr:172.17.42.1  Bcast:0.0.0.0  Mask:255.255.0.0
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

eth0      Link encap:Ethernet  HWaddr fe:ff:00:00:55:11 
          inet addr:46.43.2.130  Bcast:46.43.2.255  Mask:255.255.255.0
          inet6 addr: 2001:41c9:1:41f::130/64 Scope:Global
          inet6 addr: fe80::fcff:ff:fe00:5511/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:10428 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3801 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:9825303 (9.8 MB)  TX bytes:402822 (402.8 KB)

lo        Link encap:Local Loopback 
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0

          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

Clone Kubernetes Code (v1.1.4)

$ git clone https://github.com/kubernetes/kubernetes.git

$ cd kubernetes

$ git checkout v1.1.4

$ cd docs/getting-started-guides/docker-multinode

You will also need the kubectl program, which is not part of that repo.

So download it using the link in the Kubernetes documentation, and put it in /usr/local/bin/.
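At the time, the release binaries were served from Google's release bucket, so the download looked something like this (the exact URL may differ for other versions):

# curl -Lo /usr/local/bin/kubectl https://storage.googleapis.com/kubernetes-release/release/v1.1.4/bin/linux/amd64/kubectl

# chmod +x /usr/local/bin/kubectl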

Check Network Config

If you run these 5 commands now,

# ifconfig -a

# route

# docker ps

# docker -H unix:///var/run/docker-bootstrap.sock ps

# kubectl get nodes

you should now get output similar to this.

# ifconfig -a
docker0   Link encap:Ethernet  HWaddr 56:84:7a:fe:97:99 
          inet addr:172.17.42.1  Bcast:0.0.0.0  Mask:255.255.0.0
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

eth0      Link encap:Ethernet  HWaddr fe:ff:00:00:55:11 
          inet addr:46.43.2.130  Bcast:46.43.2.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:93398 errors:0 dropped:0 overruns:0 frame:0
          TX packets:43871 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:131469041 (131.4 MB)  TX bytes:3516199 (3.5 MB)

lo        Link encap:Local Loopback 
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0

          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

# route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         46-43-2-1.no-re 0.0.0.0         UG    0      0        0 eth0

46.43.2.0       *               255.255.255.0   U     0      0        0 eth0

172.17.0.0      *               255.255.0.0     U     0      0        0 docker0


# docker ps

CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES


# docker -H unix:///var/run/docker-bootstrap.sock ps

FATA[0000] Get http:///var/run/docker-bootstrap.sock/v1.18/containers/json: dial unix /var/run/docker-bootstrap.sock: no such file or directory. Are you trying to connect to a TLS-enabled daemon without TLS? 


# kubectl get nodes

error: couldn't read version from server: Get http://localhost:8080/api: dial tcp 127.0.0.1:8080: connection refused

Which tells us this:

  * we have a docker interface in ifconfig
  * we have a docker route in the routing table
  * the standard docker daemon is running...
  * ... but it doesn't have any live containers
  * the 2nd docker daemon needed by kubernetes is not yet running...
  * ... and neither is the kubernetes server

MASTER

So now we can install the Kubernetes master node:

cd kubernetes/docs/getting-started-guides/docker-multinode

./master.sh

Whereupon the result of those 5 commands looks like this:

# ifconfig -a

docker0   Link encap:Ethernet  HWaddr 56:84:7a:fe:97:99 
          inet addr:10.1.55.1  Bcast:0.0.0.0  Mask:255.255.255.0
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

eth0      Link encap:Ethernet  HWaddr fe:ff:00:00:55:11 
          inet addr:46.43.2.130  Bcast:46.43.2.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:155374 errors:0 dropped:0 overruns:0 frame:0
          TX packets:54960 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:217671551 (217.6 MB)  TX bytes:4489883 (4.4 MB)

flannel.1 Link encap:Ethernet  HWaddr de:93:0f:c2:96:8f 
          inet addr:10.1.55.0  Bcast:0.0.0.0  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1450  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback 
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:2404 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2404 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0

          RX bytes:456749 (456.7 KB)  TX bytes:456749 (456.7 KB)




# route

Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         46-43-2-1.no-re 0.0.0.0         UG    0      0        0 eth0
10.1.0.0        *               255.255.0.0     U     0      0        0 flannel.1
10.1.55.0       *               255.255.255.0   U     0      0        0 docker0

46.43.2.0       *               255.255.255.0   U     0      0        0 eth0

# docker ps
CONTAINER ID        IMAGE                                       COMMAND                CREATED              STATUS              PORTS               NAMES
1479e15ed371        gcr.io/google_containers/hyperkube:v1.0.3   "/hyperkube schedule   About a minute ago   Up About a minute                       k8s_scheduler.2b8ee744_k8s-master-127.0.0.1_default_82b47e8581e171a8c3f38c284b8a0579_61e81a02
2d17874be25e        gcr.io/google_containers/hyperkube:v1.0.3   "/hyperkube apiserve   About a minute ago   Up About a minute                       k8s_apiserver.a94f0183_k8s-master-127.0.0.1_default_82b47e8581e171a8c3f38c284b8a0579_c6a20e6c
226f11219fbd        gcr.io/google_containers/hyperkube:v1.0.3   "/hyperkube controll   About a minute ago   Up About a minute                       k8s_controller-manager.19f4ee5e_k8s-master-127.0.0.1_default_82b47e8581e171a8c3f38c284b8a0579_9ccccf86
b00a7288243d        gcr.io/google_containers/pause:0.8.0        "/pause"               About a minute ago   Up About a minute                       k8s_POD.e4cc795_k8s-master-127.0.0.1_default_82b47e8581e171a8c3f38c284b8a0579_9a8bd96c
2a825a132d52        gcr.io/google_containers/hyperkube:v1.0.3   "/hyperkube proxy --   About a minute ago   Up About a minute                       hopeful_davinci
ea52c28a9ac9        gcr.io/google_containers/hyperkube:v1.0.3   "/hyperkube kubelet    About a minute ago   Up About a minute                       ecstatic_nobel

# docker -H unix:///var/run/docker-bootstrap.sock ps
CONTAINER ID        IMAGE                                  COMMAND                CREATED             STATUS              PORTS               NAMES

6a523dd5f0bd        quay.io/coreos/flannel:0.5.0           "/opt/bin/flanneld -   2 minutes ago       Up 2 minutes                            tender_kowalevski
d036c4f56cbd        gcr.io/google_containers/etcd:2.0.12   "/usr/local/bin/etcd   3 minutes ago       Up 3 minutes                            romantic_mccarthy


# kubectl get nodes
NAME        LABELS                             STATUS

127.0.0.1   kubernetes.io/hostname=127.0.0.1   Ready

WORKER

Add the IP address of the worker node to the firewall on the master.
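Exactly how you do this depends on how the master's firewall is managed; with plain iptables and the worker address used in the listings below, an accept rule along these lines is enough:

# iptables -A INPUT -s 46.43.2.131 -j ACCEPT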

Follow the same initial setup as before, and it should look similar to this.

# ifconfig -a
docker0   Link encap:Ethernet  HWaddr 56:84:7a:fe:97:99 
          inet addr:172.17.42.1  Bcast:0.0.0.0  Mask:255.255.0.0
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

eth0      Link encap:Ethernet  HWaddr fe:ff:00:00:55:12 
          inet addr:46.43.2.131  Bcast:46.43.2.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:94645 errors:0 dropped:0 overruns:0 frame:0
          TX packets:35207 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:130735811 (130.7 MB)  TX bytes:2631768 (2.6 MB)

lo        Link encap:Local Loopback 
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0

          collisions:0 txqueuelen:0 

          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)




# route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         46-43-2-1.no-re 0.0.0.0         UG    0      0        0 eth0

46.43.2.0       *               255.255.255.0   U     0      0        0 eth0

172.17.0.0      *               255.255.0.0     U     0      0        0 docker0



# docker ps

CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

Edit and Run the Worker Install Code

There’s a line in the worker.sh file from the kubernetes repo which causes the IPv4/IPv6 confusion I mentioned earlier.

So edit worker.sh to remove that override line. Then run the worker installation script:

export MASTER_IP=<public IP on master>

./worker.sh

So now on the worker node, we should have this:

# ifconfig -a
docker0   Link encap:Ethernet  HWaddr 56:84:7a:fe:97:99 
          inet addr:10.1.76.1  Bcast:0.0.0.0  Mask:255.255.255.0
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

eth0      Link encap:Ethernet  HWaddr fe:ff:00:00:55:12 
          inet addr:46.43.2.131  Bcast:46.43.2.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:153405 errors:0 dropped:0 overruns:0 frame:0
          TX packets:44774 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:211357174 (211.3 MB)  TX bytes:3458177 (3.4 MB)

flannel.1 Link encap:Ethernet  HWaddr 9e:80:4b:9b:9e:75 
          inet addr:10.1.76.0  Bcast:0.0.0.0  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1450  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback 
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:2 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2 errors:0 dropped:0 overruns:0 carrier:0

          collisions:0 txqueuelen:0 

          RX bytes:100 (100.0 B)  TX bytes:100 (100.0 B)




# route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         46-43-2-1.no-re 0.0.0.0         UG    0      0        0 eth0
10.1.0.0        *               255.255.0.0     U     0      0        0 flannel.1
10.1.76.0       *               255.255.255.0   U     0      0        0 docker0

46.43.2.0       *               255.255.255.0   U     0      0        0 eth0




# docker ps
CONTAINER ID        IMAGE                                       COMMAND                CREATED              STATUS              PORTS               NAMES
5b520d7b12e8        gcr.io/google_containers/hyperkube:v1.0.3   "/hyperkube proxy --   About a minute ago   Up About a minute                       modest_mclean

cfb440bf9a5a        gcr.io/google_containers/hyperkube:v1.0.3   "/hyperkube kubelet    About a minute ago   Up About a minute                       goofy_babbage




# docker -H unix:///var/run/docker-bootstrap.sock ps
CONTAINER ID        IMAGE                          COMMAND                CREATED             STATUS              PORTS               NAMES

569fc709ad84        quay.io/coreos/flannel:0.5.0   "/opt/bin/flanneld -   3 minutes ago       Up 3 minutes                            serene_hoover




# kubectl get nodes
NAME          LABELS                               STATUS
127.0.0.1     kubernetes.io/hostname=127.0.0.1     Ready

46.43.2.131   kubernetes.io/hostname=46.43.2.131   Ready
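With both nodes showing Ready, a quick smoke test (not part of the original guide, and the kubectl syntax may differ in later versions) is to ask the cluster to run something and check that a pod appears:

# kubectl run nginx --image=nginx

# kubectl get pods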

FINISHED

And that’s it done. I wrote some Ansible scripts to automate all of this, which meant that once I’d figured it out, I could install a new node in a very short time.

But it took me about a month of digging through the Kubernetes install to get it working.

So I hope this helps someone!