
Kubernetes Cluster on Hetzner Bare Metal Servers

If you want to run your own Kubernetes cluster, you have plenty of options: you can set up a single-node cluster with minikube, either locally or on a remote machine. You can also set up a multi-node cluster on virtual private servers or with managed cloud providers such as AWS or GCE. Alternatively, you can use your own hardware, e.g. Raspberry Pis or bare metal servers. However, without the functionality provided by a managed cloud provider, it is difficult to take full advantage of Kubernetes's high-availability capabilities. We have tried it ourselves and present here our instructions for a highly available Kubernetes cluster on Hetzner bare metal servers.
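
The crux of high availability is a replicated control plane behind a single stable endpoint. Purely as an illustration of that idea (the article itself may use different tooling; the author's setup mentioned below relies on Kubespray, and the address here is a placeholder for a load balancer or virtual IP), a minimal kubeadm configuration for an HA control plane could look like this:

    # Minimal kubeadm sketch of an HA control plane (illustrative only;
    # the endpoint is a placeholder for a load balancer or virtual IP
    # sitting in front of all control-plane nodes).
    apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    controlPlaneEndpoint: "10.0.1.100:6443"
    networking:
      podSubnet: "10.244.0.0/16"

The first control-plane node would be initialized with kubeadm init --config and --upload-certs, and further control-plane nodes would then join with kubeadm join and the --control-plane flag.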


Ansible Role for tinc VPN

When setting up Kubernetes clusters, it makes sense for the individual Kubernetes nodes to live in the same private network. If Kubernetes is set up on bare metal machines from providers such as Hetzner, such a shared private network may not be natively available. This is where tinc comes in: it makes it very easy to set up a virtual network across all participating nodes. To keep the configuration of tinc parallel to that of Kubernetes (I use Kubespray for my Kubernetes setup), I developed an Ansible Role for tinc VPN and made it available on GitHub.
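
To give an idea of what tinc needs on each node (the role generates this configuration for you; the node names, addresses, and network name below are purely illustrative), every node gets a tinc.conf plus one host file per node in the mesh:

    # /etc/tinc/k8s/tinc.conf on node1 ("k8s" is an example network name)
    Name = node1
    Interface = tun0
    AddressFamily = ipv4
    ConnectTo = node2
    ConnectTo = node3

    # /etc/tinc/k8s/hosts/node1 (distributed to all other nodes)
    # Address is the node's public IP; Subnet is its address inside the VPN.
    Address = 203.0.113.10
    Subnet = 10.0.1.1/32
    # ...followed by node1's public key block

The address on the VPN interface itself is assigned in the tinc-up hook script; see the sketch after the feature list below.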

Features

  • Installation and setup of the tinc VPN service
  • In-place private key generation (private keys are never copied)
  • Support for additional nodes where host machines are not covered by the playbook
  • Support for custom routes for the VPN interface
  • Support for joining existing bridge interfaces on the host machine
  • Custom scripting for up/down hook scripts (see the sketch after this list)
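
To illustrate the last three points, tincd runs a tinc-up script when the VPN interface comes up, with the interface name available in $INTERFACE. A generated hook might look roughly like this (addresses, routes, and the bridge name are placeholders; the exact script produced by the role may differ):

    #!/bin/sh
    # tinc-up: executed by tincd after the VPN interface has been created.
    ip link set "$INTERFACE" up
    # Assign the node's VPN address and add a custom route through the VPN.
    ip addr add 10.0.1.1/24 dev "$INTERFACE"
    ip route add 10.0.2.0/24 dev "$INTERFACE"
    # Alternatively (with tinc in switch mode), attach the interface to an
    # existing bridge instead of assigning an address directly:
    # ip link set "$INTERFACE" master br0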

Setup

For setup instructions or a tutorial on how to use my Ansible Role for tinc VPN, please check the README. It always contains up-to-date instructions for using this role and will be updated when new features are added.
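
In broad strokes, using the role follows the usual Ansible pattern: list your nodes in an inventory group and apply a playbook that includes the role. The group name, role reference, and variables below are invented for illustration; the README documents the actual names and options:

    # inventory.ini (hosts and group name are examples)
    [vpn_nodes]
    node1 ansible_host=203.0.113.10
    node2 ansible_host=203.0.113.11
    node3 ansible_host=203.0.113.12

    # playbook.yml (role reference and variables are placeholders,
    # not necessarily those used by the real role)
    - hosts: vpn_nodes
      become: true
      roles:
        - role: tinc-vpn
          vars:
            vpn_interface: tun0
            vpn_netname: k8s

Running ansible-playbook -i inventory.ini playbook.yml would then roll the VPN out to all nodes in the group.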