Kubernetes Cluster on Hetzner Bare Metal Servers

If you want to run your own Kubernetes cluster, you have plenty of possibilities: You can set up a single-node cluster using minikube, locally or on a remote machine. You can set up a multi-node cluster on VPSes or with managed cloud providers such as AWS or GCE. Alternatively, you can use your own hardware, e.g. Raspberry Pis or bare metal servers. However, without the functionality provided by a managed cloud provider, it is difficult to take full advantage of Kubernetes' high-availability capabilities. We have tried anyway, and present here the instructions for a highly available Kubernetes cluster on Hetzner bare metal servers.

Why Bare Metal?

GCE and AWS are very expensive, especially as your cluster grows. Raspberry Pis are cheap, but also quite limited in resources. Typical VPS providers lack support for high availability, and a single-node cluster is by definition a single point of failure: nice for testing, but not for production.

It is also possible to set up a Kubernetes cluster on bare metal. In this case, the implementation of high availability depends on the features your colocation provider offers. Somehow, you have to create a machine-independent load balancer that redirects traffic to working nodes and ignores broken ones.

Dorian Cantzen (extrument.com) and I have taken up that challenge. In this article, we will describe the steps to set up a production ready Kubernetes cluster on Hetzner bare metal servers using the Hetzner vSwitch feature.

Important notice: Neither the author of this article (Matthias Lohr) nor Dorian Cantzen is affiliated with Hetzner, nor were we paid for writing this article. This article is a technical report, initiated by Matthias Lohr, summarizing our findings while trying to find a feasible solution for a highly available Kubernetes cluster on bare metal machines. We found Hetzner to be part of one possible solution; most probably there are other providers out there that would allow a similar setup.

The High Availability Challenge

The basic goal of a server cluster is reliability in terms of high availability and fault tolerance. If one component of a cluster fails, the cluster logic will automatically use another component for the task.

Generally, you can divide cluster components into three categories: computational resources, storage resources and networking. The core component of Kubernetes is a scheduler for computational loads (Pods), which provide services such as web portals. Ceph clusters or Kubernetes-based solutions like Rook provide redundant storage.

High availability for the remaining part, the networking, requires special support from the colocation provider. Usually, when using mainstream hosters like Hetzner, a server has a single NIC with a single IP address (ok, one IPv4 and one IPv6). Typically, a DNS record points to one or multiple IP addresses. However, if a record points to multiple IPs and the server behind one of them is unavailable, users will still get connection errors. So DNS alone can't provide a solution here. What we need is an IP address that can be shared between multiple servers.

We found that with Hetzner vSwitches it is possible to route IPs or IP subnets into a VLAN (IEEE 802.1Q) to which each server can be connected. It is then up to the servers to decide which one replies to incoming traffic for these IPs. IP failover can be handled completely within the cluster servers, without notifying an external API.

Setting up a Kubernetes Cluster on Hetzner Bare Metal Servers

Create VLAN (Hetzner vSwitch)

First, we have to create the VLAN which connects the servers. You can do that in the Hetzner vSwitch configuration area.

After the vSwitch is created, you have to assign the servers that you want to add to your Kubernetes cluster to the vSwitch.

On the IPs tab, you can order additional IPs or IP subnets, which are routed to the vSwitch and are therefore not assigned to a single server. We will later use MetalLB to manage these IP addresses.

Server Setup

Now you should set up the servers with your favorite OS capable of running Kubernetes. After the standard setup has finished, we need to configure the VLAN and the additional IPs. According to the official Hetzner documentation, you have to create a virtual network interface with VLAN tagging. But since we want to use the IPs within the Kubernetes cluster, some additional IP routing rules are required.

Below you will find an example of a working netplan configuration. It uses 10.233.255.0/24 as the internal network range for cluster-internal communication and 321.321.321.32/28 as the subnet assigned to the vSwitch (obviously a placeholder; replace it with the subnet you ordered). 10.233.0.0/18 is the default service IP range used by kubespray, and 10.233.64.0/18 is the corresponding default pod IP range.

network:
  version: 2
  vlans:
    # Configure vSwitch public
    enp4s0.4000:
      id: 4000
      link: enp4s0
      mtu: 1400
      addresses:
        - 10.233.255.1/24
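      # Routes in a dedicated routing table (1): default route via the
      # vSwitch gateway for traffic originating from the public subnet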
      routes:
        - to: 0.0.0.0/0
          via: 321.321.321.33
          table: 1
          on-link: true
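      # Policy rules: traffic between the public subnet and the cluster
      # service/pod ranges keeps using the main table (254); all other
      # traffic from/to the public subnet uses table 1 (vSwitch gateway)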
      routing-policy:
        - from: 321.321.321.32/28
          to: 10.233.0.0/18
          table: 254
          priority: 0
        - from: 321.321.321.32/28
          to: 10.233.64.0/18
          table: 254
          priority: 0
        - from: 321.321.321.32/28
          table: 1
          priority: 10
        - to: 321.321.321.32/28
          table: 1
          priority: 10

Alternatively, you can use our Hetzner vSwitch Ansible role, which we developed during our experiments.

Test whether the VLAN works properly by trying to ping all nodes using their private IP addresses (10.233.255.1, 10.233.255.2, …).

Setup Kubernetes

Now that the networking is up and running, you are ready to install Kubernetes. We did that using Ansible/kubespray, which offers a quite convenient and production-ready way of managing bare metal Kubernetes clusters. Use the servers' internal IP addresses in your inventory.

# example inventory
[all]
node1 ip=10.233.255.1 etcd_member_name=etcd1
node2 ip=10.233.255.2 etcd_member_name=etcd2
node3 ip=10.233.255.3 etcd_member_name=etcd3

Before installing the network plugin, ensure that you set the right MTU for it. Hetzner vSwitch interfaces have an MTU of 1400. When using e.g. Calico, whose encapsulation adds 20 bytes of overhead, you need to set the MTU for Calico to 1380.
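With kubespray, the MTU can be set via a group variable before running the playbook. Here is a minimal sketch, assuming a kubespray version that exposes the calico_mtu variable; the exact file path and variable name may differ between kubespray releases, so check your inventory's group_vars.

# inventory/mycluster/group_vars/k8s-cluster/k8s-net-calico.yml
# vSwitch MTU (1400) minus 20 bytes of encapsulation overhead
calico_mtu: 1380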

Install and configure MetalLB to use the IPs/subnet assigned to the Hetzner vSwitch. Please ensure that you do not configure the whole subnet, but exclude the first two and the last IP address, as shown in the example and the configuration sketch below.

Example:
subnet assigned: 321.321.321.32/28
IPs in subnet: 321.321.321.32 – 321.321.321.47
subnet address (not usable): 321.321.321.32
subnet gateway address (used by Hetzner): 321.321.321.33
subnet broadcast address (not usable): 321.321.321.47
remaining usable IPs / MetalLB range: 321.321.321.34 – 321.321.321.46
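For illustration, here is a sketch of a matching MetalLB configuration, using the legacy ConfigMap format (MetalLB before v0.13) with a Layer 2 address pool; newer MetalLB releases express the same settings as IPAddressPool and L2Advertisement custom resources. The pool name is arbitrary.

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: vswitch-pool        # arbitrary pool name
      protocol: layer2
      addresses:
      - 321.321.321.34-321.321.321.46   # usable range from the example above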

That’s it! Now you can create Kubernetes Services of type LoadBalancer with one of the usable IP addresses to get traffic into your cluster. MetalLB will take care of assigning these IP addresses to working nodes. If a node goes down, MetalLB will reassign its IP address to a working node.
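As a sketch, a Service like the following would get one of the usable IPs assigned by MetalLB; the name my-app, the selector and the ports are placeholders for a Deployment you already have.

apiVersion: v1
kind: Service
metadata:
  name: my-app                 # hypothetical service name
spec:
  type: LoadBalancer
  selector:
    app: my-app                # assumes Pods labeled app=my-app exist
  ports:
    - port: 80                 # externally exposed port
      targetPort: 8080         # hypothetical container port
  # optionally pin a specific address from the MetalLB range:
  # loadBalancerIP: 321.321.321.34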

The only open question we haven't figured out yet: does Hetzner itself run the vSwitches in a redundant (highly available) setup?
