Blog

General

TeddyCloud on a Public Server/IP with LetsEncrypt

Tonieboxes are a great kind of toy for fathers of young children: they allow kids to play music or audiobooks on their own, just by placing funny little figures on the box. It’s cute, cuddly, extremely intuitive, and not that technically complex. However, there is one major disadvantage: the device is tied to the manufacturer, who has of course invested a lot of time and money in developing the Toniebox, and it is legitimate for them to make money from it. However, there are enough examples of closed, cloud-based systems whose manufacturers discontinue the platform at some point because it is no longer profitable enough to operate. Luckily, some people successfully reverse engineered the box and modified it in a way that allows you to operate your own content server for the system.

When I first came into contact with their software, TeddyCloud, the first question that arose was whether I should install TeddyCloud locally or on a public server. The decision to go with a public server was an easy one: I wanted my kids to be able to use their Toniebox with all its content (both original and custom) while on vacation, for example. The next question, however, was how to secure access. Typically, this means properly set up SSL and user authentication. For self-hosted services, LetsEncrypt is usually the first choice for SSL certificates. However, TeddyCloud also sets up its own CA to secure communication with the Toniebox, which uses client certificate authentication.

In this context, the TeddyCloud documentation contains the following statement:

Please beware that port 443 cannot be remapped and you cannot use a reverse proxy like nginx or traefik without passing through the TLS (complex, not recommended). The client certificate authentication needs to be done by teddyCloud. Also, there is no SNI.

Challenge Accepted!

Actually, while writing this article, I realized that I never checked whether custom SSL certificates (e.g., issued by LetsEncrypt) are supported for the TeddyCloud web interface. In any case, I also couldn’t find any documentation regarding authentication for TeddyCloud. So I really wanted a reverse proxy handling both SSL encryption for the web interface with a valid (LetsEncrypt) certificate and authentication (I decided to just use plain old Basic Auth).

Usually, a reverse proxy either just forwards TCP, or it does SSL offloading by decrypting the encrypted HTTPS traffic and forwarding only the HTTP part to the upstream application. However, since Tonieboxes use client certificate authentication and TeddyCloud wants to handle it itself to identify the Toniebox, SSL offloading by the reverse proxy was not an option. On the other hand, I somehow needed a switch to decide whether a connection (from a Toniebox) should be forwarded to TeddyCloud’s port 443, or whether it (from a browser) should end up at the web interface, encrypted with a LetsEncrypt certificate.

Luckily, nginx offers a nice option for this: with a stream block, I can basically do TCP forwarding, but with the ssl_preread directive I’m able to peek into the TLS handshake and check whether, and which, server name is indicated (SNI) for the SSL connection.

Also, there is no SNI.

Perfect! This saved my day! So, if it’s a request with SNI, I do the SSL Offloading stuff with my LetsEncrypt certificate. If there is no SNI, I just forward it to TeddyCloud as is. Works for me 🙂
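
The routing logic can be sketched as an nginx stream block like the following. This is a minimal sketch, not necessarily my exact config: the upstream addresses and the internal port 8443 are assumptions.

```nginx
# Minimal sketch of SNI-based routing (addresses/ports are assumptions).
stream {
    # Inspect the TLS ClientHello without terminating TLS.
    map $ssl_preread_server_name $upstream {
        # Browsers indicate our domain via SNI -> local HTTPS server
        # that terminates TLS with the LetsEncrypt certificate.
        teddycloud.example.com  127.0.0.1:8443;
        # Tonieboxes send no SNI -> pass the TLS connection through
        # to TeddyCloud, which does the client certificate authentication.
        default                 teddycloud:443;
    }

    server {
        listen 443;
        ssl_preread on;
        proxy_pass $upstream;
    }
}
```

The HTTPS server for the web interface (with the LetsEncrypt certificate and Basic Auth) then listens on 127.0.0.1:8443 in the http block.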

Setup

In case you want to use this setup yourself, here is the relevant config!

To get things running, do the following steps:

  • Set up a server with an OS of your choice, install the Docker engine and point your desired domain name to the IP(s) of that server. I would also suggest setting up a firewall and only allowing ports 22 (SSH), 80 (HTTP) and 443 (HTTPS).
  • Copy the configs to your server (docker-compose.yaml and nginx.conf should be in the same directory) and adjust all occurrences of the domain name teddycloud.example.com to your domain.
  • Comment out the HTTPS server section in nginx.conf, as we don’t have the LetsEncrypt certificates yet.
  • Start the Docker containers, e.g., using docker compose up -d.
  • Request a LetsEncrypt certificate by running docker compose exec -it certbot certbot certonly --webroot. When you’re asked for the webroot directory, please provide /var/www/certbot.
  • Uncomment the HTTPS server configuration in nginx.conf.
  • Restart nginx, e.g., using docker compose restart nginx.
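
For orientation, the docker-compose.yaml for such a setup could look roughly like this. It is a sketch under assumptions: the image tags and volume paths are examples, not necessarily the exact configuration referenced above.

```yaml
# Sketch of the container setup; image tags and paths are examples.
services:
  teddycloud:
    image: ghcr.io/toniebox-reverse-engineering/teddycloud:latest
    restart: unless-stopped
    volumes:
      - ./teddycloud/config:/teddycloud/config
      - ./teddycloud/data:/teddycloud/data

  nginx:
    image: nginx:stable
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
      - ./certbot/www:/var/www/certbot:ro
      - ./certbot/conf:/etc/letsencrypt:ro
    depends_on:
      - teddycloud

  certbot:
    image: certbot/certbot
    # Keep the container running so certificates can be (re)issued
    # via `docker compose exec certbot ...`.
    entrypoint: sh -c 'trap exit TERM; while :; do sleep 12h; done'
    volumes:
      - ./certbot/www:/var/www/certbot
      - ./certbot/conf:/etc/letsencrypt
```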
General

Sync Zotero Library to Nextcloud

For my research, I’m using Zotero for bibliography management. It’s free, it’s great, and it fits perfectly for my needs. In this blog post, I show how to configure Zotero to synchronize your Zotero Library to Nextcloud.

For some of the research papers I’ve read during my research activities, there are several (mostly, but not always, similar) versions; for some other papers it is very difficult to find the document at all. Therefore, I decided to always keep a digital copy of the document I’ve just read, to make sure I can access the exact same version later. Zotero allows attaching files to an entry and, furthermore, allows synchronizing the library as well as the attachments. While synchronizing the bibliography entries’ metadata (authors, title, …) seems to be free and unlimited, only 300 MB of document storage is free per account. Zotero offers paid plans to increase the storage limit, or you can use your own WebDAV-based storage.

Since I have a running Nextcloud instance with WebDAV support, I decided to use my Nextcloud for the synchronization. It is actually quite easy to configure; however, I spent some time figuring it out, and there are also some open posts about this in the Zotero forums, so I’m documenting my solution here.

Configuring the Synchronization of your Zotero Library to Nextcloud

First, we need to create a folder in Nextcloud. Please note that Zotero requires the path to end with zotero. Also consider whether you want to use your global Nextcloud credentials (which I don’t recommend) or to create a dedicated shared folder for this, which provides you with separate credentials just for this purpose. Since the name of the shared folder does not show up in the URL, the shared folder itself has to contain a zotero folder holding the actual synchronized attachments.

In my Nextcloud instance, I created a folder PhD/Zotero/zotero and configured the directory PhD/Zotero to be accessible and editable via a link. The link should then look like this:

https://nextcloud.example.com/s/1337R4nd0mSh4r3S3cret

Now, in the Zotero client, configure Sync (Edit -> Preferences -> Sync) as follows: set File Syncing mode to WebDAV, as the URL enter nextcloud.example.com/public.php/webdav, and as both username and password use the sharing secret (the last part of the URL). That should be it.

Update

The URL nextcloud.example.com/public.php/webdav is correct when using a sharing secret for the credentials. When using the actual account username and password, the URL is nextcloud.example.com/remote.php/webdav.

HowTo

Kubernetes Cluster on Hetzner Bare Metal Servers

If you want to run your own Kubernetes cluster, you have plenty of possibilities: You can set up a single-node cluster using minikube, locally or on a remote machine. You can also set up a multi-node cluster on VPSes or with managed cloud providers such as AWS or GCE. Alternatively, you can use your own hardware, e.g. Raspberry Pis or bare metal servers. However, without the functionality provided by a managed cloud provider, it is difficult to take full advantage of the high availability capabilities of Kubernetes. We have tried anyway, and here we present the instructions for a highly available Kubernetes cluster on Hetzner bare metal servers.

Read more “Kubernetes Cluster on Hetzner Bare Metal Servers”
General

GitLab on a DiskStation

Sometimes, regardless of the possibilities offered by “the cloud”, you want to host important services yourself. For me as a software and DevOps engineer, this applies to my source code. For this reason, I host my own GitLab instance. Since the GitLab package for DSM provided by Synology is outdated, I will explain here how to install the latest version of GitLab on a DiskStation using Docker.

Read more “GitLab on a DiskStation”
GitHub

Ansible Role for tinc VPN

When setting up Kubernetes clusters, it makes sense for the individual nodes of Kubernetes to live in the same private network. If Kubernetes is set up on bare metal machines from suppliers such as Hetzner, it may not necessarily be possible to set up a common network of this kind natively. This is where tinc comes in: it makes it very easy to set up a virtual network across all participating nodes. To keep the configuration of tinc parallel to that of Kubernetes (I use Kubespray for my Kubernetes setup), I developed an Ansible Role for tinc VPN and made it available on GitHub.

Features

  • Installing and setting up tinc VPN service
  • In-place private key generation (private keys are never copied)
  • Support for additional nodes where host machines are not covered by the playbook
  • Support for custom routes for the VPN interface
  • Support for joining existing bridge interfaces on the host machine
  • Custom scripting for up/down hook scripts
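
Used in a playbook, the role could look roughly as follows. This is purely illustrative: the variable names here are hypothetical, and the README documents the actual interface.

```yaml
# Illustrative only: variable names are hypothetical;
# see the role's README for the real interface.
- hosts: k8s_cluster
  become: true
  roles:
    - role: tinc
      vars:
        tinc_netname: k8s            # name of the tinc network
        tinc_subnet: 10.8.0.0/24     # private subnet shared by all nodes
```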

Setup

For setup instructions and a tutorial on how to use my Ansible Role for tinc VPN, please check the README. It always contains up-to-date instructions for using this role and will be updated as new features come up.