Selfhosted setup

An overview of my personal selfhosted setup, with the hows and whys.

Inspired by a recent Reddit post on r/selfhosted, I wanted to write one of these myself. That way others might find something useful or give their input on improvements, and I have a record of what I had at this moment in time (for when the setup changes in the future) to look back on.

I highly encourage others to share their thoughts.


Since I don’t have a good place for my own hardware in my current apartment, I’ve decided to use Hetzner. Another good reason is that any setup I build at home can’t compete with either their network reliability or their green energy. It’s more expensive in the long run, but having my own server opens up a lot of possibilities for tinkering with software.


  • Intel Core i7-3930
  • Intel 82579LM - 1 Gbit NIC
  • 2x SSD SATA 240 GB
  • 2x HDD SATA 3,0 TB
  • 8x RAM 8192 MB DDR3

Excluding the 25% Danish tax (had to mention that…) the server costs ~$46 monthly. The nice thing about Hetzner, though, is that they also offer reused hardware from former customers or discontinued products, selling it on their server auction at a lower price.

The main reasoning for getting a dedicated host comes down to VMs, costs, and storage. Storage especially is expensive on all the major cloud platforms, where 1 TB for example costs around $57 extra each month. I got a whole server with 6x that amount of storage.


Virtualization is done using Proxmox, which was available as a standard image when provisioning my new server from the rescue image. It’s a great open-source product, and I haven’t had any issues with its free version (the subscription is for support and extra updates).

As you can see below, I run a pet server setup, meaning each server is my special little baby that I manually configured. They are mostly identical though, all running Docker Swarm (yes, you heard that right), which I’ll cover in the next section.

VM specs

| Name    | Cores | RAM  | HDD   |
| ------- | ----- | ---- | ----- |
| Warden  | 2     | 4GB  | 32GB  |
| Files   | 4     | 8GB  | 532GB |
| Docker1 | 4     | 15GB | 932GB |
| Docker2 | 4     | 15GB | 632GB |
| kasm    | 4     | 8GB  | 82GB  |
| k0s     | 2     | 4GB  | 32GB  |


| Services | Containers | Networks |
| -------- | ---------- | -------- |
| ~28      | ~43        | ~20      |

As mentioned throughout, I actually run the “dead” Docker Swarm software for managing my services. If memory serves me right, this is mainly due to using Portainer, which used Swarm when it was set up (I don’t know if that has changed now). I don’t actually recommend using Swarm, and I don’t need its features either, which are mainly replication and orchestration. Instead it’s enough to use docker-compose, since it gives the same nice way to describe deployments.

Each of my services has its own stack/compose file which describes the deployment. It’s domain scoped, meaning that all the services it needs, like databases, are defined in there rather than pointing to a central instance. This helps me separate my concerns and keeps services isolated. Since I’m using Swarm, I define which node I want a stack bound to (making Swarm even more useless) so it doesn’t try to schedule it on a different node where its data isn’t located. I also define labels for Traefik, making it automatically bind to the container - meaning I don’t need to manage extra exposed ports or configuration files.

I’m currently in the process of slowly moving the manifest files over to my Gitlab instance, pointing Portainer to it, and setting up webhooks for updating on pushes.

Heimdall example

version: '3'

services:
  heimdall:
    image: linuxserver/heimdall # image name assumed from the linuxserver-style variables below
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/Copenhagen
    volumes:
      - /srv/docker/heimdall:/config
    networks:
      - traefik-net
    deploy:
      placement:
        constraints:
          - "node.hostname == docker1"
      labels:
        traefik.enable: 'true'
        traefik.http.services.heimdall.loadbalancer.server.port: '80'
        traefik.http.routers.heimdall.rule: Host(``)
        traefik.http.routers.heimdall.tls: "true"
        traefik.docker.network: traefik-net

networks:
  traefik-net:
    external: true

All volumes are mounted under /srv/docker in their own folders, and each stack usually has at least the traefik-net network attached so Traefik can connect to the container. When there are multiple containers for a service, a database for example, a non-external network called net is created and attached to each of the containers.
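As a sketch of that pattern, a stack with its own database might look like the following (service, image, and folder names here are hypothetical):

```yaml
version: '3'

services:
  app:
    image: example/app:latest # placeholder image
    volumes:
      - /srv/docker/app:/config
    networks:
      - traefik-net # so Traefik can reach the app
      - net         # private network shared with the database

  db:
    image: postgres:14-alpine
    volumes:
      - /srv/docker/app/db:/var/lib/postgresql/data
    networks:
      - net # only reachable from the app, never exposed to Traefik

networks:
  traefik-net:
    external: true
  net: {} # non-external, scoped to this stack
```

The database never touches traefik-net, so it stays unreachable from the proxy and from other stacks.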


Services

I run 25+ services, all intended for myself, but public facing for easy access. Some are more used than others; the important or interesting ones I’ll detail below.


Blog

This blog was actually built for this very post, as it’s the first one. It was the extra push to finally get a good “main website”. It’s built using Hugo, with the Stack theme by Jimmy Cai, of course with some modifications for colors and whatnot.

Since it’s a static site, it’s bundled into an nginx image using a multi-stage Dockerfile. The code itself resides in a Git repository of course, for which I made a little pipeline on Gitlab.


FROM klakegg/hugo:ext-alpine-onbuild AS hugo

COPY . /src

FROM nginx:alpine
COPY --from=hugo /target /usr/share/nginx/html

CMD ["nginx", "-g", "daemon off;"]

Very simple: the first stage uses the Docker image by klakegg to build the site, copying the entire repository into the source directory. Afterwards the nginx stage copies the generated site from the previous stage and places it into nginx’s data folder. Since nginx has a default configuration that just serves files, no other modifications were needed.
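To try the result locally, a build-and-run along these lines works (the tag and host port are arbitrary):

```shell
# Build the image from the repository root and serve it locally.
docker build -t blog:local .
docker run --rm -p 8080:80 blog:local
# The site is then available at http://localhost:8080
```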

Gitlab pipeline

stages:
  - docker
  - deploy

variables:
  image: $CI_REGISTRY/zerosrealm/blog/blog:latest
  deploy_webhook: XXXX

build:
  stage: docker
  script:
    - export DOCKER_BUILDKIT=1
    # log in with the deploy token credentials (login line reconstructed)
    - docker login -u $CI_DEPLOY_USER -p $CI_DEPLOY_PASSWORD $CI_REGISTRY
    - docker build -f Dockerfile --build-arg HUGO_ENV_ARG=production -t $image .
    - docker image push $image
    - docker logout $CI_REGISTRY

deploy:
  stage: deploy
  image: curlimages/curl:7.80.0
  script: curl -X POST $deploy_webhook

It logs into the Gitlab registry with the deploy user credentials, builds the image with the given tag and the argument HUGO_ENV_ARG set to production (a value the Hugo image uses to set the environment), and pushes it. The DOCKER_BUILDKIT part enables Docker’s newer build architecture, which I do as a best practice (it isn’t enabled by default either, last I checked). Lastly it sends a webhook to Portainer (another great use for it) to pull the new image and schedule a new container.


Portainer

This shouldn’t be a big surprise, but I use Portainer for simple management of my containers. It’s mainly for easy access to logs and for deploying the manifest files (so I don’t need to do any terminal stuff either).

I think it’s a fine piece of software, but something like lazydocker is also good enough to get an overview.


Guacamole

Accessing my VMs is mainly done through Guacamole, so I don’t need to connect with a VPN and can always access them. The only gripe I have with it is copy-pasting - it’s usually a no-go.

It might just be an issue with my setup somehow, but it doesn’t do indentation, every line gets an extra new line (last I checked it wasn’t due to Windows line endings), and pasting into something as simple as nano will mess up the editor somehow (sudden duplicate lines and overwriting). Other than that, very good. I just hop back into my tmux session when I connect.

I’m looking into an alternative called Teleport, which has some great features, but I’m currently having some troubles with it, so I opened an issue on their Github (which hasn’t gotten any attention yet).


Gitlab

All my personal code gets added to my own Gitlab instance; this is mostly a data governance thing. Projects I want to share with other people I’ll publish on Github as well. It also lets me do my CI/CD entirely within my own infrastructure.

Pipelines usually consist of building the application and creating one or more containers with Docker. I don’t yet do any actual deployments (though I have played with it), but will do more of that in my next setup.

I tried Jenkins for a bit, but it was too much of a pain getting different images to work together in the same pipeline. Gitlab nicely combines my normal Git and CI operations instead of them being different applications (I was using Gitea before).


Trilium

Best note-taking experience so far. Working both offline and online by syncing to a server instance, Trilium is truly great. If you’ve ever seen Obsidian, it’s basically the same concept and style.

It supports everything I’d want, from Markdown to WYSIWYG, images, and tasks. It even has a journal/calendar feature for writing to-dos for the day. It’s only “missing” a mobile app, but the web app looks and works perfectly well for simple note-taking on mobile.

Highly recommended if you’re looking for a good note application.

Filebrowser & Syncthing

My stand-in cloud solution is a combination of Filebrowser for web-based management and Syncthing for the actual synchronization. There are clients for Windows, Linux, and Android - that’s everything I need.

It’s an extremely simple solution, and by no means as user friendly to set up as Nextcloud or Seafile. But it’s light as a feather, and it just works. I detailed more finely how it’s set up on my wiki here.


A list of honorable mentions that I use but don’t need to show off:

  • CoreDNS
  • Wireguard
  • Vaultwarden
  • Jellyfin
  • Leantime
  • WikiJS
  • Code-server
  • Zabbix


Restic

Restic has become my go-to backup tool. It’s easy, secure, and fast. I back up to a Hetzner Storage Box due to the cost, and it’s still within Hetzner’s infrastructure. It’s mounted with sshfs, since there is an issue between Hetzner and restic losing the connection. I think this might be Hetzner disconnecting idle or long sessions, and restic has no reconnection mechanism for SFTP yet, so it just fails. With the mount in place, I can point restic at my Docker data with a simple cron job.
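A sketch of that cron job, assuming a hypothetical Storage Box user, mount point, and repository path:

```shell
#!/bin/sh
# Placeholder host/paths - adjust to your own Storage Box and repository.
sshfs uXXXXX@uXXXXX.your-storagebox.de:/backups /mnt/storagebox

export RESTIC_PASSWORD_FILE=/root/.restic-password
restic -r /mnt/storagebox/repo backup /srv/docker                # back up the Docker data
restic -r /mnt/storagebox/repo forget --keep-daily 7 --prune     # trim old snapshots

umount /mnt/storagebox
```

Scheduled with a crontab entry along the lines of `0 3 * * * /usr/local/bin/backup.sh`.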

Currently looking into switching over to AWS Glacier.

Being the over-engineer that I am at times, I wanted a simpler way of managing restic across multiple servers. So I actually started work on a little application for that, which I still need to polish and test more. But here’s a sneak peek.


Networking

All VMs except k0s are on the same network, with iptables rejecting and forwarding traffic. Any web traffic gets sent to Warden, which runs Traefik, and a few other services such as Wireguard and Mailcow are forwarded to their respective servers.

I had an issue with my Docker services being unable to contact each other due to the routing, so I had to set up rules in the OUTPUT chain to redirect traffic for my public IP to Traefik. The real solution, however, is to just set up DNS so that names point to the correct internal server. I did this before I got CoreDNS and haven’t taken the time to point all servers to it (it’s not a big need, so it’s in the backlog).
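As a sketch, with placeholder addresses (203.0.113.10 standing in for the public IP and 10.0.0.2 for Warden):

```shell
# Rewrite locally generated web traffic aimed at the public IP
# so it goes straight to Warden/Traefik instead of out and back.
iptables -t nat -A OUTPUT -d 203.0.113.10 -p tcp -m multiport --dports 80,443 \
  -j DNAT --to-destination 10.0.0.2
```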


A rather short post, but hopefully it gave you some ideas, both of some cool things and of what not to do. In selfhosting it’s a balance between time, effort, and do-I-give-a-fuck. My setup is simply a cluster of pet Docker hosts that I take care of (though I don’t actually do that much on the hosts themselves). This is due to it being my first selfhosted setup, so things just got added on top of each other with time.

For the future I want to move over to a nice cattle setup, where things are done in a neater way. There, automation will be king, so I’m going to test out deploying and setting up new nodes using Ansible and cloud-init within Proxmox - based on Kubernetes, of course. I already define all my services manually in manifest files for Docker, so doing it for Kubernetes shouldn’t be a deal-breaker for me. I just need more testing and toying around.

If that sounds interesting, I might share my ideas for my next setup as they are right now.

Licensed under CC BY-NC-SA 4.0