In this article I will write about:

  • Why I needed a new hosting setup for my projects
  • Why I chose Docker (a shallow dive)
  • Tools that make HTTP(S) deployments easy
  • The services now running on this setup

I regularly do side projects, which mostly consist of some API or website of some sort. After developing one locally I have to deploy it to a server to make it generally available. For this purpose I prefer cheap DigitalOcean droplets, since they are reliable enough and can easily be scaled up and down. These projects can have varying software stacks, especially when I try something new.

When publishing a new service I have two choices. First, I could configure one of my existing servers to provide the service. This easily leads to problems in the long run: I accumulate various programs and libraries on the server, most of which I forget to remove after abandoning a project, which eventually leaves me with outdated software exposed to the internet without my knowledge. The other choice is to create a new droplet / VM for every new project. While this is a very clean approach, it is not sustainable because of its cost: even $5 / VM / month quickly adds up, especially for a student like myself. Both approaches increase the maintenance burden and make it harder for me to continuously create one-off projects and deploy them quickly.

That led me to look into server management solutions. There is a lot of server management software out there (Vagrant, Chef, Puppet, …) that generates real value for someone in a full-time ops position, but less so for a private person. I did not have time for an extensive survey, so I might have missed something.

I liked what Docker has to offer. With Docker I can separate different side projects on the same machine. Better yet, I do not need to install a project's dependencies on my local development machine, because I can work with the Docker image locally as well. With shared folders I can persist data. Since performance is usually not an issue for my projects (sadly), I do not have to worry about the overhead Docker might add; most of the time the only constraint is memory, when I run stacks redundantly on the same machine.

This is great for projects that limit themselves to a single piece of software. But sometimes it is interesting to build something that leverages queues, a key-value store, a database and what not. In this case docker-compose comes in really handy: it lets me define multiple containers and manage them through a single configuration file.
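As an illustration (the service names, images and port here are hypothetical, not taken from one of my projects), such a stack might be declared like this:

```yaml
# docker-compose.yml — a sketch of a stack with a web app,
# a key-value store and a database
version: "2"

services:
  web:
    build: .
    ports:
      - "8080:8080"
    depends_on:
      - redis
      - db
  redis:
    image: redis:alpine
  db:
    image: postgres:alpine
    volumes:
      # persist the database via a shared folder
      - ./pgdata:/var/lib/postgresql/data
```

A single `docker-compose up` starts all three containers on a shared network, where `web` can reach the other services by their service names.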

HTTP Caveats

Virtual Hosts

It is crucial for all websites and APIs to be available via port 80. Managing virtual hosts manually is not too hard, but still a burden. Luckily someone has taken care of that and built nginx-proxy. With it I can simply define the virtual host of a container in its docker-compose configuration file. nginx-proxy also automatically detects when a new container starts up and adjusts its configuration accordingly.
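Roughly, the setup looks like this (the domain name is a placeholder): nginx-proxy runs once per host, watches the Docker socket, and generates its configuration from the `VIRTUAL_HOST` environment variable of the other containers:

```yaml
# Reverse proxy, in its own docker-compose.yml (one per host)
version: "2"

services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    ports:
      - "80:80"
    volumes:
      # read-only access to the Docker socket, so the proxy
      # can react to containers starting and stopping
      - /var/run/docker.sock:/tmp/docker.sock:ro

# A project's container then only needs to declare its hostname:
#   web:
#     environment:
#       - VIRTUAL_HOST=myproject.example.com
```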

Shared networks

In docker-compose, only the containers defined in the same docker-compose file share a network. If you want to reverse proxy to a container, you need to make it join the network of the reverse proxy.

E.g. in your docker-compose file:

...
  gogs:
    networks:
      - outside
      - default
    ...

networks:
  outside:
    external:
      name: reverseproxy_default

HTTPS

Another thing that always takes time, used to cost money, and was therefore often neglected by me is the management of SSL certificates. As an add-on to nginx-proxy there is letsencrypt-nginx-proxy-companion, which automatically obtains Let's Encrypt certificates for all subdomains and regularly renews them.
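Sketched alongside nginx-proxy (the email address and domain are placeholders), the companion watches the Docker socket and shares the certificate volumes with the proxy:

```yaml
# Added next to nginx-proxy in the reverse proxy's docker-compose.yml
  letsencrypt-companion:
    image: jrcs/letsencrypt-nginx-proxy-companion
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    volumes_from:
      # share the certificate and vhost directories with the proxy
      - nginx-proxy

# A project's container requests a certificate via environment variables:
#   web:
#     environment:
#       - VIRTUAL_HOST=myproject.example.com
#       - LETSENCRYPT_HOST=myproject.example.com
#       - LETSENCRYPT_EMAIL=me@example.com
```

For this to work, nginx-proxy itself also has to publish port 443 and mount a writable certificate directory (e.g. `/etc/nginx/certs`).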

With these two containers it becomes incredibly easy to deploy quickly.

Startup

Every project sits in its own folder and has its own docker-compose.yml. I also wrote a little script that guarantees a clean build:

#!/bin/bash
# Stop and remove the old containers, then rebuild the images
# and start everything detached, dropping orphaned containers.
docker-compose stop
docker-compose rm -f
docker-compose build
docker-compose up -d --remove-orphans

This rebuilds the images every time, which might not always be desired. I might also have to look into properly daemonizing the containers. For now, the existing setup has proven to work well enough.

Review

This blog now runs on this software configuration, and it has proven quite stable. I have also already migrated the setup to a new server in the meantime.

A very simple tool that now runs on this is:

Moved to Projects

This is not a very long list, but it is supposed to grow whenever I find time to try something new.

EDIT: Changed the URL of the letsencrypt companion container to the original repo.