Storage in Docker Swarm

Using Docker Swarm to orchestrate container deployments across multiple hosts is convenient, but it has its drawbacks when it comes to persistent volumes. Assuming you already know what Docker Swarm is and how it works, you're probably familiar with the fact that container volumes are local to the node the container has been started on. While Docker Swarm thankfully orchestrates replication of your services across the cluster (adding more containers on multiple nodes), it doesn't handle replicating these containers' volumes between nodes.
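To illustrate the problem, here's a minimal (hypothetical) example: a replicated service with a named volume, where each node ends up with its own independent copy of that volume.

# Each node that runs a replica creates its own local "app-data" volume,
# so replicas on different nodes do NOT see the same data.
docker service create --name web --replicas 3 \
  --mount type=volume,source=app-data,target=/usr/share/nginx/html \
  nginx:alpine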

Now, Docker seems to be the ideal candidate for hosting web apps like Mastodon, Peertube, etc., as it makes it easy to package and isolate multiple instances of the same service on one node (or a cluster of nodes). But while Docker itself supports persistent volumes, there's no off-the-shelf replication of these volumes between the cluster nodes.

Enter S3

While there are lots of options available for running a shared-access filesystem between Linux hosts, we wanted to experiment with something less hacky and more dockery. Enter S3.

Technically, with S3 we're referring to object storage in general; but since almost everything is S3-compatible today, we'll roll with the term.

Disclaimer: S3 storage for Docker volumes is an interesting concept, but it has its limits. It's a good fit for volumes holding medium-to-large files with a moderate access rate. Since the interface Docker uses is not block-based, even an atomic operation on a file means transferring the whole file to the S3 storage backend. That makes no sense for databases, but works for, say, a photo or video sharing app (like Pixelfed or Peertube).

S3FS Docker Plugin

For our S3 storage backend, we're using Wasabi. They offer a 30-day trial and claim to be faster than all the other providers, so why not.

To have Docker use the S3 backend for creating volumes, we're using REX-Ray's s3fs plugin for Docker.

On each node where you want to use s3fs volumes, execute:

docker plugin install rexray/s3fs:latest \
S3FS_ACCESSKEY=XXXXXXXXXXXXX \
S3FS_SECRETKEY=XXXXXXXXXXXXXXXXX \
S3FS_ENDPOINT=https://s3.wasabisys.com \
S3FS_OPTIONS=url=https://s3.wasabisys.com
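If the install succeeded, the plugin should show up as enabled. As an optional smoke test you can create a volume by hand; note that with rexray/s3fs the volume name maps to a bucket on the S3 backend, and my-test-volume below is just a placeholder name:

# List installed plugins; rexray/s3fs should show ENABLED=true
docker plugin ls

# Optional smoke test (my-test-volume is a placeholder):
docker volume create --driver rexray/s3fs:latest my-test-volume
docker volume ls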

Once s3fs is successfully installed, you can create volumes on it in your Compose stacks by specifying the driver in the volumes section:

volumes:
  app-data:
    driver: rexray/s3fs:latest
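Put together, a minimal (hypothetical) stack file could look like the sketch below; the image, mount path, replica count, and stack name are placeholders:

version: "3.7"

services:
  web:
    image: nginx:alpine                  # placeholder image
    volumes:
      - app-data:/usr/share/nginx/html  # placeholder mount path
    deploy:
      replicas: 3                        # replicas on any node share the S3-backed volume

volumes:
  app-data:
    driver: rexray/s3fs:latest

Deploy it with docker stack deploy -c docker-compose.yml mystack, and the volume is available to replicas on every node that has the plugin installed.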

Benefits

It depends on the application. Apps that store lots of not-so-tiny files (Nextcloud, Mastodon, Peertube, gallery apps, WordPress, …) can make use of this, as S3 storage is much cheaper and, depending on the provider, possibly even faster than block storage. It's absolutely useless for databases, logging, or anything that requires lots of IO. It's also a cheap hack for web servers with static content: you get cheap S3 storage with the flexibility of a container platform on top of it, so you can handle SSL termination, routing, DNS, etc. as well.

Come visit us on Discord! We're actively developing new ways to ease hosting of distributed and federated apps, and we're happy to hear your thoughts!

