Sometimes it helps to have programmatic access to the files on your NAS. By default, you can use SMB, FTP or sometimes an HTTP API. But besides the security aspects, it’s mostly practical issues you run into with these protocols.

Ever tried to use NAS storage from external hosts or, even better, containers? It’s a PITA with traditional protocols. Enter S3, again.

We’re working with a QNAP here, but the following approach works on any NAS (or even your local computer) that has Docker support. QNAP fortunately comes with Container Station by default; I assume Synology, FreeNAS etc. have a way of running containers, too.

TL;DR – Here’s the plot in 30s

Please enjoy this 30-second uptempo screencast that compactly illustrates what we’re actually doing here.

Music as usual done by the Baus himself: @uunemployedd

If you’re still with me, here’s the sauce you need to get it up and running without further ado.

docker-compose.yml

In the video above, we’re creating our S3-compatible interface to the NAS’ files with MinIO – "an open source object storage server compatible with Amazon S3 APIs". MinIO is pretty solid, works great in distributed setups and is a breeze to set up.

We’re simply mounting a share on the QNAP NAS (here: /share/Rootless, which corresponds to a shared folder named "Rootless") into a MinIO Docker container and then starting MinIO with this mounted volume as its data source. Et voilà: we can access our files through MinIO’s web interface (http://$NAS_IP:9000) and of course with any app that supports S3-compatible storage backends.

Here’s the docker-compose.yml used in the video:

version: '3'
services:
  minio:
    image: minio/minio
    environment:
      - MINIO_ACCESS_KEY=minio1234
      - MINIO_SECRET_KEY=minio1234
    volumes:
      - /share/Rootless:/data
    restart: always
    network_mode: bridge
    ports:
      - "9000:9000"
    command: gateway nas /data
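Save this as docker-compose.yml and bring it up with docker-compose up -d; on a QNAP, Container Station’s "Create Application" dialog should accept the same YAML, if you prefer clicking over SSH.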

Replace minio1234 with whatever you find suitable for your security needs. Use these credentials to access your files through MinIO’s web interface or API.
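If you’d rather script it than click through the web interface, here’s a minimal sketch using Python and boto3 (the NAS IP 192.168.1.100 is a placeholder – swap in your own endpoint and credentials):

list_buckets.py

import boto3

# Point boto3 at the MinIO gateway instead of AWS.
# Endpoint IP and credentials are placeholders taken from the compose file above.
s3 = boto3.client(
    's3',
    endpoint_url='http://192.168.1.100:9000',
    aws_access_key_id='minio1234',
    aws_secret_access_key='minio1234',
)

# In NAS gateway mode, every top-level folder of the mounted share
# shows up as a bucket.
for bucket in s3.list_buckets()['Buckets']:
    print(bucket['Name'])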

Benefits?

Are endless, actually, as S3 (and compatible services) is today a very popular and cheap way to store data – and it’s already integrated into lots of popular and self-hosted services. More cool stuff to do with MinIO (and object storage) can be found on one of the awesome awesome lists.

But why at all?

My specific use case for an S3-compatible interface on a QNAP NAS is data transmission between clients. On one end, a digital workflow in a cloud service generates a PDF document; on the other end of the world, that document needs to be read and worked on by multiple Windows clients, all using a common network share (on a QNAP NAS) in an internal network.

Before, the PDF was sent by mail and then, whenever someone found the time (or if someone found the time), downloaded and copied to the network share (hopefully the right one!).

Now the transmission is done via an S3 endpoint (pointing to MinIO, secured by an SSL-enabled reverse proxy on the edge) with strict access controls: whenever the PDF is generated, it is simply uploaded with S3-compatible commands to the correct folder ("bucket" in S3 terminology) on the QNAP NAS – and the Windows clients can seamlessly access it via Samba/SMB in real time.
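For illustration, the uploading side could look something like this sketch (again Python/boto3; the proxy URL s3.example.com, the bucket name invoices and the file name report.pdf are made-up placeholders):

upload_pdf.py

import boto3

# Talk to MinIO through the SSL-enabled reverse proxy on the edge
# (URL and credentials are placeholders).
s3 = boto3.client(
    's3',
    endpoint_url='https://s3.example.com',
    aws_access_key_id='minio1234',
    aws_secret_access_key='minio1234',
)

# Upload the generated PDF into the 'invoices' bucket, which maps to
# /share/Rootless/invoices on the QNAP – instantly visible to the
# Windows clients via SMB.
s3.upload_file('report.pdf', 'invoices', 'report.pdf')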

Disclaimer: QNAP comes with its own object storage server that also implements QNAP’s user and rights management. I personally found it a PITA to use. Besides needing to adjust access rights in three different places for a single bucket, I wasn’t even able to connect to the object storage interface using S3-compatible libraries. Dealbreaker. I tend to like my stuff easy to use and quick to set up, so back to Docker it was!
