[TriLUG] docker swarm, rancheros, persistent storage

Dewey Hylton via TriLUG trilug at trilug.org
Tue Dec 19 15:11:52 EST 2017


Wow, not a lot of docker swarm folks here ...

I really like the idea of using https://hub.docker.com/r/vieux/sshfs/ as a volume
plugin. Leveraging SSH makes me happy for more than one reason, and this does
appear to work - at least on the surface.
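
For anyone not familiar with the plugin, getting it going looks roughly like
this - the user, host, path, and volume name below are placeholders, not my
real values:

  docker plugin install vieux/sshfs
  docker volume create -d vieux/sshfs \
    -o sshcmd=user@storagehost:/remote/fossils \
    -o password=secret \
    fossils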

I can create a volume (docker volume create -d vieux/sshfs ...) and see that the
volume presents the proper data. For example, I can fire up a container with
keisisqrl/fossil and see exactly what I would expect.
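
Roughly like so - the mount point and published port below are guesses on my
part and depend on how the image is built:

  docker run --rm -d -p 8080:8080 \
    -v fossils:/museum \
    keisisqrl/fossil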

HOWEVER ...

If I attempt to create a service (docker service create ...) in swarm mode using
keisisqrl/fossil - even on a single host, without replication - the container
fails to start with a "chmod error" involving that same volume.
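
For reference, the failing service looks roughly like this (again, the target
path and port are placeholders):

  docker service create --name fossil --replicas 1 \
    --publish 8080:8080 \
    --mount type=volume,source=fossils,target=/museum,volume-driver=vieux/sshfs \
    keisisqrl/fossil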

So, in the hope that other Docker users are present and may have seen something
similar, I'm posting this here. Anyone have a clue? Or, failing that, anyone
capable and willing to help troubleshoot?

Thanks!

----- On Dec 16, 2017, at 1:52 PM, Triangle Linux Users Group General Discussion trilug at trilug.org wrote:

> Hi all!
> 
> What do you use for replicated or shared persistent volumes for your Docker
> swarm containers?
> 
> Most folks who know me know that I'm a minimalist, sometimes (or most times) to
> a fault. I'm currently working with RancherOS for my Docker-based projects. I
> like it very much because it is very simple and stripped down - thus it fits
> me. I can install via PXE, it seems to run Docker very well, etc. It also works
> great in swarm mode. I happen to like swarm mode because it is baked into
> Docker, secure by default (as in its management traffic is secured by TLS), and
> it is very simple to get going - particularly when compared with Kubernetes.
> 
> Recently I have begun looking at moving some of my stateful containers into
> swarm mode for redundancy. The theory is pretty simple: move the stateful data
> into a named Docker volume which is accessible by all cluster nodes, and
> therefore all containers. For external databases and such this is not a big
> deal, but I have found this to be a pain point for simple shared files. For
> example, a MoinMoin wiki stores its content in plain files instead of a database;
> another is the Fossil SCM, which can serve an entire directory of fossil
> repositories stored on the filesystem as SQLite database files. All this works
> great in a Docker container, with data in a named volume - but I have yet to
> figure out how to make those directories and files available to containers on
> different hosts.
> 
> I have read that Kubernetes can provide those shared volumes somehow, but I'd
> really like to find a decent way to do this without having to add so much
> complexity. I've attempted NFS mounts (docker volume create --opt type=nfs);
> while the creation does not error out and all swarm containers see the volume,
> the volume data does not seem to reflect the data on the NFS share at all. I've
> also attempted Portworx, which sounds fantastic aside from its price tag for
> enterprise users, but I have failed to get it installed properly. It may be
> that both of these failures are somewhat due to the stripped-down nature of
> RancherOS. If anyone has any experience with this, particularly with RancherOS
> (not necessarily Rancher, though), I'd love to hear from you.
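> 
> For the curious, the usual shape of an NFS-backed volume with the stock local
> driver is something like this - the server address and export path here are
> placeholders:
> 
>   docker volume create --driver local \
>     --opt type=nfs \
>     --opt o=addr=10.0.0.5,rw \
>     --opt device=:/export/wiki \
>     wikidata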
> 
> Even if you do not use RancherOS, and instead have a full installation of
> something else (e.g. Ubuntu/CentOS) with Docker installed on top of it - and you
> have figured out how to provide shared volumes across your swarm nodes - I
> would appreciate hearing from you as well.
> 
> Thanks!

