Managing Volumes

Managing persistent volumes in a Docker Swarm application

This content is associated with a legacy version of the Replicated product. For the current Replicated product documentation, see docs.replicated.com.


Docker Swarm supports bind-mounting a host path in a single service's definition, as well as anonymous and named volumes declared under the top-level volumes key. When scheduling your services, keep in mind that the tasks (containers) backing a service can be deployed to any node in the swarm, and possibly to a different node each time the service is updated. You can specify placement constraints so that a service's tasks are scheduled only on a node that has the volume present. Node labels are particularly useful for this and can be configured by your customer in the Replicated admin console. See the example below:

---
# kind: replicated

...
swarm:
  nodes:
    - labels:
        role: db
...

---
# kind: scheduler-swarm

version: '3'
services:
  db:
    image: postgres
    deploy:
      placement:
        constraints:
          - node.labels.role == db

[Screenshot: adding node labels in the admin console]
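Node labels can also be applied directly with the Docker CLI from a manager node. A minimal sketch, assuming the label and constraint from the example above ("node-1" is a placeholder for the actual node hostname or ID):

```shell
# List the nodes in the swarm to find the one holding the volume's data
docker node ls

# Add the label referenced by the placement constraint above
docker node update --label-add role=db node-1

# Verify the label was applied
docker node inspect node-1 --format '{{ .Spec.Labels }}'
```

After the label is in place, tasks for the db service will only be scheduled on nodes carrying role=db.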

When using named volumes, Docker defaults to the driver configured in the Engine (in most cases, the local driver). Docker can also be configured to use a volume driver that is multi-host aware; see the Docker documentation for a list of available volume plugins. Replicated does not add any additional support for volume plugins. These must be configured by your customer at runtime.
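For illustration, a minimal compose sketch declaring a named volume with an explicit driver (the pgdata volume name is an assumption; substitute an installed multi-host-aware plugin for local if one is available):

```yaml
version: '3'
services:
  db:
    image: postgres
    volumes:
      # Mount the named volume at Postgres's data directory
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata:
    # "local" is the Engine default; replace with the name of a
    # multi-host-aware volume plugin if the customer has installed one
    driver: local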