Most non-NFS Kubernetes volume providers cannot provision volumes with an access mode of ReadWriteMany. The Rook Ceph provisioner included by default with the Kubernetes scheduler has the same limitation when provisioning PVCs, but it can support a shared filesystem mounted read/write by multiple pods.
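For context, a minimal sketch of a PVC requesting the ReadWriteMany access mode that most block-storage provisioners will refuse to bind (the claim name and storage size here are hypothetical):

```yaml
# Hypothetical PVC illustrating the ReadWriteMany access mode; most
# non-NFS provisioners will leave a claim like this stuck in Pending.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
```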
The CRD for the Filesystem kind in the ceph.rook.io/v1beta1 API group is applied at install, so your application only needs to provide a Filesystem config in the rook-ceph namespace to enable this feature.
```yaml
---
# kind: scheduler-kubernetes
apiVersion: ceph.rook.io/v1beta1
kind: Filesystem
metadata:
  name: rook-shared-fs
  namespace: rook-ceph
spec:
  metadataPool:
    replicated:
      size: 1
  dataPools:
    - replicated:
        size: 1
  metadataServer:
    activeCount: 1
    activeStandby: true
```
This will run a Ceph metadata server in the rook-ceph namespace to store all the attributes required for a functioning filesystem. The actual file data will be stored as blobs in the same Ceph cluster that is already running on all nodes. Once a Filesystem has been added to your application yaml, you can mount the shared filesystem in any pod as a flexVolume.
```yaml
---
# kind: scheduler-kubernetes
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-deployment
  template:
    metadata:
      labels:
        app: my-deployment
    spec:
      containers:
        - name: ubuntu
          image: index.docker.io/ubuntu:16.04
          command:
            - /bin/sh
            - -c
            - 'sleep 3600'
          volumeMounts:
            - name: shared
              mountPath: /var/lib/shared
      volumes:
        - name: shared
          flexVolume:
            driver: ceph.rook.io/rook
            fsType: ceph
            options:
              fsName: rook-shared-fs
              clusterNamespace: rook-ceph
              path: /subdir1
```
Attempting to create more than one Filesystem config will lead to errors on Linux kernel versions below 4.7. If different groups of pods need isolated shared filesystems, a better approach is to mount separate paths within the single filesystem, as shown in the example above.
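To sketch that isolation approach, a second group of pods could declare the same flexVolume but point its path option at a different directory within the one filesystem. The fragment below shows only the volumes section of such a Deployment; the /subdir2 path is a hypothetical example:

```yaml
# Sketch: isolate a second group of pods by mounting a different path
# within the same rook-shared-fs filesystem (path value is illustrative).
volumes:
  - name: shared
    flexVolume:
      driver: ceph.rook.io/rook
      fsType: ceph
      options:
        fsName: rook-shared-fs
        clusterNamespace: rook-ceph
        path: /subdir2
```

Each group of pods then sees only its own directory tree, while the cluster still runs a single metadata server for the one Filesystem.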