Hi there,
We have a Deployment with an init container that checks whether a flexVolume is ready to be mounted. Here are the details:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: test
  namespace: replicated-namespace
spec:
  progressDeadlineSeconds: 600
  replicas: 2
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: xyz
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: xyz
        tier: backend
    spec:
      initContainers:
      - command:
        - /bin/sh
        - -c
        - df $MOUNT_PATH | grep ":6789"
        env:
        - name: MOUNT_PATH
          value: /sharedfs
        image: docker.io/replicated/replicated-operator:stable-2.46.2
        imagePullPolicy: IfNotPresent
        name: check-mount
        resources: {}
        securityContext:
          seLinuxOptions:
            type: spc_t
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /sharedfs
          name: shared-mount
          readOnly: true
      containers:
      ...
      restartPolicy: Always
      volumes:
      - flexVolume:
          driver: ceph.rook.io/rook
          fsType: ceph
          options:
            clusterNamespace: rook-ceph
            fsName: rook-shared-fs
        name: shared-mount
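The one-shot `df $MOUNT_PATH | grep ":6789"` check above fails immediately if the mount is not yet ready, which can contribute to repeated init-container failures after a node restart. A retry loop is a common alternative; below is a minimal sketch (the function name, retry count, and interval are illustrative assumptions, not values from the original manifest):

```shell
# wait_for_mount: retry the df|grep readiness check until the flexVolume
# shows the Ceph monitor port (":6789") or the retries run out.
# Retry count and sleep interval are assumed defaults, not from the manifest.
wait_for_mount() {
    path="$1"
    tries="${2:-30}"      # assumed default: 30 attempts
    interval="${3:-5}"    # assumed default: 5 seconds between attempts
    i=0
    while [ "$i" -lt "$tries" ]; do
        # a Ceph-backed mount reports the monitor address with port 6789
        if df "$path" 2>/dev/null | grep -q ":6789"; then
            return 0
        fi
        i=$((i + 1))
        sleep "$interval"
    done
    return 1              # still not mounted; init container fails and is retried
}
```

In the manifest this loop would be inlined into the `-c` argument of the init container's command in place of the single `df | grep` invocation.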
Now, whenever we restart the node, the Deployment goes into CrashLoopBackOff. Even though the restart policy makes Kubernetes recreate the pods, the Deployment never recovers on its own.
It only works if we manually delete and recreate the Deployment after the node restart. Is there any way to get this working?