Container Secrets

Secrets fail silently. Here's how to verify them.

Docker Swarm, Docker Compose, and Kubernetes each handle secrets differently - different injection paths, different storage backends, different failure modes. Most breakage produces no error: the app just reads an empty file or an old value. This page covers how each mechanism works, where each silently fails, and the exact commands to confirm your secrets are live in containers.

How they work

Each runtime has a different injection path, storage backend, and lifecycle. Understanding the mechanics tells you where to look when something is wrong.

Docker Swarm
Native secrets - manager-controlled
Most secure by default
Injection path
Secret stored in Swarm manager's Raft log (encrypted at rest). On task start, manager decrypts and sends to worker node over mutual TLS. Mounted as a tmpfs file at /run/secrets/<name> inside the container. Never written to disk on the worker.
Storage backend
Raft consensus store on manager nodes. AES-256-GCM encrypted at rest. Replicated across all managers. Survives manager restarts.
Lifecycle
Secret exists independent of services. Attach to a service with --secret at deploy time or via docker service update. Tasks must restart to see a new secret version - Swarm has no in-place rotation.
What docker inspect exposes
docker inspect <container> shows secret name and target path in Spec.TaskTemplate.ContainerSpec.Secrets. It does not show the value. The value is only accessible inside the running container via the mounted file.
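In application code, the read side is just a file read; a common bug is keeping the trailing newline that echo leaves in the secret file. A minimal sketch of the read pattern (the helper name and the default directory argument are ours; only the /run/secrets path comes from Swarm):

```python
from pathlib import Path

def read_secret(name: str, secrets_dir: str = "/run/secrets") -> str:
    """Read a runtime secret from its mounted file.

    Strips the trailing newline that `echo "value" | docker secret create <name> -`
    leaves in the file - a recurring cause of mysterious auth failures.
    """
    path = Path(secrets_dir) / name
    return path.read_text().rstrip("\n")

# Hypothetical usage inside the container:
# db_password = read_secret("db_password")
```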
Docker Compose
File-backed secrets - host-dependent
Fragile at boundary cases
Injection path
Compose v2 reads the secret from a host file (or inline value) and bind-mounts it into the container at /run/secrets/<name>. Not a tmpfs - it's a bind-mount from the host filesystem. Compose v1 (docker-compose) silently ignores the secrets: directive entirely.
Storage backend
A plain file on the host machine. No encryption at rest, no distributed replication, no access control beyond host file permissions.
Lifecycle
Secret value is the file content at container start. If the host file changes, the running container sees the new content immediately (bind-mount), but only if the application re-reads the file.
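Because the bind-mount refreshes in place, rotation only takes effect if the application re-reads the file. One way to sketch that, assuming an mtime poll is acceptable (the class name and rotation mechanics are illustrative, not part of Compose):

```python
from pathlib import Path

class RefreshingSecret:
    """Re-read a bind-mounted secret file whenever its mtime changes.

    Avoids caching a stale value after the host file is rotated.
    """
    def __init__(self, path: str):
        self._path = Path(path)
        self._mtime = 0.0   # forces a read on first access
        self._value = ""

    @property
    def value(self) -> str:
        mtime = self._path.stat().st_mtime
        if mtime != self._mtime:   # file was rotated (or never read)
            self._value = self._path.read_text().rstrip("\n")
            self._mtime = mtime
        return self._value
```

The design point: fetch `secret.value` per connection attempt rather than once at startup, so a rotation on the host propagates without a container restart.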
What docker inspect exposes
docker inspect <container> shows the bind-mount source path. If secrets are passed as environment variables instead of via the secrets: directive, the value is fully visible in Config.Env.
Any environment: entry with a secret value is visible to anyone who can run docker inspect on the host.
Kubernetes
Secret objects - base64, not encrypted by default
Secure only with etcd encryption
Injection path
Secret object stored in etcd. At pod start, kubelet fetches the secret and either: (a) mounts it as a tmpfs volume at a path in the container, or (b) injects it as an environment variable. Option (a) is preferred. Option (b) exposes the value in kubectl describe pod.
Storage backend
etcd key-value store. Base64-encoded, not encrypted by default. Anyone with etcd read access sees all secrets in plaintext. Encryption at rest requires explicit EncryptionConfiguration - not enabled by default on most clusters.
Lifecycle
Update the Secret object and pods eventually receive the new value via the kubelet sync loop (default: 60s for volume-mounted secrets). Env-var-injected secrets require pod restart to update.
What kubectl describe exposes
kubectl describe pod shows env var names and their Secret source. kubectl get secret -o yaml shows base64-encoded values - decode with base64 -d and you have the plaintext. RBAC controls who can run these commands.
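To see why base64 is not protection, here is the round trip anyone with read access performs - an encode/decode sketch with an illustrative value:

```python
import base64

# What the API stores in a Secret's `data:` field for the value "hunter2"
encoded = base64.b64encode(b"hunter2").decode()

# Reversing it requires no key - base64 is an encoding, not encryption
decoded = base64.b64decode(encoded).decode()

assert decoded == "hunter2"
```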
Property                  | Docker Swarm                          | Docker Compose                           | Kubernetes
--------------------------|---------------------------------------|------------------------------------------|------------------------------------------------
Encrypted at rest         | Yes - Raft store, AES-256-GCM         | No - plain host file                     | Only with EncryptionConfig
In-memory mount (tmpfs)   | Yes - never written to disk           | No - bind-mount from host                | Yes - kubelet uses tmpfs
Value visible via inspect | No - name only                        | Yes, if using environment:               | Yes, via kubectl get secret -o yaml
Multi-service sharing     | Native - attach to multiple services  | Manual - each service reads same file    | Native - any pod in namespace, subject to RBAC
Rotation requires restart | Yes - must --force update             | No - bind-mount refreshes on file change | Env vars: yes. Volume mounts: no (60s sync)
Compose v1 support        | N/A                                   | Silently ignored                         | N/A

Verification checklist

Runnable commands per runtime. Each check has pass criteria and what to fix on failure.

Docker Swarm checks
1. List secrets registered in Swarm
Confirm the secret exists in the manager's store before checking service attachment.
docker secret ls
Pass
Your secret name appears in the list with a creation timestamp.
Fail / fix
Secret missing. Create it from stdin: printf '%s' "value" | docker secret create <name> - (the trailing dash tells docker to read stdin; printf avoids the trailing newline that echo adds) or from a file: docker secret create <name> ./secret.txt
2. Confirm service has secret reference
Verify the secret is attached to the service spec, not just created in the Swarm store.
docker service inspect <service> \
  --format '{{range .Spec.TaskTemplate.ContainerSpec.Secrets}}{{.SecretName}} -> /run/secrets/{{.File.Name}}{{"\n"}}{{end}}'
Pass
Output shows: my-secret -> /run/secrets/my-secret
Fail / fix
Empty output. Attach the secret with docker service update --secret-add <name> <service>, then run docker service update --force <service> to restart tasks so they mount it.
3. Confirm secret file exists inside running container
The service spec can reference a secret that is not yet present in the container: the task may have started before the secret was attached, or a node rebalance may have rescheduled it.
docker exec \
  $(docker ps -q --filter "name=<service>" | head -1) \
  ls -la /run/secrets/
Pass
Secret filename visible with non-zero size. Can cat /run/secrets/<name> inside the container to confirm content.
Fail / fix
File missing or directory empty. Force task restart: docker service update --force <service>. If still missing, verify secret is attached (check 2) and the worker node is healthy.
4. Audit for env-var secret leakage
If a Dockerfile uses ENV SECRET_VALUE=... or a compose file uses environment: SECRET=value, the value is baked into the image layer and visible here.
docker inspect \
  $(docker ps -q --filter "name=<service>" | head -1) \
  --format '{{json .Config.Env}}' | tr ',' '\n'
Pass
Env vars show names only (e.g. DB_HOST=postgres). No plaintext secret values visible.
Fail / fix
Plaintext secret value visible (e.g. DB_PASSWORD=hunter2). Remove ENV from Dockerfile. Read secrets from /run/secrets/<name> in application code instead. Rotate the exposed credential immediately.
Docker Compose checks
1. Confirm Compose CLI is v2 (plugin), not v1 (standalone)
The secrets: directive is silently ignored by docker-compose v1. This is the most common reason secrets appear to be configured but are not present in containers.
docker compose version
# should output: Docker Compose version v2.x.x
# NOT: docker-compose version 1.x.x
Pass
docker compose version shows v2.x.x.
Fail / fix
Running v1 standalone. Install the plugin: apt install docker-compose-plugin (Debian/Ubuntu). Replace all docker-compose calls with docker compose (no hyphen) in scripts and CI.
2. Verify secrets: directive is present, not environment: substitution
Check that the compose file uses the top-level secrets: block with a file source. Environment variable substitution from .env files passes values as plaintext env vars.
docker compose config | grep -A 10 "^secrets:"
Pass
Output shows the secrets block with file: or external: true entries. The service definition references them under secrets:, not environment:.
Fail / fix
No secrets block, or secrets passed via environment: MY_SECRET: ${MY_SECRET}. Move to the secrets: directive: define the secret at top level with a file: source, reference it in the service definition under secrets:, and read from /run/secrets/<name> in code.
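For reference, a minimal compose file using the directive correctly might look like this (service, image, and secret names are illustrative):

```yaml
services:
  app:
    image: myapp:latest
    secrets:
      - db_password          # mounted at /run/secrets/db_password

secrets:
  db_password:
    file: ./secrets/db_password.txt   # plain file on the host
```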
3. Confirm secret file is present inside container
A correctly configured secrets: directive with Compose v2 bind-mounts the secret file into /run/secrets/.
docker compose exec <service> ls -la /run/secrets/
Pass
Secret filename visible. cat /run/secrets/<name> returns expected value.
Fail / fix
Directory empty or does not exist. Check: (a) using Compose v2 (check 1), (b) secrets: block defined at top level and referenced in service definition, (c) secret source file exists on host at the path specified in file:. Recreate containers: docker compose up -d --force-recreate.
4. Check .env file is not accessible inside the container
If a .env file is COPY'd into the image or bind-mounted into the container, its contents are accessible to any process in the container and to anyone who can exec into it.
# Check for .env in image layers
docker compose exec <service> find / -name ".env" 2>/dev/null
# Check Config.Env for leaked values
docker inspect \
  $(docker compose ps -q <service>) \
  --format '{{json .Config.Env}}' | tr ',' '\n' | grep -v "^PATH\|^HOME\|^HOSTNAME"
Pass
No .env file found in container. Config.Env shows only non-sensitive vars (PATH, hostname, framework settings).
Fail / fix
.env accessible in container, or plaintext secrets in Config.Env. Remove .env from Dockerfile COPY instructions. Add .env to .dockerignore. Replace environment: secret entries with secrets: directive. Rotate any exposed values.
Kubernetes checks
1. List secrets and confirm expected names exist
Verify the Secret object exists in the correct namespace before checking pod attachment.
kubectl get secrets -n <namespace>
# Check a specific secret's type and key names (no values)
kubectl describe secret <name> -n <namespace>
Pass
Secret name appears in output. kubectl describe shows correct key names and non-zero byte counts.
Fail / fix
Secret missing. Create it: kubectl create secret generic <name> --from-literal=key=value -n <namespace> or from a file: kubectl create secret generic <name> --from-file=./secret.txt -n <namespace>
2. Confirm secret is mounted as a volume (not injected as env var)
Volume mounts expose the secret as a file in a tmpfs. Env var injection bakes the value into the running process environment - visible in kubectl describe pod and /proc/<pid>/environ.
# Check volumes and mounts
kubectl describe pod <pod> -n <namespace> | grep -A 20 "Volumes:"
# Confirm file exists inside container
kubectl exec <pod> -n <namespace> -- ls /path/to/mounted/secret/
Pass
Pod spec shows a projected or secret volume. File visible inside container at the expected mountPath.
Fail / fix
Secret injected as env var (valueFrom.secretKeyRef). Move to a volume mount in the pod spec. Update application to read from the mounted file path. Restart the pod after spec change.
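A pod manifest fragment using a secret volume instead of env injection might look like this (all names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: myapp:latest
      volumeMounts:
        - name: db-creds
          mountPath: /etc/secrets
          readOnly: true
  volumes:
    - name: db-creds
      secret:
        secretName: db-password   # the Secret object's name
```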
3. Audit for plaintext env var injection
Secrets injected as env vars show up in kubectl describe pod output with their source. This is better than hardcoded values, but the value is accessible to every process in the container, and the secret reference is visible to anyone with pod read access.
# Find all env vars sourced from secrets
kubectl get pod <pod> -n <namespace> -o jsonpath=\
'{range .spec.containers[*].env[*]}{.name}{" from secret: "}{.valueFrom.secretKeyRef.name}{"/"}{.valueFrom.secretKeyRef.key}{"\n"}{end}'
Pass
No output, or only non-sensitive vars from secrets (e.g. service URLs, feature flags). Credential secrets use volume mounts instead.
Fail / fix
Credential secrets (passwords, tokens, API keys) injected as env vars. Migrate to volume mounts. Tighten RBAC so only authorized service accounts can describe pods containing sensitive env refs.
4. Verify etcd encryption at rest is enabled
By default, K8s Secret objects are stored in etcd as base64-encoded plaintext. Anyone with etcd access can decode all secrets. Encryption at rest requires an EncryptionConfiguration file passed to the API server via --encryption-provider-config - it is not enabled by default on self-managed clusters, and managed clusters (EKS, GKE, AKS) offer KMS-backed envelope encryption as an opt-in.
# Check if the API server has an encryption config (kubeadm/self-managed clusters)
kubectl -n kube-system get pod -l component=kube-apiserver -o yaml \
  | grep encryption-provider-config
# On EKS, check via the aws CLI
aws eks describe-cluster --name <cluster> \
  --query "cluster.encryptionConfig"
# Confirm anonymous users cannot read secrets
kubectl auth can-i get secrets -n <namespace> --as system:anonymous
Pass
Encryption config shows aescbc, aesgcm, or KMS provider. Anonymous access denied for secrets.
Fail / fix
No encryption config. Enable encryption at rest via an EncryptionConfiguration file and restart the API server, then rewrite existing secrets so they are re-encrypted: kubectl get secrets --all-namespaces -o json | kubectl replace -f -. Consider migrating to an external secrets operator (External Secrets Operator, Vault Agent) that stores values outside etcd entirely.

Silent failure catalog

Failures observed running Docker Swarm secrets for customers in production. Each one produces no error at deploy time. The app breaks later, or silently reads the wrong value.

F-01
Secret not updated after docker service update without --force
Swarm - rotation appears to succeed, container reads old value
Symptom
You rotate a secret: docker secret create new-db-pass, then docker service update --secret-rm old-db-pass --secret-add new-db-pass <svc>, then docker secret rm old-db-pass. Every command exits 0. The app continues authenticating with the old password - which has already been rotated at the database. Auth failures start.
Diagnosis
docker service inspect <svc> --format '{{range .Spec.TaskTemplate.ContainerSpec.Secrets}}{{.SecretName}}{{"\n"}}{{end}}' shows the new secret name. But docker exec <container> cat /run/secrets/new-db-pass still returns the old value. Tasks are still running old containers.
docker service ps <svc> --format "{{.Name}} {{.CurrentState}}"
# If tasks show "Running X minutes ago" with no recent update, tasks did not restart
Root cause
Swarm only restarts tasks when the task spec changes in a way that affects scheduling. --secret-rm + --secret-add updates the service spec, but Swarm may determine the running tasks are already compliant with the new spec and skip restarting them. The mounted file at /run/secrets/ reflects what was injected at task start - it does not update in-place.
Fix
Always run docker service update --force <svc> after any secret rotation. --force increments the task spec version and triggers a rolling restart of all tasks, causing each new container to mount the updated secret.
docker service update --secret-rm old-db-pass --secret-add new-db-pass <svc>
docker service update --force <svc>
F-02
Secret file missing in container after node drain or rebalance
Swarm - app crashes on next request after node maintenance
Symptom
Node drained for maintenance (docker node update --availability drain <node>). Tasks reschedule to other nodes. App throws "permission denied" or "no such file" reading /run/secrets/<name>. Secret exists in docker secret ls.
Diagnosis
docker service ps <svc> shows tasks in Failed or Starting state on new nodes. Check the failure reason:
docker service ps <svc> --no-trunc \
  --format "{{.Name}} {{.Node}} {{.CurrentState}} {{.Error}}"
Root cause
Most common cause: the new worker node is not reachable by the manager at the time of task assignment (network partition, node not yet fully rejoined). The manager cannot deliver the secret to the worker, so the task starts without it. Less common: the secret was referenced by name but the actual secret object was deleted before the task started on the new node.
Fix
After drain + node work + returning node to active: verify all nodes are Ready before rescheduling tasks. Force a reroll to let the manager re-deliver secrets cleanly.
docker node update --availability active <node>
docker node ls   # confirm all nodes Ready
docker service update --force <svc>
F-03
docker inspect shows secret value when ENV used instead of --secret
Swarm Compose - value exposed to anyone with host access
Symptom
You believe secrets are managed properly. A security scan or audit runs docker inspect <container> and finds database passwords, API keys, or tokens in plain text in the Config.Env array. The values are in the image layer history too (docker history --no-trunc).
Diagnosis
Check the Dockerfile for ENV SECRET= instructions or ARGs used with ENV. Check the compose file for environment: MYVAR: ${MYVAR} entries sourcing values from .env.
docker inspect <container> --format '{{json .Config.Env}}' \
  | tr ',' '\n' | grep -v "^PATH\|^HOME\|^HOSTNAME\|^TERM"
Root cause
ENV in a Dockerfile embeds values into the image configuration at build time; they are visible to anyone who can run docker inspect or docker history against the image on the host. Environment variables set via compose environment: are likewise stored in container metadata and exposed by inspect.
Fix
Remove all secrets from ENV instructions and environment: blocks. For runtime secrets: use the secrets: directive (Compose) or --secret flag (Swarm). Application code reads from /run/secrets/<name>. For build-time secrets (npm tokens, etc.): use Docker BuildKit's --secret build flag - the value is never written to any image layer.
# Build-time secret (not stored in image layer)
docker build --secret id=npmrc,src=.npmrc .

# Runtime: read from file in application code
# Ruby:   File.read('/run/secrets/db_password').strip
# Node:   require('fs').readFileSync('/run/secrets/api_key', 'utf8').trim()
# Python: open('/run/secrets/db_password').read().strip()
F-04
Compose secrets: directive silently ignored by docker-compose v1
Compose - secrets block parsed and discarded, no error
Symptom
Compose file has a correctly structured secrets: block. Running docker-compose up succeeds with no errors or warnings. But /run/secrets/ inside the container is empty. App reads empty string from secret file.
Diagnosis
Determine which CLI is running:
docker-compose --version
# v1 standalone: "docker-compose version 1.x.x"
docker compose version
# v2 plugin: "Docker Compose version v2.x.x"

# Confirm secrets empty in container
docker-compose exec <service> ls /run/secrets/
Root cause
Docker Compose v1 (the standalone Python binary, docker-compose) was written before the Compose Specification added the secrets: key. It recognizes the key well enough to not throw a parse error, but does not implement the injection behavior. The block is silently ignored.
Fix
Install the Compose v2 plugin and replace docker-compose with docker compose everywhere - shell scripts, CI configs, Makefiles, cron jobs.
# Debian/Ubuntu
apt install docker-compose-plugin
# Verify
docker compose version   # must show v2.x.x

# Find all v1 calls in your scripts
grep -r "docker-compose" . --include="*.sh" --include="Makefile" \
  --include="*.yml" --include="*.yaml"
F-05
Secret file empty when target path conflicts with a bind-mount
Swarm Compose - mount ordering overwrites the secret
Symptom
docker exec <container> cat /run/secrets/<name> returns empty output or "Is a directory" error. The secret exists in Swarm (docker secret ls). The service spec correctly references it. But the file is empty or a directory inside the container.
Diagnosis
Inspect the container's mount table for overlapping paths:
docker inspect \
  $(docker ps -q --filter "name=<service>" | head -1) \
  --format '{{json .Mounts}}' | python3 -m json.tool
# Look for any mount targeting /run/secrets or a parent like /run
Root cause
Docker processes mounts in order. If a volume or bind-mount targets /run, /run/secrets, or any path that overlaps with the secret's target path, the later mount overlays the earlier one. The secret was injected into /run/secrets/ but the bind-mount placed an empty directory there afterward, hiding the file.
Fix
Never mount volumes at /run/secrets or /run. Change conflicting bind-mounts to use a different target path. If you need to share runtime files via a volume, use a path like /data/runtime instead. Update the secret's target path in the service spec if /run/secrets itself is reserved by something else.
# In a Swarm service spec, change the secret target path
docker service update \
  --secret-rm <name> \
  --secret-add source=<name>,target=/app/secrets/<name> \
  <svc>
docker service update --force <svc>
🔒 What we learned running Docker Swarm secrets for paying customers

vmfarms has hosted customer infrastructure since 2009 and has run Docker Swarm for every containerized customer since Swarm shipped. Secrets management was one of the first operational problems we had to solve at scale - before most of the tooling existed. Every failure in the catalog above is something we have seen in production, usually at 3am, usually because a deploy looked clean at the time.

The pattern we landed on: secrets are versioned objects in the Swarm store, services declare their secret dependencies explicitly, and every rotation is followed by a forced task reroll. We verify injection on every deploy using the same commands in the checklist above. The bind-mount conflict (F-05) still catches people - it is the hardest one to diagnose because nothing in the deploy output indicates a problem.

For teams hitting more complex scenarios - secret scoping across services, rotation without downtime, audit trails for who changed what - see the upcoming secret blast radius guide (coming soon, tracking issue #362).