# Docker Executor

Run workflow steps in Docker containers for isolated, reproducible execution.
## Container Field

Use the `container` field at the DAG level to run all steps in containers:
```yaml
# All steps run in this container
container:
  image: python:3.11
  volumes:
    - ./data:/data
  env:
    - PYTHONPATH=/app

steps:
  - pip install -r requirements.txt
  - python process.py /data/input.csv
```
## Step-Level Container Configuration
```yaml
steps:
  - name: hello
    executor:
      type: docker
      config:
        image: alpine
        autoRemove: true
    command: echo "hello from container"
```
## Configuration Reference (Step-Level)

Use these fields under `executor.config` for the Docker executor:
```yaml
steps:
  - executor:
      type: docker
      config:
        # Provide one of the following (see precedence note below):
        image: alpine:latest                  # Create a new container from this image
        # containerName: my-running-container # Exec into an existing container

        # Image pull behavior (string or boolean)
        pull: missing        # always | missing | never (true -> always, false -> never)

        # Remove container after the step finishes (for new containers)
        autoRemove: true

        # Target platform for pulling/creating the image
        platform: linux/amd64

        # Low-level Docker configs (passed through to the Docker API)
        container: {}   # container.Config (env, workingDir, user, exposedPorts, etc.)
        host: {}        # hostConfig (binds, portBindings, devices, restartPolicy, etc.)
        network: {}     # networkingConfig (endpoints, aliases, etc.)

        # Exec options when targeting an existing container (containerName)
        exec:
          user: root
          workingDir: /work
          env: ["DEBUG=1"]
```
Notes:

- You must set either `image` or `containerName`. If both are omitted, the step fails validation.
- Precedence: if `containerName` is set, the executor runs `docker exec` in that container. In this mode, `image`, `container`, `host`, and `network` are not used.
- When creating a new container (`image` set) and `command` is omitted, Docker uses the image's default `ENTRYPOINT`/`CMD`.
- When executing in an existing container (`containerName` set), the container must be running; otherwise the step fails with "container is not running". The `exec` block controls user, workingDir, and env.
- `host.autoRemove` from raw Docker config is ignored; use top-level `config.autoRemove` instead.
- `pull` accepts booleans for backward compatibility: `true` = `always`, `false` = `never`.
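Putting these notes together, a step that execs into an already-running container might look like the following sketch (the container name `my-running-db` is a placeholder for a container you have started yourself):

```yaml
steps:
  - name: run-in-existing
    executor:
      type: docker
      config:
        containerName: my-running-db   # placeholder; must already be running
        exec:
          user: root
          workingDir: /work
          env: ["DEBUG=1"]
    command: sh -c "echo hello from an existing container"
```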
## Behavior with DAG-Level Container

If the DAG has a top-level `container:` configured, any step using the Docker executor runs inside that shared container via `docker exec`.

- The step's `config` (including `image`, `container`/`host`/`network`, and `exec`) is ignored in this case; only the step's `command` and `args` are used.
- To customize the working directory or environment for steps in a DAG-level container, set them on the DAG-level `container` itself. If you need per-step image semantics (honoring `ENTRYPOINT`/`CMD`), remove the DAG-level `container` and use the Docker executor per step.
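As a sketch of that precedence rule, the step below declares its own `image`, but it is ignored; the command runs inside the shared DAG-level container via `docker exec`:

```yaml
container:
  image: python:3.11

steps:
  - executor:
      type: docker
      config:
        image: alpine        # ignored: the DAG-level container takes precedence
    command: python --version  # runs inside python:3.11 via docker exec
```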
## Validation and Errors

- Required fields:
  - Step-level: `config.image` or `config.containerName` must be provided (one is required). If `containerName` is provided, the target container must already be running.
  - DAG-level: `container.image` is required.
- Volume format (DAG-level `container.volumes`): `source:target[:ro|rw]`
  - `source` may be absolute, relative to the DAG workingDir (`.` or `./...`), or `~`-expanded; otherwise it is treated as a named volume.
  - Only `ro` or `rw` are valid modes; invalid formats fail with "invalid volume format".
- Port format (DAG-level `container.ports`): `"80"`, `"8080:80"`, `"127.0.0.1:8080:80"`, with an optional protocol on the container port: `80/tcp|udp|sctp` (default `tcp`). Invalid forms fail with "invalid port format".
- Network (DAG-level `container.network`): accepts `bridge`, `host`, `none`, `container:<name|id>`, or a custom network name. Custom names are added to `EndpointsConfig` automatically.
- Restart policy (DAG-level `container.restartPolicy`): supported values are `no`, `always`, and `unless-stopped`; other values fail validation.
- Platform (DAG-level and step-level): `linux/amd64`, `linux/arm64`, etc.; invalid platform strings fail.
- Startup/readiness (DAG-level):
  - `startup`: `keepalive` (default), `entrypoint`, or `command`. Invalid values fail with an error.
  - `waitFor`: `running` (default) or `healthy`. If `healthy` is selected but the image has no healthcheck, Dagu falls back to `running` and logs a warning.
  - `logPattern`: must be a valid regex; invalid patterns fail. The readiness timeout is 120s; on timeout, Dagu reports the last known state.
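A DAG-level `container` block exercising these validated fields might look like this sketch (names and values are illustrative; field placement follows the DAG-level options listed above):

```yaml
container:
  image: nginx:alpine
  volumes:
    - ./site:/usr/share/nginx/html:ro   # source:target:mode
    - cache-vol:/var/cache/nginx        # named volume
  ports:
    - "127.0.0.1:8080:80/tcp"
  network: bridge
  restartPolicy: unless-stopped
  platform: linux/amd64
  startup: keepalive
  waitFor: running
```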
## Container Field Configuration

The `container` field supports all Docker configuration options:
```yaml
container:
  image: node:20            # Required
  pullPolicy: missing       # always, missing, never
  env:
    - NODE_ENV=production
    - API_KEY=${API_KEY}    # From host environment
  volumes:
    - ./src:/app            # Bind mount
    - /data:/data:ro        # Read-only mount
  workingDir: /app          # Working directory
  platform: linux/amd64     # Platform specification
  user: "1000:1000"         # User and group
  ports:
    - "8080:8080"           # Port mapping
  network: host             # Network mode
  keepContainer: true       # Keep container running
```
## How Commands Execute with a DAG-Level Container

When you configure a DAG-level `container`, Dagu starts a single persistent container (kept alive with a simple sleep loop) and executes each step inside it using `docker exec`.

- Step commands run directly in the running container; the image's `ENTRYPOINT`/`CMD` are not invoked for step commands.
- If your image's entrypoint is a dispatcher (for example, it expects a job name like `sendConfirmationEmails`), call that entrypoint explicitly in your step command, or invoke the underlying program yourself.
Example:

```yaml
container:
  image: myorg/myimage:latest

steps:
  # Runs inside the already-running container via `docker exec`
  - my-entrypoint sendConfirmationEmails
```
If you prefer each step to honor the image's `ENTRYPOINT`/`CMD` as with a fresh `docker run`, use the step-level Docker executor instead of a DAG-level `container`.
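A per-step sketch of that alternative, assuming `myorg/myimage` defines the dispatcher as its `ENTRYPOINT` (standard Docker semantics: the step command is passed as arguments to the entrypoint):

```yaml
steps:
  # Each step creates a fresh container, so the image's ENTRYPOINT runs
  - executor:
      type: docker
      config:
        image: myorg/myimage:latest
        autoRemove: true
    command: sendConfirmationEmails   # passed as arguments to the ENTRYPOINT
```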
## Registry Authentication

Access private container registries with authentication configured at the DAG level:
```yaml
# Option 1: Structured format with username/password
registryAuths:
  docker.io:
    username: ${DOCKER_USERNAME}
    password: ${DOCKER_PASSWORD}
  ghcr.io:
    username: ${GITHUB_USER}
    password: ${GITHUB_TOKEN}

# Use private images in your workflow
container:
  image: ghcr.io/myorg/private-app:latest

steps:
  - python process.py
```
### Authentication Methods

**Using Environment Variables:**

```yaml
# Set DOCKER_AUTH_CONFIG with standard Docker format
registryAuths: ${DOCKER_AUTH_CONFIG}

# DOCKER_AUTH_CONFIG uses the same format as ~/.docker/config.json:
# {
#   "auths": {
#     "docker.io": {
#       "auth": "base64(username:password)"
#     }
#   }
# }
```
**Pre-encoded Authentication:**

```yaml
registryAuths:
  gcr.io:
    auth: ${GCR_AUTH_BASE64}  # base64(username:password)
```
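To produce such a pre-encoded value, base64-encode `username:password` without a trailing newline; `user` and `pass` below are placeholder credentials:

```shell
# Encode "username:password" for the `auth` field (no trailing newline)
printf '%s' "user:pass" | base64
# -> dXNlcjpwYXNz
```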
**Per-Registry JSON Configuration:**

```yaml
registryAuths:
  # JSON string for a specific registry
  "123456789012.dkr.ecr.us-east-1.amazonaws.com": |
    {
      "username": "AWS",
      "password": "${ECR_AUTH_TOKEN}"
    }
```
### Authentication Priority

Dagu checks for registry credentials in this order:

1. DAG-level `registryAuths` configured in your DAG file
2. The `DOCKER_AUTH_CONFIG` environment variable
3. Standard Docker authentication (same format as `~/.docker/config.json`)
4. No authentication, for public registries
Note: The `DOCKER_AUTH_CONFIG` format is fully compatible with Docker's standard `~/.docker/config.json` file. You can copy the contents of your Docker config file directly into the environment variable.
### Using Docker Config File

You can export your existing Docker configuration as an environment variable:

```shell
# Export your Docker config as an environment variable
export DOCKER_AUTH_CONFIG=$(cat ~/.docker/config.json)

# Then run your DAG - it will use the Docker credentials automatically
dagu run my-workflow.yaml
```
### Example: Multi-Registry Workflow

```yaml
registryAuths:
  docker.io:
    username: ${DOCKERHUB_USER}
    password: ${DOCKERHUB_TOKEN}
  ghcr.io:
    username: ${GITHUB_USER}
    password: ${GITHUB_TOKEN}
  gcr.io:
    auth: ${GCR_AUTH}

steps:
  - executor:
      type: docker
      config:
        image: myorg/processor:latest           # from Docker Hub
    command: process-data
  - executor:
      type: docker
      config:
        image: ghcr.io/myorg/analyzer:v2        # from GitHub
    command: analyze-results
  - executor:
      type: docker
      config:
        image: gcr.io/myproject/reporter:stable # from GCR
    command: generate-report
```
## Docker in Docker

To use Docker inside your containers, you need to mount the Docker socket and run as root. Example `compose.yml` for running Dagu with Docker support:
```yaml
# compose.yml for Dagu with Docker support
services:
  dagu:
    image: ghcr.io/dagu-org/dagu:latest
    ports:
      - 8080:8080
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./dags:/var/lib/dagu/dags
    entrypoint: ["dagu", "start-all"]  # Override default entrypoint
    user: "0:0"                        # Run as root for Docker access
```
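With the socket mounted, DAGs running under this service can use the Docker executor against the host's Docker daemon. A minimal sketch of such a DAG (the image and command are illustrative):

```yaml
steps:
  - executor:
      type: docker
      config:
        image: alpine
        autoRemove: true
    command: echo "runs via the host's Docker daemon"
```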
## Container Lifecycle Management

The `keepContainer` option prevents the container from being removed after the workflow completes, allowing for debugging and inspection:
```yaml
# Container stays alive for the entire workflow
container:
  image: postgres:16
  keepContainer: true
  env:
    - POSTGRES_PASSWORD=secret
  ports:
    - "5432:5432"
```
## Platform-Specific Builds
```yaml
container:
  image: postgres:16
  platform: linux/amd64
  env:
    - POSTGRES_PASSWORD=secret
  ports:
    - "5432:5432"
```