Vault
Run Vault on IBM z/OS Container Extensions (zCX)
Deploy a fully secured 3-node HashiCorp Vault Enterprise cluster on IBM z/OS Container Extensions (zCX).
Before you start
Read about sealing and unsealing Vault. You cannot test your deployment without unsealing Vault.
Install the Docker CLI. You must be able to interact with Docker containers as part of the deploy process.
Check your permissions. You must have permission to:
- Deploy and update Docker containers.
- Run vault operator commands.
- Configure and deploy your load balancer.
You must have your encryption certificates. To secure communication between Vault clients, HAProxy, and the cluster nodes, you must have the following certificates:
- vault.pem - the TLS certificate that secures communication between nodes.
- vault.key - the private encryption key used by Vault.
- ca.pem - the certificate authority (CA) certificate used to validate mutual TLS between nodes.
Optional: install the Vault CLI. To make Vault CLI calls, you must install Vault locally. If you prefer not to install Vault, you can make API calls from the command line instead.
Step 1: Create your persistent Docker volumes
We recommend creating persistent Docker volumes so your Vault and HAProxy configuration persists across nodes in the cluster.
1. Create a volume called vault-config for Vault configuration files:

   $ docker volume create vault-config

2. Create a volume called haproxy-config for proxy configuration files:

   $ docker volume create haproxy-config

3. Verify that Docker created both volumes:

   $ docker volume ls | grep 'config'
Step 2: Create your local directory structure
We recommend creating a local directory structure to keep and version your
deploy files and the relevant configuration files for your Vault deployment:

shared-deploy
vault-deploy
|-- local-config
|   |-- certs
|   |-- hcl
proxy-deploy
|-- local-config
The remaining steps assume you store:

- Shared Docker compose files under shared-deploy.
- Your Docker compose and Dockerfile files for Vault under vault-deploy.
- Your Vault license file, vault-license.hclic, under vault-deploy/local-config.
- Your encryption certificates under vault-deploy/local-config/certs.
- Your Vault node configuration files under vault-deploy/local-config/hcl.
- Your Docker compose and Dockerfile files for the load balancer under proxy-deploy.
- Your load balancer configuration files under proxy-deploy/local-config.
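If you want to bootstrap the layout above in one command, the following sketch creates it under a scratch root. The /tmp/vault-deploy-demo path is purely illustrative; in practice you would run the mkdir in your own working directory.

```shell
# Sketch only: create the recommended directory layout under a scratch root.
# ROOT is a hypothetical path; adapt it to your working directory.
ROOT=/tmp/vault-deploy-demo
mkdir -p "${ROOT}/shared-deploy" \
         "${ROOT}/vault-deploy/local-config/certs" \
         "${ROOT}/vault-deploy/local-config/hcl" \
         "${ROOT}/proxy-deploy/local-config"
# List the resulting directory tree
find "${ROOT}" -type d | sort
```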
Step 3: Create a Docker environment file for Vault
To simplify maintenance, we recommend creating a single file,
vault-deploy/vault.env, to collect and set common Vault environment variables:
# Mount path for vault-config
VAULT_CONFIG="/vault/config"
# Path to license file
VAULT_LICENSE_PATH="${VAULT_CONFIG}/vault-license.hclic"
# Custom variable to select node-specific config file
NODE_IDX=""
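To see how these variables fit together, the sketch below mimics the expansion the container performs at start time, where VAULT_CONFIG and a per-node NODE_IDX select the node-specific configuration file. The values are illustrative.

```shell
# Sketch: how the vault.env variables expand into the node-specific
# config path that the container start command uses.
VAULT_CONFIG="/vault/config"
NODE_IDX="1"   # set per node in the compose file
CONFIG_FILE="${VAULT_CONFIG}/hcl/vault-${NODE_IDX}.hcl"
echo "${CONFIG_FILE}"   # /vault/config/hcl/vault-1.hcl
```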
Step 4: Configure the network environment
We recommend creating a Docker compose file, shared-deploy/cluster-env.yml,
for the cluster environment that defines the persistent volumes and a shared
network so you can refer to containers and volumes by name in configuration
files:
networks:
  <resource_name>:
    name: <custom_network_name>
    driver: bridge
    ipam:
      config:
        - subnet: "<custom_subnet>"
          ip_range: "<IP_range_for_nodes>"
          gateway: "<gateway_IP>"

volumes:
  vault-config:
    external: true
    name: vault-config
  haproxy-config:
    external: true
    name: haproxy-config
For example:
networks:
  vault_cluster:
    name: vault-network
    driver: bridge
    ipam:
      config:
        - subnet: "192.168.42.128/25"
          ip_range: "192.168.42.128/25"
          gateway: "192.168.42.129"

volumes:
  vault-config:
    external: true
    name: vault-config
  haproxy-config:
    external: true
    name: haproxy-config
Step 5: Create a basic compose file for Vault
Create a basic Docker compose file, vault-deploy/vault.compose.yml, that
defines the base build process for your Vault nodes:
name: <project_name>
services:
  <service_name>:
    volumes:
      - vault-config:/vault/config
    env_file:
      - path: <relative_path_to_env_settings>
        required: true
    networks:
      - <cluster_network_name>
include:
  - path: <path_to_cluster_env_file>
For example:
name: zcx
services:
  vault-base:
    volumes:
      - vault-config:/vault/config
    env_file:
      - path: ./vault.env
        required: true
    networks:
      - vault_cluster
include:
  - path: ../shared-deploy/cluster-env.yml
Step 6: Create a basic Dockerfile for Vault
Create a build file, vault-deploy/vault.Dockerfile, that copies key files to
persistent storage, creates directories for Vault data files, sets key directory
ownership and access permissions for Vault, and uses environment variables to
start Vault with the relevant, node-specific configuration file.
For example:
# syntax=docker/dockerfile:1
FROM hashicorp/vault-enterprise:<version_tag>
# Copy the config and license files
COPY local-config/hcl/vault-*.hcl /vault/config/hcl/
COPY local-config/vault-license.hclic /vault/config/vault-license.hclic
# Copy the cert files
COPY local-config/certs/vault.pem /vault/config/certs/vault.pem
COPY local-config/certs/vault.key /vault/config/certs/vault.key
COPY local-config/certs/ca.pem /vault/config/certs/ca.pem
RUN mkdir /vault/data/
RUN mkdir /vault/plugins/
RUN mkdir /vault/plugins/tmp
# Make sure Vault owns the Vault-related files
RUN chown -R vault:vault /vault
# Set ownership and mode for certificate files
RUN chown root:vault /vault/config/certs/vault.key
RUN chmod 0644 /vault/config/certs/vault.pem
RUN chmod 0644 /vault/config/certs/ca.pem
RUN chmod 0640 /vault/config/certs/vault.key
CMD vault server -config=${VAULT_CONFIG}/hcl/vault-${NODE_IDX}.hcl
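If you want to confirm what the chmod values in the Dockerfile produce before building the image, you can reproduce the scheme on scratch files. The /tmp path and empty files below are purely illustrative: the certificates stay world-readable, while the private key allows no world access.

```shell
# Sketch: reproduce the certificate permission scheme on empty scratch files.
# /tmp/certs-demo and the touched files are illustrative stand-ins.
mkdir -p /tmp/certs-demo
cd /tmp/certs-demo
touch vault.pem vault.key ca.pem
chmod 0644 vault.pem ca.pem   # certs: readable by everyone
chmod 0640 vault.key          # private key: owner + group only
stat -c '%a %n' vault.pem vault.key ca.pem
```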
Step 7: Update your Vault compose file
1. Update the base build in your vault.compose.yml file to use vault.Dockerfile:

   name: zcx
   services:
     vault-base:
       volumes:
         - vault-config:/vault/config
       env_file:
         - path: ./vault.env
           required: true
       networks:
         - vault_cluster
       build:
         context: .
         dockerfile: vault.Dockerfile
   include:
     - path: ../shared-deploy/cluster-env.yml

2. Extend the base build definition to add build instructions for each node in the cluster, set the relevant VAULT_ADDR environment variable, and use the NODE_IDX environment variable to customize the behavior of vault.Dockerfile. For example:

   name: zcx
   services:
     vault-base:
       ...
     vault-1:
       extends:
         service: vault-base
       container_name: zcx-vault-1
       hostname: zcx-vault-1
       environment:
         VAULT_ADDR: https://zcx-vault-1:8200
         NODE_IDX: "1"
     vault-2:
       extends:
         service: vault-base
       container_name: zcx-vault-2
       hostname: zcx-vault-2
       environment:
         VAULT_ADDR: https://zcx-vault-2:8200
         NODE_IDX: "2"
     vault-3:
       extends:
         service: vault-base
       container_name: zcx-vault-3
       hostname: zcx-vault-3
       environment:
         VAULT_ADDR: https://zcx-vault-3:8200
         NODE_IDX: "3"
   include:
     - path: ../shared-deploy/cluster-env.yml
Step 8: Create your Vault configuration files
Use the following template to create individual configuration files
(vault-N.hcl) for each of the Vault nodes:
ui = true
disable_mlock = true
license_path = "<path_to_license_file_on_cluster>"

# --- Configure the non-loopback interface ---
api_addr = "https://<container_name>:8200"
cluster_addr = "https://<container_name>:8201"
cluster_name = "<custom_name_for_cluster>"

# --- Listener configuration ---
listener "tcp" {
  address            = "[::]:8200"
  tls_disable        = "false"
  tls_cert_file      = "<path_to_cert_files>/vault.pem"
  tls_key_file       = "<path_to_cert_files>/vault.key"
  tls_client_ca_file = "<path_to_cert_files>/ca.pem"
}

# --- Plugin configuration ---
plugin_directory = "<path_to_plugin_dir_on_container>"
plugin_tmpdir = "<path_to_tmp_dir_on_container>"

# --- Integrated storage ---
storage "raft" {
  path    = "/vault/data"
  node_id = "<container_name>"

  # Rejoin configuration for cluster node N+1
  retry_join {
    leader_api_addr         = "https://<alt_container_name_1>:8200"
    leader_client_cert_file = "<path_to_cert_files>/vault.pem"
    leader_client_key_file  = "<path_to_cert_files>/vault.key"
    leader_ca_cert_file     = "<path_to_cert_files>/ca.pem"
  }

  # Rejoin configuration for cluster node N+2
  retry_join {
    leader_api_addr         = "https://<alt_container_name_2>:8200"
    leader_client_cert_file = "<path_to_cert_files>/vault.pem"
    leader_client_key_file  = "<path_to_cert_files>/vault.key"
    leader_ca_cert_file     = "<path_to_cert_files>/ca.pem"
  }
}
You must set tls_disable to false to force TLS communication within the
cluster. Additionally, we recommend setting the UI, license path, and plugin
directory information. Even if you currently do not intend to register custom or
enterprise plugins, we recommend deciding on a plugin location at setup because
you must restart the cluster to update the plugin information later.
For example:
ui = true
disable_mlock = true
license_path = "/vault/config/vault-license.hclic"

# --- Configure the non-loopback interface ---
api_addr = "https://zcx-vault-1:8200"
cluster_addr = "https://zcx-vault-1:8201"
cluster_name = "zcx-vault"

# --- Plugin configuration ---
plugin_directory = "/vault/plugins/"
plugin_tmpdir = "/vault/plugins/tmp"

# --- Listener configuration ---
listener "tcp" {
  address            = "[::]:8200"
  tls_disable        = "false"
  tls_cert_file      = "/vault/config/certs/vault.pem"
  tls_key_file       = "/vault/config/certs/vault.key"
  tls_client_ca_file = "/vault/config/certs/ca.pem"
}

# --- Integrated storage ---
storage "raft" {
  path    = "/vault/data"
  node_id = "zcx-vault-1"

  # Rejoin configuration for cluster node N+1
  retry_join {
    leader_api_addr         = "https://zcx-vault-2:8200"
    leader_client_cert_file = "/vault/config/certs/vault.pem"
    leader_client_key_file  = "/vault/config/certs/vault.key"
    leader_ca_cert_file     = "/vault/config/certs/ca.pem"
  }

  # Rejoin configuration for cluster node N+2
  retry_join {
    leader_api_addr         = "https://zcx-vault-3:8200"
    leader_client_cert_file = "/vault/config/certs/vault.pem"
    leader_client_key_file  = "/vault/config/certs/vault.key"
    leader_ca_cert_file     = "/vault/config/certs/ca.pem"
  }
}
Step 9: Start the Vault cluster
Before bringing up the entire cluster, you must bring up a single node to initialize Vault, generate a root token, and create your unseal keys. Then you can bring up and unseal the remaining nodes.
1. Start one of the Vault services. For example, to start vault-1:

   $ docker compose \
       -f vault-deploy/vault.compose.yml up vault-1 --build --detach

2. Use the Docker CLI to run vault operator init against the container and save the initialization details to vault-init.json in a secure location. For example, to initialize Vault with the vault-1 container, zcx-vault-1:

   $ docker exec -it zcx-vault-1 \
       vault operator init -format=json > /secure/location/vault-init.json

3. Use the initialization details to run vault operator unseal against the container with the generated unseal keys. For example, to unseal the Vault instance running in the zcx-vault-1 container:

   $ for token in $( cat /secure/location/vault-init.json | \
       jq -r '.unseal_keys_b64[0:3] | join(" ")' ) ; do
         docker exec -it zcx-vault-1 vault operator unseal $token
     done

4. Check the seal status on the container. For example, to check the Vault status for the zcx-vault-1 container:

   $ docker exec -it zcx-vault-1 vault status
   Key                     Value
   ---                     -----
   Seal Type               shamir
   Initialized             true
   Sealed                  false
   Total Shares            5
   Threshold               3
   Version                 1.21.2+ent
   Build Date              2026-01-06T16:58:57Z
   Storage Type            raft
   Cluster Name            zcx-vault
   Cluster ID              520515c4-887f-4ace-b267-8625cfd5fb43
   Removed From Cluster    false
   HA Enabled              true
   HA Cluster              https://zcx-vault-1:8201
   HA Mode                 active
   Active Since            2026-02-04T02:51:25.540535695Z
   Raft Committed Index    9945
   Raft Applied Index      9945
   Last WAL                3819

5. Repeat the docker compose and unseal steps for the remaining Vault nodes. For example, to build vault-2, run:

   $ docker compose \
       -f vault-deploy/vault.compose.yml up vault-2 --build --detach

   And to unseal vault-2, run:

   $ for token in $( cat /secure/location/vault-init.json | \
       jq -r '.unseal_keys_b64[0:3] | join(" ")' ) ; do
         docker exec -it zcx-vault-2 vault operator unseal $token
     done
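If jq is not available on your workstation, you can approximate the key extraction with standard POSIX tools. The sketch below runs against a fabricated init file with placeholder keys (keyA through keyE are not real Vault output) and only echoes the unseal commands it would run.

```shell
# Sketch: extract the first three unseal keys without jq.
# The init file and keys below are placeholders, not real Vault output.
cat > /tmp/vault-init-demo.json <<'EOF'
{"unseal_keys_b64":["keyA","keyB","keyC","keyD","keyE"],"root_token":"hvs.example"}
EOF
# Strip everything outside the JSON array, drop quotes, keep the first three keys
TOKENS=$(sed -e 's/.*\[//' -e 's/\].*//' /tmp/vault-init-demo.json \
  | tr -d '"' | tr ',' ' ' | awk '{print $1, $2, $3}')
for token in $TOKENS; do
  echo "would run: docker exec -it zcx-vault-1 vault operator unseal $token"
done
```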
Step 10: Create a compose file for HAProxy
Create a compose file, proxy-deploy/proxy.compose.yml, for your load balancer
that uses the same network configuration as the Vault cluster. For example:
name: haproxy
services:
  vault-lb:
    image: ibmz-hc-registry.ngrok.dev/haproxy:3.2
    volumes:
      - haproxy-config:/usr/local/etc/haproxy
    networks:
      - vault_cluster
    ports:
      - "8300:8200"
      - "8404:8404"
include:
  - path: ../shared-deploy/cluster-env.yml
Step 11: Create a configuration file for HAProxy
Use the container names for your Vault nodes to create a basic HAProxy
configuration file, proxy-deploy/local-config/haproxy.cfg.
The following example serves the HAProxy stats page over TLS for external clients, defines TCP-level health checks for the Vault cluster, and forwards client traffic to the Vault nodes with Layer-4 TCP passthrough using a round-robin distribution strategy:
global
    maxconn 4096
    log stdout format raw local0

defaults
    mode tcp            # TCP mode for Layer-4 forwarding
    timeout connect 5s
    timeout client 1m
    timeout server 1m
    log global

# --- Stats Page (HTTPS) ---
frontend stats
    bind *:8404 ssl crt /usr/local/etc/haproxy/haproxy.pem
    mode http
    stats enable
    stats uri /stats
    stats refresh 10s
    stats admin if TRUE

# --- Vault API Frontend (TLS Passthrough) ---
frontend vault_api
    bind *:8200         # Listen on Vault API port
    mode tcp            # Layer-4 forwarding to preserve TLS
    default_backend vault_nodes

# --- Vault Nodes Backend ---
backend vault_nodes
    mode tcp
    balance roundrobin  # Simple load distribution among nodes
    option tcp-check    # Health check at TCP level
    server node1 zcx-vault-1:8200 check
    server node2 zcx-vault-2:8200 check
    server node3 zcx-vault-3:8200 check
Step 12: Deploy the load balancer
1. Use the Docker CLI to copy your HAProxy configuration to the persistent volume. For example, to create a temporary container, haproxy-tmp, copy the file to the volume, and remove the temporary container:

   $ docker run \
       --name haproxy-tmp \
       -v haproxy-config:/usr/local/etc/haproxy \
       alpine sh -c "sleep 1" \
     && docker cp \
       proxy-deploy/local-config/haproxy.cfg \
       haproxy-tmp:/usr/local/etc/haproxy/haproxy.cfg \
     && docker rm -f haproxy-tmp

2. Use Docker compose to deploy the HAProxy load balancer. For example:

   $ docker compose \
       -f proxy-deploy/proxy.compose.yml up vault-lb --build --detach
Step 13: Verify the deployment
1. Get the IP address of the vault-1 container:

   $ docker inspect -f \
       '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' zcx-vault-1

2. Set your local VAULT_ADDR environment variable to the vault-1 container IP:

   $ export VAULT_ADDR=https://<vault-1-IP>:8200

3. Set your local VAULT_TOKEN environment variable to the root_token value returned in your initialization output file (vault-init.json):

   $ export VAULT_TOKEN=hvs.000000000000000000000000

4. Call Vault to list the current nodes:

   $ vault operator raft list-peers
   Node           Address             State       Voter
   ----           -------             -----       -----
   zcx-vault-1    zcx-vault-1:8201    leader      true
   zcx-vault-2    zcx-vault-2:8201    follower    true
   zcx-vault-3    zcx-vault-3:8201    follower    true

5. Open https://<proxy_url>:<proxy_port>/stats in the browser to test the HAProxy container health.

6. Set your local VAULT_PROXY_ADDR environment variable to the URL of your load balancer:

   $ export VAULT_PROXY_ADDR=https://<proxy_url>:<proxy_port>

7. Point the Vault CLI at the load balancer by replacing VAULT_ADDR with the proxy URL:

   $ export VAULT_ADDR=$VAULT_PROXY_ADDR

8. Call Vault to check the current storage configuration and confirm the proxy routes traffic correctly:

   $ vault read -format=json sys/storage/raft/configuration | jq .data
   {
     "config": {
       "index": 0,
       "servers": [
         {
           "address": "zcx-vault-1:8201",
           "leader": true,
           "node_id": "zcx-vault-1",
           "protocol_version": "3",
           "voter": true
         },
         {
           "address": "zcx-vault-2:8201",
           "leader": false,
           "node_id": "zcx-vault-2",
           "protocol_version": "3",
           "voter": true
         },
         {
           "address": "zcx-vault-3:8201",
           "leader": false,
           "node_id": "zcx-vault-3",
           "protocol_version": "3",
           "voter": true
         }
       ]
     }
   }
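As a quick sanity check, all three nodes should report voter status. The sketch below runs a simple count against a saved copy of the configuration output; the /tmp path and the file contents are a hypothetical reproduction of the example output, not live cluster data.

```shell
# Sketch: count voting raft peers in a saved copy of the configuration output.
# The file below is an illustrative reproduction of the example output.
cat > /tmp/raft-config-demo.json <<'EOF'
{
  "config": {
    "servers": [
      {"address": "zcx-vault-1:8201", "leader": true,  "voter": true},
      {"address": "zcx-vault-2:8201", "leader": false, "voter": true},
      {"address": "zcx-vault-3:8201", "leader": false, "voter": true}
    ]
  }
}
EOF
# Count lines that declare a voting peer; a healthy 3-node cluster shows 3
grep -c '"voter": true' /tmp/raft-config-demo.json
```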