Worker stanza
The worker stanza configures Boundary worker-specific parameters.
All workers within Boundary use certificates and encryption keys to identify themselves and protect data in transit. However, there are three different ways to register them so that registration of workers can fit into any workflow: controller-led, worker-led, and via external KMS.
The sub-pages linked at the bottom of this page explain the differences in their configuration.
Workers registered with the worker-led or controller-led methods must be registered in the system with an API call, and they require storage on disk to hold their current set of credentials. Workers that use an external KMS auto-register after authenticating, which makes them an easy mechanism to use for automatic scaling. They also do not need to store credentials locally; the KMS re-authenticates them each time they connect.
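For illustration, a minimal sketch of KMS-based registration, reusing the example key from the configuration at the end of this page (the key and key_id are placeholders; use a production KMS in real deployments). The worker declares a kms block with the worker-auth purpose whose key material matches the controller's worker-auth KMS:

kms "aead" {
  purpose   = "worker-auth"
  aead_type = "aes-gcm"
  key       = "X+IJMVT6OnsrIR6G/9OTcJSX+lM9FSPN"   # example key only; must match the controller's worker-auth key
  key_id    = "global_worker-auth"
}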
Important
Before version 0.15 of Boundary, there were two different types of workers: PKI workers and KMS workers.
If you use pre-0.15 workers with pre-0.15 upstream configurations, switch the documentation version to 0.13.x or 0.14.x to ensure you have the correct information.
Common worker parameters
The following fields apply to all registration mechanisms.
worker {
  public_addr = "5.1.23.198"

  # Local storage path required if session recording is enabled
  recording_storage_path = "tmp/boundary/"

  # Minimum available disk space required in the local storage path if session recording is enabled
  recording_storage_minimum_available_capacity = "500MB"

  # Mutually exclusive with hcp_boundary_cluster_id
  initial_upstreams = [
    "10.0.0.1",
    "10.0.0.2",
  ]

  tags {
    type   = ["prod", "webservers"]
    region = ["us-east-1"]
  }

  # HCP Boundary only
  # hcp_boundary_cluster_id = "....."
}
public_addr
- Specifies the public host or IP address (and optionally port) where clients can reach the worker for proxying. By default, it uses the address of the listener marked for the "proxy" purpose. This is useful for cloud environments that do not bind a publicly accessible IP directly to a NIC on the host, such as an Amazon EIP. You should omit this parameter in multi-hop configurations if this self-managed worker connects to an upstream HCP-managed worker.
This value can reference any of the following:
- a direct address string
- read an address from a file on disk (file://)
- read an address from an environment variable (env://)
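For example, a brief sketch of the env:// form (the variable name BOUNDARY_PUBLIC_ADDR is illustrative):

worker {
  # Resolve the public proxy address from an environment variable at startup
  public_addr = "env://BOUNDARY_PUBLIC_ADDR"
}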
initial_upstreams
- A list of hosts/IP addresses and optionally ports for reaching the Boundary cluster. The port defaults to :9201 if not specified. This value can be a direct string array with the addresses, or it can refer to a file on disk (file://) from which the addresses are read, or an environment variable (env://) from which to read the addresses. When you use a file or environment variable, its contents must be formatted as a JSON array: ["127.0.0.1", "192.168.0.1", "10.0.0.1"] (see the sketch after this entry).
Self-managed workers connecting to HCP Boundary require the hcp_boundary_cluster_id parameter instead of initial_upstreams, unless you are configuring an HCP-managed worker as an ingress worker. If you configure a self-managed worker with both initial_upstreams and hcp_boundary_cluster_id, the worker configuration fails.
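As a sketch of the file:// form described in this entry (the file path is illustrative), the upstream addresses can live in a separate JSON file:

# /etc/boundary/upstreams.json contains: ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
worker {
  initial_upstreams = "file:///etc/boundary/upstreams.json"
}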
hcp_boundary_cluster_id
- A string required to configure workers using worker-led or controller-led registration to connect to your HCP Boundary cluster, rather than specifying initial_upstreams. This parameter is valid only for workers using the worker-led or controller-led registration method and for workers directly connected to HCP Boundary.
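For example, a sketch of a self-managed worker pointed at HCP Boundary (the cluster ID shown is a placeholder); it sets hcp_boundary_cluster_id and omits initial_upstreams:

worker {
  # Connect to the HCP Boundary cluster by ID; do not also set initial_upstreams
  hcp_boundary_cluster_id = "7d6a1f3e-0000-0000-0000-000000000000"
  auth_storage_path       = "/boundary/hcp-worker-1"
}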
recording_storage_path
- A path to the local storage for recorded sessions. Boundary stores session recordings in the local storage while they are in progress. When the session is complete, Boundary moves the local session recording to remote storage and deletes the local copy.
recording_storage_minimum_available_capacity
- A value measured in bytes that defines the worker's local storage state. Boundary compares this value to the available local disk space found in the recording_storage_path to determine whether a worker can perform session recording operations. The supported suffixes are kb, kib, mb, mib, gb, gib, tb, and tib, and they are not case sensitive. Examples: 2GB, 2gb, 2GiB, 2gib. The possible storage states based on the recording_storage_minimum_available_capacity are (see the sketch after this list):
- Available - The worker has storage above the threshold and can proxy sessions that have session recording enabled.
- Low storage - The worker has storage below the threshold. It allows existing sessions to continue without interruption but prevents proxying new sessions that have session recording enabled. The worker cannot record new sessions or play back existing recordings.
- Critically low storage - The worker falls below half the storage threshold. It forcefully closes existing sessions with session recording. The worker cannot record new sessions or play back existing recordings.
- Out of storage - The worker is out of local disk space. It cannot record new sessions or play back existing recordings. The worker enters an unrecoverable state, requiring an administrator to intervene and resolve the issue.
- Not configured - The worker lacks a configured local storage path.
- Unknown - The worker starts with this default local storage state. This state indicates that the worker's local storage state is not yet known.
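As referenced above, a brief sketch showing the two session recording settings together (the path and threshold are illustrative, and the suffix is deliberately lowercase to show case insensitivity):

worker {
  recording_storage_path                       = "/tmp/boundary/recordings"
  recording_storage_minimum_available_capacity = "2gib"
}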
tags
- A map of key-value pairs where values are an array of strings. It is most commonly used for filtering the targets a worker can proxy via worker tags. On SIGHUP, the tags set here are re-parsed and new values used. The value can also be a string referring to a file on disk (file://) or an environment variable (env://).
Signals
The SIGHUP signal causes a worker to reload its configuration file to pick up any updates for the initial_upstreams and tags values. Boundary ignores other updated values.
The SIGTERM and SIGINT signals initiate a graceful shutdown on a worker. The worker waits for any sessions to drain before shutting down. Workers in a graceful shutdown state do not receive any new work, including session proxying, from the control plane.
Multi-hop worker capabilities
Enterprise
This feature requires HCP Boundary or Boundary Enterprise
Multi-hop capabilities, including multi-hop sessions and Vault private access, let a session or Vault credential request go through more than one worker. To enable multi-hop, you must connect two or more workers to each other in some configuration. There are no limits on the number of workers allowed in a multi-hop session configuration.
It helps to think of “upstream” and “downstream” nodes in the context of multi-hop. If you view controllers as the “top” node of a multi-hop chain, any worker connected to a node is "downstream" of that node. The node that any particular worker connects to (whether another worker or a controller) is the "upstream" of that node. For example, in the diagram below, Worker 2’s upstream is Worker 1, and its downstream is Worker 3.
You can deploy multi-hop workers in scenarios where inbound network traffic is not allowed. A worker in a private network can send outbound communication to its upstream worker, and create a reverse proxy to establish a session.
You can configure target worker filters with multi-hop workers to allow for fine-grained control on which workers handle ingress and egress for session traffic to a target. Ingress worker filters specify the workers you use to initiate a session, and egress worker filters specify the workers you use to access targets.
Multi-hop worker requirements
When you configure multi-hop sessions, there is an "ingress" worker, an "egress" worker, and any number of intermediary workers. Ingress, egress, and intermediary workers have the following requirements.
Ingress worker requirements
To proxy target connections, ingress workers require outbound access to the Boundary control plane and inbound access from clients.
Intermediary worker requirements
Intermediary workers require outbound access to an upstream worker. The upstream worker may be an ingress worker or another intermediary worker. Intermediary workers also require inbound access from a downstream worker. The downstream worker may be an egress worker or another intermediary worker.
Egress worker requirements
To proxy target connections, egress workers require outbound access to an upstream worker and outbound access to the destination host or service.
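To make the upstream/downstream chain concrete, here is a hedged sketch (the addresses and the :9202 worker proxy port are illustrative) in which each stanza lives in its own worker's configuration file and the egress worker lists the intermediary worker, rather than the control plane, as its upstream:

# Intermediary worker: connects upstream toward the ingress worker
worker {
  public_addr       = "10.1.0.10"
  initial_upstreams = ["ingress.example.com:9202"]
}

# Egress worker: private network, connects outbound to the intermediary only
worker {
  initial_upstreams = ["10.1.0.10:9202"]
}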
Complete configuration example
listener "tcp" {
purpose = "proxy"
tls_disable = true
address = "127.0.0.1"
}
worker {
# Path for worker storage, assuming worker-led or controller-led registration. Must be unique across workers
auth_storage_path="/boundary/demo-worker-1"
# Local storage path required if session recording is enabled
recording_storage_path = "tmp/boundary/"
# Minimum available disk space required in the local storage path if session recording is enabled
recording_storage_minimum_available_capacity = "500MB"
# Workers typically need to reach upstreams on :9201
initial_upstreams = [
"10.0.0.1",
"10.0.0.2",
"10.0.0.3",
]
public_addr = "myhost.mycompany.com"
tags {
type = ["prod", "webservers"]
region = ["us-east-1"]
}
}
# The following KMS config is an example only
# Use a production KMS such as AWS KMS for production installs
kms "aead" {
purpose = "worker-auth-storage"
aead_type = "aes-gcm"
key = "X+IJMVT6OnsrIR6G/9OTcJSX+lM9FSPN"
key_id = "worker-auth-storage"
}
Tutorials
Refer to the Self-Managed Worker Registration with HCP Boundary tutorial to learn how to register and manage Workers.
Refer to the Manage Multi-Hop Sessions with HCP Boundary tutorial to learn how to configure a multi-hop session.