Deploying Boundary Enterprise
Overview
This section details the steps to manually create a Boundary Enterprise cluster in a private datacenter. It assumes that you have already read the Recommended Deployment Architecture and Detailed Design sections of this guide, and that you have a basic understanding of the Boundary Enterprise architecture and the steps required to deploy it in a private datacenter.
This guide uses the following names and IP addresses for the Boundary Enterprise controllers and workers.
| DNS Name | IP Address | Node Type | Location |
| --- | --- | --- | --- |
| controller-api-lb.boundary.domain | controller_api_lb_address | Internet-facing controller API load balancer (TCP 443) | all zones |
| controller-cluster-lb.boundary.domain | controller_cluster_lb_address | Internal-facing controller cluster load balancer (TCP 9201) | all zones |
| controller1.boundary.domain | 10.0.253.11 | Controller VM | zone1 |
| controller2.boundary.domain | 10.0.254.12 | Controller VM | zone2 |
| controller3.boundary.domain | 10.0.255.13 | Controller VM | zone3 |
| ingressworker-lb.boundary.domain | ingress_lb_address | Internal-facing ingress worker load balancer (TCP 9202) | all zones |
| ingressworker1.boundary.domain | 10.0.253.101 | Ingress worker VM | zone1 |
| ingressworker2.boundary.domain | 10.0.254.102 | Ingress worker VM | zone2 |
| ingressworker3.boundary.domain | 10.0.255.103 | Ingress worker VM | zone3 |
| egressworker1.boundary.domain | 10.0.253.201 | Egress worker VM | zone1 |
| egressworker2.boundary.domain | 10.0.254.202 | Egress worker VM | zone2 |
| egressworker3.boundary.domain | 10.0.255.203 | Egress worker VM | zone3 |
Prepare
License
Obtain your active Boundary Enterprise license file. If you do not have this file, please contact your HashiCorp account team.
Servers
- Refer to the Detailed Design section of this guide for guidance on sizing the servers for your environment.
- Identify the availability zones within your datacenter where you plan to host your Boundary controllers and ingress/egress workers. For the rest of this document, we refer to these as zone1, zone2, zone3, and so on.
- Build the servers. Ensure there is one Boundary Enterprise controller, one ingress worker, and one egress worker in each of the three availability zones, for a total of nine servers. For the rest of this document, we refer to these servers as controller1, controller2, controller3, ingressworker1, ingressworker2, ingressworker3, egressworker1, egressworker2, egressworker3, and so on.
- Ensure you can log in to each server as a user with sudo or root privileges via SSH or equivalent.
Load balancer
- A layer 4 load balancer exposes the controller API and administrator UI via HTTPS (port 443) to Boundary Enterprise clients. The load balancer distributes Boundary Enterprise client requests to the controllers' API port (default TCP 9200).
- Another layer 4 load balancer exposes the controllers' cluster port (default TCP 9201) to workers for session authorization, credential requests, and so on.
- Refer to the Detailed Design section of the guide for more information on how to configure the load balancer.
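As an illustration, here is a minimal sketch of the API load balancer as an HAProxy layer 4 (TCP) passthrough. HAProxy is an assumption here, so translate the idea to whatever load balancer you use: TLS is terminated on the controllers themselves, and the health check is a plain TCP connect against each controller's ops port (9203).

```
frontend boundary_api
    bind *:443
    mode tcp
    default_backend boundary_controllers

backend boundary_controllers
    mode tcp
    balance roundrobin
    # TCP connect health checks against each controller's ops listener
    server controller1 10.0.253.11:9200 check port 9203
    server controller2 10.0.254.12:9200 check port 9203
    server controller3 10.0.255.13:9200 check port 9203
```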
PostgreSQL
Controllers are stateless and store all configuration in an external PostgreSQL database. We recommend configuring the PostgreSQL database for high availability; refer to PostgreSQL High Availability, Load Balancing, and Replication. If you use a managed service, refer to your provider's PostgreSQL high availability documentation.
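If you manage PostgreSQL yourself, you can bootstrap the Boundary database and role with `psql`. A minimal sketch, assuming a hypothetical host postgres.boundary.domain and a throwaway password:

```shell-session
$ psql -h postgres.boundary.domain -U postgres \
    -c "CREATE ROLE boundary WITH LOGIN PASSWORD 'changeme';" \
    -c "CREATE DATABASE boundary OWNER boundary;"
```

The matching connection string for `controller.hcl` would then be `postgresql://boundary:changeme@postgres.boundary.domain:5432/boundary`.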
Storage for session recording
We recommend using S3-compliant object storage for audit logging and session recording. Refer to the Detailed Design section of the guide for more information on how to configure storage for session recording.
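Note that the storage bucket itself is registered with Boundary after the cluster is running, via the CLI or admin UI. A hedged sketch of creating an AWS S3-backed bucket (the bucket name and credentials are placeholders; verify the flags with `boundary storage-buckets create -h` for your version):

```shell-session
$ boundary storage-buckets create \
    -scope-id global \
    -plugin-name aws \
    -bucket-name "boundary-session-recordings" \
    -worker-filter '"enabled" in "/tags/bsr"' \
    -attributes '{"region": "ap-southeast-1", "disable_credential_rotation": true}' \
    -secrets '{"access_key_id": "REDACTED", "secret_access_key": "REDACTED"}'
```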
Boundary Enterprise controller configuration
TLS
Create an X.509 certificate and install it onto each of the Boundary Enterprise controllers. Follow your organization's process for creating a certificate that matches the DNS record you intend to direct users to when they access Boundary Enterprise; in this case, that is the DNS record pointing at the load balancer: `boundary.domain`. Replace `boundary.domain` with your actual domain name.
You need these files:
- The certificate (`cert.pem`).
- The certificate's private key (`key.pem`).
- The certificate authority bundle from a trusted certificate authority (`bundle.pem`).
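Before installing the material, you can sanity-check that the certificate carries the expected DNS name and chains to the bundle, for example:

```shell-session
$ openssl x509 -in cert.pem -noout -subject -ext subjectAltName
$ openssl verify -CAfile bundle.pem cert.pem
```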
You may have to create a new directory to store the certificate material at `/etc/boundary.d/tls`.

```shell-session
$ ls -l /etc/boundary.d/tls
-rw-r----- 1 boundary boundary 1801 Oct 17 03:47 bundle.pem
-rw-r----- 1 boundary boundary 1842 Oct 17 03:47 cert.pem
-rw-r----- 1 boundary boundary 1679 Oct 17 03:47 key.pem
```
Add the following TLS configuration to the `/etc/boundary.d/controller.hcl` file:
```hcl
# API listener configuration block
listener "tcp" {
address = "0.0.0.0:9200"
purpose = "api"
tls_disable = false
tls_cert_file = "/etc/boundary.d/tls/cert.pem"
tls_key_file = "/etc/boundary.d/tls/key.pem"
tls_client_ca_file = "/etc/boundary.d/tls/bundle.pem"
cors_enabled = true
cors_allowed_origins = ["*"]
}
# Ops listener for operations like health checks for load balancers
listener "tcp" {
address = "0.0.0.0:9203"
purpose = "ops"
tls_disable = false
tls_cert_file = "/etc/boundary.d/tls/cert.pem"
tls_key_file = "/etc/boundary.d/tls/key.pem"
tls_client_ca_file = "/etc/boundary.d/tls/bundle.pem"
}
```
Note
We recommend storing the TLS material and license key for Boundary Enterprise controllers in HashiCorp Vault, HCP Vault Secrets, or a cloud provider equivalent.
KMS for controllers
You need four KMS keys:
- `root` - The root key is the primary encryption key used by Boundary Enterprise.
- `worker-auth` - The controller and worker share this key to authenticate a worker to the controller.
- `recovery` - Use this key for rescue/recovery operations in case of system issues or when normal authentication methods are unavailable.
- `bsr` - Use this key for the session recording feature. Recording of session data uses this key, which also ensures the integrity of those recordings.
Add the following KMS configuration to the `/etc/boundary.d/controller.hcl` file:
```hcl
# Root KMS Key (managed by AWS KMS in this example)
kms "awskms" {
purpose = "root"
region = "ap-southeast-1"
kms_key_id = "abcd1234-a123-456a-a12b-aexamplekey1"
}
# Recovery KMS Key (managed by AWS KMS in this example)
kms "awskms" {
purpose = "recovery"
region = "ap-southeast-1"
kms_key_id = "abcd1234-a123-456a-a12b-aexamplekey2"
}
# Worker-Auth KMS Key (managed by AWS KMS in this example)
kms "awskms" {
purpose = "worker-auth"
region = "ap-southeast-1"
kms_key_id = "abcd1234-a123-456a-a12b-aexamplekey3"
}
# BSR KMS Key (managed by AWS KMS in this example)
kms "awskms" {
purpose = "bsr"
region = "ap-southeast-1"
kms_key_id = "abcd1234-a123-456a-a12b-aexamplekey4"
}
```
Refer to the Vault Transit page to configure Boundary Enterprise to use Vault's Transit secrets engine for key management. If you use a managed KMS service, refer to our KMS documentation for guidance.
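For reference, a transit-backed KMS stanza has the following shape. The address, token, and key name below are placeholders, and you need one stanza per purpose, each pointing at its own transit key:

```hcl
kms "transit" {
  purpose         = "root"
  address         = "https://vault.mycompany.com:8200"
  token           = "s.XXXXXXXXXXXX" # token with permissions on the transit mount
  disable_renewal = "false"
  key_name        = "boundary_root"
  mount_path      = "transit/"
}
```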
Prepare controller configuration to initialize a PostgreSQL database
Use the following configuration in the `/etc/boundary.d/controller.hcl` file to configure the database URL:
```hcl
# Controller configuration block
controller {
name = "<controller1>" # update here for other controllers
description = "<Boundary Controller 1>" # update here for other controllers
database {
url = "postgresql://POSTGRESQL_CONNECTION_STRING"
}
license = "file:///opt/boundary/license/license.hclic"
}
```
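To keep database credentials out of the configuration file, the `url` value can also be sourced indirectly; Boundary accepts `env://` and `file://` URLs here. For example:

```hcl
database {
  # Read the connection string from an environment variable
  # instead of embedding credentials in the config file.
  url = "env://BOUNDARY_PG_URL"
}
```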
Prepare Boundary Enterprise controller configuration
Populate the `/etc/boundary.d/controller.hcl` file with the configuration below. Use the same controller configuration on each controller node, replacing the name, description, database URL, license path, cluster listener IP address, and KMS configuration in each case.
```hcl
# disable memory from being swapped to disk
disable_mlock = true
telemetry {
prometheus_retention_time = "24h"
disable_hostname = true
}
# Controller configuration block
controller {
name = "<controller1>" # update here for other controllers
description = "<Boundary Enterprise Controller 1>" # update here for other controllers
database {
url = "postgresql://POSTGRESQL_CONNECTION_STRING"
}
license = "file:///opt/boundary/license/license.hclic"
}
# API listener configuration block
listener "tcp" {
address = "0.0.0.0:9200"
purpose = "api"
tls_disable = false
tls_cert_file = "/etc/boundary.d/tls/cert.pem"
tls_key_file = "/etc/boundary.d/tls/key.pem"
tls_client_ca_file = "/etc/boundary.d/tls/bundle.pem"
cors_enabled = true
cors_allowed_origins = ["*"]
}
# Data-plane listener configuration block (used for worker coordination)
listener "tcp" {
address = "<10.0.253.11>:9201" # update here for other controllers IP address
purpose = "cluster"
}
# Ops listener for operations like health checks for load balancers
listener "tcp" {
address = "0.0.0.0:9203"
purpose = "ops"
tls_disable = false
tls_cert_file = "/etc/boundary.d/tls/cert.pem"
tls_key_file = "/etc/boundary.d/tls/key.pem"
tls_client_ca_file = "/etc/boundary.d/tls/bundle.pem"
}
# Events (logging) configuration. This configures logging for ALL events to both
# stderr and a file at /var/log/boundary/controller.log
events {
audit_enabled = true
sysevents_enabled = true
observations_enabled = true
sink "stderr" {
name = "all-events"
description = "All events sent to stderr"
event_types = ["*"]
format = "cloudevents-json"
}
sink {
name = "file-sink"
description = "All events sent to a file"
event_types = ["*"]
format = "cloudevents-json"
file {
path = "/var/log/boundary"
file_name = "controller.log"
}
audit_config {
audit_filter_overrides {
sensitive = "redact"
secret = "redact"
}
}
}
}
# Root KMS Key (managed by AWS KMS in this example)
kms "awskms" {
purpose = "root"
region = "ap-southeast-1"
kms_key_id = "abcd1234-a123-456a-a12b-aexamplekey1"
}
# Recovery KMS Key
kms "awskms" {
purpose = "recovery"
region = "ap-southeast-1"
kms_key_id = "abcd1234-a123-456a-a12b-aexamplekey2"
}
# Worker-Auth KMS Key (in this example uses KMS authenticated workers)
kms "awskms" {
purpose = "worker-auth"
region = "ap-southeast-1"
kms_key_id = "abcd1234-a123-456a-a12b-aexamplekey3"
}
# BSR KMS Key (in this example uses KMS for the session recording feature)
kms "awskms" {
purpose = "bsr"
region = "ap-southeast-1"
kms_key_id = "abcd1234-a123-456a-a12b-aexamplekey4"
}
```
Initialize a PostgreSQL database
Before you can start Boundary Enterprise, you must initialize the database from one controller.
The following command initializes the Boundary Enterprise database with the configuration specified in the `/etc/boundary.d/controller.hcl` file:

```shell-session
$ boundary database init -config=/etc/boundary.d/controller.hcl
```
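The init command also generates initial resources (scopes, a password auth method, and an admin login) and prints the generated credentials to stdout, so capture that output somewhere safe. If you prefer to create those resources yourself, recent Boundary versions accept skip flags; confirm the exact names with `boundary database init -h`. A sketch:

```shell-session
$ boundary database init \
    -skip-auth-method-creation \
    -skip-host-resources-creation \
    -skip-scopes-creation \
    -skip-target-creation \
    -config=/etc/boundary.d/controller.hcl
```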
Starting Boundary Enterprise controller service
When the configuration files are in place on each Boundary Enterprise controller, you can enable and start the service via `systemd` on each of the Boundary Enterprise controller nodes.
Perform these steps on all Boundary Enterprise controllers:
1. Create a `boundary` user, and create directories for Boundary Enterprise configuration owned by this user.

```shell-session
$ adduser --system --group boundary || true
$ mkdir -p /etc/boundary.d /etc/boundary.d/tls /opt/boundary/license /var/log/boundary
$ chown -R boundary:boundary /etc/boundary.d /var/log/boundary
```
2. Download the Boundary Enterprise package from HashiCorp. Unzip the package and move the `boundary` binary to a shared `PATH` location such as `/usr/local/bin`, owned by the `boundary` user.

```shell-session
$ curl -O https://releases.hashicorp.com/boundary/0.17.1+ent/boundary_0.17.1+ent_linux_amd64.zip
$ unzip boundary_0.17.1+ent_linux_amd64.zip
$ mv boundary /usr/local/bin/
$ chown boundary:boundary /usr/local/bin/boundary
```
3. Add the license file, certificate files, and relevant config file to `/etc/boundary.d`. In the end, the directories should look like the below.

```shell-session
$ chown boundary:boundary /etc/boundary.d/*
$ chmod 640 /etc/boundary.d/*
$ ls -l /etc/boundary.d/
-rw-r----- 1 boundary boundary 1652 Oct 17 03:47 controller.hcl
drwxr-x--- 2 boundary boundary 4096 Oct 17 03:47 tls
$ ls -l /etc/boundary.d/tls
-rw-r----- 1 boundary boundary 1801 Oct 17 03:47 bundle.pem
-rw-r----- 1 boundary boundary 1842 Oct 17 03:47 cert.pem
-rw-r----- 1 boundary boundary 1679 Oct 17 03:47 key.pem
$ ls -l /opt/boundary/license
-rw-rw-r-- 1 root root 3514 Aug 15 18:21 EULA.txt
-rw-rw-r-- 1 root root 4922 Aug 15 18:21 LICENSE.txt
-rw-rw-r-- 1 root root 9518 Aug 15 18:21 TermsOfEvaluation.txt
-rw-r--r-- 1 root root 1163 Oct 17 03:47 license.hclic
$ ls -la /var/log/boundary
total 8
drwxr-x--- 2 boundary boundary 4096 Oct 17 06:23 .
drwxrwxr-x 11 root syslog 4096 Oct 17 06:23 ..
```
4. Create a `systemd` unit file for the Boundary Enterprise service, then load it into `systemd`. Note that the `ExecStart` line runs the `boundary` binary pointing to your `controller.hcl` file.

```shell-session
$ cat << 'EOF' >> /etc/systemd/system/boundary.service
[Unit]
Description="HashiCorp Boundary Enterprise"
Documentation=https://developer.hashicorp.com/boundary/docs
Requires=network-online.target
After=network-online.target
ConditionFileNotEmpty=/etc/boundary.d/controller.hcl
StartLimitIntervalSec=60
StartLimitBurst=3

[Service]
User=boundary
Group=boundary
ProtectSystem=full
ProtectHome=read-only
PrivateTmp=yes
PrivateDevices=yes
SecureBits=keep-caps
AmbientCapabilities=CAP_IPC_LOCK
CapabilityBoundingSet=CAP_SYSLOG CAP_IPC_LOCK
NoNewPrivileges=yes
ExecStart=/usr/local/bin/boundary server -config=/etc/boundary.d/controller.hcl
ExecReload=/bin/kill --signal HUP $MAINPID
KillMode=process
KillSignal=SIGINT
Restart=on-failure
RestartSec=5
TimeoutStopSec=30
LimitNOFILE=65536
LimitMEMLOCK=infinity

[Install]
WantedBy=multi-user.target
EOF
```
```shell-session
$ systemctl daemon-reload
$ systemctl enable boundary
```
5. Start the Boundary Enterprise controller service.

```shell-session
$ systemctl start boundary
```
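To confirm each controller came up cleanly, check the service and probe the ops listener's health endpoint. A quick sketch; `-k` is used because the certificate matches `boundary.domain` rather than the node's loopback address:

```shell-session
$ systemctl status boundary
$ curl -k https://127.0.0.1:9203/health
```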
Boundary Enterprise ingress workers configuration
KMS for ingress workers
Get the `worker-auth` KMS key from the KMS for controllers section. This key enables secure communication between workers and controllers, ensuring that only authorized workers can connect to the cluster.
Specify the following KMS configuration in the `/etc/boundary.d/worker.hcl` file:
```hcl
# Worker-auth KMS key (managed by AWS KMS in this example)
kms "awskms" {
purpose = "worker-auth"
region = "ap-southeast-1"
kms_key_id = "abcd1234-a123-456a-a12b-aexamplekey3"
}
```
Prepare ingress workers configuration
Populate the `/etc/boundary.d/worker.hcl` file with the configuration information below, replacing the content in angle brackets with the correct values for your deployment. All three ingress workers' configurations are the same except for the `worker` configuration stanza.
```hcl
# disable memory from being swapped to disk
disable_mlock = true
telemetry {
prometheus_retention_time = "24h"
disable_hostname = true
}
# worker block for configuring the specifics of the worker service
worker {
public_addr = "<10.0.253.101>" # update here for other ingress workers ip address
name = "<ingressworker1>" # update here for other ingress workers name
initial_upstreams = ["<controller_cluster_lb_address>:9201"]
recording_storage_path="/opt/boundary/bsr"
recording_storage_minimum_available_capacity="500MB"
tags = {
  "app"         = ["worker"]
  "env"         = ["uat"]
  "bsr"         = ["enabled"]
  "worker-type" = ["ingress"]
}
}
# listener denoting this is a worker proxy
listener "tcp" {
address = "0.0.0.0:9202"
purpose = "proxy"
}
# Ops listener for operations like health checks for ingress workers
listener "tcp" {
address = "0.0.0.0:9203"
purpose = "ops"
tls_disable = true
}
# Events (logging) configuration. This configures logging for ALL events to both
# stderr and a file at /var/log/boundary/ingress-worker.log
events {
audit_enabled = true
sysevents_enabled = true
observations_enabled = true
sink "stderr" {
name = "all-events"
description = "All events sent to stderr"
event_types = ["*"]
format = "cloudevents-json"
}
sink {
name = "file-sink"
description = "All events sent to a file"
event_types = ["*"]
format = "cloudevents-json"
file {
path = "/var/log/boundary"
file_name = "ingress-worker.log"
}
audit_config {
audit_filter_overrides {
sensitive = "redact"
secret = "redact"
}
}
}
}
# kms block for encrypting the authentication PKI material
# Worker-Auth KMS Key (managed by AWS KMS in this example)
kms "awskms" {
purpose = "worker-auth"
region = "ap-southeast-1"
kms_key_id = "abcd1234-a123-456a-a12b-aexamplekey3"
}
```
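The tags above come into play later, when you constrain targets to particular workers. For example, a target's ingress worker filter could select these workers with an expression like the following (a sketch of Boundary's worker filter syntax):

```text
"ingress" in "/tags/worker-type"
```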
Starting Boundary Enterprise ingress worker service
When the configuration files are in place on each Boundary Enterprise ingress worker, you can enable and start the service via `systemd` on each of the Boundary Enterprise ingress worker nodes.
Perform these steps on all Boundary Enterprise ingress workers:
1. Create a `boundary` user, and create directories for Boundary Enterprise configuration owned by this user.

```shell-session
$ adduser --system --group boundary || true
$ mkdir -p /etc/boundary.d /opt/boundary/bsr /var/log/boundary
$ chown -R boundary:boundary /etc/boundary.d /opt/boundary/bsr /var/log/boundary
```
2. Download the Boundary Enterprise binary and signature files from HashiCorp through the official release channels, and confirm the download's integrity using the `gpg` binary. Then unzip the package and move the `boundary` binary to a shared `PATH` location such as `/usr/local/bin`, owned by the `boundary` user, as per the below.

```shell-session
$ unzip boundary_0.17.1+ent_linux_amd64.zip
$ mv boundary /usr/local/bin/
$ chown boundary:boundary /usr/local/bin/boundary
```
3. Add the relevant config file to `/etc/boundary.d`. In the end, the directories should look like this:

```shell-session
$ chown boundary:boundary /etc/boundary.d/*
$ chmod 640 /etc/boundary.d/*
$ ls -l /etc/boundary.d/
-rw-r----- 1 boundary boundary 704 Oct 17 06:23 worker.hcl
$ ls -l /opt/boundary/
drwxr-x--- 3 boundary boundary 4096 Oct 17 06:23 bsr
drwxr-x--- 2 boundary boundary 4096 Oct 17 06:23 data
$ ls -la /var/log/boundary
total 8
drwxr-x--- 2 boundary boundary 4096 Oct 17 06:23 .
drwxrwxr-x 11 root syslog 4096 Oct 17 06:23 ..
```
4. Create a `systemd` unit file for the Boundary Enterprise service, then load it into `systemd`. Note that the `ExecStart` line runs the `boundary` binary pointing to your `worker.hcl` file:

```shell-session
$ cat << 'EOF' >> /etc/systemd/system/boundary.service
[Unit]
Description="HashiCorp Boundary Enterprise"
Documentation=https://www.boundaryproject.io/docs/
Requires=network-online.target
After=network-online.target
ConditionFileNotEmpty=/etc/boundary.d/worker.hcl
StartLimitIntervalSec=60
StartLimitBurst=3

[Service]
User=boundary
Group=boundary
ProtectSystem=full
ProtectHome=read-only
PrivateTmp=yes
PrivateDevices=yes
SecureBits=keep-caps
AmbientCapabilities=CAP_IPC_LOCK
CapabilityBoundingSet=CAP_SYSLOG CAP_IPC_LOCK
NoNewPrivileges=yes
ExecStart=/usr/local/bin/boundary server -config=/etc/boundary.d/worker.hcl
ExecReload=/bin/kill --signal HUP $MAINPID
KillMode=process
KillSignal=SIGINT
Restart=on-failure
RestartSec=5
TimeoutStopSec=30
LimitNOFILE=65536
LimitMEMLOCK=infinity

[Install]
WantedBy=multi-user.target
EOF
```
```shell-session
$ systemctl daemon-reload
$ systemctl enable boundary
```
5. Start the Boundary Enterprise ingress worker service.

```shell-session
$ systemctl start boundary
```
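To confirm the worker started and authenticated to the controllers, tail the service logs and probe the ops listener; it serves plain HTTP here because the worker config sets `tls_disable = true`:

```shell-session
$ journalctl -u boundary -f
$ curl http://127.0.0.1:9203/health
```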
Boundary Enterprise egress workers configuration
KMS for egress workers
Get the `worker-auth` KMS key from the KMS for controllers section. This key enables secure communication between workers and controllers, ensuring that only authorized workers can connect to the cluster.
Use the following KMS configuration block in the `/etc/boundary.d/worker.hcl` file:
```hcl
# Worker-Auth KMS Key (managed by AWS KMS in this example)
kms "awskms" {
purpose = "worker-auth"
region = "ap-southeast-1"
kms_key_id = "abcd1234-a123-456a-a12b-aexamplekey3"
}
```
Prepare egress workers configuration
Populate the /etc/boundary.d/worker.hcl
file with the configuration information below.
The configuration of all three egress workers should be identical except for the `worker` and `kms` blocks. Amend these as necessary.
```hcl
# disable memory from being swapped to disk
disable_mlock = true
telemetry {
prometheus_retention_time = "24h"
disable_hostname = true
}
# worker block for configuring the specifics of the worker service
worker {
public_addr = "<10.0.253.201>" # update here for other egress workers IP address
name = "<egressworker1>" # update here for other egress workers name
initial_upstreams = ["<ingress_lb_address>:9202"]
recording_storage_path="/opt/boundary/bsr"
recording_storage_minimum_available_capacity="500MB"
tags = {
  "app"         = ["worker"]
  "env"         = ["uat"]
  "bsr"         = ["enabled"]
  "worker-type" = ["egress"]
}
}
# listener denoting this is a worker proxy
listener "tcp" {
address = "0.0.0.0:9202"
purpose = "proxy"
}
# Ops listener for operations like health checks for egress workers
listener "tcp" {
address = "0.0.0.0:9203"
purpose = "ops"
tls_disable = true
}
# Events (logging) configuration. This configures logging for ALL events to both
# stderr and a file at /var/log/boundary/egress-worker.log
events {
audit_enabled = true
sysevents_enabled = true
observations_enabled = true
sink "stderr" {
name = "all-events"
description = "All events sent to stderr"
event_types = ["*"]
format = "cloudevents-json"
}
sink {
name = "file-sink"
description = "All events sent to a file"
event_types = ["*"]
format = "cloudevents-json"
file {
path = "/var/log/boundary"
file_name = "egress-worker.log"
}
audit_config {
audit_filter_overrides {
sensitive = "redact"
secret = "redact"
}
}
}
}
# kms block for encrypting the authentication PKI material
# Worker-Auth KMS Key (managed by AWS KMS in this example)
kms "awskms" {
purpose = "worker-auth"
region = "ap-southeast-1"
kms_key_id = "abcd1234-a123-456a-a12b-aexamplekey3"
}
```
Starting Boundary Enterprise egress worker service
When the configuration files are in place on each Boundary Enterprise egress worker, enable and start the service via `systemd` on each of the Boundary Enterprise egress worker nodes.
Perform these steps on all Boundary Enterprise egress workers:
1. Create a `boundary` user, and create directories for Boundary Enterprise configuration owned by this user.

```shell-session
$ adduser --system --group boundary || true
$ mkdir -p /etc/boundary.d /opt/boundary/bsr /var/log/boundary
$ chown -R boundary:boundary /etc/boundary.d /opt/boundary/bsr /var/log/boundary
```
2. Download the Boundary Enterprise binary and signature files from HashiCorp through the official release channels, and confirm the download's integrity using the `gpg` binary. Then unzip the package and move the `boundary` binary to a shared `PATH` location such as `/usr/local/bin`, owned by the `boundary` user, as per the below.

```shell-session
$ unzip boundary_0.17.1+ent_linux_amd64.zip
$ mv boundary /usr/local/bin/
$ chown boundary:boundary /usr/local/bin/boundary
```
3. Add the relevant config file to `/etc/boundary.d`. In the end, the directories should look like this:

```shell-session
$ chown boundary:boundary /etc/boundary.d/*
$ chmod 640 /etc/boundary.d/*
$ ls -l /etc/boundary.d/
-rw-r----- 1 boundary boundary 704 Oct 17 06:23 worker.hcl
$ ls -l /opt/boundary/
drwxr-x--- 3 boundary boundary 4096 Oct 17 06:23 bsr
drwxr-x--- 2 boundary boundary 4096 Oct 17 06:23 data
$ ls -la /var/log/boundary
total 8
drwxr-x--- 2 boundary boundary 4096 Oct 17 06:23 .
drwxrwxr-x 11 root syslog 4096 Oct 17 06:23 ..
```
4. Create a `systemd` unit file for the Boundary Enterprise service, then load it into `systemd`. Note that the `ExecStart` line runs the `boundary` binary pointing to your `worker.hcl` file:

```shell-session
$ cat << 'EOF' >> /etc/systemd/system/boundary.service
[Unit]
Description="HashiCorp Boundary Enterprise"
Documentation=https://www.boundaryproject.io/docs/
Requires=network-online.target
After=network-online.target
ConditionFileNotEmpty=/etc/boundary.d/worker.hcl
StartLimitIntervalSec=60
StartLimitBurst=3

[Service]
User=boundary
Group=boundary
ProtectSystem=full
ProtectHome=read-only
PrivateTmp=yes
PrivateDevices=yes
SecureBits=keep-caps
AmbientCapabilities=CAP_IPC_LOCK
CapabilityBoundingSet=CAP_SYSLOG CAP_IPC_LOCK
NoNewPrivileges=yes
ExecStart=/usr/local/bin/boundary server -config=/etc/boundary.d/worker.hcl
ExecReload=/bin/kill --signal HUP $MAINPID
KillMode=process
KillSignal=SIGINT
Restart=on-failure
RestartSec=5
TimeoutStopSec=30
LimitNOFILE=65536
LimitMEMLOCK=infinity

[Install]
WantedBy=multi-user.target
EOF
```
```shell-session
$ systemctl daemon-reload
$ systemctl enable boundary
```
5. Start the Boundary Enterprise egress worker service.

```shell-session
$ systemctl start boundary
```
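With all workers started, you can confirm from any machine with the Boundary CLI that they registered with the controllers. A sketch, assuming you authenticate as the admin user generated during database init (the auth method ID is a placeholder):

```shell-session
$ export BOUNDARY_ADDR=https://controller-api-lb.boundary.domain
$ boundary authenticate password -auth-method-id ampw_1234567890 -login-name admin
$ boundary workers list
```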
Next steps
After setting up a Boundary Enterprise cluster, it is essential to perform initial configuration steps to ensure the environment is secure, functional, and ready for use. Refer to Initial Configuration in the Boundary Enterprise: Operating Guide for Adoption.