HashiCorp Cloud Platform
# Install self-managed workers
HCP Boundary allows organizations to register and manage their own workers. You can deploy these self-managed workers in private networks, and they can communicate with an upstream HCP Boundary cluster.
For a step-by-step example of configuring a self-managed worker instance, refer to the self-managed workers tutorial.
To install and configure a self-managed worker, complete the procedures below.
## Download the Boundary Enterprise binary
1. Navigate to the Boundary releases page and download the latest Boundary Enterprise binary for your operating system.

   For Linux, there are multiple versions of the binary available, based on distro and architecture. Select the correct package to download the zip archive to your local machine. Then extract the `boundary` binary.

   Alternatively, refer to the example below for installing the latest version of the `boundary-enterprise` package using a package manager:

   ```shell-session
   $ curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg

   $ echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list

   $ sudo apt update && sudo apt install boundary-enterprise -y
   ```
2. After downloading the binary, ensure that the `boundary` version matches the HCP Boundary control plane's version in order to benefit from the latest HCP Boundary features.

   Use the following command to verify the version:

   ```shell-session
   $ boundary version

   Version information:
     Build Date:          2024-11-18T16:04:45Z
     Git Revision:        d648fb7e0fe80d45df04faa165161ede74014888
     Metadata:            ent
     Version Number:      0.18.1+ent
   ```
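If you automate provisioning, you can pull the version number out of the `boundary version` output with standard tools. The sketch below uses a stand-in variable for the command output so it runs anywhere; on a real worker host, capture the output with `output=$(boundary version)` instead.

```shell
# Stand-in for the output of `boundary version`; on a machine with the
# binary installed, use: output=$(boundary version)
output='Version information:
  Build Date:          2024-11-18T16:04:45Z
  Git Revision:        d648fb7e0fe80d45df04faa165161ede74014888
  Metadata:            ent
  Version Number:      0.18.1+ent'

# Extract the "Version Number" field, for example to compare it against
# the control plane version in a provisioning script.
version=$(printf '%s\n' "$output" | awk '/Version Number/ {print $3}')
echo "$version"
```
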
## Create the self-managed worker configuration file
Next, create a self-managed worker configuration file. Refer to the complete configuration example to view all valid configuration options.
1. Create a new file to store the worker configuration:

   ```shell-session
   $ touch worker.hcl
   ```
2. Open the `worker.hcl` file with a text editor, such as Vi. Paste the following configuration information into the `worker.hcl` file:

   ```hcl
   disable_mlock = true

   hcp_boundary_cluster_id = "<cluster-id>"

   listener "tcp" {
     address = "0.0.0.0:9202"
     purpose = "proxy"
   }

   worker {
     auth_storage_path = "/home/myusername/worker"
     tags {
       type = ["worker", "linux"]
     }
   }
   ```
3. Update the configuration fields in the `worker.hcl` file as necessary. You can specify the following configuration fields for self-managed workers:

   - The `hcp_boundary_cluster_id` field accepts a Boundary cluster ID and is used by the worker when it initially connects to HCP Boundary. You configure this field outside of the `worker` stanza.

     The cluster ID is the UUID in the HCP Boundary cluster URL. For example, the cluster ID is `c3a7a20a-f663-40f3-a8e3-1b2f69b36254` if your cluster URL is:

     `https://c3a7a20a-f663-40f3-a8e3-1b2f69b36254.boundary.hashicorp.cloud`
   - The `listener` stanza in the example above sets the `address` to `0.0.0.0:9202`. This port should already be configured by the AWS security group for this instance to accept inbound TCP connections. If you want to use a custom listener port, you can specify it in this field.
   - If set, the `public_addr` field should match the public IP or DNS name of your self-managed worker instance. You do not need to set `public_addr` if the worker has outbound access to an upstream worker or controller. If you deploy the Boundary client and worker on the same local machine, omit the `public_addr` attribute.

     For an example of the Boundary client and worker deployed on the same local machine, refer to the Configure the worker section of the self-managed worker tutorial.
   - The `auth_storage_path` is a local path where the worker stores its credentials. Do not share this storage between workers. This field should contain the full path to the worker directory, such as `/home/ubuntu/worker`.
   - The `initial_upstreams` value indicates the address or addresses a worker uses when initially connecting to Boundary. You can use `initial_upstreams` in the `worker` stanza as an alternative to the `hcp_boundary_cluster_id`.

     For most use cases, the `hcp_boundary_cluster_id` is sufficient to ensure that connectivity is always available, even if the HCP-managed upstream workers change. Only configure an `initial_upstreams` value if you want to connect this worker to another self-managed or HCP-managed worker as part of a multi-hop sessions topology. Use `hcp_boundary_cluster_id` to connect self-managed workers directly to HCP Boundary.

     The example above uses the `auth_storage_path` and `hcp_boundary_cluster_id` values. If you want to configure `initial_upstreams` instead, omit the `hcp_boundary_cluster_id`.
4. Save the `worker.hcl` file.
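If you choose `initial_upstreams` instead of `hcp_boundary_cluster_id`, the configuration might look like the following sketch. The upstream address shown is a placeholder, not a real endpoint; replace it with the reachable proxy address of the self-managed or HCP-managed worker that this worker should connect through.

```hcl
disable_mlock = true

listener "tcp" {
  address = "0.0.0.0:9202"
  purpose = "proxy"
}

worker {
  # Placeholder address; replace with the upstream worker's proxy address.
  initial_upstreams = ["10.0.0.1:9202"]
  auth_storage_path = "/home/myusername/worker"
  tags {
    type = ["worker", "linux"]
  }
}
```

Note that `hcp_boundary_cluster_id` is omitted here, because the two options are alternatives.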
## Start the self-managed worker
Once you create the configuration file, you can start the worker server. Use the following command to start the server. You must provide the full path to the worker configuration file, for example `/home/myusername/worker.hcl`.
Note the `Worker Auth Registration Request:` value in the server output. You can also locate this value in the `auth_request_token` file. You must provide this value when you register a new worker with HCP.
Enter the following command to start the worker:
```shell-session
$ ./boundary server -config="/home/myusername/worker.hcl"

==> Boundary server configuration:
                              Cgo: disabled
                       Listener 1: tcp (addr: "0.0.0.0:9202", max_request_duration: "1m30s", purpose: "proxy")
                        Log Level: info
                            Mlock: supported: true, enabled: false
                          Version: Boundary v0.11.2+hcp
                      Version Sha: f0006502c93b51291896b4c9a1d2d5290796f9ce
       Worker Auth Current Key Id: knoll-unengaged-twisting-kite-envelope-dock-liftoff-legend
 Worker Auth Registration Request: GzusqckarbczHoLGQ4UA25uSR7RQJqCjDfxGSJZvEpwQpE7HzYvpDJ88a4QMP3cUUeBXhS5oTgck3ZvZ3nrZWD3HxXzgq4wNScpy7WE7JmNrrGNLNEFeqqMcyhjqGJVvg2PqiZA6arL6zYLNLNCEFtRhcvG5LLMeHc3bthkrbwLg7R7TNswTjDJWmwh4peYpnKuQ9qHEuTK9fapmw4fdvRTiTbrq78ju4asvLByFTCTR3nbk62Tc15iANYsUAn9JLSxjgRXTsuTBkp4QoqBqz89pEi258Wd1ywcACBHRT3
         Worker Auth Storage Path: /home/myusername/worker
         Worker Public Proxy Addr: 52.90.177.171:9202

==> Boundary server started! Log data will stream in below:

{"id":"l0UQKrAg7b","source":"https://hashicorp.com/boundary/ip-172-31-86-85/worker","specversion":"1.0","type":"system","data":{"version":"v0.1","op":"worker.(Worker).StartControllerConnections","data":{"msg":"Setting HCP Boundary cluster address 6f40d99c-ed7a-4f22-ae52-931a5bc79c03.proxy.boundary.hashicorp.cloud:9202 as upstream address"}},"datacontentype":"application/cloudevents","time":"2023-01-10T04:34:52.616180263Z"}
```
The worker starts and outputs its authorization token as the `Worker Auth Registration Request` value. This value is also saved to a file, `auth_request_token`, in the directory defined by the `auth_storage_path` in the worker configuration file.
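You can also read the token straight from the `auth_request_token` file in a script. The sketch below simulates the worker's auth storage directory with a temporary directory and a stand-in token so it is self-contained; on a real worker, point it at the directory configured as `auth_storage_path`.

```shell
# Simulate the worker's auth storage directory. On a real host this would
# be the auth_storage_path directory, e.g. /home/myusername/worker.
auth_storage_path=$(mktemp -d)
printf '%s' 'EXAMPLE_REGISTRATION_TOKEN' > "$auth_storage_path/auth_request_token"

# Read the registration token the worker wrote at startup.
token=$(cat "$auth_storage_path/auth_request_token")
echo "$token"
```
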
After you install and start the self-managed worker, you must register it with HCP in your environment's admin console.
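As an alternative to the admin console, an administrator already authenticated to the cluster can register the worker from the Boundary CLI using worker-led registration, available in recent Boundary versions. The token placeholder below stands in for the `Worker Auth Registration Request` value from the worker's output.

```shell-session
$ boundary workers create worker-led \
    -worker-generated-auth-token=<registration-token>
```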