Dynamic host catalogs on AWS
- 20min
Dynamic updates to host catalogs distinguish Boundary from traditional access methods that rely on manual target configuration. Dynamic host catalogs enable integrations with major cloud providers for seamless onboarding of cloud tenant identities, roles, and targets.
Boundary supports automated discovery of target hosts and services for major cloud providers including Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP). Dynamic connections to any service registry ensure that hosts and host catalogs are consistently up-to-date. This critical workflow offers access-on-demand and eliminates the need to manually configure targets for dynamic, cloud-based infrastructure.
In this tutorial you will configure a dynamic host catalog using Amazon Web Services.
Dynamic host catalog overview
- Get set up
- Dynamic host catalogs background
- Set up cloud hosts
- Build a host catalog
- Verify catalog membership
Prerequisites
A Boundary binary greater than 0.18.0 in your PATH.
This tutorial assumes you can connect to an HCP Boundary cluster, a Boundary Enterprise cluster, or launch Boundary in dev mode.
An Amazon Web Services test account. This tutorial requires the creation of new cloud resources and will incur costs associated with the deployment and management of these resources.
Installing the AWS CLI provides an optional workflow for this tutorial. If you use the AWS CLI, CloudFormation must also be available within your PATH.
Installing Terraform 0.14.9 or greater provides an optional workflow for this tutorial. The binary must be available in your PATH.
Get set up
In this tutorial, you will test dynamic host catalog integrations using HCP Boundary, a Boundary Enterprise cluster, or by running a Boundary controller locally in dev mode. Select a tutorial Deployment option at the top of this page to proceed.
Dynamic host catalogs background
In a cloud operating model, infrastructure resources are highly dynamic and ephemeral. Boundary lacks an on-target agent or daemon, and cannot recognize when a host service migrates or is redeployed. Instead, Boundary relies on an external entity, such as manual configuration by an administrator or IaC (infrastructure as code) application like Terraform, to ensure host definitions route to the appropriate network location. Many other secure access solutions follow this pattern.
Dynamic host catalog plugins are an alternative way to automate the discovery and configuration of Boundary hosts and targets by delegating the host registry and their connection information to a cloud infrastructure provider. Administrators supply credentials for the catalog provider and a set of tag-based rules for discovering resources in the catalog. For example, "this catalog contains EC2 instance types in AWS’s us-west-2 region within the Marketing subscription". This model does not rely on IaC target discovery or agent-based target discovery.
Boundary uses Go-Plugin to implement a plugin model for expanding the dynamic host catalog ecosystem. Plugins enable a future ecosystem of partner and community contributed integrations across each step in the Boundary access workflow.
Host tag filtering
To maintain a dynamic host catalog, you should tag hosts in a logical way that enables sorting into host sets identifiable by filters.
For example, this tutorial deploys hosts on AWS with tags that distinguish dev and production instances. Boundary hosts will be sorted into any host catalogs and host sets you configure using these filtering attributes.
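As a conceptual sketch of how tag-based filtering sorts hosts into host sets (the tag key and values below are illustrative placeholders, not the exact tags the lab templates apply):

```shell
# Hypothetical hosts with hypothetical tags, one per line,
# in the form "<name> <tag>=<value>".
hosts='boundary-1-dev application=dev
boundary-2-dev application=dev
boundary-3-production application=prod
boundary-4-production application=prod'

# A host-set filter such as tag:application=dev matches only the dev hosts:
printf '%s\n' "$hosts" | grep 'application=dev' | cut -d' ' -f1
```

The same hosts would land in a production host set under a `tag:application=prod` filter, without any per-host configuration in Boundary.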
AWS IAM credential types
You can select from dynamic credential or static credential types for setting up access to your AWS account.
- Dynamic credentials use an IAM role.
- Static credentials use IAM user account credentials.
HashiCorp recommends using dynamic credentials when possible to configure dynamic host catalogs. Select a credential type to continue.
Note
You must configure a self-managed Boundary worker to set up dynamic credentials.
Dynamic credentials use an AWS IAM role to generate credentials. This role must be assumed by a self-managed Boundary worker. Boundary uses this worker when setting up the dynamic host catalog and when syncing hosts from AWS.
This tutorial deploys a worker as part of the lab environment.
To set up dynamic credentials for this tutorial, you will:
- Deploy the hosts and worker using the provided lab environment.
- Register the self-managed Boundary worker deployed in AWS.
- Create an AWS IAM role with the AmazonEC2ReadOnlyAccess policy attached.
- Attach the role to the EC2 instance you configured as a worker.
- Configure a Boundary dynamic host catalog.
Static credentials use an AWS IAM user's access credentials to access your AWS account. Boundary uses these credentials when setting up the dynamic host catalog and when syncing hosts from AWS.
To set up static credentials for this tutorial, you will:
- Configure an IAM user for Boundary, or gather existing user credentials
- Configure a Boundary dynamic host catalog with the IAM credentials
Set up cloud hosts
Warning
This tutorial deploys cloud machines to test host catalog plugin configuration. You are responsible for any costs incurred by following the steps in this tutorial. Recommendations for destroying the associated cloud resources are detailed in the Cleanup and teardown section.
You need an Amazon Web Services account to set up the Boundary AWS host plugin.
This tutorial enables configuration of the test hosts using the AWS CLI, Terraform, or the AWS Console UI.
You will need access to an AWS account to set up the AWS hosts plugin for Boundary. If you don't have an account, sign up for AWS. A free account is suitable for the steps outlined in this tutorial, but please note that you are responsible for any charges incurred by following the steps in this tutorial.
This tutorial sets up permissions and hosts for host catalogs using the AWS CLI and CloudFormation.
You must complete the following tasks to set up hosts using the AWS CLI:
- Deploy and tag the host set members appropriately.
- Configure user permissions for Boundary.
- Configure the appropriate IAM policy for dynamic or static credential types.
First, ensure the AWS CLI is configured with your account credentials. You will need the AWS Access Key ID, Secret Access Key, and (if needed) a Session Token.
You can set up the CLI in any of the following ways:
- Execute aws configure and pass the access values.
- Export the access values as environment variables.
- Configure the AWS credentials file.
For more information on setting up the AWS CLI to interact with AWS, check the Configuring the AWS CLI documentation.
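For instance, the environment-variable option might look like the following (the key values are placeholders, not real credentials):

```shell
# Placeholder credentials: substitute the values for your own account.
export AWS_ACCESS_KEY_ID="AKIAEXAMPLEKEYID"
export AWS_SECRET_ACCESS_KEY="example-secret-access-key"
export AWS_DEFAULT_REGION="us-east-1"

# The AWS CLI reads these variables automatically on every invocation.
echo "region set to $AWS_DEFAULT_REGION"
```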
Create hosts
This tutorial deploys tagged hosts to test the dynamic host catalog integration.
To deploy a set of pre-configured hosts, this tutorial uses a CloudFormation template to create the EC2 instances.
Clone the sample template repository.
$ git clone https://github.com/hashicorp-education/learn-boundary-cloud-host-catalogs
Navigate into the aws directory.
$ cd learn-boundary-cloud-host-catalogs/aws/
The provided aws-dynamic-hosts.json template will be used to deploy a CloudFormation resource stack that contains the hosts to be included in your Boundary host catalog.
This tutorial uses the us-east-1a availability zone to deploy the EC2 instances, but you can use any region you want. To change the region, open the aws-dynamic-hosts.json file and update the AvailabilityZone value.
Next, create a keypair to be used for the instances. Alternatively, you may open the aws-dynamic-hosts.json file and update the boundary-keypair value to match the name of an existing EC2 keypair.
$ aws ec2 create-key-pair \
--key-name boundary-keypair \
--key-type rsa \
--query "KeyMaterial" \
--output text > boundary-keypair.pem
Note
The boundary-keypair.pem file was created within your working directory. You may need access to this keypair later on, so note its location and retain the private key for the duration of the tutorial. The tutorial deletes this keypair in the Cleanup and teardown section. If you choose to keep this keypair, consider moving it into another directory like ~/.aws/. Do not check this key into source control.
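If you retain the key, it is also worth tightening its file permissions, since ssh refuses private keys that other users can read. A small guarded sketch (a no-op if the file is not present in the current directory):

```shell
# Restrict the private key to the owner only; skip silently if it is absent.
if [ -f boundary-keypair.pem ]; then
  chmod 400 boundary-keypair.pem
fi
echo "keypair permission check done"
```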
Ensure you are within the learn-boundary-cloud-host-catalogs/aws/ directory where the aws-dynamic-hosts.json file is located.
Deploy a new resource stack using the provided aws-dynamic-hosts.json template.
$ aws cloudformation create-stack \
--stack-name boundary-dynamic-hosts \
--template-body file://./aws-dynamic-hosts.json
Example output:
$ aws cloudformation create-stack \
--stack-name boundary-dynamic-hosts \
--template-body file://./aws-dynamic-hosts.json
{
"StackId": "arn:aws:cloudformation:us-east-1:157470686136:stack/boundary-dynamic-hosts/c29af940-78b6-11ec-849d-12bc7613d5b9"
}
The deployment may take a few minutes.
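Rather than polling manually, you can block until CloudFormation reports success. The sketch below is guarded so it only talks to AWS when credentials are configured:

```shell
# Wait for the stack to reach CREATE_COMPLETE (the wait command exits
# non-zero if stack creation fails).
status="skipped (no AWS credentials configured)"
if aws sts get-caller-identity >/dev/null 2>&1; then
  aws cloudformation wait stack-create-complete \
    --stack-name boundary-dynamic-hosts && status="stack CREATE_COMPLETE"
fi
echo "$status"
```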
Check that the stack was successfully created.
$ aws cloudformation list-stacks
{
"StackSummaries": [
{
"StackId": "arn:aws:cloudformation:us-east-1:157470686136:stack/boundary-dynamic-hosts/da9ebb30-78a7-11ec-bb47-0eddcb3d6bf3",
"StackName": "boundary-dynamic-hosts",
"TemplateDescription": "AWS CloudFormation template for Boundary Dynamic Hosts tutorial. Deploying this template will incur costs to your AWS account.",
"CreationTime": "2022-01-18T22:43:11.249000+00:00",
"StackStatus": "CREATE_COMPLETE",
"DriftInformation": {
"StackDriftStatus": "NOT_CHECKED"
}
}
]
}
Verify that the hosts were successfully created by printing their "Name" tag using aws ec2 describe-instances. The following command will only print running instances, and will also report the availability zone and instance ID in table format.
$ aws ec2 describe-instances \
--output table \
--filters Name=instance-state-name,Values=running Name=tag-key,Values=Name \
--query 'Reservations[*].Instances[*].{Instance:InstanceId,AZ:Placement.AvailabilityZone,Name:Tags[?Key==`Name`]|[0].Value}'
$ aws ec2 describe-instances \
--output table \
--filters Name=instance-state-name,Values=running Name=tag-key,Values=Name \
--query 'Reservations[*].Instances[*].{Instance:InstanceId,AZ:Placement.AvailabilityZone,Name:Tags[?Key==`Name`]|[0].Value}'
----------------------------------------------------------------
| DescribeInstances |
+------------+-----------------------+-------------------------+
| AZ | Instance | Name |
+------------+-----------------------+-------------------------+
| us-east-1a| i-06f5a1240a2e0c3fb | boundary-3-production |
| us-east-1a| i-0657b0de7662f9863 | boundary-1-dev |
| us-east-1a| i-0ac5cfcae66bdacf5 | boundary-2-dev |
| us-east-1a| i-0b6d3ad435c586783 | boundary-4-production |
| us-east-1a| i-0b5c5867836d3ad43 | boundary-worker |
+------------+-----------------------+-------------------------+
Configure IAM policy
Select from dynamic credential or static credential types for setting up access to your AWS account.
Dynamic credentials use an IAM role. Static credentials use IAM user account credentials.
HashiCorp recommends using dynamic credentials when possible to configure dynamic host catalogs. Select a credential type to continue.
After configuring the self-managed AWS worker, you should next configure an IAM role with the AmazonEC2ReadOnlyAccess policy attached.
Configure an IAM role
Sign in to the AWS web console.
Navigate to the Identity and Access Management (IAM) dashboard.
Create a new role:
- Click on Roles, then click the Create role button.
- Under Trusted entity type, select AWS service.
- Under the Use case dropdown, select EC2, then click Next.
- Search for the AmazonEC2ReadOnlyAccess policy, then select the checkbox beside it. Click Next.
- Name the role, such as boundary-worker-dhc. Verify that the trust policy for Select trusted entities matches the following:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "sts:AssumeRole"
      ],
      "Principal": {
        "Service": [
          "ec2.amazonaws.com"
        ]
      }
    }
  ]
}
- Click Create role.
Assign the new role to the EC2 instance you configured as a worker:
- Navigate to the EC2 console.
- In the navigation pane, click Instances.
- Select the instance configured as the Boundary worker.
- Click Actions, then Security, then Modify IAM role.
- For IAM role, select the role name you configured, such as boundary-worker-dhc.
- Click Update IAM role.
First, locate the instance ID for the self-managed Boundary worker you deployed. To learn how to find an AWS EC2 instance ID, visit the Finding an instance ID or IP address documentation.
Create a new file called boundary-worker-dhc-policy.json, and fill it with the following policy:
boundary-worker-dhc-policy.json
{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "sts:AssumeRole" ], "Principal": { "Service": [ "ec2.amazonaws.com" ] } } ] }
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"sts:AssumeRole"
],
"Principal": {
"Service": [
"ec2.amazonaws.com"
]
}
}
]
}
This trust policy allows the EC2 service to assume the role on behalf of the worker instance; it does not by itself grant any EC2 permissions. With read access to EC2 (such as the AmazonEC2ReadOnlyAccess policy) attached to the role, Boundary can list instance details, including each host's tags. This allows Boundary to sort hosts into their appropriate catalogs.
Create a new role named boundary-worker-dhc and pass it the policy document. This command assumes the JSON policy file is located in the same directory you execute the command from.
$ aws iam create-role \
--role-name boundary-worker-dhc \
--assume-role-policy-document file://./boundary-worker-dhc-policy.json
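The console workflow attaches the AmazonEC2ReadOnlyAccess policy and wraps the role in an instance profile behind the scenes; with the CLI these are likely separate calls, sketched below using the role name from this tutorial. The block is guarded so it only talks to AWS when credentials are configured:

```shell
ROLE_NAME="boundary-worker-dhc"

if aws sts get-caller-identity >/dev/null 2>&1; then
  # Give the role read access to EC2 so Boundary can list hosts and their tags.
  aws iam attach-role-policy \
    --role-name "$ROLE_NAME" \
    --policy-arn arn:aws:iam::aws:policy/AmazonEC2ReadOnlyAccess

  # associate-iam-instance-profile expects an instance profile, not a bare
  # role, so wrap the role in a profile of the same name.
  aws iam create-instance-profile --instance-profile-name "$ROLE_NAME"
  aws iam add-role-to-instance-profile \
    --instance-profile-name "$ROLE_NAME" \
    --role-name "$ROLE_NAME"
fi
echo "instance profile prepared for $ROLE_NAME"
```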
Next, attach this role to the self-managed Boundary worker instance. You must pass the instance ID for the worker (such as i-1234567890abcdef0) and the name of the instance profile, which is boundary-worker-dhc in this example.
$ aws ec2 associate-iam-instance-profile \
--instance-id i-1234567890abcdef0 \
--iam-instance-profile Name="boundary-worker-dhc"
Check your work by describing the worker instance by ID and looking for the attached instance profile.
$ aws ec2 describe-instances --instance-ids i-05317599921ed350e --query "Reservations[*].Instances[*].IamInstanceProfile"
[
[
{
"Arn": "arn:aws:iam::915080512474:instance-profile/boundary-worker-dhc",
"Id": "AIPA5KDYMQPNALVSJAAXR"
}
]
]
Typically, you configure an IAM user with the correct policies assigned to keep the dynamic host catalog up-to-date. You may also use an existing IAM user and assign it the policy, or use an IAM instance profile.
This tutorial demonstrates creating a new IAM user, but the tutorial can also be continued using root credentials. To continue using root credentials, skip to the Gather plugin details section.
Next, create a new IAM user for Boundary.
$ aws iam create-user --user-name boundary
{
"User": {
"Path": "/",
"UserName": "boundary",
"UserId": "AIDASWVU2XLZFFIP6P4IN",
"Arn": "arn:aws:iam::157470686136:user/boundary",
"CreateDate": "2022-01-12T19:35:32+00:00"
}
}
Create a new file called boundary-describe-instances-policy.json, and fill it with the following policy:
boundary-describe-instances-policy.json
{ "Version": "2012-10-17", "Statement": [ { "Action": [ "ec2:DescribeInstances" ], "Effect": "Allow", "Resource": "*" } ] }
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"ec2:DescribeInstances"
],
"Effect": "Allow",
"Resource": "*"
}
]
}
This policy allows the boundary IAM user to run the DescribeInstances API call, similar to running the aws ec2 describe-instances command using the CLI. Boundary will be able to list these details, including the host's tags. This will allow Boundary to sort hosts into their appropriate catalogs.
Next, attach this as an inline policy to the boundary user, giving it the name BoundaryDescribeInstances. This command assumes the JSON policy file is located in the same directory the command is executed from.
$ aws iam put-user-policy \
--user-name boundary \
--policy-name BoundaryDescribeInstances \
--policy-document file://./boundary-describe-instances-policy.json
Check your work by listing the policies attached to the user.
$ aws iam get-user-policy --user-name boundary --policy-name BoundaryDescribeInstances
{
"UserName": "boundary",
"PolicyName": "BoundaryDescribeInstances",
"PolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"ec2:DescribeInstances"
],
"Effect": "Allow",
"Resource": "*"
}
]
}
}
The last step is to create an access key for the user.
$ aws iam create-access-key --user-name boundary
{
"AccessKey": {
"UserName": "boundary",
"AccessKeyId": "AKIASWVU2XLZLFLIDMVW",
"Status": "Active",
"SecretAccessKey": "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",
"CreateDate": "2022-01-19T02:26:11+00:00"
}
}
Save the access details
Keep the AccessKeyId and SecretAccessKey in a safe location for the remainder of this tutorial.
To get ready to configure Boundary, export these values as environment variables.
$ export BOUNDARY_ACCESS_KEY_ID=<AWS Access Key ID>
$ export BOUNDARY_SECRET_ACCESS_KEY=<AWS Secret Access Key>
Note
If you use root credentials instead of creating an IAM user, you should still export their values as the environment variables shown above. These values are passed to Boundary when you create the host catalog later on.
The prerequisites for setting up the learning environment with Terraform are:
- Terraform 0.14.9 or greater is installed
- An active AWS account
- The AWS CLI is installed and available in your PATH.
Terraform needs to perform the following tasks to set up the lab environment, depending on whether you configure Boundary to use dynamic or static credentials for your host catalog.
Remember that dynamic credentials use an IAM role to access AWS, and require a self-managed worker deployed in AWS to access your account. Static credentials use an AWS IAM user's account credentials to access your AWS account.
For dynamic credentials, Terraform will:
- Deploy and tag the host set members appropriately.
- Configure an IAM role with the AmazonEC2ReadOnlyAccess policy attached.
- Assign the IAM role to the self-managed Boundary worker, enabling Boundary to authenticate and access the tagged hosts through the worker.

For static credentials, Terraform will:
- Deploy and tag the host set members appropriately.
- Configure user permissions for Boundary.
- Assign ec2:DescribeInstances IAM privileges to the configured IAM user.
- Generate a Secret Access Key for the IAM user, enabling Boundary to authenticate and access the tagged hosts.
Configure Terraform
First, ensure that the AWS CLI is properly configured and can contact your AWS account.
Export the account Access Key ID and Secret Access Key as environment variables for use by Terraform in the following steps. If you wish to change the region from us-east-1, change its value before export.
$ export AWS_ACCESS_KEY_ID=<Access Key ID>
$ export AWS_SECRET_ACCESS_KEY=<Secret Access Key>
$ export AWS_REGION=us-east-1
Execute the following command in your terminal to make sure your credentials are configured.
$ aws sts get-caller-identity
{
"UserId": "AIDAWV7SJJRSWAD2Q32ES",
"Account": "157470686136",
"Arn": "arn:aws:iam::157470686136:user/username"
}
If the command does not return the correct user information, ensure the correct credentials have been exported in the terminal session.
Export your Boundary Enterprise cluster address as the BOUNDARY_ADDR environment variable.
$ export BOUNDARY_ADDR="your_boundary_cluster_address"
For example:
$ export BOUNDARY_ADDR="https://my-boundary-enterprise-cluster.dev"
This tutorial assumes you are working out of the home directory ~/, but you can use any working directory you want for the following steps.
Clone the example code for this tutorial into your working directory.
$ git clone https://github.com/hashicorp-education/learn-boundary-cloud-host-catalogs
Navigate into the aws directory.
$ cd learn-boundary-cloud-host-catalogs/aws/
Examine the Terraform configuration files.
.
├── README.md
├── hosts.tf
├── main.tf
├── worker-setup.sh.tftpl
├── worker.tf
- The hosts.tf file creates the hosts that Boundary will import using the dynamic host catalog.
- The main.tf file configures the aws provider and sets up the credentials Boundary will use to authenticate to AWS.
- The worker-setup.sh.tftpl file is a Terraform template that generates a setup script for the Boundary worker.
- The worker.tf file defines the Boundary worker configuration, including its service configuration.
Note
If you choose to proceed without creating an IAM user, follow these steps.
This configuration avoids creating the boundary IAM user. Instead, you will configure Boundary later on using the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variable values.
If you use static credentials, note that the user credentials provided to Boundary must have the ec2:DescribeInstances permission for the host catalog to be configured correctly. This corresponds to the following policy:
{ "Version": "2012-10-17", "Statement": [ { "Action": [ "ec2:DescribeInstances" ], "Effect": "Allow", "Resource": "*" } ] }
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"ec2:DescribeInstances"
],
"Effect": "Allow",
"Resource": "*"
}
]
}
Generally, if the user is able to execute the aws ec2 describe-instances command, those user credentials can be used to configure Boundary.
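One way to sanity-check a set of candidate credentials, guarded so the sketch is a harmless no-op when the AWS CLI is not configured:

```shell
# If this call succeeds, the credentials carry the ec2:DescribeInstances
# permission that Boundary needs for the host catalog.
if aws ec2 describe-instances --max-items 1 >/dev/null 2>&1; then
  check="ok"
  echo "credentials can describe EC2 instances"
else
  check="missing"
  echo "cannot describe EC2 instances (no permission, or the AWS CLI is not configured)"
fi
```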
Paste the following configuration into your main.tf to proceed without a boundary IAM user. This configuration sets up the AWS Terraform provider.
# Configure the AWS provider
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 5.88"
}
}
required_version = ">= 0.14.9"
}
provider "aws" {
}
When finished, skip to the Create hosts and the worker section to proceed.
If you are setting up Boundary with dynamic credentials, open the main.tf file and uncomment the following lines:
## uncomment the following to set up dynamic credentials for the Boundary host catalog
output "boundary_iam_role_arn" {
value = aws_iam_role.boundary_worker_dhc.arn
}
output "boundary_iam_role_id" {
value = aws_iam_role.boundary_worker_dhc.unique_id
}
resource "aws_iam_role" "boundary_worker_dhc" {
name = "boundary-worker-dhc"
assume_role_policy = jsonencode({
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"sts:AssumeRole"
],
"Principal": {
"Service": [
"ec2.amazonaws.com"
]
}
}
]
})
}
resource "aws_iam_role_policy" "describe_instances" {
name = "AWSEC2DescribeInstances"
role = aws_iam_role.boundary_worker_dhc.id
policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Action = [
"ec2:DescribeInstances",
]
Effect = "Allow"
Resource = "*"
},
]
})
}
resource "aws_iam_instance_profile" "boundary_worker_dhc_profile" {
name = "boundary-worker-dhc-profile"
role = aws_iam_role.boundary_worker_dhc.name
}
Save the main.tf file.
Now initialize the Terraform plan.
$ terraform init
Initializing the backend...
Initializing provider plugins...
- Finding hashicorp/aws versions matching "~> 5.88"...
- Installing hashicorp/aws v5.88.0...
- Installed hashicorp/aws v5.88.0 (signed by HashiCorp)
Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
Create hosts and the worker
Next, you will configure and deploy the host VMs to test the dynamic host catalog integration.
Deploy the Terraform configuration using terraform apply, supplying the BOUNDARY_ADDR environment variable as a Terraform variable.
$ terraform apply -var BOUNDARY_ADDR=$BOUNDARY_ADDR --auto-approve
data.aws_ami.amazon: Reading...
data.aws_availability_zones.boundary: Reading...
data.aws_availability_zones.boundary: Read complete after 0s [id=us-east-1]
data.aws_ami.amazon: Read complete after 1s [id=ami-05d9b53b86dec19c8]
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following
symbols:
+ create
Terraform will perform the following actions:
# aws_iam_instance_profile.boundary_worker_dhc_profile will be created
+ resource "aws_iam_instance_profile" "boundary_worker_dhc_profile" {
+ arn = (known after apply)
+ create_date = (known after apply)
+ id = (known after apply)
+ name = "boundary-worker-dhc-profile"
+ name_prefix = (known after apply)
+ path = "/"
+ role = "boundary-worker-dhc"
+ tags_all = (known after apply)
+ unique_id = (known after apply)
}
# aws_iam_role.boundary_worker_dhc will be created
+ resource "aws_iam_role" "boundary_worker_dhc" {
+ arn = (known after apply)
+ assume_role_policy = jsonencode(
{
+ Statement = [
+ {
+ Action = [
+ "sts:AssumeRole",
]
+ Effect = "Allow"
+ Principal = {
+ Service = [
+ "ec2.amazonaws.com",
]
}
},
]
+ Version = "2012-10-17"
}
)
+ create_date = (known after apply)
+ force_detach_policies = false
+ id = (known after apply)
+ managed_policy_arns = (known after apply)
+ max_session_duration = 3600
+ name = "boundary-worker-dhc"
+ name_prefix = (known after apply)
+ path = "/"
+ tags_all = (known after apply)
+ unique_id = (known after apply)
+ inline_policy (known after apply)
}
...
... Truncated Output ...
...
# tls_private_key.worker_ssh_key will be created
+ resource "tls_private_key" "worker_ssh_key" {
+ algorithm = "RSA"
+ ecdsa_curve = "P224"
+ id = (known after apply)
+ private_key_openssh = (sensitive value)
+ private_key_pem = (sensitive value)
+ private_key_pem_pkcs8 = (sensitive value)
+ public_key_fingerprint_md5 = (known after apply)
+ public_key_fingerprint_sha256 = (known after apply)
+ public_key_openssh = (known after apply)
+ public_key_pem = (known after apply)
+ rsa_bits = 4096
}
Plan: 17 to add, 0 to change, 0 to destroy.
Changes to Outputs:
+ boundary_iam_role_arn = (known after apply)
+ boundary_iam_role_id = (known after apply)
+ boundary_worker_key_pair_name = "default-boundary-worker-key"
+ host_public_ips = [
+ (known after apply),
+ (known after apply),
+ (known after apply),
+ (known after apply),
]
+ worker_private_key = (sensitive value)
+ worker_public_ip = (known after apply)
tls_private_key.worker_ssh_key: Creating...
aws_iam_role.boundary_worker_dhc: Creating...
aws_vpc.boundary_hosts_vpc: Creating...
tls_private_key.worker_ssh_key: Creation complete after 1s [id=51030f86baf9d036d00fd27259a5a7f369ad19f5]
aws_key_pair.boundary_worker_key: Creating...
local_sensitive_file.worker_private_key: Creating...
local_sensitive_file.worker_private_key: Creation complete after 0s [id=084a14b463bf921363e4f3be3437fe41cfe68ffc]
aws_iam_role.boundary_worker_dhc: Creation complete after 0s [id=boundary-worker-dhc]
aws_iam_instance_profile.boundary_worker_dhc_profile: Creating...
aws_key_pair.boundary_worker_key: Creation complete after 1s [id=default-boundary-worker-key]
aws_iam_instance_profile.boundary_worker_dhc_profile: Creation complete after 6s [id=boundary-worker-dhc-profile]
aws_vpc.boundary_hosts_vpc: Still creating... [10s elapsed]
aws_vpc.boundary_hosts_vpc: Creation complete after 12s [id=vpc-00e0db90f60236480]
aws_internet_gateway.boundary_gateway: Creating...
aws_subnet.boundary_hosts_subnet: Creating...
aws_security_group.boundary_worker_outbound: Creating...
aws_security_group.boundary_ssh: Creating...
aws_internet_gateway.boundary_gateway: Creation complete after 0s [id=igw-076f83c70fbb9cdcf]
aws_route_table.boundary_hosts_public_rt: Creating...
aws_security_group.boundary_ssh: Creation complete after 1s [id=sg-01e87a4c6478123c9]
aws_route_table.boundary_hosts_public_rt: Creation complete after 1s [id=rtb-08ecfe4dd9dced030]
aws_security_group.boundary_worker_outbound: Creation complete after 2s [id=sg-0c8def8012d13fbd6]
aws_subnet.boundary_hosts_subnet: Still creating... [10s elapsed]
aws_subnet.boundary_hosts_subnet: Creation complete after 11s [id=subnet-007e151279d3a78e3]
aws_route_table_association.public_1_rt_a: Creating...
aws_instance.boundary_instance[1]: Creating...
aws_instance.boundary_instance[3]: Creating...
aws_instance.boundary_instance[2]: Creating...
aws_instance.boundary_instance[0]: Creating...
aws_instance.boundary_worker: Creating...
aws_route_table_association.public_1_rt_a: Creation complete after 0s [id=rtbassoc-0704858684baeeb63]
aws_instance.boundary_instance[1]: Still creating... [10s elapsed]
aws_instance.boundary_instance[0]: Still creating... [10s elapsed]
aws_instance.boundary_instance[3]: Still creating... [10s elapsed]
aws_instance.boundary_instance[2]: Still creating... [10s elapsed]
aws_instance.boundary_worker: Still creating... [10s elapsed]
aws_instance.boundary_instance[0]: Creation complete after 12s [id=i-03fe9ced29faa3680]
aws_instance.boundary_instance[2]: Creation complete after 12s [id=i-0edddcd2b13b91f54]
aws_instance.boundary_instance[3]: Creation complete after 12s [id=i-0aea48e2b25e73c15]
aws_instance.boundary_instance[1]: Creation complete after 12s [id=i-00d634ca16b9be758]
aws_instance.boundary_worker: Provisioning with 'file'...
aws_instance.boundary_worker: Still creating... [20s elapsed]
aws_instance.boundary_worker: Creation complete after 20s [id=i-022b0a06f636f6065]
Apply complete! Resources: 17 added, 0 changed, 0 destroyed.
Outputs:
boundary_iam_role_arn = "arn:aws:iam::807078899029:role/boundary-worker-dhc"
boundary_iam_role_id = "boundary-worker-dhc"
boundary_worker_key_pair_name = "default-boundary-worker-key"
host_public_ips = [
"100.24.125.28",
"44.200.120.73",
"98.82.24.59",
"44.200.19.100",
]
worker_private_key = <sensitive>
worker_public_ip = "3.218.141.47"
You can reference the Terraform outputs at any time by executing terraform output.
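If you need a single value, terraform output can print it directly. A sketch, assuming the output names shown in the apply above:

```shell
# Print one output; -raw emits the value without quotes, which is
# required for multi-line values such as a PEM-encoded private key
terraform output -raw worker_private_key > worker_key.pem
chmod 400 worker_key.pem

# Print every output as JSON, useful for scripting
terraform output -json
```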
If you are setting up Boundary with static credentials, open the main.tf file and uncomment the following lines:
# uncomment the following to set up static credentials for the Boundary host catalog
output "boundary_access_key_id" {
value = aws_iam_access_key.boundary.id
}
output "boundary_secret_access_key" {
value = aws_iam_access_key.boundary.secret
sensitive = true
}
resource "random_id" "aws_iam_user_name" {
prefix = "demo-${local.deployment_name}-boundary-iam-user"
byte_length = 4
}
resource "aws_iam_user" "boundary" {
name = random_id.aws_iam_user_name.dec
path = "/"
force_destroy = true
tags = {
"boundary-demo" = local.deployment_name
}
}
resource "aws_iam_access_key" "boundary" {
user = aws_iam_user.boundary.name
}
resource "aws_iam_user_policy" "BoundaryDescribeInstances" {
name = "BoundaryDescribeInstances"
user = aws_iam_user.boundary.name
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"ec2:DescribeInstances"
],
"Effect": "Allow",
"Resource": "*"
}
]
}
EOF
}
Save the main.tf file.
Now initialize the Terraform working directory.
$ terraform init
Initializing the backend...
Initializing provider plugins...
- Finding latest version of hashicorp/azurerm...
- Finding hashicorp/aws versions matching "~> 3.73"...
- Installing hashicorp/aws v5.88.0...
- Installed hashicorp/aws v5.88.0 (signed by HashiCorp)
Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
Create hosts and the worker
Next, you will configure and deploy the host VMs to test the dynamic host catalog integration.
Deploy the Terraform configuration using terraform apply, supplying the BOUNDARY_ADDR environment variable as a Terraform variable.
$ terraform apply -var BOUNDARY_ADDR=$BOUNDARY_ADDR --auto-approve
data.aws_ami.amazon: Reading...
data.aws_caller_identity.current: Reading...
data.aws_availability_zones.boundary: Reading...
data.aws_caller_identity.current: Read complete after 0s [id=807078899029]
data.aws_availability_zones.boundary: Read complete after 0s [id=us-east-1]
data.aws_ami.amazon: Read complete after 1s [id=ami-089146b56f5af20cf]
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following
symbols:
+ create
Terraform will perform the following actions:
# aws_iam_access_key.boundary will be created
+ resource "aws_iam_access_key" "boundary" {
+ create_date = (known after apply)
+ encrypted_secret = (known after apply)
+ encrypted_ses_smtp_password_v4 = (known after apply)
+ id = (known after apply)
+ key_fingerprint = (known after apply)
+ secret = (sensitive value)
+ ses_smtp_password_v4 = (sensitive value)
+ status = "Active"
+ user = (known after apply)
}
# aws_iam_user.boundary will be created
+ resource "aws_iam_user" "boundary" {
+ arn = (known after apply)
+ force_destroy = true
+ id = (known after apply)
+ name = (known after apply)
+ path = "/"
+ permissions_boundary = "arn:aws:iam::807078899029:policy/DemoUser"
+ tags = {
+ "boundary-demo" = "rbeck@hashicorp.com"
}
+ tags_all = {
+ "boundary-demo" = "rbeck@hashicorp.com"
}
+ unique_id = (known after apply)
}
...
... Truncated Output ...
...
# random_id.aws_iam_user_name will be created
+ resource "random_id" "aws_iam_user_name" {
+ b64_std = (known after apply)
+ b64_url = (known after apply)
+ byte_length = 4
+ dec = (known after apply)
+ hex = (known after apply)
+ id = (known after apply)
+ prefix = "demo-rbeck@hashicorp.com-boundary-iam-user"
}
# tls_private_key.worker_ssh_key will be created
+ resource "tls_private_key" "worker_ssh_key" {
+ algorithm = "RSA"
+ ecdsa_curve = "P224"
+ id = (known after apply)
+ private_key_openssh = (sensitive value)
+ private_key_pem = (sensitive value)
+ private_key_pem_pkcs8 = (sensitive value)
+ public_key_fingerprint_md5 = (known after apply)
+ public_key_fingerprint_sha256 = (known after apply)
+ public_key_openssh = (known after apply)
+ public_key_pem = (known after apply)
+ rsa_bits = 4096
}
Plan: 19 to add, 0 to change, 0 to destroy.
Changes to Outputs:
+ boundary_access_key_id = (known after apply)
+ boundary_secret_access_key = (sensitive value)
+ boundary_worker_key_pair_name = "default-boundary-worker-key"
+ host_public_ips = [
+ (known after apply),
+ (known after apply),
+ (known after apply),
+ (known after apply),
]
+ worker_private_key = (sensitive value)
+ worker_public_ip = (known after apply)
tls_private_key.worker_ssh_key: Creating...
random_id.aws_iam_user_name: Creating...
random_id.aws_iam_user_name: Creation complete after 0s [id=te_JoQ]
aws_vpc.boundary_hosts_vpc: Creating...
aws_iam_user.boundary: Creating...
aws_iam_user.boundary: Creation complete after 1s [id=demo-rbeck@hashicorp.com-boundary-iam-user3052390817]
aws_iam_access_key.boundary: Creating...
aws_iam_user_policy.BoundaryDescribeInstances: Creating...
aws_iam_access_key.boundary: Creation complete after 0s [id=AKIA3X2NGVVKTIYPIE54]
aws_iam_user_policy.BoundaryDescribeInstances: Creation complete after 0s [id=demo-rbeck@hashicorp.com-boundary-iam-user3052390817:BoundaryDescribeInstances]
tls_private_key.worker_ssh_key: Creation complete after 1s [id=f160e34aac942d96078f746d04787c3c7bc54073]
aws_key_pair.boundary_worker_key: Creating...
local_sensitive_file.worker_private_key: Creating...
local_sensitive_file.worker_private_key: Creation complete after 0s [id=50ddefd0159ea6f1322816092866f60a5b590a7c]
aws_key_pair.boundary_worker_key: Creation complete after 1s [id=default-boundary-worker-key]
aws_vpc.boundary_hosts_vpc: Still creating... [10s elapsed]
aws_vpc.boundary_hosts_vpc: Creation complete after 12s [id=vpc-02666caaf66fa7679]
aws_internet_gateway.boundary_gateway: Creating...
aws_subnet.boundary_hosts_subnet: Creating...
aws_security_group.boundary_ssh: Creating...
aws_security_group.boundary_worker_outbound: Creating...
aws_internet_gateway.boundary_gateway: Creation complete after 1s [id=igw-01f626c009ab13961]
aws_route_table.boundary_hosts_public_rt: Creating...
aws_route_table.boundary_hosts_public_rt: Creation complete after 1s [id=rtb-0198060b7c5966b52]
aws_security_group.boundary_ssh: Creation complete after 2s [id=sg-052f92dc30611d8f4]
aws_security_group.boundary_worker_outbound: Creation complete after 2s [id=sg-0729f481da3b90581]
aws_subnet.boundary_hosts_subnet: Still creating... [10s elapsed]
aws_subnet.boundary_hosts_subnet: Creation complete after 11s [id=subnet-041c9e45bc93706be]
aws_route_table_association.public_1_rt_a: Creating...
aws_instance.boundary_instance[1]: Creating...
aws_instance.boundary_instance[2]: Creating...
aws_instance.boundary_worker: Creating...
aws_instance.boundary_instance[0]: Creating...
aws_instance.boundary_instance[3]: Creating...
aws_route_table_association.public_1_rt_a: Creation complete after 1s [id=rtbassoc-05f3585f2de68ab00]
aws_instance.boundary_instance[1]: Still creating... [10s elapsed]
aws_instance.boundary_instance[2]: Still creating... [10s elapsed]
aws_instance.boundary_instance[0]: Still creating... [10s elapsed]
aws_instance.boundary_worker: Still creating... [10s elapsed]
aws_instance.boundary_instance[3]: Still creating... [10s elapsed]
aws_instance.boundary_instance[1]: Creation complete after 13s [id=i-0506b1b3533660163]
aws_instance.boundary_instance[2]: Creation complete after 13s [id=i-041a2528932973354]
aws_instance.boundary_instance[0]: Creation complete after 12s [id=i-058dc1d8bb15b5da0]
aws_instance.boundary_instance[3]: Creation complete after 13s [id=i-05584aa48e2684630]
aws_instance.boundary_worker: Provisioning with 'file'...
aws_instance.boundary_worker: Still creating... [20s elapsed]
aws_instance.boundary_worker: Creation complete after 25s [id=i-0cd1da09f5007aed8]
Apply complete! Resources: 19 added, 0 changed, 0 destroyed.
Outputs:
boundary_access_key_id = "AKIA3X2NGVVKTIYPIE54"
boundary_secret_access_key = <sensitive>
boundary_worker_key_pair_name = "default-boundary-worker-key"
host_public_ips = [
"18.209.212.169",
"3.235.52.123",
"44.200.53.196",
"44.222.153.158",
]
worker_private_key = <sensitive>
worker_public_ip = "44.204.134.35"
You can reference the Terraform outputs at any time by executing terraform output.
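With the static-credential outputs in place, you can retrieve the values you will need when creating the host catalog. A sketch, assuming the output names shown above:

```shell
# Retrieve the IAM access key pair for the Boundary host catalog.
# -raw prints sensitive values without quotes; take care not to
# write these into shell history or logs.
terraform output -raw boundary_access_key_id
terraform output -raw boundary_secret_access_key
```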
This tutorial sets up permissions and hosts for host catalogs using the AWS Console and CloudFormation.
You must complete the following tasks to set up hosts using the AWS Console:
- Deploy and tag the host set members appropriately.
- Configure user permissions for Boundary.
- Configure the appropriate IAM policy for dynamic or static credential types.
First, ensure that the AWS CLI is correctly configured with your account credentials. You will need the AWS Access Key, Secret Access Key, and (if needed) a Session Token.
You can set up the CLI in any of the following ways:
- Execute aws configure and pass the access values.
- Export the access values as environment variables.
- Configure the AWS credentials file.
For more information on setting up the AWS CLI to interact with AWS, check the Configuring the AWS CLI documentation.
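For example, the environment-variable approach looks like the following. The values shown are the placeholder keys from the AWS documentation; substitute your own credentials:

```shell
# Export AWS credentials for the current shell session.
# AWS_SESSION_TOKEN is only needed for temporary credentials.
export AWS_ACCESS_KEY_ID="AKIAIOSFODNN7EXAMPLE"
export AWS_SECRET_ACCESS_KEY="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
export AWS_DEFAULT_REGION="us-east-1"
```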
Create hosts
This tutorial deploys tagged hosts to test the dynamic host catalog integration.
A CloudFormation template file is available for use with AWS to deploy a set of pre-configured hosts. You should use CloudFormation to deploy the EC2 instances in this tutorial.
Clone the sample template repository.
$ git clone https://github.com/hashicorp-education/learn-boundary-cloud-host-catalogs
$ git clone https://github.com/hashicorp-education/learn-boundary-cloud-host-catalogs
Navigate into the aws
directory.
$ cd learn-boundary-cloud-host-catalogs/aws/
$ cd learn-boundary-cloud-host-catalogs/aws/
The provided aws-dynamic-hosts.json
template deploys a
CloudFormation resource stack that contains the hosts to be included in our
Boundary host catalog.
This tutorial uses the us-east-1a availability zone to deploy the EC2 instances, but you can use any zone you want. To change it, open the aws-dynamic-hosts.json file and update the AvailabilityZone value.
Next, create a keypair to use for the instances. Alternatively, you can open the aws-dynamic-hosts.json file and update boundary-keypair to match the name of an existing EC2 keypair. You may need access to the private key later in this tutorial.
Log into the AWS Console and navigate to the EC2
portal. Ensure that the us-east-1
region
is selected, unless you updated the region in the aws-dynamic-hosts.json
file.
Next, select Key Pairs from the Resources list, or from the left-hand navigation bar under Network & Security.
Click Create key pair. Name the keypair boundary-keypair, leaving the type set to RSA and the file format set to .pem. Click Create key pair when finished.
The keypair will then be downloaded to your local machine. Retain this key to use later.
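If you prefer the CLI to the console, the keypair can be created and saved locally in one step. A sketch of an equivalent to the console flow above:

```shell
# Create an RSA keypair named boundary-keypair and save the
# private key in PEM format
aws ec2 create-key-pair \
  --key-name boundary-keypair \
  --key-type rsa \
  --key-format pem \
  --query 'KeyMaterial' \
  --output text > boundary-keypair.pem

# SSH refuses keys with permissive file modes
chmod 400 boundary-keypair.pem
```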
Note
Keep the boundary-keypair.pem file safe. You may need access to this keypair later on, so note its location and retain the private key for the duration of the tutorial. The tutorial deletes this keypair in the Cleanup and teardown section. If you choose to keep this keypair, consider moving it into another directory like ~/.aws/. Do not check this key into source control.
Next, deploy the sample hosts using CloudFormation. You will need access to the location of the learn-boundary-cloud-host-catalogs
repository you downloaded earlier.
Navigate to the CloudFormation Portal by using the Search field.
Select Create stack.
From the Create Stack form, select Upload a template file from the
Specify template section. Then click the Choose file button. Select the
provided learn-boundary-cloud-host-catalogs/aws/aws-dynamic-hosts.json
template file, then click Next.
Under the Specify stack details form, enter the Stack name boundary-dynamic-hosts. Under KeyName, select the boundary-keypair created earlier. You can leave SSHLocation as 0.0.0.0/0.
Leave the details on the Configure stack options page as-is, and click Next.
Confirm the stack details on the Review boundary-dynamic-hosts page, then scroll down and click Create stack.
The boundary-dynamic-hosts page will be displayed, with a list of Events.
After a couple of minutes, press the Refresh button (⟳) until the Status shows CREATE_COMPLETE for each EC2Instance.
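The same stack can also be deployed from the AWS CLI instead of the console. A sketch, assuming you are still in the learn-boundary-cloud-host-catalogs/aws/ directory:

```shell
# Create the stack from the local template, passing the keypair name
aws cloudformation create-stack \
  --stack-name boundary-dynamic-hosts \
  --template-body file://aws-dynamic-hosts.json \
  --parameters ParameterKey=KeyName,ParameterValue=boundary-keypair

# Block until the stack reaches CREATE_COMPLETE, then confirm its status
aws cloudformation wait stack-create-complete --stack-name boundary-dynamic-hosts
aws cloudformation describe-stacks \
  --stack-name boundary-dynamic-hosts \
  --query 'Stacks[0].StackStatus' \
  --output text
```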
Configure IAM policy
Select a dynamic or static credential type for setting up access to your AWS account.
Dynamic credentials are configured using an IAM role, while static credentials use IAM user account credentials.
HashiCorp recommends using dynamic credentials to configure dynamic host catalogs whenever possible. Select a credential type to continue.
After configuring the self-managed AWS worker, you should next configure an IAM role with the AmazonEC2ReadOnlyAccess policy attached.
Configure an IAM role
Sign in to the AWS web console.
Navigate to the Identity and Access Management (IAM) dashboard.
Create a new role:
- Click Roles, then click the Create role button.
- Under Trusted entity type, select AWS service.
- Under Use case, select the EC2 service, then select the EC2 use case option. Click Next.
- Search for the
AmazonEC2ReadOnlyAccess
policy, then select the checkbox beside it. Click Next. - Name the role, such as
boundary-worker-dhc
. Verify that the trust policy matches the following:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "sts:AssumeRole"
      ],
      "Principal": {
        "Service": [
          "ec2.amazonaws.com"
        ]
      }
    }
  ]
}
- Click Create role.
Assign the new role to the EC2 instance you configured as a worker:
- Navigate to the EC2 console.
- In the navigation pane, click Instances.
- Select the instance configured as the Boundary worker.
- Click Actions, then Security, then Modify IAM role.
- For IAM role, select the role name you configured, such as
boundary-worker-dhc
. - Click Update IAM role.
First, locate the instance ID for the self-managed Boundary worker you deployed. To learn how to find an AWS EC2 instance ID, visit the Finding an instance ID or IP address documentation.
Create a new file called boundary-worker-dhc-policy.json
, and fill it
with the following policy:
boundary-worker-dhc-policy.json
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"sts:AssumeRole"
],
"Principal": {
"Service": [
"ec2.amazonaws.com"
]
}
}
]
}
This trust policy allows the EC2 service principal to assume the role, so the instance you attach the role to can use its permissions. With EC2 read-only permissions attached to the role, Boundary can list instance details, including each host's tags, the same information returned by the aws ec2 describe-instances
CLI command. This allows Boundary to sort hosts into their appropriate catalogs.
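As a quick sanity check before creating the role, you can confirm the policy file names the EC2 service principal and the AssumeRole action (a sketch; assumes the file from the previous step is in the current directory):

```shell
# Verify the trust policy grants sts:AssumeRole to the EC2 service principal.
if grep -q '"ec2.amazonaws.com"' boundary-worker-dhc-policy.json 2>/dev/null \
   && grep -q '"sts:AssumeRole"' boundary-worker-dhc-policy.json 2>/dev/null; then
  echo "trust policy targets the EC2 service"
else
  echo "principal or action missing - review the policy file"
fi
```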
Create a new role named boundary-worker-dhc
and pass it the policy document. This command assumes the json policy file is located in the same directory you execute the command from.
$ aws iam create-role \
--role-name boundary-worker-dhc \
--assume-role-policy-document file://./boundary-worker-dhc-policy.json
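Note that create-role defines only who may assume the role. If the read-only permissions policy and an instance profile for this role do not already exist in your account, you may also need the following standard AWS CLI calls before the association in the next step succeeds (a hedged sketch; the role and profile names reuse this tutorial's boundary-worker-dhc example):

```
$ aws iam attach-role-policy \
   --role-name boundary-worker-dhc \
   --policy-arn arn:aws:iam::aws:policy/AmazonEC2ReadOnlyAccess

$ aws iam create-instance-profile \
   --instance-profile-name boundary-worker-dhc

$ aws iam add-role-to-instance-profile \
   --instance-profile-name boundary-worker-dhc \
   --role-name boundary-worker-dhc
```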
Next, attach this role to the self-managed Boundary worker instance. You must pass the instance ID for the worker (such as i-1234567890abcdef0
) and the name of the role, which is boundary-worker-dhc
in this example.
$ aws ec2 associate-iam-instance-profile \
--instance-id i-1234567890abcdef0 \
--iam-instance-profile Name="boundary-worker-dhc"
Check your work by describing the worker instance by ID and looking for the attached instance profile.
$ aws ec2 describe-instances --instance-ids i-05317599921ed350e --query "Reservations[*].Instances[*].IamInstanceProfile"
[
[
{
"Arn": "arn:aws:iam::915080512474:instance-profile/boundary-worker-dhc",
"Id": "AIPA5KDYMQPNALVSJAAXR"
}
]
]
Typically, you configure an IAM user with the correct policies assigned to keep the dynamic host catalog up to date. You can also use an existing IAM user and assign it the policy, or use an IAM instance profile.
This tutorial demonstrates creating a new IAM user, but you can also complete it using root credentials. To continue using root credentials, skip to the Gather plugin details section.
Next, create a new IAM user for Boundary.
$ aws iam create-user --user-name boundary
{
"User": {
"Path": "/",
"UserName": "boundary",
"UserId": "AIDASWVU2XLZFFIP6P4IN",
"Arn": "arn:aws:iam::157470686136:user/boundary",
"CreateDate": "2022-01-12T19:35:32+00:00"
}
}
Create a new file called boundary-describe-instances-policy.json
, and fill it
with the following policy:
boundary-describe-instances-policy.json
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"ec2:DescribeInstances"
],
"Effect": "Allow",
"Resource": "*"
}
]
}
This policy allows the boundary
IAM user to run the DescribeInstances
API
call, similar to running the aws ec2 describe-instances
command using the CLI.
Boundary will be able to list these details, including the host's tags. This
will allow Boundary to sort hosts into their appropriate catalogs.
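Before attaching the policy, you can optionally confirm the file is well-formed JSON (a sketch using Python's standard json.tool module, assuming python3 is installed):

```shell
# Exit status 0 from json.tool means the policy file parsed as valid JSON.
if python3 -m json.tool boundary-describe-instances-policy.json > /dev/null 2>&1; then
  echo "valid JSON"
else
  echo "invalid or missing policy file"
fi
```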
Next, attach this as an inline policy to the boundary
user, giving it the name
BoundaryDescribeInstances
. This command assumes the JSON policy file is located
in the same directory the command is executed from.
$ aws iam put-user-policy \
--user-name boundary \
--policy-name BoundaryDescribeInstances \
--policy-document file://./boundary-describe-instances-policy.json
Check your work by listing the policies attached to the user.
$ aws iam get-user-policy --user-name boundary --policy-name BoundaryDescribeInstances
{
"UserName": "boundary",
"PolicyName": "BoundaryDescribeInstances",
"PolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"ec2:DescribeInstances"
],
"Effect": "Allow",
"Resource": "*"
}
]
}
}
The last step is to create an access key for the user.
$ aws iam create-access-key --user-name boundary
{
"AccessKey": {
"UserName": "boundary",
"AccessKeyId": "AKIASWVU2XLZLFLIDMVW",
"Status": "Active",
"SecretAccessKey": "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",
"CreateDate": "2022-01-19T02:26:11+00:00"
}
}
Keep the AccessKeyId
and SecretAccessKey
in a safe location for the
remainder of this tutorial.
To get ready to configure Boundary, export these values as environment variables.
$ export BOUNDARY_ACCESS_KEY_ID=<AWS Access Key ID>
$ export BOUNDARY_SECRET_ACCESS_KEY=<AWS Secret Access Key>
Note
This tutorial disables credential rotation for the host catalog. To learn more about the required IAM permissions for enabling credential rotation, refer to the boundary-plugin-host-aws documentation.
Configure a Boundary worker
To configure a self-managed worker, you should gather the following from Boundary:
- Cluster URL (Boundary address)
- Auth method ID (from the Admin Console)
- Admin login name and password
Visit the Getting Started on HCP tutorial if you need to locate any of these values.
The worker is deployed within your AWS account with access to the hosts you want to sync with Boundary.
This tutorial deploys a self-managed worker using Amazon Linux, but you can deploy a worker using a Linux, macOS, Windows, BSD, or Solaris instance.
Warning
For the purposes of this tutorial the security group policy for the AWS worker instance accepts incoming TCP connections on port 9202 to allow Boundary client connections. For HCP and Enterprise users, multi-hop sessions can also provide access to network resources without allowing inbound connections to workers.
Locate the worker auth token
You must register the worker with Boundary to route traffic through it and access your hosts.
When the worker starts, it outputs its authorization request as the Worker Auth Registration Request. This value is also saved to a file, auth_request_token
, within the directory defined by auth_storage_path
in the worker configuration file.
First, locate the public IP address and private key of your self-managed worker.
Use the describe-instances
command to locate the instance's public IP address:
$ aws ec2 describe-instances --filters "Name=tag:Name,Values=boundary-worker" --query 'Reservations[*].Instances[*].PublicIpAddress' --output text
32.218.141.47
The private key for the worker instance should match the keypair you created or supplied in the aws-dynamic-hosts.yaml
config file for CloudFormation. Locate your copy of the boundary-keypair.pem
private key to log into the worker instance.
Check the Terraform outputs to locate the worker's public IP address:
$ terraform output worker_public_ip
"32.218.141.47"
The worker_private_key.pem
file contains the private key for the worker instance.
$ ls ~/learn-boundary-cloud-host-catalogs/aws/terraform/
README.md main.tf terraform.tfstate.backup worker.tf
hosts.tf terraform.tfstate worker-setup.sh.tftpl worker_private_key.pem
Open the AWS EC2 console in your web browser and navigate to the Instances page.
Click on Instances to view your running instances.
Select the boundary-worker
instance to display its Details panel.
Click the copy button beside the Public IPv4 address (such as 32.218.141.47
).
The private key for the worker instance should match the keypair you created or supplied in the aws-dynamic-hosts.yaml
config file for CloudFormation. Locate your copy of the boundary-keypair.pem
private key to log into the worker instance.
Substitute the worker public IP address and path to your worker's private key in the following command. This will print the auth_request_token
by fetching its value from the worker using its SSH credentials:
$ ssh ec2-user@<WORKER_PUBLIC_IP> -i worker_private_key.pem "cat /boundary-worker/config/worker_auth_token"
The authenticity of host '3.218.141.47 (3.218.141.47)' can't be established.
ED25519 key fingerprint is SHA256:07W+OOT7+Klh3EBpDfzWmZd9jEm7zCUoRzCztuKJlwo.
This key is not known by any other names.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '3.218.141.47' (ED25519) to the list of known hosts.
GzusqckarbczHoLGQ4UA25uSR5BjuYpKpJcLt9zu2StdyMoiyRJwkZXTHZAwk2JLFfVkMy3qcBDEqMsdMYCYX3DJafMc6za9R6N9nSZcPyerGWD9wXs71CY7VtBbtxnFBsnGJHK562XcDexZHT5M8GqA7W2j7cJyqVHwSBgnSddqi59HEVG8oMpyJCBETT3FNrMyTCkw22tL5FMpbX6b8t4xTkLZNB8iwscCJnaQ7BLMUfcpbMXGLahtNvk3jtZWJMyeQEn8ym51ewmxNwvtvZahNFzemMrYSQTB3aXqu
Note the Worker Auth Registration Request:
value in the output. You can also find this value in the /boundary/auth_request_token
file on the worker. Copy this value.
Register the worker
Authenticate to the Boundary Admin UI as the admin user.
Enter the admin username and password and click Authenticate.
Once logged in, navigate to the Workers page.
Click New.
You can use the new workers page to construct the contents of the
worker.hcl
file.
Do not fill in any of the worker fields.
Providing the following details will construct the worker config file contents for you:
- Worker public address
- Config file path
- Worker tags
The instructions on this page provide details for installing the Boundary Enterprise binary and deploying the constructed config file.
Because the worker has already been deployed, only the Worker Auth Registration Request key needs to be provided on this page.
Scroll down to the bottom of the New Worker page and paste the Worker Auth Registration Request key you copied earlier.
Click Register Worker.
Click Done and notice the new worker on the Workers page.
Open a terminal session on your local machine.
Ensure that the BOUNDARY_ADDR
and BOUNDARY_AUTH_METHOD_ID
environment
variables are set.
$ export BOUNDARY_ADDR="https://c3a7a20a-f663-40f3-a8e3-1b2f69b36254.boundary.hashicorp.cloud"
$ export BOUNDARY_AUTH_METHOD_ID="ampw_KfLAjMS2CG"
Log into the CLI as the admin user, providing the admin login name and admin password when prompted.
$ boundary authenticate
Please enter the login name (it will be hidden):
Please enter the password (it will be hidden):
Authentication information:
Account ID: acctpw_VOeNSFX8pQ
Auth Method ID: ampw_ZbB6UXpW3B
Expiration Time: Thu, 15 Aug 2023 12:35:32 MST
User ID: u_ogz79sV4sT
The token was successfully stored in the chosen keyring and is not displayed here.
Next, export the Worker Auth Request Token value as an environment variable.
$ export WORKER_TOKEN="<Worker Auth Registration Request Value>"
The token is used to issue a create worker request that authorizes the worker to Boundary and makes it available.
Create a new worker:
$ boundary workers create worker-led -worker-generated-auth-token=$WORKER_TOKEN
Worker information:
Active Connection Count: 0
Created Time: Mon, 12 Aug 2024 19:40:57 MDT
ID: w_IPfR7jBVri
Local Storage State: unknown
Type: pki
Updated Time: Mon, 12 Aug 2024 19:40:57 MDT
Version: 1
Scope:
ID: global
Name: global
Type: global
Authorized Actions:
update
delete
add-worker-tags
set-worker-tags
remove-worker-tags
no-op
read
Troubleshooting missing worker details
The worker's Tags should be updated on the Workers page within about a minute.
If these tags are missing, click the Refresh button and check again.
If they are still missing, locate your worker's public IP address, then restart the worker service:
$ ssh ec2-user@<WORKER_PUBLIC_IP_ADDRESS> -i worker_private_key.pem "sudo systemctl restart boundary-worker.service"
Then click the Refresh button again and verify that the tags exist. Clicking on the worker's ID will display the worker's details page, which also reports when the worker was last seen.
After gathering the AWS Access Key ID and Secret Access Key you will use for Boundary, you can move on and set up the dynamic host catalog.
Host catalog plugins
For Boundary, the process for creating a dynamic host catalog has two steps:
- Create a plugin-type host catalog
- Create a host set that defines membership using filters
A plugin-type host catalog is created using the cloud provider's credential and region details, and the host set is then defined using a filter that selects hosts for membership based on the tags you defined when setting up the hosts.
Host set filter expressions are defined by the plugin provider, in this case
AWS. The AWS plugin uses simple filter queries to specify tags
associated with hosts based on tag:Name=Value
.
For example, a host set filter that selects all hosts tagged with
"service-type": "database"
is written as:
tag:service-type=database
Resources within AWS can generally be filtered by tag names and values, and filters can be designed to use either/or selectors for tag values. This process is described in the Boundary AWS Host Plugin documentation.
To learn more about AWS filters for listing resources, visit the
describe-instances
CLI documentation
page
and the Describe Instances AWS EC2 API
docs.
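The plugin's tag:Key=Value form maps directly onto the AWS CLI's Name=tag:Key,Values=Value filter form, which is useful for previewing which hosts a host-set filter would match. A sketch of the conversion (the filter value is an example):

```shell
# Convert a Boundary host-set filter into the equivalent AWS CLI --filters string.
bfilter='tag:service-type=database'           # Boundary host-set filter form
key=${bfilter#tag:}; key=${key%%=*}           # tag key   -> service-type
value=${bfilter#*=}                           # tag value -> database
awsfilter="Name=tag:${key},Values=${value}"   # AWS CLI --filters form
echo "$awsfilter"
```

The resulting string can be passed to aws ec2 describe-instances --filters to list the instances that the host-set filter would select.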
Build a host catalog
With the cloud provider details gathered, you can create a plugin host catalog that will contain the host sets for the database and application filters.
Create a host catalog plugin
Authenticate to the Boundary Admin UI as the admin user.
Once logged in, select the org and project you want to create a dynamic host catalog for.
Navigate to the Host Catalogs page. Click New Host Catalog.
Select Dynamic for the host catalog type. Select from the static or dynamic credential type tabs to learn how you should fill out the new catalog form.
Complete the following fields:
Name:
AWS Catalog
Description:
AWS host catalog
Type:
Dynamic
Provider:
AWS
AWS Region:
us-east-1
(or other appropriate region)Credential type: (Required) Select the type of credential you want to use to authenticate to the host catalog. The required fields for configuring the host catalog vary depending on whether you configure static or dynamic credentials:
- Use an access key (Static Credentials): Authenticates to the host catalog using an access key that you generate in AWS.
- Use Assume Role (Dynamic Credentials): Authenticates to the host catalog using credentials that AWS
AssumeRole
generates.
- Role ARN:
YOUR_IAM_ROLE_ARN
- Role external ID:
YOUR_IAM_ROLE_EXTERNAL_ID
- Worker Filter:
"aws" in "/tags/cloud"
- Disable credential rotation:
true
Credential rotation must be disabled when you use dynamic credentials.
- Access Key ID:
YOUR_IAM_ACCESS_KEY_ID
- Secret Access Key:
YOUR_IAM_SECRET_ACCESS_KEY
- Worker Filter:
"aws" in "/tags/cloud"
When you create a host catalog, it's important that the worker filter you define matches a filter expression for your AWS worker. In the Deploy a worker section, you deployed a worker with the following tags:
tags {
"Name":"boundary-worker","service-type":"worker","cloud":"aws"
}
When you configure the host catalog, Boundary will use any available worker to attempt to refresh the catalog, unless you specify which worker should be used.
An appropriate filter expression to select this worker can select its tags:
"aws" in "/tags/cloud"
You can also select the correct worker using other filter expressions. To learn more, refer to the Worker tags documentation.
Gather plugin details
To set up a dynamic host catalog using the Boundary AWS hosts plugin, you can pass the following details to Boundary for the role that has access to the AssumeRole
API:
- AWS Role ARN
- AWS Role External ID
- AWS Role Session Name
- AWS Role Tags
In this example we will use the role ARN and role external ID. These values need to be available as environment variables within your terminal session.
Gather the role ARN and role external ID from AWS.
$ aws ec2 describe-instances --instance-ids i-05317599921ed350e --query "Reservations[*].Instances[*].IamInstanceProfile"
[
[
{
"Arn": "arn:aws:iam::915080512474:instance-profile/boundary-worker-dhc",
"Id": "AIPA5KDYMQPNALVSJAAXR"
}
]
]
Set these as environment variables within your shell session.
$ export AWS_ROLE_ARN="<AWS Role ARN Value>"
$ export AWS_ROLE_ID="<AWS Role External ID Value>"
Check that the values are set correctly.
$ echo $AWS_ROLE_ARN; echo $AWS_ROLE_ID
arn:aws:iam::915080512474:role/boundary-worker-dhc
boundary-worker-dhc
Authenticate to Boundary as the admin user.
$ boundary authenticate
Please enter the login name (it will be hidden):
Please enter the password (it will be hidden):
Authentication information:
Account ID: acctpw_VOeNSFX8pQ
Auth Method ID: ampw_wxzojlKJLN
Expiration Time: Mon, 13 Feb 2023 12:35:32 MST
User ID: u_1vUkf5fPs9
The token was successfully stored in the chosen keyring and is not displayed here.
When you create a host catalog, it's important that the worker filter you define matches a filter expression for your AWS worker. In the Deploy a worker section, you deployed a worker with the following tags:
tags {
"Name":"boundary-worker","service-type":"worker","cloud":"aws"
}
When you configure the host catalog, Boundary will use any available worker to attempt to refresh the catalog, unless you specify which worker should be used.
An appropriate filter expression to select this worker can select its tags:
"aws" in "/tags/cloud"
You can also select the correct worker using other filter expressions. To learn more, refer to the Worker tags documentation.
Next, create a new plugin-type host catalog with a -plugin-name
of aws
,
providing the role ARN and role external ID using the -secret
flag. These values should map to the environment variables defined above. Additionally, ensure that you set the disable_credential_rotation=true
and region
attributes using the -attr
flag, and set the worker filter using the -worker-filter
flag.
$ boundary host-catalogs create plugin \
-scope-id $BOUNDARY_PROJECT_ID \
-plugin-name aws \
-worker-filter '"aws" in "/tags/cloud"' \
-attr disable_credential_rotation=true \
-attr region=$AWS_REGION \
-secret role_arn=env://AWS_ROLE_ARN \
-secret role_external_id=env://AWS_ROLE_ID
Command flags:
-plugin-name
: This corresponds to the host catalog plugin's name, such asazure
oraws
.disable_credential_rotation
: This tutorial uses a static secret by setting this value totrue
.worker_filter
: A boolean expression to filter which workers can handle dynamic host catalog commands for this host catalog. This should match a valid filter expression for the self-managed worker deployed in AWS.region
: The region to configure the host catalog for. All host sets in this catalog will be configured for this region.role_arn
: The AWS role ARN used for AssumeRole authentication. If you provide arole_arn
value, you must also setdisable_credential_rotation
totrue
.role_external_id
: The external ID that you configured for the AssumeRole provider.
Note
Although credentials are stored encrypted within Boundary, by
default this plugin attempts to rotate credentials supplied through the
secrets
object during a create or update call to the host catalog resource.
You should disable credential rotation when configuring a host catalog plugin with dynamic credentials.
Sample output:
$ boundary host-catalogs create plugin \
-scope-id p_1234567890 \
-plugin-name aws \
-worker-filter '"aws" in "/tags/cloud"' \
-attr disable_credential_rotation=true \
-attr region=$AWS_REGION \
-secret role_arn=env://AWS_ROLE_ARN \
-secret role_external_id=env://AWS_ROLE_ID
Host Catalog information:
Created Time: Tue, 18 Jul 2023 10:08:12 MDT
ID: hcplg_N9p1Woaq8l
Plugin ID: pl_1PRTDb78iY
Secrets HMAC: 3AcXDqAEALitc7WKM2KUw8dSUVEyYG2391x1DZZCS32z
Type: plugin
Updated Time: Tue, 18 Jul 2023 10:08:12 MDT
Version: 1
Scope:
ID: p_1234567890
Name: Generated project scope
Parent Scope ID: o_1234567890
Type: project
Plugin:
ID: pl_1PRTDb78iY
Name: aws
Attributes:
disable_credential_rotation: true
region: us-east-1
Authorized Actions:
no-op
read
update
delete
Authorized Actions on Host Catalog's Collections:
host-sets:
create
list
hosts:
list
Copy the host catalog ID from the output (hcplg_N9p1Woaq8l
in this example) and
store it in the HOST_CATALOG_ID
environment variable.
$ export HOST_CATALOG_ID=hcplg_N9p1Woaq8l
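You can optionally confirm the catalog by reading it back with the ID you just exported (a sketch; this uses the authenticated CLI session from earlier):

```
$ boundary host-catalogs read -id $HOST_CATALOG_ID
```

The output should match the Host Catalog information shown above.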
To use the Boundary AWS hosts plugin, the following details must be available for a user that has access to the DescribeInstances
API.
- AWS Access Key ID
- AWS Secret Access Key
These values should be available as environment variables within your terminal session, or copied from a safe location for use when setting up Boundary. This tutorial created the IAM user boundary
and exported these values as environment variables.
Note
If using root credentials instead of creating an IAM user, you should export these values as the environment variables defined below. These values are passed to Boundary when you create the host catalog later on.
Check that these variables were defined when you created the Boundary user.
$ echo $BOUNDARY_ACCESS_KEY_ID; echo $BOUNDARY_SECRET_ACCESS_KEY
If these values are missing, define them from where you stored the user's details.
$ export BOUNDARY_ACCESS_KEY_ID=<AWS Access Key ID>
$ export BOUNDARY_SECRET_ACCESS_KEY=<AWS Secret Access Key>
Note
If you used static credentials and proceeded without setting up the boundary
IAM user, expand the accordion below to continue.
Boundary only requires user credentials that can execute the aws ec2
describe-instances
command.
To proceed with different user credentials (such as root account credentials for testing purposes), export these values and provide them to Boundary in the following section. These values may already be available in your terminal session as the $AWS_ACCESS_KEY_ID
and $AWS_SECRET_ACCESS_KEY
environment variables.
$ export BOUNDARY_ACCESS_KEY_ID=<AWS_ACCESS_KEY_ID>
$ export BOUNDARY_SECRET_ACCESS_KEY=<AWS_SECRET_ACCESS_KEY>
If necessary, authenticate to Boundary as the admin user.
$ boundary authenticate
Please enter the login name (it will be hidden):
Please enter the password (it will be hidden):
Authentication information:
Account ID: acctpw_VOeNSFX8pQ
Auth Method ID: ampw_wxzojlKJLN
Expiration Time: Mon, 13 Feb 2023 12:35:32 MST
User ID: u_1vUkf5fPs9
The token was successfully stored in the chosen keyring and is not displayed here.
Next, create a new plugin-type host catalog with a -plugin-name
of aws
,
providing the Boundary access key ID and Boundary secret access key using the
-secret
flag. These values should map to the environment variables defined
above. Additionally, ensure disable_credential_rotation
and region
are set
using the -attr
flag.
$ boundary host-catalogs create plugin \
-scope-id $PROJECT_ID \
-plugin-name aws \
-attr disable_credential_rotation=true \
-attr region=$AWS_REGION \
-secret access_key_id=env://BOUNDARY_ACCESS_KEY_ID \
-secret secret_access_key=env://BOUNDARY_SECRET_ACCESS_KEY
Command flags:
-plugin-name
: This corresponds to the host catalog plugin's name, such asazure
oraws
disable_credential_rotation
: This tutorial uses a static secret by setting this value totrue
region
: The region to configure the host catalog for. All host sets in this catalog will be configured for this regionaccess_key_id
: The access key ID for the IAM user to use with this host catalogsecret_access_key
: The secret access key for the IAM user to use with this host catalog
Note
Although credentials stored within Boundary are encrypted, by default this plugin attempts to rotate credentials supplied through the secrets
object during a create or update call to the host catalog resource. The given credentials are used to create a new credential, and then revoked. In this way, after rotation, only Boundary knows the client secret in use by this plugin. Credential rotation will be generally available in a future release of Boundary.
Sample output:
$ boundary host-catalogs create plugin \
-scope-id p_1234567890 \
-plugin-name aws \
-attr disable_credential_rotation=true \
-attr region=$AWS_REGION \
-secret access_key_id=env://BOUNDARY_ACCESS_KEY_ID \
-secret secret_access_key=env://BOUNDARY_SECRET_ACCESS_KEY
Host Catalog information:
Created Time: Tue, 18 Jul 2023 10:08:12 MDT
ID: hcplg_N9p1Woaq8l
Plugin ID: pl_1PRTDb78iY
Secrets HMAC: 3AcXDqAEALitc7WKM2KUw8dSUVEyYG2391x1DZZCS32z
Type: plugin
Updated Time: Tue, 18 Jul 2023 10:08:12 MDT
Version: 1
Scope:
ID: p_1234567890
Name: Generated project scope
Parent Scope ID: o_1234567890
Type: project
Plugin:
ID: pl_1PRTDb78iY
Name: aws
Attributes:
disable_credential_rotation: true
region: us-east-1
Authorized Actions:
no-op
read
update
delete
Authorized Actions on Host Catalog's Collections:
host-sets:
create
list
hosts:
list
Copy the host catalog ID from the output (hcplg_N9p1Woaq8l in this example) and store it in the HOST_CATALOG_ID environment variable.
$ export HOST_CATALOG_ID=hcplg_N9p1Woaq8l
When you create a host catalog, the worker filter you define must match the tags assigned to your AWS worker. In the Deploy a worker section, you deployed a worker with the following tags:
tags {
"Name":"boundary-worker","service-type":"worker","cloud":"aws"
}
When you configure the host catalog, Boundary will use any available worker to attempt to refresh the catalog, unless you specify which worker should be used.
An appropriate filter expression selects this worker by its tags:
"aws" in "/tags/cloud"
You can also select the correct worker using other filter expressions. To learn more, refer to the Worker tags documentation.
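Based on the tags shown above, an equivalent expression could match on a different tag instead. This alternative is illustrative only; any expression that uniquely matches the worker's tags works:

```
"worker" in "/tags/service-type"
```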
Next, create a new file in your terraform/ directory called boundary.tf.
Configure the boundary provider to accept variables for the cluster's address, login name, and login password.
provider "boundary" {
addr = var.boundary_addr
auth_method_login_name = var.boundary_login_name
auth_method_password = var.boundary_login_password
}
variable "aws_region" {
type = string
}
variable "boundary_addr" {
type = string
}
variable "boundary_login_name" {
type = string
}
variable "boundary_login_password" {
type = string
}
Now, export these values as Terraform variables in your current shell session. Replace the values prefixed with YOUR_
with your actual login credentials.
$ export TF_VAR_aws_region=$AWS_REGION; export TF_VAR_boundary_addr="YOUR_BOUNDARY_ADDR"; export TF_VAR_boundary_login_name="YOUR_BOUNDARY_LOGIN_NAME"; export TF_VAR_boundary_login_password="YOUR_BOUNDARY_LOGIN_PASSWORD"
For example:
$ export TF_VAR_aws_region=us-east-1; export TF_VAR_boundary_addr="https://my-boundary-enterprise-cluster.dev"; export TF_VAR_boundary_login_name="admin"; export TF_VAR_boundary_login_password="password"
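Terraform automatically maps any environment variable named TF_VAR_<name> to the input variable <name> declared in your configuration, which is why no -var flags are needed here. A quick sketch of the convention:

```shell
# Terraform reads TF_VAR_aws_region as the value of variable "aws_region".
export TF_VAR_aws_region=us-east-1

# Verify the value is visible to your shell (and therefore to Terraform).
echo "aws_region=${TF_VAR_aws_region}"
# prints: aws_region=us-east-1
```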
Next, add a Boundary test org and project scope where the new dynamic host catalog will be created.
resource "boundary_scope" "aws_test_org" {
name = "AWS test org"
description = "Test org for AWS resources"
scope_id = "global"
auto_create_admin_role = true
auto_create_default_role = true
}
resource "boundary_scope" "aws_project" {
name = "aws_project"
description = "Test project for AWS dynamic host catalogs"
scope_id = boundary_scope.aws_test_org.id
auto_create_admin_role = true
}
Gather plugin details
To use the Boundary AWS hosts plugin, you need the following details:
- IAM Role ARN
- IAM Role ID
- AWS Region
These values should be available as environment variables within your terminal session, or copied to a safe location for use when setting up Boundary.
Export these values in your current shell session for use later on.
$ export AWS_ROLE_ARN=$(terraform output -raw boundary_iam_role_arn); export AWS_ROLE_ID=$(terraform output -raw boundary_iam_role_id); echo $AWS_ROLE_ARN; echo $AWS_ROLE_ID
arn:aws:iam::807078899029:role/boundary-worker-dhc
AROA3X2NGVVKTA4XKSOZZ
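An IAM role ARN always ends with the role name, so you can sanity-check the value you exported with simple shell parameter expansion. A sketch using a placeholder account ID:

```shell
# Placeholder ARN; the account ID and role name are illustrative.
ARN="arn:aws:iam::123456789012:role/boundary-worker-dhc"

# Strip everything up to the last "/" to recover the role name.
echo "role name: ${ARN##*/}"
# prints: role name: boundary-worker-dhc
```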
Now define two more Terraform variables for the AWS IAM credentials.
variable "iam_role_arn" {
type = string
}
variable "iam_role_id" {
type = string
}
Export these as Terraform variables in your terminal session.
$ export TF_VAR_iam_role_id=$AWS_ROLE_ID;
export TF_VAR_iam_role_arn=$AWS_ROLE_ARN;
echo $TF_VAR_iam_role_arn; echo $TF_VAR_iam_role_id
Add the boundary_host_catalog_plugin resource. This creates a new plugin-type host catalog. Set the plugin_name to aws. Set disable_credential_rotation to true, and set the AWS region, role ARN, and role external ID using the attributes_json attribute. These values should map to the environment variables defined above. Ensure that you set the worker_filter attribute to the worker filter discussed above.
Additionally, add an output for the host catalog ID.
resource "boundary_host_catalog_plugin" "aws_host_catalog" {
name = "AWS Catalog"
description = "AWS Host Catalog"
scope_id = boundary_scope.aws_project.id
plugin_name = "aws"
worker_filter = "\"aws\" in \"/tags/cloud\""
attributes_json = jsonencode({
"region" = var.aws_region,
"disable_credential_rotation" = true,
"role_arn" = var.iam_role_arn,
"role_external_id" = var.iam_role_id
})
}
output "aws_host_catalog_id" {
value = boundary_host_catalog_plugin.aws_host_catalog.id
}
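For reference, the attributes_json map above serializes to JSON like the following before Boundary stores it. The role ARN and external ID shown here are the example values from earlier in this tutorial:

```
{
  "region": "us-east-1",
  "disable_credential_rotation": true,
  "role_arn": "arn:aws:iam::807078899029:role/boundary-worker-dhc",
  "role_external_id": "AROA3X2NGVVKTA4XKSOZZ"
}
```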
To learn more about defining host catalog plugins, refer to the boundary_host_catalog_plugin documentation in the Terraform registry.
Note
Although credentials are stored encrypted within Boundary, by default this plugin attempts to rotate credentials during a create or update call to the host catalog resource. You should disable credential rotation when you configure a host catalog plugin with dynamic credentials.
Upgrade the Terraform dependencies to add the Boundary provider:
$ terraform init -upgrade
Initializing the backend...
Initializing provider plugins...
- Finding latest version of hashicorp/boundary...
- Finding hashicorp/aws versions matching "~> 5.88"...
- Finding latest version of hashicorp/tls...
- Finding latest version of hashicorp/local...
- Finding latest version of hashicorp/random...
- Using previously-installed hashicorp/tls v4.0.6
- Using previously-installed hashicorp/local v2.5.2
- Using previously-installed hashicorp/random v3.7.1
- Installing hashicorp/boundary v1.2.0...
- Installed hashicorp/boundary v1.2.0 (signed by HashiCorp)
- Using previously-installed hashicorp/aws v5.94.1
Terraform has made some changes to the provider dependency selections recorded
in the .terraform.lock.hcl file. Review those changes and commit them to your
version control system if they represent changes you intended to make.
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
Apply the new Terraform configuration to create the host catalog.
$ terraform apply -var BOUNDARY_ADDR=$BOUNDARY_ADDR --auto-approve
tls_private_key.worker_ssh_key: Refreshing state... [id=eb3e3c1074d36c33f4ce288115e99b3f6bdebc3c]
local_sensitive_file.worker_private_key: Refreshing state... [id=0e633212313b75960a4ac41ae50142d8a472838a]
data.aws_caller_identity.current: Reading...
data.aws_ami.amazon: Reading...
data.aws_availability_zones.boundary: Reading...
aws_key_pair.boundary_worker_key: Refreshing state... [id=default-boundary-worker-key]
aws_vpc.boundary_hosts_vpc: Refreshing state... [id=vpc-0e518fe3be84c0626]
aws_iam_role.boundary_worker_dhc: Refreshing state... [id=boundary-worker-dhc]
data.aws_caller_identity.current: Read complete after 0s [id=807078899029]
data.aws_availability_zones.boundary: Read complete after 0s [id=us-east-1]
aws_iam_role_policy.describe_instances: Refreshing state... [id=boundary-worker-dhc:AWSEC2DescribeInstances]
aws_iam_instance_profile.boundary_worker_dhc_profile: Refreshing state... [id=boundary-worker-dhc-profile]
data.aws_ami.amazon: Read complete after 1s [id=ami-0ae838902389a789a]
aws_security_group.boundary_worker_outbound: Refreshing state... [id=sg-00f63d290a8ccf29e]
aws_subnet.boundary_hosts_subnet: Refreshing state... [id=subnet-0bc1dd3de57e82739]
aws_security_group.boundary_ssh: Refreshing state... [id=sg-033677f3eb7ac3152]
aws_internet_gateway.boundary_gateway: Refreshing state... [id=igw-07e0f3fb928fbcde7]
aws_route_table.boundary_hosts_public_rt: Refreshing state... [id=rtb-0cfad395e11776531]
aws_instance.boundary_instance[1]: Refreshing state... [id=i-07bce1d9e4c185649]
aws_instance.boundary_instance[3]: Refreshing state... [id=i-0d5b03f1b38cd4430]
aws_instance.boundary_instance[2]: Refreshing state... [id=i-0f24cb0a1f7ecfc7c]
aws_instance.boundary_instance[0]: Refreshing state... [id=i-07656c05a76f47415]
aws_route_table_association.public_1_rt_a: Refreshing state... [id=rtbassoc-07ef7e8b4866bc828]
aws_instance.boundary_worker: Refreshing state... [id=i-0205439c0e83f8fa0]
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following
symbols:
+ create
Terraform will perform the following actions:
# boundary_host_catalog_plugin.aws_host_catalog will be created
+ resource "boundary_host_catalog_plugin" "aws_host_catalog" {
+ attributes_json = jsonencode(
{
+ disable_credential_rotation = true
+ region = "us-east-1"
+ role_arn = "arn:aws:iam::807078899029:role/boundary-worker-dhc"
+ role_external_id = "AROA3X2NGVVKTA4XKSOZQ"
}
)
+ description = "AWS Host Catalog"
+ id = (known after apply)
+ internal_force_update = (known after apply)
+ internal_hmac_used_for_secrets_config_hmac = (known after apply)
+ internal_secrets_config_hmac = (known after apply)
+ name = "AWS Catalog"
+ plugin_id = (known after apply)
+ plugin_name = "aws"
+ scope_id = (known after apply)
+ secrets_hmac = (known after apply)
+ worker_filter = "\"aws\" in \"/tags/cloud\""
}
# boundary_scope.aws_project will be created
+ resource "boundary_scope" "aws_project" {
+ auto_create_admin_role = true
+ description = "Test project for AWS dynamic host catalogs"
+ id = (known after apply)
+ name = "aws_project"
+ scope_id = (known after apply)
}
# boundary_scope.aws_test_org will be created
+ resource "boundary_scope" "aws_test_org" {
+ auto_create_admin_role = true
+ auto_create_default_role = true
+ description = "Test org for AWS resources"
+ id = (known after apply)
+ name = "AWS test org"
+ scope_id = "global"
}
Plan: 3 to add, 0 to change, 0 to destroy.
Changes to Outputs:
+ aws_host_catalog_id = (known after apply)
boundary_scope.aws_test_org: Creating...
boundary_scope.aws_test_org: Creation complete after 1s [id=o_MBd5FSe5U7]
boundary_scope.aws_project: Creating...
boundary_scope.aws_project: Creation complete after 0s [id=p_6XNQRlugfI]
boundary_host_catalog_plugin.aws_host_catalog: Creating...
boundary_host_catalog_plugin.aws_host_catalog: Creation complete after 1s [id=hcplg_4utedEnKQr]
Apply complete! Resources: 3 added, 0 changed, 0 destroyed.
Outputs:
aws_host_catalog_id = "hcplg_4utedEnKQr"
boundary_iam_role_arn = "arn:aws:iam::807078899029:role/boundary-worker-dhc"
boundary_iam_role_id = "AROA3X2NGVVKTA4XKSOZQ"
boundary_worker_key_pair_name = "default-boundary-worker-key"
host_public_ips = [
"44.195.128.144",
"44.213.128.21",
"98.82.24.108",
"3.236.131.158",
]
worker_private_key = <sensitive>
worker_public_ip = "34.234.164.123"
Copy the host catalog ID from the output (hcplg_4utedEnKQr in this example) and store it in the HOST_CATALOG_ID environment variable.
$ export HOST_CATALOG_ID=hcplg_4utedEnKQr
$ export HOST_CATALOG_ID=hcplg_4utedEnKQr
To use the Boundary AWS hosts plugin, you need the following values:
- AWS (Boundary user) Access Key ID
- AWS (Boundary user) Secret Access Key
- AWS Region
These values should be available as environment variables within your terminal session, or copied to a safe location for use when setting up Boundary.
Export these values in your current shell session for use later on.
$ export BOUNDARY_ACCESS_KEY_ID=$(terraform output -raw boundary_access_key_id); export BOUNDARY_SECRET_ACCESS_KEY=$(terraform output -raw boundary_secret_access_key); echo $BOUNDARY_ACCESS_KEY_ID; echo $BOUNDARY_SECRET_ACCESS_KEY
AKIASWVU2XLZOLN2IRD3
aXe492h+rcr4jyhEWwc9FP+6XOiusDl4D8RarHP0
Note
If you used static credentials and proceeded without setting up the boundary IAM user, expand the accordion below to continue.
Boundary only requires user credentials that can execute the aws ec2 describe-instances command.
To proceed with different user credentials (such as root account credentials for testing purposes), export these values as Terraform input variables. These values may already be available in your terminal session as the $AWS_ACCESS_KEY_ID
and $AWS_SECRET_ACCESS_KEY
environment variables.
$ export TF_VAR_iam_access_key_id=<AWS_ACCESS_KEY_ID>
$ export TF_VAR_iam_secret_access_key=<AWS_SECRET_ACCESS_KEY>
Now define two additional variables for the AWS IAM credentials.
variable "iam_access_key_id" {
type = string
}
variable "iam_secret_access_key" {
type = string
}
Export these as Terraform variables in your terminal session.
$ export TF_VAR_iam_access_key_id=$BOUNDARY_ACCESS_KEY_ID;
export TF_VAR_iam_secret_access_key=$BOUNDARY_SECRET_ACCESS_KEY;
echo $TF_VAR_iam_access_key_id; echo $TF_VAR_iam_secret_access_key
AKIASWVU2XLZLFLIDMVW
8BnyuNv7egZG9/k/+d79JGLoJXcqXGEZiUPEcx0O
Add the boundary_host_catalog_plugin resource. This creates a new plugin-type host catalog. Set the plugin_name to aws, and provide the IAM Access Key ID and Secret Access Key using the secrets_json attribute. These values should map to the environment variables defined above. Additionally, ensure that you set the worker_filter attribute to the worker filter defined above. Lastly, set disable_credential_rotation to true and the AWS region using attributes_json.
Additionally, add an output for the host catalog ID.
resource "boundary_host_catalog_plugin" "aws_host_catalog" {
name = "AWS Catalog"
description = "AWS Host Catalog"
  scope_id        = boundary_scope.aws_project.id
plugin_name = "aws"
worker_filter = "\"aws\" in \"/tags/cloud\""
attributes_json = jsonencode({
"region" = var.aws_region,
"disable_credential_rotation" = true
})
secrets_json = jsonencode({
"access_key_id" = var.iam_access_key_id,
"secret_access_key" = var.iam_secret_access_key
})
}
output "aws_host_catalog_id" {
value = boundary_host_catalog_plugin.aws_host_catalog.id
}
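For reference, the secrets_json map above serializes to JSON like the following; the values shown here are placeholders, not real credentials. Unlike attributes, Boundary never returns these values after creation; a read on the host catalog includes only the Secrets HMAC shown in the earlier sample output.

```
{
  "access_key_id": "AKIAEXAMPLEKEY",
  "secret_access_key": "exampleSecretValue"
}
```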
To learn more about defining host catalog plugins, refer to the boundary_host_catalog_plugin documentation in the Terraform registry.
Note
Although credentials are stored encrypted within Boundary, by default this plugin attempts to rotate credentials supplied through the secrets
object during a create or update call to the host catalog resource. You should disable credential rotation when you configure a host catalog plugin with dynamic credentials.
Upgrade the Terraform dependencies to add the Boundary provider:
$ terraform init -upgrade
Initializing the backend...
Initializing provider plugins...
- Finding latest version of hashicorp/boundary...
- Finding hashicorp/aws versions matching "~> 5.88"...
- Finding latest version of hashicorp/tls...
- Finding latest version of hashicorp/local...
- Finding latest version of hashicorp/random...
- Using previously-installed hashicorp/tls v4.0.6
- Using previously-installed hashicorp/local v2.5.2
- Using previously-installed hashicorp/random v3.7.1
- Installing hashicorp/boundary v1.2.0...
- Installed hashicorp/boundary v1.2.0 (signed by HashiCorp)
- Using previously-installed hashicorp/aws v5.94.1
Terraform has made some changes to the provider dependency selections recorded
in the .terraform.lock.hcl file. Review those changes and commit them to your
version control system if they represent changes you intended to make.
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
Apply the new Terraform configuration to create the host catalog.
$ terraform apply -var BOUNDARY_ADDR=$BOUNDARY_ADDR --auto-approve
tls_private_key.worker_ssh_key: Refreshing state... [id=f02f2e75ac602bc189e0b358e1406f032d7007e9]
local_sensitive_file.worker_private_key: Refreshing state... [id=335b9b022d146da6eb9747e31c09d6ad38aac38a]
data.aws_ami.amazon: Reading...
data.aws_availability_zones.boundary: Reading...
data.aws_caller_identity.current: Reading...
aws_key_pair.boundary_worker_key: Refreshing state... [id=default-boundary-worker-key]
aws_vpc.boundary_hosts_vpc: Refreshing state... [id=vpc-06e1206f5ead3d0a7]
data.aws_caller_identity.current: Read complete after 0s [id=807078899029]
random_id.aws_iam_user_name: Refreshing state... [id=D4cZ7A]
aws_iam_user.boundary: Refreshing state... [id=demo-rbeck@hashicorp.com-boundary-iam-user260512236]
data.aws_availability_zones.boundary: Read complete after 0s [id=us-east-1]
aws_iam_user_policy.BoundaryDescribeInstances: Refreshing state... [id=demo-rbeck@hashicorp.com-boundary-iam-user260512236:BoundaryDescribeInstances]
aws_iam_access_key.boundary: Refreshing state... [id=AKIA3X2NGVVKU6CJ4QZT]
data.aws_ami.amazon: Read complete after 1s [id=ami-0ae838902389a789a]
aws_internet_gateway.boundary_gateway: Refreshing state... [id=igw-0b2c2a67e70810410]
aws_security_group.boundary_worker_outbound: Refreshing state... [id=sg-0420663a279c95f15]
aws_subnet.boundary_hosts_subnet: Refreshing state... [id=subnet-026611c024b2f260b]
aws_security_group.boundary_ssh: Refreshing state... [id=sg-000a4e198606547de]
aws_route_table.boundary_hosts_public_rt: Refreshing state... [id=rtb-0ecfebfde1e536648]
aws_instance.boundary_worker: Refreshing state... [id=i-0cb01545ee52a260e]
aws_instance.boundary_instance[0]: Refreshing state... [id=i-0b55fdde76ca3ad8b]
aws_instance.boundary_instance[2]: Refreshing state... [id=i-02005c065bfb5cfa1]
aws_instance.boundary_instance[1]: Refreshing state... [id=i-0b099a69025e9dc90]
aws_instance.boundary_instance[3]: Refreshing state... [id=i-00d1f4a742b67540f]
aws_route_table_association.public_1_rt_a: Refreshing state... [id=rtbassoc-083e7036477e2045c]
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following
symbols:
+ create
Terraform will perform the following actions:
# boundary_host_catalog_plugin.aws_host_catalog will be created
+ resource "boundary_host_catalog_plugin" "aws_host_catalog" {
+ attributes_json = jsonencode(
{
+ disable_credential_rotation = true
+ region = "us-east-1"
}
)
+ description = "AWS Host Catalog"
+ id = (known after apply)
+ internal_force_update = (known after apply)
+ internal_hmac_used_for_secrets_config_hmac = (known after apply)
+ internal_secrets_config_hmac = (known after apply)
+ name = "AWS Catalog"
+ plugin_id = (known after apply)
+ plugin_name = "aws"
+ scope_id = (known after apply)
+ secrets_hmac = (known after apply)
+ secrets_json = (sensitive value)
}
# boundary_scope.aws_project will be created
+ resource "boundary_scope" "aws_project" {
+ auto_create_admin_role = true
+ description = "Test project for AWS dynamic host catalogs"
+ id = (known after apply)
+ name = "aws_project"
+ scope_id = (known after apply)
}
# boundary_scope.aws_test_org will be created
+ resource "boundary_scope" "aws_test_org" {
+ auto_create_admin_role = true
+ auto_create_default_role = true
+ description = "Test org for AWS resources"
+ id = (known after apply)
+ name = "AWS test org"
+ scope_id = "global"
}
Plan: 3 to add, 0 to change, 0 to destroy.
Changes to Outputs:
+ aws_host_catalog_id = (known after apply)
boundary_scope.aws_test_org: Creating...
boundary_scope.aws_test_org: Creation complete after 0s [id=o_UqbwBNRxRg]
boundary_scope.aws_project: Creating...
boundary_scope.aws_project: Creation complete after 0s [id=p_rmUyEtQWqj]
boundary_host_catalog_plugin.aws_host_catalog: Creating...
boundary_host_catalog_plugin.aws_host_catalog: Creation complete after 1s [id=hcplg_cxtXN3rOEM]
Apply complete! Resources: 3 added, 0 changed, 0 destroyed.
Outputs:
aws_host_catalog_id = "hcplg_cxtXN3rOEM"
boundary_access_key_id = "AKIA3X2NGVVKU6CJ4QZT"
boundary_secret_access_key = <sensitive>
boundary_worker_key_pair_name = "default-boundary-worker-key"
host_public_ips = [
"98.81.67.214",
"44.211.57.160",
"100.27.36.188",
"44.201.22.57",
]
worker_private_key = <sensitive>
worker_public_ip = "44.200.235.252"
Create the host sets
With the dynamic host catalog created, host sets can now be defined that correspond to the service-type and application tags added to the hosts.
Recall the three host sets you wish to create:
- All hosts with a service-type tag of database
- All hosts with an application tag of dev
- All hosts with an application tag of production
The respective host set filters can be constructed as:
Authenticate to the Boundary Admin UI as the admin user.
Once logged in, select the org and project that contains your dynamic host catalog.
Navigate to the Host Catalogs page. Click on the AWS Catalog.
Click the Host Sets tab. Click the New button to create a new host set.
Complete the following fields:
- Name: database
- Filter: tag:service-type=database
Click Add beside the filter field to add the filter.
Click Save.
Wait a moment, then click on the Hosts tab, which should contain the following hosts:
- boundary-1-dev
- boundary-2-dev
- boundary-3-production
- boundary-4-production
Note
It may take up to five minutes for the host catalog to sync with the cloud provider. Refresh the page if the hosts do not initially appear in the catalog.
Now follow the same steps to create two more host sets using the following filters:
The dev host set:
- Name: dev
- Filter: tag:application=dev
The production host set:
- Name: production
- Filter: tag:application=production
Check the hosts that are included in these host sets and move on to the next section.
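As a toy illustration of how these tag filters partition the catalog (the real matching is performed by the AWS API when Boundary's plugin queries for instances, not by this script), the four tutorial hosts can be filtered locally:

```shell
# Toy illustration only: the real filtering is done by AWS when Boundary's
# plugin queries for instances. Each line below is "host-name tag=value ...",
# mirroring the four hosts used in this tutorial.
hosts='boundary-1-dev service-type=database application=dev
boundary-2-dev service-type=database application=dev
boundary-3-production service-type=database application=production
boundary-4-production service-type=database application=production'

# Show which hosts each of the three tutorial filters selects.
for filter in service-type=database application=dev application=production; do
  echo "tag:$filter ->"
  printf '%s\n' "$hosts" | grep -F "$filter" | awk '{print "  " $1}'
done
```

The database filter selects all four hosts, while the dev and production filters each select the two correspondingly tagged hosts, which is exactly the membership the host sets should show once synced.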
Create the first plugin host set containing hosts tagged with a service-type
of database
, supplying the host catalog ID copied above and the needed filter using the -attr
flag.
$ boundary host-sets create plugin \
-name database \
-host-catalog-id $HOST_CATALOG_ID \
-attr filters=tag:service-type=database
Sample output:
$ boundary host-sets create plugin \
-name database \
-host-catalog-id $HOST_CATALOG_ID \
-attr filters=tag:service-type=database
Host Set information:
Created Time: Tue, 18 Jul 2023 10:20:19 MDT
Host Catalog ID: hcplg_N9p1Woaq8l
ID: hsplg_sSCs67KYGD
Name: database
Type: plugin
Updated Time: Tue, 18 Jul 2023 10:20:19 MDT
Version: 1
Scope:
ID: p_1234567890
Name: Generated project scope
Parent Scope ID: o_1234567890
Type: project
Plugin:
ID: pl_5pQekoPRJt
Name: aws
Attributes:
filters: tag:service-type=database
Authorized Actions:
no-op
read
update
delete
Copy the database host set ID from the output (hsplg_sSCs67KYGD
in this example) and store it in the DATABASE_HOST_SET_ID
environment variable.
$ export DATABASE_HOST_SET_ID=hsplg_sSCs67KYGD
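Rather than copying the ID by hand, it can be parsed out of the CLI's JSON output (the boundary CLI accepts a -format=json flag). The sketch below demonstrates the parsing step against a saved sample response rather than a live call; the exact response shape and the availability of python3 are assumptions:

```shell
# Sketch: capture the host set ID from JSON output instead of copying it by
# hand. A live invocation would look like:
#   boundary host-sets create plugin ... -format=json
# The response shape below is an assumption; it is demonstrated here against
# a saved sample response. Requires python3.
SAMPLE_RESPONSE='{"item":{"id":"hsplg_sSCs67KYGD","name":"database"}}'
DATABASE_HOST_SET_ID=$(printf '%s' "$SAMPLE_RESPONSE" \
  | python3 -c 'import json, sys; print(json.load(sys.stdin)["item"]["id"])')
echo "$DATABASE_HOST_SET_ID"
```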
Wait a moment, then list all available hosts within the aws
host catalog, which should contain the newly created database
host set.
Note
It may take up to five minutes for the host catalog to sync with the cloud provider.
$ boundary hosts list -host-catalog-id $HOST_CATALOG_ID
Host information:
ID: hplg_AAdFY3zsdP
External ID: i-0c482cd40f62309e2
Version: 1
Type: plugin
Authorized Actions:
no-op
read
ID: hplg_6PlZJpEzgx
External ID: i-0767b38ec388cf165
Version: 1
Type: plugin
Authorized Actions:
no-op
read
ID: hplg_7KjFRrcLxD
External ID: i-08c588f2d58120192
Version: 1
Type: plugin
Authorized Actions:
no-op
read
ID: hplg_FQzmhOmDqn
External ID: i-0dac079e5f8760b9c
Version: 1
Type: plugin
Authorized Actions:
no-op
read
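Since the catalog can take up to five minutes to sync, re-running the list command by hand gets tedious. A small polling wrapper can help; retry_until_output is a hypothetical helper written for this sketch, not a Boundary feature:

```shell
# Hypothetical polling helper: re-run a command until it produces output,
# sleeping between attempts. Intended use with this tutorial's command:
#   retry_until_output boundary hosts list -host-catalog-id $HOST_CATALOG_ID
retry_until_output() {
  attempts=${RETRY_ATTEMPTS:-10}
  delay=${RETRY_DELAY:-30}
  i=0
  while [ "$i" -lt "$attempts" ]; do
    out=$("$@" 2>/dev/null)
    if [ -n "$out" ]; then
      printf '%s\n' "$out"
      return 0
    fi
    i=$((i + 1))
    sleep "$delay"
  done
  echo "no output after $attempts attempts" >&2
  return 1
}

# Demonstration with a command that succeeds immediately:
RETRY_DELAY=0 retry_until_output echo "hosts found"
```

With the defaults above (10 attempts, 30 seconds apart), the helper gives up after roughly the five-minute sync window mentioned in the note.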
Troubleshooting
If the boundary hosts list
command returns No hosts found
, expand the accordion below to check your work.
If the host catalog is misconfigured, hosts will not be discoverable by Boundary. There are four potential issues to check:
- The host set filter is defined correctly.
- The host catalog and host set IDs are exported correctly as environment variables.
- The IAM policy granting DescribeInstances permissions is attached to the boundary IAM user.
- The AWS region is defined correctly.
Note
Depending on the type of configuration issue, you will need to wait approximately 5 - 10 minutes for the existing host catalog or host sets to sync with the provider and refresh their values. If you do not want to wait, you can create a new host catalog and host set from scratch, but these will also take several minutes to sync upon creation.
If the filter is incorrect, you should update the host set filter. This process can also be used to update the filter criteria for any existing host sets in the future.
First, check the environment variables defined when creating a host catalog plugin. Ensure these are the correct values gathered when setting up the cloud hosts.
If these are incorrectly defined, you should set the environment variables again, and update the host catalog:
$ boundary host-catalogs update plugin \
-id $HOST_CATALOG_ID \
-plugin-name aws \
-attr disable_credential_rotation=true \
-attr region=$AWS_REGION \
-secret access_key_id=env://BOUNDARY_ACCESS_KEY_ID \
-secret secret_access_key=env://BOUNDARY_SECRET_ACCESS_KEY
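To rule out the environment variable problem quickly, a helper can confirm that every required variable is set and non-empty before re-running the update. require_env is a hypothetical helper written for this sketch; the variable names match the ones used in this tutorial:

```shell
# Hypothetical helper: verify that each named environment variable is set
# and non-empty, reporting any that are missing.
require_env() {
  missing=0
  for v in "$@"; do
    val=$(eval "printf '%s' \"\${$v:-}\"")
    if [ -z "$val" ]; then
      echo "missing: $v"
      missing=1
    fi
  done
  [ "$missing" -eq 0 ]
}

# Usage before re-running the host catalog update:
if require_env HOST_CATALOG_ID AWS_REGION BOUNDARY_ACCESS_KEY_ID BOUNDARY_SECRET_ACCESS_KEY; then
  echo "all variables set"
else
  echo "export the variables above, then re-run the update" >&2
fi
```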
Second, check whether the DescribeInstances policy was assigned to the boundary IAM user. If incorrect permissions are assigned or the wrong user is defined, Boundary will not be able to view the hosts.
Review the steps for configuring an IAM user.
After correcting the policy, give Boundary up to five minutes to refresh the connection to AWS, then list the available hosts again.
Now create a host set that corresponds to the application tag of dev.
$ boundary host-sets create plugin \
-name dev \
-host-catalog-id $HOST_CATALOG_ID \
-attr filters=tag:application=dev
Sample output:
$ boundary host-sets create plugin \
-name dev \
-host-catalog-id $HOST_CATALOG_ID \
-attr filters=tag:application=dev
Host Set information:
Created Time: Tue, 18 Jul 2023 10:20:19 MDT
Host Catalog ID: hcplg_Ia7R4E39oF
ID: hsplg_yG2pSNlbTM
Name: dev
Type: plugin
Updated Time: Tue, 18 Jul 2023 10:20:19 MDT
Version: 1
Scope:
ID: p_1234567890
Name: Generated project scope
Parent Scope ID: o_1234567890
Type: project
Plugin:
ID: pl_1PRTDb78iY
Name: aws
Attributes:
filters: tag:application=dev
Authorized Actions:
no-op
read
update
delete
Copy the dev host set ID from the output (hsplg_yG2pSNlbTM
in this example) and store it in the DEV_HOST_SET_ID
environment variable.
$ export DEV_HOST_SET_ID=hsplg_yG2pSNlbTM
Lastly, create a host set that corresponds to the application tag of production.
$ boundary host-sets create plugin \
-name production \
-host-catalog-id $HOST_CATALOG_ID \
-attr filters=tag:application=production
Sample output:
$ boundary host-sets create plugin \
-name production \
-host-catalog-id $HOST_CATALOG_ID \
-attr filters=tag:application=production
Host Set information:
Created Time: Tue, 18 Jul 2023 10:20:19 MDT
Host Catalog ID: hcplg_Ia7R4E39oF
ID: hsplg_ZmoClk4HiD
Name: production
Type: plugin
Updated Time: Tue, 18 Jul 2023 10:20:19 MDT
Version: 1
Scope:
ID: p_1234567890
Name: Generated project scope
Parent Scope ID: o_1234567890
Type: project
Plugin:
ID: pl_1PRTDb78iY
Name: aws
Attributes:
filters: tag:application=production
Authorized Actions:
no-op
read
update
delete
Copy the production host set ID from the output (hsplg_ZmoClk4HiD
in this example) and store it in the PRODUCTION_HOST_SET_ID
environment variable.
$ export PRODUCTION_HOST_SET_ID=hsplg_ZmoClk4HiD
Open the boundary.tf
file and add the boundary_host_set_plugin
resource. This creates a new plugin-type host set. Set the scope_id
to the same scope that contains the host catalog created earlier, and then set the host_catalog_id
. Define the filters
attribute by passing the host set filter tag:service-type=database
.
Additionally, add an output for the host set ID.
resource "boundary_host_set_plugin" "database_host_set" {
name = "Database Host Set"
description = "AWS database host set"
host_catalog_id = boundary_host_catalog_plugin.aws_host_catalog.id
attributes_json = jsonencode({
"filters" = ["tag:service-type=database"]
})
}
output "database_host_set_id" {
value = boundary_host_set_plugin.database_host_set.id
}
To learn more about defining host set plugins, refer to the boundary_host_set_plugin documentation in the Terraform registry.
Now, add two additional host sets for the tag:application=dev
and tag:application=production
hosts. Also define outputs for their host set IDs.
resource "boundary_host_set_plugin" "dev_host_set" {
name = "Dev Host Set"
description = "AWS dev host set"
host_catalog_id = boundary_host_catalog_plugin.aws_host_catalog.id
attributes_json = jsonencode({
"filters" = ["tag:application=dev"]
})
}
output "dev_host_set_id" {
value = boundary_host_set_plugin.dev_host_set.id
}
resource "boundary_host_set_plugin" "production_host_set" {
name = "Production Host Set"
description = "AWS Production host set"
host_catalog_id = boundary_host_catalog_plugin.aws_host_catalog.id
attributes_json = jsonencode({
"filters" = ["tag:application=production"]
})
}
output "production_host_set_id" {
value = boundary_host_set_plugin.production_host_set.id
}
Apply the new Terraform configuration to create the host sets.
$ terraform apply -var BOUNDARY_ADDR=$BOUNDARY_ADDR --auto-approve
tls_private_key.worker_ssh_key: Refreshing state... [id=eb3e3c1074d36c33f4ce288115e99b3f6bdebc3c]
local_sensitive_file.worker_private_key: Refreshing state... [id=0e633212313b75960a4ac41ae50142d8a472838a]
data.aws_ami.amazon: Reading...
data.aws_availability_zones.boundary: Reading...
data.aws_caller_identity.current: Reading...
aws_key_pair.boundary_worker_key: Refreshing state... [id=default-boundary-worker-key]
aws_vpc.boundary_hosts_vpc: Refreshing state... [id=vpc-0e518fe3be84c0626]
aws_iam_role.boundary_worker_dhc: Refreshing state... [id=boundary-worker-dhc]
data.aws_caller_identity.current: Read complete after 0s [id=807078899029]
data.aws_availability_zones.boundary: Read complete after 0s [id=us-east-1]
boundary_scope.aws_test_org: Refreshing state... [id=o_MBd5FSe5U7]
boundary_scope.aws_project: Refreshing state... [id=p_6XNQRlugfI]
boundary_host_catalog_plugin.aws_host_catalog: Refreshing state... [id=hcplg_4utedEnKQr]
aws_iam_role_policy.describe_instances: Refreshing state... [id=boundary-worker-dhc:AWSEC2DescribeInstances]
aws_iam_instance_profile.boundary_worker_dhc_profile: Refreshing state... [id=boundary-worker-dhc-profile]
data.aws_ami.amazon: Read complete after 1s [id=ami-0ae838902389a789a]
aws_subnet.boundary_hosts_subnet: Refreshing state... [id=subnet-0bc1dd3de57e82739]
aws_internet_gateway.boundary_gateway: Refreshing state... [id=igw-07e0f3fb928fbcde7]
aws_security_group.boundary_ssh: Refreshing state... [id=sg-033677f3eb7ac3152]
aws_security_group.boundary_worker_outbound: Refreshing state... [id=sg-00f63d290a8ccf29e]
aws_route_table.boundary_hosts_public_rt: Refreshing state... [id=rtb-0cfad395e11776531]
aws_instance.boundary_instance[0]: Refreshing state... [id=i-07656c05a76f47415]
aws_instance.boundary_instance[2]: Refreshing state... [id=i-0f24cb0a1f7ecfc7c]
aws_instance.boundary_instance[1]: Refreshing state... [id=i-07bce1d9e4c185649]
aws_instance.boundary_instance[3]: Refreshing state... [id=i-0d5b03f1b38cd4430]
aws_instance.boundary_worker: Refreshing state... [id=i-0205439c0e83f8fa0]
aws_route_table_association.public_1_rt_a: Refreshing state... [id=rtbassoc-07ef7e8b4866bc828]
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following
symbols:
+ create
~ update in-place
Terraform will perform the following actions:
# boundary_host_catalog_plugin.aws_host_catalog will be updated in-place
~ resource "boundary_host_catalog_plugin" "aws_host_catalog" {
id = "hcplg_4utedEnKQr"
~ internal_force_update = "6731200570893029034" -> (known after apply)
name = "AWS Catalog"
# (7 unchanged attributes hidden)
}
# boundary_host_set_plugin.database_host_set will be created
+ resource "boundary_host_set_plugin" "database_host_set" {
+ attributes_json = jsonencode(
{
+ filters = [
+ "tag:service-type=database",
]
}
)
+ description = "AWS database host set"
+ host_catalog_id = "hcplg_4utedEnKQr"
+ id = (known after apply)
+ name = "Database Host Set"
+ type = "plugin"
}
# boundary_host_set_plugin.dev_host_set will be created
+ resource "boundary_host_set_plugin" "dev_host_set" {
+ attributes_json = jsonencode(
{
+ filters = [
+ "tag:application=dev",
]
}
)
+ description = "AWS dev host set"
+ host_catalog_id = "hcplg_4utedEnKQr"
+ id = (known after apply)
+ name = "Dev Host Set"
+ type = "plugin"
}
# boundary_host_set_plugin.production_host_set will be created
+ resource "boundary_host_set_plugin" "production_host_set" {
+ attributes_json = jsonencode(
{
+ filters = [
+ "tag:application=production",
]
}
)
+ description = "AWS Production host set"
+ host_catalog_id = "hcplg_4utedEnKQr"
+ id = (known after apply)
+ name = "Production Host Set"
+ type = "plugin"
}
Plan: 3 to add, 1 to change, 0 to destroy.
Changes to Outputs:
+ database_host_set_id = (known after apply)
+ dev_host_set_id = (known after apply)
+ production_host_set_id = (known after apply)
boundary_host_catalog_plugin.aws_host_catalog: Modifying... [id=hcplg_4utedEnKQr]
boundary_host_catalog_plugin.aws_host_catalog: Modifications complete after 0s [id=hcplg_4utedEnKQr]
boundary_host_set_plugin.database_host_set: Creating...
boundary_host_set_plugin.production_host_set: Creating...
boundary_host_set_plugin.dev_host_set: Creating...
boundary_host_set_plugin.production_host_set: Creation complete after 1s [id=hsplg_IAUBCopZJG]
boundary_host_set_plugin.dev_host_set: Creation complete after 1s [id=hsplg_QdsEJ4S4i3]
boundary_host_set_plugin.database_host_set: Creation complete after 1s [id=hsplg_CzXBqbfIJq]
Apply complete! Resources: 3 added, 1 changed, 0 destroyed.
Outputs:
aws_host_catalog_id = "hcplg_4utedEnKQr"
boundary_iam_role_arn = "arn:aws:iam::807078899029:role/boundary-worker-dhc"
boundary_iam_role_id = "AROA3X2NGVVKTA4XKSOZQ"
boundary_worker_key_pair_name = "default-boundary-worker-key"
database_host_set_id = "hsplg_CzXBqbfIJq"
dev_host_set_id = "hsplg_QdsEJ4S4i3"
host_public_ips = [
"44.195.128.144",
"44.213.128.21",
"98.82.24.108",
"3.236.131.158",
]
production_host_set_id = "hsplg_IAUBCopZJG"
worker_private_key = <sensitive>
worker_public_ip = "34.234.164.123"
Verify catalog membership
With the database, dev, and production host sets defined within the AWS host catalog, the next step is to verify that the four instances listed as members of the catalog are dynamically included in the correct host sets.
Host membership can be verified by reading the host set details and checking its member host IDs.
Check the database host set
First, verify that the database host set contains all four members of the AWS host catalog.
Navigate to the Host Catalogs page and click on the AWS Catalog.
Click on the Host Sets tab.
Click on the Database Host Set host set, then click on the Hosts tab.
Verify the host set contains the following hosts:
- boundary-1-dev
- boundary-2-dev
- boundary-3-production
- boundary-4-production
If any of these hosts are missing, expand the troubleshooting accordion to diagnose what could be wrong.
If the host catalog is misconfigured, hosts will not be discoverable by Boundary.
At this point in the tutorial, hosts are contained within the host catalog but do not appear in one or more host sets. This means that the host set itself is likely misconfigured.
Navigate to the database
host set Details page. Check the Filter
section, and verify it matches the correctly defined filter:
tag:service-type=database
If the tag is incorrectly assigned, you should update the affected host set to fix the filter by clicking the Edit button.
After you update the filter, click Save. Boundary will automatically refresh the host set.
Note
Depending on the type of configuration issue, you will need to wait approximately 5 - 10 minutes for the existing host catalog or host sets to sync with the provider and refresh their values. If you do not want to wait, you can create a new host catalog and host set from scratch, but these will also take several minutes to sync upon creation.
Check that the updated filter is working by navigating back to the Hosts tab.
If the dev
or production
host sets are affected by incorrect filters, follow the same procedure to update their filters accordingly.
Check the dev host set
Next, check the dev
host set members.
Navigate to the Host Catalogs page and click on the AWS Catalog.
Click on the Host Sets tab.
Click on the Dev Host Set host set, then click on the Hosts tab.
Verify the host set contains the following hosts:
- boundary-1-dev
- boundary-2-dev
If any of these hosts are missing, expand the troubleshooting accordion above to diagnose what could be wrong.
Check the production host set
Lastly, check the production host set members.
Navigate to the Host Catalogs page and click on the AWS Catalog.
Click on the Host Sets tab.
Click on the Production Host Set host set, then click on the Hosts tab.
Verify the host set contains the following host:
- boundary-3-production
Notice that even though there are two production instances, only one is listed in the host set.
To figure out what could be wrong, compare the members of the production
host set to the members of the database
host set. Remember, members of the production
and dev
host sets are a subset of the database
host set.
For example, by comparing the Host IDs of the dev host set to those of the production host set, a host ID such as hplg_7KjFRrcLxD should be missing from the production host set, although it is contained within the database host set.
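This comparison is simple set arithmetic: every member of the database host set should appear in either the dev or the production host set, so anything left over points at a mistagged instance. A sketch using the host IDs from this tutorial's sample output:

```shell
# Host IDs from this tutorial's sample output. The dev and production host
# sets together should cover the database host set exactly; any leftover ID
# belongs to a mistagged instance.
database='hplg_4JwY7y6il2
hplg_6PlZJpEzgx
hplg_7KjFRrcLxD
hplg_FQzmhOmDqn'
dev='hplg_6PlZJpEzgx
hplg_FQzmhOmDqn'
production='hplg_4JwY7y6il2'

# Set difference: database minus (dev union production).
unaccounted=$(printf '%s\n' "$database" | grep -vxF -e "$dev" -e "$production")
echo "missing from both dev and production: $unaccounted"
```

Here the leftover ID is hplg_7KjFRrcLxD, the host whose production tag needs to be inspected in AWS.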
Check the database host set
Perform a read on the host set named database
to view its members.
$ boundary host-sets read -id $DATABASE_HOST_SET_ID
Host Set information:
Created Time: Tue, 18 Jul 2023 10:20:19 MDT
Host Catalog ID: hcplg_N9p1Woaq8l
ID: hsplg_sSCs67KYGD
Name: database
Type: plugin
Updated Time: Tue, 18 Jul 2023 10:20:19 MDT
Version: 3
Scope:
ID: p_1234567890
Name: Generated project scope
Parent Scope ID: o_1234567890
Type: project
Plugin:
ID: pl_1PRTDb78iY
Name: aws
Attributes:
filters: tag:service-type=database
Authorized Actions:
no-op
read
update
delete
Host IDs:
hplg_4JwY7y6il2
hplg_6PlZJpEzgx
hplg_7KjFRrcLxD
hplg_FQzmhOmDqn
If the Host IDs
section is missing, expand the troubleshooting accordion to diagnose what could be wrong.
If the host catalog is misconfigured, hosts will not be discoverable by Boundary.
At this point in the tutorial, hosts are contained within the host catalog but do not appear in one or more host sets. This means that the host set itself is likely misconfigured.
Earlier, you performed a read
on the database host set. Check the Attributes
section, and verify it matches the correctly defined filter:
Attributes:
filters: tag:service-type=database
If the filter is incorrectly defined, update the affected host set to fix it. For example, to correct the database host set's filter:
$ boundary host-sets update plugin \
    -id $DATABASE_HOST_SET_ID \
    -name database \
    -attr filters=tag:service-type=database
After you update the filter, Boundary will automatically refresh the host set.
Note
Depending on the type of configuration issue, you may need to wait approximately 5 to 10 minutes for the existing host catalog or host sets to sync with the provider and refresh their values. If you do not want to wait, you can create a new host catalog and host set from scratch, but these will also take several minutes to sync upon creation.
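If you would rather script the wait than check manually, a small retry loop works. This is a sketch, not part of the tutorial's required steps; the commented boundary invocation assumes a -format json flag and a .item.host_ids response field, and requires jq:

```shell
#!/bin/sh
# Retry a command until it succeeds, or give up after `tries` attempts.
poll() {
  cmd="$1"; tries="${2:-20}"; delay="${3:-30}"
  i=0
  while [ "$i" -lt "$tries" ]; do
    if sh -c "$cmd" >/dev/null 2>&1; then return 0; fi
    i=$((i + 1)); sleep "$delay"
  done
  return 1
}

# Example against Boundary (hypothetical JSON path, requires jq):
# poll 'boundary host-sets read -id "$DATABASE_HOST_SET_ID" -format json \
#   | jq -e ".item.host_ids | length > 0"'
```

With the defaults above, the loop gives up after roughly ten minutes, matching the sync window described in the note.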
Check that the updated filter is working by performing another read on the database host set.
$ boundary host-sets read -id $DATABASE_HOST_SET_ID
If the dev or production host sets are affected by incorrect filters, follow the same procedure to update their filters accordingly.
Check the dev host set
Next, read the dev host set details. Verify that the Host IDs correspond to the correctly tagged hosts from the cloud provider.
$ boundary host-sets read -id $DEV_HOST_SET_ID
Host Set information:
Created Time: Tue, 18 Jul 2023 10:27:29 MDT
Host Catalog ID: hcplg_Ia7R4E39oF
ID: hsplg_yG2pSNlbTM
Name: dev
Type: plugin
Updated Time: Tue, 18 Jul 2023 10:27:29 MDT
Version: 2
Scope:
ID: p_1234567890
Name: Generated project scope
Parent Scope ID: o_1234567890
Type: project
Plugin:
ID: pl_1PRTDb78iY
Name: aws
Attributes:
filters: tag:application=dev
Authorized Actions:
no-op
read
update
delete
Host IDs:
hplg_6PlZJpEzgx
hplg_FQzmhOmDqn
Notice the Host IDs section of the output, which returns the two dev instances configured in AWS.
Check the production host set
Lastly, read the production host set and verify its Host IDs.
$ boundary host-sets read -id $PRODUCTION_HOST_SET_ID
Host Set information:
Created Time: Tue, 18 Jul 2023 10:31:46 MDT
Host Catalog ID: hcplg_Ia7R4E39oF
ID: hsplg_ZmoClk4HiD
Name: production
Type: plugin
Updated Time: Tue, 18 Jul 2023 10:31:46 MDT
Version: 2
Scope:
ID: p_1234567890
Name: Generated project scope
Parent Scope ID: o_1234567890
Type: project
Plugin:
ID: pl_1PRTDb78iY
Name: aws
Attributes:
filters: tag:application=production
Authorized Actions:
no-op
read
update
delete
Host IDs:
hplg_4JwY7y6il2
Notice the Host IDs section of this output. Even though there are two production instances, only one is listed in the host set.
To figure out what could be wrong, compare the members of the production host set to the members of the database host set. Remember, the members of the production and dev host sets are subsets of the database host set.
$ boundary host-sets read -id $DATABASE_HOST_SET_ID
Host Set information:
Created Time: Tue, 18 Jul 2023 10:35:54 MDT
Host Catalog ID: hcplg_Ia7R4E39oF
ID: hsplg_qnQM03I6yT
Name: database
Type: plugin
Updated Time: Tue, 18 Jul 2023 10:35:54 MDT
Version: 3
Scope:
ID: p_1234567890
Name: Generated project scope
Parent Scope ID: o_1234567890
Type: project
Plugin:
ID: pl_1PRTDb78iY
Name: aws
Attributes:
filters: tag:service-type=database
Authorized Actions:
no-op
read
update
delete
Host IDs:
hplg_4JwY7y6il2
hplg_6PlZJpEzgx
hplg_7KjFRrcLxD
hplg_FQzmhOmDqn
By comparing the Host IDs of the database host set to the production host set, notice that host hplg_7KjFRrcLxD is missing from the production host set, although it is contained within the database host set.
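This kind of membership comparison can be automated with standard shell tools. The sketch below hardcodes the host IDs from the sample output above; in practice each list would come from a boundary host-sets read (the jq path in the comment is an assumption):

```shell
#!/bin/sh
# Find hosts present in the database host set but missing from both the
# dev and production host sets. IDs are hardcoded from the sample output;
# in practice, pull each list with something like:
#   boundary host-sets read -id "$PRODUCTION_HOST_SET_ID" -format json \
#     | jq -r ".item.host_ids[]"
database_ids='hplg_4JwY7y6il2
hplg_6PlZJpEzgx
hplg_7KjFRrcLxD
hplg_FQzmhOmDqn'
dev_and_prod_ids='hplg_6PlZJpEzgx
hplg_FQzmhOmDqn
hplg_4JwY7y6il2'

# comm requires sorted input
printf '%s\n' "$database_ids" | sort > /tmp/database.txt
printf '%s\n' "$dev_and_prod_ids" | sort > /tmp/combined.txt

# -23 suppresses lines unique to the second file and common lines,
# leaving only IDs that appear in database but nowhere else
comm -23 /tmp/database.txt /tmp/combined.txt
```

With the sample data, the only line printed is hplg_7KjFRrcLxD, the host investigated in the next step.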
Update the misconfigured host
Check the details for the missing host using the CLI or the AWS Console.
$ boundary hosts read -id hplg_7KjFRrcLxD
Host information:
Created Time: Tue, 18 Jul 2023 10:37:22 MDT
External ID: i-08c588f2d58120192
Host Catalog ID: hcplg_Ia7R4E39oF
ID: hplg_7KjFRrcLxD
Type: plugin
Updated Time: Tue, 18 Jul 2023 10:37:22 MDT
Version: 1
Scope:
ID: p_1234567890
Name: Generated project scope
Parent Scope ID: o_1234567890
Type: project
Plugin:
ID: pl_1PRTDb78iY
Name: aws
Authorized Actions:
no-op
read
Host Set IDs:
hsplg_qnQM03I6yT
IP Addresses:
172.31.71.39
3.239.215.108
DNS Names:
ec2-3-239-215-108.compute-1.amazonaws.com
ip-172-31-71-39.ec2.internal
The External ID field shows the ID of the misconfigured host (i-08c588f2d58120192 in this example). Copy this value.
Recall that host set membership is defined based on the instance tags.
Describe the instance's details, and query for its tag values.
$ aws ec2 describe-instances \
--instance-ids i-08c588f2d58120192 \
--output table \
--query 'Reservations[*].Instances[*].Tags[?contains(Key, `aws`) == `false`]'
Sample output:
$ aws ec2 describe-instances \
--instance-ids i-08c588f2d58120192 \
--output table \
--query 'Reservations[*].Instances[*].Tags[?contains(Key, `aws`) == `false`]'
-------------------------------------------
| DescribeInstances |
+---------------+-------------------------+
| Key | Value |
+---------------+-------------------------+
| service-type | database |
| Name | boundary-4-production |
| application | prod |
+---------------+-------------------------+
Notice that the application tag is misconfigured as prod instead of production. An easy mistake to make!
Remember the filter defined for the production host set:
tag:application=production
The tag's value must equal production exactly for the host to be included in this host set.
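You can cross-check this exact-match behavior from the AWS side. The describe-instances filter below uses standard AWS CLI options; the comparison loop underneath is only an illustration of why a prod tag falls through the filter:

```shell
#!/bin/sh
# Cross-check from AWS: list instance IDs whose application tag is exactly
# "production". An instance tagged "prod" will not appear in this output,
# mirroring the host set filter's behavior.
# aws ec2 describe-instances \
#   --filters "Name=tag:application,Values=production" \
#   --query 'Reservations[*].Instances[*].InstanceId' \
#   --output text

# Illustration of the exact-match comparison itself:
filter_value="production"
for tag in production prod; do
  if [ "$tag" = "$filter_value" ]; then
    echo "$tag: included"
  else
    echo "$tag: excluded"
  fi
done
```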
Update the application tag to production for the misconfigured instance using the aws ec2 create-tags command, which overwrites the existing tag value.
$ aws ec2 create-tags --resources i-08c588f2d58120192 --tags Key=application,Value=production
Re-run the describe-instances command to directly query for the updated tag values.
Sample output:
$ aws ec2 describe-instances \
--instance-ids i-08c588f2d58120192 \
--output table \
--query 'Reservations[*].Instances[*].Tags[?contains(Key, `aws`) == `false`]'
-------------------------------------------
| DescribeInstances |
+---------------+-------------------------+
| Key | Value |
+---------------+-------------------------+
| application | production |
| service-type | database |
| Name | boundary-4-production |
+---------------+-------------------------+
Perform a read on the missing host.
$ boundary hosts read -id hplg_7KjFRrcLxD
Host information:
Created Time: Tue, 18 Jul 2023 10:41:34 MDT
External ID: i-08c588f2d58120192
Host Catalog ID: hcplg_Ia7R4E39oF
ID: hplg_7KjFRrcLxD
Type: plugin
Updated Time: Tue, 18 Jul 2023 10:41:34 MDT
Version: 1
Scope:
ID: p_1234567890
Name: Generated project scope
Parent Scope ID: o_1234567890
Type: project
Plugin:
ID: pl_1PRTDb78iY
Name: aws
Authorized Actions:
no-op
read
Host Set IDs:
hsplg_qnQM03I6yT
IP Addresses:
172.31.71.39
3.239.215.108
DNS Names:
ec2-3-239-215-108.compute-1.amazonaws.com
ip-172-31-71-39.ec2.internal
The External ID field shows the ID of the misconfigured host (i-08c588f2d58120192 in this example). Copy this value.
Recall that host set membership is defined based on the instance tags.
Describe the instance's details, and query for its tag values.
$ aws ec2 describe-instances \
--instance-ids i-08c588f2d58120192 \
--output table \
--query 'Reservations[*].Instances[*].Tags[?contains(Key, `aws`) == `false`]'
Sample output:
$ aws ec2 describe-instances \
--instance-ids i-08c588f2d58120192 \
--output table \
--query 'Reservations[*].Instances[*].Tags[?contains(Key, `aws`) == `false`]'
-------------------------------------------
| DescribeInstances |
+---------------+-------------------------+
| Key | Value |
+---------------+-------------------------+
| service-type | database |
| application | prod |
| Name | boundary-4-production |
+---------------+-------------------------+
Notice that the application tag is misconfigured as prod instead of production. An easy mistake to make!
Remember the filter defined for the production host set:
tag:application=production
The tag's value must equal production exactly for the host to be included in this host set.
Update the application tag to production by fixing the misconfigured tag in the hosts.tf configuration file.
hosts.tf
# Configure the AWS hosts
variable "instances" {
default = [
"boundary-1-dev",
"boundary-2-dev",
"boundary-3-production",
"boundary-4-production"
]
}
variable "vm_tags" {
default = [
{"Name":"boundary-1-dev","service-type":"database", "application":"dev"},
{"Name":"boundary-2-dev","service-type":"database", "application":"dev"},
{"Name":"boundary-3-production","service-type":"database", "application":"production"},
{"Name":"boundary-4-production","service-type":"database", "application":"prod"}
]
}
...
...
...
Lines 13-16 define the tags for each VM. Update the application tag on line 16 to match line 15, such that "application":"production".
variable "vm_tags" {
default = [
{"Name":"boundary-1-dev","service-type":"database", "application":"dev"},
{"Name":"boundary-2-dev","service-type":"database", "application":"dev"},
{"Name":"boundary-3-production","service-type":"database", "application":"production"},
{"Name":"boundary-4-production","service-type":"database", "application":"production"}
]
}
Execute terraform apply and confirm the new configuration by entering yes when prompted.
$ terraform apply
aws_security_group.boundary-ssh: Refreshing state... [id=sg-06aa32c32ea2dc6ba]
aws_iam_user.boundary: Refreshing state... [id=boundary]
aws_iam_user_policy.BoundaryDescribeInstances: Refreshing state... [id=boundary:BoundaryDescribeInstances]
aws_iam_access_key.boundary: Refreshing state... [id=AKIASWVU2XLZOLN2IRD3]
aws_instance.boundary-instance[3]: Refreshing state... [id=i-0636d8f305a675a0f]
aws_instance.boundary-instance[0]: Refreshing state... [id=i-0a101da665471ef6c]
aws_instance.boundary-instance[1]: Refreshing state... [id=i-03174fc1e7685ed89]
aws_instance.boundary-instance[2]: Refreshing state... [id=i-09c28775132890622]
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
~ update in-place
Terraform will perform the following actions:
# aws_instance.boundary-instance[3] will be updated in-place
~ resource "aws_instance" "boundary-instance" {
id = "i-0636d8f305a675a0f"
~ tags = {
~ "application" = "prod" -> "production"
# (2 unchanged elements hidden)
}
~ tags_all = {
~ "application" = "prod" -> "production"
# (2 unchanged elements hidden)
}
# (27 unchanged attributes hidden)
# (5 unchanged blocks hidden)
}
Plan: 0 to add, 1 to change, 0 to destroy.
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
aws_instance.boundary-instance[3]: Modifying... [id=i-0636d8f305a675a0f]
aws_instance.boundary-instance[3]: Modifications complete after 2s [id=i-0636d8f305a675a0f]
Apply complete! Resources: 0 added, 1 changed, 0 destroyed.
Outputs:
boundary_access_key_id = "AKIASWVU2XLZOLN2IRD3"
boundary_secret_access_key = <sensitive>
The output displays the updated tags, but you can also execute aws ec2 describe-instances to directly query for the tag values.
$ aws ec2 describe-instances \
--instance-ids i-08c588f2d58120192 \
--output table \
--query 'Reservations[*].Instances[*].Tags[?contains(Key, `aws`) == `false`]'
-------------------------------------------
| DescribeInstances |
+---------------+-------------------------+
| Key | Value |
+---------------+-------------------------+
| application | production |
| service-type | database |
| Name | boundary-4-production |
+---------------+-------------------------+
Check the misconfigured host's details in the Boundary Admin UI.
Navigate to the Host Catalogs page and click on the AWS Catalog.
Click on the Host Sets tab.
Click on the production host set, then click on the Hosts tab.
Click on the boundary-4-production host to view its details.
The External ID field shows the ID of the misconfigured host (i-0cb29c70786bf3916 in this example). Copy this value.
Recall that host set membership is defined based on the instance tags.
Open the AWS EC2 Console and navigate to the Instances dashboard.
Select the instance whose Instance ID matches the misconfigured host. Then open its Tags tab and check the defined values.
Notice that the application tag is misconfigured as prod instead of production. An easy mistake to make!
Remember the filter defined for the production host set:
tag:application=production
The tag's value must equal production exactly for the host to be included in this host set.
Click Manage tags and update the application tag to production for the misconfigured instance. Click Save when finished.
Boundary will update the production host set automatically the next time it refreshes. This process can take up to ten minutes.
After waiting, navigate back to the production host set and verify that its Hosts tab contains the boundary-4-production host as a member.
Cleanup and teardown
Delete the CloudFormation boundary-dynamic-hosts stack.
$ aws cloudformation delete-stack --stack-name boundary-dynamic-hosts
Wait a minute, then check that the StackStatus has changed to DELETE_COMPLETE.
$ aws cloudformation list-stacks
{
    "StackSummaries": [
        {
            "StackId": "arn:aws:cloudformation:us-east-1:157470686136:stack/boundary-dynamic-hosts/05ea64a0-7bbf-11ec-9531-120baac28e1f",
            "StackName": "boundary-dynamic-hosts",
            "TemplateDescription": "AWS CloudFormation template for Boundary Dynamic Hosts tutorial. Deploying this template will incur costs to your AWS account.",
            "CreationTime": "2022-01-22T20:08:11.060000+00:00",
            "DeletionTime": "2022-01-22T21:22:37.577000+00:00",
            "StackStatus": "DELETE_COMPLETE",
            "DriftInformation": {
                "StackDriftStatus": "NOT_CHECKED"
            }
        }
    ]
}
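Rather than scanning the full list-stacks output, you can query the status field directly with a JMESPath expression (standard AWS CLI --query syntax); the is_deleted helper is a small convenience sketch for scripting the check:

```shell
#!/bin/sh
# Query only this stack's latest status:
# aws cloudformation list-stacks \
#   --query "StackSummaries[?StackName=='boundary-dynamic-hosts'] | [0].StackStatus" \
#   --output text

# Helper to script the check against the returned status string:
is_deleted() { [ "$1" = "DELETE_COMPLETE" ]; }

status="DELETE_COMPLETE"   # substitute the live query result here
if is_deleted "$status"; then
  echo "stack removed"
fi
```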
Delete the boundary-keypair EC2 keypair.
$ aws ec2 delete-key-pair --key-name boundary-keypair
Delete the boundary user access key.
Before you delete the boundary user, you must delete all items attached to the user. For more information on deleting an IAM user, check the User Guide docs.
First list the boundary user access keys.
$ aws iam list-access-keys --user-name boundary
{
    "AccessKeyMetadata": [
        {
            "UserName": "boundary",
            "AccessKeyId": "AKIASWVU2XLZLFLIDMVW",
            "Status": "Active",
            "CreateDate": "2022-01-19T02:26:11+00:00"
        }
    ]
}
Copy the AccessKeyId value (AKIASWVU2XLZLFLIDMVW in this example).
Now delete the access key.
$ aws iam delete-access-key --user-name boundary --access-key-id AKIASWVU2XLZLFLIDMVW
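The list and delete steps can be combined in a short script. The --query and --output options are standard AWS CLI; the sed fallback below is demonstrated against the sample JSON above for environments without jq:

```shell
#!/bin/sh
# Preferred: let the AWS CLI extract the key ID directly.
# key_id=$(aws iam list-access-keys --user-name boundary \
#   --query 'AccessKeyMetadata[0].AccessKeyId' --output text)
# aws iam delete-access-key --user-name boundary --access-key-id "$key_id"

# Fallback sed extraction, shown against the sample output above:
json='{"AccessKeyMetadata":[{"UserName":"boundary","AccessKeyId":"AKIASWVU2XLZLFLIDMVW","Status":"Active"}]}'
key_id=$(printf '%s' "$json" | sed -n 's/.*"AccessKeyId": *"\([^"]*\)".*/\1/p')
echo "$key_id"   # AKIASWVU2XLZLFLIDMVW
```

Note this assumes the boundary user has exactly one access key, as in this tutorial.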
Delete the BoundaryDescribeInstances policy.
$ aws iam delete-user-policy --user-name boundary --policy-name BoundaryDescribeInstances
Delete the boundary IAM user.
$ aws iam delete-user --user-name boundary
Stop Boundary
Destroy the Terraform resources in AWS.
Remove all resources using terraform apply -destroy. Enter yes when prompted to confirm the operation.
Note
Terraform 0.15.2+ uses terraform apply -destroy to clean up resources. If you are using an earlier version of Terraform, you may need to execute terraform destroy.
$ terraform apply -destroy
aws_security_group.boundary-ssh: Refreshing state... [id=sg-06aa32c32ea2dc6ba]
aws_iam_user.boundary: Refreshing state... [id=boundary]
aws_iam_user_policy.BoundaryDescribeInstances: Refreshing state... [id=boundary:BoundaryDescribeInstances]
aws_iam_access_key.boundary: Refreshing state... [id=AKIASWVU2XLZOLN2IRD3]
aws_instance.boundary-instance[3]: Refreshing state... [id=i-0636d8f305a675a0f]
aws_instance.boundary-instance[0]: Refreshing state... [id=i-0a101da665471ef6c]
aws_instance.boundary-instance[2]: Refreshing state... [id=i-09c28775132890622]
aws_instance.boundary-instance[1]: Refreshing state... [id=i-03174fc1e7685ed89]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  - destroy

Terraform will perform the following actions:

  # aws_iam_access_key.boundary will be destroyed
  - resource "aws_iam_access_key" "boundary" {
      - create_date          = "2022-01-28T18:44:18Z" -> null
      - id                   = "AKIASWVU2XLZOLN2IRD3" -> null
      - secret               = (sensitive value)
      - ses_smtp_password_v4 = (sensitive value)
      - status               = "Active" -> null
      - user                 = "boundary" -> null
    }

  # aws_iam_user.boundary will be destroyed
  - resource "aws_iam_user" "boundary" {
      - arn           = "arn:aws:iam::186136574706:user/boundary" -> null
      - force_destroy = false -> null
      - id            = "boundary" -> null
      - name          = "boundary" -> null
      - path          = "/" -> null
      - tags          = {} -> null
      - tags_all      = {} -> null
      - unique_id     = "AIDASWVU2XLZPCRFR4LW7" -> null
    }

  # aws_iam_user_policy.BoundaryDescribeInstances will be destroyed
  - resource "aws_iam_user_policy" "BoundaryDescribeInstances" {
      - id     = "boundary:BoundaryDescribeInstances" -> null
      - name   = "BoundaryDescribeInstances" -> null
      - policy = jsonencode(
            {
              - Statement = [
                  - {
                      - Action   = [
                          - "ec2:DescribeInstances",
                        ]
                      - Effect   = "Allow"
                      - Resource = "*"
                    },
                ]
              - Version   = "2012-10-17"
            }
        ) -> null
      - user   = "boundary" -> null
    }

  # aws_instance.boundary-instance[0] will be destroyed
...
... Truncated output ...
...

Plan: 0 to add, 0 to change, 8 to destroy.

Changes to Outputs:
  - boundary_access_key_id     = "AKIASWVU2XLZOLN2IRD3" -> null
  - boundary_secret_access_key = (sensitive value)

Do you really want to destroy all resources?
  Terraform will destroy all your managed infrastructure, as shown above.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value: yes

aws_iam_user_policy.BoundaryDescribeInstances: Destroying... [id=boundary:BoundaryDescribeInstances]
aws_iam_access_key.boundary: Destroying... [id=AKIASWVU2XLZOLN2IRD3]
aws_instance.boundary-instance[2]: Destroying... [id=i-09c28775132890622]
aws_instance.boundary-instance[1]: Destroying... [id=i-03174fc1e7685ed89]
aws_instance.boundary-instance[0]: Destroying... [id=i-0a101da665471ef6c]
aws_instance.boundary-instance[3]: Destroying... [id=i-0636d8f305a675a0f]
aws_iam_access_key.boundary: Destruction complete after 0s
aws_iam_user_policy.BoundaryDescribeInstances: Destruction complete after 0s
aws_iam_user.boundary: Destroying... [id=boundary]
aws_iam_user.boundary: Destruction complete after 0s
aws_instance.boundary-instance[0]: Still destroying... [id=i-0a101da665471ef6c, 10s elapsed]
aws_instance.boundary-instance[2]: Still destroying... [id=i-09c28775132890622, 10s elapsed]
aws_instance.boundary-instance[1]: Still destroying... [id=i-03174fc1e7685ed89, 10s elapsed]
aws_instance.boundary-instance[3]: Still destroying... [id=i-0636d8f305a675a0f, 10s elapsed]
aws_instance.boundary-instance[0]: Still destroying... [id=i-0a101da665471ef6c, 20s elapsed]
aws_instance.boundary-instance[1]: Still destroying... [id=i-03174fc1e7685ed89, 20s elapsed]
aws_instance.boundary-instance[3]: Still destroying... [id=i-0636d8f305a675a0f, 20s elapsed]
aws_instance.boundary-instance[2]: Still destroying... [id=i-09c28775132890622, 20s elapsed]
aws_instance.boundary-instance[3]: Still destroying... [id=i-0636d8f305a675a0f, 30s elapsed]
aws_instance.boundary-instance[0]: Still destroying... [id=i-0a101da665471ef6c, 30s elapsed]
aws_instance.boundary-instance[2]: Still destroying... [id=i-09c28775132890622, 30s elapsed]
aws_instance.boundary-instance[1]: Still destroying... [id=i-03174fc1e7685ed89, 30s elapsed]
aws_instance.boundary-instance[0]: Destruction complete after 31s
aws_instance.boundary-instance[1]: Destruction complete after 31s
aws_instance.boundary-instance[2]: Destruction complete after 31s
aws_instance.boundary-instance[3]: Still destroying... [id=i-0636d8f305a675a0f, 40s elapsed]
aws_instance.boundary-instance[3]: Still destroying... [id=i-0636d8f305a675a0f, 50s elapsed]
aws_instance.boundary-instance[3]: Still destroying... [id=i-0636d8f305a675a0f, 1m0s elapsed]
aws_instance.boundary-instance[3]: Destruction complete after 1m2s
aws_security_group.boundary-ssh: Destroying... [id=sg-06aa32c32ea2dc6ba]
aws_security_group.boundary-ssh: Destruction complete after 2s

Apply complete! Resources: 0 added, 0 changed, 8 destroyed.
Remove the Terraform state files.
$ rm *.tfstate*
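Before deleting the state files, you can optionally confirm that nothing remains under Terraform's management. This check is an addition to the tutorial's workflow, not a required step:

```shell
# Prints nothing when the destroy removed every managed resource.
terraform state list
```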
Stop Boundary.
Delete the boundary-dynamic-hosts CloudFormation stack.
Navigate to the CloudFormation Dashboard. Select the boundary-dynamic-hosts stack name. Then click Delete and confirm by clicking Delete stack.
When the deletion is complete, the stack's Status will change to DELETE_COMPLETE.
Delete the boundary-keypair EC2 keypair.
Navigate to the EC2 Console Dashboard. Select Key Pairs under Network & Security on the left side of the page.
Check the box next to boundary-keypair, then select the Actions dropdown and click Delete. Confirm the deletion of the keypair by typing "Delete" in the field and clicking Delete again.
Delete the boundary user and associated access key.
Navigate to the IAM Dashboard. Select the Users panel, then check the box next to the boundary user. Click Delete, and confirm by typing the name of the user, boundary, and clicking Delete again. This also deletes the access key associated with the user.
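The keypair and IAM user can also be removed with the AWS CLI. A minimal sketch, assuming the resource names used in this tutorial; substitute the access key ID reported by the first command:

```shell
# Remove the boundary-keypair EC2 key pair.
aws ec2 delete-key-pair --key-name boundary-keypair

# List the boundary user's access keys, then delete each key before the user.
aws iam list-access-keys --user-name boundary
aws iam delete-access-key --user-name boundary --access-key-id <ACCESS_KEY_ID>
aws iam delete-user --user-name boundary
```

Note that `aws iam delete-user` fails if policies are still attached to the user; depending on how BoundaryDescribeInstances was attached in your account, you may need to remove it first.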
Delete the BoundaryDescribeInstances policy.
Click on the Policies panel, and select the BoundaryDescribeInstances policy. Then click the Actions dropdown and select Delete. Confirm by typing the name of the policy, BoundaryDescribeInstances, into the field and clicking Delete.
Stop Boundary.
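The policy can likewise be removed with the AWS CLI. This sketch assumes BoundaryDescribeInstances exists as a customer managed policy; if it was created as an inline user policy (as in the Terraform workflow), use `aws iam delete-user-policy --user-name boundary --policy-name BoundaryDescribeInstances` instead:

```shell
# Look up the ARN of the customer managed policy, then delete it.
POLICY_ARN=$(aws iam list-policies --scope Local \
  --query "Policies[?PolicyName=='BoundaryDescribeInstances'].Arn" --output text)
aws iam delete-policy --policy-arn "$POLICY_ARN"
```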
Next steps
This tutorial demonstrated the steps to set up a dynamic host catalog using the AWS host plugin. You deployed and tagged hosts within AWS, configured a plugin-type host catalog within Boundary, and created three host sets that filtered for the hosts based on their tag values.
To learn more about integrating Boundary with cloud providers like AWS and Azure, check out the OIDC Authentication tutorial.