Terraform
Build AWS infrastructure with CDK for Terraform
The Cloud Development Kit for Terraform (CDKTF) allows you to define your infrastructure in a familiar programming language such as TypeScript, Python, Go, C#, or Java. CDKTF generates Terraform configuration in JSON, then automatically applies that configuration via Terraform to provision your infrastructure.
In this tutorial, you will provision an EC2 instance on AWS using your preferred programming language.
If you do not have CDKTF installed on your system, follow the steps in the install CDKTF tutorial to install it before you continue with this tutorial.
Prerequisites
To follow this tutorial, you need the following installed locally:
- Terraform v1.2+
- An HCP Terraform account, with CLI authentication configured
- CDK for Terraform v0.15+
- An AWS account
- AWS credentials configured for use with Terraform
Terraform and CDKTF will use credentials set in your environment or through other means as described in the Terraform documentation.
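For example, the AWS provider reads the standard AWS environment variables, so one common option is to export your credentials in your shell before running CDKTF commands (the values below are placeholders; a shared credentials file or named AWS profile works equally well):

```shell
# Set AWS credentials via standard environment variables (placeholder values).
export AWS_ACCESS_KEY_ID="<YOUR_ACCESS_KEY_ID>"
export AWS_SECRET_ACCESS_KEY="<YOUR_SECRET_ACCESS_KEY>"
```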
You will also need to install a recent version of the programming language you will use for this tutorial. We have verified this tutorial works with the following language versions.
- Java: OpenJDK v17 and Apache Maven v3.8.4
Initialize a new CDK for Terraform application
Start by creating a directory named learn-cdktf for your project.
$ mkdir learn-cdktf
Then navigate into it.
$ cd learn-cdktf
Inside the directory, run cdktf init, specifying the template for your
preferred language and Terraform's AWS provider. Select your HCP Terraform
Organization when prompted, and use the default name learn-cdktf for your
HCP Terraform Workspace. CDKTF will also prompt you for other information
about your project, such as the name and description. Accept the defaults for
these options.
Tip
 If you would prefer to keep your state locally, use the --local
flag with cdktf init.
$ cdktf init --template="java" --providers="aws@~>4.0"
? Project Name learn-cdktf
? Project Description A simple getting started project for cdktf.
Detected Terraform Cloud token.
We will now set up Terraform Cloud for your project.
? Terraform Cloud Organization Name hashicorp-learn
We are going to create a new Terraform Cloud Workspace for your project.
? Terraform Cloud Workspace Name learn-cdktf
Setting up remote state backend and workspace in Terraform Cloud.
? Do you want to send crash reports to the CDKTF team? See
https://www.terraform.io/cdktf/create-and-deploy/configuration-file#enable-crash
-reporting-for-the-cli for more information Yes
Generating Terraform Cloud configuration for '<YOUR_ORG>' organization and 'learn-cdktf' workspace.....
[INFO] Scanning for projects...
[INFO] 
[INFO] -----------------< com.mycompany.app:learn-cdktf-java >-----------------
[INFO] Building learn-cdktf-java 0.1
[INFO] --------------------------------[ jar ]---------------------------------
Downloading from central: https://repo.maven.apache.org/maven2/software/constructs/constructs/maven-metadata.xml
## ...
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  3.450 s
[INFO] Finished at: 2022-01-21T09:55:54-06:00
[INFO] ------------------------------------------------------------------------
========================================================================================================
  Your cdktf Java project is ready!
  cat help                Prints this message
  Compile:
    mvn compile           Compiles your Java packages
  Synthesize:
    cdktf synth [stack]   Synthesize Terraform resources to cdktf.out/
  Diff:
    cdktf diff [stack]    Perform a diff (terraform plan) for the given stack
  Deploy:
    cdktf deploy [stack]  Deploy the given stack
  Destroy:
    cdktf destroy [stack] Destroy the given stack
  Learn more about using modules and providers https://cdk.tf/modules-and-providers
Use Providers:
  You can add prebuilt providers (if available) or locally generated ones using the add command:
  cdktf provider add "aws@~>3.0" null kreuzwerker/docker
  All prebuilt providers are available on Maven Central: https://mvnrepository.com/artifact/com.hashicorp
  You can also add these providers directly to your pom.xml file.
  You can also build any module or provider locally. Learn more: https://cdk.tf/modules-and-providers
========================================================================================================
Checking whether pre-built provider exists for the following constraints:
  provider: aws
  version : ~>4.0
  language: java
  cdktf   : 0.15.0
Found pre-built provider.
Package installed.
For many common Terraform providers, CDKTF offers packages with prebuilt
classes in each supported programming language that you can use in your CDKTF
projects. The cdktf init command you just ran found a pre-built AWS
provider that you will use for this project. For other Terraform providers and
modules, CDKTF automatically generates the appropriate classes for your chosen
language.
Define your CDK for Terraform application
Open the src/main/java/com/mycompany/app/MainStack.java file to view your
application code. The template creates a scaffold with no functionality.
Replace the contents of MainStack.java with the following code for a new Java
application, which uses the CDK to provision an AWS EC2 instance in us-west-1.
src/main/java/com/mycompany/app/MainStack.java
package com.mycompany.app;
import software.constructs.Construct;
import com.hashicorp.cdktf.TerraformStack;
import com.hashicorp.cdktf.TerraformOutput;
import com.hashicorp.cdktf.providers.aws.provider.AwsProvider;
import com.hashicorp.cdktf.providers.aws.instance.Instance;
public class MainStack extends TerraformStack
{
    public MainStack(final Construct scope, final String id) {
        super(scope, id);
        AwsProvider.Builder.create(this, "AWS")
          .region("us-west-1")
          .build();
        Instance instance = Instance.Builder.create(this, "compute")
          .ami("ami-01456a894f71116f2")
          .instanceType("t2.micro")
          .build();
        TerraformOutput.Builder.create(this, "public_ip")
          .value(instance.getPublicIp())
          .build();
    }
}
Open the Main.java file to view your application's main() function.
Replace the contents of Main.java with the following code for a new Java
application, which uses your stack to provision an AWS EC2 instance, and stores
its state in HCP Terraform.
src/main/java/com/mycompany/app/Main.java
package com.mycompany.app;
import com.hashicorp.cdktf.App;
import com.hashicorp.cdktf.NamedRemoteWorkspace;
import com.hashicorp.cdktf.RemoteBackend;
import com.hashicorp.cdktf.RemoteBackendProps;
import com.hashicorp.cdktf.TerraformStack;
public class Main
{
    public static void main(String[] args) {
        final App app = new App();
        TerraformStack stack = new MainStack(app, "aws_instance");
        new RemoteBackend(stack, RemoteBackendProps.builder()
            .hostname("app.terraform.io")
            .organization("<YOUR_ORG>")
            .workspaces(new NamedRemoteWorkspace("learn-cdktf"))
            .build());
        app.synth();
    }
}
Replace <YOUR_ORG> with the HCP Terraform organization name you chose when
you ran cdktf init earlier. If you chose a different workspace name,
replace learn-cdktf with that name.
Tip
 If you would prefer to store your project's state locally, remove or
comment out new RemoteBackend( [...] ); and the corresponding import
statements from the top of the file.
Examine the code
Most of the code is similar to concepts found in a traditional Terraform configuration written in HCL, but there are a few notable differences. Review the code for the programming language you have selected.
The example code starts by declaring the com.mycompany.app package and
importing the Construct class.
package com.mycompany.app;
import software.constructs.Construct;
You must explicitly import any classes your Java code uses. For example, you
will use TerraformOutput to create a Terraform output value for your EC2
instance's public IP address.
import com.hashicorp.cdktf.TerraformStack;
import com.hashicorp.cdktf.TerraformOutput;
The example code also imports the AWS provider and other resources from the
package you installed earlier. In this case, you need the AwsProvider and
Instance classes for your compute resource.
import com.hashicorp.cdktf.providers.aws.provider.AwsProvider;
import com.hashicorp.cdktf.providers.aws.instance.Instance;
The MainStack class defines a new stack, which contains code to define your
provider and all of your resources.
public class MainStack extends TerraformStack
{
    public MainStack(final Construct scope, final String id) {
        super(scope, id);
The code configures the AWS provider with the Builder pattern to use the
us-west-1 region.
        AwsProvider.Builder.create(this, "AWS")
          .region("us-west-1")
          .build();
The code configures the AwsProvider by calling methods that map to Terraform
arguments as listed in the AWS provider
documentation.
The Instance.Builder class creates a t2.micro EC2 instance with an AWS AMI.
        Instance instance = Instance.Builder.create(this, "compute")
          .ami("ami-01456a894f71116f2")
          .instanceType("t2.micro")
          .build();
The instance is also configured with method calls, using camel case for Terraform arguments as listed in the AWS provider documentation.
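This naming convention is mechanical: each snake_case Terraform argument becomes a camelCase builder method (for example, instance_type becomes instanceType). The following throwaway sketch illustrates the mapping; it is not part of the CDKTF API:

```java
// Illustrative sketch of how Terraform's snake_case arguments map to the
// camelCase builder methods CDKTF generates (e.g. instance_type -> instanceType).
class NamingDemo {
    static String toCamelCase(String terraformArg) {
        StringBuilder out = new StringBuilder();
        boolean upperNext = false;
        for (char c : terraformArg.toCharArray()) {
            if (c == '_') {
                upperNext = true; // drop the underscore, capitalize what follows
                continue;
            }
            out.append(upperNext ? Character.toUpperCase(c) : c);
            upperNext = false;
        }
        return out.toString();
    }
}
```

Knowing this rule lets you translate any argument from the AWS provider documentation into its builder method without consulting the generated classes.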
The code stores the instance as a variable so that the TerraformOutput below
can reference the instance's getPublicIp() method.
        TerraformOutput.Builder.create(this, "public_ip")
          .value(instance.getPublicIp())
          .build();
When you write CDKTF code with an IDE, use it to view the properties and
functions of the classes, variables, and packages in your code. This example
uses the getPublicIp() method from the instance variable.
Finally, in Main.java your application uses the stack you have defined,
configures a remote backend to store your project's state in HCP Terraform,
and calls app.synth() to generate Terraform configuration.
    public static void main(String[] args) {
        final App app = new App();
        TerraformStack stack = new MainStack(app, "aws_instance");
        new RemoteBackend(stack, RemoteBackendProps.builder()
            .hostname("app.terraform.io")
            .organization("<YOUR_ORG>")
            .workspaces(new NamedRemoteWorkspace("learn-cdktf"))
            .build());
        app.synth();
    }
}
Provision infrastructure
Now that you have initialized the project with the AWS provider and written code
to provision an instance, it's time to deploy it by running cdktf deploy.
When CDKTF asks you to confirm the deploy, respond with a yes.
$ cdktf deploy
Deploying Stack: aws_instance
Resources
 ✔ AWS_INSTANCE         compute             aws_instance.compute
Summary: 1 created, 0 updated, 0 destroyed.
Output: public_ip = 50.18.17.102
The cdktf deploy command runs terraform apply in the background.
After the instance is created, visit the AWS EC2 Dashboard.
Notice that the cdktf deploy command printed out the public_ip output value,
which matches the instance's public IPv4 address.

Change infrastructure by adding the Name tag
Add a tag to the EC2 instance.
Import the java.util.Map package near the beginning of MainStack.java.
MainStack.java
package com.mycompany.app;
import java.util.Map;
import software.constructs.Construct;
Then, use Map.of() to set your EC2 instance's tags.
MainStack.java
        Instance instance = Instance.Builder.create(this, "compute")
          .ami("ami-01456a894f71116f2")
          .instanceType("t2.micro")
          .tags(Map.of("Name", "CDKTF-Demo"))
          .build();
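If you apply the same tags to several resources, one lightweight pattern (plain Java, independent of the CDKTF API; the Tags helper name here is hypothetical) is a small helper that merges project-wide defaults with a per-resource Name:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical helper: merges project-wide default tags with a per-resource Name tag.
class Tags {
    static final Map<String, String> DEFAULTS = Map.of("Project", "learn-cdktf");

    static Map<String, String> with(String name) {
        Map<String, String> tags = new HashMap<>(DEFAULTS);
        tags.put("Name", name);
        return tags;
    }
}
```

You could then pass .tags(Tags.with("CDKTF-Demo")) to each builder instead of repeating the map literal.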
Deploy your updated application. Confirm your deploy by choosing Approve.
$ cdktf deploy
aws_instance  Initializing the backend...
aws_instance
              Successfully configured the backend "remote"! Terraform will automatically
              use this backend unless the backend configuration changes.
aws_instance  Initializing provider plugins...
aws_instance  - Finding hashicorp/aws versions matching "4.23.0"...
aws_instance  - Using hashicorp/aws v4.23.0 from the shared cache directory
##...
              Plan: 1 to add, 0 to change, 0 to destroy.
              Changes to Outputs:
              + public_ip = (known after apply)
aws_instance
              ─────────────────────────────────────────────────────────────────────────────
              Saved the plan to: plan
              To perform exactly these actions, run the following command to apply:
              terraform apply "plan"
Please review the diff output above for aws_instance
❯ Approve  Applies the changes outlined in the plan.
  Dismiss
  Stop
##...
              Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
              Outputs:
aws_instance  public_ip = "54.219.167.18"
  aws_instance
  public_ip = 54.219.167.18
Clean up your infrastructure
Destroy the application by running cdktf destroy. Confirm your destroy by
choosing Approve.
$ cdktf destroy
aws_instance  Initializing the backend...
aws_instance  Initializing provider plugins...
              - Reusing previous version of hashicorp/aws from the dependency lock file
aws_instance  - Using previously-installed hashicorp/aws v4.23.0
##...
              Plan: 0 to add, 0 to change, 1 to destroy.
              Changes to Outputs:
              - public_ip = "54.219.167.18" -> null
aws_instance
              ─────────────────────────────────────────────────────────────────────────────
              Saved the plan to: plan
              To perform exactly these actions, run the following command to apply:
              terraform apply "plan"
Please review the diff output above for aws_instance
❯ Approve  Applies the changes outlined in the plan.
  Dismiss
  Stop
##...
              Plan: 0 to add, 0 to change, 1 to destroy.
              Changes to Outputs:
              - public_ip = "54.219.167.18" -> null
aws_instance  aws_instance.compute (compute): Destroying... [id=i-0fc8d2e3931b28db3]
aws_instance  aws_instance.compute (compute): Still destroying... [id=i-0fc8d2e3931b28db3, 10s elapsed]
aws_instance  aws_instance.compute (compute): Still destroying... [id=i-0fc8d2e3931b28db3, 20s elapsed]
aws_instance  aws_instance.compute (compute): Still destroying... [id=i-0fc8d2e3931b28db3, 30s elapsed]
aws_instance  aws_instance.compute (compute): Destruction complete after 30s
aws_instance
              Destroy complete! Resources: 1 destroyed.
Destroying your CDKTF application will not remove the HCP Terraform workspace that stores your project's state. Log into the HCP Terraform application and delete the workspace.
Next steps
Now you have deployed, modified, and deleted an AWS EC2 instance using CDKTF!
CDKTF is capable of much more. For example, you can:
- Use the cdktf synth command to generate JSON that the Terraform executable can use to provision infrastructure with terraform apply and other Terraform commands.
- Use Terraform providers and modules.
- Use programming language features (like class inheritance) or data from other sources to augment your Terraform configuration.
- Use CDKTF with HCP Terraform for persistent storage of your state file and for team collaboration.
For other examples, refer to the CDKTF documentation repository. In particular, check out the:
- CDKTF Architecture documentation for an overview of CDKTF's architecture.
- Community documentation to learn how to engage with the CDKTF developer community.
- Example code in several programming languages.