
Creating a Lambda in Golang with Cloud Development Kit for Terraform - CDKTF as IaC - Part 1

Motivations

TypeScript, together with SAM or the Serverless Framework, is rather mainstream these days for creating AWS Lambda functions, leveraging CloudFormation for infrastructure as code.

In this post we will make use of CDKTF to create both our infrastructure and our Lambda function, using only Go.

We will be creating a function which interacts with a DynamoDB table to store and retrieve data.

CDKTF stands for Cloud Development Kit for Terraform.

Prerequisites

  • Terraform CLI (1.1+)
  • Go v1.16+
  • Node.js v16
  • npm

Setting up

Given that CDKTF is a JavaScript-based CLI, make sure you have npm and Node.js installed.

First, if you haven't done so before, we need to install CDKTF:

npm install --global cdktf-cli@latest

To init our new project, we will be using cdktf to create a bootstrap template for a Go-based project.

So, first let's start by creating a new empty directory where we will be initialising this project.

mkdir cdktf-aws-lambda
cd cdktf-aws-lambda
git init

And now, let’s initialize our project.

Since we are going to create the whole project in Go, we will be using the Go template.

CDKTF, at the time of writing, supports:

  • Typescript
  • Python
  • C#
  • Java
  • Go

Given that we are not setting up a Terraform remote state backend for this project, we will pass the --local flag to the CLI to keep our Terraform state locally.

cdktf init --local --template=go --project-name=cdktf-aws-lambda --project-description="Go AWS Lambda with Dynamodb" --enable-crash-reporting

Given that we will be using AWS as our provider, we also need to add it to our project.

Inside our cdktf directory:

cdktf provider add "aws@~>4.0"

After adding our provider, let's check our current folder structure:

.
├── cdktf.json
├── go.mod
├── go.sum
├── help
├── main.go
└── main_test.go

So, if we inspect our main.go file, we should have the following contents:

package main

import (
	"github.com/aws/constructs-go/constructs/v10"
	"github.com/hashicorp/terraform-cdk-go/cdktf"
)

func NewMyStack(scope constructs.Construct, id string) cdktf.TerraformStack {
	stack := cdktf.NewTerraformStack(scope, &id)

	// The code that defines your stack goes here

	return stack
}

func main() {
	app := cdktf.NewApp(nil)

	NewMyStack(app, "cdktf-aws-lambda")

	app.Synth()
}

So this is an empty Stack, as no resources are being created.

Creating S3 and Lambda

Given that we are creating a lambda function, a few AWS resources are required.

The generated main.go is still very barebones, so first we need to import some Go modules so that we can interact with AWS.

Let's add them to the import block:

import (
	"github.com/aws/constructs-go/constructs/v10"
	"github.com/hashicorp/terraform-cdk-go/cdktf"
	"github.com/aws/jsii-runtime-go"

	awsprovider "github.com/cdktf/cdktf-provider-aws-go/aws/v10/provider"
)
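For reference, the AWS provider itself is registered at the top of the stack function. This is the same provider block you will find in the full recap at the end of the post; the region here is just an example, adjust it to your own account:

awsprovider.NewAwsProvider(stack, jsii.String("AWS"), &awsprovider.AwsProviderConfig{
	// Example region; change it to whatever suits your setup
	Region: jsii.String("us-east-1"),
})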

OK, so our project will need the following resources, and we will convert them all into Go code with CDKTF:

  • S3 (to store our lambda archive)
  • API GW (to enable access to our lambda function)
  • DynamoDB (so that the function can store and retrieve data)
  • IAM permissions (because we need to be able to access the Lambda from API GW and DynamoDB from the function)

We will also need to be able to upload our changes to our S3 bucket, so a few local Terraform resources will need to be created as well.

So let’s start by creating our S3 bucket.

First, for the sake of contextualising our code, let's rename the NewMyStack function to AWSLambda.

And add our first resource:

  • S3 Bucket

Let's add it to our import list:

"github.com/cdktf/cdktf-provider-aws-go/aws/v10/s3bucket"

And now let's modify our stack creation function to include the S3 bucket, just below our AWS provider:

bucket := s3bucket.NewS3Bucket(stack, jsii.String("bucket"), &s3bucket.S3BucketConfig{
	Bucket: jsii.String("ilhicas-cdtkf-aws-lambda"),
})

Note that S3 bucket names are globally unique, so replace the bucket name above with one of your own.

So, we mentioned we needed some local Terraform resources. One of those is a Terraform archive, used to zip our function so that it can be uploaded to our S3 bucket, which looks like this in classic Terraform (source):

data "archive_file" "lambda_my_function" {
  type             = "zip"
  source_file      = "${path.module}/../lambda/my-function/index.js"
  output_file_mode = "0666"
  output_path      = "${path.module}/files/lambda-my-function.js.zip"
}

So, in CDKTF, this is how we would convert it.

cwd, _ := os.Getwd()

lambdaFile := cdktf.NewTerraformAsset(stack, jsii.String("lambda-file"), &cdktf.TerraformAssetConfig{
	Path: jsii.String(path.Join(cwd, "lambda")),
	Type: cdktf.AssetType_ARCHIVE,
})

This requires a few standard library imports:

"os"
"path"

So, now that we have an archive to send to S3, we need to declare an S3 Object resource.

This is what it looks like as an example in Terraform (source):

resource "aws_s3_bucket_object" "example" {
  key        = "someobject"
  bucket     = aws_s3_bucket.examplebucket.id
  source     = "index.html"
}

And this is what we will add to our function:

lambdaS3Object := s3bucketobject.NewS3BucketObject(stack, jsii.String("lambda-archive"), &s3bucketobject.S3BucketObjectConfig{
	Bucket: bucket.Bucket(),
	Key:    lambdaFile.FileName(),
	Source: lambdaFile.Path(),
})

OK, to validate what we have been doing so far, let's do a quick deploy so that we understand the tooling we are using.

Let's create a hello world function named main.go inside a lambda folder:

.
├── cdktf.json
├── go.mod
├── go.sum
├── help
├── lambda
│   └── main.go
├── main.go

With the following contents

package main

import (
	"context"
	"fmt"

	"github.com/aws/aws-lambda-go/lambda"
)

type MyEvent struct {
	Name string `json:"name"`
}

func HandleRequest(ctx context.Context, name MyEvent) (string, error) {
	return fmt.Sprintf("Hello %s!", name.Name), nil
}

func main() {
	lambda.Start(HandleRequest)
}
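Before running any deploy, we can sanity-check the handler locally with a tiny unit test. This is purely optional and not part of the original setup; the file name lambda/main_test.go is just an assumption for illustration:

package main

import (
	"context"
	"testing"
)

// TestHandleRequest calls the handler directly, without going through the Lambda runtime.
func TestHandleRequest(t *testing.T) {
	got, err := HandleRequest(context.Background(), MyEvent{Name: "CDKTF"})
	if err != nil {
		t.Fatalf("unexpected error: %v", err)
	}
	if want := "Hello CDKTF!"; got != want {
		t.Errorf("got %q, want %q", got, want)
	}
}

Running go test ./lambda from the project root should report ok if the handler behaves as expected.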

And let's run:

cdktf deploy

If everything is OK, we should have an output similar to classic Terraform, waiting for our input to approve the changes.

(Output omitted for the sake of brevity)

OK, if we check the AWS console or use the CLI, we should see a new bucket with a new object named archive.zip in it.

If we also check our current folder structure, we see that Terraform left us a few artifacts:

.
├── cdktf.json
├── cdktf.out
│   ├── manifest.json
│   └── stacks
│       └── cdktf-aws-lambda
│           ├── assets
│           │   └── lambda-file
│           │       └── 57EF83E10A9A45B51B944CBD25FDC1F1
│           │           └── archive.zip
│           ├── cdk.tf.json
│           └── plan
├── go.mod
├── go.sum
├── help
├── lambda
│   └── main.go
├── main.go
└── terraform.cdktf-aws-lambda.tfstate

We have our Terraform state in the root folder, and a new folder named cdktf.out containing our assets, organised by resource name (lambda-file). That asset is then referenced by our S3 object resource, alongside the plan and the cdk.tf.json file it generates.

So far we have the S3 bucket and the S3 object that represents our Lambda function, but we still don't have a Lambda function.

So let's add it to our code. We don't need to destroy any of our assets; as in Terraform, we have a state, and we will increment it with new resources.

First of all, we need to create a new role for our Lambda function. Let’s keep it simple for now so that we can run our lambda, and we will add services as we need them.

Let's start by adding new imports for both the Lambda function and the role:

"github.com/cdktf/cdktf-provider-aws-go/aws/v10/iamrole"
"github.com/cdktf/cdktf-provider-aws-go/aws/v10/lambdafunction"

To mimic the behaviour defined in the Terraform lambda function example, we will pass an assume_role_policy to our role.

So let's first create a string literal for our assume role policy:

lambdaRolePolicy := `
	{
		"Version": "2012-10-17",
		"Statement": [
		  {
			"Action": "sts:AssumeRole",
			"Principal": {
			  "Service": "lambda.amazonaws.com"
			},
			"Effect": "Allow",
			"Sid": ""
		  }
		]
	}`

And now let's create the IAM role:

lambdaRole := iamrole.NewIamRole(stack, jsii.String("cdktf-lambda-role"), &iamrole.IamRoleConfig{
	AssumeRolePolicy: &lambdaRolePolicy,
})
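As a side note, if you want the function to be able to write logs to CloudWatch, you can attach the AWS managed AWSLambdaBasicExecutionRole policy to this role. This is a minimal sketch, assuming the prebuilt provider exposes an iamrolepolicyattachment package following the same pattern as the other resources used in this post:

// Assumed import, mirroring the other provider packages:
// "github.com/cdktf/cdktf-provider-aws-go/aws/v10/iamrolepolicyattachment"
iamrolepolicyattachment.NewIamRolePolicyAttachment(stack, jsii.String("lambda-basic-execution"), &iamrolepolicyattachment.IamRolePolicyAttachmentConfig{
	Role:      lambdaRole.Name(),
	PolicyArn: jsii.String("arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"),
})

It is not required to get the hello world function running, so the recap below sticks to the bare role.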

Given that we need to put the built Go binary into the archive, and we are only concerned with the infrastructure deployment here, we will not be building our binary within CDKTF.

In order to build our asset, we must run the following command:

GOOS=linux GOARCH=amd64 CGO_ENABLED=0 go build -o lambda/main lambda/main.go

So, now that we have the bucket, the bucket object, and the role for our Lambda function, and we have already built our Lambda binary, let's create the Lambda function resource by adding:

functionName := "cdktf-aws-go-lambda"
runtime := "go1.x"
handler := "main"
path := cdktf.Token_AsString(cdktf.Fn_Abspath(lambdaFile.Path()), &cdktf.EncodingOptions{})
hash := cdktf.Fn_Filebase64sha256(path)
lambda := lambdafunction.NewLambdaFunction(stack, jsii.String(functionName), &lambdafunction.LambdaFunctionConfig{
	FunctionName:   &functionName,
	S3Bucket:       bucket.Bucket(),
	S3Key:          lambdaS3Object.Key(),
	Role:           lambdaRole.Arn(),
	Runtime:        &runtime,
	Handler:        &handler,
	SourceCodeHash: hash,
})

Here, handler matches the entrypoint that will handle requests. In the case of our hello world sample Go AWS Lambda, this is the binary named "main" produced by the build command we ran earlier.

However, whenever we build a new version of our Lambda with the go build command, the hash needs to be recomputed so that the Lambda function is updated accordingly. In other words, our state must change whenever the archive that contains our function in S3 changes.

To achieve that, we added the following pre-computations using Tokens, so that they are resolved later during synth.

The following code is a bit convoluted:

// Obtain the future path that will hold the generated asset zip, using Token_AsString
// to reference it later, similar to $resource.id
path := cdktf.Token_AsString(cdktf.Fn_Abspath(lambdaFile.Path()), &cdktf.EncodingOptions{})
// Calculate the hash as we would do in Terraform
hash := cdktf.Fn_Filebase64sha256(path)

Recap

So our current main.go file for CDKTF in the root folder should now look something like this:

package main

import (
	"os"
	"path"

	"github.com/aws/constructs-go/constructs/v10"
	"github.com/aws/jsii-runtime-go"
	"github.com/cdktf/cdktf-provider-aws-go/aws/v10/iamrole"
	"github.com/cdktf/cdktf-provider-aws-go/aws/v10/lambdafunction"
	awsprovider "github.com/cdktf/cdktf-provider-aws-go/aws/v10/provider"
	"github.com/cdktf/cdktf-provider-aws-go/aws/v10/s3bucket"
	"github.com/cdktf/cdktf-provider-aws-go/aws/v10/s3bucketobject"
	"github.com/hashicorp/terraform-cdk-go/cdktf"
)

func AWSLambda(scope constructs.Construct, id string) cdktf.TerraformStack {
	stack := cdktf.NewTerraformStack(scope, &id)

	awsprovider.NewAwsProvider(stack, jsii.String("AWS"), &awsprovider.AwsProviderConfig{
		Region: jsii.String("us-east-1"),
	})

	bucket := s3bucket.NewS3Bucket(stack, jsii.String("bucket"), &s3bucket.S3BucketConfig{
		Bucket: jsii.String("ilhicas-cdtkf-aws-lambda"),
	})
	cwd, _ := os.Getwd()

	lambdaFile := cdktf.NewTerraformAsset(stack, jsii.String("lambda-file"), &cdktf.TerraformAssetConfig{
		Path: jsii.String(path.Join(cwd, "lambda")),
		Type: cdktf.AssetType_ARCHIVE,
	})

	lambdaS3Object := s3bucketobject.NewS3BucketObject(stack, jsii.String("lambda-archive"), &s3bucketobject.S3BucketObjectConfig{
		Bucket: bucket.Bucket(),
		Key:    lambdaFile.FileName(),
		Source: lambdaFile.Path(),
	})

	lambdaRolePolicy := `
	{
		"Version": "2012-10-17",
		"Statement": [
		  {
			"Action": "sts:AssumeRole",
			"Principal": {
			  "Service": "lambda.amazonaws.com"
			},
			"Effect": "Allow",
			"Sid": ""
		  }
		]
	}`

	lambdaRole := iamrole.NewIamRole(stack, jsii.String("cdktf-lambda-role"), &iamrole.IamRoleConfig{
		AssumeRolePolicy: &lambdaRolePolicy,
	})
	functionName := "cdktf-aws-go-lambda"
	runtime := "go1.x"
	handler := "main"
	path := cdktf.Token_AsString(cdktf.Fn_Abspath(lambdaFile.Path()), &cdktf.EncodingOptions{})
	hash := cdktf.Fn_Filebase64sha256(path)
	lambda := lambdafunction.NewLambdaFunction(stack, jsii.String(functionName), &lambdafunction.LambdaFunctionConfig{
		FunctionName:   &functionName,
		S3Bucket:       bucket.Bucket(),
		S3Key:          lambdaS3Object.Key(),
		Role:           lambdaRole.Arn(),
		Runtime:        &runtime,
		Handler:        &handler,
		SourceCodeHash: hash,
	})
	lambda.Arn() // reference the resource so the compiler does not flag lambda as unused
	return stack
}

func main() {
	app := cdktf.NewApp(nil)

	AWSLambda(app, "cdktf-aws-lambda")

	app.Synth()
}
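One small improvement worth considering: the bare lambda.Arn() call above only exists so that the Go compiler does not complain about an unused variable. A cleaner option is to expose the ARN as a Terraform output instead; a minimal sketch:

// Replaces the bare lambda.Arn() call inside AWSLambda:
// expose the function ARN as a Terraform output after deploy
cdktf.NewTerraformOutput(stack, jsii.String("lambda-arn"), &cdktf.TerraformOutputConfig{
	Value: lambda.Arn(),
})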

This is by now a long post. You can now deploy a Lambda, make changes locally, upload them to S3, and have them synced to your Lambda function.

This is what we have done so far:

  • Created a way to archive our Golang Lambda Function
  • Created our S3 Bucket
  • Created a mechanism to upload our binary to S3
  • Created a Role for our Lambda to assume
  • Created a Lambda function
  • Created a way to achieve redeploy whenever our binary changes

Next Steps

  • Create an API Gateway
  • Create a DynamoDB table

So this concludes Part 1 of this post.

Check out the next blog post to create more resources and interact with them from our Lambda function, everything written in Go.

Github Repo

As always, you can find the associated repository with the code used for this post.

Given this is a two-part post, you can find the code for this part under the following branch:

GitHub - Ilhicas/cdktf-aws-lambda: A cdktf example repository companion to blog Ilhicas.com

This post is licensed under CC BY 4.0 by the author.