This Pulumi project manages a simple serverless REST API that is tested with Kitchen-Pulumi. This project serves as a good tutorial on Kitchen-Pulumi's feature set.
If you don't have the Pulumi CLI installed, please install it before continuing by following the instructions in the Pulumi documentation. Additionally, ensure you have active AWS credentials, as this tutorial will create live resources in your AWS account.
To get started, clone this repository and navigate to this directory:

```shell
$ git clone https://github.com/jacoblearned/kitchen-pulumi
$ cd kitchen-pulumi/examples/aws/serverless-rest-api-lambda
```
Create a `Gemfile` and add `kitchen-pulumi` to your dependencies. If you don't have Bundler installed, go ahead and install that as well:

```shell
$ gem install bundler
$ touch Gemfile
```

```ruby
# Gemfile
gem 'kitchen-pulumi', require: false, group: :test
```
Install your dependencies with Bundler:

```shell
$ bundle install
```
Ensure your setup looks good:

```shell
$ bundle exec kitchen list
Instance                       Driver  Provisioner  Verifier  Transport  Last Action    Last Error
serverless-rest-api-dev-stack  Pulumi  Pulumi       Busser    Ssh        <Not Created>  <None>
```
In our project directory, we have `Pulumi.yaml`, which defines a Node.js Pulumi project named `serverless-rest-api-lambda`, as well as `Pulumi.dev.yaml`, which defines two configuration values for our `dev` stack:

- `aws:region` - our desired AWS region, `us-east-1`
- `serverless-rest-api-lambda:api_response_text` - the response string that our API will return. For now it will be "default".
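Pulumi stack config files are a simple `config` map keyed by `<namespace>:<key>`; assuming the standard format, `Pulumi.dev.yaml` should look roughly like this:

```yaml
# Pulumi.dev.yaml (reconstructed for reference)
config:
  aws:region: us-east-1
  serverless-rest-api-lambda:api_response_text: default
```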
Since we're using Node.js, let's download the `@pulumi/pulumi` and `@pulumi/awsx` Node packages that our project depends on as listed in our `package.json`:

```shell
$ npm install
```
Our infra code is contained in `index.js` and sets up our API with two endpoints: one at `/` that serves the static content of the `www` directory, and a `/response` endpoint that will return the value of the `api_response_text` we set in our stack config:
```javascript
// Import the @pulumi/pulumi and @pulumi/awsx packages
const pulumi = require("@pulumi/pulumi");
const awsx = require("@pulumi/awsx");

const config = new pulumi.Config();
const responseText = config.require("api_response_text");

// Create a public HTTP endpoint (using AWS API Gateway)
const endpoint = new awsx.apigateway.API("hello", {
  routes: [
    // Serve static files from the `www` folder (using AWS S3)
    {
      path: "/",
      localPath: "www"
    },

    // Serve a simple REST API on `GET /response` (using AWS Lambda)
    {
      path: "/response",
      method: "GET",
      eventHandler: (req, ctx, cb) => {
        cb(undefined, {
          statusCode: 200,
          body: Buffer.from(
            JSON.stringify({ response: responseText }),
            "utf8"
          ).toString("base64"),
          isBase64Encoded: true,
          headers: { "content-type": "application/json" }
        });
      }
    }
  ]
});

// Export the public URL for the HTTP service
exports.url = endpoint.url;
```
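The handler base64-encodes its JSON body and sets `isBase64Encoded: true` so API Gateway decodes it before returning it to the client. That round trip can be sketched in isolation; this Ruby snippet is purely illustrative and not part of the project:

```ruby
require 'base64'
require 'json'

# What the Lambda handler produces: a base64-encoded JSON body.
encoded_body = Base64.strict_encode64(JSON.generate(response: 'default'))

# What the client ultimately sees after API Gateway decodes the body.
decoded = JSON.parse(Base64.decode64(encoded_body))
puts decoded['response'] # prints "default"
```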
If you create and provision this stack by executing `pulumi up --stack dev`, you can navigate to the exported URL value in your browser and see the index page of `www/` along with the response value "default" that we set in the stack config. Go ahead and destroy the stack for now if you have validated this in your browser:

```shell
$ pulumi destroy -y
```
For the first iteration of our integration test, we want to use Kitchen-Pulumi to simply create and destroy the stack infrastructure to ensure both operations are completed without error.
Looking at `.kitchen.yml`, you will see that we have a single suite called `serverless-rest-api` and a single platform called `dev-stack`. Together this means we have a single Kitchen instance called `serverless-rest-api-dev-stack` that we can test against. You can verify this using `kitchen list`:

```shell
$ bundle exec kitchen list
Instance                       Driver  Provisioner  Verifier  Transport  Last Action    Last Error
serverless-rest-api-dev-stack  Pulumi  Pulumi       Busser    Ssh        <Not Created>  <None>
```
Setting attributes on the driver is how we customize our integration tests. Currently, we set the driver's `config_file` attribute to the value `Pulumi.dev.yaml`. This means that the `dev-stack` platform will run tests against a stack named `dev-stack` using the config values set in `Pulumi.dev.yaml`.
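For reference, a minimal `.kitchen.yml` matching this setup would look something like the following. This is a sketch based on the attributes described here (including the `test_stack_name` attribute that a later section removes), not a verbatim copy of the file:

```yaml
# .kitchen.yml (sketch of the initial configuration)
driver:
  name: pulumi

provisioner:
  name: pulumi

suites:
  - name: serverless-rest-api

platforms:
  - name: dev-stack
    driver:
      test_stack_name: dev-stack
      config_file: Pulumi.dev.yaml
```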
We can create our dev stack by running `kitchen create`:

```shell
$ bundle exec kitchen create
-----> Starting Kitchen (v2.3.2)
-----> Creating <serverless-rest-api-dev-stack>...
$$$$$$ Running pulumi login https://api.pulumi.com
Logged into pulumi.com as <username> (https://app.pulumi.com/<username>)
$$$$$$ Running pulumi stack init dev-stack -C /Users/<username>/OSS/kitchen-pulumi/examples/aws/serverless-rest-api-lambda
Created stack 'dev-stack'
Finished creating <serverless-rest-api-dev-stack> (0m2.67s).
-----> Kitchen is finished. (0m3.43s)
```
You can see from the output that `kitchen create` does two things when using Kitchen-Pulumi's driver:

- It logs in to the Pulumi service. By default this is the SaaS backend, but we'll cover how to override this a bit later.
- It ensures the stack exists by calling `pulumi stack init dev-stack`. If the stack already exists, Kitchen-Pulumi simply continues without error.
We can now provision our stack resources by running `kitchen converge`:

```shell
$ bundle exec kitchen converge
-----> Starting Kitchen (v2.2.5)
-----> Converging <serverless-rest-api-dev-stack>...
$$$$$$ Running pulumi login https://api.pulumi.com
Logged into pulumi.com as <username> (https://app.pulumi.com/<username>)
$$$$$$ Running pulumi up -y -r --show-config -s dev-stack -C /Path/to/kitchen-pulumi/examples/aws/serverless-rest-api-lambda
Previewing update (dev-stack):
    Configuration:
        aws:region: us-east-1
        serverless-rest-api-lambda:api_response_text: default
...
<A lot of output from the update preview and from the update execution>
...
Outputs:
    url: "https://abc123fooexample.execute-api.us-east-1.amazonaws.com/stage/"

Resources:
    + 14 created

Duration: 19s

Permalink: https://app.pulumi.com/<username>/serverless-rest-api-lambda/dev-stack/updates/1
Finished converging <serverless-rest-api-dev-stack> (0m27.39s).
-----> Kitchen is finished. (0m20.31s)
```
Using Kitchen-Pulumi's provisioner, calling `kitchen converge` will call `pulumi up` on the stack set on the driver for each kitchen instance. You will also see another login to the Pulumi backend. This is because `kitchen` commands could run against the same stack from different machines, or by different users, in any order, so Kitchen-Pulumi attempts a login any time a call to the Pulumi CLI is necessary.

If you visit the value of the `url` stack output, you should see the index page and the "default" API response text.
Now that we have manually validated our test stack, we can destroy it with `kitchen destroy`:

```shell
$ bundle exec kitchen destroy
-----> Starting Kitchen (v2.2.5)
-----> Destroying <serverless-rest-api-dev-stack>...
$$$$$$ Running pulumi login https://api.pulumi.com
Logged into pulumi.com as <username> (https://app.pulumi.com/<username>)
$$$$$$ Running pulumi destroy -y -r --show-config -s dev-stack -C /Path/to/kitchen-pulumi/examples/aws/serverless-rest-api-lambda
Previewing destroy (dev-stack):
    Configuration:
        aws:region: us-east-1
        serverless-rest-api-lambda:api_response_text: default
...
<Preview and Destroy output>
...
Resources:
    - 14 deleted

Duration: 10s

Permalink: https://app.pulumi.com/<username>/serverless-rest-api-lambda/dev-stack/updates/2
The resources in the stack have been deleted, but the history and configuration associated with the stack are still maintained.
If you want to remove the stack completely, run 'pulumi stack rm dev-stack'.
$$$$$$ Running pulumi stack rm --preserve-config -y -s dev-stack -C /Users/<username>/OSS/kitchen-pulumi/examples/aws/serverless-rest-api-lambda
Stack 'dev-stack' has been removed!
Finished destroying <serverless-rest-api-dev-stack> (0m20.04s).
-----> Kitchen is finished. (0m23.42s)
```
`kitchen destroy` will run `pulumi destroy` on our stack and then a final `pulumi stack rm` to remove the stack entirely. We remove the stack at the end to ensure our test stacks are ephemeral and do not clog the Pulumi stack namespace after we are finished testing. You can verify this by running `pulumi stack ls` to see that the `dev-stack` stack is no longer listed.
So far, we've seen how to:

- Create a stack with `kitchen create`
- Update a stack with `kitchen converge`
- Destroy it with `kitchen destroy`
In the next section, we will cover some more advanced stack testing features like testing multiple stacks, using other backends, overriding stack config values, providing secrets, and simulating changes in a stack's configuration over time.
Our simple test gave us confidence our stack is being provisioned as expected. Since our stack is only deployed to the us-east-1 region, however, it isn't resilient to regional disasters. We would like to increase the availability of the service in production by deploying it to multiple AWS regions. We want to capture this in our integration tests as well to mirror our production environment as much as possible.
To test our new us-west-2 based stack, we will rename our current test platform in `.kitchen.yml` to `dev-east-test`, introduce another platform called `dev-west-test`, and override the value of `aws:region` for `dev-west-test` to be `us-west-2` instead of `us-east-1`:
```yaml
# .kitchen.yml
driver:
  name: pulumi

provisioner:
  name: pulumi

suites:
  - name: serverless-rest-api

platforms:
  - name: dev-east-test
    driver:
      config_file: Pulumi.dev.yaml
  - name: dev-west-test
    driver:
      config_file: Pulumi.dev.yaml
      config:
        aws:
          region: us-west-2
```
Let's break down what we changed:

- We removed the `test_stack_name` driver attribute because Kitchen-Pulumi will use the name of the instance by default, so the stacks that will be created for us will be named `serverless-rest-api-dev-east-test` and `serverless-rest-api-dev-west-test`.
- We set the `config_file` driver attribute for both platforms to `Pulumi.dev.yaml`. This allows us to use the same base stack config file for both stacks. The value of `config_file` can be any valid YAML file that matches the Pulumi stack config file specification.
- We override the value of the `aws:region` stack config on the `dev-west-test` stack using the `config` driver attribute. The `config` attribute is a map of maps whose top-level keys correspond to Pulumi namespaces. The values defined in a `config` driver attribute will always take precedence over those defined in an instance's `config_file`.
With this configuration, we can now create two identical test stacks deployed to both us-east-1 and us-west-2:
```shell
$ bundle exec kitchen converge
-----> Creating <serverless-rest-api-dev-east-test>...
$$$$$$ Running pulumi login https://api.pulumi.com
Logged into pulumi.com as <username> (https://app.pulumi.com/<username>)
$$$$$$ Running pulumi stack init serverless-rest-api-dev-east-test -C /Users/<username>/OSS/kitchen-pulumi/examples/aws/serverless-rest-api-lambda
Created stack 'serverless-rest-api-dev-east-test'
Finished creating <serverless-rest-api-dev-east-test> (0m2.21s).
-----> Converging <serverless-rest-api-dev-east-test>...
<Update output for east stack>
Outputs:
    url: "https://y0nh87lz59.execute-api.us-east-1.amazonaws.com/stage/"

Resources:
    + 14 created

Duration: 19s

Permalink: https://app.pulumi.com/<username>/serverless-rest-api-lambda/serverless-rest-api-dev-east-test/updates/1
Finished converging <serverless-rest-api-dev-east-test> (0m25.91s).
-----> Creating <serverless-rest-api-dev-west-test>...
$$$$$$ Running pulumi login https://api.pulumi.com
Logged into pulumi.com as <username> (https://app.pulumi.com/<username>)
$$$$$$ Running pulumi stack init serverless-rest-api-dev-west-test -C /Users/<username>/OSS/kitchen-pulumi/examples/aws/serverless-rest-api-lambda
Created stack 'serverless-rest-api-dev-west-test'
Finished creating <serverless-rest-api-dev-west-test> (0m1.84s).
-----> Converging <serverless-rest-api-dev-west-test>...
<Update output for west stack>
Outputs:
    url: "https://t87sy6zivb.execute-api.us-west-2.amazonaws.com/stage/"

Resources:
    + 14 created

Duration: 29s

Permalink: https://app.pulumi.com/<username>/serverless-rest-api-lambda/serverless-rest-api-dev-west-test/updates/1
Finished converging <serverless-rest-api-dev-west-test> (0m37.75s).
```
If you visit both of the output URLs, you will see our service is now live in both regions. Whenever you are ready, destroy both stacks with `bundle exec kitchen destroy`.
If your organization has its own internal backend, or you would like to use your local machine as a backend, you can tell Kitchen-Pulumi to do so using the `backend` driver attribute. The value of `backend` defaults to the Pulumi SaaS backend and accepts any valid URL or the keyword `local` for using the local backend.

Note: When using the local backend, you may see stack config files being created. These are created by Pulumi to properly encrypt values and will be removed during `kitchen destroy`.
The following will use a local backend for the west stack and an S3 bucket for the east:
```yaml
# .kitchen.yml
driver:
  name: pulumi

provisioner:
  name: pulumi

suites:
  - name: serverless-rest-api

platforms:
  - name: dev-east-test
    driver:
      backend: s3://my-pulumi-state-bucket
      config_file: Pulumi.dev.yaml
  - name: dev-west-test
    driver:
      backend: local
      config_file: Pulumi.dev.yaml
      config:
        aws:
          region: us-west-2
```
If you would like to use an alternative secret encryption provider with your test stacks, you can provide a value to the `secrets_provider` driver attribute. In the configuration below, when the `dev-stack` stack gets created, it will use the specified AWS KMS key to encrypt secrets:
```yaml
# .kitchen.yml
---
driver:
  name: pulumi

provisioner:
  name: pulumi

suites:
  - name: serverless-rest-api

platforms:
  - name: dev-stack
    driver:
      test_stack_name: dev-stack
      config_file: Pulumi.dev.yaml
      secrets_provider: "awskms://1234abcd-12ab-34cd-56ef-1234567890ab?region=us-east-1"
```
If you have already set secret values in a stack config file, but would like to test the stack with a different value for certain secrets without permanently overriding the stack config file, you can specify a `secrets` map. This driver attribute is similar to the `config` map we used earlier to override the value of `aws:region` in our west test stack.

This can be useful when secrets change between deployment environments or you have credentials for testing purposes only. The following configuration will set the `my-project:ssh_key` stack secret to the value of the `TEST_USER_SSH_KEY` environment variable using Ruby's flexible ERB templating syntax, without affecting the existing value of `my-project:ssh_key` defined in `Pulumi.dev.yaml`.
```yaml
# .kitchen.yml
---
driver:
  name: pulumi

provisioner:
  name: pulumi

suites:
  - name: serverless-rest-api

platforms:
  - name: dev-stack
    driver:
      test_stack_name: dev-stack
      config_file: Pulumi.dev.yaml
      secrets:
        my-project:
          ssh_key: <%= ENV['TEST_USER_SSH_KEY'] %>
```
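Test Kitchen renders `.kitchen.yml` as an ERB template when it loads the file, which is what makes the `<%= ... %>` substitution work. The mechanism can be shown in isolation; this snippet is illustrative only and uses a made-up key value:

```ruby
require 'erb'

# Simulate the environment variable Kitchen would read at load time.
ENV['TEST_USER_SSH_KEY'] = 'ssh-rsa AAAAexamplekey'

# A fragment of .kitchen.yml containing an ERB expression.
fragment = "ssh_key: <%= ENV['TEST_USER_SSH_KEY'] %>"

# Rendering substitutes the environment variable's value into the YAML text.
rendered = ERB.new(fragment).result
puts rendered # prints "ssh_key: ssh-rsa AAAAexamplekey"
```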
To further exercise your Pulumi project, you may want to test how existing stacks react to changes in configuration values after their initial provisioning. Kitchen-Pulumi allows you to test successive changes to existing test stacks through the `stack_evolution` driver attribute.

`stack_evolution` takes a list of desired configuration changes, each specified using the following three values (at least one must be provided):

- `config_file` - A valid YAML file to use instead of the config file defined on the top-level `config_file` driver attribute.
- `config` - A map of values with the same structure as the top-level `config` driver attribute. These values are merged with the top-level `config`, and any keys specified in both will be overwritten by the `stack_evolution` step's value.
- `secrets` - A map of secrets with the same structure as the top-level `secrets` driver attribute. These values are merged with the top-level `secrets`, and any keys specified in both will be overwritten by the `stack_evolution` step's value.
Each item in `stack_evolution` represents an independent stack configuration. Kitchen-Pulumi will call `pulumi up` on the test stack for each configuration. The example below will perform the following stack updates on dev-stack when `kitchen converge` runs against it:

- The initial update using the configuration specified in the top-level `config_file`, `Pulumi.dev.yaml`.
- If the first update succeeded, the stack will be updated using the configuration specified in `test-cases/second_update_changed_response.yaml`.
- If the second update succeeded, the stack will be updated using the configuration specified in the top-level config file, `Pulumi.dev.yaml`, but with the `serverless-rest-api-lambda:api_response_text` and `serverless-rest-api-lambda:db_password` values overridden.
```yaml
# .kitchen.yml
driver:
  name: pulumi

provisioner:
  name: pulumi

suites:
  - name: serverless-rest-api

platforms:
  - name: dev-stack
    driver:
      test_stack_name: dev-stack
      config_file: Pulumi.dev.yaml
      stack_evolution:
        - config_file: test-cases/second_update_changed_response.yaml
        - config:
            serverless-rest-api-lambda:
              api_response_text: third update
          secrets:
            serverless-rest-api-lambda:
              db_password: <%= ENV['NEW_DB_PASSWORD'] %>
```
You can think of the top-level `config_file`, `config`, and `secrets` values as "global" settings for the driver across stack updates, and those specified in `stack_evolution` as temporary overrides.
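The override behavior can be pictured as a deep merge in which the evolution step's values win. The following Ruby sketch only illustrates that precedence; it is not Kitchen-Pulumi's actual implementation, and the `another_setting` key is hypothetical:

```ruby
# Hypothetical deep merge: step values override the top-level ("global")
# values, while keys the step does not touch are preserved.
def deep_merge(global, step)
  global.merge(step) do |_key, old_val, new_val|
    old_val.is_a?(Hash) && new_val.is_a?(Hash) ? deep_merge(old_val, new_val) : new_val
  end
end

global_config = {
  'serverless-rest-api-lambda' => {
    'api_response_text' => 'default',
    'another_setting'   => 'kept'
  }
}

step_config = {
  'serverless-rest-api-lambda' => { 'api_response_text' => 'third update' }
}

merged = deep_merge(global_config, step_config)
# api_response_text is overridden by the step; another_setting survives.
```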
Although a successful stack update gives us some amount of confidence our stack is working, there is still a lot that can go wrong when deploying infrastructure and services. For example, the API code we ship with the Lambda function may have a bug not caught by unit tests that causes HTTP 500 errors to be returned by the service.

To provide more control over validation logic, Kitchen-Pulumi provides a Verifier that allows you to run custom validation code as described by InSpec Profiles (in a manner derived from Kitchen-Terraform). This provides a lot of flexibility in the test code you can write against your test stacks.
For example, let's revisit our multi-region API setup from earlier with a new requirement: When a user accesses the us-west-2 endpoint, we want to display a different response text value than that of the us-east-1 API. (Say, for example we are routing all users from western U.S. states to our us-west-2 API using geolocation-based DNS records.)
We'd like to test that both stacks are created properly in each region and that the API response text returned by the `/response` endpoint is different for each region. Let's set up the integration test described and add the Kitchen-Pulumi verifier in our `.kitchen.yml`:
```yaml
# .kitchen.yml
driver:
  name: pulumi

provisioner:
  name: pulumi

verifier:
  name: pulumi
  systems:
    - name: API response test
      backend: local

suites:
  - name: serverless-rest-api

platforms:
  - name: dev-east
    driver:
      config_file: Pulumi.dev.yaml
  - name: dev-west
    driver:
      config_file: Pulumi.dev.yaml
      config:
        serverless-rest-api-lambda:
          api_response_text: Hello from us-west-2
```
We added the verifier named `pulumi` with one test system named "API response test" that will use our local machine as the InSpec backend. You can name your systems whatever you'd like, as Kitchen-Pulumi will look for InSpec profiles in the `test/integration/<kitchen suite name>` directory.
If you run `bundle exec kitchen list`, you should see our two instances now have `Pulumi` as the value of the Verifier:

```shell
$ bundle exec kitchen list
Instance                      Driver  Provisioner  Verifier  Transport  Last Action    Last Error
serverless-rest-api-dev-east  Pulumi  Pulumi       Pulumi    Ssh        <Not Created>  <None>
serverless-rest-api-dev-west  Pulumi  Pulumi       Pulumi    Ssh        <Not Created>  <None>
```
Now let's create the InSpec profile that will contain the test logic by first setting up the profile's structure:

```shell
$ mkdir -p test/integration/serverless-rest-api/controls
$ touch test/integration/serverless-rest-api/inspec.yml
$ touch test/integration/serverless-rest-api/controls/verify_response.rb
```
We created a profile directory for our `serverless-rest-api` suite. Kitchen-Pulumi uses test suite names to look for the location of InSpec profiles. By breaking our east and west tests into separate platforms on a single suite, we can test both stacks using the same InSpec profile.

Go ahead and place the following into the `inspec.yml` file we created:
```yaml
# test/integration/serverless-rest-api/inspec.yml
name: serverless-rest-api
inputs:
  - name: serverless-rest-api-lambda:api_response_text
    type: string
    required: true
  - name: url
    type: string
    required: true
```
This file describes our `serverless-rest-api` profile and defines two InSpec input values to use in our test:

- The stack configuration value for `serverless-rest-api-lambda:api_response_text` at the time the kitchen instance was verified.
- The stack output value named `url` exported from our Pulumi project code in `index.js`.
If you prefer to make the source of these InSpec inputs more explicit, you can prefix them with `input_` and `output_` respectively (either form is acceptable):

```yaml
# test/integration/serverless-rest-api/inspec.yml
name: serverless-rest-api
inputs:
  - name: input_serverless-rest-api-lambda:api_response_text
    type: string
    required: true
  - name: output_url
    type: string
    required: true
```
With these two inputs available to us, we can write an InSpec control in `controls/verify_response.rb` to ensure the API is returning the expected response:

```ruby
# frozen_string_literal: true

# test/integration/serverless-rest-api/controls/verify_response.rb

require 'net/http'
require 'json'

api_url = input('output_url')

control 'Verify API Response' do
  describe 'API response text' do
    subject do
      input('serverless-rest-api-lambda:api_response_text')
    end

    endpoint = "#{api_url}/response"
    response = Net::HTTP.get(URI(endpoint))
    response_text = JSON.parse(response).fetch('response')

    it { should eq response_text }
  end
end
```
With our control code ready, we can now test that both stacks are created properly and that the API is healthy and returning the expected values for both regions. If you run `bundle exec kitchen verify`, you should see both test stacks created, updated, and verified with our control code, with terminal output similar to the following for the west region API:
```shell
$ bundle exec kitchen verify
<... A lot of Test Kitchen output ...>
API response test: Verifying

Profile: serverless-rest-api
Version: (not specified)
Target:  local://

  ✔  Verify API Response: API response text
     ✔  API response text should eq "Hello from us-west-2"

Profile Summary: 1 successful control, 0 control failures, 0 controls skipped
Test Summary: 1 successful, 0 failures, 0 skipped
Finished verifying <serverless-rest-api-dev-west> (0m5.23s).
```
We have successfully verified that both our API deployments are healthy and performing as expected. Go ahead and destroy the stacks whenever you are ready:
```shell
$ bundle exec kitchen destroy
```
This tutorial gave an overview of Kitchen-Pulumi's feature set. If you have any questions or spot an issue with the tutorial code or writing, please feel free to submit an issue or a pull request so it can be fixed.