Template infra deploy #9652505319
nava-platform-bot committed Jun 24, 2024
1 parent 2130574 commit 34abfdc
Showing 4 changed files with 57 additions and 7 deletions.
2 changes: 1 addition & 1 deletion .template-version
@@ -1 +1 @@
-49f299f6678cab844bce6772c2d471ffeaf8407d
+5c603d32819a0b358f7f39b740904aac2b2519d5
40 changes: 40 additions & 0 deletions docs/infra/infrastructure-configuration.md
@@ -0,0 +1,40 @@
# Infrastructure configuration

## Configure infrastructure with configuration modules

The infrastructure derives all of its configuration from the following modules:

- Project config ([/infra/project-config/](/infra/project-config/))
- App config (`/infra/<APP_NAME>/app-config` per application)

Shell scripts running in CI jobs or locally on developer machines treat config modules as root modules and fetch configuration values by running `terraform apply -auto-approve` followed by `terraform output`.
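
For a concrete sketch of what such a config module exposes (the output names here are illustrative, not the template's actual outputs), values are published as plain Terraform outputs that `terraform output` can then read:

```terraform
# Hypothetical app-config outputs; names are illustrative only.
output "app_name" {
  value = "myapp"
}

output "service_name" {
  # Derived from constants via a deterministic expression.
  value = "myproj-myapp-dev"
}
```

A script would then run `terraform apply -auto-approve` in the module directory followed by, for example, `terraform output -raw service_name`.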

Root modules across the infrastructure layers fetch configuration values by calling the config modules as child modules:

```terraform
module "project_config" {
source = "../../project-config"
}
module "app_config" {
source = "../app-config"
}
```
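
A root module that calls the config modules this way can then reference their outputs; a minimal sketch, assuming hypothetical output names:

```terraform
locals {
  # Attribute names below are illustrative, not the template's actual outputs.
  project_name = module.project_config.project_name
  service_name = module.app_config.service_name
}
```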

### Design config module outputs to be static

Config modules are designed to be static. This means that all of the outputs can be statically determined without needing to execute the code. In particular:

- All config module outputs are either constant or derived from constants via deterministic functions.
- Config module outputs do not rely on the environment, including which root module is being applied, which workspace is selected, or the current timestamp.
- Config modules have no side effects. In particular, they do not create any infrastructure resources.

When configuring your project and application, keep these principles in mind to avoid violating the static nature of config modules.
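
To make the distinction concrete (names and values here are hypothetical), the first output below is static because it is derived from constants by a deterministic function, while the commented-out one would violate the principles because its value depends on when the module is applied:

```terraform
variable "environment" {
  type    = string
  default = "dev"
}

locals {
  prefix   = "myproj-"
  app_name = "myapp"
}

# Static: constant inputs, deterministic derivation.
output "service_name" {
  value = "${local.prefix}${local.app_name}-${var.environment}"
}

# NOT static (avoid): the value would change on every apply.
# output "applied_at" {
#   value = timestamp()
# }
```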

## Benefits of config modules over variable definitions (.tfvars) files

Putting configuration in static configuration modules has a number of benefits over managing configuration in Terraform [variable definitions (.tfvars) files](https://developer.hashicorp.com/terraform/language/values/variables#assigning-values-to-root-module-variables):

1. Environment-specific configuration can be forced to adopt a common convention by generating the configuration value through code. For example, each application's service name is defined as `"${local.prefix}${var.app_name}-${var.environment}"`.
2. Shell scripts and CI/CD workflows can consume configuration values outside of Terraform by calling `terraform output` after `terraform apply -auto-approve`. If configuration values were embedded in `.tfvars` files, the scripts would need to parse the `.tfvars` files for those values. Note that `-auto-approve` is safe for config modules since they are entirely static and have no side effects.
3. Eliminate the possibility of passing the incorrect `.tfvars` file to `terraform plan`/`terraform apply`. Since we [reuse the same root module with multiple terraform backend configs](/docs/decisions/infra/0004-separate-terraform-backend-configs-into-separate-config-files.md), having separate `.tfvars` files requires that after `terraform init` is called with a specific `-backend-config` file, the corresponding `.tfvars` file be passed to `terraform plan`/`terraform apply`. This creates an opportunity for error if the incorrect variable definitions file is used for the backend that was initialized.
10 changes: 5 additions & 5 deletions docs/infra/module-architecture.md
@@ -77,15 +77,15 @@ app/database --> accounts

When deciding which layer to put an infrastructure resource in, use the following guidelines.

- **Default to the service layer:** By default, consider putting application resources in the service layer. This way the resource is managed together with everything else in the environment, and spinning up new application environments automatically spins up the resource.

- **Consider variations in the number and types of environments of each layer:** If the resource does not or might not map one-to-one with application environments, consider putting the resource in a different layer. For example, the number of AWS accounts may or may not match the number of VPCs, which may or may not match the number of application environments. As another example, each application only has one instance of a build repository, which is shared across all environments. As a final example, an application may or may not need a database layer at all, so by putting database-related resources in the database layer, an application can skip those resources by skipping the entire layer rather than by changing the behavior of an existing layer. Choose the layer that maps most closely with the resource's purpose.

- **Consider AWS uniqueness constraints on resources:** This is a special case of the previous consideration: resources that AWS requires to be unique should be managed by a layer that creates only one of that resource per instance of that layer. For example, there can only be one OIDC provider for GitHub Actions per AWS account (see [Creating OIDC identity providers](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_create_oidc.html)), so the OIDC provider should go in the account layer. As another example, there can only be one VPC endpoint per VPC per AWS service (see [Fix conflicting DNS domain errors for interface VPC endpoints](https://repost.aws/knowledge-center/vpc-interface-endpoint-domain-conflict)). Therefore, if multiple application environments share a VPC, they can't each create a VPC endpoint for the same AWS service. As such, the VPC endpoint logically belongs to the network layer, and VPC endpoints should be created and managed per network instance rather than per application environment.

- **Consider policy constraints on what resources the project team is authorized to manage:** Different categories of resources may have different requirements on who is allowed to create and manage them. Resources that the project team is not allowed to manage directly should not be mixed with resources that the project team needs to manage directly.

- **Consider out-of-band dependencies:** Put infrastructure resources that require steps outside of Terraform to be fully configured in layers that are upstream of the resources that depend on them. For example, after creating a database cluster, the database schemas, roles, and privileges need to be configured before they can be used by a downstream service. Therefore, database resources should be separate from the service layer so that the database can be fully configured before attempting to create the service layer resources.
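
As a hedged illustration of the uniqueness guideline (arguments abbreviated and names hypothetical), a VPC endpoint declared once in the network layer rather than once per application environment:

```terraform
# Sketch only: AWS allows one interface endpoint per VPC per service,
# so this resource belongs in the network layer alongside the VPC itself.
resource "aws_vpc_endpoint" "ecr_api" {
  vpc_id            = aws_vpc.main.id
  service_name      = "com.amazonaws.us-east-1.ecr.api"
  vpc_endpoint_type = "Interface"
}
```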

## Making changes to infrastructure

12 changes: 11 additions & 1 deletion infra/README.md
@@ -8,6 +8,7 @@ The structure for the infrastructure code looks like this:

```text
infra/ Infrastructure code
project-config/ Project-level configuration for account-level resources and resource tags
accounts/ [Root module] IaC and IAM resources
[app_name]/ Application directory: infrastructure for the main application
modules/ Reusable child modules
@@ -27,9 +28,18 @@ Details about terraform root modules and child modules are documented in [module

## 🏗️ Project architecture

### ⚙️ Configuration

The infrastructure derives all of its configuration from static configuration modules:

- Project config
- App config (per application)

The configuration modules contain only statically known information and do not have any side effects such as creating infrastructure resources. As such, they are used as both (a) root modules by shell scripts and CI/CD workflows and (b) child modules called by root modules across the infrastructure layers. See [infrastructure configuration](/docs/infra/infrastructure-configuration.md) for more info.

### 🧅 Infrastructure layers

The infrastructure is designed to operate on different layers:

- Account layer
- Network layer
