Merge branch 'main' into bugfix/apache-airflow
shapirov103 authored Jun 12, 2023
2 parents 676a2ea + 97f7eb7 commit 64acafc
Showing 10 changed files with 160 additions and 35 deletions.
2 changes: 1 addition & 1 deletion docs/addons/cluster-autoscaler.md
@@ -14,7 +14,7 @@ import * as blueprints from '@aws-quickstart/eks-blueprints';

const app = new cdk.App();

const addOn = new blueprints.addons.ClusterAutoscalerAddOn();
const addOn = new blueprints.addons.ClusterAutoScalerAddOn();

const blueprint = blueprints.EksBlueprint.builder()
.addOns(addOn)
10 changes: 6 additions & 4 deletions docs/addons/container-insights.md
@@ -10,6 +10,8 @@ CloudWatch does not automatically create all possible metrics from the log data,

Metrics collected by Container Insights are charged as custom metrics. For more information, see [Amazon CloudWatch pricing](https://aws.amazon.com/cloudwatch/pricing/).

Note that this add-on cannot coexist with the `adot-addon` on the same EKS cluster: the two are mutually exclusive because both create the `adot-collector-sa` service account.
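
The diff enforces this constraint with a `@conflictsWith("AdotCollectorAddOn")` decorator on the add-on's `deploy` method. The check can be illustrated with a small standalone sketch (the function and names below are illustrative assumptions, not the framework's actual API):

```typescript
// Illustrative sketch only: a standalone check mirroring the framework's
// @conflictsWith behavior. Names here are assumptions, not the real API.
function assertNoConflicts(addOnNames: string[]): void {
  const conflicting = ["ContainerInsightsAddOn", "AdotCollectorAddOn"];
  const present = conflicting.filter(name => addOnNames.includes(name));
  if (present.length > 1) {
    throw new Error(
      `Add-ons ${present.join(" and ")} are mutually exclusive: ` +
      `both create the adot-collector-sa service account.`);
  }
}

assertNoConflicts(["ContainerInsightsAddOn", "VpcCniAddOn"]); // ok, no conflict
```

Passing both conflicting add-on names would throw, which is the behavior the decorator provides at deploy time.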

## Usage

Add the following as an add-on to your main.ts file to add Container Insights to your cluster.
@@ -34,7 +36,7 @@ Once the Container Insights add-on has been installed in your cluster, validate

```bash
kubectl get all -n amazon-cloudwatch
kubectl get all -n amzn-cloudwatch-metrics
kubectl get all -n amazon-metrics
```

You should see output similar to the following for each command (assuming a two-node cluster):
Expand All @@ -48,11 +50,11 @@ NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE
daemonset.apps/fluent-bit 2 2 2 2 2 <none> 100s
NAME READY STATUS RESTARTS AGE
pod/adot-collector-daemonset-b2rpc 1/1 Running 0 106s
pod/adot-collector-daemonset-k6tfw 1/1 Running 2 106s
pod/adot-collector-daemonset-k7n4p 1/1 Running 0 2m4s
pod/adot-collector-daemonset-qjdps 1/1 Running 0 114s
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/adot-collector-daemonset 2 2 2 2 2 <none> 106s
daemonset.apps/adot-collector-daemonset 2 2 2 2 2 <none> 73m
```

To enable or disable control plane logs with the console, run the following command in your terminal.
20 changes: 10 additions & 10 deletions docs/addons/external-dns.md
@@ -16,12 +16,12 @@ const app = new cdk.App();
const hostedZoneName = ...

const addOn = new blueprints.addons.ExternalDnsAddOn({
hostedZoneProviders: [hostedZoneName]; // can be multiple
hostedZoneResources: [hostedZoneName]; // can be multiple
});

const blueprint = blueprints.EksBlueprint.builder()
.addOns(addOn)
.resourceProvider(hostedZoneName, new blueprints.addons.LookupHostedZoneProvider(hostedZoneName))
.resourceProvider(hostedZoneName, new blueprints.LookupHostedZoneProvider(hostedZoneName))
.addOns(addOn)
.build(app, 'my-stack-name');
```
@@ -75,8 +75,8 @@ blueprints.EksBlueprint.builder()
// Register hosted zone1 under the name of MyHostedZone1
.resourceProvider("MyHostedZone1", new blueprints.LookupHostedZoneProvider(myDomainName))
.addOns(new blueprints.addons.ExternalDnsAddOn({
hostedZoneProviders: ["MyHostedZone1"];
})
hostedZoneResources: ["MyHostedZone1"];
}))
.build(...);
```

@@ -86,10 +86,10 @@ If the hosted zone ID is known, then the recommended approach is to use a `Impor
const myHostedZoneId = "";
blueprints.EksBlueprint.builder()
// Register hosted zone1 under the name of MyHostedZone1
.resourceProvider("MyHostedZone1", new blueprints.addons.ImportHostedZoneProvider(myHostedZoneId))
.resourceProvider("MyHostedZone1", new blueprints.ImportHostedZoneProvider(myHostedZoneId))
.addOns(new blueprints.addons.ExternalDnsAddOn({
hostedZoneProviders: ["MyHostedZone1"];
})
hostedZoneResources: ["MyHostedZone1"];
}))
.build(...);
```

@@ -128,10 +128,10 @@ blueprints.EksBlueprint.builder()
parentAccountId: parentDnsAccountId,
delegatingRoleName: 'DomainOperatorRole',
wildcardSubdomain: true
})
}))
.addOns(new blueprints.addons.ExternalDnsAddOn({
hostedZoneProviders: ["MyHostedZone1"];
})
hostedZoneResources: ["MyHostedZone1"];
}))
```

6 changes: 3 additions & 3 deletions docs/addons/grafana-operator.md
@@ -35,11 +35,11 @@ This should list the grafana-operator namespace
```bash
grafana-operator Active 31m
```
Verify if the pods are running correctly in flux-system namespace
Verify if everything is running correctly in the grafana-operator namespace
```bash
kubectl get pods -n grafana-operator
kubectl get all -n grafana-operator
```
There should list 3 pods starting with name flux-system
This should list 1 pod, 1 service, 1 deployment, and 1 replica-set starting with name grafana-operator
For example:
```bash
NAME READY STATUS RESTARTS AGE
2 changes: 1 addition & 1 deletion docs/getting-started.md
@@ -77,7 +77,7 @@ For application of the EKS Blueprints Framework with [AWS Organizations](https:/
[Bootstrap](https://docs.aws.amazon.com/cdk/latest/guide/bootstrapping.html) your environment with the following command.

```bash
cdk bootstrap
cdk bootstrap aws://<your-account-number>/<region-to-bootstrap>
```

Note: if the account/region combination used in the code example above differs from the initial combination used with `cdk bootstrap`, you will need to run `cdk bootstrap` again to avoid errors.
28 changes: 27 additions & 1 deletion docs/resource-providers/index.md
@@ -50,6 +50,31 @@ class DynamoDbTableResourceProvider implements ResourceProvider<ITable> {
}
}

/**
 * Example implementation of a VPC provider that creates a VPC spanning
 * all three AZs, with a single NAT gateway placed in only one of them
 */
class OtherVpcResourceProvider implements ResourceProvider<IVpc> {
provide(context: ResourceContext): IVpc {
return new Vpc(context.scope, '<vpc-name>', {
availabilityZones: ['us-east-1a', 'us-east-1b', 'us-east-1c'], // VPC spans all AZs
subnetConfiguration: [{
cidrMask: 24,
name: 'private',
subnetType: SubnetType.PRIVATE_WITH_EGRESS
}, {
cidrMask: 24,
name: 'public',
subnetType: SubnetType.PUBLIC
}],
natGatewaySubnets: {
availabilityZones: ['us-east-1b'], // NAT gateway only in 1 AZ
subnetType: SubnetType.PUBLIC
}
});
}
}

```

Access to registered resources from other resource providers and/or add-ons and teams:
@@ -124,6 +149,7 @@ export class ClusterInfo {
**Registering Resource Providers for a Blueprint**

Note: `GlobalResources.HostedZone` and `GlobalResources.Certificate` are provided for convenience as commonly referenced constants.
A full list of resource providers can be found [here](https://aws-quickstart.github.io/cdk-eks-blueprints/api/modules/resources.html).

```typescript
const myVpcId = ...; // e.g. app.node.tryGetContext('my-vpc', 'default') will look up property my-vpc in the cdk.json
@@ -229,7 +255,7 @@ blueprints.EksBlueprint.builder()
## Implementing Custom Resource Providers

1. Select the type of the resource that you need. Let's say it will be an FSx file system. Note: it must be one of the implementations of the `IResource` interface.
2. Implement ResourceProvider interface:
2. Implement [`ResourceProvider`](https://aws-quickstart.github.io/cdk-eks-blueprints/api/interfaces/ResourceProvider.html) interface:

```typescript
class MyResourceProvider implements blueprints.ResourceProvider<fsx.IFileSystem> {
3 changes: 2 additions & 1 deletion examples/blueprint-construct/index.ts
@@ -65,6 +65,7 @@ export default class BlueprintConstruct {
}),
new blueprints.addons.XrayAdotAddOn(),
// new blueprints.addons.CloudWatchAdotAddOn(),
// new blueprints.addons.ContainerInsightsAddOn(),
new blueprints.addons.IstioBaseAddOn(),
new blueprints.addons.IstioControlPlaneAddOn(),
new blueprints.addons.CalicoOperatorAddOn(),
@@ -209,7 +210,7 @@ export default class BlueprintConstruct {
"LaunchTemplate": "Custom",
"Instance": "ONDEMAND"
},
requireImdsv2: true
requireImdsv2: false
}
},
{
12 changes: 6 additions & 6 deletions lib/addons/cluster-autoscaler/index.ts
@@ -1,11 +1,10 @@
import { CfnJson, Tags } from "aws-cdk-lib";
import { KubernetesVersion } from "aws-cdk-lib/aws-eks";
import * as iam from "aws-cdk-lib/aws-iam";
import { assert } from "console";
import { Construct } from "constructs";
import { assertEC2NodeGroup } from "../../cluster-providers";
import { ClusterInfo } from "../../spi";
import { conflictsWith, createNamespace, createServiceAccount, setPath } from "../../utils";
import { conflictsWith, createNamespace, createServiceAccount, logger, setPath } from "../../utils";
import { HelmAddOn, HelmAddOnUserProps } from "../helm-addon";

/**
@@ -41,8 +40,7 @@ const defaultProps: ClusterAutoScalerAddOnProps = {
/**
* Version of the autoscaler, controls the image tag
*/
const versionMap = new Map([
[KubernetesVersion.of("1.26"), "9.29.0"],
const versionMap: Map<KubernetesVersion, string> = new Map([
[KubernetesVersion.V1_26, "9.29.0"],
[KubernetesVersion.V1_25, "9.29.0"],
[KubernetesVersion.V1_24, "9.25.0"],
@@ -68,8 +66,10 @@ export class ClusterAutoScalerAddOn extends HelmAddOn {

if(this.options.version?.trim() === 'auto') {
this.options.version = versionMap.get(clusterInfo.version);
assert(this.options.version, "Unable to auto-detect cluster autoscaler version. Applying latest. Provided EKS cluster version: "
+ clusterInfo.version?.version ?? clusterInfo.version);
if(!this.options.version) {
this.options.version = versionMap.values().next().value;
logger.warn(`Unable to auto-detect cluster autoscaler version. Applying latest: ${this.options.version}`);
}
}

const cluster = clusterInfo.cluster;
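
The new fallback logic above replaces the `assert` call with an explicit check plus a logged warning. Stripped of CDK types, the behavior can be sketched as a standalone illustration (string keys stand in for `KubernetesVersion`; names are illustrative, not the add-on's actual code):

```typescript
// Illustrative sketch of the auto-version fallback behavior.
const versionMap: Map<string, string> = new Map([
  ["1.26", "9.29.0"],
  ["1.25", "9.29.0"],
  ["1.24", "9.25.0"],
]);

function resolveChartVersion(clusterVersion: string): string {
  const mapped = versionMap.get(clusterVersion);
  if (mapped) {
    return mapped;
  }
  // Unmapped cluster version: fall back to the first (latest) entry,
  // mirroring versionMap.values().next().value in the add-on.
  const latest = versionMap.values().next().value as string;
  console.warn(`Unable to auto-detect cluster autoscaler version. Applying latest: ${latest}`);
  return latest;
}

console.log(resolveChartVersion("1.26")); // 9.29.0 (mapped)
console.log(resolveChartVersion("1.27")); // 9.29.0 (fallback to latest)
```

This is why the new test for an unmapped version 1.27 (added in this commit) still expects chart version `9.29.0`.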
52 changes: 44 additions & 8 deletions lib/addons/container-insights/index.ts
@@ -5,6 +5,7 @@ import { assertEC2NodeGroup } from "../..";
import { ClusterInfo } from "../../spi";
import { HelmAddOn, HelmAddOnUserProps } from "../helm-addon";
import { ValuesSchema } from "./values";
import { conflictsWith, createNamespace } from "../../utils";

export interface ContainerInsightAddonProps extends Omit<HelmAddOnUserProps, "namespace"> {
values?: ValuesSchema
@@ -14,15 +15,11 @@ const defaultProps = {
name: "adot-exporter-for-eks-on-ec2",
namespace: undefined, // the chart will choke if this value is set
chart: "adot-exporter-for-eks-on-ec2",
version: "0.1.0",
version: "0.15.0",
release: "adot-eks-addon",
repository: "https://aws-observability.github.io/aws-otel-helm-charts"
};


/**
* @deprecated Use CloudWatchAdotAddOn.
*/
export class ContainerInsightsAddOn extends HelmAddOn {

constructor(props?: ContainerInsightAddonProps) {
@@ -32,27 +29,66 @@ export class ContainerInsightsAddOn extends HelmAddOn {
/**
* @override
*/
@conflictsWith("AdotCollectorAddOn")
deploy(clusterInfo: ClusterInfo): Promise<Construct> {
const cluster = clusterInfo.cluster;
const nodeGroups = assertEC2NodeGroup(clusterInfo, ContainerInsightsAddOn.name);

const policy = ManagedPolicy.fromAwsManagedPolicyName('CloudWatchAgentServerPolicy');

nodeGroups.forEach(nodeGroup => {
nodeGroup.role.addManagedPolicy(policy);
});

// Create an adot-collector service account.
const serviceAccountName = "adot-collector-sa";
let serviceAccountNamespace;

if (this.props.namespace) {
serviceAccountNamespace = this.props.namespace;
}
else {
serviceAccountNamespace = "amazon-metrics";
}

const ns = createNamespace(serviceAccountNamespace, cluster, true);
const sa = cluster.addServiceAccount(serviceAccountName, {
name: serviceAccountName,
namespace: serviceAccountNamespace,
});

// Apply Managed IAM policy to the service account.
sa.role.addManagedPolicy(policy);
sa.node.addDependency(ns);

let values: ValuesSchema = {
awsRegion: cluster.stack.region,
clusterName: cluster.clusterName,
fluentbit: {
enabled: true
serviceAccount: {
create: false,
},
adotCollector: {
daemonSet: {
createNamespace: false,
service: {
metrics: {
receivers: ["awscontainerinsightreceiver"],
exporters: ["awsemf"],
}
},
serviceAccount: {
create: false,
},
cwexporters: {
logStreamName: "EKSNode",
}
}
}
};

values = merge(values, this.props.values ?? {});

const chart = this.addHelmChart(clusterInfo, values, true, false);
chart.node.addDependency(sa);
return Promise.resolve(chart);
}
}
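
The `values = merge(values, this.props.values ?? {})` call above layers user-supplied Helm values over the defaults. A minimal standalone sketch of such a deep merge (an illustration of the behavior only, not the actual `merge` utility the framework imports):

```typescript
// Minimal deep-merge sketch: user values override defaults; nested
// plain objects are merged recursively rather than replaced wholesale.
type Values = { [key: string]: unknown };

function deepMerge(base: Values, override: Values): Values {
  const out: Values = { ...base };
  for (const [key, value] of Object.entries(override)) {
    const existing = out[key];
    if (existing && typeof existing === "object" && !Array.isArray(existing) &&
        value && typeof value === "object" && !Array.isArray(value)) {
      out[key] = deepMerge(existing as Values, value as Values);
    } else {
      out[key] = value; // scalars and arrays are replaced, not merged
    }
  }
  return out;
}

// Default chart values keep logStreamName; user values add an exporter list.
const defaults = { adotCollector: { daemonSet: { cwexporters: { logStreamName: "EKSNode" } } } };
const user = { adotCollector: { daemonSet: { service: { metrics: { exporters: ["awsemf"] } } } } };
const merged = deepMerge(defaults, user);
```

After the merge, `merged` contains both the default `cwexporters.logStreamName` and the user-supplied `service.metrics` subtree, which is the property the add-on relies on when combining `this.props.values` with its defaults.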
60 changes: 60 additions & 0 deletions test/cluster-autoscaler.test.ts
@@ -0,0 +1,60 @@
import * as cdk from 'aws-cdk-lib';
import { Template } from 'aws-cdk-lib/assertions';
import { KubernetesVersion } from 'aws-cdk-lib/aws-eks';
import * as blueprints from '../lib';

test("Cluster autoscaler uses the correct default version if the EKS version is not defined in the version map", () => {
const app = new cdk.App();

const stack = blueprints.EksBlueprint.builder()
.account('123456789').region('us-west-2')
.version(KubernetesVersion.of("1.27"))
.addOns(new blueprints.ClusterAutoScalerAddOn())
.build(app, "ca-stack-127");

const template = Template.fromStack(stack);

template.hasResource("Custom::AWSCDK-EKS-HelmChart", {
Properties: {
Version: "9.29.0",
},
});
});


test("Cluster autoscaler uses the correct version for 1.26", () => {
const app = new cdk.App();

const stack = blueprints.EksBlueprint.builder()
.account('123456789').region('us-west-2')
.version(KubernetesVersion.V1_26)
.addOns(new blueprints.ClusterAutoScalerAddOn())
.build(app, "ca-stack-126");

const template = Template.fromStack(stack);

template.hasResource("Custom::AWSCDK-EKS-HelmChart", {
Properties: {
Version: "9.29.0",
},
});
});


test("Cluster autoscaler uses the correct version for 1.26 specified as a string", () => {
const app = new cdk.App();

const stack = blueprints.EksBlueprint.builder()
.account('123456789').region('us-west-2')
.version(KubernetesVersion.of("1.26"))
.addOns(new blueprints.ClusterAutoScalerAddOn())
.build(app, "ca-stack-126-string");

const template = Template.fromStack(stack);

template.hasResource("Custom::AWSCDK-EKS-HelmChart", {
Properties: {
Version: "9.29.0",
},
});
});
