packages/@aws-cdk/aws-eks/README.md (+34 -34)
@@ -47,6 +47,8 @@ cluster.addManifest('mypod', {
});
```

> **NOTE: You can only create 1 cluster per stack.** If you have a use-case for multiple clusters per stack, or would like to understand more about this limitation, see https://github.com/aws/aws-cdk/issues/10073.
In order to interact with your cluster through `kubectl`, you can use the `aws eks update-kubeconfig` AWS CLI command.
The default value is `eks.EndpointAccess.PUBLIC_AND_PRIVATE`, which means the cluster endpoint is accessible from outside of your VPC, but worker node traffic as well as `kubectl` commands to the endpoint will stay within your VPC.
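For example, a minimal sketch of restricting the endpoint to private access only (assuming `eks` is imported from `@aws-cdk/aws-eks` and this code runs inside a `Stack`; the construct id is illustrative):

```ts
// cluster endpoint reachable only from within the VPC
const cluster = new eks.Cluster(this, 'hello-eks', {
  version: eks.KubernetesVersion.V1_17,
  endpointAccess: eks.EndpointAccess.PRIVATE,
});
```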
### Capacity
@@ -139,16 +142,12 @@ new eks.Cluster(this, 'cluster-with-no-capacity', {
});
```

When creating a cluster with default capacity (i.e. `defaultCapacity !== 0` or is undefined), you can access the allocated capacity using:

- `cluster.defaultCapacity` will reference the `AutoScalingGroup` resource when `defaultCapacityType` is set to `EC2` or is undefined.
- `cluster.defaultNodegroup` will reference the `Nodegroup` resource when `defaultCapacityType` is set to `NODEGROUP`.
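For example, a minimal sketch (the construct id and capacity values are illustrative):

```ts
// request default capacity as plain EC2 instances rather than a managed nodegroup
const cluster = new eks.Cluster(this, 'HelloEKS', {
  version: eks.KubernetesVersion.V1_17,
  defaultCapacity: 3,
  defaultCapacityType: eks.DefaultCapacityType.EC2,
});

// since `defaultCapacityType` is `EC2`, the allocated capacity is an AutoScalingGroup
const asg = cluster.defaultCapacity;
```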
You can add customized capacity in the form of an `AutoScalingGroup` resource through `cluster.addCapacity()` or `cluster.addAutoScalingGroup()`:
```ts
cluster.addCapacity('frontend-nodes', {
  // the original snippet is truncated here; these property values are illustrative
  instanceType: new ec2.InstanceType('t2.medium'),
  minCapacity: 3,
});
```
@@ -167,7 +166,7 @@ for Amazon EKS Kubernetes clusters. By default, `eks.Nodegroup` create a nodegro
Instance types with `ARM64` architecture are supported in both managed nodegroup and self-managed capacity. Simply specify an ARM64 `instanceType` (such as `m6g.medium`), and the latest Amazon Linux 2 AMI for ARM64 will be automatically selected.
```ts
// create a cluster with a default managed nodegroup
const cluster = new eks.Cluster(this, 'Cluster', {
  vpc,
  version: eks.KubernetesVersion.V1_17,
});
```
@@ -298,12 +296,9 @@ can cause your EC2 instance to become unavailable, such as [EC2 maintenance even
and [EC2 Spot interruptions](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-interruptions.html) and helps gracefully stop all pods running on spot nodes that are about to be terminated.
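Spot capacity is requested through `cluster.addCapacity()` by specifying a `spotPrice`; a minimal sketch (assuming `ec2` is imported from `@aws-cdk/aws-ec2`; all values are illustrative):

```ts
// capacity purchased from spot instances, capped at the given price per instance-hour
cluster.addCapacity('spot', {
  instanceType: new ec2.InstanceType('t3.large'),
  minCapacity: 2,
  spotPrice: '0.1094',
});
```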
The resources are created in the cluster by running `kubectl apply` from a Python Lambda function. You can configure the environment of this function by specifying it at cluster instantiation. For example, this can be useful in order to configure an HTTP proxy:
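A minimal sketch of setting that environment (the proxy URL is a placeholder):

```ts
const cluster = new eks.Cluster(this, 'hello-eks', {
  version: eks.KubernetesVersion.V1_17,
  kubectlEnvironment: {
    // variables passed to the kubectl handler function
    http_proxy: 'http://proxy.example.com:3128',
    https_proxy: 'http://proxy.example.com:3128',
  },
});
```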
Kubernetes manifests are implemented as CloudFormation resources in the CDK. This means that if the manifest is deleted from your code (or the stack is deleted), the next `cdk deploy` will issue a `kubectl delete` command and the Kubernetes resources in that manifest will be deleted.

#### Caveat

If you have multiple resources in a single `KubernetesManifest`, and one of those **resources** is removed from the manifest, it will not be deleted and will remain orphaned. See [Support Object pruning](https://github.com/aws/aws-cdk/issues/10495) for more details.
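As an illustrative sketch of this caveat, both objects below are part of one `KubernetesManifest`; removing `configMap` from the `manifest` array in a later deployment would leave the already-created ConfigMap orphaned in the cluster:

```ts
const namespace = { apiVersion: 'v1', kind: 'Namespace', metadata: { name: 'my-namespace' } };
const configMap = {
  apiVersion: 'v1',
  kind: 'ConfigMap',
  metadata: { name: 'my-config', namespace: 'my-namespace' },
  data: { greeting: 'hello' },
};

// a single manifest containing two Kubernetes resources
new eks.KubernetesManifest(this, 'my-manifest', {
  cluster,
  manifest: [namespace, configMap],
});
```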
#### Dependencies
@@ -482,9 +481,9 @@ const service = cluster.addManifest('my-service', {
service.node.addDependency(namespace); // will apply `my-namespace` before `my-service`.
```
**NOTE:** when a `KubernetesManifest` includes multiple resources (either directly or through `cluster.addManifest()`, e.g. `cluster.addManifest('foo', r1, r2, r3, ...)`), these resources will be applied as a single manifest via `kubectl` and will be applied sequentially (the standard behavior in `kubectl`).
### Patching Kubernetes Resources
@@ -582,7 +581,7 @@ If the cluster is configured with private-only or private and restricted public
Kubernetes [endpoint access](#endpoint-access), you must also specify:
- `kubectlSecurityGroupId` - the ID of an EC2 security group that is allowed connections to the cluster's control security group. For example, the EKS managed [cluster security group](#cluster-security-group).
- `kubectlPrivateSubnetIds` - a list of private VPC subnet IDs that will be used to access the Kubernetes endpoint.
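A minimal sketch of importing a cluster with these attributes via `eks.Cluster.fromClusterAttributes` (all values below are placeholders):

```ts
const cluster = eks.Cluster.fromClusterAttributes(this, 'ImportedCluster', {
  clusterName: 'my-cluster',
  kubectlRoleArn: 'arn:aws:iam::123456789012:role/kubectl-role',
  // a security group allowed to connect to the cluster's control security group,
  // e.g. the EKS managed cluster security group
  kubectlSecurityGroupId: 'sg-0123456789abcdef0',
  // private subnets used to reach the Kubernetes endpoint
  kubectlPrivateSubnetIds: ['subnet-0aaa1111', 'subnet-0bbb2222'],
});
```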
@@ -598,7 +597,7 @@ users, roles and accounts.
Furthermore, when auto-scaling capacity is added to the cluster (through `cluster.addCapacity` or `cluster.addAutoScalingGroup`), the IAM instance role of the auto-scaling group will be automatically mapped to RBAC so nodes can connect to the cluster. No manual mapping is required.
For example, let's say you want to grant an IAM user administrative privileges on your cluster:
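A minimal sketch of such a mapping (assuming `iam` is imported from `@aws-cdk/aws-iam`; the user and group names are illustrative):

```ts
const adminUser = new iam.User(this, 'AdminUser');

// map the IAM user to the Kubernetes `system:masters` RBAC group
cluster.awsAuth.addUserMapping(adminUser, { groups: ['system:masters'] });
```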
If you want to be able to SSH into your worker nodes, you must already have an SSH key in the region you're connecting to and pass it when you add capacity to the cluster. You must also be able to connect to the hosts (meaning they must have a public IP and you should be allowed to connect to them on port 22):
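A minimal sketch of passing a key when adding capacity (the key pair name is a placeholder and must already exist in the region):

```ts
cluster.addCapacity('ssh-enabled-nodes', {
  instanceType: new ec2.InstanceType('t3.large'),
  minCapacity: 2,
  keyName: 'my-key-pair', // name of an existing EC2 key pair in this region
});
```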
See [SSH into nodes](test/example.ssh-into-nodes.lit.ts) for a code example.
If you want to SSH into nodes in a private subnet, you should set up a
bastion host in a public subnet. That setup is recommended, but is beyond the scope of this documentation.