chore: make examples in aws-ec2 package compilable #5011

Merged: 5 commits, Jan 6, 2020
2 changes: 1 addition & 1 deletion .gitignore
@@ -28,4 +28,4 @@ coverage/
cdk.context.json
.cdk.staging/
cdk.out/

*.tabl.json
Contributor: do we want this in pkglint?

Contributor (author): It's just going to be created here, not in any subpackage. So not really?

63 changes: 60 additions & 3 deletions CONTRIBUTING.md
@@ -36,6 +36,7 @@ and let us know if it's not up-to-date (even better, submit a PR with your corr
- [Updating all Dependencies](#updating-all-dependencies)
- [Running CLI integration tests](#running-cli-integration-tests)
- [API Compatibility Checks](#api-compatibility-checks)
- [Examples](#examples)
- [Feature Flags](#feature-flags)
- [Troubleshooting](#troubleshooting)
- [Debugging](#debugging)
@@ -527,6 +528,62 @@ this API we will not break anyone, because they weren't able to use it. The file
`allowed-breaking-changes.txt` in the root of the repo is an exclusion file that
can be used in these cases.

### Examples

Examples typed in fenced code blocks (looking like `'''ts`, but then with backticks
instead of regular quotes) will be automatically extracted, compiled and translated
to other languages when the bindings are generated.

To do that successfully, the examples must compile. The easiest way to achieve
that is by using a *fixture*, which looks like this:

```
'''ts fixture=with-bucket
bucket.addLifecycleTransition({ ... });
'''
```

While processing the examples, the tool will look for a file called
`rosetta/with-bucket.ts-fixture` in the package directory. This file will be
treated as a regular TypeScript source file, but it must also contain the text
`/// here`, at which point the example will be inserted. The complete file must
compile properly.

Contributor: Perhaps a bit late now, but it would have been better to use the extension format `.ts.fixture`, in line with how template files are usually named (`.html.erb`, `.template.json`, etc.).

Contributor (author): That should be `.fixture.ts` then. I recall not doing that because TSC would then want to compile it, something like that.

Contributor: Hence the suggestion to go `.ts.fixture`. Rationale: the file might not be strictly TypeScript (or at least not a complete TypeScript app). When the fixture is resolved (with the example code), it becomes correct (fully valid) TypeScript. Non-blocking.

Before the `/// here` marker, the fixture should import the necessary packages
and initialize the required variables.
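Conceptually, the tool splices the example into the fixture at the marker. The following is a toy sketch of that substitution — illustrative only, not the actual jsii-rosetta implementation:

```typescript
// Splice an example into a fixture at the '/// here' marker.
// Illustrative sketch only; jsii-rosetta's real logic is more involved.
function applyFixture(fixture: string, example: string): string {
  return fixture
    .split('\n')
    .map(line => (line.trim() === '/// here' ? example : line))
    .join('\n');
}

const fixture = [
  "import s3 = require('@aws-cdk/aws-s3');",
  'declare const bucket: s3.Bucket;',
  '/// here',
].join('\n');

const resolved = applyFixture(fixture, 'bucket.addLifecycleTransition({ /* ... */ });');
```

The resolved file is what actually gets type-checked, which is why the fixture must declare (or construct) every identifier the example refers to.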
Comment on lines +548 to +553 — Contributor: I got a bit lost in the text here, until I saw the ec2 package. Can we clean up the language a bit and/or mention the examples in the ec2 package earlier on?

If no fixture is specified, the fixture with the name
`rosetta/default.ts-fixture` will be used if present. `nofixture` can be used to
opt out of that behavior.
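The selection behavior described above could be sketched like this (a hypothetical helper, not the real jsii-rosetta code):

```typescript
// Map a fenced block's info string (e.g. "ts fixture=with-bucket") to the
// fixture file that should be loaded. Returns undefined for 'nofixture'.
// Hypothetical sketch of the behavior described above.
function fixtureFile(infoString: string): string | undefined {
  const words = infoString.trim().split(/\s+/);
  if (words.includes('nofixture')) {
    return undefined;
  }
  const param = words.find(w => w.startsWith('fixture='));
  const name = param ? param.slice('fixture='.length) : 'default';
  return `rosetta/${name}.ts-fixture`;
}
```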

In an `@example` block, which is unfenced, the first line of the example can
contain three slashes to achieve the same effect:

```
/**
* @example
* /// fixture=with-bucket
* bucket.addLifecycleTransition({ ... });
*/
```
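For such unfenced `@example` blocks, the directive line can be peeled off before compilation; a sketch under the same assumptions (not the actual implementation):

```typescript
// Separate an optional '///' directive line from the example body.
// Hypothetical sketch of the behavior described above.
function splitExample(body: string): { directive?: string; code: string } {
  const lines = body.split('\n');
  if (lines[0].trim().startsWith('///')) {
    return {
      directive: lines[0].trim().slice('///'.length).trim(),
      code: lines.slice(1).join('\n'),
    };
  }
  return { code: body };
}
```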

When including packages in your examples (even the package you're writing the
examples for), use the full package name (e.g. `import s3 =
require('@aws-cdk/aws-s3');`). The example will be compiled in an environment
where all CDK packages are available using their public names. In this way,
it's also possible to import packages that are not in the dependency set of
the current package.

For a practical example of how making sample code compilable works, see the
`aws-ec2` package.

Examples of all packages are extracted and compiled as part of the packaging
step. If you are working on getting rid of example compilation errors of a
single package, you can run `scripts/compile-samples` on the package by itself.

For now, non-compiling examples do not block the build, but at some point
in the future they will.

### Feature Flags

Sometimes we want to introduce new breaking behavior because we believe this is
@@ -559,9 +616,9 @@ The pattern is simple:
5. Under `BREAKING CHANGES` in your commit message describe this new behavior:

```
BREAKING CHANGE: template file names for new projects created through "cdk init"
will use the template artifact ID instead of the physical stack name to enable
multiple stacks to use the same name. This is enabled through the flag
BREAKING CHANGE: template file names for new projects created through "cdk init"
will use the template artifact ID instead of the physical stack name to enable
multiple stacks to use the same name. This is enabled through the flag
`@aws-cdk/core:enableStackNameDuplicates` in newly generated `cdk.json` files.
```

19 changes: 14 additions & 5 deletions pack.sh
@@ -25,22 +25,31 @@ function lerna_scopes() {
done
}

echo "Packaging jsii modules" >&2
# Compile examples with respect to "decdk" directory, as all packages will
# be symlinked there so they can all be included.
echo "Extracting code samples" >&2
node --experimental-worker $(which jsii-rosetta) \
--compile \
--output samples.tabl.json \
--directory packages/decdk \
$(cat $TMPDIR/jsii.txt)
Comment on lines +28 to +35 — Contributor: This should ideally be in the build step of each module, so that I can see errors on invalid example code. If this happens only in the pack step, I won't know about it until a PR build. Why not do this in the build step, as done in compile-samples? Also, this is technically not 'packing': if I were to speed up PR builds by removing packing, we would still want this to be present.

Contributor (author): > Why not do this in the build step as done in compile samples?

Because it's not hermetic. Samples will require everything to have been built and linked into the decdk package.


# Jsii packaging (all at once using jsii-pacmak)
echo "Packaging jsii modules" >&2
jsii-pacmak \
--verbose \
--outdir $distdir/ \
--rosetta-tablet samples.tabl.json \
$(cat $TMPDIR/jsii.txt)

# Non-jsii packaging, which means running 'package' in every individual
# module and rsync'ing the result to the shared dist directory.
# module
echo "Packaging non-jsii modules" >&2
lerna run $(lerna_scopes $(cat $TMPDIR/nonjsii.txt)) --sort --concurrency=1 --stream package

# Finally rsync all 'dist' directories together into a global 'dist' directory
for dir in $(find packages -name dist | grep -v node_modules | grep -v run-wrappers); do
echo "Merging ${dir} into ${distdir}"
rsync -av $dir/ ${distdir}/
echo "Merging ${dir} into ${distdir}" >&2
rsync -a $dir/ ${distdir}/
done

# Remove a JSII aggregate POM that may have snuck past
1 change: 1 addition & 0 deletions package.json
@@ -17,6 +17,7 @@
"fs-extra": "^8.1.0",
"jsii-diff": "^0.21.1",
"jsii-pacmak": "^0.21.1",
"jsii-rosetta": "^0.21.1",
"lerna": "^3.20.2",
"typescript": "~3.7.4"
},
Expand Down
4 changes: 2 additions & 2 deletions packages/@aws-cdk/aws-cloudformation/lib/nested-stack.ts
@@ -122,8 +122,8 @@ export class NestedStack extends Stack {
* - If this is referenced from the parent stack, it will return a token that parses the name from the stack ID.
* - If this is referenced from the context of the nested stack, it will return `{ "Ref": "AWS::StackName" }`
*
* @example mystack-mynestedstack-sggfrhxhum7w
* @attribute
* @example mystack-mynestedstack-sggfrhxhum7w
*/
public get stackName() {
return this._contextualStackName;
@@ -136,8 +136,8 @@
* - If this is referenced from the parent stack, it will return `{ "Ref": "LogicalIdOfNestedStackResource" }`.
* - If this is referenced from the context of the nested stack, it will return `{ "Ref": "AWS::StackId" }`
*
* @example "arn:aws:cloudformation:us-east-2:123456789012:stack/mystack-mynestedstack-sggfrhxhum7w/f449b250-b969-11e0-a185-5081d0136786"
* @attribute
* @example "arn:aws:cloudformation:us-east-2:123456789012:stack/mystack-mynestedstack-sggfrhxhum7w/f449b250-b969-11e0-a185-5081d0136786"
*/
public get stackId() {
return this._contextualStackId;
55 changes: 29 additions & 26 deletions packages/@aws-cdk/aws-ec2/README.md
@@ -12,15 +12,17 @@
The `@aws-cdk/aws-ec2` package contains primitives for setting up networking and
instances.

```ts nofixture
import ec2 = require('@aws-cdk/aws-ec2');
```

## VPC

Most projects need a Virtual Private Cloud to provide security by means of
network partitioning. This is achieved by creating an instance of
`Vpc`:

```ts
import ec2 = require('@aws-cdk/aws-ec2');

const vpc = new ec2.Vpc(this, 'VPC');
```

@@ -186,7 +188,6 @@ by setting the `reserved` subnetConfiguration property to true, as shown
below:

```ts
import ec2 = require('@aws-cdk/aws-ec2');
const vpc = new ec2.Vpc(this, 'TheVPC', {
natGateways: 1,
subnetConfiguration: [
@@ -257,7 +258,7 @@ which you can add egress traffic rules.

You can manipulate Security Groups directly:

```ts
```ts fixture=with-vpc
const mySecurityGroup = new ec2.SecurityGroup(this, 'SecurityGroup', {
vpc,
description: 'Allow ssh access to ec2 instances',
@@ -275,7 +276,7 @@ have security groups, you have to add an **Egress** rule to one Security Group,
and an **Ingress** rule to the other. The connections object will automatically
take care of this for you:

```ts
```ts fixture=conns
// Allow connections from anywhere
loadBalancer.connections.allowFromAnyIpv4(ec2.Port.tcp(443), 'Allow inbound HTTPS');

@@ -290,23 +291,23 @@ appFleet.connections.allowTo(dbFleet, ec2.Port.tcp(443), 'App can call database'

There are various classes that implement the connection peer part:

```ts
```ts fixture=conns
// Simple connection peers
let peer = ec2.Peer.ipv4("10.0.0.0/16");
let peer = ec2.Peer.anyIpv4();
let peer = ec2.Peer.ipv6("::/0");
let peer = ec2.Peer.anyIpv6();
let peer = ec2.Peer.prefixList("pl-12345");
fleet.connections.allowTo(peer, ec2.Port.tcp(443), 'Allow outbound HTTPS');
peer = ec2.Peer.anyIpv4();
peer = ec2.Peer.ipv6("::0/0");
peer = ec2.Peer.anyIpv6();
peer = ec2.Peer.prefixList("pl-12345");
appFleet.connections.allowTo(peer, ec2.Port.tcp(443), 'Allow outbound HTTPS');
```

Any object that has a security group can itself be used as a connection peer:

```ts
```ts fixture=conns
// These automatically create appropriate ingress and egress rules in both security groups
fleet1.connections.allowTo(fleet2, ec2.Port.tcp(80), 'Allow between fleets');

fleet.connections.allowFromAnyIpv4(ec2.Port.tcp(80), 'Allow from load balancer');
appFleet.connections.allowFromAnyIpv4(ec2.Port.tcp(80), 'Allow from load balancer');
```

### Port Ranges
@@ -336,12 +337,12 @@ If the object you're calling the peering method on has a default port associated

For example:

```ts
```ts fixture=conns
// Port implicit in listener
listener.connections.allowDefaultPortFromAnyIpv4('Allow public');

// Port implicit in peer
fleet.connections.allowDefaultPortTo(rdsDatabase, 'Fleet can access database');
appFleet.connections.allowDefaultPortTo(rdsDatabase, 'Fleet can access database');
```

## Machine Images (AMIs)
@@ -368,7 +369,7 @@ examples of things you might want to use:
Create your VPC with VPN connections by specifying the `vpnConnections` props (keys are construct `id`s):

```ts
const vpc = new ec2.Vpc(stack, 'MyVpc', {
const vpc = new ec2.Vpc(this, 'MyVpc', {
vpnConnections: {
dynamic: { // Dynamic routing (BGP)
ip: '1.2.3.4'
@@ -387,13 +388,13 @@ const vpc = new ec2.Vpc(stack, 'MyVpc', {
To create a VPC that can accept VPN connections, set `vpnGateway` to `true`:

```ts
const vpc = new ec2.Vpc(stack, 'MyVpc', {
const vpc = new ec2.Vpc(this, 'MyVpc', {
vpnGateway: true
});
```

VPN connections can then be added:
```ts
```ts fixture=with-vpc
vpc.addVpnConnection('Dynamic', {
ip: '1.2.3.4'
});
@@ -402,9 +403,10 @@ vpc.addVpnConnection('Dynamic', {
Routes will be propagated on the route tables associated with the private subnets.

VPN connections expose [metrics (cloudwatch.Metric)](https://github.com/aws/aws-cdk/blob/master/packages/%40aws-cdk/aws-cloudwatch/README.md) across all tunnels in the account/region and per connection:
```ts

```ts fixture=with-vpc
// Across all tunnels in the account/region
const allDataOut = VpnConnection.metricAllTunnelDataOut();
const allDataOut = ec2.VpnConnection.metricAllTunnelDataOut();

// For a specific vpn connection
const vpnConnection = vpc.addVpnConnection('Dynamic', {
@@ -425,8 +427,9 @@ By default, interface VPC endpoints create a new security group and traffic is **not**
automatically allowed from the VPC CIDR.

Use the `connections` object to allow traffic to flow to the endpoint:
```ts
myEndpoint.connections.allowDefaultPortFrom(...);

```ts fixture=conns
myEndpoint.connections.allowDefaultPortFromAnyIpv4();
```

Alternatively, existing security groups can be used by specifying the `securityGroups` prop.
Expand All @@ -437,17 +440,17 @@ You can use bastion hosts using a standard SSH connection targetting port 22 on
feature of AWS Systems Manager Session Manager, which does not need an opened security group. (https://aws.amazon.com/about-aws/whats-new/2019/07/session-manager-launches-tunneling-support-for-ssh-and-scp/)

A default bastion host for use via SSM can be configured like:
```ts
```ts fixture=with-vpc
const host = new ec2.BastionHostLinux(this, 'BastionHost', { vpc });
```

If you want to connect from the internet using SSH, you need to place the host into a public subnet. You can then configure allowed source hosts.
```ts
```ts fixture=with-vpc
const host = new ec2.BastionHostLinux(this, 'BastionHost', {
vpc,
subnetSelection: { subnetType: SubnetType.PUBLIC },
subnetSelection: { subnetType: ec2.SubnetType.PUBLIC },
});
host.allowSshAccessFrom(Peer.ipv4('1.2.3.4/32'));
host.allowSshAccessFrom(ec2.Peer.ipv4('1.2.3.4/32'));
```

As there are no SSH public keys deployed on this machine, you need to use [EC2 Instance Connect](https://aws.amazon.com/de/blogs/compute/new-using-amazon-ec2-instance-connect-for-ssh-access-to-your-ec2-instances/)
7 changes: 3 additions & 4 deletions packages/@aws-cdk/aws-ec2/lib/instance.ts
@@ -142,10 +142,9 @@ export interface InstanceProps {
* The role must be assumable by the service principal `ec2.amazonaws.com`:
*
* @example
*
* const role = new iam.Role(this, 'MyRole', {
* assumedBy: new iam.ServicePrincipal('ec2.amazonaws.com')
* });
* const role = new iam.Role(this, 'MyRole', {
* assumedBy: new iam.ServicePrincipal('ec2.amazonaws.com')
* });
*
* @default - A role will automatically be created, it can be accessed via the `role` property
*/
6 changes: 3 additions & 3 deletions packages/@aws-cdk/aws-ec2/lib/nat.ts
@@ -111,9 +111,9 @@ export interface NatInstanceProps {
* If you have a specific AMI ID you want to use, pass a `GenericLinuxImage`. For example:
*
* ```ts
* NatProvider.instance({
* instanceType: new InstanceType('t3.micro'),
* machineImage: new GenericLinuxImage({
* ec2.NatProvider.instance({
* instanceType: new ec2.InstanceType('t3.micro'),
* machineImage: new ec2.GenericLinuxImage({
* 'us-east-2': 'ami-0f9c61b5a562a16af'
* })
* })