
chore: make examples compile #18020

Merged · 14 commits · Jan 10, 2022
19 changes: 11 additions & 8 deletions packages/@aws-cdk/aws-backup/README.md
@@ -32,7 +32,8 @@ const plan = backup.BackupPlan.dailyWeeklyMonthly5YearRetention(this, 'Plan');

Assigning resources to a plan can be done with `addSelection()`:

```ts fixture=with-plan
```ts
declare const plan: backup.BackupPlan;
const myTable = dynamodb.Table.fromTableName(this, 'Table', 'myTableName');
const myCoolConstruct = new Construct(this, 'MyCoolConstruct');

@@ -50,22 +51,24 @@ created for the selection. The `BackupSelection` implements `IGrantable`.

To add rules to a plan, use `addRule()`:

```ts fixture=with-plan
```ts
declare const plan: backup.BackupPlan;
plan.addRule(new backup.BackupPlanRule({
completionWindow: Duration.hours(2),
startWindow: Duration.hours(1),
scheduleExpression: events.Schedule.cron({ // Only cron expressions are supported
day: '15',
hour: '3',
minute: '30'
minute: '30',
}),
moveToColdStorageAfter: Duration.days(30)
moveToColdStorageAfter: Duration.days(30),
}));
```

Ready-made rules are also available:

```ts fixture=with-plan
```ts
declare const plan: backup.BackupPlan;
plan.addRule(backup.BackupPlanRule.daily());
plan.addRule(backup.BackupPlanRule.weekly());
```
@@ -139,7 +142,7 @@ const vault = new backup.BackupVault(this, 'Vault', {
},
}),
],
});
}),
})
```

@@ -153,8 +156,8 @@ new backup.BackupVault(this, 'Vault', {
blockRecoveryPointDeletion: true,
});

const plan = backup.BackupPlan.dailyMonthly1YearRetention(this, 'Plan');
plan.backupVault.blockRecoveryPointDeletion();
declare const backupVault: backup.BackupVault;
backupVault.blockRecoveryPointDeletion();
```

By default access is not restricted.
9 changes: 8 additions & 1 deletion packages/@aws-cdk/aws-backup/package.json
@@ -28,7 +28,14 @@
]
}
},
"projectReferences": true
"projectReferences": true,
"metadata": {
"jsii": {
"rosetta": {
"strict": true
}
}
}
},
"repository": {
"type": "git",
2 changes: 2 additions & 0 deletions packages/@aws-cdk/aws-backup/rosetta/default.ts-fixture
@@ -3,6 +3,8 @@ import { Duration, RemovalPolicy, Stack } from '@aws-cdk/core';
import { Construct } from 'constructs';
import * as backup from '@aws-cdk/aws-backup';
import * as iam from '@aws-cdk/aws-iam';
import * as dynamodb from '@aws-cdk/aws-dynamodb';
import * as events from '@aws-cdk/aws-events';
import * as kms from '@aws-cdk/aws-kms';
import * as sns from '@aws-cdk/aws-sns';

16 changes: 0 additions & 16 deletions packages/@aws-cdk/aws-backup/rosetta/with-plan.ts-fixture

This file was deleted.
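For context, the deleted `with-plan` fixture would have looked roughly like the following — a hypothetical reconstruction inferred from how the snippets used it, since the actual file contents are not shown in this diff:

```typescript
// Hypothetical reconstruction of the deleted with-plan fixture (assumption:
// it matched the default fixture but pre-built a `plan` for the snippets).
import { Duration, Stack } from '@aws-cdk/core';
import { Construct } from 'constructs';
import * as backup from '@aws-cdk/aws-backup';
import * as events from '@aws-cdk/aws-events';

class Fixture extends Stack {
  constructor(scope: Construct, id: string) {
    super(scope, id);
    const plan = backup.BackupPlan.dailyWeeklyMonthly5YearRetention(this, 'Plan');

    /// here
  }
}
```

With the fixture gone, each snippet instead opens with `declare const plan: backup.BackupPlan;`, which keeps the examples self-contained and visible to readers in every language.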

19 changes: 10 additions & 9 deletions packages/@aws-cdk/aws-s3-assets/README.md
@@ -95,18 +95,19 @@ method `tryBundle()` which should return `true` if local bundling was performed.
If `false` is returned, Docker bundling will be done:

```ts
function tryBundle(outputDir: string, options: BundlingOptions) {
const canRunLocally = true // replace with actual logic
if (canRunLocally) {
// perform local bundling here
return true;
}
return false;
},

new assets.Asset(this, 'BundledAsset', {
path: '/path/to/asset',
bundling: {
local: {
tryBundle(outputDir: string, options: BundlingOptions) {
if (canRunLocally) {
// perform local bundling here
return true;
}
return false;
},
},
local: {tryBundle},
> **Contributor:** Will this compile in Python too?

> **@kaizencc (Author), Dec 14, 2021:** It passes `rosetta:extract --compile`, so it should. However, this entire PR is waiting on a fix in backup I'm going to make very soon (which is why the build is failing).
>
> edit: nevermind, no fix necessary.

> **Contributor:** This won't work, actually. If Rosetta doesn't flag this, that's a bug in Rosetta.
>
> You need to declare an explicit class that implements the right interface.

> **Contributor:** @kaizen3031593 for context: #17928

> **@kaizencc (Author):** Rip rosetta. I can investigate what is going on in rosetta later. In fact, I looked back at what I did in lambda-go, which also included a bundling function. I recall that `rosetta:extract --compile` did not pass there, and what I ended up doing was making that example a text snippet and saying it is only available in TypeScript.
>
> Which I guess is bad for the Python folks, because I'm basically saying: sorry, this doesn't work in Python.
>
> > You need to declare an explicit class that implements the right interface.
>
> That seems to be what #17928 tried, and it did not work.
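Following the reviewer's suggestion, an explicit class implementing the local-bundling interface would look roughly like this. It is a sketch using stand-in interface shapes, since the real `ILocalBundling` and `BundlingOptions` live in `@aws-cdk/core`:

```typescript
// Stand-in shapes for the real @aws-cdk/core types (assumption: close enough
// to BundlingOptions / ILocalBundling for illustration).
interface BundlingOptions { image?: string; }
interface ILocalBundling {
  tryBundle(outputDir: string, options: BundlingOptions): boolean;
}

// An explicit named class, as the reviewer suggests, instead of an inline
// object literal with a shorthand method.
class MyLocalBundler implements ILocalBundling {
  public tryBundle(outputDir: string, _options: BundlingOptions): boolean {
    const canRunLocally = false; // replace with a real check (e.g. tool availability)
    if (!canRunLocally) {
      return false; // signal the CDK to fall back to Docker bundling
    }
    // ...write bundled output into outputDir here...
    return true;
  }
}
```

A named class maps naturally onto every jsii target language, which is presumably why this form translates where the inline literal does not.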

// Docker bundling fallback
image: DockerImage.fromRegistry('alpine'),
entrypoint: ['/bin/sh', '-c'],
4 changes: 2 additions & 2 deletions packages/@aws-cdk/aws-s3-assets/lib/asset.ts
@@ -76,13 +76,13 @@ export class Asset extends CoreConstruct implements cdk.IAsset {

/**
* Attribute which represents the S3 HTTP URL of this asset.
* @example https://s3.us-west-1.amazonaws.com/bucket/key
* For example, `https://s3.us-west-1.amazonaws.com/bucket/key`
*/
public readonly httpUrl: string;

/**
* Attribute which represents the S3 URL of this asset.
* @example s3://bucket/key
* For example, `s3://bucket/key`
*/
public readonly s3ObjectUrl: string;

9 changes: 8 additions & 1 deletion packages/@aws-cdk/aws-s3-assets/package.json
@@ -28,7 +28,14 @@
]
}
},
"projectReferences": true
"projectReferences": true,
"metadata": {
"jsii": {
"rosetta": {
"strict": true
}
}
}
},
"repository": {
"type": "git",
12 changes: 12 additions & 0 deletions packages/@aws-cdk/aws-s3-assets/rosetta/default.ts-fixture
@@ -0,0 +1,12 @@
// Fixture with packages imported, but nothing else
import { Construct } from 'constructs';
import { BundlingOptions, BundlingOutput, DockerImage, Stack } from '@aws-cdk/core';
import * as assets from '@aws-cdk/aws-s3-assets';

class Fixture extends Stack {
constructor(scope: Construct, id: string) {
super(scope, id);

/// here
}
}
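Rosetta splices each README snippet into the fixture at the `/// here` marker before compiling it. Conceptually, the mechanism is something like the following — an illustrative sketch, not Rosetta's actual implementation:

```typescript
// Illustrative sketch of fixture splicing (not Rosetta's real code):
// replace the `/// here` marker line with the snippet, preserving indentation.
function spliceSnippet(fixture: string, snippet: string): string {
  return fixture.replace(/^(\s*)\/\/\/ here$/m, (_match, indent: string) =>
    snippet.split('\n').map(line => indent + line).join('\n'));
}

const fixture = [
  'class Fixture {',
  '  constructor() {',
  '    /// here',
  '  }',
  '}',
].join('\n');

const result = spliceSnippet(fixture, 'const x = 1;');
// `result` now contains `const x = 1;` indented inside the constructor.
```

This is why the fixture must import every package a snippet might reference: the combined file is compiled as one TypeScript program.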
58 changes: 34 additions & 24 deletions packages/@aws-cdk/aws-s3-deployment/README.md
@@ -20,13 +20,13 @@ enabled and populates it from a local directory on disk.
```ts
const websiteBucket = new s3.Bucket(this, 'WebsiteBucket', {
websiteIndexDocument: 'index.html',
publicReadAccess: true
publicReadAccess: true,
});

new s3deploy.BucketDeployment(this, 'DeployWebsite', {
sources: [s3deploy.Source.asset('./website-dist')],
destinationBucket: websiteBucket,
destinationKeyPrefix: 'web/static' // optional prefix in destination bucket
destinationKeyPrefix: 'web/static', // optional prefix in destination bucket
});
```

@@ -110,6 +110,7 @@ when the `BucketDeployment` resource is created or updated. You can use the opti
this behavior, in which case the files will not be deleted.

```ts
declare const destinationBucket: s3.Bucket;
new s3deploy.BucketDeployment(this, 'DeployMeWithoutDeletingFilesOnDestination', {
sources: [s3deploy.Source.asset(path.join(__dirname, 'my-website'))],
destinationBucket,
@@ -122,17 +123,18 @@ each with its own characteristics. For example, you can set different cache-cont
based on file extensions:

```ts
new BucketDeployment(this, 'BucketDeployment', {
sources: [Source.asset('./website', { exclude: ['index.html'] })],
destinationBucket: bucket,
cacheControl: [CacheControl.fromString('max-age=31536000,public,immutable')],
declare const destinationBucket: s3.Bucket;
new s3deploy.BucketDeployment(this, 'BucketDeployment', {
sources: [s3deploy.Source.asset('./website', { exclude: ['index.html'] })],
destinationBucket,
cacheControl: [s3deploy.CacheControl.fromString('max-age=31536000,public,immutable')],
prune: false,
});

new BucketDeployment(this, 'HTMLBucketDeployment', {
sources: [Source.asset('./website', { exclude: ['*', '!index.html'] })],
destinationBucket: bucket,
cacheControl: [CacheControl.fromString('max-age=0,no-cache,no-store,must-revalidate')],
new s3deploy.BucketDeployment(this, 'HTMLBucketDeployment', {
sources: [s3deploy.Source.asset('./website', { exclude: ['*', '!index.html'] })],
destinationBucket,
cacheControl: [s3deploy.CacheControl.fromString('max-age=0,no-cache,no-store,must-revalidate')],
prune: false,
});
```
@@ -142,19 +144,21 @@ new BucketDeployment(this, 'HTMLBucketDeployment', {
There are two points at which filters are evaluated in a deployment: asset bundling and the actual deployment. If you simply want to exclude files in the asset bundling process, you should leverage the `exclude` property of `AssetOptions` when defining your source:

```ts
new BucketDeployment(this, 'HTMLBucketDeployment', {
sources: [Source.asset('./website', { exclude: ['*', '!index.html'] })],
destinationBucket: bucket,
declare const destinationBucket: s3.Bucket;
new s3deploy.BucketDeployment(this, 'HTMLBucketDeployment', {
sources: [s3deploy.Source.asset('./website', { exclude: ['*', '!index.html'] })],
destinationBucket,
});
```

If you want to specify filters to be used in the deployment process, you can use the `exclude` and `include` filters on `BucketDeployment`. If excluded, these files will not be deployed to the destination bucket. In addition, if the file already exists in the destination bucket, it will not be deleted if you are using the `prune` option:

```ts
declare const destinationBucket: s3.Bucket;
new s3deploy.BucketDeployment(this, 'DeployButExcludeSpecificFiles', {
sources: [s3deploy.Source.asset(path.join(__dirname, 'my-website'))],
destinationBucket,
exclude: ['*.txt']
exclude: ['*.txt'],
});
```

@@ -189,7 +193,7 @@ and [`aws s3 sync` documentation](https://docs.aws.amazon.com/cli/latest/referen
```ts
const websiteBucket = new s3.Bucket(this, 'WebsiteBucket', {
websiteIndexDocument: 'index.html',
publicReadAccess: true
publicReadAccess: true,
});

new s3deploy.BucketDeployment(this, 'DeployWebsite', {
@@ -201,9 +205,12 @@ new s3deploy.BucketDeployment(this, 'DeployWebsite', {
// system-defined metadata
contentType: "text/html",
contentLanguage: "en",
storageClass: StorageClass.INTELLIGENT_TIERING,
serverSideEncryption: ServerSideEncryption.AES_256,
cacheControl: [CacheControl.setPublic(), CacheControl.maxAge(cdk.Duration.hours(1))],
storageClass: s3deploy.StorageClass.INTELLIGENT_TIERING,
serverSideEncryption: s3deploy.ServerSideEncryption.AES_256,
cacheControl: [
s3deploy.CacheControl.setPublic(),
s3deploy.CacheControl.maxAge(Duration.hours(1)),
],
accessControl: s3.BucketAccessControl.BUCKET_OWNER_FULL_CONTROL,
});
```
@@ -250,13 +257,16 @@ Please note that creating VPC inline may cause stack deletion failures. It is sh
To avoid such condition, keep your network infra (VPC) in a separate stack and pass as props.

```ts
declare const destinationBucket: s3.Bucket;
declare const vpc: ec2.Vpc;

new s3deploy.BucketDeployment(this, 'DeployMeWithEfsStorage', {
sources: [s3deploy.Source.asset(path.join(__dirname, 'my-website'))],
destinationBucket,
destinationKeyPrefix: 'efs/',
useEfs: true,
vpc: new ec2.Vpc(this, 'Vpc'),
retainOnDelete: false,
sources: [s3deploy.Source.asset(path.join(__dirname, 'my-website'))],
destinationBucket,
destinationKeyPrefix: 'efs/',
useEfs: true,
vpc,
retainOnDelete: false,
});
```

9 changes: 8 additions & 1 deletion packages/@aws-cdk/aws-s3-deployment/package.json
@@ -28,7 +28,14 @@
]
}
},
"projectReferences": true
"projectReferences": true,
"metadata": {
"jsii": {
"rosetta": {
"strict": true
}
}
}
},
"repository": {
"type": "git",
15 changes: 15 additions & 0 deletions packages/@aws-cdk/aws-s3-deployment/rosetta/default.ts-fixture
@@ -0,0 +1,15 @@
// Fixture with packages imported, but nothing else
import { Duration, Stack } from '@aws-cdk/core';
import { Construct } from 'constructs';
import * as s3deploy from '@aws-cdk/aws-s3-deployment';
import * as s3 from '@aws-cdk/aws-s3';
import * as ec2 from '@aws-cdk/aws-ec2';
import * as path from 'path';

class Fixture extends Stack {
constructor(scope: Construct, id: string) {
super(scope, id);

/// here
}
}
16 changes: 8 additions & 8 deletions packages/@aws-cdk/aws-s3-notifications/README.md
@@ -18,24 +18,24 @@ The following example shows how to send a notification to an SNS
topic when an object is created in an S3 bucket:

```ts
import * as s3n from '@aws-cdk/aws-s3-notifications';
import * as sns from '@aws-cdk/aws-sns';

const bucket = new s3.Bucket(stack, 'Bucket');
const topic = new sns.Topic(stack, 'Topic');
const bucket = new s3.Bucket(this, 'Bucket');
const topic = new sns.Topic(this, 'Topic');

bucket.addEventNotification(s3.EventType.OBJECT_CREATED_PUT, new s3n.SnsDestination(topic));
```

The following example shows how to send a notification to a Lambda function when an object is created in an S3 bucket:

```ts
import * as s3n from '@aws-cdk/aws-s3-notifications';
import * as lambda from '@aws-cdk/aws-lambda';

const bucket = new s3.Bucket(stack, 'Bucket');
const fn = new Function(this, 'MyFunction', {
runtime: Runtime.NODEJS_12_X,
const bucket = new s3.Bucket(this, 'Bucket');
const fn = new lambda.Function(this, 'MyFunction', {
runtime: lambda.Runtime.NODEJS_12_X,
handler: 'index.handler',
code: Code.fromAsset(path.join(__dirname, 'lambda-handler')),
code: lambda.Code.fromAsset(path.join(__dirname, 'lambda-handler')),
});

bucket.addEventNotification(s3.EventType.OBJECT_CREATED, new s3n.LambdaDestination(fn));
9 changes: 8 additions & 1 deletion packages/@aws-cdk/aws-s3-notifications/package.json
@@ -28,7 +28,14 @@
]
}
},
"projectReferences": true
"projectReferences": true,
"metadata": {
"jsii": {
"rosetta": {
"strict": true
}
}
}
},
"repository": {
"type": "git",
Expand Down
14 changes: 14 additions & 0 deletions packages/@aws-cdk/aws-s3-notifications/rosetta/default.ts-fixture
@@ -0,0 +1,14 @@
// Fixture with packages imported, but nothing else
import { Stack } from '@aws-cdk/core';
import { Construct } from 'constructs';
import * as s3n from '@aws-cdk/aws-s3-notifications';
import * as s3 from '@aws-cdk/aws-s3';
import * as path from 'path';

class Fixture extends Stack {
constructor(scope: Construct, id: string) {
super(scope, id);

/// here
}
}