refactor code fences to use proper language or command shortcode
alexrashed committed Oct 7, 2021
1 parent d0b6107 commit f2f7ff3
Showing 11 changed files with 57 additions and 169 deletions.
11 changes: 6 additions & 5 deletions content/en/docs/Integrations/architect/index.md
@@ -17,11 +17,12 @@ If you are adapting an existing configuration, you might be able to skip certain
## Example

### Setup
To use Architect in conjunction with Localstack, simply install the ```arclocal``` command (sources can be found [here](https://github.com/localstack/architect-local)).
```
npm install -g architect-local @architect/architect aws-sdk
```
The ``` arclocal``` command has the same usage as the ```arc``` command, so you can start right away.
To use Architect in conjunction with LocalStack, simply install the `arclocal` command (sources can be found [here](https://github.com/localstack/architect-local)).
{{< command >}}
$ npm install -g architect-local @architect/architect aws-sdk
{{< /command >}}

The `arclocal` command has the same usage as the `arc` command, so you can start right away.

Create a test directory

6 changes: 3 additions & 3 deletions content/en/docs/Integrations/pulumi/index.md
@@ -52,8 +52,8 @@ Installing dependencies...

This will create the following directory structure.

```language
% tree -L 1
{{< command >}}
$ tree -L 1
.
├── index.ts
├── node_modules
@@ -62,7 +62,7 @@ This will create the following directory structure.
├── Pulumi.dev.yaml
├── Pulumi.yaml
└── tsconfig.json
```
{{< /command >}}

Now edit your stack configuration `Pulumi.dev.yaml` as follows:

@@ -245,7 +245,7 @@ Let's configure it to lookup our function Beans by HTTP method and path, create
new `application.properties` file under `src/main/resources/application.properties`
with the following content:

```properties
```env
spring.main.banner-mode=off
spring.cloud.function.definition=functionRouter
spring.cloud.function.routing-expression=headers['httpMethod'].concat(' ').concat(headers['path'])
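The `routing-expression` above concatenates the HTTP method and the request path into a single routing key. As a rough sketch of what that SpEL expression evaluates to (illustrative Python, not Spring code; the sample headers are hypothetical):

```python
# Illustrative sketch of the routing key that the SpEL expression
# headers['httpMethod'].concat(' ').concat(headers['path']) produces;
# Spring's functionRouter uses this key to pick the target function bean.
def routing_key(headers):
    return headers["httpMethod"] + " " + headers["path"]

print(routing_key({"httpMethod": "GET", "path": "/products"}))  # prints "GET /products"
```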
12 changes: 6 additions & 6 deletions content/en/docs/Integrations/terraform/index.md
@@ -34,7 +34,7 @@ The following changes go into this file.

First, we have to specify mock credentials for the AWS provider:

```
```hcl
provider "aws" {
access_key = "test"
@@ -48,7 +48,7 @@ provider "aws" {
Second, we need to avoid issues with routing and authentication (as we do not need it).
Therefore we need to supply some general parameters:

```
```hcl
provider "aws" {
access_key = "test"
@@ -66,7 +66,7 @@ provider "aws" {
Additionally, we have to point the individual services to LocalStack.
In case of S3, this looks like the following snippet

```
```hcl
endpoints {
s3 = "http://localhost:4566"
}
@@ -79,7 +79,7 @@ In case of S3, this looks like the following snippet
### S3 Bucket

Now we are adding a minimal s3 bucket outside the provider
```
```hcl
resource "aws_s3_bucket" "test-bucket" {
bucket = "my-bucket"
}
@@ -89,7 +89,7 @@ resource "aws_s3_bucket" "test-bucket" {
### Final Configuration

The final (minimal) configuration to deploy an s3 bucket thus looks like this
```
```hcl
provider "aws" {
access_key = "mock_access_key"
@@ -128,7 +128,7 @@ $ terraform deploy

Here is a configuration example with additional endpoints:

```
```hcl
provider "aws" {
access_key = "test"
secret_key = "test"
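Since LocalStack serves every service on the same edge port (4566 by default), the entries in the `endpoints` block differ only by service name. A small illustrative Python helper (not part of Terraform or LocalStack) that renders such a block from a list of service names:

```python
def render_endpoints(services, edge_port=4566):
    # Build a Terraform-style `endpoints` block pointing every listed
    # service at LocalStack's single edge port.
    lines = ["endpoints {"]
    for name in services:
        lines.append(f'  {name} = "http://localhost:{edge_port}"')
    lines.append("}")
    return "\n".join(lines)

print(render_endpoints(["s3", "dynamodb"]))
```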
54 changes: 28 additions & 26 deletions content/en/docs/Local AWS Services/cognito/index.md
@@ -17,7 +17,7 @@ LocalStack Pro contains basic support for authentication via Cognito. You can cr
{{< /alert >}}

First, start up LocalStack. In addition to the normal setup, we need to pass several SMTP settings as environment variables.
```
```env
SMTP_HOST=<smtp-host-address>
SMTP_USER=<email-user-name>
SMTP_PASS=<email-password>
@@ -28,12 +28,12 @@ Don't forget to pass Cognito as a service as well.
## Creating a User Pool

Just as with aws, you can create a User Pool in LocalStack via
```
awslocal cognito-idp create-user-pool --pool-name test
```
{{< command >}}
$ awslocal cognito-idp create-user-pool --pool-name test
{{< /command >}}
The response should look similar to this

```
```json
"UserPool": {
"Id": "us-east-1_fd924693e9b04f549f989283123a29c2",
"Name": "test",
@@ -60,28 +60,31 @@ The response should look similar to this
"AllowAdminCreateUserOnly": false
},
"Arn": "arn:aws:cognito-idp:us-east-1:000000000000:userpool/us-east-1_fd924693e9b04f549f989283123a29c2"
}
```
We will need the pool-id for further operations, so save it in a ```pool_id``` variable.
We will need the pool-id for further operations, so save it in a `pool_id` variable.
Alternatively, you can also use a JSON processor like [jq](https://stedolan.github.io/jq/) to directly extract the necessary information when creating a pool.
```
pool_id=$(awslocal cognito-idp create-user-pool --pool-name test | jq -rc ".UserPool.Id")
```

{{< command >}}
$ pool_id=$(awslocal cognito-idp create-user-pool --pool-name test | jq -rc ".UserPool.Id")
{{< /command >}}
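The jq filter `.UserPool.Id` just walks the JSON response. If you prefer to stay in Python, the same extraction could be sketched like this (the response below is shortened from the sample output above):

```python
import json

# Sketch of the extraction jq performs with ".UserPool.Id", applied to a
# response shaped like the sample shown earlier (shortened here).
response = '{"UserPool": {"Id": "us-east-1_fd924693e9b04f549f989283123a29c2", "Name": "test"}}'
pool_id = json.loads(response)["UserPool"]["Id"]
print(pool_id)  # prints "us-east-1_fd924693e9b04f549f989283123a29c2"
```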

## Adding a Client

Now we add a client to our newly created pool. We will also need the ID of the created client for the next step. The complete command for client creation with subsequent ID extraction is therefore

```
client_id=$(awslocal cognito-idp create-user-pool-client --user-pool-id $pool_id --client-name test-client | jq -rc ".UserPoolClient.ClientId")
```
{{< command >}}
$ client_id=$(awslocal cognito-idp create-user-pool-client --user-pool-id $pool_id --client-name test-client | jq -rc ".UserPoolClient.ClientId")
{{< /command >}}

## Signing up and confirming a user

With these steps already taken, we can now sign up a user.
```
awslocal cognito-idp sign-up --client-id $client_id --username example_user --password 12345678 --user-attributes Name=email,Value=<[email protected]>
```
{{< command >}}
$ awslocal cognito-idp sign-up --client-id $client_id --username example_user --password 12345678 --user-attributes Name=email,Value=<[email protected]>
{{< /command >}}
The response should look similar to this
```
```json
{
"UserConfirmed": false,
"UserSub": "5fdbe1d5-7901-4fee-9d1d-518103789c94"
@@ -91,17 +94,17 @@ and you should have received a new e-mail!

As you can see, our user is still unconfirmed. We can change this with the following instruction.

```
awslocal cognito-idp confirm-sign-up --client-id $client_id --username example_user --confirmation-code <received-confirmation-code>
```
{{< command >}}
$ awslocal cognito-idp confirm-sign-up --client-id $client_id --username example_user --confirmation-code <received-confirmation-code>
{{< /command >}}
The verification code for the user is in the e-mail you received. Additionally, LocalStack prints out the verification code in the console.

The above command doesn't return an answer; you need to check the pool to see whether it was successful
```
awslocal cognito-idp list-users --user-pool-id $pool_id
```
{{< command >}}
$ awslocal cognito-idp list-users --user-pool-id $pool_id
{{< /command >}}
which should return something similar to this
<pre>
```json {hl_lines=[20]}
{
"Users": [
{
@@ -121,12 +124,11 @@ which should return something similar to this
}
],
"Enabled": true,
<b>"UserStatus": "CONFIRMED"</b>
"UserStatus": "CONFIRMED"
}
]
}

</pre>
```

## OAuth Flows via Cognito Login Form

1 change: 0 additions & 1 deletion content/en/docs/Local AWS Services/elasticsearch/index.md
@@ -72,7 +72,6 @@ In the LocalStack log you will see something like
2021-10-01T21:14:27:INFO:localstack.services.install: Installing Elasticsearch plugin analysis-stempel
2021-10-01T21:14:45:INFO:localstack.services.install: Installing Elasticsearch plugin analysis-ukrainian
2021-10-01T21:15:01:INFO:localstack.services.es.cluster: starting elasticsearch: /opt/code/localstack/localstack/infra/elasticsearch/bin/elasticsearch -E http.port=59237 -E http.publish_port=59237 -E transport.port=0 -E network.host=127.0.0.1 -E http.compression=false -E path.data="/opt/code/localstack/localstack/infra/elasticsearch/data" -E path.repo="/tmp/localstack/es_backup" -E xpack.ml.enabled=false with env {'ES_JAVA_OPTS': '-Xms200m -Xmx600m', 'ES_TMPDIR': '/opt/code/localstack/localstack/infra/elasticsearch/tmp'}
```

and after some time, you should see that the `Created` state of the domain is set to `true`:
2 changes: 1 addition & 1 deletion content/en/docs/Local AWS Services/glue/index.md
@@ -68,7 +68,7 @@ For a more detailed example illustrating how to run a local Glue PySpark job, pl
The Glue data catalog is integrated with Athena, and the database/table definitions can be imported via the `import-catalog-to-glue` API.

Assume you are running the following Athena queries to create databases and table definitions:
```
```sql
CREATE DATABASE db2
CREATE EXTERNAL TABLE db2.table1 (a1 Date, a2 STRING, a3 INT) LOCATION 's3://test/table1'
CREATE EXTERNAL TABLE db2.table2 (a1 Date, a2 STRING, a3 INT) LOCATION 's3://test/table2'
114 changes: 0 additions & 114 deletions content/en/docs/LocalStack Tools/Lambda Tools/debugging.md.bak

This file was deleted.

18 changes: 9 additions & 9 deletions content/en/docs/LocalStack Tools/Lambda Tools/debugging/index.md
@@ -38,11 +38,11 @@ There, the necessary code fragments for enabling debugging are already present.
### Configure LocalStack for remote Python debugging

First, make sure that LocalStack is started with the following configuration (see the [Configuration docs]({{< ref "configuration#lambda" >}}) for more information):
```sh
LAMBDA_REMOTE_DOCKER=0 \
{{< command >}}
$ LAMBDA_REMOTE_DOCKER=0 \
LAMBDA_DOCKER_FLAGS='-p 19891:19891' \
DEBUG=1 localstack start
```
{{< /command >}}

### Preparing your code

@@ -86,19 +86,19 @@ To create the Lambda function, you just need to take care of two things:

So, in our [example](https://github.com/localstack/localstack-pro-samples/tree/master/lambda-mounting-and-debugging), this would be:

```sh
awslocal lambda create-function --function-name my-cool-local-function \
{{< command >}}
$ awslocal lambda create-function --function-name my-cool-local-function \
--code S3Bucket="__local__",S3Key="$(pwd)/" \
--handler handler.handler \
--runtime python3.8 \
--role cool-stacklifter
```
{{< /command >}}

We can quickly verify that it works by invoking it with a simple payload:

```sh
awslocal lambda invoke --function-name my-cool-local-function --payload '{"message": "Hello from LocalStack!"}' output.txt
```
{{< command >}}
$ awslocal lambda invoke --function-name my-cool-local-function --payload '{"message": "Hello from LocalStack!"}' output.txt
{{< /command >}}
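For completeness, the `--handler handler.handler` flag above expects a `handler.py` with a `handler` function in the mounted directory; a minimal hypothetical sketch (the actual sample repository may differ):

```python
# handler.py -- minimal hypothetical Lambda entry point matching the
# "--handler handler.handler" flag above.
def handler(event, context):
    # Echo the "message" field back; the returned dict becomes the
    # invocation result written to output.txt.
    return {"statusCode": 200, "body": event.get("message", "no message")}

print(handler({"message": "Hello from LocalStack!"}, None))
```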

### Configuring Visual Studio Code for remote Python debugging

@@ -37,9 +37,9 @@ The main advantage of this mode is that no DNS magic is involved, and SSL certificates

## Configuration

If you want to disable this behavior, and use the DNS server to resolve the endpoints for AWS, you can disable this behavior using:
If you want to disable this behavior and use the DNS server to resolve the endpoints for AWS, you can do so by setting:

```
```bash
TRANSPARENT_LOCAL_ENDPOINTS=0
```

