From 948bc5ca785e169f5cc28a829c27d2649d24ca46 Mon Sep 17 00:00:00 2001 From: Matt Moore Date: Thu, 7 Feb 2019 16:40:05 +0000 Subject: [PATCH 1/2] Manually run prettier.io Trying to fix the stuff that hits prettier.io bugs. --- build/auth.md | 15 +- build/build-templates.md | 14 +- community/README.md | 13 +- community/samples/README.md | 7 +- .../serving/helloworld-clojure/README.md | 36 +-- .../samples/serving/helloworld-dart/README.md | 18 +- .../serving/helloworld-elixir/README.md | 5 +- .../serving/helloworld-haskell/README.md | 56 ++-- .../samples/serving/helloworld-rust/README.md | 30 +-- .../serving/helloworld-shell/README.md | 147 +++++------ .../serving/helloworld-vertx/README.md | 26 +- doc-releases.md | 13 +- eventing/README.md | 29 ++- eventing/channels/README.md | 24 +- eventing/debugging/README.md | 243 +++++++++++++----- eventing/samples/gcp-pubsub-source/README.md | 7 +- .../samples/writing-a-source/01-bootstrap.md | 14 +- .../writing-a-source/02-define-source.md | 22 +- .../writing-a-source/04-publish-to-cluster.md | 29 ++- eventing/samples/writing-a-source/README.md | 45 ++-- eventing/sources/README.md | 58 ++--- install/Knative-custom-install.md | 106 ++++---- install/Knative-with-AKS.md | 50 ++-- install/Knative-with-GKE.md | 47 ++-- install/Knative-with-Gardener.md | 49 ++-- install/Knative-with-ICP.md | 59 ++--- install/Knative-with-IKS.md | 50 ++-- install/Knative-with-Minikube.md | 6 +- install/Knative-with-Minishift.md | 10 +- install/Knative-with-OpenShift.md | 6 +- install/Knative-with-PKS.md | 45 ++-- install/Knative-with-any-k8s.md | 50 ++-- install/README.md | 62 ++--- serving/cluster-local-route.md | 9 +- serving/gke-assigning-static-ip-address.md | 7 +- serving/installing-logging-metrics-traces.md | 180 +++++++------ serving/samples/README.md | 30 +-- serving/samples/autoscale-go/README.md | 1 + serving/samples/blue-green-deployment.md | 13 +- serving/samples/helloworld-csharp/README.md | 48 ++-- serving/samples/helloworld-go/README.md | 9 +- serving/samples/helloworld-java/README.md | 56 ++-- serving/samples/helloworld-kotlin/README.md | 42 +-- serving/samples/helloworld-nodejs/README.md | 43 ++-- serving/samples/helloworld-php/README.md | 42 +-- serving/samples/helloworld-python/README.md | 40 +-- serving/samples/helloworld-ruby/README.md | 42 +-- serving/samples/helloworld-scala/README.md | 43 +++- serving/samples/knative-routing-go/README.md | 9 +- serving/samples/telemetry-go/README.md | 4 +- serving/samples/traffic-splitting/README.md | 4 +- 51 files changed, 1107 insertions(+), 906 deletions(-) diff --git a/build/auth.md b/build/auth.md index c87def5057a..37457f3a29d 100644 --- a/build/auth.md +++ b/build/auth.md @@ -41,10 +41,11 @@ into their respective files in `$HOME`. # This is non-standard, but its use is encouraged to make this more secure. known_hosts: ``` + `build.knative.dev/git-0` in the example above specifies which web address these credentials belong to. See - [Guiding Credential Selection](#guiding-credential-selection) below for - more information. + [Guiding Credential Selection](#guiding-credential-selection) below for more + information. 1. Generate the value of `ssh-privatekey` by copying the value of (for example) `cat ~/.ssh/id_rsa | base64`. @@ -103,10 +104,11 @@ to authenticate with the Git service. username: password: ``` + `build.knative.dev/git-0` in the example above specifies which web address these credentials belong to. See - [Guiding Credential Selection](#guiding-credential-selection) below for - more information. 
+ [Guiding Credential Selection](#guiding-credential-selection) below for more + information. 1. Next, direct a `ServiceAccount` to use this `Secret`: @@ -159,10 +161,11 @@ credentials are then used to authenticate with the Git repository. username: password: ``` + `build.knative.dev/docker-0` in the example above specifies which web address these credentials belong to. See - [Guiding Credential Selection](#guiding-credential-selection) below for - more information. + [Guiding Credential Selection](#guiding-credential-selection) below for more + information. 1. Direct a `ServiceAccount` to use this `Secret`: diff --git a/build/build-templates.md b/build/build-templates.md index 22780588467..07aa5a70fbb 100644 --- a/build/build-templates.md +++ b/build/build-templates.md @@ -7,12 +7,15 @@ A set of curated and supported build templates is available in the ## What is a Build Template? -A `BuildTemplate` and `ClusterBuildTemplate` encapsulates a shareable [build](./builds.md) -process with some limited parameterization capabilities. +A `BuildTemplate` and `ClusterBuildTemplate` encapsulates a shareable +[build](./builds.md) process with some limited parameterization capabilities. -A `BuildTemplate` is available within a namespace, and `ClusterBuildTemplate` is available across entire Kubernetes cluster. +A `BuildTemplate` is available within a namespace, and `ClusterBuildTemplate` is +available across entire Kubernetes cluster. -A `BuildTemplate` functions exactly like a `ClusterBuildTemplate`, and as such all references to `BuildTemplate` below are also describing `ClusterBuildTemplate`. +A `BuildTemplate` functions exactly like a `ClusterBuildTemplate`, and as such +all references to `BuildTemplate` below are also describing +`ClusterBuildTemplate`. ### Example template @@ -145,7 +148,8 @@ spec: value: Dockerfile-17.06.1 ``` -The `spec.template.kind` is optional and defaults to `BuildTemplate`. Alternately it could have value `ClusterBuildTemplate`. +The `spec.template.kind` is optional and defaults to `BuildTemplate`. +Alternately it could have value `ClusterBuildTemplate`. --- diff --git a/community/README.md b/community/README.md index 7743fbcd7cd..27d684614e7 100644 --- a/community/README.md +++ b/community/README.md @@ -60,13 +60,12 @@ Community tutorials are stored in Markdown files on [GitHub](./samples/README.md) where they can be reviewed and edited by the community. -Please submit a Pull Request to the community sample directory under the -Knative component that your tutorial highlights - -[Serving](./samples/serving/), [Eventing](./samples/eventing/), -or [Build](./samples/build/). A reviewer will be assigned to review your -submission. They'll work with you to ensure your submission meets the -[style guide](DOCS-CONTRIBUTING.md), but it helps if you follow it as you -write your tutorial. +Please submit a Pull Request to the community sample directory under the Knative +component that your tutorial highlights - [Serving](./samples/serving/), +[Eventing](./samples/eventing/), or [Build](./samples/build/). A reviewer will +be assigned to review your submission. They'll work with you to ensure your +submission meets the [style guide](DOCS-CONTRIBUTING.md), but it helps if you +follow it as you write your tutorial. 
## Meetings and work groups diff --git a/community/samples/README.md b/community/samples/README.md index 0d9959b6c74..852f3db01fe 100644 --- a/community/samples/README.md +++ b/community/samples/README.md @@ -1,7 +1,8 @@ # Knative Community Samples -This directory contains Knative sample applications submitted from the community. +This directory contains Knative sample applications submitted from the +community. -| Sample Name | Description | Language(s) | -| -------------------------- | -------------------------- | -------------------------- | +| Sample Name | Description | Language(s) | +| ----------- | -------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | Hello World | A quick introduction that highlights how to deploy an app using Knative Serving. | [Clojure](./serving/helloworld-clojure/README.md), [Dart](./serving/helloworld-dart/README.md), [Elixir](./serving/helloworld-elixir/README.md), [Haskell](./serving/helloworld-haskell/README.md), [Rust](./serving/helloworld-rust/README.md), [Shell](./serving/helloworld-shell/README.md), [Swift](./serving/helloworld-swift/README.md), [Vertx](./serving/helloworld-vertx/README.md) | diff --git a/community/samples/serving/helloworld-clojure/README.md b/community/samples/serving/helloworld-clojure/README.md index b92df6f664b..dcc0c034262 100644 --- a/community/samples/serving/helloworld-clojure/README.md +++ b/community/samples/serving/helloworld-clojure/README.md @@ -59,29 +59,29 @@ recreate the source files from this folder. see [the clojure image documentation](https://github.com/docker-library/docs/tree/master/clojure). - ```docker - # Use the official Clojure image. - # https://hub.docker.com/_/clojure - FROM clojure + ```docker + # Use the official Clojure image. + # https://hub.docker.com/_/clojure + FROM clojure - # Create the project and download dependencies. - WORKDIR /usr/src/app - COPY project.clj . - RUN lein deps + # Create the project and download dependencies. + WORKDIR /usr/src/app + COPY project.clj . + RUN lein deps - # Copy local code to the container image. - COPY . . + # Copy local code to the container image. + COPY . . - # Build an uberjar release artifact. - RUN mv "$(lein uberjar | sed -n 's/^Created \(.*standalone\.jar\)/\1/p')" app-standalone.jar + # Build an uberjar release artifact. + RUN mv "$(lein uberjar | sed -n 's/^Created \(.*standalone\.jar\)/\1/p')" app-standalone.jar - # Service must listen to $PORT environment variable. - # This default value facilitates local development. - ENV PORT 8080 + # Service must listen to $PORT environment variable. + # This default value facilitates local development. + ENV PORT 8080 - # Run the web service on container startup. - CMD ["java", "-jar", "app-standalone.jar"] - ``` + # Run the web service on container startup. + CMD ["java", "-jar", "app-standalone.jar"] + ``` 1. Create a new file, `service.yaml` and copy the following service definition into the file. 
Make sure to replace `{username}` with your Docker Hub diff --git a/community/samples/serving/helloworld-dart/README.md b/community/samples/serving/helloworld-dart/README.md index 25de5a26150..5fafbb92f70 100644 --- a/community/samples/serving/helloworld-dart/README.md +++ b/community/samples/serving/helloworld-dart/README.md @@ -71,15 +71,15 @@ be created using the following instructions. 4. Create a new file named `Dockerfile`, this file defines instructions for dockerizing your applications, for dart apps this can be done as follows: - ```Dockerfile - # Use Google's official Dart image. - # https://hub.docker.com/r/google/dart-runtime/ - FROM google/dart-runtime - - # Service must listen to $PORT environment variable. - # This default value facilitates local development. - ENV PORT 8080 - ``` + ```Dockerfile + # Use Google's official Dart image. + # https://hub.docker.com/r/google/dart-runtime/ + FROM google/dart-runtime + + # Service must listen to $PORT environment variable. + # This default value facilitates local development. + ENV PORT 8080 + ``` 5. Create a new file, `service.yaml` and copy the following service definition into the file. Make sure to replace `{username}` with your Docker Hub diff --git a/community/samples/serving/helloworld-elixir/README.md b/community/samples/serving/helloworld-elixir/README.md index f078a76b3f6..eb6067b9734 100644 --- a/community/samples/serving/helloworld-elixir/README.md +++ b/community/samples/serving/helloworld-elixir/README.md @@ -179,8 +179,7 @@ above. xxxxxxx-ingressgateway LoadBalancer 10.23.247.74 35.203.155.229 80:32380/TCP,443:32390/TCP,32400:32400/TCP 2d ``` - -1. To find the URL for your service, use +1) To find the URL for your service, use ``` kubectl get ksvc helloworld-elixir --output=custom-columns=NAME:.metadata.name,DOMAIN:.status.domain @@ -189,7 +188,7 @@ above. helloworld-elixir helloworld-elixir.default.example.com ``` -1. Now you can make a request to your app to see the results. Replace +1) Now you can make a request to your app to see the results. Replace `{IP_ADDRESS}` with the address you see returned in the previous step. ```shell diff --git a/community/samples/serving/helloworld-haskell/README.md b/community/samples/serving/helloworld-haskell/README.md index c53bffdc6fc..bfcfd87367d 100644 --- a/community/samples/serving/helloworld-haskell/README.md +++ b/community/samples/serving/helloworld-haskell/README.md @@ -80,34 +80,34 @@ recreate the source files from this folder. 1. In your project directory, create a file named `Dockerfile` and copy the code block below into it. - ```docker - # Use the official Haskell image to create a build artifact. - # https://hub.docker.com/_/haskell/ - FROM haskell:8.2.2 as builder - - # Copy local code to the container image. - WORKDIR /app - COPY . . - - # Build and test our code, then build the “helloworld-haskell-exe” executable. - RUN stack setup - RUN stack build --copy-bins - - # Use a Docker multi-stage build to create a lean production image. - # https://docs.docker.com/develop/develop-images/multistage-build/#use-multi-stage-builds - FROM fpco/haskell-scratch:integer-gmp - - # Copy the "helloworld-haskell-exe" executable from the builder stage to the production image. - WORKDIR /root/ - COPY --from=builder /root/.local/bin/helloworld-haskell-exe . - - # Service must listen to $PORT environment variable. - # This default value facilitates local development. - ENV PORT 8080 - - # Run the web service on container startup. 
- CMD ["./helloworld-haskell-exe"] - ``` + ```docker + # Use the official Haskell image to create a build artifact. + # https://hub.docker.com/_/haskell/ + FROM haskell:8.2.2 as builder + + # Copy local code to the container image. + WORKDIR /app + COPY . . + + # Build and test our code, then build the “helloworld-haskell-exe” executable. + RUN stack setup + RUN stack build --copy-bins + + # Use a Docker multi-stage build to create a lean production image. + # https://docs.docker.com/develop/develop-images/multistage-build/#use-multi-stage-builds + FROM fpco/haskell-scratch:integer-gmp + + # Copy the "helloworld-haskell-exe" executable from the builder stage to the production image. + WORKDIR /root/ + COPY --from=builder /root/.local/bin/helloworld-haskell-exe . + + # Service must listen to $PORT environment variable. + # This default value facilitates local development. + ENV PORT 8080 + + # Run the web service on container startup. + CMD ["./helloworld-haskell-exe"] + ``` 1. Create a new file, `service.yaml` and copy the following service definition into the file. Make sure to replace `{username}` with your Docker Hub diff --git a/community/samples/serving/helloworld-rust/README.md b/community/samples/serving/helloworld-rust/README.md index 6fa175cd7b7..409aa124add 100644 --- a/community/samples/serving/helloworld-rust/README.md +++ b/community/samples/serving/helloworld-rust/README.md @@ -87,25 +87,25 @@ recreate the source files from this folder. 1. In your project directory, create a file named `Dockerfile` and copy the code block below into it. - ```docker - # Use the official Rust image. - # https://hub.docker.com/_/rust - FROM rust:1.27.0 + ```docker + # Use the official Rust image. + # https://hub.docker.com/_/rust + FROM rust:1.27.0 - # Copy local code to the container image. - WORKDIR /usr/src/app - COPY . . + # Copy local code to the container image. + WORKDIR /usr/src/app + COPY . . - # Install production dependencies and build a release artifact. - RUN cargo install + # Install production dependencies and build a release artifact. + RUN cargo install - # Service must listen to $PORT environment variable. - # This default value facilitates local development. - ENV PORT 8080 + # Service must listen to $PORT environment variable. + # This default value facilitates local development. + ENV PORT 8080 - # Run the web service on container startup. - CMD ["hellorust"] - ``` + # Run the web service on container startup. + CMD ["hellorust"] + ``` 1. Create a new file, `service.yaml` and copy the following service definition into the file. Make sure to replace `{username}` with your Docker Hub diff --git a/community/samples/serving/helloworld-shell/README.md b/community/samples/serving/helloworld-shell/README.md index aeaf23415b2..a484b0d487f 100644 --- a/community/samples/serving/helloworld-shell/README.md +++ b/community/samples/serving/helloworld-shell/README.md @@ -1,8 +1,8 @@ # Hello World - Shell sample -A simple web app that executes a shell script. -The shell script reads an env variable `TARGET` and prints `Hello ${TARGET}!`. -If the `TARGET` environment variable is not specified, the script uses `World`. +A simple web app that executes a shell script. The shell script reads an env +variable `TARGET` and prints `Hello ${TARGET}!`. If the `TARGET` environment +variable is not specified, the script uses `World`. ## Prerequisites @@ -20,83 +20,83 @@ recreate the source files from this folder. 1. 
Create a new file named `script.sh` and paste the following script: - ```sh - #!/bin/sh - echo Hello ${TARGET:=World} - ``` - -1. Create a new file named `invoke.go` and paste the following code. - We use a basic web server written in Go to execute the shell script: - - ```go - package main - - import ( - "fmt" - "log" - "net/http" - "os" - "os/exec" - ) - - func handler(w http.ResponseWriter, r *http.Request) { - cmd := exec.CommandContext(r.Context(), "/bin/sh", "script.sh") - cmd.Stderr = os.Stderr - out, err := cmd.Output() - if err != nil { - w.WriteHeader(500) - } - w.Write(out) - } - - func main() { - http.HandleFunc("/", handler) - - port := os.Getenv("PORT") - if port == "" { - port = "8080" - } - - log.Fatal(http.ListenAndServe(fmt.Sprintf(":%s", port), nil)) - } - ``` + ```sh + #!/bin/sh + echo Hello ${TARGET:=World} + ``` + +1. Create a new file named `invoke.go` and paste the following code. We use a + basic web server written in Go to execute the shell script: + + ```go + package main + + import ( + "fmt" + "log" + "net/http" + "os" + "os/exec" + ) + + func handler(w http.ResponseWriter, r *http.Request) { + cmd := exec.CommandContext(r.Context(), "/bin/sh", "script.sh") + cmd.Stderr = os.Stderr + out, err := cmd.Output() + if err != nil { + w.WriteHeader(500) + } + w.Write(out) + } + + func main() { + http.HandleFunc("/", handler) + + port := os.Getenv("PORT") + if port == "" { + port = "8080" + } + + log.Fatal(http.ListenAndServe(fmt.Sprintf(":%s", port), nil)) + } + ``` 1. Create a new file named `Dockerfile` and copy the code block below into it. - ```docker - FROM golang:1.11 + ```docker + FROM golang:1.11 - WORKDIR /go/src/invoke + WORKDIR /go/src/invoke - COPY invoke.go . - RUN go install -v + COPY invoke.go . + RUN go install -v - COPY . . + COPY . . - CMD ["invoke"] - ``` + CMD ["invoke"] + ``` 1. Create a new file, `service.yaml` and copy the following service definition into the file. Make sure to replace `{username}` with your Docker Hub username. - ```yaml - apiVersion: serving.knative.dev/v1alpha1 - kind: Service - metadata: - name: helloworld-shell - namespace: default - spec: - runLatest: - configuration: - revisionTemplate: - spec: - container: - image: docker.io/{username}/helloworld-shell - env: - - name: TARGET - value: "Shell" - ``` + ```yaml + apiVersion: serving.knative.dev/v1alpha1 + kind: Service + metadata: + name: helloworld-shell + namespace: default + spec: + runLatest: + configuration: + revisionTemplate: + spec: + container: + image: docker.io/{username}/helloworld-shell + env: + - name: TARGET + value: "Shell" + ``` ## Building and deploying the sample @@ -126,9 +126,10 @@ folder) you're ready to build and deploy the sample app. 1. Now that your service is created, Knative performs the following steps: - - Create a new immutable revision for this version of the app. - - Network programming to create a route, ingress, service, and load balance for your app. - - Automatically scale your pods up and down (including to zero active pods). + - Create a new immutable revision for this version of the app. + - Network programming to create a route, ingress, service, and load balance + for your app. + - Automatically scale your pods up and down (including to zero active pods). 1. Run the following command to find the external IP address for your service. The ingress IP for your cluster is returned. If you just created your @@ -161,8 +162,8 @@ folder) you're ready to build and deploy the sample app. ``` 1. Test your app by sending it a request. 
Use the following `curl` command with - the domain URL `helloworld-shell.default.example.com` and `EXTERNAL-IP` address - that you retrieved in the previous steps: + the domain URL `helloworld-shell.default.example.com` and `EXTERNAL-IP` + address that you retrieved in the previous steps: ```shell curl -H "Host: helloworld-shell.default.example.com" http://{EXTERNAL_IP_ADDRESS} diff --git a/community/samples/serving/helloworld-vertx/README.md b/community/samples/serving/helloworld-vertx/README.md index 80220d51a79..4ce11cf3d77 100644 --- a/community/samples/serving/helloworld-vertx/README.md +++ b/community/samples/serving/helloworld-vertx/README.md @@ -143,19 +143,19 @@ To create and configure the source files in the root of your working directory: 1. Create the `Dockerfile` file: - ```docker - # Use fabric8's s2i Builder image. - # https://hub.docker.com/r/fabric8/s2i-java - FROM fabric8/s2i-java:2.0 - - # Service must listen to $PORT environment variable. - # This default value facilitates local development. - ENV PORT 8080 - - # Copy the JAR file to the deployment directory. - ENV JAVA_APP_DIR=/deployments - COPY target/helloworld-1.0.0-SNAPSHOT.jar /deployments/ - ``` + ```docker + # Use fabric8's s2i Builder image. + # https://hub.docker.com/r/fabric8/s2i-java + FROM fabric8/s2i-java:2.0 + + # Service must listen to $PORT environment variable. + # This default value facilitates local development. + ENV PORT 8080 + + # Copy the JAR file to the deployment directory. + ENV JAVA_APP_DIR=/deployments + COPY target/helloworld-1.0.0-SNAPSHOT.jar /deployments/ + ``` 1. Create the `service.yaml` file. You must specify your Docker Hub username in `{username}`. You can also configure the `TARGET`, for example you can modify diff --git a/doc-releases.md b/doc-releases.md index face2139582..fb47d35c60b 100644 --- a/doc-releases.md +++ b/doc-releases.md @@ -1,19 +1,20 @@ # Documentation Releases -The following list shows the available versions of Knative documentation. -Select the version that matches your installed version of Knative. +The following list shows the available versions of Knative documentation. Select +the version that matches your installed version of Knative. ## `knative/docs` repositories ### Released versions -* [Branch: **`release-0.2`**](https://github.com/knative/docs/tree/release-0.2) -* [Branch: **`release-0.1`**](https://github.com/knative/docs/tree/release-0.1) +- [Branch: **`release-0.2`**](https://github.com/knative/docs/tree/release-0.2) +- [Branch: **`release-0.1`**](https://github.com/knative/docs/tree/release-0.1) ### In development (pre-release) version -* [Branch: **`master`**](https://github.com/knative/docs/tree/master) +- [Branch: **`master`**](https://github.com/knative/docs/tree/master) ## Documentation website -* `https://knative.dev` ([Coming soon!](https://github.com/knative/docs/projects/5)) +- `https://knative.dev` + ([Coming soon!](https://github.com/knative/docs/projects/5)) diff --git a/eventing/README.md b/eventing/README.md index f0ce690ea26..10eef7b9441 100644 --- a/eventing/README.md +++ b/eventing/README.md @@ -70,9 +70,9 @@ Knative Eventing currently requires Knative Serving and Istio version 1.0 or later installed. [Follow the instructions to install on the platform of your choice](../install/README.md). -Many of the sources require making outbound connections to create the event subscription, -and if you have any functions that make use of any external (to cluster) services, you -must enable it also for them to work. 
+Many of the sources require making outbound connections to create the event +subscription, and if you have any functions that make use of any external (to +cluster) services, you must enable it also for them to work. [Follow the instructions to configure outbound network access](../serving/outbound-network-access.md). Install the core Knative Eventing (which provides an in-memory @@ -84,7 +84,8 @@ kubectl apply --filename https://github.com/knative/eventing/releases/download/v kubectl apply --filename https://github.com/knative/eventing-sources/releases/download/v0.3.0/release.yaml ``` -In addition to the core sources, there are [other sources](./sources/README.md) that you can install. +In addition to the core sources, there are [other sources](./sources/README.md) +that you can install. This document will be updated as additional sources (which are custom resource definitions and an associated controller) and channels @@ -130,7 +131,8 @@ format, but may be expressed as simple lists, etc in YAML. All Sources should be part of the `sources` category, so you can list all existing Sources with `kubectl get sources`. The currently-implemented Sources are described below: -_Want to implement your own source? Check out [the tutorial](samples/writing-a-source/README.md)._ +_Want to implement your own source? Check out +[the tutorial](samples/writing-a-source/README.md)._ ### KubernetesEventSource @@ -217,8 +219,8 @@ FTP server for new files or generate events at a set time interval. **Spec fields**: - `image` (**required**): `string` A docker image of the container to be run. -- `args`: `[]string` Command-line arguments. If no `--sink` flag is provided, - one will be added and filled in with the DNS address of the `sink` object. +- `args`: `[]string` Command-line arguments. If no `--sink` flag is provided, + one will be added and filled in with the DNS address of the `sink` object. - `env`: `map[string]string` Environment variables to be set in the container. - `serviceAccountName`: `string` The name of the ServiceAccount to run the container as. @@ -228,13 +230,17 @@ FTP server for new files or generate events at a set time interval. ### CronJobSource -The CronJobSource fires events based on given [Cron](https://en.wikipedia.org/wiki/Cron) schedule. +The CronJobSource fires events based on given +[Cron](https://en.wikipedia.org/wiki/Cron) schedule. **Spec fields**: -- `schedule` (**required**): `string` A [Cron](https://en.wikipedia.org/wiki/Cron) format string, such as `0 * * * *` or `@hourly`. +- `schedule` (**required**): `string` A + [Cron](https://en.wikipedia.org/wiki/Cron) format string, such as `0 * * * *` + or `@hourly`. - `data`: `string` Optional data sent to downstream receiver. -- `serviceAccountName`: `string` The name of the ServiceAccount to run the container as. +- `serviceAccountName`: `string` The name of the ServiceAccount to run the + container as. - `sink`: [ObjectReference](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.12/#objectreference-v1-core) A reference to the object that should receive events. @@ -246,8 +252,9 @@ The CronJobSource fires events based on given [Cron](https://en.wikipedia.org/wi - [Run samples](samples/) ## Configuration + - [Default Channels](channels/default-channels.md) provide a way to choose the -persistence strategy for Channels across the cluster. + persistence strategy for Channels across the cluster. 
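To tie the source specs above together, here is a minimal sketch of a
`CronJobSource` built from the fields documented in this README. It is an
illustration only: the sink name `message-dumper` is a hypothetical Knative
Service, and the schedule and data values are placeholders.

```yaml
apiVersion: sources.eventing.knative.dev/v1alpha1
kind: CronJobSource
metadata:
  name: heartbeat-cron
spec:
  # Fire once per minute, using the standard Cron format.
  schedule: "* * * * *"
  # Optional payload delivered to the sink with each event.
  data: '{"message": "Hello world!"}'
  sink:
    apiVersion: serving.knative.dev/v1alpha1
    kind: Service
    name: message-dumper
```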
--- diff --git a/eventing/channels/README.md b/eventing/channels/README.md index 8406a372f8f..10271e6c533 100644 --- a/eventing/channels/README.md +++ b/eventing/channels/README.md @@ -12,26 +12,24 @@ procedure: # Knative Channels -Channels are Kubernetes Custom Resources which define a single event forwarding and persistence layer. -Messaging implementations may provide implementations of Channels via the +Channels are Kubernetes Custom Resources which define a single event forwarding +and persistence layer. Messaging implementations may provide implementations of +Channels via the [ClusterChannelProvisioner](https://github.com/knative/eventing/blob/master/pkg/apis/eventing/v1alpha1/cluster_channel_provisioner_types.go#L35) -object, supporting different technologies, such as Apache Kafka or NATS Streaming. +object, supporting different technologies, such as Apache Kafka or NATS +Streaming. This is a non-exhaustive list of Channels for Knative. - ### Inclusion in this list is not an endorsement, nor does it imply any level of support. - ## Channels These are the channels `CRD`s. -Name | Status | Support | Description ---- | --- | --- | --- -[Apache Kafka](https://github.com/knative/eventing/tree/master/contrib/kafka/config) | Proof of Concept | None | Channels are backed by [Apache Kafka](http://kafka.apache.org/) topics. -[GCP PubSub](https://github.com/knative/eventing/tree/master/contrib/gcppubsub/config) | Proof of Concept | None | Channels are backed by [GCP PubSub](https://cloud.google.com/pubsub/). -[In-Memory](https://github.com/knative/eventing/tree/master/config/provisioners/in-memory-channel) | Proof of Concept | None | In-memory channels are a best effort Channel. They should NOT be used in Production. They are useful for development. -[Natss](https://github.com/knative/eventing/tree/master/contrib/natss/config) | Proof of Concept | None | Channels are backed by [NATS Streaming](https://github.com/nats-io/nats-streaming-server#configuring). - - +| Name | Status | Support | Description | +| -------------------------------------------------------------------------------------------------- | ---------------- | ------- | --------------------------------------------------------------------------------------------------------------------- | +| [Apache Kafka](https://github.com/knative/eventing/tree/master/contrib/kafka/config) | Proof of Concept | None | Channels are backed by [Apache Kafka](http://kafka.apache.org/) topics. | +| [GCP PubSub](https://github.com/knative/eventing/tree/master/contrib/gcppubsub/config) | Proof of Concept | None | Channels are backed by [GCP PubSub](https://cloud.google.com/pubsub/). | +| [In-Memory](https://github.com/knative/eventing/tree/master/config/provisioners/in-memory-channel) | Proof of Concept | None | In-memory channels are a best effort Channel. They should NOT be used in Production. They are useful for development. | +| [Natss](https://github.com/knative/eventing/tree/master/contrib/natss/config) | Proof of Concept | None | Channels are backed by [NATS Streaming](https://github.com/nats-io/nats-streaming-server#configuring). | diff --git a/eventing/debugging/README.md b/eventing/debugging/README.md index 6df4dd78903..a55f5831057 100644 --- a/eventing/debugging/README.md +++ b/eventing/debugging/README.md @@ -1,25 +1,33 @@ # Debugging Knative Eventing -This is an evolving document on how to debug a non-working Knative Eventing setup. +This is an evolving document on how to debug a non-working Knative Eventing +setup. 
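A quick first check that often pays off: survey the Eventing resources in the
affected namespace in one shot. This is a convenience sketch, assuming the
`knative-debug` namespace used by the example below and the plural CRD names
from Eventing 0.3.

```shell
# One-shot overview of the main Eventing resources referenced in this guide.
kubectl -n knative-debug get channels,subscriptions,kuberneteseventsources
```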
+ ## Audience -This document is intended for people that are familiar with [Knative Eventing](../README.md)'s object model. You don't need to be an expert, but do need to know roughly how things fit together. +This document is intended for people that are familiar with +[Knative Eventing](../README.md)'s object model. You don't need to be an expert, +but do need to know roughly how things fit together. ## Version -This document works with [Eventing 0.3](https://github.com/knative/eventing/releases/tag/v0.3.0) and [Eventing Sources 0.3](https://github.com/knative/eventing-sources/releases/tag/v0.3.0). + +This document works with +[Eventing 0.3](https://github.com/knative/eventing/releases/tag/v0.3.0) and +[Eventing Sources 0.3](https://github.com/knative/eventing-sources/releases/tag/v0.3.0). ## Prerequisites -1. Setup - [Knative Eventing and Eventing-Sources](../README.md). +1. Setup [Knative Eventing and Eventing-Sources](../README.md). ## Example -This guide uses an example consisting of an Event Source sending events to a function. +This guide uses an example consisting of an Event Source sending events to a +function. ![src -> chan -> sub -> svc -> fn](ExampleModel.png) -See [example.yaml](example.yaml) for the entire YAML. For any commands in this guide to work, you must apply [example.yaml](example.yaml): +See [example.yaml](example.yaml) for the entire YAML. For any commands in this +guide to work, you must apply [example.yaml](example.yaml): ```shell kubectl apply -f example.yaml @@ -27,7 +35,10 @@ kubectl apply -f example.yaml ## Triggering Events -Knative events will occur whenever a Kubernetes [`Event`](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.11/#event-v1-core) occurs in the `knative-debug` namespace. We can cause this to occur with the following commands: +Knative events will occur whenever a Kubernetes +[`Event`](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.11/#event-v1-core) +occurs in the `knative-debug` namespace. We can cause this to occur with the +following commands: ```shell kubectl -n knative-debug run to-be-deleted --image=image-that-doesnt-exist --restart=Never @@ -36,7 +47,8 @@ sleep 5 kubectl -n knative-debug delete pod to-be-deleted ``` -Then we can see the Kubernetes `Event`s (note that these are not Knative events!): +Then we can see the Kubernetes `Event`s (note that these are not Knative +events!): ```shell kubectl -n knative-debug get events @@ -61,17 +73,20 @@ But you don't see any events arrive. Where is the problem? ### Control Plane -We will first check the control plane, to ensure everything should be working properly. +We will first check the control plane, to ensure everything should be working +properly. #### Resources -The first thing to check are all the created resources, do their statuses contain `ready` true? +The first thing to check are all the created resources, do their statuses +contain `ready` true? We will attempt to determine why from the most basic pieces out: 1. `fn` - The `Deployment` has no dependencies inside Knative. 1. `svc` - The `Service` has no dependencies inside Knative. -1. `chan` - The `Channel` depends on its backing `ClusterChannelProvisioner` and somewhat depends on `sub`. +1. `chan` - The `Channel` depends on its backing `ClusterChannelProvisioner` and + somewhat depends on `sub`. 1. `src` - The `Source` depends on `chan`. 1. `sub` - The `Subscription` depends on both `chan` and `svc`. 
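A quick sweep of the `Ready` conditions can show which of these dependencies to
start with. The loop below is a convenience sketch, assuming the example's
`knative-debug` namespace:

```shell
# Print the Ready condition of each Eventing resource in the namespace.
for kind in channel subscription kuberneteseventsource; do
  echo "--- $kind ---"
  kubectl -n knative-debug get $kind -o jsonpath='{range .items[*]}{.metadata.name}: {.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'
done
```

Whichever resource reports `False`, or nothing at all, is where to focus in the
sections that follow.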
@@ -81,13 +96,15 @@ We will attempt to determine why from the most basic pieces out: kubectl -n knative-debug get deployment fn -o jsonpath='{.status.availableReplicas}' ``` -We want to see '1'. If you don't, then you need to debug the `Deployment`. Is there anything obviously wrong mentioned in the `status`? +We want to see '1'. If you don't, then you need to debug the `Deployment`. Is +there anything obviously wrong mentioned in the `status`? ```shell kubectl -n knative-debug get deployment fn -o yaml ``` -If it is not obvious what is wrong, then you need to debug the `Deployment`, which is out of scope of this document. +If it is not obvious what is wrong, then you need to debug the `Deployment`, +which is out of scope of this document. Verify that the `Pod` is `Ready`: @@ -95,8 +112,8 @@ Verify that the `Pod` is `Ready`: kubectl -n knative-debug get pod -l app=fn -o jsonpath='{.items[*].status.conditions[?(@.type == "Ready")].status}' ``` -This should return `True`. If it doesn't, then try to debug the `Deployment`, which is out of scope of this document. - +This should return `True`. If it doesn't, then try to debug the `Deployment`, +which is out of scope of this document. ##### `svc` @@ -104,7 +121,8 @@ This should return `True`. If it doesn't, then try to debug the `Deployment`, wh kubectl -n knative-debug get service svc ``` -We just want to ensure this exists and has the correct name. If it doesn't exist, then you probably need to re-apply [example.yaml](example.yaml). +We just want to ensure this exists and has the correct name. If it doesn't +exist, then you probably need to re-apply [example.yaml](example.yaml). Verify it points at the expected pod. @@ -113,11 +131,15 @@ svcLabels=$(kubectl -n knative-debug get service svc -o go-template='{{range $k, kubectl -n knative-debug get pods -l $svcLabels ``` -This should return a single Pod, which if you inspect is the one generated by `fn`. +This should return a single Pod, which if you inspect is the one generated by +`fn`. ##### `chan` -`chan` uses the [`in-memory-channel`](https://github.com/knative/eventing/tree/master/config/provisioners/in-memory-channel) as its `ClusterChannelProvisioner`. This is a very basic provisioner and has few failure modes that will be exhibited in `chan`'s `status`. +`chan` uses the +[`in-memory-channel`](https://github.com/knative/eventing/tree/master/config/provisioners/in-memory-channel) +as its `ClusterChannelProvisioner`. This is a very basic provisioner and has few +failure modes that will be exhibited in `chan`'s `status`. ```shell kubectl -n knative-debug get channel chan -o jsonpath='{.status.conditions[?(@.type == "Ready")].status}' @@ -129,7 +151,8 @@ This should return `True`. If it doesn't, get the full resource: kubectl -n knative-debug get channel chan -o yaml ``` -If `status` is completely missing, it implies that something is wrong with the `in-memory-channel` controller. See [Channel Controller](#channel-controller). +If `status` is completely missing, it implies that something is wrong with the +`in-memory-channel` controller. See [Channel Controller](#channel-controller). Next verify that `chan` is addressable: @@ -137,9 +160,12 @@ Next verify that `chan` is addressable: kubectl -n knative-debug get channel chan -o jsonpath='{.status.address.hostname}' ``` -This should return a URI, likely ending in '.cluster.local'. If it doesn't, then it implies that something went wrong during reconcilation. See [Channel Controller](#channel-controller). 
+This should return a URI, likely ending in '.cluster.local'. If it doesn't, then +it implies that something went wrong during reconcilation. See +[Channel Controller](#channel-controller). -We will verify that the two resources that the `chan` creates exist and are `Ready`. +We will verify that the two resources that the `chan` creates exist and are +`Ready`. ###### `Service` @@ -149,11 +175,15 @@ We will verify that the two resources that the `chan` creates exist and are `Rea kubectl -n knative-debug get service -l provisioner=in-memory-channel,channel=chan ``` -It's spec is completely unimportant, as Istio will ignore it. It just needs to exist so that `src` can send events to it. If it doesn't exist, it implies that something went wrong during `chan` reconciliation. See [Channel Controller](#channel-controller). +It's spec is completely unimportant, as Istio will ignore it. It just needs to +exist so that `src` can send events to it. If it doesn't exist, it implies that +something went wrong during `chan` reconciliation. See +[Channel Controller](#channel-controller). ###### `VirtualService` -`chan` creates a `VirtualService` which redirects its hostname to the `in-memory-channel` dispatcher. +`chan` creates a `VirtualService` which redirects its hostname to the +`in-memory-channel` dispatcher. ```shell kubectl -n knative-debug get virtualservice -l provisioner=in-memory-channel,channel=chan -o custom-columns='HOST:.spec.hosts[0],DESTINATION:.spec.http[0].route[0].destination.host' @@ -161,14 +191,20 @@ kubectl -n knative-debug get virtualservice -l provisioner=in-memory-channel,cha Verify that -1. 'HOST' is the same as the hostname returned by in `chan`'s `status.address.hostname`. -1. 'DESTINATION' is 'in-memory-channel-dispatcher.knative-eventing.svc.cluster.local'. +1. 'HOST' is the same as the hostname returned by in `chan`'s + `status.address.hostname`. +1. 'DESTINATION' is + 'in-memory-channel-dispatcher.knative-eventing.svc.cluster.local'. -If either of those is not accurate, then it implies that something went wrong during `chan` reconciliation. See [Channel Controller](#channel-controller). +If either of those is not accurate, then it implies that something went wrong +during `chan` reconciliation. See [Channel Controller](#channel-controller). ##### `src` -`src` is a [`KubernetesEventSource`](https://github.com/knative/eventing-sources/blob/master/pkg/apis/sources/v1alpha1/kuberneteseventsource_types.go), which creates an underlying [`ContainerSource`](https://github.com/knative/eventing-sources/blob/master/pkg/apis/sources/v1alpha1/containersource_types.go). +`src` is a +[`KubernetesEventSource`](https://github.com/knative/eventing-sources/blob/master/pkg/apis/sources/v1alpha1/kuberneteseventsource_types.go), +which creates an underlying +[`ContainerSource`](https://github.com/knative/eventing-sources/blob/master/pkg/apis/sources/v1alpha1/containersource_types.go). First we will verify that `src` is writing to `chan`. @@ -176,7 +212,11 @@ First we will verify that `src` is writing to `chan`. kubectl -n knative-debug get kuberneteseventsource src -o jsonpath='{.spec.sink}' ``` -Which should return `map[apiVersion:eventing.knative.dev/v1alpha1 kind:Channel name:chan]`. If it doesn't, then `src` was setup incorrectly and its `spec` needs to be fixed. Fixing should be as simple as updating its `spec` to have the correct `sink` (see [example.yaml](example.yaml)). +Which should return +`map[apiVersion:eventing.knative.dev/v1alpha1 kind:Channel name:chan]`. 
If it +doesn't, then `src` was setup incorrectly and its `spec` needs to be fixed. +Fixing should be as simple as updating its `spec` to have the correct `sink` +(see [example.yaml](example.yaml)). Now that we know `src` is sending to `chan`, let's verify that it is `Ready`. @@ -184,14 +224,14 @@ Now that we know `src` is sending to `chan`, let's verify that it is `Ready`. kubectl -n knative-debug get kuberneteseventsource src -o jsonpath='{.status.conditions[?(.type == "Ready")].status}' ``` -This should return `True`. If it doesn't, then we need to investigate why. First we will look at the owned `ContainerSource` that underlies `src`, and if that is not fruitful, look at the [Source Controller](#source-controller). +This should return `True`. If it doesn't, then we need to investigate why. First +we will look at the owned `ContainerSource` that underlies `src`, and if that is +not fruitful, look at the [Source Controller](#source-controller). ##### ContainerSource `src` is backed by a `ContainerSource` resource. - - Is the `ContainerSource` `Ready`? ```shell @@ -199,9 +239,11 @@ srcUID=$(kubectl -n knative-debug get kuberneteseventsource src -o jsonpath='{.m kubectl -n knative-debug get containersource -o jsonpath="{.items[?(.metadata.ownerReferences[0].uid == '$srcUID')].status.conditions[?(.type == 'Ready')].status}" ``` -That should be `True`. If it is, but `src` is not `Ready`, then that implies the problem is in the [Source Controller](#source-controller). +That should be `True`. If it is, but `src` is not `Ready`, then that implies the +problem is in the [Source Controller](#source-controller). -If `ContainerSource` is not `Ready`, then we need to look at its entire `status`: +If `ContainerSource` is not `Ready`, then we need to look at its entire +`status`: ```shell srcUID=$(kubectl -n knative-debug get kuberneteseventsource src -o jsonpath='{.metadata.uid}') @@ -217,7 +259,9 @@ containerSourceName=$(kubectl -n knative-debug get containersource -o jsonpath=" kubectl -n knative-debug get containersource $containerSourceName -o jsonpath='{.status.conditions[?(.type == "Deployed")].message}' ``` -You should see something like `Updated deployment src-xz59f-hmtkp`. Let's see the health of the `Deployment` that `ContainerSource` created (named in the message, but we will get it directly in the following command): +You should see something like `Updated deployment src-xz59f-hmtkp`. Let's see +the health of the `Deployment` that `ContainerSource` created (named in the +message, but we will get it directly in the following command): ```shell srcUID=$(kubectl -n knative-debug get kuberneteseventsource src -o jsonpath='{.metadata.uid}') @@ -226,9 +270,13 @@ deploymentName=$(kubectl -n knative-debug get deployment -o jsonpath="{.items[?( kubectl -n knative-debug get deployment $deploymentName -o yaml ``` -If this is unhealthy, then it should tell you why. E.g. `'pods "src-xz59f-hmtkp-7bd4bc6964-" is forbidden: error looking up service account knative-debug/events-sa: serviceaccount "events-sa" not found'`. Fix any errors so that it the `Deployment` is healthy. +If this is unhealthy, then it should tell you why. E.g. +`'pods "src-xz59f-hmtkp-7bd4bc6964-" is forbidden: error looking up service account knative-debug/events-sa: serviceaccount "events-sa" not found'`. +Fix any errors so that it the `Deployment` is healthy. -If the `Deployment` is healthy, but the `ContainerSource` isn't, that implies something went wrong in [ContainerSource Controller](#containersource-controller). 
+If the `Deployment` is healthy, but the `ContainerSource` isn't, that implies +something went wrong in +[ContainerSource Controller](#containersource-controller). #### `sub` @@ -248,19 +296,25 @@ kubectl -n knative-debug get subscription sub -o yaml #### Controllers -Each of the resources has a Controller that is watching it. As of today, they tend to do a poor job of writing failure status messages and events, so we need to look at the Controller's logs. +Each of the resources has a Controller that is watching it. As of today, they +tend to do a poor job of writing failure status messages and events, so we need +to look at the Controller's logs. ##### Deployment Controller -The Kubernetes Deployment Controller, controlling `fn`, is out of scope for this document. +The Kubernetes Deployment Controller, controlling `fn`, is out of scope for this +document. ##### Service Controller -The Kubernetes Service Controller, controlling `svc`, is out of scope for this document. +The Kubernetes Service Controller, controlling `svc`, is out of scope for this +document. ##### Channel Controller -There is not a single `Channel` Controller. Instead, there is a single Controller for each `ClusterChannelProvisioner`. `chan` uses the `in-memory-channel` `ClusterChannelProvisioner`, whose Controller is: +There is not a single `Channel` Controller. Instead, there is a single +Controller for each `ClusterChannelProvisioner`. `chan` uses the +`in-memory-channel` `ClusterChannelProvisioner`, whose Controller is: ```shell kubectl -n knative-eventing get pod -l clusterChannelProvisioner=in-memory-channel,role=controller -o yaml @@ -272,19 +326,26 @@ See its logs with: kubectl -n knative-eventing logs -l clusterChannelProvisioner=in-memory-channel,role=controller ``` -Pay particular attention to any lines that have a logging level of `warning` or `error`. +Pay particular attention to any lines that have a logging level of `warning` or +`error`. ##### Source Controller -Each Source will have its own Controller. `src` is a `KubernetesEventSource`, so its Controller is: +Each Source will have its own Controller. `src` is a `KubernetesEventSource`, so +its Controller is: ```shell kubectl -n knative-sources get pod -l control-plane=controller-manager ``` -This is actually a single binary that runs multiple Source Controllers, importantly including [ContainerSource Controller](#containersource-controller). +This is actually a single binary that runs multiple Source Controllers, +importantly including [ContainerSource Controller](#containersource-controller). -The `KubernetesEventSource` is fairly simple, as it delegates all functionality to an underlying [ContainerSource](#containersource), so there is likely no useful information in its logs. Instead more useful information is likely in the [ContainerSource Controller](#containersource-controller)'s logs. If you want to look at `KubernetesEventSource` Controller's logs anyway, they can be see with: +The `KubernetesEventSource` is fairly simple, as it delegates all functionality +to an underlying [ContainerSource](#containersource), so there is likely no +useful information in its logs. Instead more useful information is likely in the +[ContainerSource Controller](#containersource-controller)'s logs. 
If you want to +look at `KubernetesEventSource` Controller's logs anyway, they can be see with: ```shell kubectl -n knative-sources logs -l control-plane=controller-manager @@ -292,7 +353,8 @@ kubectl -n knative-sources logs -l control-plane=controller-manager ###### ContainerSource Controller -The `ContainerSource` Controller is run in the same binary as some other Source Controllers. It is: +The `ContainerSource` Controller is run in the same binary as some other Source +Controllers. It is: ```shell kubectl -n knative-sources get pod -l control-plane=controller-manager @@ -304,11 +366,14 @@ View its logs with: kubectl -n knative-sources logs -l control-plane=controller-manager ``` -Pay particular attention to any lines that have a logging level of `warning` or `error`. +Pay particular attention to any lines that have a logging level of `warning` or +`error`. ##### Subscription Controller -The `Subscription` Controller controls `sub`. It attempts to resolve the addresses that a `Channel` should send events to, and once resolved, inject those into the `Channel`'s `spec.subscribable`. +The `Subscription` Controller controls `sub`. It attempts to resolve the +addresses that a `Channel` should send events to, and once resolved, inject +those into the `Channel`'s `spec.subscribable`. ```shell kubectl -n knative-eventing get pod -l app=eventing-controller @@ -320,22 +385,35 @@ View its logs with: kubectl -n knative-eventing logs -l app=eventing-controller ``` -Pay particular attention to any lines that have a logging level of `warning` or `error`. +Pay particular attention to any lines that have a logging level of `warning` or +`error`. ### Data Plane -The entire [Control Plane](#control-plane) looks healthy, but we're still not getting any events. Now we need to investigate the data plane. +The entire [Control Plane](#control-plane) looks healthy, but we're still not +getting any events. Now we need to investigate the data plane. The Knative event takes the following path: 1. Event is generated by `src`. - - In this case, it is caused by having a Kubernetes `Event` trigger it, but as far as Knative is concerned, the `Source` is generating the event denovo (from nothing). -1. `src` is POSTing the event to `chan`'s address, `chan-channel-45k5h.knative-debug.svc.cluster.local`. + - In this case, it is caused by having a Kubernetes `Event` trigger it, but + as far as Knative is concerned, the `Source` is generating the event denovo + (from nothing). + +1. `src` is POSTing the event to `chan`'s address, + `chan-channel-45k5h.knative-debug.svc.cluster.local`. -1. `src`'s Istio proxy is intercepting the request, seeing that the Host matches a `VirtualService`. The request's Host is rewritten to `chan.knative-debug.channels.cluster.local` and sent to the [Channel Dispatcher](#channel-dispatcher), `in-memory-channel-dispatcher.knative-eventing.svc.cluster.local`. +1. `src`'s Istio proxy is intercepting the request, seeing that the Host matches + a `VirtualService`. The request's Host is rewritten to + `chan.knative-debug.channels.cluster.local` and sent to the + [Channel Dispatcher](#channel-dispatcher), + `in-memory-channel-dispatcher.knative-eventing.svc.cluster.local`. -1. The Channel Dispatcher receives the request and introspects the Host header to determine which `Channel` it corresponds to. It sees that it corresponds to `knative-debug/chan` so forwards the request to the subscribers defined in `sub`, in particular `svc`, which is backed by `fn`. +1. 
The Channel Dispatcher receives the request and introspects the Host header + to determine which `Channel` it corresponds to. It sees that it corresponds + to `knative-debug/chan` so forwards the request to the subscribers defined in + `sub`, in particular `svc`, which is backed by `fn`. 1. `fn` receives the request and logs it. @@ -351,20 +429,27 @@ containerSourceName=$(kubectl -n knative-debug get containersource -o jsonpath=" kubectl -n knative-debug logs -l source=$containerSourceName -c source ``` -Note that a few log lines within the first ~15 seconds of the `Pod` starting like the following are fine. They represent the time waiting for the Istio proxy to start. If you see these more than a few seconds after the `Pod` starts, then something is wrong. +Note that a few log lines within the first ~15 seconds of the `Pod` starting +like the following are fine. They represent the time waiting for the Istio proxy +to start. If you see these more than a few seconds after the `Pod` starts, then +something is wrong. ```shell E0116 23:59:40.033667 1 reflector.go:205] github.com/knative/eventing-sources/pkg/adapter/kubernetesevents/adapter.go:73: Failed to list *v1.Event: Get https://10.51.240.1:443/api/v1/namespaces/kna tive-debug/events?limit=500&resourceVersion=0: dial tcp 10.51.240.1:443: connect: connection refused E0116 23:59:41.034572 1 reflector.go:205] github.com/knative/eventing-sources/pkg/adapter/kubernetesevents/adapter.go:73: Failed to list *v1.Event: Get https://10.51.240.1:443/api/v1/namespaces/kna tive-debug/events?limit=500&resourceVersion=0: dial tcp 10.51.240.1:443: connect: connection refused ``` -The success message is `debug` level, so we don't expect to see anything. If you see lines with a logging level of `error`, look at their `msg`. For example: +The success message is `debug` level, so we don't expect to see anything. If you +see lines with a logging level of `error`, look at their `msg`. For example: ```shell "msg":"[404] unexpected response \"\"" ``` -Which means that `src` correctly got the Kubernetes `Event` and tried to send it to `chan`, but failed to do so. In this case, the response code was a 404. We will look at the Istio proxy's logs to see if we can get any further information: +Which means that `src` correctly got the Kubernetes `Event` and tried to send it +to `chan`, but failed to do so. In this case, the response code was a 404. We +will look at the Istio proxy's logs to see if we can get any further +information: ```shell srcUID=$(kubectl -n knative-debug get kuberneteseventsource src -o jsonpath='{.metadata.uid}') @@ -378,13 +463,25 @@ We see lines like: [2019-01-17T17:16:11.898Z] "POST / HTTP/1.1" 404 NR 0 0 0 - "-" "Go-http-client/1.1" "4702a818-11e3-9e15-b523-277b94598101" "chan-channel-45k5h.knative-debug.svc.cluster.local" "-" ``` -These are lines emitted by [Envoy](https://www.envoyproxy.io). The line is documented as Envoy's [Access Logging](https://www.envoyproxy.io/docs/envoy/latest/configuration/access_log). That's odd, we already verified that there is a [`VirtualService`](#virtualservice) for `chan`. In fact, we don't expect to see `chan-channel-45k5h.knative-debug.svc.cluster.local` at all, it should be replaced with `chan.knative-debug.channels.cluster.local`. We keep looking in the same Istio proxy logs and see: +These are lines emitted by [Envoy](https://www.envoyproxy.io). The line is +documented as Envoy's +[Access Logging](https://www.envoyproxy.io/docs/envoy/latest/configuration/access_log). 
+That's odd, we already verified that there is a +[`VirtualService`](#virtualservice) for `chan`. In fact, we don't expect to see +`chan-channel-45k5h.knative-debug.svc.cluster.local` at all, it should be +replaced with `chan.knative-debug.channels.cluster.local`. We keep looking in +the same Istio proxy logs and see: ```shell [2019-01-16 23:59:41.408][23][warning][config] bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_mux_subscription_lib/common/config/grpc_mux_subscription_impl.h:70] gRPC config for type.googleapis.com/envoy.api.v2.RouteConfiguration rejected: Only unique values for domains are permitted. Duplicate entry of domain chan.knative-debug.channels.cluster.local - ``` +``` -This shows that the [`VirtualService`](#virtualservice) created for `chan`, which tries to map two hosts, `chan-channel-45k5h.knative-debug.svc.cluster.local` and `chan.knative-debug.channels.cluster.local`, is not working. The most likely cause is duplicate `VirtualService`s that all try to rewrite those hosts. Look at all the `VirtualService`s in the namespace and see what hosts they rewrite: +This shows that the [`VirtualService`](#virtualservice) created for `chan`, +which tries to map two hosts, +`chan-channel-45k5h.knative-debug.svc.cluster.local` and +`chan.knative-debug.channels.cluster.local`, is not working. The most likely +cause is duplicate `VirtualService`s that all try to rewrite those hosts. Look +at all the `VirtualService`s in the namespace and see what hosts they rewrite: ```shell kubectl -n knative-debug get virtualservice -o custom-columns='NAME:.metadata.name,HOST:.spec.hosts[*]' @@ -402,24 +499,33 @@ chan-channel-8dc2x chan-channel-45k5h.knative-debug.svc.cluster.local,chan.kna Note: This shouldn't happen normally. It only happened here because I had local edits to the Channel controller and created a bug. If you see this with any released Channel Controllers, open a bug with all relevant information (Channel Controller info and YAML of all the VirtualServices). ``` -Both are owned by `chan`. Deleting both, causes the [Channel Controller](#channel-controller) to recreate the correct one. After deleting both, a single new one is created (same command as above): +Both are owned by `chan`. Deleting both, causes the +[Channel Controller](#channel-controller) to recreate the correct one. After +deleting both, a single new one is created (same command as above): ```shell NAME HOST chan-channel-9kbr8 chan-channel-45k5h.knative-debug.svc.cluster.local,chan.knative-debug.channels.cluster.local ``` -After [forcing a Kubernetes event to occur](#triggering-events), the Istio proxy logs now have: +After [forcing a Kubernetes event to occur](#triggering-events), the Istio proxy +logs now have: ```shell [2019-01-17T18:04:07.571Z] "POST / HTTP/1.1" 202 - 795 0 1 1 "-" "Go-http-client/1.1" "ba36be7e-4fc4-9f26-83bd-b1438db730b0" "chan.knative-debug.channels.cluster.local" "10.48.1.94:8080" ``` -Which looks correct. Most importantly, the return code is now 202 Accepted. In addition, the request's Host is being correctly rewritten to `chan.knative-debug.channels.cluster.local`. +Which looks correct. Most importantly, the return code is now 202 Accepted. In +addition, the request's Host is being correctly rewritten to +`chan.knative-debug.channels.cluster.local`. #### Channel Dispatcher -The Channel Dispatcher is the component that receives POSTs pushing events into `Channel`s and then POSTs to subscribers of those `Channel`s when an event is received. 
For the `in-memory-channel` used in this example, there is a single binary that handles both the receiving and dispatching sides for all `in-memory-channel` `Channel`s.
+The Channel Dispatcher is the component that receives POSTs pushing events into
+`Channel`s and then POSTs to subscribers of those `Channel`s when an event is
+received. For the `in-memory-channel` used in this example, there is a single
+binary that handles both the receiving and dispatching sides for all
+`in-memory-channel` `Channel`s.

First we will inspect the Dispatcher's logs to see if there is anything obvious:

@@ -434,7 +540,8 @@ Ideally we will see lines like:

{"level":"info","ts":1547752472.9582398,"caller":"provisioners/message_dispatcher.go:106","msg":"Dispatching message to http://svc.knative-debug.svc.cluster.local/"}
```

-Which shows that the request is being received and then sent to `svc`, which is returning a 2XX response code (likely 200, 202, or 204).
+Which shows that the request is being received and then sent to `svc`, which is
+returning a 2XX response code (likely 200, 202, or 204).

However, if we see something like:

@@ -444,9 +551,11 @@

{"level":"error","ts":1547752478.6035335,"caller":"fanout/fanout_handler.go:108","msg":"Fanout had an error","error":"Unable to complete request Post http://svc.knative-debug.svc.cluster.local/: EOF","stacktrace":"github.com/knative/eventing/pkg/sidecar/fanout.(*Handler).dispatch\n\t/usr/local/google/home/harwayne/go/src/github.com/knative/eventing/pkg/sidecar/fanout/fanout_handler.go:108\ngithub.com/knative/eventing/pkg/sidecar/fanout.createReceiverFunction.func1\n\t/usr/local/google/home/harwayne/go/src/github.com/knative/eventing/pkg/sidecar/fanout/fanout_handler.go:86\ngithub.com/knative/eventing/pkg/provisioners.(*MessageReceiver).HandleRequest\n\t/usr/local/google/home/harwayne/go/src/github.com/knative/eventing/pkg/provisioners/message_receiver.go:132\ngithub.com/knative/eventing/pkg/sidecar/fanout.(*Handler).ServeHTTP\n\t/usr/local/google/home/harwayne/go/src/github.com/knative/eventing/pkg/sidecar/fanout/fanout_handler.go:91\ngithub.com/knative/eventing/pkg/sidecar/multichannelfanout.(*Handler).ServeHTTP\n\t/usr/local/google/home/harwayne/go/src/github.com/knative/eventing/pkg/sidecar/multichannelfanout/multi_channel_fanout_handler.go:128\ngithub.com/knative/eventing/pkg/sidecar/swappable.(*Handler).ServeHTTP\n\t/usr/local/google/home/harwayne/go/src/github.com/knative/eventing/pkg/sidecar/swappable/swappable.go:105\nnet/http.serverHandler.ServeHTTP\n\t/usr/lib/google-golang/src/net/http/server.go:2740\nnet/http.(*conn).serve\n\t/usr/lib/google-golang/src/net/http/server.go:1846"}
```

-Then we know there was a problem posting to `http://svc.knative-debug.svc.cluster.local/`.
+Then we know there was a problem posting to
+`http://svc.knative-debug.svc.cluster.local/`.

-TODO Finish this section. Especially after the Channel Dispatcher emits K8s events about failures.
+TODO Finish this section. Especially after the Channel Dispatcher emits K8s
+events about failures.

#### `fn`

diff --git a/eventing/samples/gcp-pubsub-source/README.md b/eventing/samples/gcp-pubsub-source/README.md
index a589dc70126..1c25320308f 100644
--- a/eventing/samples/gcp-pubsub-source/README.md
+++ b/eventing/samples/gcp-pubsub-source/README.md
@@ -19,10 +19,11 @@ source is most useful as a bridge from other GCP services, such as
 1. Setup [Knative Serving](https://github.com/knative/docs/blob/master/install)
 1. 
Setup
-   [Knative Eventing](https://github.com/knative/docs/tree/master/eventing).
-   In addition, install the GCP PubSub event source from `release-gcppubsub.yaml`:
+   [Knative Eventing](https://github.com/knative/docs/tree/master/eventing). In
+   addition, install the GCP PubSub event source from `release-gcppubsub.yaml`:

-   kubectl apply --filename kubectl apply --filename https://github.com/knative/eventing-sources/releases/download/v0.3.0/release-gcppubsub.yaml
+   kubectl apply --filename https://github.com/knative/eventing-sources/releases/download/v0.3.0/release-gcppubsub.yaml

1. Enable the 'Cloud Pub/Sub API' on your project:

diff --git a/eventing/samples/writing-a-source/01-bootstrap.md b/eventing/samples/writing-a-source/01-bootstrap.md
index 67a59f02344..83d1e69106e 100644
--- a/eventing/samples/writing-a-source/01-bootstrap.md
+++ b/eventing/samples/writing-a-source/01-bootstrap.md
@@ -32,13 +32,13 @@ basic project structure.

You'll need to choose the following:

-* **A license.** The reference project uses Apache 2.
-* **A domain name.** This is the unique domain used to identify your project's
-  resources. The reference project uses `knative.dev`, but you should choose
-  one unique to you or your organization.
-* **An author name.** This is the copyright owner listed in the copyright
-  notice at the top of each source file. The reference project uses `The
-  Knative Authors.`
+- **A license.** The reference project uses Apache 2.
+- **A domain name.** This is the unique domain used to identify your project's
+  resources. The reference project uses `knative.dev`, but you should choose one
+  unique to you or your organization.
+- **An author name.** This is the copyright owner listed in the copyright notice
+  at the top of each source file. The reference project uses
+  `The Knative Authors.`

```sh
kubebuilder init --domain knative.dev --license apache2 --owner "The Knative Authors"
diff --git a/eventing/samples/writing-a-source/02-define-source.md b/eventing/samples/writing-a-source/02-define-source.md
index ae3a6384e19..a9929f250d3 100644
--- a/eventing/samples/writing-a-source/02-define-source.md
+++ b/eventing/samples/writing-a-source/02-define-source.md
@@ -15,17 +15,17 @@ CRD and a controller to reconcile it.

You'll need to choose the following:

-* **A group name.** This is the resource group that will contain the resource.
-  It's prepended to the domain name chosen earlier to produce the
-  fully-qualified resource name. The reference project uses `sources`.
-* **A version name.** This is the initial version string for the CRD. It's
-  usually `v1alpha1` for new resources. The reference project uses `v1alpha1`.
-* **A kind name.** This is the unqualified type name of the resource. The
-  reference project uses `SampleSource`.
-
-  The fully-qualified name of the reference resource is
-  `samplesources.sources.knative.dev`, and its `apiVersion` is
-  `sources.knative.dev/v1alpha1`.
+- **A group name.** This is the resource group that will contain the resource.
+  It's prepended to the domain name chosen earlier to produce the
+  fully-qualified resource name. The reference project uses `sources`.
+- **A version name.** This is the initial version string for the CRD. It's
+  usually `v1alpha1` for new resources. The reference project uses `v1alpha1`.
+- **A kind name.** This is the unqualified type name of the resource. The
+  reference project uses `SampleSource`. 
+ + The fully-qualified name of the reference resource is + `samplesources.sources.knative.dev`, and its `apiVersion` is + `sources.knative.dev/v1alpha1`. ```sh kubebuilder create api --group sources --version v1alpha1 --kind SampleSource diff --git a/eventing/samples/writing-a-source/04-publish-to-cluster.md b/eventing/samples/writing-a-source/04-publish-to-cluster.md index 070879a010a..d5941a7e824 100644 --- a/eventing/samples/writing-a-source/04-publish-to-cluster.md +++ b/eventing/samples/writing-a-source/04-publish-to-cluster.md @@ -49,7 +49,16 @@ reference project, that error looks like this: _Stacktraces in log messages have been elided for clarity._ ```json -{"level":"error","ts":1546896989.0428371,"logger":"kubebuilder.controller","msg":"Reconciler error","controller":"samplesource-controller","request":"default/samplesource-sample","error":"Failed to get sink URI: sink reference is nil","stacktrace":"..."} +{ + "level": "error", + "ts": 1546896989.0428371, + "logger": "kubebuilder.controller", + "msg": "Reconciler error", + "controller": "samplesource-controller", + "request": "default/samplesource-sample", + "error": "Failed to get sink URI: sink reference is nil", + "stacktrace": "..." +} ``` Create a TestSink CRD to use as an Addressable. @@ -99,11 +108,17 @@ spec: namespace: default" | kubectl apply -f - ``` -Check the controller logs in the first terminal. You should see an `Updated -Status` log line. In the reference project, that line looks like this: +Check the controller logs in the first terminal. You should see an +`Updated Status` log line. In the reference project, that line looks like this: ```json -{"level":"info","ts":1546898070.4645903,"logger":"controller","msg":"Updating Status","request":{"namespace":"default","name":"samplesource-sample"}} +{ + "level": "info", + "ts": 1546898070.4645903, + "logger": "controller", + "msg": "Updating Status", + "request": { "namespace": "default", "name": "samplesource-sample" } +} ``` Verify that the source's SinkURI was updated by the controller. In the reference @@ -131,9 +146,9 @@ status: Normally controllers run inside the Kubernetes cluster. This requires publishing a container image and creating several Kubernetes objects: -* Namespace to run the controller pod in -* StatefulSet or Deployment to manage the controller pod -* RBAC rules granting permissions to manipulate Kubernetes resources +- Namespace to run the controller pod in +- StatefulSet or Deployment to manage the controller pod +- RBAC rules granting permissions to manipulate Kubernetes resources Export the `IMG` environment variable with a value equal to the desired container image URL. This URL will be different depending on your container diff --git a/eventing/samples/writing-a-source/README.md b/eventing/samples/writing-a-source/README.md index 70f4faaab4e..627b50acb23 100644 --- a/eventing/samples/writing-a-source/README.md +++ b/eventing/samples/writing-a-source/README.md @@ -18,34 +18,33 @@ wants to develop a new event source for use with Knative Eventing. 
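Before diving into the steps, it may help to see roughly what using the finished source looks like. The sketch below is illustrative only: the `SampleSource` kind and the `sources.knative.dev/v1alpha1` API version come from the reference project, while the sink reference fields and the `status.sinkUri` path are assumptions modeled on it rather than a definitive API.

```shell
# Illustrative sketch, not a definitive API: apply a SampleSource that points
# at an Addressable sink, then read back the sink URI the controller resolved.
# The sink field names below are assumptions modeled on the reference project.
echo "apiVersion: sources.knative.dev/v1alpha1
kind: SampleSource
metadata:
  name: samplesource-sample
spec:
  sink:
    apiVersion: sources.knative.dev/v1alpha1
    kind: TestSink
    name: testsink-sample
    namespace: default" | kubectl apply -f -

kubectl get samplesource samplesource-sample \
  --output 'jsonpath={.status.sinkUri}'
```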
You'll need these tools installed: -* git -* golang -* make -* [dep](https://github.com/golang/dep) -* [kubebuilder](https://github.com/kubernetes-sigs/kubebuilder) -* [kustomize](https://github.com/kubernetes-sigs/kustomize) -* [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) - (optional) -* [minikube](https://github.com/kubernetes/minikube) (optional) +- git +- golang +- make +- [dep](https://github.com/golang/dep) +- [kubebuilder](https://github.com/kubernetes-sigs/kubebuilder) +- [kustomize](https://github.com/kubernetes-sigs/kustomize) +- [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) (optional) +- [minikube](https://github.com/kubernetes/minikube) (optional) ## Steps -* [Bootstrap Project](01-bootstrap.md) -* [Define The Source Resource](02-define-source.md) -* [Reconcile Sources](03-reconcile-sources.md) -* [Publish to Cluster](04-publish-to-cluster.md) -* Dispatching Events +- [Bootstrap Project](01-bootstrap.md) +- [Define The Source Resource](02-define-source.md) +- [Reconcile Sources](03-reconcile-sources.md) +- [Publish to Cluster](04-publish-to-cluster.md) +- Dispatching Events ## Alternatives Kubebuilder not your thing? Prefer the easy way? Check out these alternatives. -* [ContainerSource](https://github.com/knative/docs/tree/master/eventing/sources#meta-sources) - is an easy way to turn any dispatcher container into an Event Source. -* [Auto ContainerSource](https://github.com/knative/docs/tree/master/eventing/sources#meta-sources) - is an even easier way to turn any dispatcher container into an Event Source - without writing any controller code. It requires Metacontroller. -* [Metacontroller](https://metacontroller.app) can be used to write - controllers as webhooks in any language. -* The [Cloud Scheduler source](https://github.com/vaikas-google/csr) uses the - standard Kubernetes Golang client library instead of Kubebuilder. +- [ContainerSource](https://github.com/knative/docs/tree/master/eventing/sources#meta-sources) + is an easy way to turn any dispatcher container into an Event Source. +- [Auto ContainerSource](https://github.com/knative/docs/tree/master/eventing/sources#meta-sources) + is an even easier way to turn any dispatcher container into an Event Source + without writing any controller code. It requires Metacontroller. +- [Metacontroller](https://metacontroller.app) can be used to write controllers + as webhooks in any language. +- The [Cloud Scheduler source](https://github.com/vaikas-google/csr) uses the + standard Kubernetes Golang client library instead of Kubebuilder. diff --git a/eventing/sources/README.md b/eventing/sources/README.md index ccf69f70799..4f9f9ca9189 100644 --- a/eventing/sources/README.md +++ b/eventing/sources/README.md @@ -12,54 +12,48 @@ procedure: # Knative Event Sources -Event Sources are Kubernetes Custom Resources which provide a mechanism for registering interest in -a class of events from a particular software system. Since different event sources may be described -by different Custom Resources, this page provides an index of the available source resource types as -well as links to installation instructions. +Event Sources are Kubernetes Custom Resources which provide a mechanism for +registering interest in a class of events from a particular software system. +Since different event sources may be described by different Custom Resources, +this page provides an index of the available source resource types as well as +links to installation instructions. 
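To make that concrete: registering interest in events generally means applying a source Custom Resource that names the events to watch and a sink to deliver them into. Below is a minimal sketch using the CronJob source listed in the table that follows; the field names are modeled on the `cronjob-source` sample and should be treated as illustrative, not normative.

```shell
# Illustrative only: a CronJobSource that posts a message to a Knative Service
# every two minutes. The message-dumper Service is assumed to already exist.
echo 'apiVersion: sources.eventing.knative.dev/v1alpha1
kind: CronJobSource
metadata:
  name: cronjob-source-sample
spec:
  schedule: "*/2 * * * *"
  data: "{\"message\": \"Hello world!\"}"
  sink:
    apiVersion: serving.knative.dev/v1alpha1
    kind: Service
    name: message-dumper' | kubectl apply -f -
```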
This is a non-exhaustive list of Event sources for Knative.

-
### Inclusion in this list is not an endorsement, nor does it imply any level of support.

-
## Sources

These are sources that are installed as `CRD`s.

-Name | Status | Support | Description
---- | --- | --- | ---
-[AWS SQS](https://github.com/knative/eventing-sources/blob/master/pkg/apis/sources/v1alpha1/aws_sqs_types.go) | Proof of Concept | None | Brings [AWS Simple Quele Service](https://aws.amazon.com/sqs/) messages into Knative.
-[Cron Job](https://github.com/knative/eventing-sources/blob/master/pkg/apis/sources/v1alpha1/cron_job_types.go) | Proof of Concept | None | Uses an in-memory timer to produce events on the specified Cron schedule.
-[GCP PubSub](https://github.com/knative/eventing-sources/blob/master/contrib/gcppubsub/pkg/apis/sources/v1alpha1/gcp_pubsub_types.go) | Proof of Concept | None | Brings [GCP PubSub](https://cloud.google.com/pubsub/) messages into Knative.
-[GitHub](https://github.com/knative/eventing-sources/blob/master/pkg/apis/sources/v1alpha1/githubsource_types.go) | Proof of Concept | None | Registers for events of the specified types on the specified GitHub organization/repository. Brings those events into Knative.
-[GitLab](https://gitlab.com/triggermesh/gitlabsource) | Proof of Concept | None | Registers for events of the specified types on the specified GitLab repository. Brings those events into Knative.
-[Google Cloud Scheduler](https://github.com/vaikas-google/csr) | Active Development | None | Create, update, and delete [Google Cloud Scheduler](https://cloud.google.com/scheduler/) Jobs. When those jobs are triggered, receive the event inside Knative.
-[Google Cloud Storage](https://github.com/vaikas-google/gcs) | Active Development | None | Registers for events of the specified types on the specified Google Cloud Storage bucket and optional object prefix. Brings those events into Knative.
-[Kubernetes](https://github.com/knative/eventing-sources/blob/master/pkg/apis/sources/v1alpha1/kuberneteseventsource_types.go) | Active Development | Knative | Brings Kubernetes cluster events into Knative. Uses ContainerSource for underlying infrastructure.
-
-
+| Name                                                                                                                                    | Status             | Support | Description                                                                                                                                                       |
+| --------------------------------------------------------------------------------------------------------------------------------------- | ------------------ | ------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| [AWS SQS](https://github.com/knative/eventing-sources/blob/master/pkg/apis/sources/v1alpha1/aws_sqs_types.go)                            | Proof of Concept   | None    | Brings [AWS Simple Queue Service](https://aws.amazon.com/sqs/) messages into Knative.                                                                            |
+| [Cron Job](https://github.com/knative/eventing-sources/blob/master/pkg/apis/sources/v1alpha1/cron_job_types.go)                          | Proof of Concept   | None    | Uses an in-memory timer to produce events on the specified Cron schedule.                                                                                        |
+| [GCP PubSub](https://github.com/knative/eventing-sources/blob/master/contrib/gcppubsub/pkg/apis/sources/v1alpha1/gcp_pubsub_types.go)    | Proof of Concept   | None    | Brings [GCP PubSub](https://cloud.google.com/pubsub/) messages into Knative.                                                                                     |
+| [GitHub](https://github.com/knative/eventing-sources/blob/master/pkg/apis/sources/v1alpha1/githubsource_types.go)                        | Proof of Concept   | None    | Registers for events of the specified types on the specified GitHub organization/repository. Brings those events into Knative. 
| +| [GitLab](https://gitlab.com/triggermesh/gitlabsource) | Proof of Concept | None | Registers for events of the specified types on the specified GitLab repository. Brings those events into Knative. | +| [Google Cloud Scheduler](https://github.com/vaikas-google/csr) | Active Development | None | Create, update, and delete [Google Cloud Scheduler](https://cloud.google.com/scheduler/) Jobs. When those jobs are triggered, receive the event inside Knative. | +| [Google Cloud Storage](https://github.com/vaikas-google/gcs) | Active Development | None | Registers for events of the specified types on the specified Google Cloud Storage bucket and optional object prefix. Brings those events into Knative. | +| [Kubernetes](https://github.com/knative/eventing-sources/blob/master/pkg/apis/sources/v1alpha1/kuberneteseventsource_types.go) | Active Development | Knative | Brings Kubernetes cluster events into Knative. Uses ContainerSource for underlying infrastructure. | ## Meta Sources These are not directly usable, but make writing a Source much easier. -Name | Status | Support | Description ---- | --- | --- | --- -[Auto Container Source](https://github.com/Harwayne/auto-container-source) | Proof of Concept | None | AutoContainerSource is a controller that allows the Source CRDs _without_ needing a controller. It notices CRDs with a specific label and starts controlling resources of that type. It utilizes Container Source as underlying infrastructure. -[Container Source](https://github.com/knative/eventing-sources/blob/master/pkg/apis/sources/v1alpha1/containersource_types.go) | Active Development | Knative | Container Source is a generic controller. Given an Image URL, it will keep a single `Pod` running with the specified image, environment, and arguments. It is used by multiple other Sources as underlying infrastructure. -[Sample Source](https://github.com/grantr/sample-source) | Proof of Concept | None | SampleSource is a reference implementation supporting the [Writing an Event Source the Hard Way tutorial](../samples/writing-a-source). - - +| Name | Status | Support | Description | +| ------------------------------------------------------------------------------------------------------------------------------ | ------------------ | ------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| [Auto Container Source](https://github.com/Harwayne/auto-container-source) | Proof of Concept | None | AutoContainerSource is a controller that allows the Source CRDs _without_ needing a controller. It notices CRDs with a specific label and starts controlling resources of that type. It utilizes Container Source as underlying infrastructure. | +| [Container Source](https://github.com/knative/eventing-sources/blob/master/pkg/apis/sources/v1alpha1/containersource_types.go) | Active Development | Knative | Container Source is a generic controller. Given an Image URL, it will keep a single `Pod` running with the specified image, environment, and arguments. It is used by multiple other Sources as underlying infrastructure. | +| [Sample Source](https://github.com/grantr/sample-source) | Proof of Concept | None | SampleSource is a reference implementation supporting the [Writing an Event Source the Hard Way tutorial](../samples/writing-a-source). 
 |

### ContainerSource Containers

These are containers intended to be used with `ContainerSource`.

-Name | Status | Support | Description
---- | --- | --- | ---
-[Heartbeat](https://github.com/knative/eventing-sources/tree/master/cmd/heartbeats) | Proof of Concept | None | Uses an in-memory timer to produce events at the specified interval.
-[Heartbeat](https://github.com/Harwayne/auto-container-source/tree/master/heartbeat-source) | Proof of Concept | None | Uses an in-memory timer to produce events as the specified interval. Uses AutoContainerSource for underlying infrastructure.
-[K8s](https://github.com/Harwayne/auto-container-source/tree/master/k8s-event-source) | Proof of Concept | None | Brings Kubernetes cluster events into Knative. Uses AutoContainerSource for underlying infrastructure.
-[WebSocket](https://github.com/knative/eventing-sources/tree/master/cmd/websocketsource) | Active Development | None | Opens a WebSocket to the specified source and packages each received message as a Knative event.
-
+| Name                                                                                          | Status             | Support | Description                                                                                                                    |
+| ----------------------------------------------------------------------------------------------- | ------------------ | ------- | -------------------------------------------------------------------------------------------------------------------------------- |
+| [Heartbeat](https://github.com/knative/eventing-sources/tree/master/cmd/heartbeats)              | Proof of Concept   | None    | Uses an in-memory timer to produce events at the specified interval.                                                          |
+| [Heartbeat](https://github.com/Harwayne/auto-container-source/tree/master/heartbeat-source)      | Proof of Concept   | None    | Uses an in-memory timer to produce events at the specified interval. Uses AutoContainerSource for underlying infrastructure.  |
+| [K8s](https://github.com/Harwayne/auto-container-source/tree/master/k8s-event-source)            | Proof of Concept   | None    | Brings Kubernetes cluster events into Knative. Uses AutoContainerSource for underlying infrastructure.                        |
+| [WebSocket](https://github.com/knative/eventing-sources/tree/master/cmd/websocketsource)         | Active Development | None    | Opens a WebSocket to the specified source and packages each received message as a Knative event.                              |
diff --git a/install/Knative-custom-install.md b/install/Knative-custom-install.md
index 9fded4ea1ba..e40173f9c11 100644
--- a/install/Knative-custom-install.md
+++ b/install/Knative-custom-install.md
@@ -5,8 +5,8 @@ Kubernetes cluster. Knative's pluggable components allow you to install only
 what you need.

The steps covered in this guide are for advanced operators who want to customize
-each Knative installation. Installing individual Knative components requires
-you to run multiple installation commands.
+each Knative installation. Installing individual Knative components requires you
+to run multiple installation commands.

## Before you begin

@@ -66,12 +66,11 @@ service mesh. If you install any of the following options, you must install

#### Istio installation options

-
-| Istio Install Filename        | Description                                                                |
-| ----------------------------- | ------------------------------------------------------------------------ |
-| [`istio-crds.yaml`][a]†       | Creates CRDs before installing Istio.                                      |
-| [`istio.yaml`][b]†            | Install Istio with service mesh enabled (automatic sidecar injection).    |
-| [`istio-lean.yaml`][c]        | Install Istio and disable the service mesh by default. 
| +| Istio Install Filename | Description | +| ----------------------- | ---------------------------------------------------------------------- | +| [`istio-crds.yaml`][a]† | Creates CRDs before installing Istio. | +| [`istio.yaml`][b]† | Install Istio with service mesh enabled (automatic sidecar injection). | +| [`istio-lean.yaml`][c] | Install Istio and disable the service mesh by default. | † These are the recommended standard install files suitable for most use cases. @@ -151,6 +150,7 @@ with Knative. ### Choosing Knative installation files The following Knative installation files are available: + - **Serving Component and Observability Plugins**: - https://github.com/knative/serving/releases/download/v0.3.0/serving.yaml - https://github.com/knative/serving/releases/download/v0.3.0/monitoring.yaml @@ -180,26 +180,26 @@ files from the Knative repositories: - [Eventing][4] - [Eventing Sources][5] -| Knative Install Filename | Notes | Dependencies | -| -------------------------------------------------| ----------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------- | -| **knative/serving** | | | -| [`serving.yaml`][1.1]† | Installs the Serving component. | | -| [`monitoring.yaml`][1.2]† | Installs the [ELK stack][2], [Prometheus][2.1], [Grafana][2.2], and [Zipkin][2.3]**\*** | Serving component | -| [`monitoring-logs-elasticsearch.yaml`][1.3] | Installs only the [ELK stack][2]**\*** | Serving component | -| [`monitoring-metrics-prometheus.yaml`][1.4] | Installs only [Prometheus][2.1]**\*** | Serving component | -| [`monitoring-tracing-zipkin.yaml`][1.5] | Installs only [Zipkin][2.3].**\*** | Serving component, ELK stack (monitoring-logs-elasticsearch.yaml) | -| [`monitoring-tracing-zipkin-in-mem.yaml`][1.6] | Installs only [Zipkin in-memory][2.3]**\*** | Serving component | -| **knative/build** | | | -| [`release.yaml`][3.1]† | Installs the Build component. | | -| **knative/eventing** | | | -| [`release.yaml`][4.1]† | Installs the Eventing component. Includes the in-memory channel provisioner. | Serving component | -| [`eventing.yaml`][4.2] | Installs the Eventing component. Does not include the in-memory channel provisioner. | Serving component | -| [`in-memory-channel.yaml`][4.3] | Installs only the in-memory channel provisioner. | Serving component, Eventing component | -| [`kafka.yaml`][4.4] | Installs only the Kafka channel provisioner. | Serving component, Eventing component | -| **knative/eventing-sources** | | | -| [`release.yaml`][5.1]† | Installs the following sources: [Kubernetes][6], [GitHub][6.1], [Container image][6.2], [CronJob][6.3]| Serving component, Eventing component | -| [`release-gcppubsub.yaml`][5.2] | Installs the following sources: [PubSub][6.4] | Serving component, Eventing component | -| [`message-dumper.yaml`][5.3] | Installs an Event logging service for debugging. | Serving component, Eventing component | +| Knative Install Filename | Notes | Dependencies | +| ---------------------------------------------- | ------------------------------------------------------------------------------------------------------ | ----------------------------------------------------------------- | +| **knative/serving** | | | +| [`serving.yaml`][1.1]† | Installs the Serving component. 
| | +| [`monitoring.yaml`][1.2]† | Installs the [ELK stack][2], [Prometheus][2.1], [Grafana][2.2], and [Zipkin][2.3]**\*** | Serving component | +| [`monitoring-logs-elasticsearch.yaml`][1.3] | Installs only the [ELK stack][2]**\*** | Serving component | +| [`monitoring-metrics-prometheus.yaml`][1.4] | Installs only [Prometheus][2.1]**\*** | Serving component | +| [`monitoring-tracing-zipkin.yaml`][1.5] | Installs only [Zipkin][2.3].**\*** | Serving component, ELK stack (monitoring-logs-elasticsearch.yaml) | +| [`monitoring-tracing-zipkin-in-mem.yaml`][1.6] | Installs only [Zipkin in-memory][2.3]**\*** | Serving component | +| **knative/build** | | | +| [`release.yaml`][3.1]† | Installs the Build component. | | +| **knative/eventing** | | | +| [`release.yaml`][4.1]† | Installs the Eventing component. Includes the in-memory channel provisioner. | Serving component | +| [`eventing.yaml`][4.2] | Installs the Eventing component. Does not include the in-memory channel provisioner. | Serving component | +| [`in-memory-channel.yaml`][4.3] | Installs only the in-memory channel provisioner. | Serving component, Eventing component | +| [`kafka.yaml`][4.4] | Installs only the Kafka channel provisioner. | Serving component, Eventing component | +| **knative/eventing-sources** | | | +| [`release.yaml`][5.1]† | Installs the following sources: [Kubernetes][6], [GitHub][6.1], [Container image][6.2], [CronJob][6.3] | Serving component, Eventing component | +| [`release-gcppubsub.yaml`][5.2] | Installs the following sources: [PubSub][6.4] | Serving component, Eventing component | +| [`message-dumper.yaml`][5.3] | Installs an Event logging service for debugging. | Serving component, Eventing component | _\*_ See [Installing logging, metrics, and traces](../serving/installing-logging-metrics-traces.md) @@ -209,11 +209,16 @@ for details about installing the various supported observability plug-ins. [1]: https://github.com/knative/serving/releases/tag/v0.3.0 [1.1]: https://github.com/knative/serving/releases/download/v0.3.0/serving.yaml -[1.2]: https://github.com/knative/serving/releases/download/v0.3.0/monitoring.yaml -[1.3]: https://github.com/knative/serving/releases/download/v0.3.0/monitoring-logs-elasticsearch.yaml -[1.4]: https://github.com/knative/serving/releases/download/v0.3.0/monitoring-metrics-prometheus.yaml -[1.5]: https://github.com/knative/serving/releases/download/v0.3.0/monitoring-tracing-zipkin.yaml -[1.6]: https://github.com/knative/serving/releases/download/v0.3.0/monitoring-tracing-zipkin-in-mem.yaml +[1.2]: + https://github.com/knative/serving/releases/download/v0.3.0/monitoring.yaml +[1.3]: + https://github.com/knative/serving/releases/download/v0.3.0/monitoring-logs-elasticsearch.yaml +[1.4]: + https://github.com/knative/serving/releases/download/v0.3.0/monitoring-metrics-prometheus.yaml +[1.5]: + https://github.com/knative/serving/releases/download/v0.3.0/monitoring-tracing-zipkin.yaml +[1.6]: + https://github.com/knative/serving/releases/download/v0.3.0/monitoring-tracing-zipkin-in-mem.yaml [2]: https://www.elastic.co/elk-stack [2.1]: https://prometheus.io [2.2]: https://grafana.com @@ -222,17 +227,24 @@ for details about installing the various supported observability plug-ins. 
[3.1]: https://github.com/knative/build/releases/download/v0.3.0/release.yaml [4]: https://github.com/knative/eventing/releases/tag/v0.3.0 [4.1]: https://github.com/knative/eventing/releases/download/v0.3.0/release.yaml -[4.2]: https://github.com/knative/eventing/releases/download/v0.3.0/eventing.yaml -[4.3]: https://github.com/knative/eventing/releases/download/v0.3.0/in-memory-channel.yaml +[4.2]: + https://github.com/knative/eventing/releases/download/v0.3.0/eventing.yaml +[4.3]: + https://github.com/knative/eventing/releases/download/v0.3.0/in-memory-channel.yaml [4.4]: https://github.com/knative/eventing/releases/download/v0.3.0/kafka.yaml [5]: https://github.com/knative/eventing-sources/releases/tag/v0.3.0 -[5.1]: https://github.com/knative/eventing-sources/releases/download/v0.3.0/release.yaml -[5.2]: https://github.com/knative/eventing-sources/releases/download/v0.3.0/release-gcppubsub.yaml -[5.3]: https://github.com/knative/eventing-sources/releases/download/v0.3.0/message-dumper.yaml -[6]: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.12/#event-v1-core +[5.1]: + https://github.com/knative/eventing-sources/releases/download/v0.3.0/release.yaml +[5.2]: + https://github.com/knative/eventing-sources/releases/download/v0.3.0/release-gcppubsub.yaml +[5.3]: + https://github.com/knative/eventing-sources/releases/download/v0.3.0/message-dumper.yaml +[6]: + https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.12/#event-v1-core [6.1]: https://developer.github.com/v3/activity/events/types/ [6.2]: https://github.com/knative/docs/tree/master/eventing#containersource -[6.3]: https://github.com/knative/eventing-sources/blob/master/samples/cronjob-source/README.md +[6.3]: + https://github.com/knative/eventing-sources/blob/master/samples/cronjob-source/README.md [6.4]: https://cloud.google.com/pubsub/ ### Installing Knative @@ -288,10 +300,10 @@ commands below. --filename https://github.com/knative/eventing-sources/releases/download/v0.3.0/release.yaml ``` -1. Depending on what you chose to install, view the status of your - installation by running one or more of the following commands. It might take - a few seconds, so rerun the commands until all of the components show a - `STATUS` of `Running`: +1. Depending on what you chose to install, view the status of your installation + by running one or more of the following commands. It might take a few + seconds, so rerun the commands until all of the components show a `STATUS` of + `Running`: ```bash kubectl get pods --namespace knative-serving @@ -310,12 +322,12 @@ commands below. kubectl get pods --namespace knative-monitoring ``` - See + See [Installing logging, metrics, and traces](../serving/installing-logging-metrics-traces.md) for details about setting up the various supported observability plug-ins. -You are now ready to deploy an app, run a build, or start sending and -receiving events in your Knative cluster. +You are now ready to deploy an app, run a build, or start sending and receiving +events in your Knative cluster. ## What's next diff --git a/install/Knative-with-AKS.md b/install/Knative-with-AKS.md index a5d7f63b79a..0a169f8256f 100644 --- a/install/Knative-with-AKS.md +++ b/install/Knative-with-AKS.md @@ -131,15 +131,16 @@ recommended configuration for a cluster is: Knative depends on Istio. 1. 
Install Istio: + ```bash kubectl apply --filename https://github.com/knative/serving/releases/download/v0.3.0/istio-crds.yaml && \ kubectl apply --filename https://github.com/knative/serving/releases/download/v0.3.0/istio.yaml ``` - Note: the resources (CRDs) defined in the `istio-crds.yaml`file are - also included in the `istio.yaml` file, but they are pulled out so that - the CRD definitions are created first. If you see an error when creating - resources about an unknown type, run the second `kubectl apply` command - again. + + Note: the resources (CRDs) defined in the `istio-crds.yaml`file are also + included in the `istio.yaml` file, but they are pulled out so that the CRD + definitions are created first. If you see an error when creating resources + about an unknown type, run the second `kubectl apply` command again. 1. Label the default namespace with `istio-injection=enabled`: @@ -160,25 +161,26 @@ rerun the command to see the current status. ## Installing Knative The following commands install all available Knative components. To customize -your Knative installation, see [Performing a Custom Knative Installation](Knative-custom-install.md). +your Knative installation, see +[Performing a Custom Knative Installation](Knative-custom-install.md). 1. Run the `kubectl apply` command to install Knative and its dependencies: - ```bash - kubectl apply --filename https://github.com/knative/serving/releases/download/v0.3.0/serving.yaml \ - --filename https://github.com/knative/build/releases/download/v0.3.0/release.yaml \ - --filename https://github.com/knative/eventing/releases/download/v0.3.0/release.yaml \ - --filename https://github.com/knative/eventing-sources/releases/download/v0.3.0/release.yaml \ - --filename https://github.com/knative/serving/releases/download/v0.3.0/monitoring.yaml - ``` -1. Monitor the Knative components until all of the components show a - `STATUS` of `Running`: - ```bash - kubectl get pods --namespace knative-serving - kubectl get pods --namespace knative-build - kubectl get pods --namespace knative-eventing - kubectl get pods --namespace knative-sources - kubectl get pods --namespace knative-monitoring - ``` + ```bash + kubectl apply --filename https://github.com/knative/serving/releases/download/v0.3.0/serving.yaml \ + --filename https://github.com/knative/build/releases/download/v0.3.0/release.yaml \ + --filename https://github.com/knative/eventing/releases/download/v0.3.0/release.yaml \ + --filename https://github.com/knative/eventing-sources/releases/download/v0.3.0/release.yaml \ + --filename https://github.com/knative/serving/releases/download/v0.3.0/monitoring.yaml + ``` +1. Monitor the Knative components until all of the components show a `STATUS` of + `Running`: + ```bash + kubectl get pods --namespace knative-serving + kubectl get pods --namespace knative-build + kubectl get pods --namespace knative-eventing + kubectl get pods --namespace knative-sources + kubectl get pods --namespace knative-monitoring + ``` ## What's next @@ -192,8 +194,8 @@ guide. To get started with Knative Eventing, pick one of the [Eventing Samples](../eventing/samples/) to walk through. -To get started with Knative Build, read the -[Build README](../build/README.md), then choose a sample to walk through. +To get started with Knative Build, read the [Build README](../build/README.md), +then choose a sample to walk through. 
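The samples referenced above use an `{IP_ADDRESS}` placeholder for the cluster ingress. On AKS, a minimal sketch of looking it up, assuming the `istio-ingressgateway` service was assigned a LoadBalancer IP by the install above:

```shell
# Illustrative: read the external IP assigned to the Istio ingress gateway;
# the sample walk-throughs use this value as {IP_ADDRESS}.
kubectl get svc istio-ingressgateway --namespace istio-system \
  --output 'jsonpath={.status.loadBalancer.ingress[0].ip}'
```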
## Cleaning up diff --git a/install/Knative-with-GKE.md b/install/Knative-with-GKE.md index b01ac4cbcef..9f84b9ee5ae 100644 --- a/install/Knative-with-GKE.md +++ b/install/Knative-with-GKE.md @@ -131,15 +131,16 @@ Admin permissions are required to create the necessary Knative depends on Istio. 1. Install Istio: + ```bash kubectl apply --filename https://github.com/knative/serving/releases/download/v0.3.0/istio-crds.yaml && \ kubectl apply --filename https://github.com/knative/serving/releases/download/v0.3.0/istio.yaml ``` - Note: the resources (CRDs) defined in the `istio-crds.yaml`file are - also included in the `istio.yaml` file, but they are pulled out so that - the CRD definitions are created first. If you see an error when creating - resources about an unknown type, run the second `kubectl apply` command - again. + + Note: the resources (CRDs) defined in the `istio-crds.yaml`file are also + included in the `istio.yaml` file, but they are pulled out so that the CRD + definitions are created first. If you see an error when creating resources + about an unknown type, run the second `kubectl apply` command again. 1. Label the default namespace with `istio-injection=enabled`: ```bash @@ -165,22 +166,22 @@ standard set of observability plugins. To customize your Knative installation, see [Performing a Custom Knative Installation](Knative-custom-install.md). 1. Run the `kubectl apply` command to install Knative and its dependencies: - ```bash - kubectl apply --filename https://github.com/knative/serving/releases/download/v0.3.0/serving.yaml \ - --filename https://github.com/knative/build/releases/download/v0.3.0/release.yaml \ - --filename https://github.com/knative/eventing/releases/download/v0.3.0/release.yaml \ - --filename https://github.com/knative/eventing-sources/releases/download/v0.3.0/release.yaml \ - --filename https://github.com/knative/serving/releases/download/v0.3.0/monitoring.yaml - ``` -1. Monitor the Knative components until all of the components show a - `STATUS` of `Running`: - ```bash - kubectl get pods --namespace knative-serving - kubectl get pods --namespace knative-build - kubectl get pods --namespace knative-eventing - kubectl get pods --namespace knative-sources - kubectl get pods --namespace knative-monitoring - ``` + ```bash + kubectl apply --filename https://github.com/knative/serving/releases/download/v0.3.0/serving.yaml \ + --filename https://github.com/knative/build/releases/download/v0.3.0/release.yaml \ + --filename https://github.com/knative/eventing/releases/download/v0.3.0/release.yaml \ + --filename https://github.com/knative/eventing-sources/releases/download/v0.3.0/release.yaml \ + --filename https://github.com/knative/serving/releases/download/v0.3.0/monitoring.yaml + ``` +1. Monitor the Knative components until all of the components show a `STATUS` of + `Running`: + ```bash + kubectl get pods --namespace knative-serving + kubectl get pods --namespace knative-build + kubectl get pods --namespace knative-eventing + kubectl get pods --namespace knative-sources + kubectl get pods --namespace knative-monitoring + ``` ## What's next @@ -194,8 +195,8 @@ guide. To get started with Knative Eventing, pick one of the [Eventing Samples](../eventing/samples/) to walk through. -To get started with Knative Build, read the -[Build README](../build/README.md), then choose a sample to walk through. +To get started with Knative Build, read the [Build README](../build/README.md), +then choose a sample to walk through. 
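If you would rather script the "monitor the Knative components" step above than rerun `kubectl get pods` by hand, a rough sketch follows. It assumes the five namespaces created by the install; note that pods in phases such as `Succeeded` also match the filter, so treat it as a starting point rather than a definitive readiness check.

```shell
# Illustrative: poll each Knative namespace until no pod reports a phase
# other than Running. Press CTRL+C to stop early.
for ns in knative-serving knative-build knative-eventing knative-sources knative-monitoring; do
  until [ -z "$(kubectl get pods --namespace "$ns" \
    --field-selector=status.phase!=Running --output name)" ]; do
    echo "Waiting for pods in ${ns}..."
    sleep 5
  done
  echo "All pods in ${ns} are Running."
done
```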
## Cleaning up diff --git a/install/Knative-with-Gardener.md b/install/Knative-with-Gardener.md index 390d45927fb..3c84625a456 100644 --- a/install/Knative-with-Gardener.md +++ b/install/Knative-with-Gardener.md @@ -70,15 +70,16 @@ of this guide be sure you have `export KUBECONFIG=my-cluster.yaml` set. Knative depends on Istio. 1. Install Istio: + ```bash - kubectl apply --filename https://github.com/knative/serving/releases/download/v0.3.0/istio-crds.yaml && \ + kubectl apply --filename https://github.com/knative/serving/releases/download/v0.3.0/istio-crds.yaml && \ kubectl apply --filename https://github.com/knative/serving/releases/download/v0.3.0/istio.yaml ``` - Note: the resources (CRDs) defined in the `istio-crds.yaml`file are - also included in the `istio.yaml` file, but they are pulled out so that - the CRD definitions are created first. If you see an error when creating - resources about an unknown type, run the second `kubectl apply` command - again. + + Note: the resources (CRDs) defined in the `istio-crds.yaml`file are also + included in the `istio.yaml` file, but they are pulled out so that the CRD + definitions are created first. If you see an error when creating resources + about an unknown type, run the second `kubectl apply` command again. 2. Label the default namespace with `istio-injection=enabled`: ```bash @@ -101,22 +102,22 @@ standard set of observability plugins. To customize your Knative installation, see [Performing a Custom Knative Installation](Knative-custom-install.md). 1. Run the `kubectl apply` command to install Knative and its dependencies: - ```bash - kubectl apply --filename https://github.com/knative/serving/releases/download/v0.3.0/serving.yaml \ - --filename https://github.com/knative/build/releases/download/v0.3.0/release.yaml \ - --filename https://github.com/knative/eventing/releases/download/v0.3.0/release.yaml \ - --filename https://github.com/knative/eventing-sources/releases/download/v0.3.0/release.yaml \ - --filename https://github.com/knative/serving/releases/download/v0.3.0/monitoring.yaml - ``` -1. Monitor the Knative components until all of the components show a - `STATUS` of `Running`: - ```bash - kubectl get pods --namespace knative-serving - kubectl get pods --namespace knative-build - kubectl get pods --namespace knative-eventing - kubectl get pods --namespace knative-sources - kubectl get pods --namespace knative-monitoring - ``` + ```bash + kubectl apply --filename https://github.com/knative/serving/releases/download/v0.3.0/serving.yaml \ + --filename https://github.com/knative/build/releases/download/v0.3.0/release.yaml \ + --filename https://github.com/knative/eventing/releases/download/v0.3.0/release.yaml \ + --filename https://github.com/knative/eventing-sources/releases/download/v0.3.0/release.yaml \ + --filename https://github.com/knative/serving/releases/download/v0.3.0/monitoring.yaml + ``` +1. Monitor the Knative components until all of the components show a `STATUS` of + `Running`: + ```bash + kubectl get pods --namespace knative-serving + kubectl get pods --namespace knative-build + kubectl get pods --namespace knative-eventing + kubectl get pods --namespace knative-sources + kubectl get pods --namespace knative-monitoring + ``` ## Set your custom domain @@ -161,8 +162,8 @@ guide. To get started with Knative Eventing, pick one of the [Eventing Samples](../eventing/samples/) to walk through. -To get started with Knative Build, read the -[Build README](../build/README.md), then choose a sample to walk through. 
+To get started with Knative Build, read the [Build README](../build/README.md), +then choose a sample to walk through. ## Cleaning up diff --git a/install/Knative-with-ICP.md b/install/Knative-with-ICP.md index 43b145b4fef..2706045614f 100644 --- a/install/Knative-with-ICP.md +++ b/install/Knative-with-ICP.md @@ -49,12 +49,12 @@ in IBM Cloud Private to allow the access to the Knative image: ``` 2. Update `spec.repositories` by adding the following entries, for example: - ```yaml - spec: - repositories: - - name: gcr.io/knative-releases/* - - name: k8s.gcr.io/* - - name: quay.io/* + ```yaml + spec: + repositories: + - name: gcr.io/knative-releases/* + - name: k8s.gcr.io/* + - name: quay.io/* ``` #### Update pod security policy @@ -157,20 +157,21 @@ see [Performing a Custom Knative Installation](Knative-custom-install.md). | sed 's/LoadBalancer/NodePort/' \ | kubectl apply --filename - ``` - - See [Installing logging, metrics, and traces](../serving/installing-logging-metrics-traces.md) + + See + [Installing logging, metrics, and traces](../serving/installing-logging-metrics-traces.md) for details about installing the various supported observability plug-ins. - - -1. Monitor the Knative components until all of the components show a - `STATUS` of `Running`: - ```bash - kubectl get pods --namespace knative-serving - kubectl get pods --namespace knative-build - kubectl get pods --namespace knative-eventing - kubectl get pods --namespace knative-sources - kubectl get pods --namespace knative-monitoring - ``` + +1) Monitor the Knative components until all of the components show a `STATUS` of + `Running`: + + ```bash + kubectl get pods --namespace knative-serving + kubectl get pods --namespace knative-build + kubectl get pods --namespace knative-eventing + kubectl get pods --namespace knative-sources + kubectl get pods --namespace knative-monitoring + ``` > Note: Instead of rerunning the command, you can add `--watch` to the above > command to view the component's status updates in real time. Use CTRL+C to @@ -187,24 +188,24 @@ To deploy your first app with Knative, follow the step-by-step [Getting Started with Knative App Deployment](getting-started-knative-app.md) guide. -> **Note**: When looking up the IP address to use for accessing your app, you need - the address used for ICP. The following command looks up the value to - use for the {IP_ADDRESS} placeholder in the samples: +> **Note**: When looking up the IP address to use for accessing your app, you +> need the address used for ICP. The following command looks up the value to use +> for the {IP_ADDRESS} placeholder in the samples: - ```shell - echo $(ICP cluster ip):$(kubectl get svc istio-ingressgateway --namespace istio-system \ - --output 'jsonpath={.spec.ports[?(@.port==80)].nodePort}') - ``` +```shell +echo $(ICP cluster ip):$(kubectl get svc istio-ingressgateway --namespace istio-system \ +--output 'jsonpath={.spec.ports[?(@.port==80)].nodePort}') +``` To get started with Knative Eventing, walk through one of the [Eventing Samples](../eventing/samples/). -To get started with Knative Build, read the -[Build README](../build/README.md), then choose a sample to walk through. +To get started with Knative Build, read the [Build README](../build/README.md), +then choose a sample to walk through. 
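Putting the address from the note above to work, here is a sketch of smoke-testing a deployed sample on ICP. The `$(ICP cluster ip)` placeholder is the same one used in the note, and the Host header assumes the `helloworld-go` sample running in the `default` namespace with the default `example.com` domain:

```shell
# Illustrative: build the ingress address from the note above ($(ICP cluster ip)
# is the doc's placeholder), then curl a deployed sample with the Host header
# Knative uses for routing.
IP_ADDRESS=$(ICP cluster ip):$(kubectl get svc istio-ingressgateway --namespace istio-system \
  --output 'jsonpath={.spec.ports[?(@.port==80)].nodePort}')
curl -H "Host: helloworld-go.default.example.com" "http://${IP_ADDRESS}"
```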
## Cleaning up -To remove Knative from your IBM Cloud Private cluster, run the following +To remove Knative from your IBM Cloud Private cluster, run the following commands: ```shell diff --git a/install/Knative-with-IKS.md b/install/Knative-with-IKS.md index ce25540880f..411a900c266 100644 --- a/install/Knative-with-IKS.md +++ b/install/Knative-with-IKS.md @@ -71,10 +71,11 @@ components, the recommended configuration for a cluster is: - 4 vCPU nodes with 16GB memory (`b2c.4x16`) 1. Set `ibmcloud` to the appropriate region: + ```bash ibmcloud cs region-set $CLUSTER_REGION ``` - + 1. Create a Kubernetes cluster on IKS with the required specifications: ```bash @@ -130,15 +131,16 @@ components, the recommended configuration for a cluster is: Knative depends on Istio. 1. Install Istio: + ```bash kubectl apply --filename https://github.com/knative/serving/releases/download/v0.3.0/istio-crds.yaml && \ kubectl apply --filename https://github.com/knative/serving/releases/download/v0.3.0/istio.yaml ``` - Note: the resources (CRDs) defined in the `istio-crds.yaml`file are - also included in the `istio.yaml` file, but they are pulled out so that - the CRD definitions are created first. If you see an error when creating - resources about an unknown type, run the second `kubectl apply` command - again. + + Note: the resources (CRDs) defined in the `istio-crds.yaml`file are also + included in the `istio.yaml` file, but they are pulled out so that the CRD + definitions are created first. If you see an error when creating resources + about an unknown type, run the second `kubectl apply` command again. 1. Label the default namespace with `istio-injection=enabled`: ```bash @@ -164,22 +166,22 @@ standard set of observability plugins. To customize your Knative installation, see [Performing a Custom Knative Installation](Knative-custom-install.md). 1. Run the `kubectl apply` command to install Knative and its dependencies: - ```bash - kubectl apply --filename https://github.com/knative/serving/releases/download/v0.3.0/serving.yaml \ - --filename https://github.com/knative/build/releases/download/v0.3.0/release.yaml \ - --filename https://github.com/knative/eventing/releases/download/v0.3.0/release.yaml \ - --filename https://github.com/knative/eventing-sources/releases/download/v0.3.0/release.yaml \ - --filename https://github.com/knative/serving/releases/download/v0.3.0/monitoring.yaml - ``` -1. Monitor the Knative components until all of the components show a - `STATUS` of `Running`: - ```bash - kubectl get pods --namespace knative-serving - kubectl get pods --namespace knative-build - kubectl get pods --namespace knative-eventing - kubectl get pods --namespace knative-sources - kubectl get pods --namespace knative-monitoring - ``` + ```bash + kubectl apply --filename https://github.com/knative/serving/releases/download/v0.3.0/serving.yaml \ + --filename https://github.com/knative/build/releases/download/v0.3.0/release.yaml \ + --filename https://github.com/knative/eventing/releases/download/v0.3.0/release.yaml \ + --filename https://github.com/knative/eventing-sources/releases/download/v0.3.0/release.yaml \ + --filename https://github.com/knative/serving/releases/download/v0.3.0/monitoring.yaml + ``` +1. 
Monitor the Knative components until all of the components show a `STATUS` of
+   `Running`:
+   ```bash
+   kubectl get pods --namespace knative-serving
+   kubectl get pods --namespace knative-build
+   kubectl get pods --namespace knative-eventing
+   kubectl get pods --namespace knative-sources
+   kubectl get pods --namespace knative-monitoring
+   ```

## What's next

@@ -193,8 +195,8 @@ guide.
To get started with Knative Eventing, pick one of the
[Eventing Samples](../eventing/samples/) to walk through.

-To get started with Knative Build, read the
-[Build README](../build/README.md), then choose a sample to walk through.
+To get started with Knative Build, read the [Build README](../build/README.md),
+then choose a sample to walk through.

## Cleaning up

diff --git a/install/Knative-with-Minikube.md b/install/Knative-with-Minikube.md
index 80b733a651c..179e267ffd2 100644
--- a/install/Knative-with-Minikube.md
+++ b/install/Knative-with-Minikube.md
@@ -124,9 +124,9 @@ If you'd like to view the available sample apps and deploy one of your
choosing, head to the [sample apps](../serving/samples/README.md) repo.

> Note: When looking up the IP address to use for accessing your app, you need
-> to look up the NodePort for the `istio-ingressgateway` well as the IP
-> address used for Minikube. You can use the following command to look up the
-> value to use for the {IP_ADDRESS} placeholder used in the samples:
+> to look up the NodePort for the `istio-ingressgateway` as well as the IP
+> address used for Minikube. You can use the following command to look up the
+> value to use for the {IP_ADDRESS} placeholder used in the samples:

```shell
# In Knative 0.2.x and prior versions, the `knative-ingressgateway` service was used instead of `istio-ingressgateway`.
diff --git a/install/Knative-with-Minishift.md b/install/Knative-with-Minishift.md
index 322ff610b21..1f85840c9c5 100644
--- a/install/Knative-with-Minishift.md
+++ b/install/Knative-with-Minishift.md
@@ -164,11 +164,11 @@ curl -s https://raw.githubusercontent.com/knative/docs/master/install/scripts/is
kubectl apply --filename https://github.com/knative/serving/releases/download/v0.3.0/istio-crds.yaml && \
oc apply -f https://github.com/knative/serving/releases/download/v0.3.0/istio.yaml
```
-   Note: the resources (CRDs) defined in the `istio-crds.yaml`file are
-   also included in the `istio.yaml` file, but they are pulled out so that
-   the CRD definitions are created first. If you see an error when creating
-   resources about an unknown type, run the second `kubectl apply` command
-   again.
+
+   Note: the resources (CRDs) defined in the `istio-crds.yaml` file are also
+   included in the `istio.yaml` file, but they are pulled out so that the CRD
+   definitions are created first. If you see an error when creating resources
+   about an unknown type, run the second `kubectl apply` command again.

2. Ensure the istio-sidecar-injector pods run as privileged:
```shell
diff --git a/install/Knative-with-OpenShift.md b/install/Knative-with-OpenShift.md
index a48b3a683a8..40e070d4db8 100644
--- a/install/Knative-with-OpenShift.md
+++ b/install/Knative-with-OpenShift.md
@@ -212,9 +212,9 @@ If you'd like to view the available sample apps and deploy one of your
choosing, head to the [sample apps](../serving/samples/README.md) repo.

> Note: When looking up the IP address to use for accessing your app, you need
-> to look up the NodePort for the `istio-ingressgateway` well as the IP
-> address used for OpenShift. 
You can use the following command to look up the
-> value to use for the {IP_ADDRESS} placeholder used in the samples:
+> to look up the NodePort for the `istio-ingressgateway` as well as the IP
+> address used for OpenShift. You can use the following command to look up the
+> value to use for the {IP_ADDRESS} placeholder used in the samples:

```shell
# In Knative 0.2.x and prior versions, the `knative-ingressgateway` service was used instead of `istio-ingressgateway`.
diff --git a/install/Knative-with-PKS.md b/install/Knative-with-PKS.md
index c0a7aefd7b4..6c65f8e7b03 100644
--- a/install/Knative-with-PKS.md
+++ b/install/Knative-with-PKS.md
@@ -47,15 +47,16 @@ Knative depends on Istio. Istio workloads require privileged mode for Init
Containers

1. Install Istio:
+
   ```bash
   kubectl apply --filename https://github.com/knative/serving/releases/download/v0.3.0/istio-crds.yaml && \
   kubectl apply --filename https://github.com/knative/serving/releases/download/v0.3.0/istio.yaml
   ```
-   Note: the resources (CRDs) defined in the `istio-crds.yaml`file are
-   also included in the `istio.yaml` file, but they are pulled out so that
-   the CRD definitions are created first. If you see an error when creating
-   resources about an unknown type, run the second `kubectl apply` command
-   again.
+
+   Note: the resources (CRDs) defined in the `istio-crds.yaml` file are also
+   included in the `istio.yaml` file, but they are pulled out so that the CRD
+   definitions are created first. If you see an error when creating resources
+   about an unknown type, run the second `kubectl apply` command again.

1. Label the default namespace with `istio-injection=enabled`:
   ```bash
@@ -78,21 +79,21 @@ standard set of observability plugins. To customize your Knative installation,
see [Performing a Custom Knative Installation](Knative-custom-install.md).

1. Run the `kubectl apply` command to install Knative and its dependencies:
-   ```bash
-   kubectl apply --filename https://github.com/knative/serving/releases/download/v0.3.0/serving.yaml \
-   --filename https://github.com/knative/build/releases/download/v0.3.0/release.yaml \
-   --filename https://github.com/knative/eventing/releases/download/v0.3.0/release.yaml \
-   --filename https://github.com/knative/eventing-sources/releases/download/v0.3.0/release.yaml
-   ```
-1. Monitor the Knative components until all of the components show a
-   `STATUS` of `Running`:
-   ```bash
-   kubectl get pods --namespace knative-serving
-   kubectl get pods --namespace knative-build
-   kubectl get pods --namespace knative-eventing
-   kubectl get pods --namespace knative-sources
-   kubectl get pods --namespace knative-monitoring
-   ```
+   ```bash
+   kubectl apply --filename https://github.com/knative/serving/releases/download/v0.3.0/serving.yaml \
+   --filename https://github.com/knative/build/releases/download/v0.3.0/release.yaml \
+   --filename https://github.com/knative/eventing/releases/download/v0.3.0/release.yaml \
+   --filename https://github.com/knative/eventing-sources/releases/download/v0.3.0/release.yaml
+   ```
+1. Monitor the Knative components until all of the components show a `STATUS` of
+   `Running`:
+   ```bash
+   kubectl get pods --namespace knative-serving
+   kubectl get pods --namespace knative-build
+   kubectl get pods --namespace knative-eventing
+   kubectl get pods --namespace knative-sources
+   kubectl get pods --namespace knative-monitoring
+   ```

## What's next

@@ -106,8 +107,8 @@ guide.
To get started with Knative Eventing, pick one of the
[Eventing Samples](../eventing/samples/) to walk through. 
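Before picking a sample, it can be worth confirming that the Eventing and Sources resource types from the install above actually registered. A minimal sketch:

```shell
# Illustrative: list the CRDs registered by the Eventing and Sources installs.
kubectl get crd --output name | grep -E 'eventing|sources'
```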
-To get started with Knative Build, read the
-[Build README](../build/README.md), then choose a sample to walk through.
+To get started with Knative Build, read the [Build README](../build/README.md),
+then choose a sample to walk through.

 ## Cleaning up

diff --git a/install/Knative-with-any-k8s.md b/install/Knative-with-any-k8s.md
index 99f4924cacb..a4ad1ea75ba 100644
--- a/install/Knative-with-any-k8s.md
+++ b/install/Knative-with-any-k8s.md
@@ -20,15 +20,16 @@ Knative depends on Istio. Istio workloads require privileged mode for Init
 Containers.

 1. Install Istio:
+
    ```bash
    kubectl apply --filename https://github.com/knative/serving/releases/download/v0.3.0/istio-crds.yaml && \
    kubectl apply --filename https://github.com/knative/serving/releases/download/v0.3.0/istio.yaml
    ```
-   Note: the resources (CRDs) defined in the `istio-crds.yaml`file are
-   also included in the `istio.yaml` file, but they are pulled out so that
-   the CRD definitions are created first. If you see an error when creating
-   resources about an unknown type, run the second `kubectl apply` command
-   again.
+
+   Note: the resources (CRDs) defined in the `istio-crds.yaml` file are also
+   included in the `istio.yaml` file, but they are pulled out so that the CRD
+   definitions are created first. If you see an error when creating resources
+   about an unknown type, run the second `kubectl apply` command again.

 1. Label the default namespace with `istio-injection=enabled`:
    ```bash
@@ -50,25 +51,26 @@ rerun the command to see the current status.
 ## Installing Knative

 The following commands install all available Knative components. To customize
-your Knative installation, see [Performing a Custom Knative Installation](Knative-custom-install.md).
+your Knative installation, see
+[Performing a Custom Knative Installation](Knative-custom-install.md).

 1. Run the `kubectl apply` command to install Knative and its dependencies:
-   ```bash
-   kubectl apply --filename https://github.com/knative/serving/releases/download/v0.3.0/serving.yaml \
-   --filename https://github.com/knative/build/releases/download/v0.3.0/release.yaml \
-   --filename https://github.com/knative/eventing/releases/download/v0.3.0/release.yaml \
-   --filename https://github.com/knative/eventing-sources/releases/download/v0.3.0/release.yaml \
-   --filename https://github.com/knative/serving/releases/download/v0.3.0/monitoring.yaml
-   ```
-1. Monitor the Knative components until all of the components show a
-   `STATUS` of `Running`:
-   ```bash
-   kubectl get pods --namespace knative-serving
-   kubectl get pods --namespace knative-build
-   kubectl get pods --namespace knative-eventing
-   kubectl get pods --namespace knative-sources
-   kubectl get pods --namespace knative-monitoring
-   ```
+   ```bash
+   kubectl apply --filename https://github.com/knative/serving/releases/download/v0.3.0/serving.yaml \
+   --filename https://github.com/knative/build/releases/download/v0.3.0/release.yaml \
+   --filename https://github.com/knative/eventing/releases/download/v0.3.0/release.yaml \
+   --filename https://github.com/knative/eventing-sources/releases/download/v0.3.0/release.yaml \
+   --filename https://github.com/knative/serving/releases/download/v0.3.0/monitoring.yaml
+   ```
+1. Monitor the Knative components until all of the components show a `STATUS` of
+   `Running`:
+   ```bash
+   kubectl get pods --namespace knative-serving
+   kubectl get pods --namespace knative-build
+   kubectl get pods --namespace knative-eventing
+   kubectl get pods --namespace knative-sources
+   kubectl get pods --namespace knative-monitoring
+   ```

 ## What's next

@@ -82,5 +84,5 @@ guide.
 To get started with Knative Eventing, pick one of the
 [Eventing Samples](../eventing/samples/) to walk through.

-To get started with Knative Build, read the
-[Build README](../build/README.md), then choose a sample to walk through.
+To get started with Knative Build, read the [Build README](../build/README.md),
+then choose a sample to walk through.
diff --git a/install/README.md b/install/README.md
index 6544226bacc..73ff4df5f10 100644
--- a/install/README.md
+++ b/install/README.md
@@ -22,13 +22,13 @@ clusters.

 There are several options when installing Knative:

-* **Comprehensive install** -- Comes with the default versions of all Knative
-  components as well as a set of observability plugins. Quickest option
-  for setup.
+- **Comprehensive install** -- Comes with the default versions of all Knative
+  components as well as a set of observability plugins. Quickest option for
+  setup.

-* **Limited install** -- Installs a subset of Knative components.
+- **Limited install** -- Installs a subset of Knative components.

-* **Custom install** -- Takes longer, but allows you to choose exactly which
+- **Custom install** -- Takes longer, but allows you to choose exactly which
   components and observability plugins to install.

 For new users, we recommend the comprehensive install to get you up and running
@@ -44,45 +44,47 @@ Knative components.
 The guides below show you how to create a Kubernetes cluster with the right
 specs for Knative on your platform of choice, then walk through installing all
 available Knative components and a set of observability plugins.
-* [Knative Install on Azure Kubernetes Service](Knative-with-AKS.md)
-* [Knative Install on Gardener](Knative-with-Gardener.md)
-* [Knative Install on Google Kubernetes Engine](Knative-with-GKE.md)
-* [Knative Install on IBM Cloud Kubernetes Service](Knative-with-IKS.md)
-* [Knative Install on IBM Cloud Private](Knative-with-ICP.md)
-* [Knative Install on Pivotal Container Service](Knative-with-PKS.md)
-
-If you already have a Kubernetes cluster you're comfortable installing *alpha*
+
+- [Knative Install on Azure Kubernetes Service](Knative-with-AKS.md)
+- [Knative Install on Gardener](Knative-with-Gardener.md)
+- [Knative Install on Google Kubernetes Engine](Knative-with-GKE.md)
+- [Knative Install on IBM Cloud Kubernetes Service](Knative-with-IKS.md)
+- [Knative Install on IBM Cloud Private](Knative-with-ICP.md)
+- [Knative Install on Pivotal Container Service](Knative-with-PKS.md)
+
+If you already have a Kubernetes cluster you're comfortable installing _alpha_
 software on, use the following guide to install all Knative components:

 - [Knative Install on any Kubernetes](Knative-with-any-k8s.md)

 **Limited install guides**

-The guides below install some of the available Knative components, without all available
-observability plugins, to minimize the disk space used for install.
-* [Knative Install on Docker for Mac](Knative-with-Docker-for-Mac.md)
-* [Knative Install on Minikube](Knative-with-Minikube.md)
-* [Knative Install on Minishift](Knative-with-Minishift.md)
-* [Knative Install on OpenShift](Knative-with-OpenShift.md)
+The guides below install some of the available Knative components, without all
+available observability plugins, to minimize the disk space used for install.
+
+- [Knative Install on Docker for Mac](Knative-with-Docker-for-Mac.md)
+- [Knative Install on Minikube](Knative-with-Minikube.md)
+- [Knative Install on Minishift](Knative-with-Minishift.md)
+- [Knative Install on OpenShift](Knative-with-OpenShift.md)

 **Custom install guide**

-To choose which components and observability plugins to install,
-follow the custom install guide:
+To choose which components and observability plugins to install, follow the
+custom install guide:

-* [Perfoming a Custom Knative Installation](Knative-custom-install.md)
+- [Performing a Custom Knative Installation](Knative-custom-install.md)

 > **Note**: If you need to set up a Kubernetes cluster with the correct
-  specifications to run Knative, you can follow any of the install
-  instructions through the creation of the cluster, then follow the
-  [Perfoming a Custom Knative Installation](knative-custom-install.md) guide.
+> specifications to run Knative, you can follow any of the install instructions
+> through the creation of the cluster, then follow the
+> [Performing a Custom Knative Installation](Knative-custom-install.md) guide.

 **Observability install guide**

-Follow this guide to install and set up the available observability
-plugins on a Knative cluster.
+Follow this guide to install and set up the available observability plugins on a
+Knative cluster.

-* [Monitoring, Logging and Tracing Installation](../serving/installing-logging-metrics-traces.md)
+- [Monitoring, Logging and Tracing Installation](../serving/installing-logging-metrics-traces.md)

 ## Deploying an app

@@ -94,8 +96,8 @@ Now you're ready to deploy an app:

 - View the available [sample apps](../serving/samples) and deploy one of your
   choosing.
-
-- Walk through the Google codelab,
+
+- Walk through the Google codelab,
   [Using Knative to deploy serverless applications to Kubernetes](https://codelabs.developers.google.com/codelabs/knative-intro/#0).

 ## Configuring Knative Serving
diff --git a/serving/cluster-local-route.md b/serving/cluster-local-route.md
index 3394df811f1..7dbbfa52516 100644
--- a/serving/cluster-local-route.md
+++ b/serving/cluster-local-route.md
@@ -6,11 +6,12 @@ In Knative 0.3.x or later, all Routes with a domain suffix of
 This can be done by changing the `config-domain` config map as instructed
 [here](./using-a-custom-domain.md).

-You can also set the label
-`serving.knative.dev/visibility=cluster-local` on your Route or KService to
-achieve the same effect.
+You can also set the label `serving.knative.dev/visibility=cluster-local` on
+your Route or KService to achieve the same effect.
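The label can be applied from the command line as well; a minimal sketch,
assuming an existing Service named `helloworld-go` (the `ksvc` short name is the
same one used by `kubectl get ksvc` elsewhere in these docs):

```shell
# Label the Knative Service so its Route is only reachable inside the cluster.
kubectl label ksvc helloworld-go serving.knative.dev/visibility=cluster-local

# Verify that the Route's domain is now cluster-local instead of example.com.
kubectl get route helloworld-go --output jsonpath='{.status.domain}'
```

The Route-level equivalent appears in the diff that follows.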
-For example, if you didn't set a label when you created the Route `helloworld-go` and you want to make it local to the `default namespace cluster, run:
+For example, if you didn't set a label when you created the Route
+`helloworld-go` in the `default` namespace and you want to make it
+cluster-local, run:

 ```shell
 kubectl label route helloworld-go serving.knative.dev/visibility=cluster-local
diff --git a/serving/gke-assigning-static-ip-address.md b/serving/gke-assigning-static-ip-address.md
index 4b4974ecf4d..ea4b6de9dba 100644
--- a/serving/gke-assigning-static-ip-address.md
+++ b/serving/gke-assigning-static-ip-address.md
@@ -52,8 +52,8 @@ In the

 ## Step 2: Update the external IP of `istio-ingressgateway` service

-Run following command to configure the external IP of the
-`istio-ingressgateway` service to the static IP that you reserved:
+Run the following command to configure the external IP of the
+`istio-ingressgateway` service to the static IP that you reserved:

 ```shell
 # In Knative 0.2.x and prior versions, the `knative-ingressgateway` service was used instead of `istio-ingressgateway`.
@@ -71,7 +71,8 @@ kubectl patch svc $INGRESSGATEWAY --namespace istio-system --patch '{"spec": { "

 ## Step 3: Verify the static IP address of `istio-ingressgateway` service

-Run the following command to ensure that the external IP of the ingressgateway service has been updated:
+Run the following command to ensure that the external IP of the ingressgateway
+service has been updated:

 ```shell
 kubectl get svc $INGRESSGATEWAY --namespace istio-system
diff --git a/serving/installing-logging-metrics-traces.md b/serving/installing-logging-metrics-traces.md
index 3968e039297..5be9b11ddc8 100644
--- a/serving/installing-logging-metrics-traces.md
+++ b/serving/installing-logging-metrics-traces.md
@@ -18,8 +18,9 @@ sections to do so now.
    kubectl apply --filename https://github.com/knative/serving/releases/download/v0.3.0/monitoring-metrics-prometheus.yaml
    ```

-1. Ensure that the `grafana-*`, `kibana-logging-*`, `kube-state-metrics-*`, `node-exporter-*` and `prometheus-system-*`
-   pods all report a `Running` status:
+1. Ensure that the `grafana-*`, `kibana-logging-*`, `kube-state-metrics-*`,
+   `node-exporter-*` and `prometheus-system-*` pods all report a `Running`
+   status:

    ```shell
    kubectl get pods --namespace knative-monitoring --watch
@@ -41,11 +42,13 @@ sections to do so now.

 Tip: Hit CTRL+C to exit watch mode.

-[Accessing Metrics](./accessing-metrics.md) for more information about metrics in Knative.
+See [Accessing Metrics](./accessing-metrics.md) for more information about
+metrics in Knative.

 ## Logs

-Knative offers three different setups for collecting logs. Choose one to install:
+Knative offers three different setups for collecting logs. Choose one to
+install:

 1. [Elasticsearch and Kibana](#elasticsearch-and-kibana)
 1. [Stackdriver](#stackdriver)

 ### Elasticsearch and Kibana

    kubectl apply --filename https://github.com/knative/serving/releases/download/v0.3.0/monitoring-logs-elasticsearch.yaml
    ```

-1. Ensure that the `elasticsearch-logging-*`, `fluentd-ds-*`, and `kibana-logging-*` pods all report a `Running` status:
+1. Ensure that the `elasticsearch-logging-*`, `fluentd-ds-*`, and
+   `kibana-logging-*` pods all report a `Running` status:

    ```shell
    kubectl get pods --namespace knative-monitoring --watch
    ```
+
    For example:

    ```text
@@ -75,10 +80,11 @@ Knative offers three different setups for collecting logs. Choose one to install
       fluentd-ds-xghk9                          1/1       Running   0          2d
       kibana-logging-7d474fbb45-6qb8x           1/1       Running   0          2d
    ```
-
+
    Tip: Hit CTRL+C to exit watch mode.

-1. Verify that each of your nodes have the `beta.kubernetes.io/fluentd-ds-ready=true` label:
+1. Verify that each of your nodes has the
+   `beta.kubernetes.io/fluentd-ds-ready=true` label:

    ```shell
    kubectl get nodes --selector beta.kubernetes.io/fluentd-ds-ready=true
@@ -86,29 +92,33 @@ Knative offers three different setups for collecting logs.

 1. If you receive the `No Resources Found` response:

-   1. Run the following command to ensure that the Fluentd DaemonSet runs on all your nodes:
+   1. Run the following command to ensure that the Fluentd DaemonSet runs on all
+      your nodes:

       ```shell
       kubectl label nodes --all beta.kubernetes.io/fluentd-ds-ready="true"
       ```

-   1. Run the following command to ensure that the `fluentd-ds` daemonset is ready on at least one node:
+   1. Run the following command to ensure that the `fluentd-ds` daemonset is
+      ready on at least one node:

       ```shell
       kubectl get daemonset fluentd-ds --namespace knative-monitoring --watch
       ```
-
+
    Tip: Hit CTRL+C to exit watch mode.

-1. When the installation is complete and all the resources are running, you can continue to the next section
-   and begin creating your Elasticsearch indices.
+1. When the installation is complete and all the resources are running, you can
+   continue to the next section and begin creating your Elasticsearch indices.

 #### Create Elasticsearch Indices

-To visualize logs with Kibana, you need to set which Elasticsearch indices to explore.
+To visualize logs with Kibana, you need to set which Elasticsearch indices to
+explore.

-- To open the Kibana UI (the visualization tool for [Elasticsearch](https://info.elastic.co)),
-  you must start a local proxy by running the following command:
+- To open the Kibana UI (the visualization tool for
+  [Elasticsearch](https://info.elastic.co)), you must start a local proxy by
+  running the following command:

   ```shell
   kubectl proxy
@@ -127,13 +137,14 @@ To visualize logs with Kibana, you need to set which Elasticsearch indices to ex

 ![Create logstash-* index](images/kibana-landing-page-configure-index.png)

-See [Accessing Logs](./accessing-logs.md) for more information about logs in Knative.
+See [Accessing Logs](./accessing-logs.md) for more information about logs in
+Knative.

 ### Stackdriver

 To configure and set up monitoring:

-1. Clone the Knative Serving repository:
+1. Clone the Knative Serving repository:

    ```shell
    git clone https://github.com/knative/serving knative-serving

    git checkout v0.3.0
    ```

-1. Choose a container image that meets the
-   [Fluentd image requirements](fluentd/README.md#requirements). For example, you can use a
-   public image. Or you can create a custom one and upload the image to a
-   container registry which your cluster has read access to.
+1. Choose a container image that meets the
+   [Fluentd image requirements](fluentd/README.md#requirements). For example,
+   you can use a public image. Or you can create a custom one and upload the
+   image to a container registry which your cluster has read access to.

-   You must configure and build your own Fluentd image if either of the following are true:
-   - Your Knative Serving component is not hosted on a Google Cloud Platform (GCP) based cluster.
+   You must configure and build your own Fluentd image if either of the
+   following are true:
+
+   - Your Knative Serving component is not hosted on a Google Cloud Platform
+     (GCP) based cluster.
+ You must configure and build your own Fluentd image if either of the + following are true: + + - Your Knative Serving component is not hosted on a Google Cloud Platform + (GCP) based cluster. - You want to send logs to another GCP project. -1. Follow the instructions in - ["Setting up a logging plugin"](setting-up-a-logging-plugin.md#Configuring) - to configure the stackdriver components settings. +1. Follow the instructions in + ["Setting up a logging plugin"](setting-up-a-logging-plugin.md#Configuring) + to configure the stackdriver components settings. -1. Install Knative Stackdriver components by running the following command from the root directory of - [knative/serving](https://github.com/knative/serving) repository: +1. Install Knative Stackdriver components by running the following command from + the root directory of [knative/serving](https://github.com/knative/serving) + repository: - ```shell - kubectl apply --recursive --filename config/monitoring/100-namespace.yaml \ - --filename third_party/config/monitoring/logging/stackdriver - ``` + ```shell + kubectl apply --recursive --filename config/monitoring/100-namespace.yaml \ + --filename third_party/config/monitoring/logging/stackdriver + ``` - 1. Ensure that the `fluentd-ds-*` pods all report a `Running` status: - - ```shell - kubectl get pods --namespace knative-monitoring --watch - ``` - - For example: - - ```text - NAME READY STATUS RESTARTS AGE - fluentd-ds-5kc85 1/1 Running 0 2d - fluentd-ds-vhrcq 1/1 Running 0 2d - fluentd-ds-xghk9 1/1 Running 0 2d - ``` - - Tip: Hit CTRL+C to exit watch mode. - -1. Verify that each of your nodes have the `beta.kubernetes.io/fluentd-ds-ready=true` label: +1. Ensure that the `fluentd-ds-*` pods all report a `Running` status: - ```shell - kubectl get nodes --selector beta.kubernetes.io/fluentd-ds-ready=true - ``` + ```shell + kubectl get pods --namespace knative-monitoring --watch + ``` -1. If you receive the `No Resources Found` response: + For example: - 1. Run the following command to ensure that the Fluentd DaemonSet runs on all your nodes: + ```text + NAME READY STATUS RESTARTS AGE + fluentd-ds-5kc85 1/1 Running 0 2d + fluentd-ds-vhrcq 1/1 Running 0 2d + fluentd-ds-xghk9 1/1 Running 0 2d + ``` - ```shell - kubectl label nodes --all beta.kubernetes.io/fluentd-ds-ready="true" - ``` + Tip: Hit CTRL+C to exit watch mode. - 1. Run the following command to ensure that the `fluentd-ds` daemonset is ready on at least one node: +1. Verify that each of your nodes have the + `beta.kubernetes.io/fluentd-ds-ready=true` label: - ```shell - kubectl get daemonset fluentd-ds --namespace knative-monitoring - ``` -See [Accessing Logs](./accessing-logs.md) for more information about logs in Knative. + ```shell + kubectl get nodes --selector beta.kubernetes.io/fluentd-ds-ready=true + ``` + +1. If you receive the `No Resources Found` response: + + 1. Run the following command to ensure that the Fluentd DaemonSet runs on + all your nodes: + + ```shell + kubectl label nodes --all beta.kubernetes.io/fluentd-ds-ready="true" + ``` + + 1. Run the following command to ensure that the `fluentd-ds` daemonset is + ready on at least one node: + + ```shell + kubectl get daemonset fluentd-ds --namespace knative-monitoring + ``` + + See [Accessing Logs](./accessing-logs.md) for more information about + logs in Knative. 
## End to end traces

-- If Elasticsearch is not installed or if you don't want to persist end to end traces, run:
+- If Elasticsearch is not installed or if you don't want to persist end to end
+  traces, run:

-  ```shell
-  kubectl apply --filename https://github.com/knative/serving/releases/download/v0.3.0/monitoring-tracing-zipkin-in-mem.yaml
-  ```
+  ```shell
+  kubectl apply --filename https://github.com/knative/serving/releases/download/v0.3.0/monitoring-tracing-zipkin-in-mem.yaml
+  ```

-- If Elasticsearch is installed and you want to persist end to end traces, first run:
+- If Elasticsearch is installed and you want to persist end to end traces, first
+  run:

-  ```shell
-  kubectl apply --filename https://github.com/knative/serving/releases/download/v0.3.0/monitoring-tracing-zipkin.yaml
-  ```
-
-  Next, create an Elasticsearch index for end to end traces:
+  ```shell
+  kubectl apply --filename https://github.com/knative/serving/releases/download/v0.3.0/monitoring-tracing-zipkin.yaml
+  ```
+
+  Next, create an Elasticsearch index for end to end traces:

-  - Open Kibana UI as described in [Create Elasticsearch Indices](#create-elasticsearch-indices) section.
-  - Select `Create Index Pattern` button on top left of the page.
-    Enter `zipkin*` to `Index pattern` and select `timestamp_millis`
-    from `Time Filter field name` and click on `Create` button.
+  - Open the Kibana UI as described in the
+    [Create Elasticsearch Indices](#create-elasticsearch-indices) section.
+  - Select the `Create Index Pattern` button at the top left of the page.
+    Enter `zipkin*` in the `Index pattern` field, select `timestamp_millis`
+    from `Time Filter field name`, and click the `Create` button.

-Visit [Accessing Traces](./accessing-traces.md) for more information on end to end traces.
+Visit [Accessing Traces](./accessing-traces.md) for more information on end to
+end traces.

 ## Learn More
diff --git a/serving/samples/README.md b/serving/samples/README.md
index 7e17b8fa3f0..d5ee7f83054 100644
--- a/serving/samples/README.md
+++ b/serving/samples/README.md
@@ -5,19 +5,19 @@ different use-cases and resources.
 See [Knative serving](https://github.com/knative/docs/tree/master/serving) to
 learn more about Knative Serving resources.
-| Name | Description | Languages | -| -------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | +| Name | Description | Languages | +| -------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | | Hello World | A quick introduction that highlights how to deploy an app using Knative Serving. | [C#](helloworld-csharp/README.md), [Go](helloworld-go/README.md), [Java](helloworld-java/README.md), [Kotlin](helloworld-kotlin/README.md), [Node.js](helloworld-nodejs/README.md), [PHP](helloworld-php/README.md), [Python](helloworld-python/README.md), [Ruby](helloworld-ruby/README.md), [Scala](helloworld-scala/README.md) | -| Advanced Deployment | Simple blue/green-like application deployment pattern illustrating the process of updating a live application without dropping any traffic. | [YAML](blue-green-deployment.md) | -| Autoscale | A demonstration of the autoscaling capabilities of Knative. | [Go](autoscale-go/README.md) | -| Private Repo Build | An example of deploying a Knative Serving Service using a Github deploy-key and a DockerHub image pull secret. | [Go](build-private-repo-go/README.md) | -| Buildpack for Applications | A sample app that demonstrates using Cloud Foundry buildpacks on Knative Serving. | [.NET](buildpack-app-dotnet/README.md) | -| Buildpack for Functions | A sample function that demonstrates using Cloud Foundry buildpacks on Knative Serving. | [Node.js](buildpack-function-nodejs/README.md) | -| Github Webhook | A simple webhook handler that demonstrates interacting with Github. | [Go](gitwebhook-go/README.md) | -| gRPC | A simple gRPC server. | [Go](grpc-ping-go/README.md) | -| Knative Routing | An example of mapping multiple Knative services to different paths under a single domain name using the Istio VirtualService concept. | [Go](knative-routing-go/README.md) | -| REST API | A simple Restful service that exposes an endpoint defined by an environment variable described in the Knative Configuration. | [Go](rest-api-go/README.md) | -| Source to URL | A sample that shows how to use Knative to go from source code in a git repository to a running application with a URL. | [Go](source-to-url-go/README.md) | -| Telemetry | This sample runs a simple web server that makes calls to other in-cluster services and responds to requests with "Hello World!". The purpose of this sample is to show generating metrics, logs, and distributed traces. 
| [Go](telemetry-go/README.md) | -| Thumbnailer | An example of deploying a "dockerized" application to Knative Serving which takes video URL as an input and generates its thumbnail image. | [Go](thumbnailer-go/README.md) | -| Traffic Splitting | This samples builds off the [Creating a RESTful Service](./rest-api-go) sample to illustrate applying a revision, then using that revision for manual traffic splitting. | [YAML](traffic-splitting/README.md) | +| Advanced Deployment | Simple blue/green-like application deployment pattern illustrating the process of updating a live application without dropping any traffic. | [YAML](blue-green-deployment.md) | +| Autoscale | A demonstration of the autoscaling capabilities of Knative. | [Go](autoscale-go/README.md) | +| Private Repo Build | An example of deploying a Knative Serving Service using a Github deploy-key and a DockerHub image pull secret. | [Go](build-private-repo-go/README.md) | +| Buildpack for Applications | A sample app that demonstrates using Cloud Foundry buildpacks on Knative Serving. | [.NET](buildpack-app-dotnet/README.md) | +| Buildpack for Functions | A sample function that demonstrates using Cloud Foundry buildpacks on Knative Serving. | [Node.js](buildpack-function-nodejs/README.md) | +| Github Webhook | A simple webhook handler that demonstrates interacting with Github. | [Go](gitwebhook-go/README.md) | +| gRPC | A simple gRPC server. | [Go](grpc-ping-go/README.md) | +| Knative Routing | An example of mapping multiple Knative services to different paths under a single domain name using the Istio VirtualService concept. | [Go](knative-routing-go/README.md) | +| REST API | A simple Restful service that exposes an endpoint defined by an environment variable described in the Knative Configuration. | [Go](rest-api-go/README.md) | +| Source to URL | A sample that shows how to use Knative to go from source code in a git repository to a running application with a URL. | [Go](source-to-url-go/README.md) | +| Telemetry | This sample runs a simple web server that makes calls to other in-cluster services and responds to requests with "Hello World!". The purpose of this sample is to show generating metrics, logs, and distributed traces. | [Go](telemetry-go/README.md) | +| Thumbnailer | An example of deploying a "dockerized" application to Knative Serving which takes video URL as an input and generates its thumbnail image. | [Go](thumbnailer-go/README.md) | +| Traffic Splitting | This samples builds off the [Creating a RESTful Service](./rest-api-go) sample to illustrate applying a revision, then using that revision for manual traffic splitting. | [YAML](traffic-splitting/README.md) | diff --git a/serving/samples/autoscale-go/README.md b/serving/samples/autoscale-go/README.md index 49de89c1320..1b021a194c8 100644 --- a/serving/samples/autoscale-go/README.md +++ b/serving/samples/autoscale-go/README.md @@ -27,6 +27,7 @@ A demonstration of the autoscaling capabilities of a Knative Serving Revision. ``` 1. Find the ingress hostname and IP and export as an environment variable: + ``` # In Knative 0.2.x and prior versions, the `knative-ingressgateway` service was used instead of `istio-ingressgateway`. INGRESSGATEWAY=knative-ingressgateway diff --git a/serving/samples/blue-green-deployment.md b/serving/samples/blue-green-deployment.md index 0558d90fc86..a038beabbf6 100644 --- a/serving/samples/blue-green-deployment.md +++ b/serving/samples/blue-green-deployment.md @@ -88,11 +88,14 @@ with Knative. 
> can interact with your app using cURL requests if you have the host URL and IP > address: > `curl -H "Host: blue-green-demo.default.example.com" http://IP_ADDRESS` -> Knative creates the host URL by combining the name of your Route object, the namespace, -> and `example.com`, if you haven't configured a custom domain. For example, `[route-name].[namespace].example.com`. -> You can get the IP address by entering `kubectl get svc istio-ingressgateway --namespace istio-system` (or -> `kubectl get svc istio-ingressgateway --namespace istio-system` if using Knative 0.2.x or prior versions) -> and copying the `EXTERNAL-IP` returned by that command. See [Interacting with your app](../../install/getting-started-knative-app.md#interacting-with-your-app) +> Knative creates the host URL by combining the name of your Route object, the +> namespace, and `example.com`, if you haven't configured a custom domain. For +> example, `[route-name].[namespace].example.com`. You can get the IP address by +> entering `kubectl get svc istio-ingressgateway --namespace istio-system` (or +> `kubectl get svc istio-ingressgateway --namespace istio-system` if using +> Knative 0.2.x or prior versions) and copying the `EXTERNAL-IP` returned by +> that command. See +> [Interacting with your app](../../install/getting-started-knative-app.md#interacting-with-your-app) > for more information. ## Deploying Revision 2 (Green) diff --git a/serving/samples/helloworld-csharp/README.md b/serving/samples/helloworld-csharp/README.md index dba6e3efdc1..be15d66580e 100644 --- a/serving/samples/helloworld-csharp/README.md +++ b/serving/samples/helloworld-csharp/README.md @@ -55,30 +55,30 @@ recreate the source files from this folder. app, see [dockerizing a .NET core app](https://docs.microsoft.com/en-us/dotnet/core/docker/docker-basics-dotnet-core#dockerize-the-net-core-application). - ```docker - # Use Microsoft's official .NET image. - # https://hub.docker.com/r/microsoft/dotnet - FROM microsoft/dotnet:2.1-sdk - - # Install production dependencies. - # Copy csproj and restore as distinct layers. - WORKDIR /app - COPY *.csproj . - RUN dotnet restore - - # Copy local code to the container image. - COPY . . - - # Build a release artifact. - RUN dotnet publish -c Release -o out - - # Service must listen to $PORT environment variable. - # This default value facilitates local development. - ENV PORT 8080 - - # Run the web service on container startup. - CMD ["dotnet", "out/helloworld-csharp.dll"] - ``` + ```docker + # Use Microsoft's official .NET image. + # https://hub.docker.com/r/microsoft/dotnet + FROM microsoft/dotnet:2.1-sdk + + # Install production dependencies. + # Copy csproj and restore as distinct layers. + WORKDIR /app + COPY *.csproj . + RUN dotnet restore + + # Copy local code to the container image. + COPY . . + + # Build a release artifact. + RUN dotnet publish -c Release -o out + + # Service must listen to $PORT environment variable. + # This default value facilitates local development. + ENV PORT 8080 + + # Run the web service on container startup. + CMD ["dotnet", "out/helloworld-csharp.dll"] + ``` 1. Create a new file, `service.yaml` and copy the following service definition into the file. 
Make sure to replace `{username}` with your Docker Hub diff --git a/serving/samples/helloworld-go/README.md b/serving/samples/helloworld-go/README.md index b1b26d25767..f22a89ce920 100644 --- a/serving/samples/helloworld-go/README.md +++ b/serving/samples/helloworld-go/README.md @@ -137,10 +137,11 @@ folder) you're ready to build and deploy the sample app. ``` 1. Now that your service is created, Knative will perform the following steps: - - Create a new immutable revision for this version of the app. - - Network programming to create a route, ingress, service, and load balance - for your app. - - Automatically scale your pods up and down (including to zero active pods). + + - Create a new immutable revision for this version of the app. + - Network programming to create a route, ingress, service, and load balance + for your app. + - Automatically scale your pods up and down (including to zero active pods). 1. Run the following command to find the external IP address for your service. The ingress IP for your cluster is returned. If you just created your diff --git a/serving/samples/helloworld-java/README.md b/serving/samples/helloworld-java/README.md index 7935d58061d..0875252fbd9 100644 --- a/serving/samples/helloworld-java/README.md +++ b/serving/samples/helloworld-java/README.md @@ -86,34 +86,34 @@ recreate the source files from this folder. For additional information on multi-stage docker builds for Java see [Creating Smaller Java Image using Docker Multi-stage Build](http://blog.arungupta.me/smaller-java-image-docker-multi-stage-build/). - ```docker - # Use the official maven/Java 8 image to create a build artifact. - # https://hub.docker.com/_/maven - FROM maven:3.5-jdk-8-alpine as builder - - # Copy local code to the container image. - WORKDIR /app - COPY pom.xml . - COPY src ./src - - # Build a release artifact. - RUN mvn package -DskipTests - - # Use the Official OpenJDK image for a lean production stage of our multi-stage build. - # https://hub.docker.com/_/openjdk - # https://docs.docker.com/develop/develop-images/multistage-build/#use-multi-stage-builds - FROM openjdk:8-jre-alpine - - # Copy the jar to the production image from the builder stage. - COPY --from=builder /app/target/helloworld-*.jar /helloworld.jar - - # Service must listen to $PORT environment variable. - # This default value facilitates local development. - ENV PORT 8080 - - # Run the web service on container startup. - CMD ["java","-Djava.security.egd=file:/dev/./urandom","-Dserver.port=${PORT}","-jar","/helloworld.jar"] - ``` + ```docker + # Use the official maven/Java 8 image to create a build artifact. + # https://hub.docker.com/_/maven + FROM maven:3.5-jdk-8-alpine as builder + + # Copy local code to the container image. + WORKDIR /app + COPY pom.xml . + COPY src ./src + + # Build a release artifact. + RUN mvn package -DskipTests + + # Use the Official OpenJDK image for a lean production stage of our multi-stage build. + # https://hub.docker.com/_/openjdk + # https://docs.docker.com/develop/develop-images/multistage-build/#use-multi-stage-builds + FROM openjdk:8-jre-alpine + + # Copy the jar to the production image from the builder stage. + COPY --from=builder /app/target/helloworld-*.jar /helloworld.jar + + # Service must listen to $PORT environment variable. + # This default value facilitates local development. + ENV PORT 8080 + + # Run the web service on container startup. + CMD ["java","-Djava.security.egd=file:/dev/./urandom","-Dserver.port=${PORT}","-jar","/helloworld.jar"] + ``` 1. 
Create a new file, `service.yaml` and copy the following service definition into the file. Make sure to replace `{username}` with your Docker Hub diff --git a/serving/samples/helloworld-kotlin/README.md b/serving/samples/helloworld-kotlin/README.md index e73107babe2..503092ff1cc 100644 --- a/serving/samples/helloworld-kotlin/README.md +++ b/serving/samples/helloworld-kotlin/README.md @@ -108,33 +108,33 @@ recreate the source files from this folder. 5. Create a file named `Dockerfile` and copy the code block below into it. - ```docker - # Use the official gradle image to create a build artifact. - # https://hub.docker.com/_/gradle - FROM gradle as builder + ```docker + # Use the official gradle image to create a build artifact. + # https://hub.docker.com/_/gradle + FROM gradle as builder - # Copy local code to the container image. - COPY build.gradle . - COPY src ./src + # Copy local code to the container image. + COPY build.gradle . + COPY src ./src - # Build a release artifact. - RUN gradle clean build --no-daemon + # Build a release artifact. + RUN gradle clean build --no-daemon - # Use the Official OpenJDK image for a lean production stage of our multi-stage build. - # https://hub.docker.com/_/openjdk - # https://docs.docker.com/develop/develop-images/multistage-build/#use-multi-stage-builds - FROM openjdk:8-jre-alpine + # Use the Official OpenJDK image for a lean production stage of our multi-stage build. + # https://hub.docker.com/_/openjdk + # https://docs.docker.com/develop/develop-images/multistage-build/#use-multi-stage-builds + FROM openjdk:8-jre-alpine - # Copy the jar to the production image from the builder stage. - COPY --from=builder /home/gradle/build/libs/gradle.jar /helloworld.jar + # Copy the jar to the production image from the builder stage. + COPY --from=builder /home/gradle/build/libs/gradle.jar /helloworld.jar - # Service must listen to $PORT environment variable. - # This default value facilitates local development. - ENV PORT 8080 + # Service must listen to $PORT environment variable. + # This default value facilitates local development. + ENV PORT 8080 - # Run the web service on container startup. - CMD [ "java", "-jar", "-Djava.security.egd=file:/dev/./urandom", "/helloworld.jar" ] - ``` + # Run the web service on container startup. + CMD [ "java", "-jar", "-Djava.security.egd=file:/dev/./urandom", "/helloworld.jar" ] + ``` 6. Create a new file, `service.yaml` and copy the following service definition into the file. Make sure to replace `{username}` with your Docker Hub diff --git a/serving/samples/helloworld-nodejs/README.md b/serving/samples/helloworld-nodejs/README.md index 398e804c1d8..69074110ca3 100644 --- a/serving/samples/helloworld-nodejs/README.md +++ b/serving/samples/helloworld-nodejs/README.md @@ -19,7 +19,7 @@ While you can clone all of the code from this directory, hello world apps are generally more useful if you build them step-by-step. The following instructions recreate the source files from this folder. -1. Create a new directory and initalize `npm`. +1. Create a new directory and initalize `npm`. ```shell npm init @@ -84,33 +84,32 @@ recreate the source files from this folder. see [Dockerizing a Node.js web app](https://nodejs.org/en/docs/guides/nodejs-docker-webapp/). +```Dockerfile +# Use the official Node.js 10 image. +# https://hub.docker.com/_/node +FROM node:10 - ```Dockerfile - # Use the official Node.js 10 image. - # https://hub.docker.com/_/node - FROM node:10 - - # Create and change to the app directory. 
- WORKDIR /usr/src/app +# Create and change to the app directory. +WORKDIR /usr/src/app - # Copy application dependency manifests to the container image. - # A wildcard is used to ensure both package.json AND package-lock.json are copied. - # Copying this separately prevents re-running npm install on every code change. - COPY package*.json ./ +# Copy application dependency manifests to the container image. +# A wildcard is used to ensure both package.json AND package-lock.json are copied. +# Copying this separately prevents re-running npm install on every code change. +COPY package*.json ./ - # Install production dependencies. - RUN npm install --only=production +# Install production dependencies. +RUN npm install --only=production - # Copy local code to the container image. - COPY . . +# Copy local code to the container image. +COPY . . - # Service must listen to $PORT environment variable. - # This default value facilitates local development. - ENV PORT 8080 +# Service must listen to $PORT environment variable. +# This default value facilitates local development. +ENV PORT 8080 - # Run the web service on container startup. - CMD [ "npm", "start" ] - ``` +# Run the web service on container startup. +CMD [ "npm", "start" ] +``` 1. Create a new file, `service.yaml` and copy the following service definition into the file. Make sure to replace `{username}` with your Docker Hub diff --git a/serving/samples/helloworld-php/README.md b/serving/samples/helloworld-php/README.md index 09764000b8f..1054fdb12ec 100644 --- a/serving/samples/helloworld-php/README.md +++ b/serving/samples/helloworld-php/README.md @@ -36,27 +36,27 @@ recreate the source files from this folder. 1. Create a file named `Dockerfile` and copy the code block below into it. See [official PHP docker image](https://hub.docker.com/_/php/) for more details. - ```docker - # Use the official PHP 7.2 image. - # https://hub.docker.com/_/php - FROM php:7.2.6-apache - - # Copy local code to the container image. - COPY index.php /var/www/html/ - - # Use the PORT environment variable in Apache configuration files. - RUN sed -i 's/80/${PORT}/g' /etc/apache2/sites-available/000-default.conf /etc/apache2/ports.conf - - # Configure PHP for development. - # Switch to the production php.ini for production operations. - # RUN mv "$PHP_INI_DIR/php.ini-production" "$PHP_INI_DIR/php.ini" - # https://hub.docker.com/_/php#configuration - RUN mv "$PHP_INI_DIR/php.ini-development" "$PHP_INI_DIR/php.ini" - - # Service must listen to $PORT environment variable. - # This default value facilitates local development. - ENV PORT 8080 - ``` + ```docker + # Use the official PHP 7.2 image. + # https://hub.docker.com/_/php + FROM php:7.2.6-apache + + # Copy local code to the container image. + COPY index.php /var/www/html/ + + # Use the PORT environment variable in Apache configuration files. + RUN sed -i 's/80/${PORT}/g' /etc/apache2/sites-available/000-default.conf /etc/apache2/ports.conf + + # Configure PHP for development. + # Switch to the production php.ini for production operations. + # RUN mv "$PHP_INI_DIR/php.ini-production" "$PHP_INI_DIR/php.ini" + # https://hub.docker.com/_/php#configuration + RUN mv "$PHP_INI_DIR/php.ini-development" "$PHP_INI_DIR/php.ini" + + # Service must listen to $PORT environment variable. + # This default value facilitates local development. + ENV PORT 8080 + ``` 1. Create a new file, `service.yaml` and copy the following service definition into the file. 
Make sure to replace `{username}` with your Docker Hub diff --git a/serving/samples/helloworld-python/README.md b/serving/samples/helloworld-python/README.md index 03efd2219a0..323f8e11a84 100644 --- a/serving/samples/helloworld-python/README.md +++ b/serving/samples/helloworld-python/README.md @@ -47,26 +47,26 @@ recreate the source files from this folder. [official Python docker image](https://hub.docker.com/_/python/) for more details. - ```docker - # Use the official Python image. - # https://hub.docker.com/_/python - FROM python - - # Copy local code to the container image. - ENV APP_HOME /app - WORKDIR $APP_HOME - COPY . . - - # Install production dependencies. - RUN pip install Flask - - # Service must listen to $PORT environment variable. - # This default value facilitates local development. - ENV PORT 8080 - - # Run the web service on container startup. - CMD ["python", "app.py"] - ``` + ```docker + # Use the official Python image. + # https://hub.docker.com/_/python + FROM python + + # Copy local code to the container image. + ENV APP_HOME /app + WORKDIR $APP_HOME + COPY . . + + # Install production dependencies. + RUN pip install Flask + + # Service must listen to $PORT environment variable. + # This default value facilitates local development. + ENV PORT 8080 + + # Run the web service on container startup. + CMD ["python", "app.py"] + ``` 1. Create a new file, `service.yaml` and copy the following service definition into the file. Make sure to replace `{username}` with your Docker Hub diff --git a/serving/samples/helloworld-ruby/README.md b/serving/samples/helloworld-ruby/README.md index 2456f1cfd43..19e6d1debc9 100644 --- a/serving/samples/helloworld-ruby/README.md +++ b/serving/samples/helloworld-ruby/README.md @@ -42,27 +42,27 @@ recreate the source files from this folder. [official Ruby docker image](https://hub.docker.com/_/ruby/) for more details. - ```docker - # Use the official Ruby image. - # https://hub.docker.com/_/ruby - FROM ruby:2.5 - - # Install production dependencies. - WORKDIR /usr/src/app - COPY Gemfile Gemfile.lock ./ - ENV BUNDLE_FROZEN=true - RUN bundle install - - # Copy local code to the container image. - COPY . . - - # Service must listen to $PORT environment variable. - # This default value facilitates local development. - ENV PORT 8080 - - # Run the web service on container startup. - CMD ["ruby", "./app.rb"] - ``` + ```docker + # Use the official Ruby image. + # https://hub.docker.com/_/ruby + FROM ruby:2.5 + + # Install production dependencies. + WORKDIR /usr/src/app + COPY Gemfile Gemfile.lock ./ + ENV BUNDLE_FROZEN=true + RUN bundle install + + # Copy local code to the container image. + COPY . . + + # Service must listen to $PORT environment variable. + # This default value facilitates local development. + ENV PORT 8080 + + # Run the web service on container startup. + CMD ["ruby", "./app.rb"] + ``` 1. Create a file named `Gemfile` and copy the text block below into it. diff --git a/serving/samples/helloworld-scala/README.md b/serving/samples/helloworld-scala/README.md index afb9cea2cb4..8955b85baa6 100644 --- a/serving/samples/helloworld-scala/README.md +++ b/serving/samples/helloworld-scala/README.md @@ -1,17 +1,28 @@ # Hello World - Scala using Akka HTTP sample -A microservice which demonstrates how to get set up and running with Knative Serving when using [Scala](https://scala-lang.org/) and [Akka](https://akka.io/) [HTTP](https://doc.akka.io/docs/akka-http/current/). 
It will respond to an HTTP
+request with the text specified in an `ENV` variable named `MESSAGE`, defaulting
+to `"Hello World!"`.

 ## Prerequisites

-* A Kubernetes cluster [installation](https://github.com/knative/docs/blob/master/install/README.md) with Knative Serving up and running.
-* [Docker](https://www.docker.com) installed locally, and running, optionally a Docker Hub account configured or some other Docker Repository installed locally.
-* [Java JDK8 or later](https://adoptopenjdk.net/installation.html) installed locally.
-* [Scala's](https://scala-lang.org/) standard build tool [sbt](https://www.scala-sbt.org/) installed locally.
+- A Kubernetes cluster
+  [installation](https://github.com/knative/docs/blob/master/install/README.md)
+  with Knative Serving up and running.
+- [Docker](https://www.docker.com) installed locally, and running, optionally a
+  Docker Hub account configured or some other Docker Repository installed
+  locally.
+- [Java JDK8 or later](https://adoptopenjdk.net/installation.html) installed
+  locally.
+- [Scala's](https://scala-lang.org/) standard build tool
+  [sbt](https://www.scala-sbt.org/) installed locally.

 ## Configuring the sbt build

-If you want to use your Docker Hub repository, set the repository to "docker.io/yourusername/yourreponame".
+If you want to use your Docker Hub repository, set the repository to
+"docker.io/yourusername/yourreponame".

 If you use Minikube, you first need to run:

@@ -19,13 +30,16 @@ If you use Minikube, you first need to run:
 eval $(minikube docker-env)
 ```

-If want to use the Docker Repository inside Minikube, either set this to "dev.local" or if you want to use another repository name, then you need to run the following command after `docker:publishLocal`:
+If you want to use the Docker Repository inside Minikube, either set this to
+"dev.local", or if you want to use another repository name, you need to run the
+following command after `docker:publishLocal`:

 ```shell
 docker tag yourreponame/helloworld-scala: dev.local/helloworld-scala:
 ```

-Otherwise Knative Serving won't be able to resolve this image from the Minikube Docker Repository.
+Otherwise Knative Serving won't be able to resolve this image from the Minikube
+Docker Repository.

 You specify the repository in [build.sbt](build.sbt):

@@ -33,11 +47,14 @@ You specify the repository in [build.sbt](build.sbt):
 dockerRepository := Some("your_repository_name")
 ```

-You can learn more about the build configuration syntax [here](https://www.scala-sbt.org/1.x/docs/Basic-Def.html).
+You can learn more about the build configuration syntax
+[here](https://www.scala-sbt.org/1.x/docs/Basic-Def.html).

 ## Configuring the Service descriptor

-Importantly, in [helloworld-scala.yaml](helloworld-scala.yaml) **change the image reference to match up with the repository**, name, and version specified in the [build.sbt](build.sbt) in the previous section.
+Importantly, in [helloworld-scala.yaml](helloworld-scala.yaml) **change the
+image reference to match up with the repository**, name, and version specified
+in the [build.sbt](build.sbt) in the previous section.
```yaml
 apiVersion: serving.knative.dev/v1alpha1
 kind: Service
@@ -58,7 +75,6 @@ spec:
         value: "Scala & Akka on Knative says hello!"
       - name: HOST
         value: "localhost"
-
 ```

 ## Publishing to Docker

@@ -75,7 +91,8 @@ or
 sbt docker:publish
 ```

-Which of them to use is depending on whether you are publishing to a remote or a local Docker Repository.
+Which of them to use depends on whether you are publishing to a remote or a
+local Docker Repository.

 ## Deploying to Knative Serving

@@ -133,4 +150,4 @@ curl -v -H "Host: helloworld-scala.default.example.com" http://$SERVING_GATEWAY

 ```shell
 kubectl delete --filename helloworld-scala.yaml
-```
\ No newline at end of file
+```
diff --git a/serving/samples/knative-routing-go/README.md b/serving/samples/knative-routing-go/README.md
index d027c8997bc..feee214c4e1 100644
--- a/serving/samples/knative-routing-go/README.md
+++ b/serving/samples/knative-routing-go/README.md
@@ -185,8 +185,7 @@ kubectl get VirtualService entry-route --output yaml

 3. Send a request to the `Search` service and the `Login` service by using
    corresponding URIs. You should get the same results as directly accessing
-   these services.
-   \_ Get the ingress IP:
+   these services.
+
+   - Get the ingress IP:

    ```shell
    # In Knative 0.2.x and prior versions, the `knative-ingressgateway` service was used instead of `istio-ingressgateway`.
@@ -216,9 +215,9 @@ kubectl get VirtualService entry-route --output yaml

 ## How It Works

 When an external request with host `example.com` reaches
-`knative-ingress-gateway` Gateway, the `entry-route` VirtualService will check if
-it has `/search` or `/login` URI. If the URI matches, then the host of request
-will be rewritten into the host of `Search` service or `Login` service
+`knative-ingress-gateway` Gateway, the `entry-route` VirtualService will check
+if it has `/search` or `/login` URI. If the URI matches, then the host of
+request will be rewritten into the host of `Search` service or `Login` service
 correspondingly. This resets the final destination of the request. The request
 with updated host will be forwarded to `knative-ingress-gateway` Gateway again.
 The Gateway proxy checks the updated host, and forwards it to `Search` or
diff --git a/serving/samples/telemetry-go/README.md b/serving/samples/telemetry-go/README.md
index e7b3acbe0e6..47108c8df5e 100644
--- a/serving/samples/telemetry-go/README.md
+++ b/serving/samples/telemetry-go/README.md
@@ -114,8 +114,8 @@ kubectl get revisions --output yaml

 To access this service via `curl`, you need to determine its ingress address.

-1. To determine if your service is ready:
-   Check the status of your Knative gateway:
+1. To determine if your service is ready: Check the status of your Knative
+   gateway:

    ```
    # In Knative 0.2.x and prior versions, the `knative-ingressgateway` service was used instead of `istio-ingressgateway`.
diff --git a/serving/samples/traffic-splitting/README.md b/serving/samples/traffic-splitting/README.md
index fb54715999b..d0c8c35effc 100644
--- a/serving/samples/traffic-splitting/README.md
+++ b/serving/samples/traffic-splitting/README.md
@@ -43,8 +43,8 @@ kubectl apply --filename serving/samples/traffic-splitting/updated_configuration
 kubectl get route --output yaml
 ```

-4. When the new route is ready, you can access the new endpoints:
-   The hostname and IP address can be found in the same manner as the
+4. 
When the new route is ready, you can access the new endpoints: The hostname + and IP address can be found in the same manner as the [Creating a RESTful Service](../rest-api-go) sample: ``` From 3641d3d5ba4c066c7ce783b2d383c55176e57932 Mon Sep 17 00:00:00 2001 From: Matt Moore Date: Thu, 7 Feb 2019 23:31:14 +0000 Subject: [PATCH 2/2] Fix prettier.io issues. --- .../serving/helloworld-elixir/README.md | 4 +- install/Knative-with-ICP.md | 2 +- serving/installing-logging-metrics-traces.md | 24 +++++------ serving/samples/helloworld-nodejs/README.md | 40 +++++++++---------- 4 files changed, 34 insertions(+), 36 deletions(-) diff --git a/community/samples/serving/helloworld-elixir/README.md b/community/samples/serving/helloworld-elixir/README.md index eb6067b9734..e879a64ddbb 100644 --- a/community/samples/serving/helloworld-elixir/README.md +++ b/community/samples/serving/helloworld-elixir/README.md @@ -179,7 +179,7 @@ above. xxxxxxx-ingressgateway LoadBalancer 10.23.247.74 35.203.155.229 80:32380/TCP,443:32390/TCP,32400:32400/TCP 2d ``` -1) To find the URL for your service, use +1. To find the URL for your service, use ``` kubectl get ksvc helloworld-elixir --output=custom-columns=NAME:.metadata.name,DOMAIN:.status.domain @@ -188,7 +188,7 @@ above. helloworld-elixir helloworld-elixir.default.example.com ``` -1) Now you can make a request to your app to see the results. Replace +1. Now you can make a request to your app to see the results. Replace `{IP_ADDRESS}` with the address you see returned in the previous step. ```shell diff --git a/install/Knative-with-ICP.md b/install/Knative-with-ICP.md index 2706045614f..e50e5fefda2 100644 --- a/install/Knative-with-ICP.md +++ b/install/Knative-with-ICP.md @@ -162,7 +162,7 @@ see [Performing a Custom Knative Installation](Knative-custom-install.md). [Installing logging, metrics, and traces](../serving/installing-logging-metrics-traces.md) for details about installing the various supported observability plug-ins. -1) Monitor the Knative components until all of the components show a `STATUS` of +1. Monitor the Knative components until all of the components show a `STATUS` of `Running`: ```bash diff --git a/serving/installing-logging-metrics-traces.md b/serving/installing-logging-metrics-traces.md index 5be9b11ddc8..f80c0f2a167 100644 --- a/serving/installing-logging-metrics-traces.md +++ b/serving/installing-logging-metrics-traces.md @@ -201,24 +201,22 @@ To configure and setup monitoring: kubectl get nodes --selector beta.kubernetes.io/fluentd-ds-ready=true ``` -1. If you receive the `No Resources Found` response: +1. If you receive the `No Resources Found` response: - 1. Run the following command to ensure that the Fluentd DaemonSet runs on - all your nodes: + 1. Run the following command to ensure that the Fluentd DaemonSet runs on all your nodes: - ```shell - kubectl label nodes --all beta.kubernetes.io/fluentd-ds-ready="true" - ``` + ```shell + kubectl label nodes --all beta.kubernetes.io/fluentd-ds-ready="true" + ``` - 1. Run the following command to ensure that the `fluentd-ds` daemonset is - ready on at least one node: + 1. Run the following command to ensure that the `fluentd-ds` daemonset is ready on at least one node: - ```shell - kubectl get daemonset fluentd-ds --namespace knative-monitoring - ``` + ```shell + kubectl get daemonset fluentd-ds --namespace knative-monitoring + ``` - See [Accessing Logs](./accessing-logs.md) for more information about - logs in Knative. 
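As a final sanity check on the labeling step above, you can count how many nodes
carry the readiness label; a sketch (the count should match the number of nodes
you expect Fluentd to cover):

```shell
# Count the nodes selected by the fluentd-ds-ready label.
kubectl get nodes --selector beta.kubernetes.io/fluentd-ds-ready=true --no-headers | wc -l
```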
+See [Accessing Logs](./accessing-logs.md) for more information about +logs in Knative. ## End to end traces diff --git a/serving/samples/helloworld-nodejs/README.md b/serving/samples/helloworld-nodejs/README.md index 69074110ca3..102b7b612a4 100644 --- a/serving/samples/helloworld-nodejs/README.md +++ b/serving/samples/helloworld-nodejs/README.md @@ -84,32 +84,32 @@ recreate the source files from this folder. see [Dockerizing a Node.js web app](https://nodejs.org/en/docs/guides/nodejs-docker-webapp/). -```Dockerfile -# Use the official Node.js 10 image. -# https://hub.docker.com/_/node -FROM node:10 + ```Dockerfile + # Use the official Node.js 10 image. + # https://hub.docker.com/_/node + FROM node:10 -# Create and change to the app directory. -WORKDIR /usr/src/app + # Create and change to the app directory. + WORKDIR /usr/src/app -# Copy application dependency manifests to the container image. -# A wildcard is used to ensure both package.json AND package-lock.json are copied. -# Copying this separately prevents re-running npm install on every code change. -COPY package*.json ./ + # Copy application dependency manifests to the container image. + # A wildcard is used to ensure both package.json AND package-lock.json are copied. + # Copying this separately prevents re-running npm install on every code change. + COPY package*.json ./ -# Install production dependencies. -RUN npm install --only=production + # Install production dependencies. + RUN npm install --only=production -# Copy local code to the container image. -COPY . . + # Copy local code to the container image. + COPY . . -# Service must listen to $PORT environment variable. -# This default value facilitates local development. -ENV PORT 8080 + # Service must listen to $PORT environment variable. + # This default value facilitates local development. + ENV PORT 8080 -# Run the web service on container startup. -CMD [ "npm", "start" ] -``` + # Run the web service on container startup. + CMD [ "npm", "start" ] + ``` 1. Create a new file, `service.yaml` and copy the following service definition into the file. Make sure to replace `{username}` with your Docker Hub