From 3ff5ec1effb3d204a3c77e74b2a2dc39c88eb901 Mon Sep 17 00:00:00 2001 From: Karen Bradshaw Date: Thu, 11 Feb 2021 15:51:47 -0500 Subject: [PATCH] clean up use of word: just --- .../en/docs/concepts/architecture/nodes.md | 2 +- .../cluster-administration/flow-control.md | 2 +- .../manage-deployment.md | 4 ++-- .../cluster-administration/proxies.md | 2 +- .../manage-resources-containers.md | 5 ++--- .../en/docs/concepts/configuration/secret.md | 2 +- content/en/docs/concepts/containers/images.md | 2 +- .../docs/concepts/extend-kubernetes/_index.md | 2 +- .../extend-kubernetes/extend-cluster.md | 2 +- .../working-with-objects/namespaces.md | 4 ++-- .../scheduling-eviction/assign-pod-node.md | 6 +++--- .../concepts/security/controlling-access.md | 2 +- .../connect-applications-service.md | 2 +- .../concepts/services-networking/ingress.md | 2 +- .../services-networking/service-topology.md | 4 ++-- .../concepts/services-networking/service.md | 7 +++---- .../concepts/storage/ephemeral-volumes.md | 3 ++- .../concepts/storage/persistent-volumes.md | 2 +- .../docs/concepts/storage/storage-classes.md | 4 ++-- .../concepts/storage/volume-pvc-datasource.md | 2 +- content/en/docs/concepts/storage/volumes.md | 2 +- .../workloads/controllers/deployment.md | 2 +- .../concepts/workloads/controllers/job.md | 2 +- .../workloads/controllers/replicaset.md | 2 +- .../controllers/replicationcontroller.md | 3 +-- .../concepts/workloads/pods/pod-lifecycle.md | 2 +- .../new-content/blogs-case-studies.md | 4 ++-- .../contribute/new-content/new-features.md | 5 ++--- .../docs/reference/access-authn-authz/abac.md | 2 +- .../access-authn-authz/authorization.md | 2 +- .../access-authn-authz/bootstrap-tokens.md | 2 +- .../certificate-signing-requests.md | 2 +- .../docs/reference/access-authn-authz/rbac.md | 4 ++-- .../kubelet-tls-bootstrapping.md | 6 +++--- .../glossary/cloud-controller-manager.md | 2 +- .../en/docs/reference/kubectl/cheatsheet.md | 2 +- content/en/docs/reference/kubectl/overview.md | 2 +- .../reference/using-api/deprecation-policy.md | 2 +- .../reference/using-api/server-side-apply.md | 9 ++++----- .../setup/best-practices/cluster-large.md | 7 +++---- .../production-environment/tools/kubespray.md | 2 +- .../windows/intro-windows-in-kubernetes.md | 6 +++--- .../access-cluster.md | 4 ++-- .../access-cluster-services.md | 2 +- .../extended-resource-node.md | 2 +- ...aranteed-scheduling-critical-addon-pods.md | 11 ++-------- .../kubeadm/kubeadm-certs.md | 2 +- ...k-if-dockershim-deprecation-affects-you.md | 4 ++-- .../namespaces-walkthrough.md | 2 +- .../tasks/administer-cluster/namespaces.md | 4 ++-- .../calico-network-policy.md | 2 +- .../administer-cluster/safely-drain-node.md | 4 ++-- .../managing-secret-using-config-file.md | 2 +- .../managing-secret-using-kubectl.md | 5 ++--- .../managing-secret-using-kustomize.md | 2 +- .../assign-cpu-resource.md | 2 +- ...igure-liveness-readiness-startup-probes.md | 2 +- .../pull-image-private-registry.md | 2 +- .../translate-compose-kubernetes.md | 4 ++-- .../debug-application-introspection.md | 2 +- .../debug-pod-replication-controller.md | 2 +- .../debug-service.md | 7 +++---- .../logging-stackdriver.md | 6 +++--- .../configure-multiple-schedulers.md | 20 +++++++++---------- .../custom-resource-definition-versioning.md | 2 +- .../custom-resource-definitions.md | 2 +- .../setup-extension-api-server.md | 2 +- .../coarse-parallel-processing-work-queue.md | 7 +++---- .../fine-parallel-processing-work-queue.md | 4 ++-- .../manage-daemon/rollback-daemon-set.md | 2 +- 
.../tasks/manage-daemon/update-daemon-set.md | 4 ++-- .../docs/tasks/manage-gpus/scheduling-gpus.md | 2 +- .../run-application/delete-stateful-set.md | 4 ++-- .../force-delete-stateful-set-pod.md | 2 +- .../horizontal-pod-autoscale-walkthrough.md | 2 +- .../horizontal-pod-autoscale.md | 2 +- .../install-service-catalog-using-sc.md | 5 +---- content/en/docs/test.md | 2 +- .../en/docs/tutorials/clusters/apparmor.md | 2 +- content/en/docs/tutorials/clusters/seccomp.md | 4 ++-- content/en/docs/tutorials/hello-minikube.md | 4 ++-- 81 files changed, 130 insertions(+), 148 deletions(-) diff --git a/content/en/docs/concepts/architecture/nodes.md b/content/en/docs/concepts/architecture/nodes.md index 823dddf7104f6..2c0f206e45785 100644 --- a/content/en/docs/concepts/architecture/nodes.md +++ b/content/en/docs/concepts/architecture/nodes.md @@ -17,7 +17,7 @@ and contains the services necessary to run {{< glossary_tooltip text="Pods" term_id="pod" >}} Typically you have several nodes in a cluster; in a learning or resource-limited -environment, you might have just one. +environment, you might have only one node. The [components](/docs/concepts/overview/components/#node-components) on a node include the {{< glossary_tooltip text="kubelet" term_id="kubelet" >}}, a diff --git a/content/en/docs/concepts/cluster-administration/flow-control.md b/content/en/docs/concepts/cluster-administration/flow-control.md index e8fd3e4061a88..3e94277d93107 100644 --- a/content/en/docs/concepts/cluster-administration/flow-control.md +++ b/content/en/docs/concepts/cluster-administration/flow-control.md @@ -427,7 +427,7 @@ poorly-behaved workloads that may be harming system health. histogram vector of queue lengths for the queues, broken down by the labels `priority_level` and `flow_schema`, as sampled by the enqueued requests. Each request that gets queued contributes one - sample to its histogram, reporting the length of the queue just + sample to its histogram, reporting the length of the queue immediately after the request was added. Note that this produces different statistics than an unbiased survey would. {{< note >}} diff --git a/content/en/docs/concepts/cluster-administration/manage-deployment.md b/content/en/docs/concepts/cluster-administration/manage-deployment.md index 40320e428542e..f51911116d4a4 100644 --- a/content/en/docs/concepts/cluster-administration/manage-deployment.md +++ b/content/en/docs/concepts/cluster-administration/manage-deployment.md @@ -278,7 +278,7 @@ pod/my-nginx-2035384211-u3t6x labeled ``` This first filters all pods with the label "app=nginx", and then labels them with the "tier=fe". -To see the pods you just labeled, run: +To see the pods you labeled, run: ```shell kubectl get pods -l app=nginx -L tier @@ -411,7 +411,7 @@ and ## Disruptive updates -In some cases, you may need to update resource fields that cannot be updated once initialized, or you may just want to make a recursive change immediately, such as to fix broken pods created by a Deployment. To change such fields, use `replace --force`, which deletes and re-creates the resource. In this case, you can modify your original configuration file: +In some cases, you may need to update resource fields that cannot be updated once initialized, or you may want to make a recursive change immediately, such as to fix broken pods created by a Deployment. To change such fields, use `replace --force`, which deletes and re-creates the resource. 
In this case, you can modify your original configuration file: ```shell kubectl replace -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml --force diff --git a/content/en/docs/concepts/cluster-administration/proxies.md b/content/en/docs/concepts/cluster-administration/proxies.md index 9bf204bd9f246..ba86c969b8d7a 100644 --- a/content/en/docs/concepts/cluster-administration/proxies.md +++ b/content/en/docs/concepts/cluster-administration/proxies.md @@ -39,7 +39,7 @@ There are several different proxies you may encounter when using Kubernetes: - proxies UDP, TCP and SCTP - does not understand HTTP - provides load balancing - - is just used to reach services + - is only used to reach services 1. A Proxy/Load-balancer in front of apiserver(s): diff --git a/content/en/docs/concepts/configuration/manage-resources-containers.md b/content/en/docs/concepts/configuration/manage-resources-containers.md index 2668050d26554..fa683e97f349a 100644 --- a/content/en/docs/concepts/configuration/manage-resources-containers.md +++ b/content/en/docs/concepts/configuration/manage-resources-containers.md @@ -72,8 +72,7 @@ You cannot overcommit `hugepages-*` resources. This is different from the `memory` and `cpu` resources. {{< /note >}} -CPU and memory are collectively referred to as *compute resources*, or just -*resources*. Compute +CPU and memory are collectively referred to as *compute resources*, or *resources*. Compute resources are measurable quantities that can be requested, allocated, and consumed. They are distinct from [API resources](/docs/concepts/overview/kubernetes-api/). API resources, such as Pods and @@ -554,7 +553,7 @@ extender. ### Consuming extended resources -Users can consume extended resources in Pod specs just like CPU and memory. +Users can consume extended resources in Pod specs like CPU and memory. The scheduler takes care of the resource accounting so that no more than the available amount is simultaneously allocated to Pods. diff --git a/content/en/docs/concepts/configuration/secret.md b/content/en/docs/concepts/configuration/secret.md index 0a3688df712f2..7f47aeeaf57a5 100644 --- a/content/en/docs/concepts/configuration/secret.md +++ b/content/en/docs/concepts/configuration/secret.md @@ -109,7 +109,7 @@ empty-secret Opaque 0 2m6s ``` The `DATA` column shows the number of data items stored in the Secret. -In this case, `0` means we have just created an empty Secret. +In this case, `0` means we have created an empty Secret. ### Service account token Secrets diff --git a/content/en/docs/concepts/containers/images.md b/content/en/docs/concepts/containers/images.md index 1166d4106aeba..6d0db16fe8f34 100644 --- a/content/en/docs/concepts/containers/images.md +++ b/content/en/docs/concepts/containers/images.md @@ -135,7 +135,7 @@ Here are the recommended steps to configuring your nodes to use a private regist example, run these on your desktop/laptop: 1. Run `docker login [server]` for each set of credentials you want to use. This updates `$HOME/.docker/config.json` on your PC. - 1. View `$HOME/.docker/config.json` in an editor to ensure it contains just the credentials you want to use. + 1. View `$HOME/.docker/config.json` in an editor to ensure it contains only the credentials you want to use. 1. 
Get a list of your nodes; for example: - if you want the names: `nodes=$( kubectl get nodes -o jsonpath='{range.items[*].metadata}{.name} {end}' )` - if you want to get the IP addresses: `nodes=$( kubectl get nodes -o jsonpath='{range .items[*].status.addresses[?(@.type=="ExternalIP")]}{.address} {end}' )` diff --git a/content/en/docs/concepts/extend-kubernetes/_index.md b/content/en/docs/concepts/extend-kubernetes/_index.md index 429912a9eda36..cc5ba809ecf41 100644 --- a/content/en/docs/concepts/extend-kubernetes/_index.md +++ b/content/en/docs/concepts/extend-kubernetes/_index.md @@ -145,7 +145,7 @@ Kubernetes provides several built-in authentication methods, and an [Authenticat ### Authorization -[Authorization](/docs/reference/access-authn-authz/webhook/) determines whether specific users can read, write, and do other operations on API resources. It just works at the level of whole resources -- it doesn't discriminate based on arbitrary object fields. If the built-in authorization options don't meet your needs, and [Authorization webhook](/docs/reference/access-authn-authz/webhook/) allows calling out to user-provided code to make an authorization decision. +[Authorization](/docs/reference/access-authn-authz/webhook/) determines whether specific users can read, write, and do other operations on API resources. It works at the level of whole resources -- it doesn't discriminate based on arbitrary object fields. If the built-in authorization options don't meet your needs, an [Authorization webhook](/docs/reference/access-authn-authz/webhook/) allows calling out to user-provided code to make an authorization decision. ### Dynamic Admission Control diff --git a/content/en/docs/concepts/extend-kubernetes/extend-cluster.md b/content/en/docs/concepts/extend-kubernetes/extend-cluster.md index 84d14cee3e722..2bdc74e7e96ef 100644 --- a/content/en/docs/concepts/extend-kubernetes/extend-cluster.md +++ b/content/en/docs/concepts/extend-kubernetes/extend-cluster.md @@ -146,7 +146,7 @@ Kubernetes provides several built-in authentication methods, and an [Authenticat ### Authorization -[Authorization](/docs/reference/access-authn-authz/webhook/) determines whether specific users can read, write, and do other operations on API resources. It just works at the level of whole resources -- it doesn't discriminate based on arbitrary object fields. If the built-in authorization options don't meet your needs, and [Authorization webhook](/docs/reference/access-authn-authz/webhook/) allows calling out to user-provided code to make an authorization decision. +[Authorization](/docs/reference/access-authn-authz/webhook/) determines whether specific users can read, write, and do other operations on API resources. It works at the level of whole resources -- it doesn't discriminate based on arbitrary object fields. If the built-in authorization options don't meet your needs, an [Authorization webhook](/docs/reference/access-authn-authz/webhook/) allows calling out to user-provided code to make an authorization decision. ### Dynamic Admission Control diff --git a/content/en/docs/concepts/overview/working-with-objects/namespaces.md b/content/en/docs/concepts/overview/working-with-objects/namespaces.md index f078cb86360d8..b7ae176d7c0bb 100644 --- a/content/en/docs/concepts/overview/working-with-objects/namespaces.md +++ b/content/en/docs/concepts/overview/working-with-objects/namespaces.md @@ -28,7 +28,7 @@ resource can only be in one namespace.
Namespaces are a way to divide cluster resources between multiple users (via [resource quota](/docs/concepts/policy/resource-quotas/)). -It is not necessary to use multiple namespaces just to separate slightly different +It is not necessary to use multiple namespaces to separate slightly different resources, such as different versions of the same software: use [labels](/docs/concepts/overview/working-with-objects/labels) to distinguish resources within the same namespace. @@ -91,7 +91,7 @@ kubectl config view --minify | grep namespace: When you create a [Service](/docs/concepts/services-networking/service/), it creates a corresponding [DNS entry](/docs/concepts/services-networking/dns-pod-service/). This entry is of the form `..svc.cluster.local`, which means -that if a container just uses ``, it will resolve to the service which +that if a container only uses ``, it will resolve to the service which is local to a namespace. This is useful for using the same configuration across multiple namespaces such as Development, Staging and Production. If you want to reach across namespaces, you need to use the fully qualified domain name (FQDN). diff --git a/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md b/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md index f0437a9b8b9f9..beab67f2fc664 100644 --- a/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md +++ b/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md @@ -120,12 +120,12 @@ pod is eligible to be scheduled on, based on labels on the node. There are currently two types of node affinity, called `requiredDuringSchedulingIgnoredDuringExecution` and `preferredDuringSchedulingIgnoredDuringExecution`. You can think of them as "hard" and "soft" respectively, -in the sense that the former specifies rules that *must* be met for a pod to be scheduled onto a node (just like +in the sense that the former specifies rules that *must* be met for a pod to be scheduled onto a node (similar to `nodeSelector` but using a more expressive syntax), while the latter specifies *preferences* that the scheduler will try to enforce but will not guarantee. The "IgnoredDuringExecution" part of the names means that, similar to how `nodeSelector` works, if labels on a node change at runtime such that the affinity rules on a pod are no longer -met, the pod will still continue to run on the node. In the future we plan to offer -`requiredDuringSchedulingRequiredDuringExecution` which will be just like `requiredDuringSchedulingIgnoredDuringExecution` +met, the pod continues to run on the node. In the future we plan to offer +`requiredDuringSchedulingRequiredDuringExecution` which will be identical to `requiredDuringSchedulingIgnoredDuringExecution` except that it will evict pods from nodes that cease to satisfy the pods' node affinity requirements. Thus an example of `requiredDuringSchedulingIgnoredDuringExecution` would be "only run the pod on nodes with Intel CPUs" diff --git a/content/en/docs/concepts/security/controlling-access.md b/content/en/docs/concepts/security/controlling-access.md index e025ac10e3d98..9d6c2b9617ef7 100644 --- a/content/en/docs/concepts/security/controlling-access.md +++ b/content/en/docs/concepts/security/controlling-access.md @@ -43,7 +43,7 @@ Authenticators are described in more detail in [Authentication](/docs/reference/access-authn-authz/authentication/). 
The input to the authentication step is the entire HTTP request; however, it typically -just examines the headers and/or client certificate. +examines the headers and/or client certificate. Authentication modules include client certificates, password, and plain tokens, bootstrap tokens, and JSON Web Tokens (used for service accounts). diff --git a/content/en/docs/concepts/services-networking/connect-applications-service.md b/content/en/docs/concepts/services-networking/connect-applications-service.md index 402c3c57ca1cc..14bc98101fea0 100644 --- a/content/en/docs/concepts/services-networking/connect-applications-service.md +++ b/content/en/docs/concepts/services-networking/connect-applications-service.md @@ -387,7 +387,7 @@ $ curl https://: -k

Welcome to nginx!

``` -Let's now recreate the Service to use a cloud load balancer, just change the `Type` of `my-nginx` Service from `NodePort` to `LoadBalancer`: +Let's now recreate the Service to use a cloud load balancer. Change the `Type` of `my-nginx` Service from `NodePort` to `LoadBalancer`: ```shell kubectl edit svc my-nginx diff --git a/content/en/docs/concepts/services-networking/ingress.md b/content/en/docs/concepts/services-networking/ingress.md index 7a189a401b00c..b6be91cb9aa95 100644 --- a/content/en/docs/concepts/services-networking/ingress.md +++ b/content/en/docs/concepts/services-networking/ingress.md @@ -260,7 +260,7 @@ There are existing Kubernetes concepts that allow you to expose a single Service {{< codenew file="service/networking/test-ingress.yaml" >}} If you create it using `kubectl apply -f` you should be able to view the state -of the Ingress you just added: +of the Ingress you added: ```bash kubectl get ingress test-ingress diff --git a/content/en/docs/concepts/services-networking/service-topology.md b/content/en/docs/concepts/services-networking/service-topology.md index d36b76f55f003..66976b23fb6d3 100644 --- a/content/en/docs/concepts/services-networking/service-topology.md +++ b/content/en/docs/concepts/services-networking/service-topology.md @@ -57,7 +57,7 @@ the first label matches the originating Node's value for that label. If there is no backend for the Service on a matching Node, then the second label will be considered, and so forth, until no labels remain. -If no match is found, the traffic will be rejected, just as if there were no +If no match is found, the traffic will be rejected, as if there were no backends for the Service at all. That is, endpoints are chosen based on the first topology key with available backends. If this field is specified and all entries have no backends that match the topology of the client, the service has no @@ -87,7 +87,7 @@ traffic as follows. * Service topology is not compatible with `externalTrafficPolicy=Local`, and therefore a Service cannot use both of these features. It is possible to use - both features in the same cluster on different Services, just not on the same + both features in the same cluster on different Services, only not on the same Service. * Valid topology keys are currently limited to `kubernetes.io/hostname`, diff --git a/content/en/docs/concepts/services-networking/service.md b/content/en/docs/concepts/services-networking/service.md index b7a7edcd386a9..b57fe5001bc2f 100644 --- a/content/en/docs/concepts/services-networking/service.md +++ b/content/en/docs/concepts/services-networking/service.md @@ -527,7 +527,7 @@ for NodePort use. Using a NodePort gives you the freedom to set up your own load balancing solution, to configure environments that are not fully supported by Kubernetes, or even -to just expose one or more nodes' IPs directly. +to expose one or more nodes' IPs directly. Note that this Service is visible as `:spec.ports[*].nodePort` and `.spec.clusterIP:spec.ports[*].port`. (If the `--nodeport-addresses` flag in kube-proxy is set, would be filtered NodeIP(s).) @@ -785,8 +785,7 @@ you can use the following annotations: ``` In the above example, if the Service contained three ports, `80`, `443`, and -`8443`, then `443` and `8443` would use the SSL certificate, but `80` would just -be proxied HTTP. +`8443`, then `443` and `8443` would use the SSL certificate, but `80` would be proxied HTTP. 
From Kubernetes v1.9 onwards you can use [predefined AWS SSL policies](https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-security-policy-table.html) with HTTPS or SSL listeners for your Services. To see which policies are available for use, you can use the `aws` command line tool: @@ -1107,7 +1106,7 @@ but the current API requires it. ## Virtual IP implementation {#the-gory-details-of-virtual-ips} -The previous information should be sufficient for many people who just want to +The previous information should be sufficient for many people who want to use Services. However, there is a lot going on behind the scenes that may be worth understanding. diff --git a/content/en/docs/concepts/storage/ephemeral-volumes.md b/content/en/docs/concepts/storage/ephemeral-volumes.md index 9b0b9464f5c92..bc391a3f36fee 100644 --- a/content/en/docs/concepts/storage/ephemeral-volumes.md +++ b/content/en/docs/concepts/storage/ephemeral-volumes.md @@ -135,8 +135,9 @@ As a cluster administrator, you can use a [PodSecurityPolicy](/docs/concepts/pol This feature requires the `GenericEphemeralVolume` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) to be enabled. Because this is an alpha feature, it is disabled by default. -Generic ephemeral volumes are similar to `emptyDir` volumes, just more +Generic ephemeral volumes are similar to `emptyDir` volumes, except more flexible: + - Storage can be local or network-attached. - Volumes can have a fixed size that Pods are not able to exceed. - Volumes may have some initial data, depending on the driver and diff --git a/content/en/docs/concepts/storage/persistent-volumes.md b/content/en/docs/concepts/storage/persistent-volumes.md index ef46b7f99aae4..54e42bae9ee50 100644 --- a/content/en/docs/concepts/storage/persistent-volumes.md +++ b/content/en/docs/concepts/storage/persistent-volumes.md @@ -29,7 +29,7 @@ A _PersistentVolume_ (PV) is a piece of storage in the cluster that has been pro A _PersistentVolumeClaim_ (PVC) is a request for storage by a user. It is similar to a Pod. Pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resources (CPU and Memory). Claims can request specific size and access modes (e.g., they can be mounted ReadWriteOnce, ReadOnlyMany or ReadWriteMany, see [AccessModes](#access-modes)). -While PersistentVolumeClaims allow a user to consume abstract storage resources, it is common that users need PersistentVolumes with varying properties, such as performance, for different problems. Cluster administrators need to be able to offer a variety of PersistentVolumes that differ in more ways than just size and access modes, without exposing users to the details of how those volumes are implemented. For these needs, there is the _StorageClass_ resource. +While PersistentVolumeClaims allow a user to consume abstract storage resources, it is common that users need PersistentVolumes with varying properties, such as performance, for different problems. Cluster administrators need to be able to offer a variety of PersistentVolumes that differ in more ways than size and access modes, without exposing users to the details of how those volumes are implemented. For these needs, there is the _StorageClass_ resource. See the [detailed walkthrough with working examples](/docs/tasks/configure-pod-container/configure-persistent-volume-storage/). 
diff --git a/content/en/docs/concepts/storage/storage-classes.md b/content/en/docs/concepts/storage/storage-classes.md index 6834977d70b1d..0abdf6b545eac 100644 --- a/content/en/docs/concepts/storage/storage-classes.md +++ b/content/en/docs/concepts/storage/storage-classes.md @@ -37,7 +37,7 @@ request a particular class. Administrators set the name and other parameters of a class when first creating StorageClass objects, and the objects cannot be updated once they are created. -Administrators can specify a default StorageClass just for PVCs that don't +Administrators can specify a default StorageClass only for PVCs that don't request any particular class to bind to: see the [PersistentVolumeClaim section](/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims) for details. @@ -569,7 +569,7 @@ parameters: `"http(s)://api-server:7860"` * `registry`: Quobyte registry to use to mount the volume. You can specify the registry as ``:`` pair or if you want to specify multiple - registries you just have to put a comma between them e.q. + registries, put a comma between them. ``:,:,:``. The host can be an IP address or if you have a working DNS you can also provide the DNS names. diff --git a/content/en/docs/concepts/storage/volume-pvc-datasource.md b/content/en/docs/concepts/storage/volume-pvc-datasource.md index 8210df661cb76..9e59560d1d460 100644 --- a/content/en/docs/concepts/storage/volume-pvc-datasource.md +++ b/content/en/docs/concepts/storage/volume-pvc-datasource.md @@ -40,7 +40,7 @@ Users need to be aware of the following when using this feature: ## Provisioning -Clones are provisioned just like any other PVC with the exception of adding a dataSource that references an existing PVC in the same namespace. +Clones are provisioned like any other PVC with the exception of adding a dataSource that references an existing PVC in the same namespace. ```yaml apiVersion: v1 diff --git a/content/en/docs/concepts/storage/volumes.md b/content/en/docs/concepts/storage/volumes.md index 8ed91067e06dd..a00b22fcbf87f 100644 --- a/content/en/docs/concepts/storage/volumes.md +++ b/content/en/docs/concepts/storage/volumes.md @@ -38,7 +38,7 @@ that run within the pod, and data is preserved across container restarts. When a ceases to exist, Kubernetes destroys ephemeral volumes; however, Kubernetes does not destroy persistent volumes. -At its core, a volume is just a directory, possibly with some data in it, which +At its core, a volume is a directory, possibly with some data in it, which is accessible to the containers in a pod. How that directory comes to be, the medium that backs it, and the contents of it are determined by the particular volume type used. diff --git a/content/en/docs/concepts/workloads/controllers/deployment.md b/content/en/docs/concepts/workloads/controllers/deployment.md index 038088941412f..22b95255c50bf 100644 --- a/content/en/docs/concepts/workloads/controllers/deployment.md +++ b/content/en/docs/concepts/workloads/controllers/deployment.md @@ -708,7 +708,7 @@ nginx-deployment-618515232 11 11 11 7m You can pause a Deployment before triggering one or more updates and then resume it. This allows you to apply multiple fixes in between pausing and resuming without triggering unnecessary rollouts. 
-* For example, with a Deployment that was just created: +* For example, with a Deployment that was created: Get the Deployment details: ```shell kubectl get deploy diff --git a/content/en/docs/concepts/workloads/controllers/job.md b/content/en/docs/concepts/workloads/controllers/job.md index 2c99a704d1efd..be4393d8919de 100644 --- a/content/en/docs/concepts/workloads/controllers/job.md +++ b/content/en/docs/concepts/workloads/controllers/job.md @@ -99,7 +99,7 @@ pi-5rwd7 ``` Here, the selector is the same as the selector for the Job. The `--output=jsonpath` option specifies an expression -that just gets the name from each Pod in the returned list. +with the name from each Pod in the returned list. View the standard output of one of the pods: diff --git a/content/en/docs/concepts/workloads/controllers/replicaset.md b/content/en/docs/concepts/workloads/controllers/replicaset.md index e45d20c8f7d6c..edc96e4c00f83 100644 --- a/content/en/docs/concepts/workloads/controllers/replicaset.md +++ b/content/en/docs/concepts/workloads/controllers/replicaset.md @@ -222,7 +222,7 @@ In this manner, a ReplicaSet can own a non-homogenous set of Pods ## Writing a ReplicaSet manifest As with all other Kubernetes API objects, a ReplicaSet needs the `apiVersion`, `kind`, and `metadata` fields. -For ReplicaSets, the kind is always just ReplicaSet. +For ReplicaSets, the `kind` is always a ReplicaSet. In Kubernetes 1.9 the API version `apps/v1` on the ReplicaSet kind is the current version and is enabled by default. The API version `apps/v1beta2` is deprecated. Refer to the first lines of the `frontend.yaml` example for guidance. diff --git a/content/en/docs/concepts/workloads/controllers/replicationcontroller.md b/content/en/docs/concepts/workloads/controllers/replicationcontroller.md index 23d87f81fd617..9bfb1264a4ea9 100644 --- a/content/en/docs/concepts/workloads/controllers/replicationcontroller.md +++ b/content/en/docs/concepts/workloads/controllers/replicationcontroller.md @@ -110,8 +110,7 @@ nginx-3ntk0 nginx-4ok8v nginx-qrm3m Here, the selector is the same as the selector for the ReplicationController (seen in the `kubectl describe` output), and in a different form in `replication.yaml`. The `--output=jsonpath` option -specifies an expression that just gets the name from each pod in the returned list. - +specifies an expression with the name from each pod in the returned list. ## Writing a ReplicationController Spec diff --git a/content/en/docs/concepts/workloads/pods/pod-lifecycle.md b/content/en/docs/concepts/workloads/pods/pod-lifecycle.md index 832785923a54c..778bee6c02d6e 100644 --- a/content/en/docs/concepts/workloads/pods/pod-lifecycle.md +++ b/content/en/docs/concepts/workloads/pods/pod-lifecycle.md @@ -312,7 +312,7 @@ can specify a readiness probe that checks an endpoint specific to readiness that is different from the liveness probe. {{< note >}} -If you just want to be able to drain requests when the Pod is deleted, you do not +If you want to be able to drain requests when the Pod is deleted, you do not necessarily need a readiness probe; on deletion, the Pod automatically puts itself into an unready state regardless of whether the readiness probe exists. 
The Pod remains in the unready state while it waits for the containers in the Pod diff --git a/content/en/docs/contribute/new-content/blogs-case-studies.md b/content/en/docs/contribute/new-content/blogs-case-studies.md index 66289256946af..8f2f6baaf7f36 100644 --- a/content/en/docs/contribute/new-content/blogs-case-studies.md +++ b/content/en/docs/contribute/new-content/blogs-case-studies.md @@ -39,8 +39,8 @@ Anyone can write a blog post and submit it for review. - Posts about other CNCF projects may or may not be on topic. We recommend asking the blog team before submitting a draft. - Many CNCF projects have their own blog. These are often a better choice for posts. There are times of major feature or milestone for a CNCF project that users would be interested in reading on the Kubernetes blog. - Blog posts should be original content - - The official blog is not for repurposing existing content from a third party as new content. - - The [license](https://github.com/kubernetes/website/blob/master/LICENSE) for the blog does allow commercial use of the content for commercial purposes, just not the other way around. + - The official blog is not for repurposing existing content from a third party as new content. + - The [license](https://github.com/kubernetes/website/blob/master/LICENSE) for the blog allows commercial use of the content for commercial purposes, but not the other way around. - Blog posts should aim to be future proof - Given the development velocity of the project, we want evergreen content that won't require updates to stay accurate for the reader. - It can be a better choice to add a tutorial or update official documentation than to write a high level overview as a blog post. diff --git a/content/en/docs/contribute/new-content/new-features.md b/content/en/docs/contribute/new-content/new-features.md index a0e36005628a9..268c447402e43 100644 --- a/content/en/docs/contribute/new-content/new-features.md +++ b/content/en/docs/contribute/new-content/new-features.md @@ -77,9 +77,8 @@ merged. Keep the following in mind: Alpha features. - It's hard to test (and therefore to document) a feature that hasn't been merged, or is at least considered feature-complete in its PR. -- Determining whether a feature needs documentation is a manual process and - just because a feature is not marked as needing docs doesn't mean it doesn't - need them. +- Determining whether a feature needs documentation is a manual process. Even if + a feature is not marked as needing docs, you may need to document the feature. ## For developers or other SIG members diff --git a/content/en/docs/reference/access-authn-authz/abac.md b/content/en/docs/reference/access-authn-authz/abac.md index 99fce41aba80c..3e2aea6b3623b 100644 --- a/content/en/docs/reference/access-authn-authz/abac.md +++ b/content/en/docs/reference/access-authn-authz/abac.md @@ -19,7 +19,7 @@ Attribute-based access control (ABAC) defines an access control paradigm whereby To enable `ABAC` mode, specify `--authorization-policy-file=SOME_FILENAME` and `--authorization-mode=ABAC` on startup. The file format is [one JSON object per line](https://jsonlines.org/). There -should be no enclosing list or map, just one map per line. +should be no enclosing list or map, only one map per line. 
Each line is a "policy object", where each such object is a map with the following properties: diff --git a/content/en/docs/reference/access-authn-authz/authorization.md b/content/en/docs/reference/access-authn-authz/authorization.md index 04963e10eebee..af73a23350601 100644 --- a/content/en/docs/reference/access-authn-authz/authorization.md +++ b/content/en/docs/reference/access-authn-authz/authorization.md @@ -138,7 +138,7 @@ no exposes the API server authorization to external services. Other resources in this group include: -* `SubjectAccessReview` - Access review for any user, not just the current one. Useful for delegating authorization decisions to the API server. For example, the kubelet and extension API servers use this to determine user access to their own APIs. +* `SubjectAccessReview` - Access review for any user, not only the current one. Useful for delegating authorization decisions to the API server. For example, the kubelet and extension API servers use this to determine user access to their own APIs. * `LocalSubjectAccessReview` - Like `SubjectAccessReview` but restricted to a specific namespace. * `SelfSubjectRulesReview` - A review which returns the set of actions a user can perform within a namespace. Useful for users to quickly summarize their own access, or for UIs to hide/show actions. diff --git a/content/en/docs/reference/access-authn-authz/bootstrap-tokens.md b/content/en/docs/reference/access-authn-authz/bootstrap-tokens.md index 856669a5d8914..f128c14a7ab34 100644 --- a/content/en/docs/reference/access-authn-authz/bootstrap-tokens.md +++ b/content/en/docs/reference/access-authn-authz/bootstrap-tokens.md @@ -167,7 +167,7 @@ data: users: [] ``` -The `kubeconfig` member of the ConfigMap is a config file with just the cluster +The `kubeconfig` member of the ConfigMap is a config file with only the cluster information filled out. The key thing being communicated here is the `certificate-authority-data`. This may be expanded in the future. diff --git a/content/en/docs/reference/access-authn-authz/certificate-signing-requests.md b/content/en/docs/reference/access-authn-authz/certificate-signing-requests.md index 6d05d0436ad1d..d398a4b7cef61 100644 --- a/content/en/docs/reference/access-authn-authz/certificate-signing-requests.md +++ b/content/en/docs/reference/access-authn-authz/certificate-signing-requests.md @@ -363,7 +363,7 @@ status: It's usual to set `status.conditions.reason` to a machine-friendly reason code using TitleCase; this is a convention but you can set it to anything -you like. If you want to add a note just for human consumption, use the +you like. If you want to add a note for human consumption, use the `status.conditions.message` field. ## Signing diff --git a/content/en/docs/reference/access-authn-authz/rbac.md b/content/en/docs/reference/access-authn-authz/rbac.md index 4bc2b86dd692d..bd9aba1aa803e 100644 --- a/content/en/docs/reference/access-authn-authz/rbac.md +++ b/content/en/docs/reference/access-authn-authz/rbac.md @@ -219,7 +219,7 @@ the role that is granted to those subjects. 1. A binding to a different role is a fundamentally different binding. Requiring a binding to be deleted/recreated in order to change the `roleRef` ensures the full list of subjects in the binding is intended to be granted -the new role (as opposed to enabling accidentally modifying just the roleRef +the new role (as opposed to enabling accidental modification of only the roleRef without verifying all of the existing subjects should be given the new role's permissions).
@@ -333,7 +333,7 @@ as a cluster administrator, include rules for custom resources, such as those se or aggregated API servers, to extend the default roles. For example: the following ClusterRoles let the "admin" and "edit" default roles manage the custom resource -named CronTab, whereas the "view" role can perform just read actions on CronTab resources. +named CronTab, whereas the "view" role can perform only read actions on CronTab resources. You can assume that CronTab objects are named `"crontabs"` in URLs as seen by the API server. ```yaml diff --git a/content/en/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping.md b/content/en/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping.md index 89ad711a56bec..a441f8ce5b667 100644 --- a/content/en/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping.md +++ b/content/en/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping.md @@ -185,9 +185,9 @@ systemd unit file perhaps) to enable the token file. See docs further details. ### Authorize kubelet to create CSR -Now that the bootstrapping node is _authenticated_ as part of the `system:bootstrappers` group, it needs to be _authorized_ to create a certificate signing request (CSR) as well as retrieve it when done. Fortunately, Kubernetes ships with a `ClusterRole` with precisely these (and just these) permissions, `system:node-bootstrapper`. +Now that the bootstrapping node is _authenticated_ as part of the `system:bootstrappers` group, it needs to be _authorized_ to create a certificate signing request (CSR) as well as retrieve it when done. Fortunately, Kubernetes ships with a `ClusterRole` with precisely these (and only these) permissions, `system:node-bootstrapper`. -To do this, you just need to create a `ClusterRoleBinding` that binds the `system:bootstrappers` group to the cluster role `system:node-bootstrapper`. +To do this, you only need to create a `ClusterRoleBinding` that binds the `system:bootstrappers` group to the cluster role `system:node-bootstrapper`. ``` # enable bootstrapping nodes to create CSR @@ -345,7 +345,7 @@ The important elements to note are: * `token`: the token to use The format of the token does not matter, as long as it matches what kube-apiserver expects. In the above example, we used a bootstrap token. -As stated earlier, _any_ valid authentication method can be used, not just tokens. +As stated earlier, _any_ valid authentication method can be used, not only tokens. Because the bootstrap `kubeconfig` _is_ a standard `kubeconfig`, you can use `kubectl` to generate it. To create the above example file: diff --git a/content/en/docs/reference/glossary/cloud-controller-manager.md b/content/en/docs/reference/glossary/cloud-controller-manager.md index c78bf393cb215..874d0925cfe12 100755 --- a/content/en/docs/reference/glossary/cloud-controller-manager.md +++ b/content/en/docs/reference/glossary/cloud-controller-manager.md @@ -14,7 +14,7 @@ tags: A Kubernetes {{< glossary_tooltip text="control plane" term_id="control-plane" >}} component that embeds cloud-specific control logic. The cloud controller manager lets you link your cluster into your cloud provider's API, and separates out the components that interact -with that cloud platform from components that just interact with your cluster. +with that cloud platform from components that only interact with your cluster. 
diff --git a/content/en/docs/reference/kubectl/cheatsheet.md b/content/en/docs/reference/kubectl/cheatsheet.md index cbb579908b4e1..5b60aab8a883e 100644 --- a/content/en/docs/reference/kubectl/cheatsheet.md +++ b/content/en/docs/reference/kubectl/cheatsheet.md @@ -360,7 +360,7 @@ Other operations for exploring API resources: ```bash kubectl api-resources --namespaced=true # All namespaced resources kubectl api-resources --namespaced=false # All non-namespaced resources -kubectl api-resources -o name # All resources with simple output (just the resource name) +kubectl api-resources -o name # All resources with simple output (only the resource name) kubectl api-resources -o wide # All resources with expanded (aka "wide") output kubectl api-resources --verbs=list,get # All resources that support the "list" and "get" request verbs kubectl api-resources --api-group=extensions # All resources in the "extensions" API group diff --git a/content/en/docs/reference/kubectl/overview.md b/content/en/docs/reference/kubectl/overview.md index dbad6f5cf2416..f8ec7e5603762 100644 --- a/content/en/docs/reference/kubectl/overview.md +++ b/content/en/docs/reference/kubectl/overview.md @@ -69,7 +69,7 @@ for example `create`, `get`, `describe`, `delete`. Flags that you specify from the command line override default values and any corresponding environment variables. {{< /caution >}} -If you need help, just run `kubectl help` from the terminal window. +If you need help, run `kubectl help` from the terminal window. ## Operations diff --git a/content/en/docs/reference/using-api/deprecation-policy.md b/content/en/docs/reference/using-api/deprecation-policy.md index 17840f195ba84..4de09ee82ada8 100644 --- a/content/en/docs/reference/using-api/deprecation-policy.md +++ b/content/en/docs/reference/using-api/deprecation-policy.md @@ -327,7 +327,7 @@ supported in API v1 must exist and function until API v1 is removed. ### Component config structures -Component configs are versioned and managed just like REST resources. +Component configs are versioned and managed similar to REST resources. ### Future work diff --git a/content/en/docs/reference/using-api/server-side-apply.md b/content/en/docs/reference/using-api/server-side-apply.md index d91497f8f2d96..a6684adb24ddc 100644 --- a/content/en/docs/reference/using-api/server-side-apply.md +++ b/content/en/docs/reference/using-api/server-side-apply.md @@ -209,9 +209,8 @@ would have failed due to conflicting ownership. The merging strategy, implemented with Server Side Apply, provides a generally more stable object lifecycle. Server Side Apply tries to merge fields based on -the fact who manages them instead of overruling just based on values. This way -it is intended to make it easier and more stable for multiple actors updating -the same object by causing less unexpected interference. +the actor who manages them instead of overruling based on values. This way +multiple actors can update the same object without causing unexpected interference. When a user sends a "fully-specified intent" object to the Server Side Apply endpoint, the server merges it with the live object favoring the value in the @@ -319,7 +318,7 @@ kubectl apply -f https://k8s.io/examples/application/ssa/nginx-deployment-replic ``` If the apply results in a conflict with the HPA controller, then do nothing. The -conflict just indicates the controller has claimed the field earlier in the +conflict indicates the controller has claimed the field earlier in the process than it sometimes does. 
At this point the user may remove the `replicas` field from their configuration. @@ -436,7 +435,7 @@ Data: [{"op": "replace", "path": "/metadata/managedFields", "value": [{}]}] This will overwrite the managedFields with a list containing a single empty entry that then results in the managedFields being stripped entirely from the -object. Note that just setting the managedFields to an empty list will not +object. Note that setting the managedFields to an empty list will not reset the field. This is on purpose, so managedFields never get stripped by clients not aware of the field. diff --git a/content/en/docs/setup/best-practices/cluster-large.md b/content/en/docs/setup/best-practices/cluster-large.md index ccb31fe108498..a75499a811f11 100644 --- a/content/en/docs/setup/best-practices/cluster-large.md +++ b/content/en/docs/setup/best-practices/cluster-large.md @@ -69,10 +69,9 @@ When creating a cluster, you can (using custom tooling): ## Addon resources Kubernetes [resource limits](/docs/concepts/configuration/manage-resources-containers/) -help to minimise the impact of memory leaks and other ways that pods and containers can -impact on other components. These resource limits can and should apply to -{{< glossary_tooltip text="addon" term_id="addons" >}} just as they apply to application -workloads. +help to minimize the impact of memory leaks and other ways that pods and containers can +impact on other components. These resource limits apply to +{{< glossary_tooltip text="addon" term_id="addons" >}} resources just as they apply to application workloads. For example, you can set CPU and memory limits for a logging component: diff --git a/content/en/docs/setup/production-environment/tools/kubespray.md b/content/en/docs/setup/production-environment/tools/kubespray.md index 4fac65ab88d0e..5ba4c995dd1ed 100644 --- a/content/en/docs/setup/production-environment/tools/kubespray.md +++ b/content/en/docs/setup/production-environment/tools/kubespray.md @@ -68,7 +68,7 @@ Kubespray provides the ability to customize many aspects of the deployment: * {{< glossary_tooltip term_id="cri-o" >}} * Certificate generation methods -Kubespray customizations can be made to a [variable file](https://docs.ansible.com/ansible/playbooks_variables.html). If you are just getting started with Kubespray, consider using the Kubespray defaults to deploy your cluster and explore Kubernetes. +Kubespray customizations can be made to a [variable file](https://docs.ansible.com/ansible/playbooks_variables.html). If you are getting started with Kubespray, consider using the Kubespray defaults to deploy your cluster and explore Kubernetes. ### (4/5) Deploy a Cluster diff --git a/content/en/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md b/content/en/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md index 5c33b0a94b027..446bf11d3cf66 100644 --- a/content/en/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md +++ b/content/en/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md @@ -333,7 +333,7 @@ These features were added in Kubernetes v1.15: ##### DNS {#dns-limitations} * ClusterFirstWithHostNet is not supported for DNS. Windows treats all names with a '.' as a FQDN and skips PQDN resolution -* On Linux, you have a DNS suffix list, which is used when trying to resolve PQDNs. On Windows, we only have 1 DNS suffix, which is the DNS suffix associated with that pod's namespace (mydns.svc.cluster.local for example). 
Windows can resolve FQDNs and services or names resolvable with just that suffix. For example, a pod spawned in the default namespace, will have the DNS suffix **default.svc.cluster.local**. On a Windows pod, you can resolve both **kubernetes.default.svc.cluster.local** and **kubernetes**, but not the in-betweens, like **kubernetes.default** or **kubernetes.default.svc**. +* On Linux, you have a DNS suffix list, which is used when trying to resolve PQDNs. On Windows, we only have 1 DNS suffix, which is the DNS suffix associated with that pod's namespace (mydns.svc.cluster.local for example). Windows can resolve FQDNs and services or names resolvable with only that suffix. For example, a pod spawned in the default namespace, will have the DNS suffix **default.svc.cluster.local**. On a Windows pod, you can resolve both **kubernetes.default.svc.cluster.local** and **kubernetes**, but not the in-betweens, like **kubernetes.default** or **kubernetes.default.svc**. * On Windows, there are multiple DNS resolvers that can be used. As these come with slightly different behaviors, using the `Resolve-DNSName` utility for name query resolutions is recommended. ##### IPv6 @@ -363,9 +363,9 @@ There are no differences in how most of the Kubernetes APIs work for Windows. Th At a high level, these OS concepts are different: -* Identity - Linux uses userID (UID) and groupID (GID) which are represented as integer types. User and group names are not canonical - they are just an alias in `/etc/groups` or `/etc/passwd` back to UID+GID. Windows uses a larger binary security identifier (SID) which is stored in the Windows Security Access Manager (SAM) database. This database is not shared between the host and containers, or between containers. +* Identity - Linux uses userID (UID) and groupID (GID) which are represented as integer types. User and group names are not canonical - they are an alias in `/etc/groups` or `/etc/passwd` back to UID+GID. Windows uses a larger binary security identifier (SID) which is stored in the Windows Security Access Manager (SAM) database. This database is not shared between the host and containers, or between containers. * File permissions - Windows uses an access control list based on SIDs, rather than a bitmask of permissions and UID+GID -* File paths - convention on Windows is to use `\` instead of `/`. The Go IO libraries typically accept both and just make it work, but when you're setting a path or command line that's interpreted inside a container, `\` may be needed. +* File paths - convention on Windows is to use `\` instead of `/`. The Go IO libraries accept both types of file path separators. However, when you're setting a path or command line that's interpreted inside a container, `\` may be needed. * Signals - Windows interactive apps handle termination differently, and can implement one or more of these: * A UI thread handles well-defined messages including WM_CLOSE * Console apps handle ctrl-c or ctrl-break using a Control Handler diff --git a/content/en/docs/tasks/access-application-cluster/access-cluster.md b/content/en/docs/tasks/access-application-cluster/access-cluster.md index 400a54ffb23d5..4f20545190a4b 100644 --- a/content/en/docs/tasks/access-application-cluster/access-cluster.md +++ b/content/en/docs/tasks/access-application-cluster/access-cluster.md @@ -231,7 +231,7 @@ You have several options for connecting to nodes, pods and services from outside - Use a service with type `NodePort` or `LoadBalancer` to make the service reachable outside the cluster. 
See the [services](/docs/concepts/services-networking/service/) and [kubectl expose](/docs/reference/generated/kubectl/kubectl-commands/#expose) documentation. - - Depending on your cluster environment, this may just expose the service to your corporate network, + - Depending on your cluster environment, this may only expose the service to your corporate network, or it may expose it to the internet. Think about whether the service being exposed is secure. Does it do its own authentication? - Place pods behind services. To access one specific pod from a set of replicas, such as for debugging, @@ -357,7 +357,7 @@ There are several different proxies you may encounter when using Kubernetes: - proxies UDP and TCP - does not understand HTTP - provides load balancing - - is just used to reach services + - is only used to reach services 1. A Proxy/Load-balancer in front of apiserver(s): diff --git a/content/en/docs/tasks/administer-cluster/access-cluster-services.md b/content/en/docs/tasks/administer-cluster/access-cluster-services.md index f6ba4e4fc0f13..927e05b77a467 100644 --- a/content/en/docs/tasks/administer-cluster/access-cluster-services.md +++ b/content/en/docs/tasks/administer-cluster/access-cluster-services.md @@ -31,7 +31,7 @@ You have several options for connecting to nodes, pods and services from outside - Use a service with type `NodePort` or `LoadBalancer` to make the service reachable outside the cluster. See the [services](/docs/concepts/services-networking/service/) and [kubectl expose](/docs/reference/generated/kubectl/kubectl-commands/#expose) documentation. - - Depending on your cluster environment, this may just expose the service to your corporate network, + - Depending on your cluster environment, this may only expose the service to your corporate network, or it may expose it to the internet. Think about whether the service being exposed is secure. Does it do its own authentication? - Place pods behind services. To access one specific pod from a set of replicas, such as for debugging, diff --git a/content/en/docs/tasks/administer-cluster/extended-resource-node.md b/content/en/docs/tasks/administer-cluster/extended-resource-node.md index a95a325d5d774..797993f116f67 100644 --- a/content/en/docs/tasks/administer-cluster/extended-resource-node.md +++ b/content/en/docs/tasks/administer-cluster/extended-resource-node.md @@ -54,7 +54,7 @@ Host: k8s-master:8080 ``` Note that Kubernetes does not need to know what a dongle is or what a dongle is for. -The preceding PATCH request just tells Kubernetes that your Node has four things that +The preceding PATCH request tells Kubernetes that your Node has four things that you call dongles. Start a proxy, so that you can easily send requests to the Kubernetes API server: diff --git a/content/en/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods.md b/content/en/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods.md index 0d5b6d4ebe93a..a9aaaacd46adc 100644 --- a/content/en/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods.md +++ b/content/en/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods.md @@ -9,24 +9,17 @@ content_type: concept -In addition to Kubernetes core components like api-server, scheduler, controller-manager running on a master machine -there are a number of add-ons which, for various reasons, must run on a regular cluster node (rather than the Kubernetes master). 
+Kubernetes core components such as the API server, scheduler, and controller-manager run on a control plane node. However, add-ons must run on a regular cluster node. Some of these add-ons are critical to a fully functional cluster, such as metrics-server, DNS, and UI. A cluster may stop working properly if a critical add-on is evicted (either manually or as a side effect of another operation like upgrade) and becomes pending (for example when the cluster is highly utilized and either there are other pending pods that schedule into the space vacated by the evicted critical add-on pod or the amount of resources available on the node changed for some other reason). Note that marking a pod as critical is not meant to prevent evictions entirely; it only prevents the pod from becoming permanently unavailable. -For static pods, this means it can't be evicted, but for non-static pods, it just means they will always be rescheduled. - - - +A static pod marked as critical can't be evicted. However, non-static pods marked as critical are always rescheduled. - ### Marking pod as critical To mark a Pod as critical, set priorityClassName for that Pod to `system-cluster-critical` or `system-node-critical`. `system-node-critical` is the highest available priority, even higher than `system-cluster-critical`. - - diff --git a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-certs.md b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-certs.md index 56a6c25e9a998..e8eda88b03fe9 100644 --- a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-certs.md +++ b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-certs.md @@ -35,7 +35,7 @@ and kubeadm will use this CA for signing the rest of the certificates. ## External CA mode {#external-ca-mode} -It is also possible to provide just the `ca.crt` file and not the +It is also possible to provide only the `ca.crt` file and not the `ca.key` file (this is only available for the root CA file, not other cert pairs). If all other certificates and kubeconfig files are in place, kubeadm recognizes this condition and activates the "External CA" mode. kubeadm will proceed without the diff --git a/content/en/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-deprecation-affects-you.md b/content/en/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-deprecation-affects-you.md index 766db38485062..fe9fd8b0c4708 100644 --- a/content/en/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-deprecation-affects-you.md +++ b/content/en/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-deprecation-affects-you.md @@ -50,7 +50,7 @@ and scheduling of Pods; on each node, the {{< glossary_tooltip text="kubelet" te uses the container runtime interface as an abstraction so that you can use any compatible container runtime. -In its earliest releases, Kubernetes offered compatibility with just one container runtime: Docker. +In its earliest releases, Kubernetes offered compatibility with one container runtime: Docker. Later in the Kubernetes project's history, cluster operators wanted to adopt additional container runtimes. The CRI was designed to allow this kind of flexibility - and the kubelet began supporting CRI. However, because Docker existed before the CRI specification was invented, the Kubernetes project created an @@ -75,7 +75,7 @@ or execute something inside container using `docker exec`.
If you're running workloads via Kubernetes, the best way to stop a container is through the Kubernetes API rather than directly through the container runtime (this advice applies -for all container runtimes, not just Docker). +for all container runtimes, not only Docker). {{< /note >}} diff --git a/content/en/docs/tasks/administer-cluster/namespaces-walkthrough.md b/content/en/docs/tasks/administer-cluster/namespaces-walkthrough.md index 1d1461ade780b..5d99875527aba 100644 --- a/content/en/docs/tasks/administer-cluster/namespaces-walkthrough.md +++ b/content/en/docs/tasks/administer-cluster/namespaces-walkthrough.md @@ -232,7 +232,7 @@ Apply the manifest to create a Deployment ```shell kubectl apply -f https://k8s.io/examples/admin/snowflake-deployment.yaml ``` -We have just created a deployment whose replica size is 2 that is running the pod called `snowflake` with a basic container that just serves the hostname. +We have created a deployment whose replica size is 2 that is running the pod called `snowflake` with a basic container that serves the hostname. ```shell kubectl get deployment diff --git a/content/en/docs/tasks/administer-cluster/namespaces.md b/content/en/docs/tasks/administer-cluster/namespaces.md index 08b2868806f48..2934e1c0f78d9 100644 --- a/content/en/docs/tasks/administer-cluster/namespaces.md +++ b/content/en/docs/tasks/administer-cluster/namespaces.md @@ -196,7 +196,7 @@ This delete is asynchronous, so for a time you will see the namespace in the `Te ```shell kubectl create deployment snowflake --image=k8s.gcr.io/serve_hostname -n=development --replicas=2 ``` - We have just created a deployment whose replica size is 2 that is running the pod called `snowflake` with a basic container that just serves the hostname. + We have created a deployment whose replica size is 2 that is running the pod called `snowflake` with a basic container that serves the hostname. ```shell kubectl get deployment -n=development @@ -302,7 +302,7 @@ Use cases include: When you create a [Service](/docs/concepts/services-networking/service/), it creates a corresponding [DNS entry](/docs/concepts/services-networking/dns-pod-service/). This entry is of the form `..svc.cluster.local`, which means -that if a container just uses `` it will resolve to the service which +that if a container uses `` it will resolve to the service which is local to a namespace. This is useful for using the same configuration across multiple namespaces such as Development, Staging and Production. If you want to reach across namespaces, you need to use the fully qualified domain name (FQDN). diff --git a/content/en/docs/tasks/administer-cluster/network-policy-provider/calico-network-policy.md b/content/en/docs/tasks/administer-cluster/network-policy-provider/calico-network-policy.md index 9efdccfb6e242..40733c4c96810 100644 --- a/content/en/docs/tasks/administer-cluster/network-policy-provider/calico-network-policy.md +++ b/content/en/docs/tasks/administer-cluster/network-policy-provider/calico-network-policy.md @@ -20,7 +20,7 @@ Decide whether you want to deploy a [cloud](#creating-a-calico-cluster-with-goog **Prerequisite**: [gcloud](https://cloud.google.com/sdk/docs/quickstarts). -1. To launch a GKE cluster with Calico, just include the `--enable-network-policy` flag. +1. To launch a GKE cluster with Calico, include the `--enable-network-policy` flag. 
**Syntax** ```shell diff --git a/content/en/docs/tasks/administer-cluster/safely-drain-node.md b/content/en/docs/tasks/administer-cluster/safely-drain-node.md index db31cceb8b2b3..0fc0a97ffcd1b 100644 --- a/content/en/docs/tasks/administer-cluster/safely-drain-node.md +++ b/content/en/docs/tasks/administer-cluster/safely-drain-node.md @@ -128,8 +128,8 @@ curl -v -H 'Content-type: application/json' https://your-cluster-api-endpoint.ex The API can respond in one of three ways: -- If the eviction is granted, then the Pod is deleted just as if you had sent - a `DELETE` request to the Pod's URL and you get back `200 OK`. +- If the eviction is granted, then the Pod is deleted as if you sent + a `DELETE` request to the Pod's URL and received back `200 OK`. - If the current state of affairs wouldn't allow an eviction by the rules set forth in the budget, you get back `429 Too Many Requests`. This is typically used for generic rate limiting of *any* requests, but here we mean diff --git a/content/en/docs/tasks/configmap-secret/managing-secret-using-config-file.md b/content/en/docs/tasks/configmap-secret/managing-secret-using-config-file.md index 23f85f109b99d..b405d57baf0ce 100644 --- a/content/en/docs/tasks/configmap-secret/managing-secret-using-config-file.md +++ b/content/en/docs/tasks/configmap-secret/managing-secret-using-config-file.md @@ -184,7 +184,7 @@ Where `YWRtaW5pc3RyYXRvcg==` decodes to `administrator`. ## Clean Up -To delete the Secret you have just created: +To delete the Secret you have created: ```shell kubectl delete secret mysecret diff --git a/content/en/docs/tasks/configmap-secret/managing-secret-using-kubectl.md b/content/en/docs/tasks/configmap-secret/managing-secret-using-kubectl.md index 1e6d88ede481e..340cfdc0aef25 100644 --- a/content/en/docs/tasks/configmap-secret/managing-secret-using-kubectl.md +++ b/content/en/docs/tasks/configmap-secret/managing-secret-using-kubectl.md @@ -115,8 +115,7 @@ accidentally to an onlooker, or from being stored in a terminal log. 
## Decoding the Secret {#decoding-secret} -To view the contents of the Secret we just created, you can run the following -command: +To view the contents of the Secret you created, run the following command: ```shell kubectl get secret db-user-pass -o jsonpath='{.data}' @@ -142,7 +141,7 @@ The output is similar to: ## Clean Up -To delete the Secret you have just created: +To delete the Secret you have created: ```shell kubectl delete secret db-user-pass diff --git a/content/en/docs/tasks/configmap-secret/managing-secret-using-kustomize.md b/content/en/docs/tasks/configmap-secret/managing-secret-using-kustomize.md index 5cbb30b99b728..fb257a602683c 100644 --- a/content/en/docs/tasks/configmap-secret/managing-secret-using-kustomize.md +++ b/content/en/docs/tasks/configmap-secret/managing-secret-using-kustomize.md @@ -113,7 +113,7 @@ To check the actual content of the encoded data, please refer to ## Clean Up -To delete the Secret you have just created: +To delete the Secret you have created: ```shell kubectl delete secret db-user-pass-96mffmfh4k diff --git a/content/en/docs/tasks/configure-pod-container/assign-cpu-resource.md b/content/en/docs/tasks/configure-pod-container/assign-cpu-resource.md index 243072eff292b..21b02cc000cf6 100644 --- a/content/en/docs/tasks/configure-pod-container/assign-cpu-resource.md +++ b/content/en/docs/tasks/configure-pod-container/assign-cpu-resource.md @@ -112,7 +112,7 @@ kubectl top pod cpu-demo --namespace=cpu-example ``` This example output shows that the Pod is using 974 milliCPU, which is -just a bit less than the limit of 1 CPU specified in the Pod configuration. +slightly less than the limit of 1 CPU specified in the Pod configuration. ``` NAME CPU(cores) MEMORY(bytes) diff --git a/content/en/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md b/content/en/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md index 0cdfd28258055..918a5bf33ec8a 100644 --- a/content/en/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md +++ b/content/en/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md @@ -204,7 +204,7 @@ seconds. In addition to the readiness probe, this configuration includes a liveness probe. The kubelet will run the first liveness probe 15 seconds after the container -starts. Just like the readiness probe, this will attempt to connect to the +starts. Similar to the readiness probe, this will attempt to connect to the `goproxy` container on port 8080. If the liveness probe fails, the container will be restarted. 
diff --git a/content/en/docs/tasks/configure-pod-container/pull-image-private-registry.md b/content/en/docs/tasks/configure-pod-container/pull-image-private-registry.md index 32b857c156f95..697a4c6e0e888 100644 --- a/content/en/docs/tasks/configure-pod-container/pull-image-private-registry.md +++ b/content/en/docs/tasks/configure-pod-container/pull-image-private-registry.md @@ -118,7 +118,7 @@ those secrets might also be visible to other users on your PC during the time th ## Inspecting the Secret `regcred` -To understand the contents of the `regcred` Secret you just created, start by viewing the Secret in YAML format: +To understand the contents of the `regcred` Secret you created, start by viewing the Secret in YAML format: ```shell kubectl get secret regcred --output=yaml diff --git a/content/en/docs/tasks/configure-pod-container/translate-compose-kubernetes.md b/content/en/docs/tasks/configure-pod-container/translate-compose-kubernetes.md index dc4fba348092e..384b709720ee0 100644 --- a/content/en/docs/tasks/configure-pod-container/translate-compose-kubernetes.md +++ b/content/en/docs/tasks/configure-pod-container/translate-compose-kubernetes.md @@ -67,7 +67,7 @@ sudo yum -y install kompose {{% /tab %}} {{% tab name="Fedora package" %}} -Kompose is in Fedora 24, 25 and 26 repositories. You can install it just like any other package. +Kompose is in Fedora 24, 25 and 26 repositories. You can install it like any other package. ```bash sudo dnf -y install kompose @@ -87,7 +87,7 @@ brew install kompose ## Use Kompose -In just a few steps, we'll take you from Docker Compose to Kubernetes. All +In a few steps, we'll take you from Docker Compose to Kubernetes. All you need is an existing `docker-compose.yml` file. 1. Go to the directory containing your `docker-compose.yml` file. If you don't have one, test using this one. diff --git a/content/en/docs/tasks/debug-application-cluster/debug-application-introspection.md b/content/en/docs/tasks/debug-application-cluster/debug-application-introspection.md index 730b9fb00cda4..03ba9d2c025d7 100644 --- a/content/en/docs/tasks/debug-application-cluster/debug-application-introspection.md +++ b/content/en/docs/tasks/debug-application-cluster/debug-application-introspection.md @@ -177,7 +177,7 @@ kubectl describe pod nginx-deployment-1370807587-fz9sd Here you can see the event generated by the scheduler saying that the Pod failed to schedule for reason `FailedScheduling` (and possibly others). The message tells us that there were not enough resources for the Pod on any of the nodes. -To correct this situation, you can use `kubectl scale` to update your Deployment to specify four or fewer replicas. (Or you could just leave the one Pod pending, which is harmless.) +To correct this situation, you can use `kubectl scale` to update your Deployment to specify four or fewer replicas. (Or you could leave the one Pod pending, which is harmless.) Events such as the ones you saw at the end of `kubectl describe pod` are persisted in etcd and provide high-level information on what is happening in the cluster. 
To list all events you can use diff --git a/content/en/docs/tasks/debug-application-cluster/debug-pod-replication-controller.md b/content/en/docs/tasks/debug-application-cluster/debug-pod-replication-controller.md index 8a972e13651f7..c99182b854db0 100644 --- a/content/en/docs/tasks/debug-application-cluster/debug-pod-replication-controller.md +++ b/content/en/docs/tasks/debug-application-cluster/debug-pod-replication-controller.md @@ -57,7 +57,7 @@ case you can try several things: will never be scheduled. You can check node capacities with the `kubectl get nodes -o ` - command. Here are some example command lines that extract just the necessary + command. Here are some example command lines that extract the necessary information: ```shell diff --git a/content/en/docs/tasks/debug-application-cluster/debug-service.md b/content/en/docs/tasks/debug-application-cluster/debug-service.md index 3613e5b2cb9d4..3b3b1c60819a2 100644 --- a/content/en/docs/tasks/debug-application-cluster/debug-service.md +++ b/content/en/docs/tasks/debug-application-cluster/debug-service.md @@ -178,7 +178,7 @@ kubectl expose deployment hostnames --port=80 --target-port=9376 service/hostnames exposed ``` -And read it back, just to be sure: +And read it back: ```shell kubectl get svc hostnames @@ -427,8 +427,7 @@ hostnames-632524106-ly40y 1/1 Running 0 1h hostnames-632524106-tlaok 1/1 Running 0 1h ``` -The `-l app=hostnames` argument is a label selector - just like our Service -has. +The `-l app=hostnames` argument is a label selector configured on the Service. The "AGE" column says that these Pods are about an hour old, which implies that they are running fine and not crashing. @@ -607,7 +606,7 @@ iptables-save | grep hostnames -A KUBE-PORTALS-HOST -d 10.0.1.175/32 -p tcp -m comment --comment "default/hostnames:default" -m tcp --dport 80 -j DNAT --to-destination 10.240.115.247:48577 ``` -There should be 2 rules for each port of your Service (just one in this +There should be 2 rules for each port of your Service (only one in this example) - a "KUBE-PORTALS-CONTAINER" and a "KUBE-PORTALS-HOST". Almost nobody should be using the "userspace" mode any more, so you won't spend diff --git a/content/en/docs/tasks/debug-application-cluster/logging-stackdriver.md b/content/en/docs/tasks/debug-application-cluster/logging-stackdriver.md index 1703bbbe428d7..29ace662f6065 100644 --- a/content/en/docs/tasks/debug-application-cluster/logging-stackdriver.md +++ b/content/en/docs/tasks/debug-application-cluster/logging-stackdriver.md @@ -294,9 +294,9 @@ a running cluster in the [Deploying section](#deploying). ### Changing `DaemonSet` parameters -When you have the Stackdriver Logging `DaemonSet` in your cluster, you can just modify the -`template` field in its spec, daemonset controller will update the pods for you. For example, -let's assume you've just installed the Stackdriver Logging as described above. Now you want to +When you have the Stackdriver Logging `DaemonSet` in your cluster, you can modify the +`template` field in its spec. The DaemonSet controller manages the pods for you. +For example, assume you've installed the Stackdriver Logging as described above. Now you want to change the memory limit to give fluentd more memory to safely process more logs. 
Get the spec of `DaemonSet` running in your cluster: diff --git a/content/en/docs/tasks/extend-kubernetes/configure-multiple-schedulers.md b/content/en/docs/tasks/extend-kubernetes/configure-multiple-schedulers.md index 96f55c39502fb..7ad7072fd74fc 100644 --- a/content/en/docs/tasks/extend-kubernetes/configure-multiple-schedulers.md +++ b/content/en/docs/tasks/extend-kubernetes/configure-multiple-schedulers.md @@ -12,7 +12,7 @@ weight: 20 Kubernetes ships with a default scheduler that is described [here](/docs/reference/command-line-tools-reference/kube-scheduler/). If the default scheduler does not suit your needs you can implement your own scheduler. -Not just that, you can even run multiple schedulers simultaneously alongside the default +Moreover, you can even run multiple schedulers simultaneously alongside the default scheduler and instruct Kubernetes what scheduler to use for each of your pods. Let's learn how to run multiple schedulers in Kubernetes with an example. @@ -30,7 +30,7 @@ in the Kubernetes source directory for a canonical example. ## Package the scheduler Package your scheduler binary into a container image. For the purposes of this example, -let's just use the default scheduler (kube-scheduler) as our second scheduler as well. +you can use the default scheduler (kube-scheduler) as your second scheduler. Clone the [Kubernetes source code from GitHub](https://github.com/kubernetes/kubernetes) and build the source. @@ -61,9 +61,9 @@ gcloud docker -- push gcr.io/my-gcp-project/my-kube-scheduler:1.0 ## Define a Kubernetes Deployment for the scheduler -Now that we have our scheduler in a container image, we can just create a pod -config for it and run it in our Kubernetes cluster. But instead of creating a pod -directly in the cluster, let's use a [Deployment](/docs/concepts/workloads/controllers/deployment/) +Now that you have your scheduler in a container image, create a pod +configuration for it and run it in your Kubernetes cluster. But instead of creating a pod +directly in the cluster, you can use a [Deployment](/docs/concepts/workloads/controllers/deployment/) for this example. A [Deployment](/docs/concepts/workloads/controllers/deployment/) manages a [Replica Set](/docs/concepts/workloads/controllers/replicaset/) which in turn manages the pods, thereby making the scheduler resilient to failures. Here is the deployment @@ -83,7 +83,7 @@ detailed description of other command line arguments. ## Run the second scheduler in the cluster -In order to run your scheduler in a Kubernetes cluster, just create the deployment +In order to run your scheduler in a Kubernetes cluster, create the deployment specified in the config above in a Kubernetes cluster: ```shell @@ -132,9 +132,9 @@ kubectl edit clusterrole system:kube-scheduler ## Specify schedulers for pods -Now that our second scheduler is running, let's create some pods, and direct them -to be scheduled by either the default scheduler or the one we just deployed. -In order to schedule a given pod using a specific scheduler, we specify the name of the +Now that your second scheduler is running, create some pods, and direct them +to be scheduled by either the default scheduler or the one you deployed. +In order to schedule a given pod using a specific scheduler, specify the name of the scheduler in that pod spec. Let's look at three examples. - Pod spec without any scheduler name @@ -196,7 +196,7 @@ while the other two pods get scheduled. 
Once we submit the scheduler deployment and our new scheduler starts running, the `annotation-second-scheduler` pod gets scheduled as well. -Alternatively, one could just look at the "Scheduled" entries in the event logs to +Alternatively, you can look at the "Scheduled" entries in the event logs to verify that the pods were scheduled by the desired schedulers. ```shell diff --git a/content/en/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning.md b/content/en/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning.md index b48d44a0783f6..671637c084616 100644 --- a/content/en/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning.md +++ b/content/en/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning.md @@ -404,7 +404,7 @@ how to [authenticate API servers](/docs/reference/access-authn-authz/extensible- A conversion webhook must not mutate anything inside of `metadata` of the converted object other than `labels` and `annotations`. Attempted changes to `name`, `UID` and `namespace` are rejected and fail the request -which caused the conversion. All other changes are just ignored. +which caused the conversion. All other changes are ignored. ### Deploy the conversion webhook service diff --git a/content/en/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions.md b/content/en/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions.md index 62251eb2226e2..3230b7b73aeb4 100644 --- a/content/en/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions.md +++ b/content/en/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions.md @@ -520,7 +520,7 @@ CustomResourceDefinition and migrating your objects from one version to another. ### Finalizers *Finalizers* allow controllers to implement asynchronous pre-delete hooks. -Custom objects support finalizers just like built-in objects. +Custom objects support finalizers similar to built-in objects. You can add a finalizer to a custom object like this: diff --git a/content/en/docs/tasks/extend-kubernetes/setup-extension-api-server.md b/content/en/docs/tasks/extend-kubernetes/setup-extension-api-server.md index 626ddcab5c751..64c41d9094a04 100644 --- a/content/en/docs/tasks/extend-kubernetes/setup-extension-api-server.md +++ b/content/en/docs/tasks/extend-kubernetes/setup-extension-api-server.md @@ -41,7 +41,7 @@ Alternatively, you can use an existing 3rd party solution, such as [apiserver-bu 1. Make sure that your extension-apiserver loads those certs from that volume and that they are used in the HTTPS handshake. 1. Create a Kubernetes service account in your namespace. 1. Create a Kubernetes cluster role for the operations you want to allow on your resources. -1. Create a Kubernetes cluster role binding from the service account in your namespace to the cluster role you just created. +1. Create a Kubernetes cluster role binding from the service account in your namespace to the cluster role you created. 1. Create a Kubernetes cluster role binding from the service account in your namespace to the `system:auth-delegator` cluster role to delegate auth decisions to the Kubernetes core API server. 1. Create a Kubernetes role binding from the service account in your namespace to the `extension-apiserver-authentication-reader` role. This allows your extension api-server to access the `extension-apiserver-authentication` configmap. 1. Create a Kubernetes apiservice. 
The CA cert above should be base64 encoded, stripped of new lines and used as the spec.caBundle in the apiservice. This should not be namespaced. If using the [kube-aggregator API](https://github.com/kubernetes/kube-aggregator/), only pass in the PEM encoded CA bundle because the base 64 encoding is done for you. diff --git a/content/en/docs/tasks/job/coarse-parallel-processing-work-queue.md b/content/en/docs/tasks/job/coarse-parallel-processing-work-queue.md index ce7da0b4534b2..62ddf56aabe90 100644 --- a/content/en/docs/tasks/job/coarse-parallel-processing-work-queue.md +++ b/content/en/docs/tasks/job/coarse-parallel-processing-work-queue.md @@ -19,7 +19,7 @@ Here is an overview of the steps in this example: 1. **Start a message queue service.** In this example, we use RabbitMQ, but you could use another one. In practice you would set up a message queue service once and reuse it for many jobs. 1. **Create a queue, and fill it with messages.** Each message represents one task to be done. In - this example, a message is just an integer that we will do a lengthy computation on. + this example, a message is an integer that we will do a lengthy computation on. 1. **Start a Job that works on tasks from the queue**. The Job starts several pods. Each pod takes one task from the message queue, processes it, and repeats until the end of the queue is reached. @@ -141,13 +141,12 @@ root@temp-loe07:/# ``` In the last command, the `amqp-consume` tool takes one message (`-c 1`) -from the queue, and passes that message to the standard input of an arbitrary command. In this case, the program `cat` is just printing -out what it gets on the standard input, and the echo is just to add a carriage +from the queue, and passes that message to the standard input of an arbitrary command. In this case, the program `cat` prints out the characters read from standard input, and the echo adds a carriage return so the example is readable. ## Filling the Queue with tasks -Now let's fill the queue with some "tasks". In our example, our tasks are just strings to be +Now let's fill the queue with some "tasks". In our example, our tasks are strings to be printed. In a practice, the content of the messages might be: diff --git a/content/en/docs/tasks/job/fine-parallel-processing-work-queue.md b/content/en/docs/tasks/job/fine-parallel-processing-work-queue.md index 7f3c30121edec..268eed7f9b984 100644 --- a/content/en/docs/tasks/job/fine-parallel-processing-work-queue.md +++ b/content/en/docs/tasks/job/fine-parallel-processing-work-queue.md @@ -21,7 +21,7 @@ Here is an overview of the steps in this example: detect when a finite-length work queue is empty. In practice you would set up a store such as Redis once and reuse it for the work queues of many jobs, and other things. 1. **Create a queue, and fill it with messages.** Each message represents one task to be done. In - this example, a message is just an integer that we will do a lengthy computation on. + this example, a message is an integer that we will do a lengthy computation on. 1. **Start a Job that works on tasks from the queue**. The Job starts several pods. Each pod takes one task from the message queue, processes it, and repeats until the end of the queue is reached. @@ -55,7 +55,7 @@ You could also download the following files directly: ## Filling the Queue with tasks -Now let's fill the queue with some "tasks". In our example, our tasks are just strings to be +Now let's fill the queue with some "tasks". In our example, our tasks are strings to be printed. 
Start a temporary interactive pod for running the Redis CLI. diff --git a/content/en/docs/tasks/manage-daemon/rollback-daemon-set.md b/content/en/docs/tasks/manage-daemon/rollback-daemon-set.md index 05e8060cc9df0..704b01cc9a3ad 100644 --- a/content/en/docs/tasks/manage-daemon/rollback-daemon-set.md +++ b/content/en/docs/tasks/manage-daemon/rollback-daemon-set.md @@ -25,7 +25,7 @@ You should already know how to [perform a rolling update on a ### Step 1: Find the DaemonSet revision you want to roll back to -You can skip this step if you just want to roll back to the last revision. +You can skip this step if you only want to roll back to the last revision. List all revisions of a DaemonSet: diff --git a/content/en/docs/tasks/manage-daemon/update-daemon-set.md b/content/en/docs/tasks/manage-daemon/update-daemon-set.md index f9e35cb0f55c9..2f3001da0f1a3 100644 --- a/content/en/docs/tasks/manage-daemon/update-daemon-set.md +++ b/content/en/docs/tasks/manage-daemon/update-daemon-set.md @@ -111,7 +111,7 @@ kubectl edit ds/fluentd-elasticsearch -n kube-system ##### Updating only the container image -If you just need to update the container image in the DaemonSet template, i.e. +If you only need to update the container image in the DaemonSet template, i.e. `.spec.template.spec.containers[*].image`, use `kubectl set image`: ```shell @@ -167,7 +167,7 @@ If the recent DaemonSet template update is broken, for example, the container is crash looping, or the container image doesn't exist (often due to a typo), DaemonSet rollout won't progress. -To fix this, just update the DaemonSet template again. New rollout won't be +To fix this, update the DaemonSet template again. New rollout won't be blocked by previous unhealthy rollouts. #### Clock skew diff --git a/content/en/docs/tasks/manage-gpus/scheduling-gpus.md b/content/en/docs/tasks/manage-gpus/scheduling-gpus.md index 4f8fc434f9cca..997005e9ced01 100644 --- a/content/en/docs/tasks/manage-gpus/scheduling-gpus.md +++ b/content/en/docs/tasks/manage-gpus/scheduling-gpus.md @@ -37,7 +37,7 @@ When the above conditions are true, Kubernetes will expose `amd.com/gpu` or `nvidia.com/gpu` as a schedulable resource. You can consume these GPUs from your containers by requesting -`.com/gpu` just like you request `cpu` or `memory`. +`.com/gpu` the same way you request `cpu` or `memory`. However, there are some limitations in how you specify the resource requirements when using GPUs: diff --git a/content/en/docs/tasks/run-application/delete-stateful-set.md b/content/en/docs/tasks/run-application/delete-stateful-set.md index a70e018b85701..94b3c583ebb12 100644 --- a/content/en/docs/tasks/run-application/delete-stateful-set.md +++ b/content/en/docs/tasks/run-application/delete-stateful-set.md @@ -43,8 +43,8 @@ You may need to delete the associated headless service separately after the Stat kubectl delete service ``` -Deleting a StatefulSet through kubectl will scale it down to 0, thereby deleting all pods that are a part of it. -If you want to delete just the StatefulSet and not the pods, use `--cascade=false`. +When deleting a StatefulSet through `kubectl`, the StatefulSet scales down to 0. All Pods that are part of this workload are also deleted. If you want to delete only the StatefulSet and not the Pods, use `--cascade=false`. 
+For example: ```shell kubectl delete -f --cascade=false diff --git a/content/en/docs/tasks/run-application/force-delete-stateful-set-pod.md b/content/en/docs/tasks/run-application/force-delete-stateful-set-pod.md index 28de1865fd937..0001f4c9f4ba5 100644 --- a/content/en/docs/tasks/run-application/force-delete-stateful-set-pod.md +++ b/content/en/docs/tasks/run-application/force-delete-stateful-set-pod.md @@ -44,7 +44,7 @@ for StatefulSet Pods. Graceful deletion is safe and will ensure that the Pod [shuts down gracefully](/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination) before the kubelet deletes the name from the apiserver. -Kubernetes (versions 1.5 or newer) will not delete Pods just because a Node is unreachable. +A Pod is not deleted automatically when a node is unreachable. The Pods running on an unreachable Node enter the 'Terminating' or 'Unknown' state after a [timeout](/docs/concepts/architecture/nodes/#condition). Pods may also enter these states when the user attempts graceful deletion of a Pod diff --git a/content/en/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md b/content/en/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md index 49009e1268807..84ae1addd2dd0 100644 --- a/content/en/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md +++ b/content/en/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md @@ -382,7 +382,7 @@ with *external metrics*. Using external metrics requires knowledge of your monitoring system; the setup is similar to that required when using custom metrics. External metrics allow you to autoscale your cluster -based on any metric available in your monitoring system. Just provide a `metric` block with a +based on any metric available in your monitoring system. Provide a `metric` block with a `name` and `selector`, as above, and use the `External` metric type instead of `Object`. If multiple time series are matched by the `metricSelector`, the sum of their values is used by the HorizontalPodAutoscaler. diff --git a/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md b/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md index 763d9ab996624..3d83a82f3c060 100644 --- a/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md +++ b/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md @@ -162,7 +162,7 @@ can be fetched, scaling is skipped. This means that the HPA is still capable of scaling up if one or more metrics give a `desiredReplicas` greater than the current value. -Finally, just before HPA scales the target, the scale recommendation is recorded. The +Finally, right before HPA scales the target, the scale recommendation is recorded. The controller considers all recommendations within a configurable window choosing the highest recommendation from within that window. This value can be configured using the `--horizontal-pod-autoscaler-downscale-stabilization` flag, which defaults to 5 minutes. 
This means that scaledowns will occur gradually, smoothing out the impact of rapidly diff --git a/content/en/docs/tasks/service-catalog/install-service-catalog-using-sc.md b/content/en/docs/tasks/service-catalog/install-service-catalog-using-sc.md index 078999730905f..a724d5b17bf0d 100644 --- a/content/en/docs/tasks/service-catalog/install-service-catalog-using-sc.md +++ b/content/en/docs/tasks/service-catalog/install-service-catalog-using-sc.md @@ -12,10 +12,7 @@ You can use the GCP [Service Catalog Installer](https://github.com/GoogleCloudPl tool to easily install or uninstall Service Catalog on your Kubernetes cluster, linking it to Google Cloud projects. -Service Catalog itself can work with any kind of managed service, not just Google Cloud. - - - +Service Catalog can work with any kind of managed service, not only Google Cloud. ## {{% heading "prerequisites" %}} diff --git a/content/en/docs/test.md b/content/en/docs/test.md index aadfc9a9e3a1f..ae5bb447f1534 100644 --- a/content/en/docs/test.md +++ b/content/en/docs/test.md @@ -113,7 +113,7 @@ mind: two consecutive lists. **The HTML comment needs to be at the left margin.** 2. Numbered lists can have paragraphs or block elements within them. - Just indent the content to be the same as the first line of the bullet + Indent the content to be the same as the first line of the bullet point. **This paragraph and the code block line up with the `N` in `Numbered` above.** diff --git a/content/en/docs/tutorials/clusters/apparmor.md b/content/en/docs/tutorials/clusters/apparmor.md index 54c8a0f44c9db..b220647e62dec 100644 --- a/content/en/docs/tutorials/clusters/apparmor.md +++ b/content/en/docs/tutorials/clusters/apparmor.md @@ -184,7 +184,7 @@ profile k8s-apparmor-example-deny-write flags=(attach_disconnected) { ``` Since we don't know where the Pod will be scheduled, we'll need to load the profile on all our -nodes. For this example we'll just use SSH to install the profiles, but other approaches are +nodes. For this example we'll use SSH to install the profiles, but other approaches are discussed in [Setting up nodes with profiles](#setting-up-nodes-with-profiles). ```shell diff --git a/content/en/docs/tutorials/clusters/seccomp.md b/content/en/docs/tutorials/clusters/seccomp.md index 376c349f72029..971618cf554d8 100644 --- a/content/en/docs/tutorials/clusters/seccomp.md +++ b/content/en/docs/tutorials/clusters/seccomp.md @@ -67,8 +67,8 @@ into the cluster. For simplicity, [kind](https://kind.sigs.k8s.io/) can be used to create a single node cluster with the seccomp profiles loaded. Kind runs Kubernetes in Docker, -so each node of the cluster is actually just a container. This allows for files -to be mounted in the filesystem of each container just as one might load files +so each node of the cluster is a container. This allows for files +to be mounted in the filesystem of each container similar to loading files onto a node. {{< codenew file="pods/security/seccomp/kind.yaml" >}} diff --git a/content/en/docs/tutorials/hello-minikube.md b/content/en/docs/tutorials/hello-minikube.md index 4bd6a9a82c656..035efcb8f2877 100644 --- a/content/en/docs/tutorials/hello-minikube.md +++ b/content/en/docs/tutorials/hello-minikube.md @@ -152,7 +152,7 @@ Kubernetes [*Service*](/docs/concepts/services-networking/service/). The application code inside the image `k8s.gcr.io/echoserver` only listens on TCP port 8080. If you used `kubectl expose` to expose a different port, clients could not connect to that other port. -2. 
View the Service you just created: +2. View the Service you created: ```shell kubectl get services @@ -227,7 +227,7 @@ The minikube tool includes a set of built-in {{< glossary_tooltip text="addons" metrics-server was successfully enabled ``` -3. View the Pod and Service you just created: +3. View the Pod and Service you created: ```shell kubectl get pod,svc -n kube-system