Merge pull request #26472 from kbhawkey/cleanup-usage-just
clean up use of word: just
k8s-ci-robot authored Mar 26, 2021
2 parents 00d1542 + 3ff5ec1 commit ec48408
Showing 81 changed files with 130 additions and 148 deletions.
2 changes: 1 addition & 1 deletion content/en/docs/concepts/architecture/nodes.md
@@ -17,7 +17,7 @@ and contains the services necessary to run
{{< glossary_tooltip text="Pods" term_id="pod" >}}

Typically you have several nodes in a cluster; in a learning or resource-limited
-environment, you might have just one.
+environment, you might have only one node.

The [components](/docs/concepts/overview/components/#node-components) on a node include the
{{< glossary_tooltip text="kubelet" term_id="kubelet" >}}, a
@@ -427,7 +427,7 @@ poorly-behaved workloads that may be harming system health.
histogram vector of queue lengths for the queues, broken down by
the labels `priority_level` and `flow_schema`, as sampled by the
enqueued requests. Each request that gets queued contributes one
-sample to its histogram, reporting the length of the queue just
+sample to its histogram, reporting the length of the queue immediately
after the request was added. Note that this produces different
statistics than an unbiased survey would.
{{< note >}}
@@ -278,7 +278,7 @@ pod/my-nginx-2035384211-u3t6x labeled
```

This first filters all pods with the label "app=nginx", and then labels them with the "tier=fe".
-To see the pods you just labeled, run:
+To see the pods you labeled, run:

```shell
kubectl get pods -l app=nginx -L tier
@@ -411,7 +411,7 @@ and

## Disruptive updates

-In some cases, you may need to update resource fields that cannot be updated once initialized, or you may just want to make a recursive change immediately, such as to fix broken pods created by a Deployment. To change such fields, use `replace --force`, which deletes and re-creates the resource. In this case, you can modify your original configuration file:
+In some cases, you may need to update resource fields that cannot be updated once initialized, or you may want to make a recursive change immediately, such as to fix broken pods created by a Deployment. To change such fields, use `replace --force`, which deletes and re-creates the resource. In this case, you can modify your original configuration file:

```shell
kubectl replace -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml --force
2 changes: 1 addition & 1 deletion content/en/docs/concepts/cluster-administration/proxies.md
@@ -39,7 +39,7 @@ There are several different proxies you may encounter when using Kubernetes:
- proxies UDP, TCP and SCTP
- does not understand HTTP
- provides load balancing
-- is just used to reach services
+- is only used to reach services

1. A Proxy/Load-balancer in front of apiserver(s):

@@ -72,8 +72,7 @@ You cannot overcommit `hugepages-*` resources.
This is different from the `memory` and `cpu` resources.
{{< /note >}}

-CPU and memory are collectively referred to as *compute resources*, or just
-*resources*. Compute
+CPU and memory are collectively referred to as *compute resources*, or *resources*. Compute
resources are measurable quantities that can be requested, allocated, and
consumed. They are distinct from
[API resources](/docs/concepts/overview/kubernetes-api/). API resources, such as Pods and
@@ -554,7 +553,7 @@ extender.

### Consuming extended resources

-Users can consume extended resources in Pod specs just like CPU and memory.
+Users can consume extended resources in Pod specs like CPU and memory.
The scheduler takes care of the resource accounting so that no more than the
available amount is simultaneously allocated to Pods.
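For illustration, a rough Pod sketch that requests a made-up extended resource (`example.com/dongle` and the other names are illustrative); extended resources take whole-number amounts and cannot be overcommitted, so requests and limits must match:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: extended-resource-demo    # illustrative name
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: "250m"
        example.com/dongle: 1     # requested alongside cpu/memory, in whole units
      limits:
        example.com/dongle: 1     # must equal the request; extended resources cannot be overcommitted
```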

2 changes: 1 addition & 1 deletion content/en/docs/concepts/configuration/secret.md
@@ -109,7 +109,7 @@ empty-secret Opaque 0 2m6s
```

The `DATA` column shows the number of data items stored in the Secret.
-In this case, `0` means we have just created an empty Secret.
+In this case, `0` means we have created an empty Secret.

### Service account token Secrets

2 changes: 1 addition & 1 deletion content/en/docs/concepts/containers/images.md
@@ -135,7 +135,7 @@ Here are the recommended steps to configuring your nodes to use a private registry
example, run these on your desktop/laptop:

1. Run `docker login [server]` for each set of credentials you want to use. This updates `$HOME/.docker/config.json` on your PC.
-1. View `$HOME/.docker/config.json` in an editor to ensure it contains just the credentials you want to use.
+1. View `$HOME/.docker/config.json` in an editor to ensure it contains only the credentials you want to use.
1. Get a list of your nodes; for example:
- if you want the names: `nodes=$( kubectl get nodes -o jsonpath='{range.items[*].metadata}{.name} {end}' )`
- if you want to get the IP addresses: `nodes=$( kubectl get nodes -o jsonpath='{range .items[*].status.addresses[?(@.type=="ExternalIP")]}{.address} {end}' )`
2 changes: 1 addition & 1 deletion content/en/docs/concepts/extend-kubernetes/_index.md
@@ -145,7 +145,7 @@ Kubernetes provides several built-in authentication methods, and an [Authenticat

### Authorization

-[Authorization](/docs/reference/access-authn-authz/webhook/) determines whether specific users can read, write, and do other operations on API resources. It just works at the level of whole resources -- it doesn't discriminate based on arbitrary object fields. If the built-in authorization options don't meet your needs, and [Authorization webhook](/docs/reference/access-authn-authz/webhook/) allows calling out to user-provided code to make an authorization decision.
+[Authorization](/docs/reference/access-authn-authz/webhook/) determines whether specific users can read, write, and do other operations on API resources. It works at the level of whole resources -- it doesn't discriminate based on arbitrary object fields. If the built-in authorization options don't meet your needs, and [Authorization webhook](/docs/reference/access-authn-authz/webhook/) allows calling out to user-provided code to make an authorization decision.


### Dynamic Admission Control
@@ -146,7 +146,7 @@ Kubernetes provides several built-in authentication methods, and an [Authenticat

### Authorization

-[Authorization](/docs/reference/access-authn-authz/webhook/) determines whether specific users can read, write, and do other operations on API resources. It just works at the level of whole resources -- it doesn't discriminate based on arbitrary object fields. If the built-in authorization options don't meet your needs, and [Authorization webhook](/docs/reference/access-authn-authz/webhook/) allows calling out to user-provided code to make an authorization decision.
+[Authorization](/docs/reference/access-authn-authz/webhook/) determines whether specific users can read, write, and do other operations on API resources. It works at the level of whole resources -- it doesn't discriminate based on arbitrary object fields. If the built-in authorization options don't meet your needs, and [Authorization webhook](/docs/reference/access-authn-authz/webhook/) allows calling out to user-provided code to make an authorization decision.


### Dynamic Admission Control
@@ -28,7 +28,7 @@ resource can only be in one namespace.

Namespaces are a way to divide cluster resources between multiple users (via [resource quota](/docs/concepts/policy/resource-quotas/)).

-It is not necessary to use multiple namespaces just to separate slightly different
+It is not necessary to use multiple namespaces to separate slightly different
resources, such as different versions of the same software: use
[labels](/docs/concepts/overview/working-with-objects/labels) to distinguish
resources within the same namespace.
@@ -91,7 +91,7 @@ kubectl config view --minify | grep namespace:
When you create a [Service](/docs/concepts/services-networking/service/),
it creates a corresponding [DNS entry](/docs/concepts/services-networking/dns-pod-service/).
This entry is of the form `<service-name>.<namespace-name>.svc.cluster.local`, which means
-that if a container just uses `<service-name>`, it will resolve to the service which
+that if a container only uses `<service-name>`, it will resolve to the service which
is local to a namespace. This is useful for using the same configuration across
multiple namespaces such as Development, Staging and Production. If you want to reach
across namespaces, you need to use the fully qualified domain name (FQDN).
@@ -120,12 +120,12 @@ pod is eligible to be scheduled on, based on labels on the node.

There are currently two types of node affinity, called `requiredDuringSchedulingIgnoredDuringExecution` and
`preferredDuringSchedulingIgnoredDuringExecution`. You can think of them as "hard" and "soft" respectively,
-in the sense that the former specifies rules that *must* be met for a pod to be scheduled onto a node (just like
+in the sense that the former specifies rules that *must* be met for a pod to be scheduled onto a node (similar to
`nodeSelector` but using a more expressive syntax), while the latter specifies *preferences* that the scheduler
will try to enforce but will not guarantee. The "IgnoredDuringExecution" part of the names means that, similar
to how `nodeSelector` works, if labels on a node change at runtime such that the affinity rules on a pod are no longer
-met, the pod will still continue to run on the node. In the future we plan to offer
-`requiredDuringSchedulingRequiredDuringExecution` which will be just like `requiredDuringSchedulingIgnoredDuringExecution`
+met, the pod continues to run on the node. In the future we plan to offer
+`requiredDuringSchedulingRequiredDuringExecution` which will be identical to `requiredDuringSchedulingIgnoredDuringExecution`
except that it will evict pods from nodes that cease to satisfy the pods' node affinity requirements.

Thus an example of `requiredDuringSchedulingIgnoredDuringExecution` would be "only run the pod on nodes with Intel CPUs"
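For illustration, a rough Pod sketch combining a hard rule with a soft preference (the label keys `kubernetes.io/arch` and `disktype`, and their values, are only examples):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity        # illustrative name
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:    # "hard": must match for scheduling
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/arch
            operator: In
            values:
            - amd64
      preferredDuringSchedulingIgnoredDuringExecution:   # "soft": preferred, not guaranteed
      - weight: 1
        preference:
          matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd
  containers:
  - name: nginx
    image: nginx
```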
2 changes: 1 addition & 1 deletion content/en/docs/concepts/security/controlling-access.md
@@ -43,7 +43,7 @@ Authenticators are described in more detail in
[Authentication](/docs/reference/access-authn-authz/authentication/).

The input to the authentication step is the entire HTTP request; however, it typically
-just examines the headers and/or client certificate.
+examines the headers and/or client certificate.

Authentication modules include client certificates, password, and plain tokens,
bootstrap tokens, and JSON Web Tokens (used for service accounts).
@@ -387,7 +387,7 @@ $ curl https://<EXTERNAL-IP>:<NODE-PORT> -k
<h1>Welcome to nginx!</h1>
```

-Let's now recreate the Service to use a cloud load balancer, just change the `Type` of `my-nginx` Service from `NodePort` to `LoadBalancer`:
+Let's now recreate the Service to use a cloud load balancer. Change the `Type` of `my-nginx` Service from `NodePort` to `LoadBalancer`:

```shell
kubectl edit svc my-nginx
2 changes: 1 addition & 1 deletion content/en/docs/concepts/services-networking/ingress.md
@@ -260,7 +260,7 @@ There are existing Kubernetes concepts that allow you to expose a single Service
{{< codenew file="service/networking/test-ingress.yaml" >}}

If you create it using `kubectl apply -f` you should be able to view the state
-of the Ingress you just added:
+of the Ingress you added:

```bash
kubectl get ingress test-ingress
@@ -57,7 +57,7 @@ the first label matches the originating Node's value for that label. If there is
no backend for the Service on a matching Node, then the second label will be
considered, and so forth, until no labels remain.

-If no match is found, the traffic will be rejected, just as if there were no
+If no match is found, the traffic will be rejected, as if there were no
backends for the Service at all. That is, endpoints are chosen based on the first
topology key with available backends. If this field is specified and all entries
have no backends that match the topology of the client, the service has no
@@ -87,7 +87,7 @@ traffic as follows.

* Service topology is not compatible with `externalTrafficPolicy=Local`, and
therefore a Service cannot use both of these features. It is possible to use
-both features in the same cluster on different Services, just not on the same
+both features in the same cluster on different Services, only not on the same
Service.

* Valid topology keys are currently limited to `kubernetes.io/hostname`,
7 changes: 3 additions & 4 deletions content/en/docs/concepts/services-networking/service.md
@@ -527,7 +527,7 @@ for NodePort use.

Using a NodePort gives you the freedom to set up your own load balancing solution,
to configure environments that are not fully supported by Kubernetes, or even
-to just expose one or more nodes' IPs directly.
+to expose one or more nodes' IPs directly.

Note that this Service is visible as `<NodeIP>:spec.ports[*].nodePort`
and `.spec.clusterIP:spec.ports[*].port`. (If the `--nodeport-addresses` flag in kube-proxy is set, <NodeIP> would be filtered NodeIP(s).)
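For illustration, a minimal NodePort Service sketch (name, selector, and port numbers are made up):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-nodeport-service       # illustrative name
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - port: 80            # reachable in-cluster at .spec.clusterIP:80
    targetPort: 8080    # port the backing Pods listen on
    nodePort: 30080     # reachable at <NodeIP>:30080; auto-assigned if omitted
```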
@@ -785,8 +785,7 @@ you can use the following annotations:
```

In the above example, if the Service contained three ports, `80`, `443`, and
-`8443`, then `443` and `8443` would use the SSL certificate, but `80` would just
-be proxied HTTP.
+`8443`, then `443` and `8443` would use the SSL certificate, but `80` would be proxied HTTP.
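For illustration, a rough fragment of the Service metadata for that split (the certificate ARN is a placeholder):

```yaml
metadata:
  name: my-service                # illustrative name
  annotations:
    # placeholder ARN; only ports 443 and 8443 terminate SSL, port 80 remains plain proxied HTTP
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-east-1:111122223333:certificate/example"
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443,8443"
```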

From Kubernetes v1.9 onwards you can use [predefined AWS SSL policies](https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-security-policy-table.html) with HTTPS or SSL listeners for your Services.
To see which policies are available for use, you can use the `aws` command line tool:
@@ -1107,7 +1106,7 @@ but the current API requires it.

## Virtual IP implementation {#the-gory-details-of-virtual-ips}

-The previous information should be sufficient for many people who just want to
+The previous information should be sufficient for many people who want to
use Services. However, there is a lot going on behind the scenes that may be
worth understanding.

3 changes: 2 additions & 1 deletion content/en/docs/concepts/storage/ephemeral-volumes.md
@@ -135,8 +135,9 @@ As a cluster administrator, you can use a [PodSecurityPolicy](/docs/concepts/pol
This feature requires the `GenericEphemeralVolume` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) to be
enabled. Because this is an alpha feature, it is disabled by default.

-Generic ephemeral volumes are similar to `emptyDir` volumes, just more
+Generic ephemeral volumes are similar to `emptyDir` volumes, except more
flexible:

- Storage can be local or network-attached.
- Volumes can have a fixed size that Pods are not able to exceed.
- Volumes may have some initial data, depending on the driver and
2 changes: 1 addition & 1 deletion content/en/docs/concepts/storage/persistent-volumes.md
@@ -29,7 +29,7 @@ A _PersistentVolume_ (PV) is a piece of storage in the cluster that has been pro

A _PersistentVolumeClaim_ (PVC) is a request for storage by a user. It is similar to a Pod. Pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resources (CPU and Memory). Claims can request specific size and access modes (e.g., they can be mounted ReadWriteOnce, ReadOnlyMany or ReadWriteMany, see [AccessModes](#access-modes)).

-While PersistentVolumeClaims allow a user to consume abstract storage resources, it is common that users need PersistentVolumes with varying properties, such as performance, for different problems. Cluster administrators need to be able to offer a variety of PersistentVolumes that differ in more ways than just size and access modes, without exposing users to the details of how those volumes are implemented. For these needs, there is the _StorageClass_ resource.
+While PersistentVolumeClaims allow a user to consume abstract storage resources, it is common that users need PersistentVolumes with varying properties, such as performance, for different problems. Cluster administrators need to be able to offer a variety of PersistentVolumes that differ in more ways than size and access modes, without exposing users to the details of how those volumes are implemented. For these needs, there is the _StorageClass_ resource.
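For illustration, a minimal PersistentVolumeClaim sketch requesting a size, an access mode, and a class (all values are made up):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-claim                  # illustrative name
spec:
  accessModes:
  - ReadWriteOnce                 # requested access mode
  resources:
    requests:
      storage: 8Gi                # requested size
  storageClassName: standard      # illustrative StorageClass name
```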

See the [detailed walkthrough with working examples](/docs/tasks/configure-pod-container/configure-persistent-volume-storage/).

4 changes: 2 additions & 2 deletions content/en/docs/concepts/storage/storage-classes.md
@@ -37,7 +37,7 @@ request a particular class. Administrators set the name and other parameters
of a class when first creating StorageClass objects, and the objects cannot
be updated once they are created.

-Administrators can specify a default StorageClass just for PVCs that don't
+Administrators can specify a default StorageClass only for PVCs that don't
request any particular class to bind to: see the
[PersistentVolumeClaim section](/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims)
for details.
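For illustration, a rough sketch of a StorageClass marked as the default (class name, provisioner, and parameters are illustrative; the annotation is what marks it as the default):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard                  # illustrative name
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"   # PVCs that request no class bind to this one
provisioner: kubernetes.io/gce-pd # illustrative provisioner
parameters:
  type: pd-standard
reclaimPolicy: Delete
```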
@@ -569,7 +569,7 @@ parameters:
`"http(s)://api-server:7860"`
* `registry`: Quobyte registry to use to mount the volume. You can specify the
registry as ``<host>:<port>`` pair or if you want to specify multiple
-registries you just have to put a comma between them e.q.
+registries, put a comma between them.
``<host1>:<port>,<host2>:<port>,<host3>:<port>``.
The host can be an IP address or if you have a working DNS you can also
provide the DNS names.
2 changes: 1 addition & 1 deletion content/en/docs/concepts/storage/volume-pvc-datasource.md
@@ -40,7 +40,7 @@ Users need to be aware of the following when using this feature:

## Provisioning

-Clones are provisioned just like any other PVC with the exception of adding a dataSource that references an existing PVC in the same namespace.
+Clones are provisioned like any other PVC with the exception of adding a dataSource that references an existing PVC in the same namespace.

```yaml
apiVersion: v1
2 changes: 1 addition & 1 deletion content/en/docs/concepts/storage/volumes.md
@@ -38,7 +38,7 @@ that run within the pod, and data is preserved across container restarts. When a
ceases to exist, Kubernetes destroys ephemeral volumes; however, Kubernetes does not
destroy persistent volumes.

-At its core, a volume is just a directory, possibly with some data in it, which
+At its core, a volume is a directory, possibly with some data in it, which
is accessible to the containers in a pod. How that directory comes to be, the
medium that backs it, and the contents of it are determined by the particular
volume type used.
@@ -708,7 +708,7 @@ nginx-deployment-618515232 11 11 11 7m
You can pause a Deployment before triggering one or more updates and then resume it. This allows you to
apply multiple fixes in between pausing and resuming without triggering unnecessary rollouts.
-* For example, with a Deployment that was just created:
+* For example, with a Deployment that was created:
Get the Deployment details:
```shell
kubectl get deploy
2 changes: 1 addition & 1 deletion content/en/docs/concepts/workloads/controllers/job.md
@@ -99,7 +99,7 @@ pi-5rwd7
```

Here, the selector is the same as the selector for the Job. The `--output=jsonpath` option specifies an expression
-that just gets the name from each Pod in the returned list.
+with the name from each Pod in the returned list.

View the standard output of one of the pods:

@@ -222,7 +222,7 @@ In this manner, a ReplicaSet can own a non-homogenous set of Pods
## Writing a ReplicaSet manifest

As with all other Kubernetes API objects, a ReplicaSet needs the `apiVersion`, `kind`, and `metadata` fields.
-For ReplicaSets, the kind is always just ReplicaSet.
+For ReplicaSets, the `kind` is always a ReplicaSet.
In Kubernetes 1.9 the API version `apps/v1` on the ReplicaSet kind is the current version and is enabled by default. The API version `apps/v1beta2` is deprecated.
Refer to the first lines of the `frontend.yaml` example for guidance.
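For illustration, those first lines in a rough sketch (names, labels, and image are made up):

```yaml
apiVersion: apps/v1
kind: ReplicaSet                  # the kind is always ReplicaSet
metadata:
  name: frontend                  # illustrative name
  labels:
    app: guestbook
spec:
  replicas: 3
  selector:
    matchLabels:
      tier: frontend
  template:
    metadata:
      labels:
        tier: frontend
    spec:
      containers:
      - name: app
        image: nginx
```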

@@ -110,8 +110,7 @@ nginx-3ntk0 nginx-4ok8v nginx-qrm3m

Here, the selector is the same as the selector for the ReplicationController (seen in the
`kubectl describe` output), and in a different form in `replication.yaml`. The `--output=jsonpath` option
-specifies an expression that just gets the name from each pod in the returned list.
+specifies an expression with the name from each pod in the returned list.

## Writing a ReplicationController Spec

2 changes: 1 addition & 1 deletion content/en/docs/concepts/workloads/pods/pod-lifecycle.md
@@ -312,7 +312,7 @@ can specify a readiness probe that checks an endpoint specific to readiness that
is different from the liveness probe.
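For illustration, a rough container fragment using separate endpoints for the two probes (paths and port are hypothetical):

```yaml
containers:
- name: app
  image: nginx                    # illustrative image
  ports:
  - containerPort: 8080
  readinessProbe:
    httpGet:
      path: /ready                # readiness-specific endpoint
      port: 8080
    periodSeconds: 10
  livenessProbe:
    httpGet:
      path: /healthz              # separate liveness endpoint
      port: 8080
    initialDelaySeconds: 5
    periodSeconds: 10
```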

{{< note >}}
-If you just want to be able to drain requests when the Pod is deleted, you do not
+If you want to be able to drain requests when the Pod is deleted, you do not
necessarily need a readiness probe; on deletion, the Pod automatically puts itself
into an unready state regardless of whether the readiness probe exists.
The Pod remains in the unready state while it waits for the containers in the Pod

0 comments on commit ec48408
