Commit 4652e21

Fix typo in docs
1 parent 06beed4 commit 4652e21

2 files changed, +4 -4 lines changed


controllers/gce/BETA_LIMITATIONS.md

+2 -2
@@ -13,7 +13,7 @@ This is a list of beta limitations:
 * [Large clusters](#large-clusters): Ingress on GCE isn't supported on large (>1000 nodes), single-zone clusters.
 * [Teardown](README.md#deletion): The recommended way to tear down a cluster with active Ingresses is to either delete each Ingress, or hit the `/delete-all-and-quit` endpoint on GLBC, before invoking a cluster teardown script (eg: kube-down.sh). You will have to manually cleanup GCE resources through the [cloud console](https://cloud.google.com/compute/docs/console#access) or [gcloud CLI](https://cloud.google.com/compute/docs/gcloud-compute/) if you simply tear down the cluster with active Ingresses.
 * [Changing UIDs](#changing-the-cluster-uid): You can change the UID used as a suffix for all your GCE cloud resources, but this requires you to delete existing Ingresses first.
-* [Cleaning up](#cleaning-up-cloud-resources): You can delete loadbalancers that older clusters might've leaked due to permature teardown through the GCE console.
+* [Cleaning up](#cleaning-up-cloud-resources): You can delete loadbalancers that older clusters might've leaked due to premature teardown through the GCE console.
 
 ## Prerequisites
 
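The [Teardown] bullet in this hunk comes down to an order of operations: drain Ingresses first, tear the cluster down second. A minimal sketch of that flow, assuming `kubectl` is pointed at the cluster being torn down and using a placeholder namespace:

```shell
# Delete every Ingress first so GLBC can release the GCE resources it manages.
# Repeat per namespace that has Ingresses; <namespace> is a placeholder.
kubectl delete ingress --all -n <namespace>

# Alternatively, hit the /delete-all-and-quit endpoint on the GLBC pod
# (the pod name and port are cluster-specific).

# Only then invoke the cluster teardown script, eg: kube-down.sh
```
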
@@ -172,6 +172,6 @@ If you deleted a GKE/GCE cluster without first deleting the associated Ingresses
 
 1. Navigate to the [cloud console](https://console.cloud.google.com/) and click on the "Networking" tab, then choose "LoadBalancing"
 2. Find the loadbalancer you'd like to delete, it should have a name formatted as: k8s-um-ns-name--UUID
-3. Delete it, check the boxes to also casade the deletion down to associated resources (eg: backend-services)
+3. Delete it, check the boxes to also cascade the deletion down to associated resources (eg: backend-services)
 4. Switch to the "Compute Engine" tab, then choose "Instance Groups"
 5. Delete the Instance Group allocated for the leaked Ingress, it should have a name formatted as: k8s-ig-UUID
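
For reference, steps 1-5 can also be approximated with the gcloud CLI that BETA_LIMITATIONS already links to. A rough sketch only; the resource names below are placeholders, and unlike the console checkbox, gcloud does not cascade deletions, so each piece is removed explicitly:

```shell
# List candidates; leaked load balancer components follow the k8s- naming scheme above.
gcloud compute url-maps list
gcloud compute backend-services list
gcloud compute instance-groups list

# Delete the URL map (if a forwarding rule or target proxy still references it,
# delete those first), then the backend services, then the instance group.
gcloud compute url-maps delete k8s-um-ns-name--UUID
gcloud compute backend-services delete <leaked-backend-service> --global
gcloud compute instance-groups unmanaged delete k8s-ig-UUID --zone=<zone>
```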

controllers/gce/README.md

+2 -2
@@ -53,7 +53,7 @@ __Lines 8-9__: Each http rule contains the following information: A host (eg: fo
 
 __Lines 10-12__: A `backend` is a service:port combination. It selects a group of pods capable of servicing traffic sent to the path specified in the parent rule. The `port` is the desired `spec.ports[*].port` from the Service Spec -- Note, though, that the L7 actually directs traffic to the corresponding `NodePort`.
 
-__Global Prameters__: For the sake of simplicity the example Ingress has no global parameters. However, one can specify a default backend (see examples below) in the absence of which requests that don't match a path in the spec are sent to the default backend of glbc.
+__Global Parameters__: For the sake of simplicity the example Ingress has no global parameters. However, one can specify a default backend (see examples below) in the absence of which requests that don't match a path in the spec are sent to the default backend of glbc.
 
 
 ## Load Balancer Management
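
The port distinction called out in lines 10-12 is easy to verify on the Service itself. A small sketch, assuming a hypothetical Service named `echoheaders`:

```shell
# The Ingress backend references spec.ports[*].port, but the GCE L7 actually
# targets the corresponding nodePort. "echoheaders" is a placeholder Service name.
kubectl get svc echoheaders -o jsonpath='{.spec.ports[0].port} -> {.spec.ports[0].nodePort}'
```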
@@ -135,7 +135,7 @@ Go to your GCE console and confirm that the following resources have been create
 * BackendServices (one for each Kubernetes nodePort service)
 * An Instance Group (with ports corresponding to the BackendServices)
 
-The HTTPLoadBalancing panel will also show you if your backends have responded to the health checks, wait till they do. This can take a few minutes. If you see `Health status will display here once configuration is complete.` the L7 is still bootstrapping. Wait till you have `Healthy instances: X`. Even though the GCE L7 is driven by our controller, which notices the Kubernetes healtchecks of a pod, we still need to wait on the first GCE L7 health check to complete. Once your backends are up and healthy:
+The HTTPLoadBalancing panel will also show you if your backends have responded to the health checks, wait till they do. This can take a few minutes. If you see `Health status will display here once configuration is complete.` the L7 is still bootstrapping. Wait till you have `Healthy instances: X`. Even though the GCE L7 is driven by our controller, which notices the Kubernetes healthchecks of a pod, we still need to wait on the first GCE L7 health check to complete. Once your backends are up and healthy:
 
 ```shell
 $ curl --resolve foo.bar.com:80:107.178.245.239 http://foo.bar.com/foo
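
While waiting on the health checks described in this hunk, the backend state can also be polled from the CLI instead of the HTTPLoadBalancing panel. A sketch with a placeholder BackendService name:

```shell
# Shows per-instance health once the first GCE L7 health check pass has completed.
# <k8s-backend-service> is a placeholder for one of the BackendServices listed above.
gcloud compute backend-services get-health <k8s-backend-service> --global
```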
