This repository has been archived by the owner on Apr 17, 2019. It is now read-only.

Baking Google pagespeed into the nginx-ingress-controller #1979

Closed
rudolfv opened this issue Nov 3, 2016 · 3 comments

Comments


rudolfv commented Nov 3, 2016

I am trying to build an nginx-ingress-controller with the Google pagespeed module built in. We are currently using the following version of the controller in our Kubernetes cluster: gcr.io/google_containers/nginx-ingress-controller:0.8.3.

Therefore I started off by modifying the build for the gcr.io/google_containers/nginx-slim:0.9 image. I checked out the commit "Update nginx to 1.11.3 (4a467bf)", since that appears to be where the 0.9 release was cut (it is also the image used by version 0.8.3 of the ingress controller). I then added the pagespeed module to build.sh, essentially merging the instructions from https://developers.google.com/speed/pagespeed/module/build_ngx_pagespeed_from_source into your build.sh, and the image built successfully. I also started nginx in the resulting docker container and verified that pagespeed was indeed enabled. Finally I pushed the image to our repository; the tag is identical apart from the repository name.
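For reference, the pagespeed additions amount to fetching the module (plus its matching PSOL library) and passing --add-module to nginx's configure. A minimal sketch, with example version numbers that should be adjusted to match what build.sh pins:

```shell
# Example versions only; match them to the nginx version pinned in build.sh.
NGINX_VERSION=1.11.3
NPS_VERSION=1.11.33.4

NPS_DIR="ngx_pagespeed-release-${NPS_VERSION}-beta"
NPS_URL="https://github.com/pagespeed/ngx_pagespeed/archive/release-${NPS_VERSION}-beta.tar.gz"
PSOL_URL="https://dl.google.com/dl/page-speed/psol/${NPS_VERSION}.tar.gz"

if command -v wget >/dev/null 2>&1; then
  wget -q -O ngx_pagespeed.tar.gz "$NPS_URL" && tar -xzf ngx_pagespeed.tar.gz
  # PSOL (the prebuilt optimization library) must be unpacked inside the module dir.
  if [ -d "$NPS_DIR" ]; then
    (cd "$NPS_DIR" && wget -q -O psol.tar.gz "$PSOL_URL" && tar -xzf psol.tar.gz)
  fi
fi

# Inside the nginx-$NGINX_VERSION source tree, append the module to the
# configure flags already present in build.sh:
#   ./configure <existing build.sh flags> --add-module="$PWD/$NPS_DIR"
```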

After that I checked out the "Release 0.8.3 (add0235)" commit of the nginx-ingress-controller. I modified the build to use my custom nginx-slim image and managed to build successfully.

I had to make one change in the code due to dependencies that had changed in the meantime:

 @@ -271,7 +271,7 @@ func buildRateLimit(input interface{}) []string {
  // maximum number of connections that can be queued for acceptance
  // http://nginx.org/en/docs/http/ngx_http_core_module.html#listen
  func sysctlSomaxconn() int {
 -	maxConns, err := sysctl.GetSysctl("net/core/somaxconn")
 +	maxConns, err := sysctl.New().GetSysctl("net/core/somaxconn")
  	if err != nil || maxConns < 512 {
  		glog.Warningf("system net.core.somaxconn=%v. Using NGINX default (511)", maxConns)
  		return 511

The nginx-ingress-controller executable and image built successfully (the executable runs in a standalone docker container and displays the help options). I then tried a dry run

./nginx-ingress-controller --running-in-cluster=false --default-backend-service=kube-system/default-http-backend

but the --running-in-cluster option does not seem to be supported in that version.

I then deployed the ingress controller to our Kubernetes cluster, but it fails with a 255 termination code.

I then reverted all my changes and rebuilt the same way as above, without pagespeed, changing only the image repository name to ours. Essentially it should be exactly the same as your 0.8.3, with only the image repository names changed in the tags (for both nginx-slim and nginx-ingress-controller). I ran into exactly the same problem: the Kubernetes container terminated with a 255 exit code.

Name:		nginx-ingress-controller-hcbcn
Namespace:	default
Node:		myserver.corp/10.101.11.0
Start Time:	Thu, 03 Nov 2016 08:27:49 +0200
Labels:		k8s-app=nginx-ingress-lb
		name=nginx-ingress-lb
Status:		Running
IP:		10.32.0.52
Controllers:	ReplicationController/nginx-ingress-controller
Containers:
  nginx-ingress-lb:
    Container ID:	docker://312b46af5b03de0ba35c209be55a386bed25d709f008d406dee272ca5d415e1a
    Image:		my-artifactory.corp/google_containers/nginx-ingress-controller:0.8.3
    Image ID:		docker://sha256:d8f08c58f2338d334cd678201bfff5dbcb2a341690f584a800fad78c18ffff37
    Ports:		80/TCP, 443/TCP, 18080/TCP
    Args:
      /nginx-ingress-controller
      --default-backend-service=$(POD_NAMESPACE)/default-http-backend
    Limits:
      cpu:	1
      memory:	2Gi
    Requests:
      cpu:		200m
      memory:		1Gi
    State:		Waiting
      Reason:		CrashLoopBackOff
    Last State:		Terminated
      Reason:		Error
      Exit Code:	255
      Started:		Thu, 03 Nov 2016 08:27:59 +0200
      Finished:		Thu, 03 Nov 2016 08:27:59 +0200
    Ready:		False
    Restart Count:	1
    Liveness:		http-get http://:10254/healthz delay=30s timeout=5s period=10s #success=1 #failure=3
    Environment Variables:
      POD_NAME:		nginx-ingress-controller-hcbcn (v1:metadata.name)
      POD_NAMESPACE:	default (v1:metadata.namespace)
Conditions:
  Type		Status
  Initialized 	True 
  Ready 	False 
  PodScheduled 	True 
Volumes:
  default-token-qos4q:
    Type:	Secret (a volume populated by a Secret)
    SecretName:	default-token-qos4q
QoS Tier:	Burstable
Events:
  FirstSeen	LastSeen	Count	From				SubobjectPath				Type		Reason		Message
  ---------	--------	-----	----				-------------				--------	------		-------
  27s		27s		1	{default-scheduler }							Normal		Scheduled	Successfully assigned nginx-ingress-controller-hcbcn to myserver.corp
  22s		22s		1	{kubelet myserver.corp}	spec.containers{nginx-ingress-lb}	Normal		Created		Created container with docker id cc284364d89e; Security:[seccomp=unconfined]
  22s		22s		1	{kubelet myserver.corp}	spec.containers{nginx-ingress-lb}	Normal		Started		Started container with docker id cc284364d89e
  24s		19s		2	{kubelet myserver.corp}	spec.containers{nginx-ingress-lb}	Normal		Pulling		pulling image "my-artifactory.corp/google_containers/nginx-ingress-controller:0.8.3"
  24s		19s		2	{kubelet myserver.corp}	spec.containers{nginx-ingress-lb}	Normal		Pulled		Successfully pulled image "my-artifactory.corp/google_containers/nginx-ingress-controller:0.8.3"
  17s		17s		1	{kubelet myserver.corp}	spec.containers{nginx-ingress-lb}	Normal		Created		Created container with docker id 312b46af5b03; Security:[seccomp=unconfined]
  17s		17s		1	{kubelet myserver.corp}	spec.containers{nginx-ingress-lb}	Normal		Started		Started container with docker id 312b46af5b03
  16s		7s		4	{kubelet myserver.corp}	spec.containers{nginx-ingress-lb}	Warning		BackOff		Back-off restarting failed docker container
  16s		7s		4	{kubelet myserver.corp}						Warning		FailedSync	Error syncing pod, skipping: failed to "StartContainer" for "nginx-ingress-lb" with CrashLoopBackOff: "Back-off 10s restarting failed container=nginx-ingress-lb pod=nginx-ingress-controller-hcbcn_default(a3fcf750-a18e-11e6-b66c-408d5ce19a83)"

BTW I have changed the server names in the above to protect sensitive information.

I even tried manually copying the nginx-ingress-controller executable from your 0.8.3 image into my custom image. I ended up with the same result: a 255 exit code and the same events as above.

kubectl logs nginx-ingress-controller-hcbcn

or

kubectl logs nginx-ingress-controller-hcbcn --previous

does not provide any log entries for any of the scenarios above.
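In case it helps anyone hitting the same wall: when kubectl logs is empty and the exit code is 255, the binary usually dies before it can write anything, so running the image outside Kubernetes can surface the error. A sketch using standard docker/kubectl commands (the image and pod names are the ones from this issue):

```shell
IMAGE="my-artifactory.corp/google_containers/nginx-ingress-controller:0.8.3"

if command -v docker >/dev/null 2>&1; then
  # Run the binary directly; a missing shared library or a broken
  # cross-compile shows up immediately on stderr.
  docker run --rm "$IMAGE" /nginx-ingress-controller --help

  # Or open a shell in the image to inspect paths and the template file:
  #   docker run --rm -it --entrypoint /bin/sh "$IMAGE"
fi

if command -v kubectl >/dev/null 2>&1; then
  # The pod events often carry more detail than the empty logs.
  kubectl describe pod nginx-ingress-controller-hcbcn
fi
```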

This was all built on Ubuntu 15.10 (64-bit).

Any assistance will be appreciated. Also, is it a good idea to provide an image like this so that others can use it too? Or is there a better way to accomplish the same goal?


aledbf commented Nov 3, 2016

@rudolfv I think it is easier to copy the binary and template file already present in 0.8.3 into the nginx image you built.
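That approach can be sketched as follows, assuming the binary sits at /nginx-ingress-controller and the template at /etc/nginx/template/nginx.tmpl inside the 0.8.3 image (verify the actual paths in the image first):

```shell
if command -v docker >/dev/null 2>&1; then
  # Create (without starting) a container from the stock image so we
  # can copy files out of it.
  CID=$(docker create gcr.io/google_containers/nginx-ingress-controller:0.8.3)
  docker cp "$CID:/nginx-ingress-controller" ./nginx-ingress-controller
  docker cp "$CID:/etc/nginx/template/nginx.tmpl" ./nginx.tmpl
  docker rm "$CID"

  # Then bake both into the custom pagespeed-enabled image, e.g. in its Dockerfile:
  #   COPY nginx-ingress-controller /nginx-ingress-controller
  #   COPY nginx.tmpl /etc/nginx/template/nginx.tmpl
fi
```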

--running-in-cluster=false

Yes, that flag is deprecated. If you need to run the ingress controller locally, you just need to export the environment variable: export KUBERNETES_MASTER=http://&lt;master IP&gt;:&lt;port&gt;
Please check this #1467
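For reference, a minimal sketch of the local run (the apiserver address is a placeholder):

```shell
# Point the controller at the apiserver instead of using the removed
# --running-in-cluster flag; the address is a placeholder.
export KUBERNETES_MASTER=http://10.0.0.1:8080

# Then run the controller binary as before, without the flag:
if [ -x ./nginx-ingress-controller ]; then
  ./nginx-ingress-controller \
    --default-backend-service=kube-system/default-http-backend
fi
```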

Also, is it a good idea to provide an image like this so that others can use it too?

Yes, I think it is a good idea, but such an image is very hard to customize. As you have already realized, you need to change the configuration, and using a custom template is not enough.


rudolfv commented Nov 3, 2016

Thanks @aledbf, I managed to get this working as you suggested, by copying the binary and template file from the 0.8.3 image into the nginx image I built.

It looks like part of the problem with my other approach was that I was using go 1.5.1 to compile the ingress controller, while the template syntax changed in go 1.6. I might have run into other issues after that as well.

I also accidentally locally tagged one of my faulty builds with gcr.io/google_containers/nginx-ingress-controller:0.8.3 at some point which caused some serious head-scratching.

@rudolfv rudolfv closed this as completed Nov 3, 2016

aledbf commented Nov 3, 2016

Thanks @aledbf, I managed to get this working as you suggested, by copying the binary and template file from the 0.8.3 image into the nginx image I built.

Awesome!

I was using go 1.5.1 to compile

Good point. All the code in kubernetes/contrib requires go 1.6.
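A quick pre-build check along these lines can save a confusing debugging session (the version parsing is a sketch):

```shell
# Warn if the local go toolchain is older than 1.6.
GO_MINOR=$(go version 2>/dev/null | sed -n 's/.*go1\.\([0-9]*\).*/\1/p')
if [ -n "$GO_MINOR" ] && [ "$GO_MINOR" -lt 6 ]; then
  echo "go 1.$GO_MINOR is too old for kubernetes/contrib; need 1.6+" >&2
fi
```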
