Dynamic configuration causes caching problems due to default proxy_cache_key #3331
@ostkrok ingress-nginx does not configure those directives at the moment. And I'm guessing you're enabling proxy caching using a custom snippet. I'm not sure how to make this better for users other than introducing another configmap/annotation to configure proxy caching automatically.

@ElvinEfendi That's correct, we're enabling the caching using the custom http snippet. I guess you're right. Maybe adding some documentation on how to enable caching correctly would keep others from running into the same problems?

This would help indeed!

I'll try to find some time to create a pull request with an example of how to enable caching correctly then.
paste here, please |
Hi, we got our cache working correctly by adding the following under
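The actual snippet did not survive the page capture. A hedged reconstruction of the kind of `http-snippet` ConfigMap entry the comment likely refers to, based on the `proxy_cache_key` quoted later in this issue; the zone name, cache path, and sizes are illustrative assumptions, not from the original comment:

```nginx
# Goes into the ingress-nginx ConfigMap under the http-snippet key.
# Zone name/path/sizes are illustrative; only proxy_cache_key is from this thread.
proxy_cache_path /tmp/nginx-cache levels=1:2 keys_zone=static-cache:10m
                 max_size=1g inactive=60m use_temp_path=off;

# The default key ($scheme$proxy_host$request_uri) breaks under dynamic
# configuration because $proxy_host is identical for every host, so key on
# $proxy_upstream_name instead:
proxy_cache_key $scheme$proxy_upstream_name$request_uri;
```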
**Is this a request for help?**
No

**What keywords did you search in NGINX Ingress controller issues before filing this one?**
cache, proxy_cache_key

**Is this a BUG REPORT or FEATURE REQUEST?**
BUG REPORT

**NGINX Ingress controller version:**
0.20.0

**Kubernetes version (use `kubectl version`):**

**Environment:**
- Kernel (e.g. `uname -a`):

**What happened:**
We upgraded our nginx from 0.15.0 to 0.20.0 and ran into some caching problems due to the new default dynamic configuration.
We started to see some really weird caching behaviour where a page from one host was being served on a different one, seemingly at random.
**What you expected to happen:**
I expected our cache to continue working normally.
**How to reproduce it (as minimally and precisely as possible):**
I'm guessing this should be easily reproducible by creating two hosts and enabling the following settings in nginx: `proxy_cache` and `proxy_cache_valid`.
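For concreteness, a minimal sketch of such a configuration (the cache path and zone name are my own; the key point is that `proxy_cache_key` is left at its default):

```nginx
# http context, e.g. via the http-snippet ConfigMap key:
proxy_cache_path /tmp/nginx-cache keys_zone=repro-cache:10m;

# server/location context, e.g. via a configuration-snippet annotation:
proxy_cache repro-cache;
proxy_cache_valid 200 10m;

# The default proxy_cache_key is $scheme$proxy_host$request_uri. With dynamic
# configuration every request is proxied to the same upstream_balancer, so
# requests for the same URI on two different hosts produce the same cache key.
```

With this in place, requesting the same path on host A and then on host B can serve host A's cached response to host B.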
**Anything else we need to know:**

I've tried to figure out why this happened, and I think I've found the problem to be that the default `proxy_cache_key` (`$scheme$proxy_host$request_uri`) is incompatible with the new dynamic configuration. The variable `$proxy_host` will always be set to `http://upstream_balancer` regardless of which site is being proxied, which causes responses from different hosts to be saved to the cache with the same key.

We've solved this on our end by setting `proxy_cache_key $scheme$proxy_upstream_name$request_uri;`, which should work since `$proxy_upstream_name` always seems to get set by the ingress controller.

So the "bug" here seems to be that this ingress controller is now incompatible with the default value of `proxy_cache_key`, so I would suggest that we explicitly set `proxy_cache_key` to something that works, perhaps `$scheme$proxy_upstream_name$request_uri`.