Jail/chroot nginx process inside controller container #8337
Conversation
/kind bug
Force-pushed fe42bc2 to b61ae2c
wow! Great!
This is definitely a beautiful direction to take.
The next dream is that if the nginx process, or any other process, makes an unnecessary system call, it gets SIGKILLed.
An even bigger dream is to declare the system calls a process is going to make and hold it to that list.
Thanks,
Long Wu Yuan
… On 15-Mar-2022, at 9:11 AM, Jintao Zhang ***@***.***> wrote:
wow! Great!
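The "declare the syscalls and enforce them" dream already has a Kubernetes-native shape: a seccomp profile whose default action kills the process on any syscall outside the declared allowlist. A hedged sketch, not part of this PR — the profile file name and image tag are assumptions:

```yaml
# Hypothetical Pod snippet: load a custom seccomp profile from the
# kubelet's seccomp directory. A profile with defaultAction
# SCMP_ACT_KILL permits only the syscalls it lists and kills the
# process on anything else.
apiVersion: v1
kind: Pod
metadata:
  name: seccomp-demo
spec:
  securityContext:
    seccompProfile:
      type: Localhost
      localhostProfile: profiles/nginx-allowlist.json  # assumed file name
  containers:
    - name: controller
      image: k8s.gcr.io/ingress-nginx/controller:v1.1.2  # assumed tag
```

Generating the allowlist itself (e.g. by tracing the process) is the hard part, which is why this stays a "dream" in the thread.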
# See the License for the specific language governing permissions and
# limitations under the License.

cat /etc/resolv.conf > /chroot/etc/resolv.conf
Nope, because resolv.conf is generated at runtime (when the Pod is bootstrapped) and may change from cluster to cluster :D
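Since resolv.conf only exists once the Pod is up, the copy has to happen at container start rather than at image build. Conceptually it looks like the sketch below — this is an illustration, not the PR's actual entrypoint, and the image name and binary path are assumptions:

```yaml
# Hypothetical container spec: refresh runtime-generated files inside
# the chroot before handing off to the controller.
containers:
  - name: controller
    image: k8s.gcr.io/ingress-nginx/controller-chroot:v1.2.0  # assumed image
    command:
      - /bin/sh
      - -c
      - |
        # copy the kubelet-generated DNS config into the jail
        cat /etc/resolv.conf > /chroot/etc/resolv.conf
        exec /nginx-ingress-controller  # assumed controller binary path
```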
Is there a reason the controller can't be separated out into a different container? Naively that seems like it would be a lot simpler than jailing it within the same container.
Hey @tallclair, let me try to clarify why we didn't follow this split, even though it was actually our first attempt:

Thinking about the same approach kube-proxy v2 (aka kpng) is taking, we started to discuss this as an opportunity to split the control plane and data plane: a single gRPC controller, and multiple proxies talking only to the controller and getting the right information. This is, for us, the new ingress-nginx and the desired state of the art. But, again, at least I stopped at the amount of time I have to develop vs the amount of time required to make it work :)

So we didn't want to stall on either the promise of a new implementation or a hard workaround; after all, this needed to be fixed fast. And this is why the chroot/jail solution came up in a weekend discussion and was implemented in about 4 hours :)

I've also spent some time researching the previous solutions, but all of them needed deep changes in the code, not as easy as it seems (I thought it was going to be easy the first time as well!)

Makes sense? :)
Force-pushed 56ffbfa to d2557a3
@rikatz thanks for the detailed response.
I might be misunderstanding you, but there's already a pod-level
Do you have any more details on what went wrong? I'm guessing you're missing some of the other projected data. Easiest thing is to just copy the projected volume from a pod with the SA auto mounted:
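For reference, a sketch of the projected volume the kubelet injects when the service account is auto-mounted — the volume name suffix and expiration value here are typical defaults as seen on a running pod, not something defined by this PR:

```yaml
# Shape of the auto-mounted service-account volume to copy into the
# pod spec (normally named kube-api-access-<random suffix>).
volumes:
  - name: kube-api-access
    projected:
      sources:
        - serviceAccountToken:
            expirationSeconds: 3607
            path: token
        - configMap:
            name: kube-root-ca.crt
            items:
              - key: ca.crt
                path: ca.crt
        - downwardAPI:
            items:
              - path: namespace
                fieldRef:
                  fieldPath: metadata.namespace
```

Mounting this at /var/run/secrets/kubernetes.io/serviceaccount reproduces what the admission-time auto-mount provides: the token, the cluster CA bundle, and the namespace.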
I'm sure I don't understand all the nuances here, but if the issue is just that the controller needs access to the nginx binary, then you could consider just putting everything you need in a single container image, and run both containers off the same image (just with a different entrypoint / command).
Yeah, that makes sense. I think stop should be straightforward? In place of restart, could you just have something that monitors the process (e.g. systemd, or just rely on the kubelet) and restarts it as soon as it's killed? I'm just guessing here... I can see why you might need the gRPC interface instead.

On a separate topic, you might want to have a look at AppArmor hats, which are a way to change the AppArmor context, designed to change privileges on the fly. It might be a good fit for this use case (but it only works on AppArmor systems).
@@ -74,7 +74,7 @@ spec:
       containers:
         - name: {{ .Values.controller.containerName }}
           {{- with .Values.controller.image }}
-          image: "{{- if .repository -}}{{ .repository }}{{ else }}{{ .registry }}/{{ .image }}{{- end -}}:{{ .tag }}{{- if (.digest) -}} @{{.digest}} {{- end -}}"
+          image: "{{- if .repository -}}{{ .repository }}{{ else }}{{ .registry }}/{{ include "ingress-nginx.image" . }}{{- end -}}:{{ .tag }}{{- if (.digest) -}} @{{.digest}} {{- end -}}"
I understand that currently we can build the image name through helper functions, but once chroot is enabled, should we drop the digest?
The expected behavior is that a configuration item lets users switch images transparently, but digests don't work that way: if the user configures a digest, the image download will fail.
Nope, I think we can have a digest for the chroot image as well. This can be another argument, it's fine. Because we don't have a digest yet (no image published) I forgot this, but we can fix it in a follow-up.
OK, I agree
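In values terms, the digest concern is that one digest cannot cover both image variants, so a per-variant field would be needed. A hedged sketch — the field names below are assumptions for illustration, not the chart's final schema:

```yaml
# Hypothetical values fragment: switching between the standard and
# chroot controller images, each pinned by its own digest.
controller:
  image:
    registry: k8s.gcr.io
    image: ingress-nginx/controller
    chroot: true        # assumed switch to the controller-chroot variant
    tag: "v1.2.0"
    digest: ""          # digest of the non-chroot image; the chroot image
                        # would need a separate field (e.g. digestChroot)
```

With a single shared `digest`, flipping `chroot` while a digest is set would point the pull at a digest that doesn't exist for the other image, which is the failure mode described above.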
internal/ingress/controller/nginx.go
Outdated
@@ -61,6 +59,7 @@ import (
	"k8s.io/ingress-nginx/internal/nginx"
	"k8s.io/ingress-nginx/internal/task"
	"k8s.io/ingress-nginx/internal/watch"
+	"k8s.io/klog/v2"
nit: keep the previous import order
🤦 fixing
Force-pushed b0aec45 to 875a86f
Force-pushed 875a86f to 2718ca6
@tao12345666333 fixed the helm thing, please take a look into it
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
I think this is great! If you want to merge this PR at any time please remove the hold
/lgtm
/hold
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: rikatz, tao12345666333. The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
/hold cancel
* Initial work on chrooting nginx process
* More improvements in chroot
* Fix charts and some file locations
* Fix symlink on non chrooted container
* fix psp test
* Add e2e tests to chroot image
* Fix logger
* Add internal logger in controller
* Fix overlay for chrooted tests
* Fix tests
* fix boilerplates
* Fix unittest to point to the right pid
* Fix PR review
What this PR does / why we need it:
Isolate the nginx process into its own user namespace (userns) inside the main controller container.
Using this technique, we can provide a sandbox inside a sandbox (beautiful, uh?) so the nginx process won't have access to sensitive files from the main container.
Types of changes
Which issue/s this PR fixes
So many...
How Has This Been Tested?
TODO (Don't merge it before we pass through the TODO!!!)