[kibana] Add pod security policy to template #641
Conversation
Since this is a community-submitted pull request, a Jenkins build has not been kicked off automatically. Can an Elastic organization member please verify the contents of this patch and then kick off a build manually?
jenkins test this please
Hi @krichter722, thanks for this PR 👍

`helm lint --strict` is failing because `managedServiceAccount` is missing from the default values (see elastic+helm-charts+pull-request+template-testing/616/CHART=kibana). You should add it like in helm-charts/filebeat/values.yaml (lines 85 to 86 in b6054f0):

```yaml
# Whether this chart should self-manage its service account, role, and associated role binding.
managedServiceAccount: true
```
Force-pushed from d382adb to ce7f741
jenkins test this please
This test is failing in elastic+helm-charts+pull-request+template-testing/637/CHART=kibana (see helm-charts/kibana/tests/kibana_test.py, line 283 in acb4f51). Can you fix it?
Signed-off-by: Karl-Philipp Richter <[email protected]>
Force-pushed from ce7f741 to ca3582d
jenkins test this please
@krichter722, thanks for this PR.

Some changes are required to make it more consistent with the way PSP is done in other charts.

As you are adding new values to the chart, please don't forget to document them on the README page.

It would also be great if you could add the Python tests we have in the Logstash chart (helm-charts/logstash/tests/logstash_test.py, lines 583 to 672 in 4f40f6d):
```python
def test_pod_security_policy():
    ## Make sure the default config is not creating any resources
    config = ""
    resources = ("role", "rolebinding", "serviceaccount", "podsecuritypolicy")
    r = helm_template(config)
    for resource in resources:
        assert resource not in r
    assert (
        "serviceAccountName" not in r["statefulset"][name]["spec"]["template"]["spec"]
    )

    ## Make sure all the resources are created with default values
    config = """
rbac:
  create: true
  serviceAccountName: ""
podSecurityPolicy:
  create: true
  name: ""
"""
    r = helm_template(config)
    for resource in resources:
        assert resource in r
    assert r["role"][name]["rules"][0] == {
        "apiGroups": ["extensions"],
        "verbs": ["use"],
        "resources": ["podsecuritypolicies"],
        "resourceNames": [name],
    }
    assert r["rolebinding"][name]["subjects"] == [
        {"kind": "ServiceAccount", "namespace": "default", "name": name}
    ]
    assert r["rolebinding"][name]["roleRef"] == {
        "apiGroup": "rbac.authorization.k8s.io",
        "kind": "Role",
        "name": name,
    }
    assert (
        r["statefulset"][name]["spec"]["template"]["spec"]["serviceAccountName"] == name
    )
    psp_spec = r["podsecuritypolicy"][name]["spec"]
    assert psp_spec["privileged"] is True


def test_external_pod_security_policy():
    ## Make sure we can use an externally defined pod security policy
    config = """
rbac:
  create: true
  serviceAccountName: ""
podSecurityPolicy:
  create: false
  name: "customPodSecurityPolicy"
"""
    resources = ("role", "rolebinding")
    r = helm_template(config)
    for resource in resources:
        assert resource in r
    assert r["role"][name]["rules"][0] == {
        "apiGroups": ["extensions"],
        "verbs": ["use"],
        "resources": ["podsecuritypolicies"],
        "resourceNames": ["customPodSecurityPolicy"],
    }


def test_external_service_account():
    ## Make sure we can use an externally defined service account
    config = """
rbac:
  create: false
  serviceAccountName: "customServiceAccountName"
podSecurityPolicy:
  create: false
  name: ""
"""
    resources = ("role", "rolebinding", "serviceaccount")
    r = helm_template(config)
    assert (
        r["statefulset"][name]["spec"]["template"]["spec"]["serviceAccountName"]
        == "customServiceAccountName"
    )
    # When referencing an external service account we do not want any resources to be created.
    for resource in resources:
        assert resource not in r
```
```diff
@@ -0,0 +1,21 @@
+{{- if .Values.managedServiceAccount }}
+apiVersion: rbac.authorization.k8s.io/v1beta1
+kind: ClusterRole
```
I don't think we need `ClusterRole` and `ClusterRoleBinding` for the Kibana chart; using a `Role` and `RoleBinding` should be enough.
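A rough sketch of what this suggestion would look like, assuming the chart's fullname helper is `kibana.fullname` and the RBAC toggle is `rbac.create` (both names are taken from snippets elsewhere in this thread; the metadata labels block is omitted for brevity):

```yaml
{{- if .Values.rbac.create }}
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: {{ template "kibana.fullname" . }}
rules:
  - apiGroups: ["extensions"]
    resources: ["podsecuritypolicies"]
    resourceNames:
      - "{{ template "kibana.fullname" . }}"
    verbs: ["use"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: {{ template "kibana.fullname" . }}
subjects:
  - kind: ServiceAccount
    name: {{ template "kibana.fullname" . }}
    namespace: {{ .Release.Namespace }}
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: {{ template "kibana.fullname" . }}
{{- end }}
```

A namespaced `Role` is sufficient here because the PSP `use` permission only needs to be granted to the chart's own service account in its release namespace.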
```yaml
{{- if .Values.rbac.serviceAccountName }}
serviceAccount: {{ .Values.rbac.serviceAccountName }}
```
A default `ServiceAccount` name should be used if `serviceAccountName` isn't defined. Here is an example from the Logstash chart:

```diff
-{{- if .Values.rbac.serviceAccountName }}
-serviceAccount: {{ .Values.rbac.serviceAccountName }}
+{{- if .Values.rbac.create }}
+serviceAccountName: "{{ template "kibana.fullname" . }}"
+{{- else if not (eq .Values.rbac.serviceAccountName "") }}
+serviceAccountName: {{ .Values.rbac.serviceAccountName | quote }}
```
```yaml
Use the fullname if the serviceAccount value is not set
*/}}
{{- define "kibana.serviceAccount" -}}
{{- if and .Values.rbac.serviceAccountName ( not ( eq .Values.podSecurityPolicy.name "" ) ) -}}
```
Why are you checking the `podSecurityPolicy.name` value if you're not using `kibana.serviceAccount` in kibana/templates/podsecuritypolicy.yaml?
I would prefer removing the `kibana.serviceAccount` template and keeping the `ClusterRole`, `ClusterRoleBinding`, and `ServiceAccount` names consistent with what we have in most other charts (you can check the Logstash chart for that).
```yaml
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  volumes:
```
Let's add `downwardAPI` and `emptyDir` like in #496 (comment).
```yaml
  create: false
  name: ""
  spec:
    privileged: true
```
`privileged: true` isn't required for Kibana.
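For illustration, the `podSecurityPolicy` section of `values.yaml` with this suggestion applied might look like this (a sketch: the `supplementalGroups` rule comes from the spec fragment quoted earlier in the thread, while `fsGroup` and `runAsUser` are illustrative assumptions):

```yaml
podSecurityPolicy:
  create: false
  name: ""
  spec:
    privileged: false
    fsGroup:
      rule: RunAsAny
    runAsUser:
      rule: RunAsAny
    supplementalGroups:
      rule: RunAsAny
```

Since Kibana runs as an ordinary web process, dropping `privileged` keeps the policy aligned with least privilege.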
```yaml
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
rules:
```
A specific rule for the `podsecuritypolicies` resource is required, similar to #496 (review).
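Such a rule grants the service account `use` on the chart's own PSP; it could look like the following, matching the assertions in the Logstash tests quoted earlier (`kibana.fullname` is assumed to be the chart's fullname helper):

```yaml
rules:
  - apiGroups: ["extensions"]
    resources: ["podsecuritypolicies"]
    resourceNames:
      - "{{ template "kibana.fullname" . }}"
    verbs: ["use"]
```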
This PR has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. To track this PR (even if closed), please open a corresponding issue if one does not already exist.
closing as stale
This is necessary to make the chart work on a cluster with pod security policies enabled.
${CHART}/tests/*.py
${CHART}/examples/*/test/goss.yaml