# Propose a schema of universal values.yaml options to be used and enforced across all apps #3185
## Assumptions

## Proposed schema

Legend: names in `[square brackets]` are placeholders / example values; keys in `"<angle brackets>"` are optional.
_This version was invalidated; see the comments below for the updated version._
Regarding JSON schema dereferencing: I think we build this into schemalint. Despite the name, schemalint already has a helper command. Apart from that, I have looked for dedicated tools, but found nothing.
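For illustration, inlining internal `$ref` pointers is conceptually a tree walk; here is a minimal Python sketch (the function name and example schema are made up; real tooling would also have to handle remote refs and cyclic references):

```python
import json

def deref(node, root):
    """Recursively inline internal '#/...' $ref pointers in a JSON schema dict.
    Minimal sketch: no remote refs, no cycle detection."""
    if isinstance(node, dict):
        ref = node.get("$ref")
        if isinstance(ref, str) and ref.startswith("#/"):
            target = root
            for part in ref[2:].split("/"):  # walk the JSON pointer segments
                target = target[part]
            return deref(target, root)
        return {k: deref(v, root) for k, v in node.items()}
    if isinstance(node, list):
        return [deref(v, root) for v in node]
    return node

schema = {
    "$defs": {"image": {"type": "object", "required": ["image", "tag"]}},
    "properties": {"main": {"$ref": "#/$defs/image"}},
}
resolved = deref(schema, schema)
print(json.dumps(resolved["properties"]["main"], sort_keys=True))
```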
I was thinking about that, and even started like that. Still, it felt wrong: this is a generic setting, not something GS-specific (the value of
But if we make it per-image, then it will be very hard to override a global value. Maybe let's just add an optional override per image and use the top-level one as the default. Then only charts which need that override (a very rare case) will implement it (and carefully, as it won't be handled by the global top-level value). WDYT?
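The fallback rule proposed here can be sketched as a simple lookup (the helper name is made up; the value layout follows the schema draft in this issue):

```python
def effective_registry(values, component):
    """Per-image 'registry' override wins; otherwise fall back to the
    shared default. Sketch of the override rule discussed above."""
    images = values["global"]["images"]
    return images[component].get("registry", images["registry"])

values = {"global": {"images": {
    "registry": "gsoci.azurecr.io",
    "zot": {"image": "giantswarm/zot-linux-amd64", "tag": "2.3.4"},
    "secret-injector": {"image": "giantswarm/super-secret-injector",
                        "tag": "3.18", "registry": "gsociprivate.azurecr.io"},
}}}
print(effective_registry(values, "zot"))              # gsoci.azurecr.io
print(effective_registry(values, "secret-injector"))  # gsociprivate.azurecr.io
```

Only the rare chart that needs the override sets the per-image key; everything else keeps following the global default, including external updates to it.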
Problems and issues:
#### v20240429-1

Schema:

```yaml
### All keys here are placed under "global", so they are available to sub-charts as well
global:
  # ###
  # Mandatory well-known top-level keys - they have to be present and have this structure.
  # We use them to drive region/MC/WC specific settings for multiple charts
  # from a single source of configuration.
  # ###
  images:
    registry: [gsoci.azurecr.io]
    "<imagePullSecrets>":
      - [SecretName]
    "[main]":
      image: giantswarm/[image]
      tag: [TAG]
      "<pullPolicy>": [IfNotPresent]
      "<registry>": [gsociprivate.azurecr.io] # do that only if you want to override the default;
                                              # this won't be managed by external global config settings
    "[alpine]":
      image: giantswarm/alpine
      tag: "3.18"
      "<pullPolicy>": [IfNotPresent]
  # ###
  # Optional well-known top-level keys - they don't have to be present, but if they are,
  # they have to have this structure.
  # We use them to drive region/MC/WC specific settings for multiple charts
  # from a single source of configuration.
  # ###
  podSecurityStandards:
    enforced: false
  # ###
  # Optional keys - they are not used to enforce common settings, but to keep the most popular settings
  # in sync, so we have consistency when working on charts. These values won't be set for multiple charts
  # at the same time, like from CCRs, but we still want to keep the settings consistent, if used.
  # ###
  verticalPodAutoscaler:
    enabled: true
  podDisruptionBudget:
    enabled: false
  crds:
    install: true
  # defined for the main pod (default), then for each pod with different requirements by the pod's name
  resources:
    default:
      "<requests>":
        cpu: 500m
        memory: 512Mi
      "<limits>":
        cpu: 1000m
        memory: 1024Mi
    "[alpine]":
      "<requests>":
        cpu: 500m
        memory: 512Mi
      "<limits>":
        cpu: 1000m
        memory: 1024Mi
  # defined for the main pod (default), then for each pod with different requirements by the pod's name
  tolerations:
    default: []
    "[alpine]": []
  nodeSelector:
    default: {}
    "[alpine]": {}
  affinity:
    default: {}
    "[alpine]": {}
  podSecurityContext:
    default:
      runAsNonRoot: true
      runAsUser: 1000
      runAsGroup: 1000
      seccompProfile:
        type: RuntimeDefault
      fsGroup: 1000
      fsGroupChangePolicy: "OnRootMismatch"
    "[alpine]":
      runAsNonRoot: false
  containerSecurityContext:
    default:
      allowPrivilegeEscalation: false
      runAsNonRoot: true
      runAsUser: 1000
      runAsGroup: 1000
      seccompProfile:
        type: RuntimeDefault
    "[alpine]":
      runAsNonRoot: false
```

Examples:
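The "default plus per-pod-name" pattern used above for resources, tolerations, nodeSelector, affinity and both security contexts boils down to a lookup with fallback; a sketch (the helper name is made up):

```python
def pod_setting(section, pod):
    """Return the per-pod entry if present, else the 'default' one.
    Sketch of the lookup a chart template would perform per pod."""
    return section.get(pod, section["default"])

tolerations = {
    "default": [],
    "postgres": [{"key": "storage-backend/superfast",
                  "operator": "Exists", "effect": "NoSchedule"}],
}
print(pod_setting(tolerations, "postgres"))
print(pod_setting(tolerations, "minio"))  # no entry -> falls back to default: []
```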
#### Simple pod with 1 container and security policies

```yaml
global:
  images:
    registry: gsoci.azurecr.io
    zot:
      image: giantswarm/zot-linux-amd64
      tag: "2.3.4"
  podSecurityContext:
    default:
      runAsNonRoot: true
      runAsUser: 1000
      runAsGroup: 1000
      seccompProfile:
        type: RuntimeDefault
      fsGroup: 1000
      fsGroupChangePolicy: "OnRootMismatch"
  containerSecurityContext:
    default:
      runAsNonRoot: true
      runAsUser: 1000
      runAsGroup: 1000
      allowPrivilegeEscalation: false
```

#### 1 pod with 2 containers, one of them coming from a private registry, and extra options

```yaml
global:
  images:
    registry: gsoci.azurecr.io
    # there's only 1 pod and pull secrets are defined on the pod level, so we can use the default
    imagePullSecrets:
      - gsociprivate-pull-secret
    zot:
      image: giantswarm/zot-linux-amd64
      tag: "2.3.4"
    secret-injector:
      image: giantswarm/super-secret-injector
      tag: "3.18"
      registry: gsociprivate.azurecr.io
  resources:
    default:
      requests:
        cpu: 2000m
        memory: 1024Mi
      limits:
        cpu: 2000m
        memory: 1024Mi
    alpine:
      requests:
        cpu: 500m
        memory: 128Mi
      limits:
        cpu: 1000m
        memory: 128Mi
  podSecurityContext:
    default:
      runAsNonRoot: true
      fsGroup: 1000
      fsGroupChangePolicy: "OnRootMismatch"
  containerSecurityContext:
    default:
      runAsNonRoot: true
      runAsGroup: 2000
    alpine:
      runAsNonRoot: true
      allowPrivilegeEscalation: false
```

#### Sub-charts with many pods, containers and extra options

```yaml
global:
  images:
    registry: gsoci.azurecr.io
    zot:
      image: giantswarm/zot-linux-amd64
      tag: "2.3.4"
    secret-injector:
      image: giantswarm/super-secret-injector
      tag: "3.18"
      registry: gsociprivate.azurecr.io
      # only some (not all) pods need a pull secret to get this image
      imagePullSecrets:
        - gsociprivate-pull-secret
    postgres:
      image: giantswarm/postgres
      tag: "10.11.2"
    minio:
      image: giantswarm/minio
      tag: "3.4.5"
  verticalPodAutoscaler:
    enabled: true
  podSecurityStandards:
    enforced: true
  podDisruptionBudget:
    enabled: true
  crds:
    install: true
  # the same tolerations for all the pods
  tolerations:
    default:
      - key: "node-role.kubernetes.io/control-plane"
        operator: "Exists"
        effect: "NoSchedule"
    "postgres":
      - key: "storage-backend/superfast"
        operator: "Exists"
        effect: "NoSchedule"
```
Just a minor thing, but the readability / usability of the values file takes precedence, I think. On nodes where there are component names like [main], [alpine], etc., it might be simpler to validate / programmatically update them if they are separated under a dedicated key, however it may be named. Otherwise it is not straightforward to know which nodes are component nodes that are supposed to have e.g.:

```yaml
global:
  images:
    registry: gsoci.azurecr.io
    kustomize-controller:
      image: giantswarm/kustomize-controller
      tag: v1.0.1
    source-controller:
      image: giantswarm/fluxcd-source-controller
      tag: v1.0.1
    more:
      properties:
        a: 1
        b: 2
```

From a validation tools standpoint: is
yo @giantswarm/team-turtles we had some discussion in SIG Architecture Sync about whether it would make sense for cluster charts to align with this schema - we reckoned it probably didn't make sense, but thought we'd ping you anyway to get your opinion <3 <3 <3 <3
@uvegla can you please give an example here? I'm not sure what you mean?
@piontec Fixed the indentation. Meaning: if we want a tool to validate that certain nodes under images have
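To illustrate the ambiguity being raised: with component names as free-form keys, a validator can only recognize components by exclusion against the known shared keys. A sketch (the reserved-key list and function name are assumptions, not part of the proposal):

```python
RESERVED = {"registry", "imagePullSecrets"}

def component_nodes(images):
    """Everything under 'images' that is not a reserved shared setting is
    treated as a component node; each such node should carry 'image' and 'tag'."""
    return {name: node for name, node in images.items() if name not in RESERVED}

images = {
    "registry": "gsoci.azurecr.io",
    "kustomize-controller": {"image": "giantswarm/kustomize-controller", "tag": "v1.0.1"},
    "source-controller": {"image": "giantswarm/fluxcd-source-controller", "tag": "v1.0.1"},
}
# flag any component node missing the mandatory image/tag pair
bad = [name for name, node in component_nodes(images).items()
       if not {"image", "tag"} <= node.keys()]
print(bad)  # [] -- both components carry image and tag
```

Nesting components under a dedicated key (as `images` under `images_info` does in the updated version below) removes the need for this exclusion list.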
#### v20240604-1

Schema:

```yaml
### All keys here are placed under "global", so they are available to sub-charts as well
global:
  # ###
  # Mandatory well-known top-level keys - they have to be present and have this structure.
  # We use them to drive region/MC/WC specific settings for multiple charts
  # from a single source of configuration.
  # ###
  images_info:
    registry: [gsoci.azurecr.io]
    "<imagePullSecrets>":
      - [SecretName]
  images:
    "[main]":
      image: giantswarm/[image]
      tag: [TAG]
      "<pullPolicy>": [IfNotPresent]
      "<registry>": [gsociprivate.azurecr.io] # only if you want to override the 'images_info' default;
                                              # this value won't be managed by external global config settings (i.e. catalog config maps)
      "<imagePullSecrets>": # only if you want to override the 'images_info' default
        - [gsociprivate-pull-secret]
    "[alpine]":
      image: giantswarm/alpine
      tag: "3.18"
      "<pullPolicy>": [IfNotPresent]
  # ###
  # Optional well-known top-level keys - they don't have to be present, but if they are,
  # they have to have this structure.
  # We use them to drive region/MC/WC specific settings for multiple charts
  # from a single source of configuration.
  # ###
  podSecurityStandards:
    enforced: false
  # ###
  # Optional keys - they are not used to enforce common settings, but to keep the most popular settings
  # in sync, so we have consistency when working on charts. These values won't be set for multiple charts
  # at the same time, like from CCRs, but we still want to keep the settings consistent, if used.
  # ###
  verticalPodAutoscaler:
    enabled: true
  podDisruptionBudget:
    enabled: false
  crds:
    install: true
  # defined for the main pod (default), then for each pod with different requirements by the pod's name
  resources:
    default:
      "<requests>":
        cpu: 500m
        memory: 512Mi
      "<limits>":
        cpu: 1000m
        memory: 1024Mi
    "[alpine]":
      "<requests>":
        cpu: 500m
        memory: 512Mi
      "<limits>":
        cpu: 1000m
        memory: 1024Mi
  # defined for the main pod (default), then for each pod with different requirements by the pod's name
  tolerations:
    default: []
    "[alpine]": []
  nodeSelector:
    default: {}
    "[alpine]": {}
  affinity:
    default: {}
    "[alpine]": {}
  podSecurityContext:
    default:
      runAsNonRoot: true
      runAsUser: 1000
      runAsGroup: 1000
      seccompProfile:
        type: RuntimeDefault
      fsGroup: 1000
      fsGroupChangePolicy: "OnRootMismatch"
    "[alpine]":
      runAsNonRoot: false
  containerSecurityContext:
    default:
      allowPrivilegeEscalation: false
      runAsNonRoot: true
      runAsUser: 1000
      runAsGroup: 1000
      seccompProfile:
        type: RuntimeDefault
    "[alpine]":
      runAsNonRoot: false
```

Examples:
#### Simple pod with 1 container and security policies

```yaml
global:
  images_info:
    registry: gsoci.azurecr.io
  images:
    zot:
      image: giantswarm/zot-linux-amd64
      tag: "2.3.4"
  podSecurityContext:
    default:
      runAsNonRoot: true
      runAsUser: 1000
      runAsGroup: 1000
      seccompProfile:
        type: RuntimeDefault
      fsGroup: 1000
      fsGroupChangePolicy: "OnRootMismatch"
  containerSecurityContext:
    default:
      runAsNonRoot: true
      runAsUser: 1000
      runAsGroup: 1000
      allowPrivilegeEscalation: false
```

#### 1 pod with 2 containers, one of them coming from a private registry, and extra options

```yaml
global:
  images_info:
    registry: gsoci.azurecr.io
    # there's only 1 pod and pull secrets are defined on the pod level, so we can use the default
    imagePullSecrets:
      - gsociprivate-pull-secret
  images:
    zot:
      image: giantswarm/zot-linux-amd64
      tag: "2.3.4"
    secret-injector:
      image: giantswarm/super-secret-injector
      tag: "3.18"
      registry: gsociprivate.azurecr.io
  resources:
    default:
      requests:
        cpu: 2000m
        memory: 1024Mi
      limits:
        cpu: 2000m
        memory: 1024Mi
    alpine:
      requests:
        cpu: 500m
        memory: 128Mi
      limits:
        cpu: 1000m
        memory: 128Mi
  podSecurityContext:
    default:
      runAsNonRoot: true
      fsGroup: 1000
      fsGroupChangePolicy: "OnRootMismatch"
  containerSecurityContext:
    default:
      runAsNonRoot: true
      runAsGroup: 2000
    alpine:
      runAsNonRoot: true
      allowPrivilegeEscalation: false
```

#### Sub-charts with many pods, containers and extra options

```yaml
global:
  images_info:
    registry: gsoci.azurecr.io
  images:
    zot:
      image: giantswarm/zot-linux-amd64
      tag: "2.3.4"
    secret-injector:
      image: giantswarm/super-secret-injector
      tag: "3.18"
      registry: gsociprivate.azurecr.io
      # only some (not all) pods need a pull secret to get this image
      imagePullSecrets:
        - gsociprivate-pull-secret
    postgres:
      image: giantswarm/postgres
      tag: "10.11.2"
    minio:
      image: giantswarm/minio
      tag: "3.4.5"
  verticalPodAutoscaler:
    enabled: true
  podSecurityStandards:
    enforced: true
  podDisruptionBudget:
    enabled: true
  crds:
    install: true
  # the same tolerations for all the pods
  tolerations:
    default:
      - key: "node-role.kubernetes.io/control-plane"
        operator: "Exists"
        effect: "NoSchedule"
    "postgres":
      - key: "storage-backend/superfast"
        operator: "Exists"
        effect: "NoSchedule"
```
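A minimal structural check of the mandatory part of the v20240604-1 layout could look like the sketch below (the function name is made up; the real enforcement would presumably live in schemalint / a JSON schema rather than ad-hoc code):

```python
def check_values(values):
    """Collect violations of the mandatory part of the proposed schema:
    images_info.registry must exist, and every images entry needs image + tag."""
    errors = []
    g = values.get("global", {})
    if "registry" not in g.get("images_info", {}):
        errors.append("global.images_info.registry is required")
    for name, img in g.get("images", {}).items():
        for key in ("image", "tag"):
            if key not in img:
                errors.append(f"global.images.{name}.{key} is required")
    return errors

good = {"global": {
    "images_info": {"registry": "gsoci.azurecr.io"},
    "images": {"zot": {"image": "giantswarm/zot-linux-amd64", "tag": "2.3.4"}},
}}
bad = {"global": {"images": {"zot": {"image": "giantswarm/zot-linux-amd64"}}}}
print(check_values(good))  # []
print(check_values(bad))
```

Note how separating shared settings (`images_info`) from component entries (`images`) lets the check iterate components directly, without a reserved-key exclusion list.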
Closing as done.
We need to come up with a defined schema for configuration values that are shared across multiple/all apps. A good example of the problem we have right now is the configuration of the image registry URL, which is now configured in at least 6 different ways (ticket).