pkg/types/defaults/installconfig: Set defaults for null replicas #1146
Conversation
Force-pushed ad08d2f to 9eeb259
Seems reasonable to me. Can you change the comments for `Replicas` to reflect what the default values actually are? Also, my preference would be that the `Replicas` field be `omitempty`, too, since it is not required.
We don't currently support configuring zero workers [1], largely because some key operators still do not tolerate masters. Still, some users are attempting to work around our checks by leaving 'replicas' unset (which ends up as nil in Go) [2]. This commit adjusts our install-config defaulting to fill in the default replica counts when the user provides machine-pool entries but leaves replicas unset. [1]: openshift#958 [2]: https://bugzilla.redhat.com/show_bug.cgi?id=1670005#c1
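The defaulting described above can be sketched like this. It is a simplified stand-in for the installer's real code in pkg/types/defaults: the struct and the default count of 3 are illustrative (the actual defaults are platform-specific), but the nil-check pattern is the essence of the change.

```go
package main

import "fmt"

// Pool mirrors the shape of an install-config machine pool for this
// sketch; the real types live in the installer's pkg/types package.
type Pool struct {
	Name     string
	Replicas *int64
}

// SetPoolDefaults fills in a replica count only when the user supplied
// the pool entry but left replicas unset (nil in Go). An explicit value,
// including an explicit zero, is left alone so validation can reject it.
func SetPoolDefaults(p *Pool, defaultReplicas int64) {
	if p.Replicas == nil {
		p.Replicas = &defaultReplicas
	}
}

func main() {
	p := Pool{Name: "worker"} // user omitted replicas entirely
	SetPoolDefaults(&p, 3)    // 3 is an illustrative default
	fmt.Println(*p.Replicas)  // → 3
}
```

Defaulting only the nil case is what closes the workaround: a user who leaves `replicas` unset now gets the default count instead of slipping past the zero-worker check.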
Force-pushed 9eeb259 to 4b553cd
I've dropped the old line. I haven't added a new line, because it would take a few lines to describe the current, platform-specific defaults, and users who care can just look in a generated
Done with 9eeb259 -> 4b553cd, which also fixes the gofmt issue.
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: staebler, wking. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
/retest Please review the full test history for this PR and help us cut down flakes.
1 similar comment
/retest Please review the full test history for this PR and help us cut down flakes.
Do all the teams that don't tolerate masters have high severity bugs to fix that?
In a libvirt cluster just now (because I never get libvirt workers due to openshift/cluster-api-provider-libvirt#45):

$ oc get pods --all-namespaces | grep Pending
openshift-ingress                      router-default-7688479d99-nbnj8        0/1  Pending  0  21m
openshift-monitoring                   prometheus-operator-647d84b5c6-rsplb   0/1  Pending  0  21m
openshift-operator-lifecycle-manager   olm-operators-sf5sm                    0/1  Pending  0  26m
$ oc get pod -o "jsonpath={.status.conditions}{'\n'}" -n openshift-ingress router-default-7688479d99-nbnj8
[map[reason:Unschedulable message:0/1 nodes are available: 1 node(s) didn't match node selector. type:PodScheduled status:False lastProbeTime:<nil> lastTransitionTime:2019-01-30T20:00:04Z]]
$ oc get pod -o "jsonpath={.status.conditions}{'\n'}" -n openshift-monitoring prometheus-operator-647d84b5c6-rsplb
[map[message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate. type:PodScheduled status:False lastProbeTime:<nil> lastTransitionTime:2019-01-30T20:00:03Z reason:Unschedulable]]
$ oc get pod -o "jsonpath={.status.conditions}{'\n'}" -n openshift-operator-lifecycle-manager olm-operators-sf5sm
[map[type:PodScheduled status:False lastProbeTime:<nil> lastTransitionTime:2019-01-30T19:55:21Z reason:Unschedulable message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.]]

I'll make sure those get tracking issues.
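Reading the scheduler's verdict out of those condition maps can be sketched as below. This is a hypothetical helper for illustration, not part of the installer or oc: a pod stuck in Pending carries a PodScheduled condition with status False, and its Reason/Message explain why (taints the pod doesn't tolerate, a node selector that matches nothing, and so on).

```go
package main

import "fmt"

// Condition is a trimmed-down view of a pod status condition, matching
// the fields visible in the jsonpath output above.
type Condition struct {
	Type    string
	Status  string
	Reason  string
	Message string
}

// whyPending returns the scheduler's explanation for an unscheduled pod,
// or the empty string when the pod has been scheduled.
func whyPending(conds []Condition) string {
	for _, c := range conds {
		if c.Type == "PodScheduled" && c.Status == "False" {
			return c.Reason + ": " + c.Message
		}
	}
	return ""
}

func main() {
	// Condition copied from the olm-operators pod above.
	conds := []Condition{{
		Type:    "PodScheduled",
		Status:  "False",
		Reason:  "Unschedulable",
		Message: "0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.",
	}}
	fmt.Println(whyPending(conds))
}
```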
We don't currently support configuring zero workers (#958), largely because some key operators still do not tolerate masters. Still, some users are attempting to work around our checks by leaving `replicas` unset (which ends up as nil in Go). This pull-request adjusts our install-config defaulting to fill in the default `replicas` when the user provides machine-pool entries but leaves `replicas` unset.