Update v1alpha6.md
ellistarn authored Dec 7, 2022
1 parent 01f321f commit 5090325
Showing 1 changed file with 6 additions and 6 deletions.
12 changes: 6 additions & 6 deletions designs/v1alpha6.md
@@ -1,22 +1,22 @@
# v1alpha6 API Proposal

- This document formalizes the v1alpha6 laundry list (https://github.com/aws/karpenter/issues/1327) into a release strategy and concrete set of proposed changes.
+ This document formalizes the [v1alpha6 laundry list](https://github.com/aws/karpenter/issues/1327) into a release strategy and concrete set of proposed changes.

### Migration path from v1alpha5 to v1alpha6

Customers will be able to migrate from v1alpha5 to v1alpha6 in a single cluster using a single Karpenter version.

- Kubernetes custom resources have built-in support for API version compatibility. CRDs with multiple versions must define a “storage version”, which controls the data stored in etcd. Other versions are views onto this data and converted using conversion webhooks (https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/#webhook-conversion). However, there is a fundamental limitation that all versions must be safely round-trippable through each other (https://book.kubebuilder.io/multiversion-tutorial/api-changes.html). This means that it must be possible to define a function that converts a v1alpha5 Provisioner into a v1alpha6 Provisioner and vice versa.
+ Kubernetes custom resources have built-in support for API version compatibility. CRDs with multiple versions must define a “storage version”, which controls the data stored in etcd. Other versions are views onto this data and converted using [conversion webhooks](https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/#webhook-conversion). However, there is a fundamental limitation that all versions must be safely [round-trippable through each other](https://book.kubebuilder.io/multiversion-tutorial/api-changes.html). This means that it must be possible to define a function that converts a v1alpha5 Provisioner into a v1alpha6 Provisioner and vice versa.
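For concreteness, a minimal sketch of the mechanism this paragraph describes: one CRD serving both versions, with v1alpha6 as the storage version and a webhook converting between them. The webhook service name, namespace, and path below are illustrative assumptions, not Karpenter's actual deployment wiring, and the schemas are stubbed out.

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: provisioners.karpenter.sh
spec:
  group: karpenter.sh
  names:
    kind: Provisioner
    plural: provisioners
  scope: Cluster
  versions:
    - name: v1alpha5
      served: true
      storage: false          # served as a view; converted on every read/write
      schema:
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true   # schema stub for brevity
    - name: v1alpha6
      served: true
      storage: true           # the single version persisted in etcd
      schema:
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true
  conversion:
    strategy: Webhook          # every version pair must round-trip through this webhook
    webhook:
      conversionReviewVersions: ["v1"]
      clientConfig:
        service:
          name: karpenter-webhook   # hypothetical service wiring
          namespace: karpenter
          path: /convert
```

Because v1alpha5 is only a view, every v1alpha5 object read from the API server is first converted from the stored v1alpha6 form, which is why any field that cannot survive the round trip breaks the model.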

- Unfortunately, multiple proposed changes in v1alpha6 are not round-trippable. Below, we propose two deprecations (https://quip-amazon.com/MxdVAupIznul/Karpenter-v1alpha6-Plan#temp:C:eGU56ee35f879f44214bee4a60e3, https://quip-amazon.com/MxdVAupIznul/Karpenter-v1alpha6-Plan#temp:C:eGU3adac039ca7c4ea3ac9acad4b) of legacy fields in favor of more modern mechanisms that have seen broad adoption in v1alpha5. These changes remove sharp edges that regularly cause customer surprise and production pain.
+ Unfortunately, multiple proposed changes in v1alpha6 are not round-trippable. Below, we propose two deprecations of legacy fields in favor of more modern mechanisms that have seen broad adoption in v1alpha5. These changes remove sharp edges that regularly cause customer surprise and production pain.

To work around this limitation, we have three options:

1. [Recommended] Rename Provisioner to something like NodeProvisioner, to avoid being subject to round-trippability requirements
2. Require that users delete the existing v1alpha5 Provisioner CRD and then install the v1alpha6 Provisioner CRD. This will result in all capacity being shut down, and cannot be done in place if the customer has already launched nodes.
3. Keep the legacy fields in our API forever

- Option 2 is untenable and easily discarded. We must provide a migration path for existing customers. Option 3 minimizes immediate impact to customers, but results in long term customer pain. There are https://quip-amazon.com/MxdVAupIznul/Karpenter-v1alpha6-Plan#temp:C:eGU24fcdbebf03540dda334f8611 to renaming Provisioner, so while it does cause some customer churn, it results in long term value.
+ Option 2 is untenable and easily discarded. We must provide a migration path for existing customers. Option 3 minimizes immediate impact to customers, but results in long term customer pain. There are other benefits to renaming Provisioner, so while it does cause some churn, it results in long term value.

Following option #1, customers would upgrade as follows:

@@ -27,7 +27,7 @@ Following option #1, customers would upgrade as follows:

### v1alpha6 will promote to v1beta1 and eventually v1

- While we retain flexibility to create a v1alpha7, our intent is to promote v1alpha6 to v1beta1 and eventually v1 according to the rules of the Kubernetes deprecation policy (https://kubernetes.io/docs/reference/using-api/deprecation-policy/).
+ While we retain flexibility to create a v1alpha7, our intent is to promote v1alpha6 to v1beta1 and eventually v1 according to the rules of the [Kubernetes deprecation policy](https://kubernetes.io/docs/reference/using-api/deprecation-policy/).

API changes create a user migration burden that should be weighed against the benefits of the breaking changes. Batching breaking changes into a single version bump helps to minimize this burden. The v1alpha5 API has seen broad adoption over the last year and generated a large amount of feedback. We see this period as having been a critical maturation process for the Karpenter project; it has given us confidence that the changes in v1alpha6 will be sufficient to promote after a shorter feedback period.

@@ -80,7 +80,7 @@ We’ve recommended that customers leverage spec.providerRef in favor of spec.provider
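For reference, a minimal sketch of the recommended shape under the v1alpha5 API; the resource names and discovery tags are illustrative placeholders, not values from this design.

```yaml
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: default
spec:
  # Recommended: reference an AWSNodeTemplate rather than inlining spec.provider
  providerRef:
    name: default
---
apiVersion: karpenter.k8s.aws/v1alpha1
kind: AWSNodeTemplate
metadata:
  name: default
spec:
  subnetSelector:
    karpenter.sh/discovery: my-cluster     # placeholder discovery tag
  securityGroupSelector:
    karpenter.sh/discovery: my-cluster
```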

### Deprecate: awsnodetemplate.spec.launchTemplate

- Direct launch template support is problematic for many reasons, outlined in the design https://github.com/aws/karpenter/blob/main/designs/aws/aws-launch-templates-v2.md. Customers continue to run into issues when directly using launch templates. From many conversations with customers, our AWSNodeTemplate design has achieved feature parity with launch templates. The only gap is for users who maintain external workflows for launch template management. This requirement is in direct conflict with users who run into this sharp edge.
+ Direct launch template support is problematic for many reasons, outlined in the [design](https://github.com/aws/karpenter/blob/main/designs/aws/aws-launch-templates-v2.md). Customers continue to run into issues when directly using launch templates. From many conversations with customers, our AWSNodeTemplate design has achieved feature parity with launch templates. The only gap is for users who maintain external workflows for launch template management. This requirement is in direct conflict with users who run into this sharp edge.

This change simply removes legacy support for launch templates in favor of the AWSNodeTemplate design.
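As a sketch of that feature parity under the v1alpha5-era APIs, the AWSNodeTemplate below covers the customizations that previously pushed users toward spec.launchTemplate; the AMI ID, user data, volume settings, and tags are placeholder values, not configuration from this design.

```yaml
apiVersion: karpenter.k8s.aws/v1alpha1
kind: AWSNodeTemplate
metadata:
  name: custom
spec:
  amiFamily: AL2
  amiSelector:
    aws-ids: ami-0123456789abcdef0       # placeholder AMI ID
  userData: |
    #!/bin/bash
    echo "custom bootstrap steps"        # placeholder bootstrap logic
  blockDeviceMappings:
    - deviceName: /dev/xvda
      ebs:
        volumeSize: 100Gi
        volumeType: gp3
  tags:
    team: platform                       # placeholder tag
```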
