
applications.argoproj.io: ApplicationStatus is not bound to Application #319

Closed
HoKim98 opened this issue Mar 13, 2025 · 5 comments · Fixed by #320

Comments

@HoKim98
Contributor

HoKim98 commented Mar 13, 2025

When using kube-custom-resources-rs, I found that the type generated for Argo's Application CRD has no status field: ApplicationStatus is auto-generated, but never bound to Application.

Error Reconstruction

# applications.argoproj.io: v1alpha1, https://github.com/metio/kube-custom-resources-rs/blob/main/crd-catalog/argoproj-labs/argocd-operator/argoproj.io/v1alpha1/applications.yaml
kopium applications.argoproj.io --docs --derive=Default --derive=PartialEq --smart-derive-elision

Then we can see the output below:

// WARNING: generated by kopium - manual changes will be overwritten
// kopium command: kopium applications.argoproj.io --docs --derive=Default --derive=PartialEq --smart-derive-elision
// kopium version: 0.21.1

/// ApplicationSpec represents desired application state. Contains link to repository with application definition and additional parameters link definition revision.
#[derive(CustomResource, Serialize, Deserialize, Clone, Debug, Default, PartialEq)]
#[kube(group = "argoproj.io", version = "v1alpha1", kind = "Application", plural = "applications")]
#[kube(namespaced)]
#[kube(schema = "disabled")]
#[kube(derive="Default")]
#[kube(derive="PartialEq")]
pub struct ApplicationSpec {
    ...
}

/// ApplicationStatus contains status information for the application
#[derive(Serialize, Deserialize, Clone, Debug, Default, PartialEq)]
pub struct ApplicationStatus {
    ...
}

When using this output, we hit the compile error below:

mod generated; // generated applications.argoproj.io by kopium

fn main() {
    let cr: generated::Application = todo!(); // somewhere
    let status: generated::ApplicationStatus = cr.status; // compile error!
}
error[E0609]: no field `status` on type `Application`
 --> my_test/src/main.rs:5:51
  |
5 |     let status: generated::ApplicationStatus = cr.status; // compile error!
  |                                                   ^^^^^^ unknown field
  |
  = note: available fields are: `metadata`, `spec`

I think the attribute #[kube(status = "ApplicationStatus")] is missing.

@HoKim98
Contributor Author

HoKim98 commented Mar 13, 2025

I found that the applications.argoproj.io CRD declares its status properties not under subresources but under schema.openAPIV3Schema.properties.

---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: applications.argoproj.io
spec:
  versions:
    - name: v1alpha1
      schema:
        openAPIV3Schema:
          properties:
            spec:
              description: ApplicationSpec represents desired application state. Contains
                link to repository with application definition and additional parameters
                link definition revision.
              properties: ...
            status: # here!
              description: ApplicationStatus contains status information for the application
              properties: ...
      subresources: {}

But the current code only checks subresources, as below:

if version.subresources.as_ref().is_some_and(|c| c.status.is_some())
    && self.has_status_resource(&structs)
{
    println!(r#"#[kube(status = "{}Status")]"#, kind);
}

So we can fix it by also checking schema.openAPIV3Schema.properties, as below:

if (version.subresources.as_ref().is_some_and(|c| c.status.is_some())
    || version  // also check!
        .schema
        .as_ref()
        .and_then(|c| c.open_api_v3_schema.as_ref())
        .and_then(|c| c.properties.as_ref())
        .is_some_and(|c| c.contains_key("status")))
    && self.has_status_resource(&structs)
{
    println!(r#"#[kube(status = "{}Status")]"#, kind);
}

@clux
Member

clux commented Mar 13, 2025

hey there. thanks for this. your fix looks sensible, and I'm happy to merge it, but this is also likely a bug in the schema from argo's POV. they are meant to indicate that they have a subresource.

@HoKim98
Contributor Author

HoKim98 commented Mar 13, 2025

I tested locally with kube-custom-resources-rs and found no errors. I also found that this patch applies to 60 CRDs: ulagbulag/kube-custom-resources-rs@3a65030

If these schemas were not meant to be written this way, then we have fortunately found many upstream projects we could contribute fixes to. However, that takes additional time. Even if the schemas are eventually corrected upstream, accepting them for the time being is not necessarily a bad choice. Nonetheless, I am more than willing to defer to your judgment.

Affected CRDs

git diff HEAD~1 | awk '/^diff --git / { fn = $4; gsub(/^b\//, "", fn) } /^\+#\[kube\(status = / { print fn }' | sort -u
kube-custom-resources-rs/src/app_lightbend_com/v1alpha1/akkaclusters.rs
kube-custom-resources-rs/src/application_networking_k8s_aws/v1alpha1/serviceimports.rs
kube-custom-resources-rs/src/argoproj_io/v1alpha1/applications.rs
kube-custom-resources-rs/src/argoproj_io/v1alpha1/appprojects.rs
kube-custom-resources-rs/src/autoscaling_k8s_io/v1/verticalpodautoscalercheckpoints.rs
kube-custom-resources-rs/src/autoscaling_k8s_io/v1beta2/verticalpodautoscalercheckpoints.rs
kube-custom-resources-rs/src/boskos_k8s_io/v1/resources.rs
kube-custom-resources-rs/src/chaos_mesh_org/v1alpha1/awschaos.rs
kube-custom-resources-rs/src/chaos_mesh_org/v1alpha1/azurechaos.rs
kube-custom-resources-rs/src/chaos_mesh_org/v1alpha1/blockchaos.rs
kube-custom-resources-rs/src/chaos_mesh_org/v1alpha1/dnschaos.rs
kube-custom-resources-rs/src/chaos_mesh_org/v1alpha1/gcpchaos.rs
kube-custom-resources-rs/src/chaos_mesh_org/v1alpha1/httpchaos.rs
kube-custom-resources-rs/src/chaos_mesh_org/v1alpha1/iochaos.rs
kube-custom-resources-rs/src/chaos_mesh_org/v1alpha1/jvmchaos.rs
kube-custom-resources-rs/src/chaos_mesh_org/v1alpha1/kernelchaos.rs
kube-custom-resources-rs/src/chaos_mesh_org/v1alpha1/networkchaos.rs
kube-custom-resources-rs/src/chaos_mesh_org/v1alpha1/physicalmachinechaos.rs
kube-custom-resources-rs/src/chaos_mesh_org/v1alpha1/podchaos.rs
kube-custom-resources-rs/src/chaos_mesh_org/v1alpha1/schedules.rs
kube-custom-resources-rs/src/chaos_mesh_org/v1alpha1/stresschaos.rs
kube-custom-resources-rs/src/chaos_mesh_org/v1alpha1/timechaos.rs
kube-custom-resources-rs/src/cilium_io/v2/ciliumlocalredirectpolicies.rs
kube-custom-resources-rs/src/couchbase_com/v2/couchbasebackuprestores.rs
kube-custom-resources-rs/src/couchbase_com/v2/couchbasebackups.rs
kube-custom-resources-rs/src/couchbase_com/v2/couchbaseclusters.rs
kube-custom-resources-rs/src/crd_projectcalico_org/v1/caliconodestatuses.rs
kube-custom-resources-rs/src/crd_projectcalico_org/v1/kubecontrollersconfigurations.rs
kube-custom-resources-rs/src/devices_kubeedge_io/v1alpha2/devices.rs
kube-custom-resources-rs/src/devices_kubeedge_io/v1beta1/devices.rs
kube-custom-resources-rs/src/forklift_konveyor_io/v1beta1/openstackvolumepopulators.rs
kube-custom-resources-rs/src/forklift_konveyor_io/v1beta1/ovirtvolumepopulators.rs
kube-custom-resources-rs/src/gateway_networking_k8s_io/v1alpha2/grpcroutes.rs
kube-custom-resources-rs/src/hnc_x_k8s_io/v1alpha2/hierarchicalresourcequotas.rs
kube-custom-resources-rs/src/hnc_x_k8s_io/v1alpha2/hierarchyconfigurations.rs
kube-custom-resources-rs/src/hnc_x_k8s_io/v1alpha2/hncconfigurations.rs
kube-custom-resources-rs/src/hnc_x_k8s_io/v1alpha2/subnamespaceanchors.rs
kube-custom-resources-rs/src/model_kubedl_io/v1alpha1/modelversions.rs
kube-custom-resources-rs/src/networking_gke_io/v1/managedcertificates.rs
kube-custom-resources-rs/src/quota_codeflare_dev/v1alpha1/quotasubtrees.rs
kube-custom-resources-rs/src/rules_kubeedge_io/v1/rules.rs
kube-custom-resources-rs/src/scheduling_koordinator_sh/v1alpha1/devices.rs
kube-custom-resources-rs/src/scheduling_sigs_k8s_io/v1alpha1/elasticquotas.rs
kube-custom-resources-rs/src/scheduling_sigs_k8s_io/v1alpha1/podgroups.rs
kube-custom-resources-rs/src/scheduling_volcano_sh/v1beta1/podgroups.rs
kube-custom-resources-rs/src/schemas_schemahero_io/v1alpha4/migrations.rs
kube-custom-resources-rs/src/schemas_schemahero_io/v1alpha4/tables.rs
kube-custom-resources-rs/src/secrets_store_csi_x_k8s_io/v1alpha1/secretproviderclasses.rs
kube-custom-resources-rs/src/velero_io/v1/backuprepositories.rs
kube-custom-resources-rs/src/velero_io/v1/backups.rs
kube-custom-resources-rs/src/velero_io/v1/backupstoragelocations.rs
kube-custom-resources-rs/src/velero_io/v1/deletebackuprequests.rs
kube-custom-resources-rs/src/velero_io/v1/downloadrequests.rs
kube-custom-resources-rs/src/velero_io/v1/podvolumebackups.rs
kube-custom-resources-rs/src/velero_io/v1/podvolumerestores.rs
kube-custom-resources-rs/src/velero_io/v1/schedules.rs
kube-custom-resources-rs/src/velero_io/v1/serverstatusrequests.rs
kube-custom-resources-rs/src/velero_io/v1/volumesnapshotlocations.rs
kube-custom-resources-rs/src/velero_io/v2alpha1/datadownloads.rs
kube-custom-resources-rs/src/velero_io/v2alpha1/datauploads.rs

@clux
Member

clux commented Mar 13, 2025

Ah, wow. Ok, yeah, makes sense. Thanks for checking all this.
I do think they should probably mark it properly, but it totally makes sense for us to be speculative here.

@clux closed this as completed in #320 on Mar 13, 2025
@clux
Member

clux commented Mar 13, 2025

Published in 0.21.2
