
kustomize.yaml preprocessing using environment #4673

Closed
Aypahyo opened this issue Jun 9, 2022 · 8 comments
Labels

  • kind/feature: Categorizes issue or PR as related to a new feature.
  • lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.
  • needs-triage: Indicates an issue or PR lacks a `triage/foo` label and requires one.

Comments

@Aypahyo

Aypahyo commented Jun 9, 2022

Is your feature request related to a problem? Please describe.

The common CI/CD practice of using a hash value as the container image tag creates a situation, on every CI/CD platform, where the tag to use is available in the environment and needs to be processed by kustomize, even in very simple use cases where only an image tag has to be replaced. Instead of allowing the built-in image replacement feature to use environment variables, the common solution is to import the environment through a poorly documented fall-through in config map generation and use that as a source for the elaborate replacements feature.

I am very frustrated by not being able to replace my image tag simply by writing:

images:
- name: super-image
  newTag: ${CI_PIPELINE_ID}

I would like the kustomization.yaml file to have a preprocessing step that replaces anything matched by \$\{(?<ENVNAME>.+?)\} with the contents of the ENVNAME variable from the environment, or writes out a warning if the variable is not set in the environment.
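
To make the idea concrete, here is a minimal sketch in Go of what such a preprocessing step could look like. This is just an illustration, not kustomize code; the input file name and the warning format are assumptions on my part:

package main

import (
    "fmt"
    "os"
    "regexp"
)

func main() {
    // Read the kustomization file that contains ${ENVNAME} placeholders.
    data, err := os.ReadFile("kustomization.yaml")
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    // Same pattern as proposed above, written with Go's named-group syntax.
    re := regexp.MustCompile(`\$\{(?P<ENVNAME>.+?)\}`)
    out := re.ReplaceAllStringFunc(string(data), func(m string) string {
        name := re.FindStringSubmatch(m)[1]
        val, ok := os.LookupEnv(name)
        if !ok {
            // Warn and leave the placeholder untouched when the variable is not set.
            fmt.Fprintf(os.Stderr, "warning: environment variable %s is not set\n", name)
            return m
        }
        return val
    })
    // Write the expanded file to stdout so it can be redirected and built from.
    fmt.Print(out)
}

Run before kustomize build, redirecting the output to the kustomization that actually gets built.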

I considered using the replacements feature but would rather remove my eyeballs with a glowing hot spoon. I have also considered writing my own preprocessing with sed statements, which is what people apparently end up doing judging by my searches; that totally defeats the point of kustomize, in my opinion.

I am absolutely certain that I am not the only one who wants this feature. It may end up being the single most popular feature in kustomize, since it would be used in virtually every CI/CD pipeline out there.

Aypahyo added the kind/feature label Jun 9, 2022
@k8s-ci-robot
Contributor

@Aypahyo: This issue is currently awaiting triage.

SIG CLI takes a lead on issue triage for this repo, but any Kubernetes member can accept issues by applying the triage/accepted label.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot added the needs-triage label Jun 9, 2022
@natasha41575
Contributor

Thank you for filing the issue.

Environment variables have limited support because using them means (a) templating and (b) losing reproducibility, and avoiding both is part of kustomize's founding design principles. The replacements feature is the intended solution for use cases such as yours, so it would be helpful if you could elaborate on why it doesn't work for you, and whether you have suggestions that align better with kustomize's philosophy.

@Aypahyo
Author

Aypahyo commented Jun 14, 2022

Thanks.
Replacements are an over-elaborate solution that is too cumbersome to be useful for environment variables. To sneak the environment in, there are several hoops to jump through and files to create for something that should be as simple as adding a small annotation to a field. To get an environment variable in, one has to exploit a fall-through in the config map generator so that the ConfigMap pulls in an unspecified value from the environment; this happens when a key is listed without an = sign. Once that is done, a replacement has to be formulated that basically does what the images field already does, just sourced from the ConfigMap. Last but not least, you need to verify that your output does not contain the ConfigMap, since it only existed to backdoor the value into your deployment. This is far too cumbersome, likely brittle, and depends on a fall-through that could be patched out of existence. (A sketch of this workaround is shown below.)
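
For reference, here is a sketch of that workaround as I understand it. The ConfigMap name (pipeline-env), env file (pipeline.env), Deployment name (my-app) and container name (super-image) are placeholders, and the exact field paths depend on the manifests being patched:

# pipeline.env - the key has no = sign, so its value falls through from the environment
CI_PIPELINE_ID

# kustomization.yaml
configMapGenerator:
- name: pipeline-env
  envs:
  - pipeline.env
  options:
    disableNameSuffixHash: true  # keep the name stable so the replacement can reference it

replacements:
- source:
    kind: ConfigMap
    name: pipeline-env
    fieldPath: data.CI_PIPELINE_ID
  targets:
  - select:
      kind: Deployment
      name: my-app
    fieldPaths:
    - spec.template.spec.containers.[name=super-image].image
    options:
      delimiter: ":"  # split image:tag and replace only the tag segment
      index: 1

# Note: the generated ConfigMap still appears in the build output unless it is filtered out separately.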

On templating: connecting the environment to the execution is in line with twelve-factor apps and is something done in most, if not all, CI/CD pipelines. There is always some artifact or deployment target that was generated from environment variables. Getting these variables into my kustomize execution is already possible today; what I want is better support for doing it. The implementation details do not matter that much: templating, string interpolation, and so on are all options for how these variables could be made available to the kustomize execution.

On reproducibility: given a known environment, the output is predictable. Auditing could be a challenge since environments can contain secrets, so these values should not be part of any generic logging strategy. The only things that need to be handled are keys missing from the environment and values that are null or empty.

At the moment there are two options for getting environment variables in: (c) the backdoor through a ConfigMap and (d) custom preprocessing.
If a string-interpolation-like feature were present, (c) and (d) would not be required. No "points" are lost for (b), since from the pipeline's perspective the behavior does not change (environment variables still end up in the k8s artifacts).

Arguably, (a) is already "violated". If one defines a template as something that contains a placeholder which is filled in at runtime, the ConfigMap fall-through is exactly that. Making it easy to use would make the templating aspect more visible, but it would not be a new aspect. The ConfigMap backdoor could be closed to remove it, but that would break plenty of users who rely on this hidden templating. In that case every user would have to implement their own pre_kustomize.py (or a bunch of sed statements) responsible for rewriting the kustomization.yaml marked up with their own template syntax. What I see most in the wild are placeholders like #TAG# and a sed statement that finds and replaces them, for example the kind of one-liner shown below. To my mind that is exactly what kustomize should be useful for. Kustomize really needs an easy-to-use mechanism for referencing strings from the environment.
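
For illustration, the kind of one-liner I mean (the placeholder and file name are just examples):

sed -i "s/#TAG#/${CI_PIPELINE_ID}/g" deployment.yaml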

@Aypahyo
Author

Aypahyo commented Jun 21, 2022

I built a simple templating tool so that I can run it as a preprocessor to kustomize and easily replace values taken from the environment: https://github.com/Aypahyo/ayTempler

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-ci-robot added the lifecycle/stale label Sep 19, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label Oct 19, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-ci-robot closed this as not planned Nov 18, 2022
@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
