[Feature]: Add option to k3s to preload manifests into the container #1915
I like the idea @pablochacin, let's do it. Thoughts?
I started implementing a PoC but found the problem mentioned in this issue: we cannot just copy the manifests into the container at creation time, because the target directory does not exist until the container starts. My initial idea was to allow a "copy & paste" of the manifest files.
Regarding #1919, we should implement it to have an even more consistent lifecycle.
@mdelapenya we only need a post-start hook to copy the manifest to a certain directory that is created when the container starts. Here is a PoC of this idea. By the way, as far as I know, a post-create hook cannot create a dir, because it is not possible to execute a command in the container until it is started. Regarding waiting for the pods, that should be a new wait option. According to the discussion in this issue, the post-start hook that copies the manifest will be executed first and the wait will be executed last.
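For reference, a minimal sketch of this idea, assuming the WithManifest name from the proposal below, the current testcontainers-go lifecycle-hook API (the option signature has changed across versions), and k3s's well-known auto-deploy directory:

package k3s

import (
    "context"
    "path/filepath"

    "github.com/testcontainers/testcontainers-go"
)

// WithManifest copies a manifest into the k3s auto-deploy directory right
// after the container starts, so k3s applies it automatically. Hypothetical
// sketch; the final option syntax was still under discussion at this point.
func WithManifest(manifestPath string) testcontainers.CustomizeRequestOption {
    return func(req *testcontainers.GenericContainerRequest) error {
        hook := testcontainers.ContainerLifecycleHooks{
            PostStarts: []testcontainers.ContainerHook{
                func(ctx context.Context, c testcontainers.Container) error {
                    // The auto-deploy directory only exists once k3s has started.
                    target := "/var/lib/rancher/k3s/server/manifests/" + filepath.Base(manifestPath)
                    return c.CopyFileToContainer(ctx, manifestPath, target, 0o644)
                },
            },
        }
        req.LifecycleHooks = append(req.LifecycleHooks, hook)
        return nil
    }
}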
@mdelapenya I updated the PoC with the changes in the lifecycle hooks introduced in this PR and it works as expected! 🙌 What is pending for me is defining the syntax of the proposed WithManifest option.
@mdelapenya regarding the WaitForCondition() wait strategy: the option I proposed is redundant and, as you suggested, it can be implemented with existing wait strategies (see the sketch below).
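The original snippet was not preserved in this copy of the thread; a plausible sketch, assuming the existing wait.ForExec strategy and the kubectl bundled in the k3s image (the label selector, condition and timeout are illustrative):

package k3s_test

import (
    "github.com/testcontainers/testcontainers-go/wait"
)

// waitForManifestPods waits until the pods deployed by the manifest report
// Ready, by exec'ing `kubectl wait` inside the k3s container.
var waitForManifestPods = wait.ForExec([]string{
    "kubectl", "wait", "--for=condition=Ready", "pod",
    "--selector=app=my-app", "--timeout=120s",
}).WithExitCodeMatcher(func(exitCode int) bool {
    return exitCode == 0
})

Such a strategy could then be set as the container request's WaitingFor field, possibly combined with the module's default strategy via wait.ForAll.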
I would say it's best to do this with server-side apply (SSA). You could do something like this with the SSA package from the Flux project:

package apply

import (
    "context"
    "io"
    "time"

    "k8s.io/apimachinery/pkg/runtime"
    clientgoscheme "k8s.io/client-go/kubernetes/scheme"
    "k8s.io/client-go/rest"
    "k8s.io/klog/v2"
    ctrlClient "sigs.k8s.io/controller-runtime/pkg/client"

    "github.com/fluxcd/cli-utils/pkg/kstatus/polling"
    "github.com/fluxcd/pkg/ssa"
)

// ResourceManager wraps Flux's SSA resource manager with a logger.
type ResourceManager struct {
    *ssa.ResourceManager

    logger klog.Logger
}

func NewResourceManager(config *rest.Config, logger klog.Logger) (*ResourceManager, error) {
    clientConfig := rest.CopyConfig(config)

    options := ctrlClient.Options{
        Scheme: runtime.NewScheme(),
    }
    if err := clientgoscheme.AddToScheme(options.Scheme); err != nil {
        return nil, err
    }

    client, err := ctrlClient.New(clientConfig, options)
    if err != nil {
        return nil, err
    }

    resourceManager := &ResourceManager{
        ResourceManager: ssa.NewResourceManager(client,
            polling.NewStatusPoller(client, client.RESTMapper(), polling.Options{}),
            ssa.Owner{
                Field: "testcontainers",
                Group: "k3s.golang.testcontainers.org",
            },
        ),
        logger: logger,
    }

    return resourceManager, nil
}

// ApplyManifest reads objects from the manifest, applies them server-side
// and waits for the resulting change set to become ready.
func (m *ResourceManager) ApplyManifest(ctx context.Context, r io.Reader) error {
    objs, err := ssa.ReadObjects(r)
    if err != nil {
        return err
    }

    if err := ssa.NormalizeUnstructuredListWithScheme(objs, m.Client().Scheme()); err != nil {
        return err
    }

    changeSet, err := m.ApplyAllStaged(ctx, objs, ssa.DefaultApplyOptions())
    if err != nil {
        return err
    }

    for _, change := range changeSet.Entries {
        m.logger.Info(change.String())
    }

    return m.WaitForSet(changeSet.ToObjMetadataSet(),
        ssa.WaitOptions{
            Interval: 2 * time.Second,
            Timeout:  time.Minute,
        })
}

Disclaimer: I think this code should work, but it was a quick copy from something I'm working on, I've not tested this as is. If someone is thinking of making a PR, do ping me ;)
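For completeness, a hypothetical way to wire the above into a k3s-based test, assuming the k3s module's GetKubeConfig method and client-go's clientcmd helpers; the apply import path stands in for wherever the snippet above lives:

package apply_test

import (
    "context"
    "os"

    "k8s.io/client-go/tools/clientcmd"
    "k8s.io/klog/v2"

    "example.com/internal/apply" // hypothetical path to the snippet above
)

// applyFromKubeConfig builds a rest.Config from a kubeconfig, for example the
// one returned by K3sContainer.GetKubeConfig, and applies a manifest with SSA.
func applyFromKubeConfig(ctx context.Context, kubeConfig []byte, manifestPath string) error {
    restConfig, err := clientcmd.RESTConfigFromKubeConfig(kubeConfig)
    if err != nil {
        return err
    }

    manager, err := apply.NewResourceManager(restConfig, klog.Background())
    if err != nil {
        return err
    }

    f, err := os.Open(manifestPath)
    if err != nil {
        return err
    }
    defer f.Close()

    return manager.ApplyManifest(ctx, f)
}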
@errordeveloper I'm already working on a PR using the copy-to-manifests approach because I think it is simpler: it uses a k3s feature and does not require the additional dependency of a Kubernetes client.
Good point for sure, given the client would be a new dependency and can complicate things. The only other difference is that the SSA code above includes readiness polling.
But that comes from yet another dependency on Flux. I'm not sure it brings any significant advantage over this implementation using already available methods to justify all those additional dependencies.
To be clear, the SSA package is a separate module, so it's not like installing Flux or dragging all of its deps. However, it does bring in the controller runtime client.
I would say that in most cases when I'm creating a cluster and want to apply some manifests to it before running tests, I certainly want to make sure these additional resources become ready before running any tests. So in my mind, it's quite significant. @pablochacin this is just my two cents, I think your proposed solution is great progress already! It's possible that waiting needs to be solved separately if the k3s module should avoid the inclusion of any client. I just wanted to make sure the issue of readiness is touched on in this discussion, at least just for the record :)
I think it is. Even with its limitations, using a command like the one shown in the example should in general be sufficient to check for resource readiness.
Problem
In many cases, k3s is used for testing a component (application, operator, et cetera) that must be deployed in the cluster.
Presently, this can be done, for example, by copying the manifest to the container and using the kubectl command to apply it, as shown in the snippet below. This is repetitive and highly inconvenient if multiple components must be deployed.
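The original snippet did not survive in this copy of the issue; a sketch of the kind of boilerplate being described, assuming the k3s module's container type (file names are illustrative):

package k3s_test

import (
    "context"

    "github.com/testcontainers/testcontainers-go/modules/k3s"
)

func deployManifest(ctx context.Context, k3sContainer *k3s.K3sContainer) error {
    // Copy the manifest into the container...
    err := k3sContainer.CopyFileToContainer(ctx,
        "testdata/deployment.yaml", "/tmp/deployment.yaml", 0o644)
    if err != nil {
        return err
    }
    // ...and apply it with the kubectl bundled in the k3s image.
    _, _, err = k3sContainer.Exec(ctx,
        []string{"kubectl", "apply", "-f", "/tmp/deployment.yaml"})
    return err
}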
Solution
Provide a customization option WithManifest() that loads the manifests into the container and applies them. It would also be convenient to provide a WaitForCondition() wait strategy to wait for the components deployed in the manifest to be ready.
Benefit
Simplify test code by removing boilerplate.
Alternatives
Keep existing mechanism.
Would you like to help contribute this feature?
Yes