provisioner: add Hetzner support for inlets #119
Conversation
Signed-off-by: Carlos Panato <[email protected]>
LGTM
Thank you 👍 Next up would be:
- The arkade app: https://github.com/alexellis/arkade/blob/master/cmd/apps/inletsoperator_app.go (adding the provider option)
- The docs, to add the example for Hetzner and any specifics on creating their API token: https://raw.githubusercontent.com/inlets/docs/master/docs/tools/inlets-operator.md
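For the arkade side, the change would presumably be adding "hetzner" wherever the app validates its provider option. A hypothetical sketch of that pattern follows; the names here are illustrative, not arkade's actual identifiers, and `fmt` and `strings` are assumed to be imported:

// Hypothetical provider validation; arkade's real flag handling may differ.
var validProviders = []string{
	"equinix-metal", "digitalocean", "scaleway", "gce", "ec2", "linode", "azure", "hetzner",
}

func validateProvider(p string) error {
	for _, valid := range validProviders {
		if p == valid {
			return nil
		}
	}
	return fmt.Errorf("provider %q is not supported, try one of: %s", p, strings.Join(validProviders, ", "))
}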
case "hetzner": | ||
provisioner, _ = provision.NewHetznerProvisioner(c.infraConfig.GetAccessKey()) |
Out of curiosity, why do we silently skip errors while creating the clients for the provisioners? @alexellis
func getProvisioner(c *Controller) (provision.Provisioner, error) {
	var err error
	var provisioner provision.Provisioner

	switch c.infraConfig.Provider {
	case "equinix-metal":
		provisioner, _ = provision.NewEquinixMetalProvisioner(c.infraConfig.GetAccessKey())
	case "digitalocean":
		provisioner, _ = provision.NewDigitalOceanProvisioner(c.infraConfig.GetAccessKey())
	case "scaleway":
		provisioner, _ = provision.NewScalewayProvisioner(c.infraConfig.GetAccessKey(), c.infraConfig.GetSecretKey(), c.infraConfig.OrganizationID, c.infraConfig.Region)
	case "gce":
		provisioner, _ = provision.NewGCEProvisioner(c.infraConfig.GetAccessKey())
	case "ec2":
		provisioner, _ = provision.NewEC2Provisioner(c.infraConfig.Region, c.infraConfig.GetAccessKey(), c.infraConfig.GetSecretKey())
	case "linode":
		provisioner, _ = provision.NewLinodeProvisioner(c.infraConfig.GetAccessKey())
	case "azure":
		provisioner, _ = provision.NewAzureProvisioner(c.infraConfig.SubscriptionID, c.infraConfig.GetAccessKey())
	case "hetzner":
		provisioner, _ = provision.NewHetznerProvisioner(c.infraConfig.GetAccessKey())
	default:
		return nil, fmt.Errorf("unsupported provider: %s", c.infraConfig.Provider)
	}

	return provisioner, err
}
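A minimal sketch of what capturing those constructor errors could look like, keeping the signatures shown above; the wrapped error message is illustrative, not the project's actual fix:

func getProvisioner(c *Controller) (provision.Provisioner, error) {
	var provisioner provision.Provisioner
	var err error

	switch c.infraConfig.Provider {
	case "gce":
		provisioner, err = provision.NewGCEProvisioner(c.infraConfig.GetAccessKey())
	case "hetzner":
		provisioner, err = provision.NewHetznerProvisioner(c.infraConfig.GetAccessKey())
	// ...the remaining providers follow the same pattern...
	default:
		return nil, fmt.Errorf("unsupported provider: %s", c.infraConfig.Provider)
	}

	if err != nil {
		// Surface constructor failures (e.g. an unparsable GCE credentials blob)
		// instead of handing the controller a half-initialised provisioner.
		return nil, fmt.Errorf("error creating the %s provisioner: %w", c.infraConfig.Provider, err)
	}

	return provisioner, nil
}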
Is there any error to return or is it there as part of the interface?
For context, the Provisioner interface looks like this:
// Provisioner is an interface used for deploying exit nodes into cloud providers
type Provisioner interface {
Provision(BasicHost) (*ProvisionedHost, error)
Status(id string) (*ProvisionedHost, error)
Delete(HostDeleteRequest) error
}
It looks a bit weird to have constructors that return an error and to not check it afterwards. Some libraries, like Google's, may fail while "creating" the client. For example, in gce.go:
func NewGCEProvisioner(accessKey string) (*GCEProvisioner, error) {
gceService, err := compute.NewService(context.Background(), option.WithCredentialsJSON([]byte(accessKey)))
return &GCEProvisioner{
gceProvisioner: gceService,
}, err
}
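One way to make that failure explicit, sketched here as a suggestion rather than the library's actual code, is to return early so callers never receive a provisioner wrapping a nil service:

func NewGCEProvisioner(accessKey string) (*GCEProvisioner, error) {
	gceService, err := compute.NewService(context.Background(), option.WithCredentialsJSON([]byte(accessKey)))
	if err != nil {
		// Fail fast on bad credentials instead of returning a provisioner
		// whose nil service panics on the first Status or Provision call.
		return nil, fmt.Errorf("unable to create GCE compute service: %w", err)
	}
	return &GCEProvisioner{gceProvisioner: gceService}, nil
}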
The compute.NewService call may fail if the given JSON blob cannot be parsed. With the following secret:
apiVersion: v1
kind: Secret
metadata:
  name: inlets-access-key
stringData:
  inlets-access-key: '{"foo": broken-json}'
the controller crashes with no legible error, because the ignored constructor error leaves the provisioner's internal GCE service nil and the first Status call dereferences it:
2021/05/18 07:53:42 Operator version: 0.12.1 SHA: b3a96cc192b97afc862087260e97ad3bc2f2491b
2021/05/18 07:53:42 Inlets client: ghcr.io/inlets/inlets-pro:0.8.3
2021/05/18 07:53:42 Using inlets PRO.
W0518 07:53:42.944632 1 client_config.go:552] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
I0518 07:53:42.945230 1 controller.go:121] Setting up event handlers
I0518 07:53:42.945262 1 controller.go:243] Starting Tunnel controller
I0518 07:53:42.945266 1 controller.go:246] Waiting for informer caches to sync
I0518 07:53:43.045366 1 controller.go:251] Starting workers
I0518 07:53:43.045380 1 controller.go:257] Started workers
E0518 07:53:43.045485 1 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
goroutine 112 [running]:
k8s.io/apimachinery/pkg/util/runtime.logPanic(0x1e71fc0, 0x3234d70)
/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:74 +0xa6
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:48 +0x89
panic(0x1e71fc0, 0x3234d70)
/usr/local/go/src/runtime/panic.go:969 +0x1b9
github.com/inlets/cloud-provision/provision.(*GCEProvisioner).Status(0xc000798cb0, 0xc000098100, 0x32, 0x0, 0x0, 0x1fc5840)
/go/pkg/mod/github.com/inlets/cloud-provision/[email protected]/gce.go:232 +0x14d
main.syncProvisioningHostStatus(0xc0006caf00, 0xc000166ea0, 0xe, 0xc0006caf00)
/go/src/github.com/inlets/inlets-operator/controller.go:669 +0x93
main.(*Controller).syncHandler(0xc000166ea0, 0xc0005ec060, 0x1a, 0xc00060dd88, 0x10b1274)
/go/src/github.com/inlets/inlets-operator/controller.go:436 +0x2cc
main.(*Controller).processNextWorkItem.func1(0xc000166ea0, 0x1d09a00, 0xc0000f2020, 0x0, 0x0)
/go/src/github.com/inlets/inlets-operator/controller.go:307 +0xd7
main.(*Controller).processNextWorkItem(0xc000166ea0, 0x203000)
/go/src/github.com/inlets/inlets-operator/controller.go:317 +0x4d
main.(*Controller).runWorker(0xc000166ea0)
/go/src/github.com/inlets/inlets-operator/controller.go:268 +0x2b
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc00069fe40)
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155 +0x5f
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00069fe40, 0x24ceae0, 0xc0006a3350, 0x1, 0xc0000cc120)
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156 +0xad
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00069fe40, 0x3b9aca00, 0x0, 0x1, 0xc0000cc120)
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133 +0x98
k8s.io/apimachinery/pkg/util/wait.Until(0xc00069fe40, 0x3b9aca00, 0xc0000cc120)
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90 +0x4d
created by main.(*Controller).Run
/go/src/github.com/inlets/inlets-operator/controller.go:254 +0x26f
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0xc8 pc=0x1a8110d]
goroutine 112 [running]:
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:55 +0x10c
panic(0x1e71fc0, 0x3234d70)
/usr/local/go/src/runtime/panic.go:969 +0x1b9
github.com/inlets/cloud-provision/provision.(*GCEProvisioner).Status(0xc000798cb0, 0xc000098100, 0x32, 0x0, 0x0, 0x1fc5840)
/go/pkg/mod/github.com/inlets/cloud-provision/[email protected]/gce.go:232 +0x14d
main.syncProvisioningHostStatus(0xc0006caf00, 0xc000166ea0, 0xe, 0xc0006caf00)
/go/src/github.com/inlets/inlets-operator/controller.go:669 +0x93
main.(*Controller).syncHandler(0xc000166ea0, 0xc0005ec060, 0x1a, 0xc00060dd88, 0x10b1274)
/go/src/github.com/inlets/inlets-operator/controller.go:436 +0x2cc
main.(*Controller).processNextWorkItem.func1(0xc000166ea0, 0x1d09a00, 0xc0000f2020, 0x0, 0x0)
/go/src/github.com/inlets/inlets-operator/controller.go:307 +0xd7
main.(*Controller).processNextWorkItem(0xc000166ea0, 0x203000)
/go/src/github.com/inlets/inlets-operator/controller.go:317 +0x4d
main.(*Controller).runWorker(0xc000166ea0)
/go/src/github.com/inlets/inlets-operator/controller.go:268 +0x2b
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc00069fe40)
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155 +0x5f
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00069fe40, 0x24ceae0, 0xc0006a3350, 0x1, 0xc0000cc120)
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156 +0xad
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00069fe40, 0x3b9aca00, 0x0, 0x1, 0xc0000cc120)
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133 +0x98
k8s.io/apimachinery/pkg/util/wait.Until(0xc00069fe40, 0x3b9aca00, 0xc0000cc120)
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90 +0x4d
created by main.(*Controller).Run
/go/src/github.com/inlets/inlets-operator/controller.go:254 +0x26f
It might just be wording, but whenever people ask "why" I feel like that is the wrong question to ask. I really don't know why, because there have been 17 different people submitting code and the project has evolved over time.
The question I would ask is more like: should we start capturing this error now?
I would say yes, it seems like it makes sense, even if the GCE provider is the only one that can create an error when constructed like this.
Let's go for it. Do you want to raise an issue and if you have time, send a PR too?
> the GCE provider is the only one that can create an error when constructed like this
I now realize that 95% of Go integrations (godo, packet...) do not return any errors while creating a client. The GCE integration was added later on, which might explain why we forgot to handle errors for the specific case of GCE 😅
I will open an issue (and try to have a PR). Thanks!
Fixes: Add new Hetzner provisioner to operator #115
Description
Add Hetzner provider to inlets
How Has This Been Tested?
Testing followed the guide: https://docs.inlets.dev/#/get-started/quickstart-ingresscontroller-cert-manager?id=expose-your-ingresscontroller-and-get-tls-from-letsencrypt
The only change was that, when deploying the inlets-operator, I used the Helm chart updates from this PR.
To reproduce, install the inlets-operator Helm chart but replace the default image with
ctadeu/inlets-operator:hetzner
How are existing users impacted? What migration steps/scripts do we need?
Users can now use the Hetzner provider to provision their inlets exit nodes.
Checklist:
I have signed off my commits with git commit -s.
/assign @alexellis