
Commit
Target a single namespace for KCC resources.
Instead of creating a namespace in the infra control plane for each namespace
observed in an upstream control plane, a single namespace will now be used
for managing KCC resources. The infra control plane is expected to have KCC
configured either with global credentials or in namespaced mode. When in
namespaced mode, a ConfigConnectorContext must be created in the namespace
provided to infra-provider-gcp in order for KCC to be able to authenticate with
GCP.
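For the namespaced mode mentioned above, a minimal ConfigConnectorContext might look like the following sketch. The namespace name and service account email are hypothetical placeholders, not values taken from this change:

```yaml
apiVersion: core.cnrm.cloud.google.com/v1beta1
kind: ConfigConnectorContext
metadata:
  # KCC requires this exact name for ConfigConnectorContext resources.
  name: configconnectorcontext.core.cnrm.cloud.google.com
  # The namespace passed to infra-provider-gcp via --infra-namespace (hypothetical).
  namespace: infra-provider-gcp-system
spec:
  # GCP service account KCC impersonates when managing resources (hypothetical).
  googleServiceAccount: kcc-controller@my-project.iam.gserviceaccount.com
```

With this in place, KCC can manage GCP resources created in that single namespace on behalf of the provider.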

Updated the README with a bit more project information and documentation boilerplate.
joshlreese committed Jan 2, 2025
1 parent 5ae6076 commit f021fca
Showing 9 changed files with 152 additions and 242 deletions.
19 changes: 6 additions & 13 deletions Dockerfile
@@ -9,18 +9,7 @@ COPY go.mod go.mod
COPY go.sum go.sum
# cache deps before building and copying source so that we don't need to re-download as much
# and so that source changes don't invalidate our downloaded layer
ENV GOPRIVATE=go.datum.net/network-services-operator
RUN git config --global url.ssh://[email protected]/.insteadOf https://github.com/
RUN mkdir -p /root/.ssh && \
chmod 0700 /root/.ssh

# See https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/githubs-ssh-key-fingerprints
RUN <<EOF cat >> /root/.ssh/known_hosts
github.com ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOMqqnkVzrm0SdG6UOoqKLsabgH5C9okWi0dh2l9GKJl
github.com ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEmKSENjQEezOmxkZMy7opKgwFB9nkt5YRrYMjNuG5N87uRgg6CLrbo5wAdT/y6v0mKV0U2w0WZ2YB/++Tpockg=
github.com ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCj7ndNxQowgcQnjshcLrqPEiiphnt+VTTvDP6mHBL9j1aNUkY4Ue1gvwnGLVlOhGeYrnZaMgRK6+PKCUXaDbC7qtbW8gIkhL7aGCsOr/C56SJMy/BCZfxd1nWzAOxSDPgVsmerOBYfNqltV9/hWCqBywINIR+5dIg6JTJ72pcEpEjcYgXkE2YEFXV1JHnsKgbLWNlhScqb2UmyRkQyytRLtL+38TGxkxCflmO+5Z8CSSNY7GidjMIZ7Q4zMjA2n1nGrlTDkzwDCsw+wqFPGQA179cnfGWOWRVruj16z6XyvxvjJwbz0wQZ75XK5tKSb7FNyeIEs4TT4jk+S4dhPeAUC5y+bDYirYgM4GC7uEnztnZyaVWQ7B381AK4Qdrwt51ZqExKbQpTUNn+EjqoTwvqNj4kqx5QUCI0ThS/YkOxJCXmPUWZbhjpCg56i+2aB6CmK2JGhn57K5mj0MNdBXA4/WnwH6XoPWJzK5Nyu2zB3nAZp+S5hpQs+p1vN1/wsjk=
EOF
RUN --mount=type=ssh go mod download
RUN go mod download

# Copy the go source
COPY cmd/main.go cmd/main.go
@@ -32,7 +21,11 @@ COPY internal/ internal/
# was called. For example, if we call make docker-build in a local env which has the Apple Silicon M1 OS,
# the docker BUILDPLATFORM arg will be linux/arm64, while for Apple x86 it will be linux/amd64. Therefore,
# by leaving it empty we can ensure that the container and the binary shipped in it have the same platform.
RUN CGO_ENABLED=0 GOOS=${TARGETOS:-linux} GOARCH=${TARGETARCH} go build -a -o manager cmd/main.go
ENV GOCACHE=/root/.cache/go-build
ENV GOTMPDIR=/root/.cache/go-build
RUN --mount=type=cache,target=/go/pkg/mod/ \
--mount=type=cache,target="/root/.cache/go-build" \
CGO_ENABLED=0 GOOS=${TARGETOS:-linux} GOARCH=${TARGETARCH} go build -o manager cmd/main.go

# Use distroless as minimal base image to package the manager binary
# Refer to https://github.com/GoogleContainerTools/distroless for more details
109 changes: 85 additions & 24 deletions README.md
@@ -1,43 +1,104 @@
# Datum GCP Infrastructure Provider

> [!CAUTION]
> This operator is currently in a POC phase. The POC integration branch will
> be orphaned and separate PRs opened for discrete components (APIs, controllers,
> etc) as they mature.
This provider manages resources in GCP as a result of interpreting workload and
network related API entities defined by users.

This provider interprets workload related entities and provisions resources to
satisfy workload requirements in GCP.
The primary APIs driving resource creation are defined in [workload-operator][workload-operator]
and [network-services-operator][network-services-operator].

## Prerequisites
[workload-operator]: https://github.com/datum-cloud/workload-operator
[network-services-operator]: https://github.com/datum-cloud/network-services-operator

This provider makes use of the [GCP Config Connector][k8s-config-connector]
project to manage resources in GCP. It is expected that the config connector
and associated CRDs have been installed in the cluster.
## Documentation

[k8s-config-connector]: https://github.com/GoogleCloudPlatform/k8s-config-connector
Documentation will be available at [docs.datum.net](https://docs.datum.net/)
shortly.

## Design Notes
### Design Notes

### Instances
#### Instances

Currently this provider leverages [GCP Managed Instance Groups][gcp-migs] to
manage instances within GCP. A future update will move toward more direct
instance control, as MIGs and the supporting entities they require (such as
instance templates) are considerably slower to interact with than direct VM
instance control.

### TCP Gateways
[gcp-migs]: https://cloud.google.com/compute/docs/instance-groups#managed_instance_groups

> [!IMPORTANT]
> The controller for this feature is currently disabled as it assumes a workload
> which is deployed to a single project. This will be updated in the future.
## Getting Started

TCP gateways for a Workload are provisioned as a global external TCP network load
balancer in GCP. An anycast address is provisioned which is unique to the
workload, and backend services are connected to instance groups.
### Prerequisites

Similar to the instance group manager, these entities take a considerable amount
of time to provision and become usable. As we move toward Datum-powered LB
capabilities, the use of these services will be removed.
- go version v1.23.0+
- docker version 17.03+
- kubectl version v1.31.0+
- Access to a Kubernetes v1.31.0+ cluster

[gcp-migs]: https://cloud.google.com/compute/docs/instance-groups#managed_instance_groups
This provider makes use of the [GCP Config Connector][k8s-config-connector]
project to manage resources in GCP. It is expected that the config connector
and associated CRDs have been installed in the cluster.

[k8s-config-connector]: https://github.com/GoogleCloudPlatform/k8s-config-connector

### To Deploy on the cluster

**Build and push your image to the location specified by `IMG`:**

```sh
make docker-build docker-push IMG=<some-registry>/tmp:tag
```

**NOTE:** This image must be published to the registry you specified, and the
working environment must be able to pull it. Make sure you have the proper
permissions for the registry if the above commands don't work.

**Install the CRDs into the cluster:**

```sh
make install
```

**Deploy the Manager to the cluster with the image specified by `IMG`:**

```sh
make deploy IMG=<some-registry>/tmp:tag
```

> **NOTE**: If you encounter RBAC errors, you may need to grant yourself cluster-admin
> privileges or be logged in as admin.

**Create instances of your solution**

You can apply the samples (examples) from the config/samples directory:

```sh
kubectl apply -k config/samples/
```

> **NOTE**: Ensure that the samples have default values to test them out.
### To Uninstall

**Delete the instances (CRs) from the cluster:**

```sh
kubectl delete -k config/samples/
```

**Delete the APIs (CRDs) from the cluster:**

```sh
make uninstall
```

**Undeploy the controller from the cluster:**

```sh
make undeploy
```

<!-- ## Contributing -->

**NOTE:** Run `make help` for more information on all available `make` targets.

More information can be found via the [Kubebuilder Documentation](https://book.kubebuilder.io/introduction.html)
44 changes: 25 additions & 19 deletions cmd/main.go
@@ -60,6 +60,7 @@ func main() {
var tlsOpts []func(*tls.Config)
var upstreamKubeconfig string
var locationClassName string
var infraNamespace string

flag.StringVar(&metricsAddr, "metrics-bind-address", "0", "The address the metrics endpoint binds to. "+
"Use :8443 for HTTPS or :8080 for HTTP, or leave as 0 to disable the metrics service.")
@@ -81,8 +82,12 @@ func main() {
// which the operator does not need to receive. We'll likely need to lean
// into well known labels here, since a location class is defined on a location,
// which entities only reference and do not embed.
flag.StringVar(&locationClassName, "location-class", "self-managed", "Only consider resources attached to locations with the "+
"specified location class.")
flag.StringVar(
&locationClassName,
"location-class",
"self-managed",
"Only consider resources attached to locations with the specified location class.",
)

opts := zap.Options{
Development: true,
@@ -92,6 +97,9 @@
flag.StringVar(&upstreamKubeconfig, "upstream-kubeconfig", "", "absolute path to the kubeconfig "+
"file for the API server that is the source of truth for datum entities")

flag.StringVar(&infraNamespace, "infra-namespace", "", "The namespace in which resources for managing "+
"GCP entities should be created.")

flag.Parse()

ctrl.SetLogger(zap.New(zap.UseFlagOptions(&opts)))
@@ -144,6 +152,11 @@ func main() {
os.Exit(1)
}

if len(infraNamespace) == 0 {
setupLog.Info("must provide --infra-namespace")
os.Exit(1)
}

upstreamClusterConfig, err := clientcmd.BuildConfigFromFlags("", upstreamKubeconfig)
if err != nil {
setupLog.Error(err, "unable to load control plane kubeconfig")
@@ -191,15 +204,6 @@ func main() {
os.Exit(1)
}

if err = (&controller.InfraClusterNamespaceReconciler{
Client: mgr.GetClient(),
InfraClient: infraCluster.GetClient(),
Scheme: mgr.GetScheme(),
}).SetupWithManager(mgr, infraCluster); err != nil {
setupLog.Error(err, "unable to create controller", "controller", "InfraClusterNamespaceReconciler")
os.Exit(1)
}

// TODO(jreese) rework the gateway controller when we have a higher level
// orchestrator from network-services-operator that schedules "sub gateways"
// onto clusters, similar to Workloads -> WorkloadDeployments and
@@ -216,10 +220,11 @@ func main() {
// }

if err = (&controller.WorkloadDeploymentReconciler{
Client: mgr.GetClient(),
InfraClient: infraCluster.GetClient(),
Scheme: mgr.GetScheme(),
LocationClassName: locationClassName,
Client: mgr.GetClient(),
InfraClient: infraCluster.GetClient(),
Scheme: mgr.GetScheme(),
LocationClassName: locationClassName,
InfraClusterNamespaceName: infraNamespace,
}).SetupWithManager(mgr, infraCluster); err != nil {
setupLog.Error(err, "unable to create controller", "controller", "WorkloadDeploymentReconciler")
os.Exit(1)
@@ -235,10 +240,11 @@ func main() {
}

if err = (&controller.NetworkContextReconciler{
Client: mgr.GetClient(),
InfraClient: infraCluster.GetClient(),
Scheme: mgr.GetScheme(),
LocationClassName: locationClassName,
Client: mgr.GetClient(),
InfraClient: infraCluster.GetClient(),
Scheme: mgr.GetScheme(),
LocationClassName: locationClassName,
InfraClusterNamespaceName: infraNamespace,
}).SetupWithManager(mgr, infraCluster); err != nil {
setupLog.Error(err, "unable to create controller", "controller", "NetworkContextReconciler")
os.Exit(1)
136 changes: 0 additions & 136 deletions internal/controller/infracluster_namespace_controller.go

This file was deleted.
