Rename Shipwright components away from "operator"
imjasonh committed Feb 26, 2021
1 parent a5efeed commit 05bd427
Showing 25 changed files with 89 additions and 90 deletions.
14 changes: 7 additions & 7 deletions .github/workflows/ci.yml
@@ -116,21 +116,21 @@ jobs:
         run: curl -fsL https://github.com/google/ko/releases/download/v0.8.1/ko_0.8.1_Linux_x86_64.tar.gz | sudo tar xzf - -C /usr/local/bin ko
       - name: Install Shipwright Build
         run: |
-          make install-operator-kind
-          kubectl -n build-operator rollout status deployment build-operator --timeout=1m || true
+          make install-controller-kind
+          kubectl -n shipwright-build rollout status deployment shipwright-build-controller --timeout=1m || true
       - name: Test
-        run: TEST_E2E_OPERATOR=managed_outside TEST_NAMESPACE=build-operator TEST_IMAGE_REPO=registry.registry.svc.cluster.local:32222/shipwright-io/build-e2e make test-e2e
-      - name: Build operator logs
+        run: TEST_E2E_OPERATOR=managed_outside TEST_NAMESPACE=shipwright-build TEST_IMAGE_REPO=registry.registry.svc.cluster.local:32222/shipwright-io/build-e2e make test-e2e
+      - name: Build controller logs
         if: ${{ failure() }}
         run: |
           echo "# Pods:"
-          kubectl -n build-operator get pod
+          kubectl -n shipwright-build get pod
           PODS=$(kubectl -n build-operator get pod -o json)
-          POD_NAME=$(echo "${PODS}" | jq -r '.items[] | select(.metadata.name | startswith("build-operator-")) | .metadata.name')
+          POD_NAME=$(echo "${PODS}" | jq -r '.items[] | select(.metadata.name | startswith("shipwright-build-controller-")) | .metadata.name')
           RESTART_COUNT=$(echo "${PODS}" | jq -r ".items[] | select(.metadata.name == \"${POD_NAME}\") | .status.containerStatuses[0].restartCount")
           if [ "${RESTART_COUNT}" != "0" ]; then
            echo "# Previous logs:"
            kubectl -n build-operator logs "${POD_NAME}" --previous || true
          fi
          echo "# Logs:"
-          kubectl -n build-operator logs "${POD_NAME}"
+          kubectl -n shipwright-build logs "${POD_NAME}"
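The failure-handling step above selects the controller pod by its renamed name prefix and inspects the first container's `restartCount` before deciding whether to fetch previous logs. As a sketch of that selection logic (the pod names and counts below are hypothetical sample data, standing in for real `kubectl get pod -o json` output):

```python
import json

def find_controller_pod(pods_json: str, prefix: str = "shipwright-build-controller-"):
    """Mirror the jq filter: pick the pod whose name starts with the
    controller prefix and report its first container's restart count."""
    pods = json.loads(pods_json)
    for item in pods["items"]:
        name = item["metadata"]["name"]
        if name.startswith(prefix):
            restarts = item["status"]["containerStatuses"][0]["restartCount"]
            return name, restarts
    return None, 0

# Hypothetical sample mimicking `kubectl -n shipwright-build get pod -o json`:
sample = json.dumps({"items": [
    {"metadata": {"name": "registry-abc"},
     "status": {"containerStatuses": [{"restartCount": 0}]}},
    {"metadata": {"name": "shipwright-build-controller-7f9d"},
     "status": {"containerStatuses": [{"restartCount": 2}]}},
]})

name, restarts = find_controller_pod(sample)
print(name, restarts)  # shipwright-build-controller-7f9d 2
```

A non-zero restart count is what triggers the extra `--previous` log fetch in the workflow.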
16 changes: 8 additions & 8 deletions DEVELOPMENT.md
@@ -94,7 +94,7 @@ You must install these tools:

 ## Environment Setup

-To run your operator, you'll need to set these environment variables (we recommend adding them to your `.bashrc`):
+To run your controller, you'll need to set these environment variables (we recommend adding them to your `.bashrc`):

 1. `GOPATH`: If you don't have one, simply pick a directory and add `export
    GOPATH=...`
@@ -118,7 +118,7 @@ Note: This is roughly equivalent to [`docker login`](https://docs.docker.com/eng

 ## Install Shipwright Build

-The following set of steps highlights how to deploy a Build operator pod into an existing Kubernetes cluster.
+The following set of steps highlights how to deploy a Build controller pod into an existing Kubernetes cluster.

 1. Target your Kubernetes cluster and install the Shipwright Build. Run this from the root of the source repo:

@@ -130,17 +130,17 @@ The following set of steps highlight how to deploy a Build operator pod into an
    image registry you push to, or `kind.local` if you're using
    [KinD](https://kind.sigs.k8s.io).
-1. Build and deploy the operator from source, from within the root of the repo:
+1. Build and deploy the controller from source, from within the root of the repo:

    ```sh
    ko apply -P -R -f deploy/
    ```

-The above steps give you a running Build operator that executes the code from your current branch.
+The above steps give you a running Build controller that executes the code from your current branch.

-### Redeploy operator
+### Redeploy controller

-As you make changes to the code, you can redeploy your operator with:
+As you make changes to the code, you can redeploy your controller with:

 ```sh
 ko apply -P -R -f deploy/
@@ -156,9 +156,9 @@ You can clean up everything with:

 ### Accessing logs

-To look at the operator logs, run:
+To look at the controller logs, run:

 ```sh
-kubectl -n build-operator logs $(kubectl -n build-operator get pods -l name=build-operator -o name)
+kubectl -n shipwright-build logs $(kubectl -n shipwright-build get pods -l name=shipwright-build-controller -o name)
 ```
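The log command above changes in two places at once: the namespace and the pod label selector. A small sketch that composes the same command string, to make the paired rename explicit (the defaults are the renamed values from this diff):

```python
def logs_command(namespace: str = "shipwright-build",
                 label: str = "name=shipwright-build-controller") -> str:
    """Compose the kubectl invocation shown above: list pods by label
    selector, then fetch logs from the matching pod."""
    get_pods = f"kubectl -n {namespace} get pods -l {label} -o name"
    return f"kubectl -n {namespace} logs $({get_pods})"

print(logs_command())
```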
4 changes: 2 additions & 2 deletions HACK.md
@@ -4,7 +4,7 @@ Copyright The Shipwright Contributors
 SPDX-License-Identifier: Apache-2.0
 -->

-# Running the Operator
+# Running the Controller

 Assuming you are logged in to an OpenShift/Kubernetes cluster, run

@@ -33,7 +33,7 @@ Or
 oc policy add-role-to-user system:image-builder pipeline
 ```

-In the near future, the above will be set up by the operator.
+In the near future, the above will be set up by the controller.

 ## Building it locally
16 changes: 8 additions & 8 deletions Makefile
@@ -2,8 +2,8 @@ SHELL := /bin/bash

 # output directory, where all artifacts will be created and managed
 OUTPUT_DIR ?= build/_output
-# relative path to operator binary
-OPERATOR = $(OUTPUT_DIR)/bin/build-operator
+# relative path to controller binary
+OPERATOR = $(OUTPUT_DIR)/bin/shipwright-build-controller

 # golang cache directory path
 GOCACHE ?= $(shell echo ${PWD})/$(OUTPUT_DIR)/gocache
@@ -27,7 +27,7 @@ OPERATOR_SDK_EXTRA_ARGS ?= --debug
 # test namespace name
 TEST_NAMESPACE ?= default

-# CI: tekton pipelines operator version
+# CI: tekton pipelines controller version
 TEKTON_VERSION ?= v0.20.1
 # CI: operator-sdk version
 SDK_VERSION ?= v0.18.2
@@ -231,7 +231,7 @@ test-e2e-plain: ginkgo
 	TEST_E2E_VERIFY_TEKTONOBJECTS=${TEST_E2E_VERIFY_TEKTONOBJECTS} \
 	$(GINKGO) ${TEST_E2E_FLAGS} test/e2e

-.PHONY: install install-apis install-operator install-strategies
+.PHONY: install install-apis install-controller install-strategies

 install:
 	KO_DOCKER_REPO="$(IMAGE_HOST)/$(IMAGE)" GOFLAGS="$(GO_FLAGS)" ko apply --bare -R -f deploy/
@@ -244,21 +244,21 @@ install-apis:
 	# Wait for the CRD type to be established; this can take a second or two.
 	kubectl wait --timeout=10s --for condition=established crd/clusterbuildstrategies.build.dev

-install-operator: install-apis
+install-controller: install-apis
 	KO_DOCKER_REPO="$(IMAGE_HOST)/$(IMAGE)" GOFLAGS="$(GO_FLAGS)" ko apply --bare -f deploy/

-install-operator-kind: install-apis
+install-controller-kind: install-apis
 	KO_DOCKER_REPO=kind.local GOFLAGS="$(GO_FLAGS)" ko apply -f deploy/

 install-strategies: install-apis
 	kubectl apply -R -f samples/buildstrategy/

 local: vendor install-strategies
-	OPERATOR_NAME=build-operator \
+	OPERATOR_NAME=shipwright-build-controller \
 	operator-sdk run local --operator-flags="$(ZAP_FLAGS)"

 local-plain: vendor
-	OPERATOR_NAME=build-operator \
+	OPERATOR_NAME=shipwright-build-controller \
 	operator-sdk run local --operator-flags="$(ZAP_FLAGS)"

 clean:
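The renamed targets keep their prerequisite chain: `install-controller` and `install-controller-kind` still depend on `install-apis`, so the CRDs are applied and established before the controller deployment. A sketch of how make would flatten that order (the dependency table below is transcribed from the visible Makefile hunks; other targets are omitted):

```python
# Prerequisites as declared in the Makefile hunks shown in this diff.
DEPS = {
    "install-controller": ["install-apis"],
    "install-controller-kind": ["install-apis"],
    "install-strategies": ["install-apis"],
    "local": ["vendor", "install-strategies"],
}

def run_order(target, deps=DEPS, seen=None):
    """Depth-first prerequisite resolution, sketching make's ordering."""
    seen = seen if seen is not None else []
    for dep in deps.get(target, []):
        run_order(dep, deps, seen)
    if target not in seen:
        seen.append(target)
    return seen

print(run_order("install-controller-kind"))  # ['install-apis', 'install-controller-kind']
```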
8 changes: 4 additions & 4 deletions build/Dockerfile
@@ -4,12 +4,12 @@

 FROM registry.access.redhat.com/ubi8/ubi-minimal:latest

-ENV OPERATOR=/usr/local/bin/build-operator \
+ENV OPERATOR=/usr/local/bin/shipwright-build-controller \
     USER_UID=1001 \
-    USER_NAME=build-operator
+    USER_NAME=shipwright-build-controller

-# install operator binary
-COPY build/_output/bin/build-operator ${OPERATOR}
+# install controller binary
+COPY build/_output/bin/shipwright-build-controller ${OPERATOR}

 COPY build/bin /usr/local/bin
 RUN /usr/local/bin/user_setup
2 changes: 1 addition & 1 deletion build/bin/entrypoint
@@ -10,7 +10,7 @@

 if ! whoami &>/dev/null; then
   if [ -w /etc/passwd ]; then
-    echo "${USER_NAME:-build-operator}:x:$(id -u):$(id -g):${USER_NAME:-build-operator} user:${HOME}:/sbin/nologin" >> /etc/passwd
+    echo "${USER_NAME:-shipwright-build-controller}:x:$(id -u):$(id -g):${USER_NAME:-shipwright-build-controller} user:${HOME}:/sbin/nologin" >> /etc/passwd
   fi
 fi
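The entrypoint above writes a fallback `/etc/passwd` entry when the container runs as an arbitrary, unknown UID (common on OpenShift), now with the renamed default user. A sketch of the line it produces (the UID, GID, and home directory below are hypothetical example values):

```python
def passwd_entry(user_name=None, uid=1001, gid=1001, home="/home/shipwright"):
    """Build the same colon-separated /etc/passwd line as the entrypoint,
    falling back to the renamed default user when USER_NAME is unset."""
    name = user_name or "shipwright-build-controller"
    return f"{name}:x:{uid}:{gid}:{name} user:{home}:/sbin/nologin"

print(passwd_entry())
```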
2 changes: 1 addition & 1 deletion cmd/manager/main.go
@@ -90,7 +90,7 @@ func main() {

 	mgr, err := controller.NewManager(ctx, buildCfg, cfg, manager.Options{
 		LeaderElection:          true,
-		LeaderElectionID:        "build-operator-lock",
+		LeaderElectionID:        "shipwright-build-controller-lock",
 		LeaderElectionNamespace: buildCfg.ManagerOptions.LeaderElectionNamespace,
 		LeaseDuration:           buildCfg.ManagerOptions.LeaseDuration,
 		RenewDeadline:           buildCfg.ManagerOptions.RenewDeadline,
2 changes: 1 addition & 1 deletion deploy/namespace.yaml → deploy/100-namespace.yaml
@@ -1,4 +1,4 @@
 apiVersion: v1
 kind: Namespace
 metadata:
-  name: build-operator
+  name: shipwright-build
4 changes: 2 additions & 2 deletions deploy/role.yaml → deploy/200-role.yaml
@@ -1,7 +1,7 @@
 apiVersion: rbac.authorization.k8s.io/v1
 kind: ClusterRole
 metadata:
-  name: build-operator
+  name: shipwright-build-controller
 rules:
   - apiGroups:
       - ""
@@ -48,7 +48,7 @@ rules:
   - apiGroups:
       - apps
     resourceNames:
-      - build-operator
+      - shipwright-build
     resources:
       - deployments/finalizers
     verbs:
8 changes: 4 additions & 4 deletions deploy/role_binding.yaml → deploy/300-rolebinding.yaml
@@ -1,12 +1,12 @@
 kind: ClusterRoleBinding
 apiVersion: rbac.authorization.k8s.io/v1
 metadata:
-  name: build-operator
+  name: shipwright-build-controller
 subjects:
   - kind: ServiceAccount
-    name: build-operator
-    namespace: build-operator
+    name: shipwright-build-controller
+    namespace: shipwright-build
 roleRef:
   kind: ClusterRole
-  name: build-operator
+  name: shipwright-build-controller
   apiGroup: rbac.authorization.k8s.io
5 changes: 5 additions & 0 deletions deploy/400-serviceaccount.yaml
@@ -0,0 +1,5 @@
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: shipwright-build-controller
+  namespace: shipwright-build
18 changes: 9 additions & 9 deletions deploy/operator.yaml → deploy/500-controller.yaml
@@ -1,37 +1,37 @@
 apiVersion: apps/v1
 kind: Deployment
 metadata:
-  name: build-operator
-  namespace: build-operator
+  name: shipwright-build-controller
+  namespace: shipwright-build
 spec:
   replicas: 1
   selector:
     matchLabels:
-      name: build-operator
+      name: shipwright-build
   template:
     metadata:
       labels:
-        name: build-operator
+        name: shipwright-build
     spec:
-      serviceAccountName: build-operator
+      serviceAccountName: shipwright-build-controller
       containers:
-        - name: build-operator
+        - name: shipwright-build
           image: ko://github.com/shipwright-io/build/cmd/manager
           env:
             - name: WATCH_NAMESPACE
               valueFrom:
                 fieldRef:
                   fieldPath: metadata.namespace
-            - name: BUILD_OPERATOR_LEADER_ELECTION_NAMESPACE
+            - name: BUILD_CONTROLLER_LEADER_ELECTION_NAMESPACE
               valueFrom:
                 fieldRef:
                   fieldPath: metadata.namespace
             - name: POD_NAME
               valueFrom:
                 fieldRef:
                   fieldPath: metadata.name
-            - name: OPERATOR_NAME
-              value: "build-operator"
+            - name: CONTROLLER_NAME
+              value: "shipwright-build"
           livenessProbe:
             exec:
               command:
5 changes: 0 additions & 5 deletions deploy/service_account.yaml

This file was deleted.

2 changes: 1 addition & 1 deletion docs/build.md
@@ -301,7 +301,7 @@ Please consider the description of the attributes under `.spec.runtime`:
 > Specifying the runtime section will cause a `BuildRun` to push `spec.output.image` twice. First, the image produced by the chosen `BuildStrategy` is pushed, and next it gets reused to construct the runtime-image, which is pushed again, overwriting the `BuildStrategy` outcome.
 > Be aware, especially in situations where the image push action triggers automation steps. Since the same tag will be reused, you might need to take this into consideration when using runtime-images.

-Under the cover, the runtime image will be an additional step in the generated Task spec of the TaskRun. It uses [Kaniko](https://github.com/GoogleContainerTools/kaniko) to run a container build using the `gcr.io/kaniko-project/executor:v1.5.1` image. You can overwrite this image by adding the environment variable `KANIKO_CONTAINER_IMAGE` to the [build operator deployment](../deploy/operator.yaml).
+Under the cover, the runtime image will be an additional step in the generated Task spec of the TaskRun. It uses [Kaniko](https://github.com/GoogleContainerTools/kaniko) to run a container build using the `gcr.io/kaniko-project/executor:v1.5.1` image. You can overwrite this image by adding the environment variable `KANIKO_CONTAINER_IMAGE` to the [build controller deployment](../deploy/controller.yaml).
2 changes: 1 addition & 1 deletion docs/buildstrategies.md
@@ -286,7 +286,7 @@ spec:

 ### How does Tekton Pipelines handle resources

-The **Build** operator relies on the Tekton [pipeline controller](https://github.com/tektoncd/pipeline) to schedule the `pods` that execute the above strategy steps. In a nutshell, the **Build** operator creates a Tekton **TaskRun** at run time, and the **TaskRun** generates a new pod in the particular namespace. In order to build an image, the pod executes all the strategy steps one-by-one.
+The **Build** controller relies on the Tekton [pipeline controller](https://github.com/tektoncd/pipeline) to schedule the `pods` that execute the above strategy steps. In a nutshell, the **Build** controller creates a Tekton **TaskRun** at run time, and the **TaskRun** generates a new pod in the particular namespace. In order to build an image, the pod executes all the strategy steps one-by-one.

 Tekton manages each step's resource **requests** in a very particular way; see the [docs](https://github.com/tektoncd/pipeline/blob/master/docs/tasks.md#defining-steps). That document mentions the following:
10 changes: 5 additions & 5 deletions docs/configuration.md
@@ -6,15 +6,15 @@ SPDX-License-Identifier: Apache-2.0

 # Configuration

-The `build-operator` is installed into Kubernetes with reasonable defaults. However, there are some settings that can be overridden using environment variables in [`operator.yaml`](../deploy/operator.yaml).
+The controller is installed into Kubernetes with reasonable defaults. However, there are some settings that can be overridden using environment variables in [`controller.yaml`](../deploy/controller.yaml).

 The following environment variables are available:

 | Environment Variable | Description |
 | --- | --- |
 | `CTX_TIMEOUT` | Override the default context timeout used for all Custom Resource Definition reconciliation operations. |
 | `KANIKO_CONTAINER_IMAGE` | Specify the Kaniko container image to be used for the runtime image build instead of the default, for example `gcr.io/kaniko-project/executor:v1.5.1`. |
-| `BUILD_OPERATOR_LEADER_ELECTION_NAMESPACE` | Set the namespace to be used to store the `build-operator` lock; by default it is the same namespace as the operator itself. |
-| `BUILD_OPERATOR_LEASE_DURATION` | Override the `LeaseDuration`, which is the duration that non-leader candidates will wait to force acquire leadership. |
-| `BUILD_OPERATOR_RENEW_DEADLINE` | Override the `RenewDeadline`, which is the duration that the acting master will retry refreshing leadership before giving up. |
-| `BUILD_OPERATOR_RETRY_PERIOD` | Override the `RetryPeriod`, which is the duration the LeaderElector clients should wait between tries of actions. |
+| `BUILD_CONTROLLER_LEADER_ELECTION_NAMESPACE` | Set the namespace to be used to store the `shipwright-build-controller` lock; by default it is the same namespace as the controller itself. |
+| `BUILD_CONTROLLER_LEASE_DURATION` | Override the `LeaseDuration`, which is the duration that non-leader candidates will wait to force acquire leadership. |
+| `BUILD_CONTROLLER_RENEW_DEADLINE` | Override the `RenewDeadline`, which is the duration that the acting master will retry refreshing leadership before giving up. |
+| `BUILD_CONTROLLER_RETRY_PERIOD` | Override the `RetryPeriod`, which is the duration the LeaderElector clients should wait between tries of actions. |
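Each of these overrides is an optional environment variable read at startup, so the rename silently breaks any deployment still exporting the old `BUILD_OPERATOR_*` names. A sketch of the lookup-with-fallback pattern; the default values shown are hypothetical placeholders, since the real defaults live in the controller's config package and are not part of this diff:

```python
import os

# Hypothetical defaults; the actual values are defined in the controller.
DEFAULTS = {
    "BUILD_CONTROLLER_LEASE_DURATION": "15s",
    "BUILD_CONTROLLER_RENEW_DEADLINE": "10s",
    "BUILD_CONTROLLER_RETRY_PERIOD": "2s",
}

def leader_election_settings(env=None):
    """Read the renamed BUILD_CONTROLLER_* overrides from the environment,
    falling back to the built-in defaults when a variable is unset."""
    env = os.environ if env is None else env
    return {key: env.get(key, default) for key, default in DEFAULTS.items()}

print(leader_election_settings({}))  # all defaults
print(leader_election_settings({"BUILD_CONTROLLER_RETRY_PERIOD": "5s"}))
```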
2 changes: 1 addition & 1 deletion docs/development/authentication.md
@@ -6,7 +6,7 @@ SPDX-License-Identifier: Apache-2.0

 # Understanding authentication at runtime

-The following document provides an introduction to the different authentication methods that can take place during an image build when using the Build operator.
+The following document provides an introduction to the different authentication methods that can take place during an image build when using the Build controller.

 - [Overview](#overview)
 - [Build Secrets Annotation](#build-secrets-annotation)
8 changes: 4 additions & 4 deletions docs/development/local_development.md
@@ -6,21 +6,21 @@ SPDX-License-Identifier: Apache-2.0

 # Running in development mode

-The following document highlights how to deploy a Build operator locally for development.
+The following document highlights how to deploy a Build controller locally for development.

-**Before generating an instance of the Build operator, ensure the following:**
+**Before generating an instance of the Build controller, ensure the following:**

 - Target your Kubernetes cluster. We recommend the usage of KinD for development, which you can launch via our [install-kind.sh](/hack/install-kind.sh) script.
 - On the cluster, ensure the Tekton controllers are running. You can use our Tekton installation script in [install-tekton.sh](/hack/install-tekton.sh)

 ---

-Once the code has been modified, you can run an instance of the Build operator locally to validate your changes. To run the Build operator locally, use the `local` target:
+Once the code has been modified, you can run an instance of the Build controller locally to validate your changes. To run the Build controller locally, use the `local` target:

 ```sh
 pushd $GOPATH/src/github.com/shipwright-io/build
 make local
 popd
 ```

-_Note_: The above target will uninstall/install all related CRDs and start an instance of the operator via the `operator-sdk` binary. All existing CRD instances will be deleted.
+_Note_: The above target will uninstall/install all related CRDs and start an instance of the controller via the `operator-sdk` binary. All existing CRD instances will be deleted.
2 changes: 1 addition & 1 deletion docs/development/testing.md
@@ -87,7 +87,7 @@ Integration tests are designed based on the following:

 - All significant features should have an integration test.
 - They require access to a Kubernetes cluster.
-- Each test generates its own instance of the build operator, namespace, and resources.
+- Each test generates its own instance of the build controller, namespace, and resources.
 - After tests are executed, all generated resources for the particular test are removed.
 - They test all the interactions between components that have a relationship.
 - They do not test an e2e flow.