Build, tag, and push operator base images during CI #832
@hasbro17 Before we merge this, we need a robot account created. UPDATE: @shawn-hurley was able to do this for me. Thanks Shawn!

Also, it looks like this will close #555.
Looks great
TIL a lot about travis directives 😁
lgtm overall, nice work 👍
A few suggestions and questions.
```shell
set -eux

# build operator binary and base image
go build -o test/ansible-operator/ansible-operator test/ansible-operator/cmd/ansible-operator/main.go
```
@shawn-hurley @mhrivnak I'm guessing `test/ansible-operator/cmd/ansible-operator/main.go` is not (yet, at least) what we want for the official ansible-operator base image?

Can we pull in the latest changes from the main.go in github.com/water-hole/ansible-operator, or do we need to wait on #823?
I think we can pull in the latest from water-hole. I don't see any reason not to. Once this change goes in, this becomes our main file, I believe.

Personally, I would like to see these at a new location that is not hidden under the test dir. I think that we can deal with that stuff much later; just throwing out the concern.
@shawn-hurley For the scorecard PR, I have the image for the scorecard-proxy in a new `images` folder. I think it would be a good idea to move our base images into a specific directory like that. We could make another PR after this gets merged to move the base images to a more reasonable location.
@shawn-hurley I reverted my changes to `test/ansible-operator/cmd/ansible-operator/main.go` that were meant to match it up with water-hole. Now, with #754, the current version looks like what we want. WDYT?
No, we need these changes for the namespace: https://github.com/water-hole/ansible-operator/blob/master/cmd/manager/main.go#L38-L44
Latest CI with updated ansible-operator main.go: https://travis-ci.org/joelanford/operator-sdk/builds/467668311
pkg/scaffold/ansible/dockerfile.go (Outdated)

```diff
@@ -39,7 +39,7 @@ func (d *Dockerfile) GetInput() (input.Input, error) {
 	return d.Input, nil
 }

-const dockerFileAnsibleTmpl = `FROM quay.io/water-hole/ansible-operator
+const dockerFileAnsibleTmpl = `FROM quay.io/operator-framework/ansible-operator
```
In the Dockerfile scaffolds for helm and ansible, should we pin to an image tag? Something like the following?

```go
d.ImageTag = strings.TrimSuffix(version.Version, "+git")
```

And

```go
const dockerFileAnsibleTmpl = `FROM quay.io/operator-framework/ansible-operator:{{.ImageTag}}
...
`
```

If so, we'll have to be careful to make sure our e2e tests use the just-built image vs an existing tagged one.
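One way to make sure the e2e tests use the just-built image is to rewrite the scaffolded `FROM` line before building. A minimal sketch, assuming a hypothetical scaffolded project named `memcached-operator` and a locally built tag such as `:dev`; the paths and pinned version here are illustrative, not the actual scripts from this PR:

```shell
# Sketch only: project name, paths, and tags are hypothetical.
set -eu

# Stand-in for what `operator-sdk new` would scaffold.
mkdir -p memcached-operator/build
cat > memcached-operator/build/Dockerfile <<'EOF'
FROM quay.io/operator-framework/ansible-operator:v0.3.0
COPY roles/ ${HOME}/roles/
EOF

# Point the scaffold at the CI-built local-only image before `operator-sdk build`.
sed -i.bak \
  's|^FROM quay.io/operator-framework/ansible-operator:.*|FROM quay.io/operator-framework/ansible-operator:dev|' \
  memcached-operator/build/Dockerfile

head -n1 memcached-operator/build/Dockerfile
# → FROM quay.io/operator-framework/ansible-operator:dev
```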
Good question. Our e2e tests will definitely need to build a dev ansible/helm base image from the PR before testing.

For our tagged SDK releases, we definitely want the scaffold to use ansible/helm base images with the same image tag as the SDK release version.

But when using the SDK from the master branch, do we want to scaffold base images from the last semver release, or base images built from master? I think we'll need to do the latter. Right now it seems like we don't have a release tag, so it will always use the latest updated tag.
I agree, if we're testing a PR on master we should build base images from master. We might want to add a step in the release process in another PR for building, tagging, and pushing ansible and helm base images as per @shawn-hurley's previous comment and #555.
I actually think that the tag should just be the release for that operator-sdk binary. I wouldn't add the git suffix, because it should be the release number, right?
Here are my thoughts:

- During CI, we can use whatever tag we want with the CI-built image. Ideally we name it something that we never actually push, to avoid accidentally using an image we don't intend to. I propose `:dev`. So no matter what branch or tag Travis is building, we'll build `:dev` as our base image locally.
- Next, during the e2e tests, we want to use that `:dev` image. So, after running `operator-sdk new` but before `operator-sdk build`, I would suggest always replacing the scaffolded image name in `build/Dockerfile` with `quay.io/operator-framework/ansible-operator:dev`.
- Finally, when we go to push the image tags, we'll run the following for each `$PUSH_TAG`:

```shell
docker tag quay.io/operator-framework/ansible-operator:dev quay.io/operator-framework/ansible-operator:$PUSH_TAG
docker push quay.io/operator-framework/ansible-operator:$PUSH_TAG
```
As for what image tag to scaffold, it sounds like we agree that if `version.Version` does not end with `+git`, then we'll use `version.Version`. The biggest question in my mind is what to scaffold when it does contain `+git`. The options in my mind are:

- Use `:master`.
  - Pro: If users are running operator-sdk from master, they'll get the most recent base image changes.
  - Con: If users are running operator-sdk from an unreleased commit prior to the last stable release, it's possible that the master base image will be incompatible with the rest of the scaffolded operator.
- Just drop `+git`.
  - Pro: No matter what build of operator-sdk is being used, it's likely that the base image will work, since we'll use the "closest" one.
  - Con: If using operator-sdk from master, the latest base image changes will not be present.
- Use a broken placeholder, like `REPLACE_TAG`.
  - In this case, the operator will fail to build, forcing the decision onto the user.
^^ It will point to the latest release.
Also, I'd suggest `:master` for a couple of reasons.

- It's more obvious to users where it came from.
- Because it just comes from the branch name, it makes the scripts more flexible. If we wanted to start pushing base images for another branch (say we have another big integration effort like `controller-runtime`), it would be as easy as updating the `if:` condition in the .travis.yml deploy stage.
@joelanford @hasbro17 I personally think that a new ansible or helm operator should always use the latest release version (i.e., 0.3.0), not master. I think that if a user wants to use latest or master, that is an explicit change to make in their Dockerfile.
That sounds fine to me. As long as we maintain a `:master` tag, it's not too much work for users to use the `:master` base image when they want to.

@joelanford WDYT? We always keep the `:latest` tag pointing to the latest release, and in the scaffold we always use the latest release tag, e.g. `:v0.3.0`. (I think we can't use `:latest` in the scaffold, because that can break older projects when we make a new release image and change latest.)
Agreed. I'll update the PR with this change.
I don't really understand the Travis file or how it works. I haven't seen the `<<` tag or the `&go_before_install` and `*go_before_install` syntax before.
```shell
# build operator binary and base image
go build -o test/ansible-operator/ansible-operator test/ansible-operator/cmd/ansible-operator/main.go
pushd test/ansible-operator
docker build -t "$1" .
```
Shouldn't we dogfood our stuff here and just use `operator-sdk build`?
Probably. For this PR, I just moved these lines directly out of the e2e-ansible.sh script to make them reusable.

I think this is related to your comment about where the main functions are stored. Is it possible to use `operator-sdk build` with the current repo layout?

I would expect that this script will be changed or reorganized based on the outcome of @mhrivnak's upcoming proposal to use the operator-sdk binary directly and/or the migrate command from #823.
pkg/scaffold/ansible/dockerfile.go (Outdated)

```diff
@@ -39,7 +39,7 @@ func (d *Dockerfile) GetInput() (input.Input, error) {
 	return d.Input, nil
 }

-const dockerFileAnsibleTmpl = `FROM quay.io/water-hole/ansible-operator
+const dockerFileAnsibleTmpl = `FROM quay.io/operator-framework/ansible-operator
```
@shawn-hurley This syntax is using YAML merge keys (documented here). It's basically a way of declaring reusable YAML maps and then merging them into other maps, enabling composition and reuse. Since our Travis build has multiple concerns now, I figured this would be the easiest way to maintain it without repeating ourselves or performing unnecessary steps unrelated to the CI task at hand (e.g. we almost always need to install and run dep, but we don't need to install ansible for the Helm operator base image deploy stage). I'm definitely open to suggestions, though; I don't have a whole lot of experience with Travis.
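For illustration, here is a minimal, hypothetical example of YAML merge keys in a Travis-style config; the anchor name matches the one mentioned above, but the stage names and scripts are made up:

```yaml
# `&go_before_install` defines an anchor on a reusable map.
_defaults: &go_before_install
  language: go
  before_install:
    - curl https://raw.githubusercontent.com/golang/dep/master/install.sh | sh
    - dep ensure

jobs:
  include:
    - stage: test
      # `<<: *go_before_install` merges the anchored map's keys into this job.
      <<: *go_before_install
      script: make test

    - stage: deploy
      <<: *go_before_install
      # Keys set here alongside the merge override the merged-in values.
      script: make image-push
```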
Latest CI build: https://travis-ci.org/joelanford/operator-sdk/builds/469623064
Latest CI build: https://travis-ci.org/joelanford/operator-sdk/builds/469682451
LGTM. Nice work! Let's see if it all comes together by testing the master branch image build after this gets merged.

@joelanford I always forget, but can you please follow this up by updating the CHANGELOG as well?
…rk#832)

* Makefile,hack/docker,hack/tests: adding new docker scripts and Make targets
* Makefile,pkg/scaffold: changing default image repo to quay.io/operator-framework
* [travis deploy] .travis.yml: refactor into stages to build and push docker images
* [travis deploy] Makefile,hack/: renaming make 'docker' targets to 'image'
* [travis deploy] .travis.yml: renaming make 'docker' targets to 'image'
* [travis deploy] .travis.yml,Makefile,hack/image: updates from PR feedback
* [travis deploy] test/ansible-operator/cmd/ansible-operator/main.go: copying from github.com/water-hole/ansible-operator
* [travis deploy] test/helm-operator/cmd/helm-operator/main.go: use production logger
* [travis deploy] test/helm-operator/cmd/helm-operator/main.go: load watches from env, not hardcoded file
* [travis deploy] Makefile,hack/tests: always build, test, and tag from local-only image tag
* [travis deploy] pkg/scaffold/ansible,pkg/scaffold/helm: use image tags from latest release
* [travis deploy] Makefile: add missing PHONY target for test/ci-helm
* [travis deploy] test/ansible-operator/cmd/ansible-operator/main.go: watch all namespaces if WATCH_NAMESPACE is unset
* [travis deploy] test/ansible-operator/cmd/ansible-operator/main.go: fixing ansible-operator compile error
Description of the change:

- `$TRAVIS_BRANCH` (which is set to `$TRAVIS_TAG` for tagged commits)
- `latest`, if `$TRAVIS_TAG` is set and it matches the latest semver version found in the repo
- `quay.io/operator-framework` instead of `quay.io/water-hole`
- `ANSIBLE_IMAGE` and `HELM_IMAGE` in their travis repository settings to override the full image name used when pushing to a docker repository
- `[travis deploy]`
Motivation for the change:
Our CI process needs to automatically build, tag, and deploy our base images for the Ansible and Helm operators.
NOTE: Because the deploy stage does not run during PR builds, we won't see the new deploy stage in the travis builds for this PR. Here's a link to the build for my fork: https://travis-ci.org/joelanford/operator-sdk/builds/466080812