Bug: pod setup fails internally #1621
I just changed the image label from `latest` to `v2.294.0-ubuntu-20.04-e3deb0d` and the job succeeded.

```diff
- image: 'summerwind/actions-runner-dind'
+ image: 'summerwind/actions-runner-dind:v2.294.0-ubuntu-20.04-e3deb0d'
```

Judging by the commit history (https://github.com/actions-runner-controller/actions-runner-controller/commits/master/runner/actions-runner.dockerfile) and the runner pod log, 11cb9b7 must have introduced the problem.
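For anyone hitting the same failure, pinning the tag in the RunnerDeployment looks roughly like the sketch below. The deployment name, namespace, replica count, and repository are hypothetical placeholders; only the image value comes from this issue:

```yaml
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: example-runner-deployment   # hypothetical name
  namespace: actions-runners        # hypothetical namespace
spec:
  replicas: 2
  template:
    spec:
      repository: example-org/example-repo   # hypothetical repository
      # Pin to a specific tag instead of relying on the implicit :latest,
      # so a broken upstream push cannot silently change newly created pods.
      image: summerwind/actions-runner-dind:v2.294.0-ubuntu-20.04-e3deb0d
      # Run dockerd inside the runner container (the dind image expects this).
      dockerdWithinRunnerContainer: true
```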
@bigwheel Hey! Thanks for reporting and for the detailed analysis of the problem. It's probably due to 11cb9b7#diff-45ecac98435a59bccdffc03779dd18d22ed154a507be7b20cc0828ed8d1a31b6. I'd greatly appreciate it if you could submit a pull request to fix it 🙏
@mumoshu I checked the new image (
Thanks for confirming! Enjoy |
Controller Version
0.23.0
Helm Chart Version
0.18.0
CertManager Version
1.8.0
Deployment Method
ArgoCD
cert-manager installation
I followed the README installation guide and installed cert-manager via Argo CD.
Argo CD shows it as green (synced and healthy), and that status has not changed.
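For reference, a minimal Argo CD Application sketch for this kind of cert-manager install; the name, destination, and sync policy here are illustrative assumptions, not our exact manifest:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: cert-manager            # hypothetical name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://charts.jetstack.io
    chart: cert-manager
    targetRevision: v1.8.0      # matches the CertManager version above
    helm:
      parameters:
        - name: installCRDs     # install cert-manager CRDs with the chart
          value: "true"
  destination:
    server: https://kubernetes.default.svc
    namespace: cert-manager
  syncPolicy:
    automated: {}
```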
Checks
Resource Definitions
To Reproduce
Describe the bug
Today, GitHub jobs on our custom actions runners suddenly started failing, even though we have not changed our workflow definitions or ARC in a few weeks.
I started to investigate the problem.
hint 1
This is the job log up to yesterday:

This is today's job log:

There is an unknown step, and it is the cause of the failure.
hint 2
In our production environment, jobs had already failed several times.
But in our staging environment, jobs succeeded.
Then I re-created the pods.
After that, jobs started failing in staging as well.
From the above, I guessed that the Docker image had changed recently and that this might be causing the problem: staging runs fewer GitHub jobs than production and sometimes retains old images.
hint 3
See the runner pod's log.
I would be suspicious of `update-status`.
Describe the expected behavior
Run jobs successfully.
Controller Logs
Runner Pod Logs
Additional Context
We have been using ARC for over a year. Thank you for a great operator!