
increase wait time on imagestreams #6383

Closed

Conversation

@stevekuznetsov (Contributor)

This is a band-aid so we flake less often while a real fix is made. All waits now allow 2 minutes until timeout.
@bparees @deads2k FYI
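The waits in question follow the test harness pattern visible in the logs below ("re-trying every 0.2s until completion or 120.000s"). A minimal sketch of such a helper, assuming a made-up function name and interface (this is not the actual `os::cmd` helper from the repo):

```shell
#!/bin/bash
# Hypothetical retry helper: run a command every $interval seconds until it
# succeeds or $timeout seconds elapse. Mirrors the "re-trying every 0.2s
# until completion or 120.000s" behavior seen in the test logs.
try_until_success() {
  local timeout="$1" interval="$2"; shift 2
  local deadline=$(( $(date +%s) + timeout ))
  until "$@" >/dev/null 2>&1; do
    if [ "$(date +%s)" -ge "$deadline" ]; then
      echo "FAILURE after ${timeout}s: $*" >&2
      return 1
    fi
    sleep "$interval"
  done
  echo "SUCCESS: $*"
}
```

In these terms, the PR is just bumping the first argument from 60 to 120; the commands under test are unchanged.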

@stevekuznetsov (Contributor, Author)

[test]

@bparees (Contributor) commented Dec 17, 2015

lgtm [merge]

@openshift-bot

continuous-integration/openshift-jenkins/merge Waiting: You are in the build queue at position: 5

@openshift-bot

Evaluated for origin merge up to 399df9f

@deads2k (Contributor) commented Dec 17, 2015

@stevekuznetsov Why would I believe this would make me flake less often? Why wouldn't I think that something was wedged and this will just make me wait twice as long? Do you have some data that indicates that it was actively doing work for a minute before?

@bparees (Contributor) commented Dec 17, 2015

@deads2k
#6259 (comment)

@deads2k (Contributor) commented Dec 17, 2015

@bparees if I'm reading that comment right, all the other image streams start importing, but the mysql one never does. The mysql stream is created before some of the others, and they appear to be running (or at least kicking off) quickly. I'm still not seeing any indication that it's taking a really long time for mysql in particular, and I haven't seen the total time to import in successful cases either, for comparison.

@bparees (Contributor) commented Dec 17, 2015

@deads2k it appears to me that they are importing sequentially. I don't think that log is complete (hence the 60s timeout being hit right after wildfly starts: clearly it was doing something before the log excerpt provided, likely importing other images).

I've never looked into the image import controller, but my assumption and observation is that it's a sequential process, and I don't know how it chooses the ordering.

And I still think the fact that it takes 60s to import needs to be investigated (hence why this doesn't close the original issue), but this change may reduce the flaking if our assumption is correct.
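If the sequential-import hypothesis above is right, the arithmetic is simple: the Nth stream's tag only appears after roughly N times the per-import cost, so a fixed per-check timeout fails on later streams even when nothing is wedged. A sketch with assumed numbers (20s per import is illustrative, not measured):

```shell
#!/bin/bash
# Assumed: imports run strictly sequentially at ~20s each (illustrative only).
# A fixed 60s wait on the 4th stream then fails even though nothing is wedged.
per_import=20   # assumed seconds per sequential import
old_timeout=60  # the per-check timeout before this PR
for n in 1 2 3 4; do
  ready=$(( n * per_import ))
  if [ "$ready" -gt "$old_timeout" ]; then
    echo "stream $n ready at ~${ready}s: flakes at ${old_timeout}s"
  else
    echo "stream $n ready at ~${ready}s: ok"
  fi
done
```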

@bparees (Contributor) commented Dec 17, 2015

@deads2k Regarding "The server gets the shutdown signal milliseconds after the wildfly image is imported":

the shutdown signal means we hit the 60s timeout, so that doesn't sound like "wedged"; it sounds like it ran out of time.

@liggitt (Contributor) commented Dec 18, 2015

on the successful test run:

Running hack/../test/cmd/builds.sh:76: executing 'oc get is ruby-22-centos7' expecting any result and text 'latest'; re-trying every 0.2s until completion or 120.000s...
SUCCESS after 29.058s: hack/../test/cmd/builds.sh:76: executing 'oc get is ruby-22-centos7' expecting any result and text 'latest'; re-trying every 0.2s until completion or 120.000s
...
Running hack/../test/cmd/images.sh:49: executing 'oc get imagestreamtags wildfly:latest' expecting success; re-trying every 0.2s until completion or 120.000s...
SUCCESS after 18.067s: hack/../test/cmd/images.sh:49: executing 'oc get imagestreamtags wildfly:latest' expecting success; re-trying every 0.2s until completion or 120.000s
...
Running hack/../test/cmd/newapp.sh:21: executing 'oc get imagestreamtags mysql:latest' expecting success; re-trying every 0.2s until completion or 120.000s...
SUCCESS after 66.343s: hack/../test/cmd/newapp.sh:21: executing 'oc get imagestreamtags mysql:latest' expecting success; re-trying every 0.2s until completion or 120.000s
...
Running hack/../test/cmd/newapp.sh:144: executing 'oc get imagestreamtags installable:file' expecting success; re-trying every 0.2s until completion or 120.000s...
SUCCESS after 52.793s: hack/../test/cmd/newapp.sh:144: executing 'oc get imagestreamtags installable:file' expecting success; re-trying every 0.2s until completion or 120.000s

I can see how these edged over a minute... I'm concerned the imports are taking so long
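The SUCCESS times quoted above (29.058s, 18.067s, 66.343s, 52.793s) can be checked against the old 60s limit directly; all four stay under the new 120s limit:

```shell
#!/bin/bash
# Observed wait times from the passing run quoted above, checked against the
# old 60s timeout. The integer part of each time is compared with the limit.
old_timeout=60
for t in 29.058 18.067 66.343 52.793; do
  if [ "${t%.*}" -ge "$old_timeout" ]; then
    echo "${t}s would have flaked at ${old_timeout}s"
  fi
done
```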

@bparees (Contributor) commented Dec 18, 2015

Agree, issue #6259 will stay open to cover that though.

@liggitt (Contributor) commented Dec 18, 2015

if import time has increased dramatically, the issue open to address that needs to be higher priority, and probably should not be tagged as a test-flake issue

@openshift-bot added the needs-rebase label (indicates a PR cannot be merged because it has merge conflicts with HEAD) on Dec 18, 2015
@openshift-bot

Evaluated for origin test up to 0c7a7f9

@openshift-bot

continuous-integration/openshift-jenkins/test SUCCESS (https://ci.openshift.redhat.com/jenkins/job/test_pull_requests_origin/7956/)

@openshift-bot removed the needs-rebase label on Dec 18, 2015
@stevekuznetsov deleted the skuznets/is-flake branch on January 8, 2016
5 participants