
tests: Multiple txsource fixes #5192

Merged
kostko merged 2 commits into master from kostko/fix/ci-txsource on Feb 21, 2023
Conversation

@kostko (Member) commented Feb 21, 2023

No description provided.

@kostko force-pushed the kostko/fix/ci-txsource branch from a5ea3e5 to cb9ded1 on February 21, 2023 19:13
@kostko marked this pull request as ready for review on February 21, 2023 19:24
codecov bot commented Feb 21, 2023

Codecov Report

Merging #5192 (cb9ded1) into master (733553b) will decrease coverage by 0.08%.
The diff coverage is n/a.

@@            Coverage Diff             @@
##           master    #5192      +/-   ##
==========================================
- Coverage   61.58%   61.51%   -0.08%     
==========================================
  Files         512      512              
  Lines       53980    53980              
==========================================
- Hits        33244    33205      -39     
- Misses      16509    16565      +56     
+ Partials     4227     4210      -17     
| Impacted Files | Coverage Δ |
|---|---|
| go/worker/common/p2p/txsync/server.go | 7.69% <0.00%> (-53.85%) ⬇️ |
| go/worker/common/p2p/txsync/client.go | 23.68% <0.00%> (-50.00%) ⬇️ |
| .../worker/compute/executor/committee/transactions.go | 18.18% <0.00%> (-40.91%) ⬇️ |
| go/worker/compute/executor/committee/state.go | 74.07% <0.00%> (-14.82%) ⬇️ |
| go/worker/compute/executor/committee/batch.go | 60.60% <0.00%> (-9.10%) ⬇️ |
| go/p2p/rpc/client.go | 71.61% <0.00%> (-3.50%) ⬇️ |
| go/worker/storage/p2p/sync/server.go | 32.25% <0.00%> (-3.23%) ⬇️ |
| go/runtime/txpool/txpool.go | 75.21% <0.00%> (-3.08%) ⬇️ |
| go/worker/keymanager/worker.go | 62.09% <0.00%> (-2.55%) ⬇️ |
| go/runtime/host/protocol/connection.go | 60.15% <0.00%> (-1.88%) ⬇️ |
| ... and 25 more | |


@kostko merged commit 07995cd into master on Feb 21, 2023
@kostko deleted the kostko/fix/ci-txsource branch on February 21, 2023 20:05
@@ -302,6 +303,8 @@ func (r *registration) Run( // nolint: gocyclo
}
}
}
// Cleanup temporary identities directory after generation.
_ = os.RemoveAll(nodeIdentitiesDir)
Contributor commented:

Any special reason why this is being removed twice?

@@ -256,6 +254,9 @@ func (r *registration) Run( // nolint: gocyclo

entityAccs[i].nodeIdentities = append(entityAccs[i].nodeIdentities, &nodeAcc{ident, nodeDesc, nodeAccNonce})
ent.Nodes = append(ent.Nodes, ident.NodeSigner.Public())

// Cleanup temporary node identity directory after generation.
_ = os.RemoveAll(dataDir)
Contributor commented:

Move this higher and use defer inside the func so the directory is also removed on error.
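As a minimal sketch of the suggested pattern (the helper names here are hypothetical stand-ins, not the actual workload code): deferring the removal right after the temporary directory is created guarantees cleanup on every return path, including early returns on error.

```go
package main

import (
	"fmt"
	"os"
)

// generateIdentity is a hypothetical stand-in for the node identity
// generation performed by the registration workload.
func generateIdentity(dataDir string) (string, error) {
	// ... write keys under dataDir ...
	return "node-public-key", nil
}

// makeNodeIdentity illustrates the suggestion: the deferred RemoveAll runs on
// every return path, so the temporary directory is cleaned up even when
// identity generation fails.
func makeNodeIdentity(baseDir string) (string, error) {
	dataDir, err := os.MkdirTemp(baseDir, "node-identity-")
	if err != nil {
		return "", err
	}
	defer func() { _ = os.RemoveAll(dataDir) }() // also runs on the error path below

	ident, err := generateIdentity(dataDir)
	if err != nil {
		return "", err
	}
	return ident, nil
}

func main() {
	ident, err := makeNodeIdentity(os.TempDir())
	fmt.Println(ident, err)
}
```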

if isSynced, _ := q.control.IsSynced(loopCtx); !isSynced {
_ = q.control.WaitSync(loopCtx)
time.Sleep(1 * time.Second)
continue
Contributor commented:

It's good practice to always cancel the context.
We are stealing time from the query timeout while syncing; not sure if that's a problem.
If the node never syncs, we never gracefully exit the loop.
I also think one line could be removed:

if err = q.control.WaitSync(loopCtx); err != nil {
	time.Sleep(time.Second)
	continue
}
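A rough sketch of how those review points could be combined, assuming (as in the excerpt above) that the controller exposes a WaitSync(ctx) error method; the control interface, queryTimeout constant, and runQuery callback below are illustrative stand-ins rather than the workload's actual code:

```go
package main

import (
	"context"
	"time"
)

// control is an illustrative stand-in for q.control from the excerpt above.
type control interface {
	WaitSync(ctx context.Context) error
}

const queryTimeout = 5 * time.Second // hypothetical per-query timeout

// queryLoop sketches the reviewer's suggestions: every per-iteration context
// is cancelled, syncing is a single WaitSync call whose error just triggers a
// retry, and the loop exits gracefully when the parent context is done, even
// if the node never syncs.
func queryLoop(ctx context.Context, ctrl control, runQuery func(context.Context) error) error {
	for {
		select {
		case <-ctx.Done():
			return ctx.Err()
		default:
		}

		loopCtx, cancel := context.WithTimeout(ctx, queryTimeout)

		if err := ctrl.WaitSync(loopCtx); err != nil {
			cancel() // always cancel, even on the retry path
			time.Sleep(time.Second)
			continue
		}

		err := runQuery(loopCtx)
		cancel()
		if err != nil {
			return err
		}
	}
}

func main() {}
```

Whether the sync wait should share the query timeout (the "stealing time" concern above) is a separate design choice; giving WaitSync its own, longer timeout would avoid it.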

Labels: none yet. Projects: none yet. 3 participants.