
Failing TestWSClientNonBlockingEvents #3005

Closed
AnnaShaleva opened this issue May 4, 2023 · 0 comments · Fixed by #3340
Labels
bug Something isn't working I4 No visible changes S4 Routine test Unit tests U2 Seriously planned
Milestone

Comments

@AnnaShaleva
Member

Ubuntu, Go 1.19. First discovered at https://github.com/nspcc-dev/neo-go/actions/runs/4883739029/jobs/8715507458?pr=3004.

2023-05-04T14:10:52.3767949Z === RUN   TestWSClientNonBlockingEvents
2023-05-04T14:10:52.3768321Z     wsclient_test.go:349: 
2023-05-04T14:10:52.3769041Z         	Error Trace:	/home/runner/work/neo-go/neo-go/pkg/rpcclient/wsclient_test.go:349
2023-05-04T14:10:52.3769525Z         	Error:      	Condition never satisfied
2023-05-04T14:10:52.3769973Z         	Test:       	TestWSClientNonBlockingEvents
2023-05-04T14:10:52.3770414Z --- FAIL: TestWSClientNonBlockingEvents (2.10s)
@AnnaShaleva AnnaShaleva added bug Something isn't working test Unit tests labels May 4, 2023
@AnnaShaleva AnnaShaleva added this to the v0.102.1 milestone May 4, 2023
AnnaShaleva added a commit that referenced this issue Aug 29, 2023
The race is probably caused by the fact that accessing the receivers
list of WSClient under RLock takes some time. Technically, it might
help to increase the timeout of the receivers check, but we can also
add one more condition ensuring that at least the maximum number of
messages has been received by the buffered receiver.

This failure can't be reproduced on my machine, so I'd suggest keeping
an eye on it in GH workflows for some time.

Close #3005.

Signed-off-by: Anna Shaleva <[email protected]>
@AnnaShaleva AnnaShaleva modified the milestones: v0.102.1, v0.102.0 Aug 29, 2023
@roman-khimov roman-khimov modified the milestones: v0.102.0, v0.102.1 Sep 6, 2023
@roman-khimov roman-khimov modified the milestones: v0.103.0, v0.104.0 Oct 20, 2023
@AnnaShaleva AnnaShaleva modified the milestones: v0.104.0, v0.105.0 Nov 9, 2023
@roman-khimov roman-khimov added U2 Seriously planned S4 Routine I4 No visible changes labels Dec 21, 2023
@AnnaShaleva AnnaShaleva modified the milestones: v0.105.0, v0.106.0 Dec 28, 2023
AliceInHunterland added a commit that referenced this issue Mar 5, 2024
The HTTP server should be closed at the end of the test to prevent an
fd leak.

Refs #3300
Close #3312
Close #2956
Close #3005
Ref #2958

Signed-off-by: Ekaterina Pavlova <[email protected]>
AliceInHunterland added a commit that referenced this issue Mar 6, 2024
Add waiting for startSending to ensure that the client is ready before
the server starts sending messages.

Close #3005

Signed-off-by: Ekaterina Pavlova <[email protected]>
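The eventual fix is a synchronization handshake: the server must not send until the client signals readiness. A minimal sketch of that pattern using a close-to-broadcast channel (names like sendAfterReady are illustrative, not the actual neo-go test code):

```go
package main

import "fmt"

// sendAfterReady models the server side of the test: it blocks on the
// ready channel until the client signals it has registered its receiver,
// and only then delivers the message.
func sendAfterReady(ready <-chan struct{}, out chan<- string, msg string) {
	<-ready // wait for the client's startSending-style signal
	out <- msg
}

func main() {
	ready := make(chan struct{})
	out := make(chan string, 1)

	go sendAfterReady(ready, out, "event")

	// Client side: finish setting up the subscription, then unblock the
	// server so no message can arrive before the client is listening.
	close(ready)

	fmt.Println(<-out) // event
}
```

Closing the channel (rather than sending one value) lets any number of server goroutines observe the readiness signal.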
2 participants