
[Nix] adjust pytest retrys #4558

Merged: 10 commits, May 29, 2024
CMakeLists.txt: 1 addition, 1 deletion
@@ -582,7 +582,7 @@ if (BUILD_TESTING)
add_test(NAME ${test_target}
COMMAND
pytest
-    -x -n=${N} --maxfail=1 --reruns=0 --cache-clear -rpfsq
+    -x -n=${N} --reruns=2 --durations=10 --cache-clear -rpfsq
-o log_cli=true --log-cli-level=DEBUG --provider-version=$ENV{S2N_LIBCRYPTO}
--provider-criterion=off --fips-mode=0 --no-pq=0 ${test_file_path}
WORKING_DIRECTORY ${PROJECT_SOURCE_DIR}/tests/integrationv2
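For context, the `--reruns` flag comes from the pytest-rerunfailures plugin: a failing test is re-executed up to that many times within the same session before pytest reports it as failed, so transient failures don't fail the build. A minimal sketch of the behavior (the file and test names here are hypothetical, not part of this PR):

```python
# flaky_sketch.py -- run with: pytest flaky_sketch.py --reruns=2
# (requires the pytest-rerunfailures plugin, which provides --reruns)

ATTEMPTS = {"count": 0}


def test_eventually_passes():
    # Fails on the first attempt; pytest-rerunfailures re-runs it in the
    # same session, so this module-level counter persists and the second
    # attempt passes. The run is then reported as passed with one rerun.
    ATTEMPTS["count"] += 1
    assert ATTEMPTS["count"] >= 2
```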
tests/integrationv2/test_happy_path.py: 2 additions, 0 deletions
@@ -1,4 +1,5 @@
import copy
+import platform
import pytest

from configuration import available_ports, ALL_TEST_CIPHERS, ALL_TEST_CURVES, ALL_TEST_CERTS, PROTOCOLS
@@ -66,6 +67,7 @@ def test_s2n_server_happy_path(managed_process, cipher, provider, curve, protoco
cipher.name)) in server_results.stdout


+@pytest.mark.flaky(reruns=5, reruns_delay=2, condition=platform.machine().startswith("aarch"))
@pytest.mark.uncollect_if(func=invalid_test_parameters)
@pytest.mark.parametrize("cipher", ALL_TEST_CIPHERS, ids=get_parameter_name)
@pytest.mark.parametrize("provider", [S2N, OpenSSL, GnuTLS, SSLv3Provider])
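On the markers added above: `@pytest.mark.flaky` is also provided by pytest-rerunfailures, and because the `condition` argument here is a plain Python expression, it is evaluated when the test module is imported, so the extra reruns only apply on hosts where `platform.machine()` starts with "aarch" (e.g. aarch64). A hedged sketch of the same pattern with a hypothetical test:

```python
import platform

import pytest


# Extra reruns apply only on ARM hosts, where platform.machine() returns
# something like "aarch64"; on x86_64 the condition is False and the marker
# adds no retries beyond whatever --reruns sets on the command line.
@pytest.mark.flaky(reruns=5, reruns_delay=2,
                   condition=platform.machine().startswith("aarch"))
def test_handshake_sketch():
    assert True
```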
tests/integrationv2/test_record_padding.py: 2 additions, 0 deletions
@@ -1,4 +1,5 @@
import copy
+import platform
import pytest
import re

@@ -123,6 +124,7 @@ def test_s2n_server_handles_padded_records(managed_process, cipher, provider, cu
cipher.name)) in server_results.stdout


+@pytest.mark.flaky(reruns=5, reruns_delay=2, condition=platform.machine().startswith("aarch"))
@pytest.mark.uncollect_if(func=invalid_test_parameters)
@pytest.mark.parametrize("cipher", TLS13_CIPHERS, ids=get_parameter_name)
@pytest.mark.parametrize("provider", [OpenSSL])
tests/integrationv2/test_renegotiate.py: 5 additions, 2 deletions
@@ -1,4 +1,5 @@
import copy
+import platform
import pytest
import random

@@ -188,13 +189,13 @@ def basic_reneg_test(managed_process, cipher, curve, certificate, protocol, prov

server = managed_process(provider, server_options,
send_marker=Msg.send_markers(messages, Provider.ServerMode),
-    timeout=5
+    timeout=8
)

s2n_client = managed_process(S2N, client_options,
send_marker=Msg.send_markers(messages, Provider.ClientMode),
close_marker=Msg.close_marker(messages),
-    timeout=5
+    timeout=8
)

return (s2n_client, server)
@@ -292,6 +293,7 @@ def test_s2n_client_renegotiate_with_openssl(managed_process, cipher, curve, cer
"""


+@pytest.mark.flaky(reruns=3, reruns_delay=1, condition=platform.machine().startswith("aarch"))
@pytest.mark.uncollect_if(func=invalid_test_parameters)
@pytest.mark.parametrize("cipher", ALL_TEST_CIPHERS, ids=get_parameter_name)
@pytest.mark.parametrize("curve", ALL_TEST_CURVES, ids=get_parameter_name)
@@ -341,6 +343,7 @@ def test_s2n_client_renegotiate_with_client_auth_with_openssl(managed_process, c
"""


+@pytest.mark.flaky(reruns=3, reruns_delay=1, condition=platform.machine().startswith("aarch"))
@pytest.mark.uncollect_if(func=invalid_test_parameters)
@pytest.mark.parametrize("cipher", ALL_TEST_CIPHERS, ids=get_parameter_name)
@pytest.mark.parametrize("curve", ALL_TEST_CURVES, ids=get_parameter_name)
tests/integrationv2/test_session_resumption.py: 2 additions, 0 deletions
@@ -1,5 +1,6 @@
import copy
import os
+import platform
import pytest

from configuration import available_ports, ALL_TEST_CIPHERS, ALL_TEST_CURVES, ALL_TEST_CERTS, PROTOCOLS, TLS13_CIPHERS
@@ -243,6 +244,7 @@ def test_tls13_session_resumption_s2n_client(managed_process, cipher, curve, cer
b'SSL_accept:SSLv3/TLS write certificate') == num_full_connections


+@pytest.mark.flaky(reruns=7, reruns_delay=2, condition=platform.machine().startswith("aarch"))

Contributor:
What are the errors when these tests fail due to flakiness? It seems like the tests could be broken if they require 5 or 7 retries in order to succeed?

Contributor (Author):
The current working theory is that this comes from the test framework's interactions with subprocesses. I bumped the timeout values from 5 seconds to 60 and then noticed that the count of retried tests exactly matches the number of tests running for 60 seconds. So these are being timed out by pytest; it's unclear why.

Contributor:
Interesting, ok. I guess the concern is that if we make a change that causes one of these tests to become flaky, we likely won't notice, since they're retrying so many times. But the extra retries only apply to ARM, so as long as we continue running them in an environment with retries disabled, we should notice a flaky regression.

Contributor (Author):
To be fair, we're now running these on x86 with 2 retries, so there is no CI running these specific integrationv2 tests with 0 retries. I don't believe we're collecting retry metrics, which would be an interesting data point...
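For reference, retry counts could be gathered from pytest itself: pytest-rerunfailures reports each retried attempt with the outcome "rerun", so a small conftest.py hook can tally them. A hypothetical sketch, not part of this PR (the hooks are standard pytest hooks; the summary format is made up):

```python
# conftest.py (hypothetical sketch): tally how often each test is rerun by
# pytest-rerunfailures and print the counts at the end of the session.
from collections import Counter

RERUNS = Counter()


def pytest_runtest_logreport(report):
    # pytest-rerunfailures reports each retried attempt with outcome "rerun".
    if report.outcome == "rerun":
        RERUNS[report.nodeid] += 1


def pytest_terminal_summary(terminalreporter):
    if RERUNS:
        terminalreporter.write_line("rerun counts:")
        for nodeid, count in RERUNS.most_common():
            terminalreporter.write_line(f"  {nodeid}: {count}")
```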

@pytest.mark.uncollect_if(func=invalid_test_parameters)
@pytest.mark.parametrize("cipher", TLS13_CIPHERS, ids=get_parameter_name)
@pytest.mark.parametrize("curve", ALL_TEST_CURVES, ids=get_parameter_name)