Preparation for publication #364
Conversation
I confirmed that all slow tests pass on a GPU. We will need to add links to the papers once they are published.
I approve all of Dan's changes. Someone should look over my own changes.
pytest --durations=10
Sad, but acceptable. We'll want to make sure that we manually run the slow tests on a GPU before merging PRs.
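For reference, a minimal sketch of how a slow, GPU-only test could be tagged so it can be selected and timed manually; the `slow` marker and test name here are assumptions, not taken from the repo:

```python
import pytest
import torch

# Hypothetical marker and test name; the repo's actual slow tests may be tagged differently.
@pytest.mark.slow
@pytest.mark.skipif(not torch.cuda.is_available(), reason="requires a GPU")
def test_build_graph_slow():
    ...

# Run manually before merging, reporting the 10 slowest tests:
#   pytest -m slow --durations=10
```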
@@ -295,7 +295,7 @@ def test_modular_arithmetic_build_graph(
         }
     )
     results = graph_build_test(config=config, atol=atol)
-    get_rib_acts_test(results, atol=0)  # Need atol=1e-3 if float32
+    get_rib_acts_test(results, atol=1e-15)  # Need atol=1e-3 if float32
Not sure why these tolerances needed to be loosened. Maybe different GPU architectures or torch versions? But 1e-15 is small enough that I don't really care.
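As a rough illustration (not from the PR) of why atol=1e-15 is achievable in float64 while float32 needs something like 1e-3: float32 carries only about 7 significant decimal digits, so accumulated rounding error sits many orders of magnitude above 1e-15.

```python
import torch

# Round-trip a float64 tensor through float32 to see how much precision is lost.
x64 = torch.rand(1000, dtype=torch.float64)
x32 = x64.to(torch.float32)
max_err = (x32.to(torch.float64) - x64).abs().max()
print(max_err)  # typically around 1e-8, far above 1e-15; hence the looser float32 tolerance
```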
@@ -185,12 +141,11 @@ def ablation_results(self, temp_object, rib_results) -> dict:
 
         return ablation_results
 
+    @pytest.mark.xfail(
Seems nicer to have this marked as an xfailing test rather than just not checking all the layers that fail.
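For context, a sketch of the pattern being preferred here, with a hypothetical test name and reason string: mark the whole test as expected to fail rather than silently skipping the layers that fail.

```python
import pytest

# Hypothetical example; the actual test, fixture, and reason live in the repo.
@pytest.mark.xfail(reason="ablation results disagree on some layers; tracked in an issue")
def test_ablations_all_layers(ablation_results):
    for node_layer, result in ablation_results.items():
        assert result is not None
```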
@@ -36,6 +36,8 @@ def get_modular_arithmetic_config(*updates: dict) -> RibBuildConfig:
     basis_formula: (1-0)*alpha
     edge_formula: squared
     n_stochastic_sources_edges: null
+    integration_method: gauss-legendre
This is consistent with the old default rib build config.
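For readers unfamiliar with the option, a small sketch (not the repo's implementation) of what a Gauss-Legendre integration method does: approximate an integral using Legendre nodes and weights rather than a uniform grid.

```python
import numpy as np

def integrate_gauss_legendre(f, n_points: int = 5) -> float:
    """Approximate the integral of f over [0, 1] with Gauss-Legendre quadrature."""
    nodes, weights = np.polynomial.legendre.leggauss(n_points)  # nodes/weights on [-1, 1]
    x = 0.5 * (nodes + 1.0)  # rescale nodes to [0, 1]
    return float(0.5 * np.sum(weights * f(x)))

print(integrate_gauss_legendre(lambda x: x**2))  # ~0.3333; exact value is 1/3
```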
LGTM. Approving pending CI pass.
Preparation for publication

Description
Marked test_distributed.test_squared_edges_are_same_dist_split_over_dataset with xfail due to an unknown error. Made an issue here.

Motivation and Context
Various bits of cleaning and fixes required before making the repo public.

Does this PR introduce a breaking change?