Preparation for publication #364

Merged: 12 commits into main on Apr 19, 2024
Conversation

@danbraunai-apollo (Contributor) commented Apr 9, 2024


Description

  • Make the workflow run only slow tests and use a GitHub-hosted runner
  • Create requirements-cpu.txt to avoid installing mpi4py in the CPU workflow
  • Mark test_distributed.test_squared_edges_are_same_dist_split_over_dataset with xfail due to an unknown error (issue filed here); a sketch of the marking appears after this list
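
For reference, a minimal sketch of what such an xfail marking can look like. The reason string and the test body here are assumptions; the real test lives in test_distributed and compares squared-edge results across dataset splits.

    import pytest

    # Hypothetical stand-in for the real test; only the marking pattern matters here.
    @pytest.mark.xfail(reason="unknown error when the run is split over the dataset")
    def test_squared_edges_are_same_dist_split_over_dataset():
        # pytest reports this failure as XFAIL rather than a hard failure,
        # so CI stays green while the underlying issue is tracked.
        assert False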

Motivation and Context

Various bits of cleaning and fixes required before making the repo public.

Does this PR introduce a breaking change?

@danbraunai-apollo (Contributor, Author) commented Apr 9, 2024

  • Fix the workflow so that it works with the Ubuntu runner
  • Refer to the paper instead of the old writeup in the README (two places: the part that says “section of this writeup” and near the beginning)
  • Update the “process for building rib graph” section in the README by adding options for modularity and ablations
  • Change the defaults to jacobian. @nix-apollo @stefan-apollo, do you think we're fine doing this?
  • Mention how to run slow, fast, and distributed tests in the README (see the conftest sketch after this list)
  • Make the repo public
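
A hypothetical conftest.py sketch of one common way to gate slow tests behind a marker and a --runslow flag (the pattern from the pytest documentation); the actual marker names and selection mechanism in this repo are assumptions here.

    import pytest

    def pytest_addoption(parser):
        parser.addoption("--runslow", action="store_true", default=False,
                         help="also run tests marked as slow")

    def pytest_collection_modifyitems(config, items):
        if config.getoption("--runslow"):
            return  # run everything, including slow tests
        skip_slow = pytest.mark.skip(reason="needs --runslow to run")
        for item in items:
            if "slow" in item.keywords:
                item.add_marker(skip_slow)

With this wiring, a plain pytest run executes only fast tests, and pytest --runslow includes the slow ones.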

@nix-apollo (Collaborator) commented:

I confirmed that all slow tests pass on a GPU. We will need to add links to the papers once they are published.

@nix-apollo (Collaborator) left a review:

I approve all of Dan's changes. Someone should look over my own changes.

Inline review comment on pytest --durations=10, which reports the ten slowest tests (Collaborator):

Sad, but acceptable. We'll want to make sure that we manually run slow tests on gpu before merging PRs.

@@ -295,7 +295,7 @@ def test_modular_arithmetic_build_graph(
         }
     )
     results = graph_build_test(config=config, atol=atol)
-    get_rib_acts_test(results, atol=0)  # Need atol=1e-3 if float32
+    get_rib_acts_test(results, atol=1e-15)  # Need atol=1e-3 if float32
A collaborator commented:

Not sure why these tolerances needed to be loosened. Maybe different GPU architectures or torch versions? But 1e-15 is small enough that I don't really care.
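
For context, a minimal sketch of how an absolute tolerance behaves, using torch.allclose as a stand-in for whatever comparison get_rib_acts_test performs internally (an assumption):

    import torch

    a = torch.tensor([1.0, 2.0], dtype=torch.float64)
    b = torch.nextafter(a, a + 1)  # one ulp above a: differences of ~2e-16 and ~4e-16

    # allclose passes when |a - b| <= atol + rtol * |b|
    assert torch.allclose(a, b, atol=1e-15, rtol=0)    # ulp-scale noise is tolerated
    assert not torch.allclose(a, b, atol=0.0, rtol=0)  # atol=0 demands exact equality

So atol=1e-15 tolerates float64 rounding noise while remaining far tighter than the 1e-3 needed for float32.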

@@ -185,12 +141,11 @@ def ablation_results(self, temp_object, rib_results) -> dict:

         return ablation_results

+    @pytest.mark.xfail(
A collaborator commented:

Seems nicer to have this marked as an xfailing test rather than just not checking all the layers that fail.

@@ -36,6 +36,8 @@ def get_modular_arithmetic_config(*updates: dict) -> RibBuildConfig:
     basis_formula: (1-0)*alpha
     edge_formula: squared
+    n_stochastic_sources_edges: null
+    integration_method: gauss-legendre
A collaborator commented:

This is consistent with the old default rib build config.
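
For readers unfamiliar with the option, a brief sketch of Gauss-Legendre quadrature itself, via numpy's leggauss; how rib applies it internally (presumably to the alpha integral implied by the basis formula above) is an assumption here.

    import numpy as np

    def gauss_legendre_integrate(f, a, b, n_points=10):
        """Approximate the integral of f over [a, b] with an n-point Gauss-Legendre rule."""
        x, w = np.polynomial.legendre.leggauss(n_points)  # nodes and weights on [-1, 1]
        x_mapped = 0.5 * (b - a) * x + 0.5 * (b + a)      # affine map to [a, b]
        return 0.5 * (b - a) * np.sum(w * f(x_mapped))

    # An n-point rule integrates polynomials up to degree 2n - 1 exactly:
    print(gauss_legendre_integrate(lambda t: t**3, 0.0, 1.0))  # 0.25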

@danbraunai-apollo (Contributor, Author) left a review:

LGTM. Approving pending CI pass.

@nix-apollo merged commit 3b628b5 into main on Apr 19, 2024. 1 check passed.