
Tpetra: Make sparse matrix-matrix multiply thread-parallel #629

Closed
mhoemmen opened this issue Sep 19, 2016 · 6 comments

Labels: CLOSED_DUE_TO_INACTIVITY, MARKED_FOR_CLOSURE, pkg: Tpetra, story, TpetraRF

Comments

mhoemmen (Contributor) commented Sep 19, 2016

@trilinos/tpetra
This is related to #148 and #430. It may not require completion of either of those.

mhoemmen added the pkg: Tpetra and story labels Sep 19, 2016
mhoemmen added this to the Tpetra-threading milestone Sep 19, 2016
mhoemmen modified the milestones: Tpetra-backlog, Tpetra-threading Nov 2, 2016
mhoemmen (Contributor, Author) commented Dec 6, 2016

@csiefer2 has been working on this, so I'm assigning him just so he gets credit for finishing it :-)

mhoemmen (Contributor, Author) commented
Note that @csiefer2 finished the OpenMP integration in FY17 Q1. It is unclear whether CUDA is part of the FY17 requirement. (CUDA builds must run correctly, but it is unclear whether we need to plug in a sparse matrix-matrix multiply kernel for CUDA; is it acceptable for now to run it on the host?)

csiefer2 (Member) commented Jan 10, 2017

To build with the OpenMP kernels, you need these configure options:

-D Trilinos_ENABLE_OpenMP:BOOL=ON 
-D TpetraKernels_ENABLE_Experimental:BOOL=ON
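For context, a sketch of how those two options might fit into a complete Trilinos configure invocation; the build type, the Tpetra enable, and the source path below are placeholders, not taken from this issue:

cmake \
  -D CMAKE_BUILD_TYPE:STRING=RELEASE \
  -D Trilinos_ENABLE_Tpetra:BOOL=ON \
  -D Trilinos_ENABLE_OpenMP:BOOL=ON \
  -D TpetraKernels_ENABLE_Experimental:BOOL=ON \
  /path/to/Trilinos   # path to the Trilinos source tree (placeholder)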

mhoemmen (Contributor, Author) commented
The next step is to enable this capability by default for OpenMP builds. We'll need to make sure that tests pass. Since CUDA is not an immediate priority, for now we should enable it by default only for non-CUDA builds.
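For reference, a minimal sketch of the user-facing call this capability sits behind: Tpetra::MatrixMatrix::Multiply (declared in TpetraExt_MatrixMatrix.hpp) computes C = A*B, and the CrsMatrix Node template parameter (left at its default here) selects the execution space, so an OpenMP-enabled build would exercise the threaded kernel. The diagonal placeholder matrices, the 100-row problem size, and the default template parameters are illustrative assumptions, not from this issue.

#include <Tpetra_Core.hpp>
#include <Tpetra_Map.hpp>
#include <Tpetra_CrsMatrix.hpp>
#include <TpetraExt_MatrixMatrix.hpp>
#include <Teuchos_RCP.hpp>
#include <Teuchos_Tuple.hpp>

int main (int argc, char* argv[]) {
  Tpetra::ScopeGuard tpetraScope (&argc, &argv);
  {
    using map_type = Tpetra::Map<>;
    using crs_matrix_type = Tpetra::CrsMatrix<>;
    using LO = map_type::local_ordinal_type;
    using Teuchos::RCP;
    using Teuchos::rcp;

    auto comm = Tpetra::getDefaultComm ();
    const Tpetra::global_size_t numGlobalRows = 100;
    RCP<const map_type> rowMap = rcp (new map_type (numGlobalRows, 0, comm));

    // Placeholder inputs: A = 2*I and B = 3*I, one entry per row.
    RCP<crs_matrix_type> A = rcp (new crs_matrix_type (rowMap, 1));
    RCP<crs_matrix_type> B = rcp (new crs_matrix_type (rowMap, 1));
    const LO numLclRows = static_cast<LO> (rowMap->getLocalNumElements ());
    for (LO lclRow = 0; lclRow < numLclRows; ++lclRow) {
      const auto gblRow = rowMap->getGlobalElement (lclRow);
      A->insertGlobalValues (gblRow, Teuchos::tuple (gblRow), Teuchos::tuple (2.0));
      B->insertGlobalValues (gblRow, Teuchos::tuple (gblRow), Teuchos::tuple (3.0));
    }
    A->fillComplete ();
    B->fillComplete ();

    // C = A * B.  The sparse matrix-matrix multiply runs in the matrices'
    // execution space (e.g., OpenMP in an OpenMP-enabled build).
    crs_matrix_type C (rowMap, 0);
    Tpetra::MatrixMatrix::Multiply (*A, false, *B, false, C);
  }
  return 0;
}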

mhoemmen removed this from the Tpetra-FY17-Q4 milestone Oct 13, 2017
github-actions (bot) commented
This issue has had no activity for 365 days and is marked for closure. It will be closed after an additional 30 days of inactivity.
If you would like to keep this issue open, please add a comment and remove the MARKED_FOR_CLOSURE label.
If this issue should be kept open even with no activity beyond the time limits, you can add the DO_NOT_AUTOCLOSE label.

github-actions bot added the MARKED_FOR_CLOSURE label Jan 17, 2021
github-actions (bot) commented
This issue was closed after 395 days of inactivity.

github-actions bot added the CLOSED_DUE_TO_INACTIVITY label Feb 17, 2021
jhux2 added this to Tpetra Aug 12, 2024
jhux2 moved this to Done in Tpetra Aug 12, 2024
Projects
Status: Done
Development

No branches or pull requests

3 participants