May want to move c_vec_index into an OFI-MTL-specific struct, which would be pointed to by the c_pml_comm field in ompi_communicator_t.
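A rough sketch of that idea; the struct name and field set here are hypothetical, not existing OFI MTL types:

/* Hypothetical OFI-MTL-specific per-communicator state; the name
 * and layout are illustrative only. */
struct ompi_mtl_ofi_comm {
    uint32_t c_vec_index;   /* index into the MTL's per-communicator vector */
};
typedef struct ompi_mtl_ofi_comm ompi_mtl_ofi_comm_t;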
Maybe in ompi_mtl_ofi_add_comm we'd know whether we're using a GLOBAL CID or not.
Something like the following, for instance, could be the place to set up an OFI-MTL-specific c_vec_index when a global CID is not in use:
__opal_attribute_always_inline__ static inline int
ompi_mtl_ofi_add_comm(struct mca_mtl_base_module_t *mtl,
                      struct ompi_communicator_t *comm)
{
    int ret;
    mca_mtl_ofi_ep_type ep_type = (0 == ompi_mtl_ofi.enable_sep) ?
                                  OFI_REGULAR_EP : OFI_SCALABLE_EP;

    /*
     * If thread grouping enabled, add new OFI context for each communicator
     * other than MPI_COMM_SELF.
     */
    if ((ompi_mtl_ofi.thread_grouping && (MPI_COMM_SELF != comm)) ||
        /* If no thread grouping, add new OFI context only
         * for MPI_COMM_WORLD. */
        (!ompi_mtl_ofi.thread_grouping && (!ompi_mtl_ofi.is_initialized))) {
        ret = ompi_mtl_ofi_init_contexts(mtl, comm, ep_type);
        ompi_mtl_ofi.is_initialized = true;
        if (OMPI_SUCCESS != ret) {
            goto error;
        }
    }

    return OMPI_SUCCESS;

error:
    return OMPI_ERROR;
}
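A minimal sketch of what that non-global-CID setup might look like, assuming the hypothetical ompi_mtl_ofi_comm_t above; the global-CID predicate and the index-allocation helper are placeholders, not existing APIs:

#include <stdlib.h>

/* Sketch only: hang the hypothetical per-communicator struct off
 * c_pml_comm when a global CID is not in use. */
static inline int
ompi_mtl_ofi_setup_comm_index(struct ompi_communicator_t *comm)
{
    ompi_mtl_ofi_comm_t *ofi_comm;

    if (comm_uses_global_cid(comm)) {   /* hypothetical predicate */
        return OMPI_SUCCESS;            /* global CID: nothing to set up */
    }

    ofi_comm = malloc(sizeof(*ofi_comm));
    if (NULL == ofi_comm) {
        return OMPI_ERR_OUT_OF_RESOURCE;
    }

    ofi_comm->c_vec_index = allocate_vec_index(comm);   /* hypothetical helper */
    comm->c_pml_comm = (struct mca_pml_comm_t *)ofi_comm;

    return OMPI_SUCCESS;
}

ompi_mtl_ofi_add_comm could then call something like this after ompi_mtl_ofi_init_contexts, with the matching free presumably going in the corresponding del_comm path.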
closed via 7cae326