
Kernel specialization of EwBinaryObjSca for DenseMatrix,double and double scalar multiplication #346

Open
aristotelis96 opened this issue Apr 29, 2022 · 1 comment
Labels
good first issue Non-urgent, simple task with limited scope, suitable for getting started as a contributor.

Comments

@aristotelis96 (Collaborator)

Currently, the EwBinaryObjSca kernel supports element-wise multiplications between dense matrices and scalars of the same value type (DenseMatrix<double> * double, DenseMatrix<int> * int). We also need to support this operation between different value types. This would allow us to run more scripts, such as scripts/lmDS.daph, which requires an element-wise multiplication between a DenseMatrix<double> and an int.

We need to specialize the EwBinaryObjSca kernel for DenseMatrix<double> * int. The result should be a DenseMatrix<double>.

This issue asks you to implement:

  1. A partial specialization of the EwBinaryObjSca kernel for DenseMatrix<double> * int64_t (result: DenseMatrix<double>).
  2. Ideally, some test cases in test/runtime/local/kernels/EwBinaryObjScaTest.cpp.
  3. Optionally, a partial specialization for DenseMatrix<int64_t> * double (result: DenseMatrix<double>).
@aristotelis96 aristotelis96 added the good first issue Non-urgent, simple task with limited scope, suitable for getting started as a contributor. label Apr 29, 2022
@aristotelis96 (Collaborator, Author)

@pdamme do we still need this? I remember in #402 we discussed that type inference + type promotion solve these kinds of issues; however, I think it depends on whether an operation has the trait CastArgsToResType. Is there a benefit to having different specializations at the kernel level (and removing the trait)? In my opinion, if a kernel specialization offers a faster/better implementation, then the answer is probably yes, though I am not sure whether that applies to this case.
