
Allow float32 matrices to be used as input #17

Closed
leodesigner opened this issue Feb 13, 2017 · 7 comments

@leodesigner

I was able to compile the package from source; however, I get an error during execution of least_squares (in the C++ extension — the pure Python version works):

  File "build/bdist.freebsd-11.0-RELEASE-p1-amd64/egg/implicit/als.py", line 48, in alternating_least_squares
  File "implicit/_als.pyx", line 60, in implicit._als.least_squares (implicit/_als.cpp:3561)
ValueError: Buffer dtype mismatch, expected 'double' but got 'float'

Here is the corresponding line from _als.cpp

    cdef double[:] data = Cui.data

Can you please advise how to fix this?

@leodesigner
Author

Just found a solution: the input matrix must be float64; float32 produces this error.
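For reference, a minimal sketch of the workaround (assuming the confidence matrix is a scipy sparse CSR matrix; the variable names here are illustrative, not from the library):

```python
import numpy as np
from scipy.sparse import csr_matrix

# Hypothetical item/user confidence matrix built with float32 data
Cui = csr_matrix(np.ones((5, 4), dtype=np.float32))

# The Cython extension declares its buffers as 'double' (float64),
# so cast the matrix before calling alternating_least_squares
Cui64 = Cui.astype(np.float64)

print(Cui64.dtype)
```

`scipy.sparse` matrices support `astype`, so the cast is a one-liner before the call.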

@benfred
Owner

benfred commented Feb 13, 2017

There is some code to specify if you want to generate 32 or 64 bit factors as the output of this function, but the input data needs to be a float64 confidence matrix right now =(

I'll add something soon to allow float32 inputs (and will update this issue when done) - but in the meantime pass in a float64 matrix as a workaround.

@benfred benfred changed the title Running under FreeBSD Allow float32 matrices to be used as input Feb 13, 2017
@leodesigner
Author

leodesigner commented Feb 13, 2017

Ok, thanks. I am continuing to play with your great library and comparing results against raw matrix factorization (with missing data points). Is GitHub a convenient/appropriate place to ask you about differing results?
Also, I would like to suggest a small addition: there should be an option to initialize the X, Y feature vectors before running matrix factorization (to be able to update an existing trained model, or use other methods for initialization).

P.S. Also, thank you for the insightful blog posts with visualisations 👍

@benfred
Owner

benfred commented Feb 13, 2017

Thanks for the feedback!

Either GitHub or email works for contacting me - if you think that it might be of interest to other people, maybe just create another issue, otherwise my email address is on my github profile.

I'll make sure to add the option to train on pre-initialized factors. I'm changing things around a bit anyway (moving to a class with methods for fitting the model, predicting for a user, etc.) - so I can easily put that in at the same time.

@benfred
Owner

benfred commented Feb 25, 2017

The last commit allows training on pre-initialized factors: 8c18f16#diff-604d1b48a6ae71b2cc39b27249942f12R60. It will only initialize the factors if they haven't been set already.
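The "only initialize if not already set" behaviour described above can be sketched roughly like this (a hypothetical helper, not the library's actual code):

```python
import numpy as np

def init_factors(factors, n_rows, n_factors, seed=0):
    """Return the factors unchanged if they were pre-initialized;
    otherwise create small random float64 factors, mirroring the
    behaviour described in the linked commit."""
    if factors is not None:
        return factors
    rng = np.random.RandomState(seed)
    return rng.rand(n_rows, n_factors).astype(np.float64) * 0.01

# Pre-initialized factors pass through untouched,
# so an existing trained model can be updated
X = np.ones((10, 8))
print(init_factors(X, 10, 8) is X)

# Unset factors get a fresh random initialization
Y = init_factors(None, 10, 8)
print(Y.shape)
```

This lets a caller warm-start training from factors produced by an earlier run or by another initialization method.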

@leodesigner
Author

Thanks,
I have already made modifications to my local copy of the repository to be able to supply pre-initialised vectors. I see you are refactoring your code to make it more scikit-learn style.

@benfred
Owner

benfred commented Nov 1, 2017

This commit should enable this: #62
