
Benchmark model in a verbose and transparent way #20

Open
quassy opened this issue Sep 26, 2020 · 0 comments
quassy (Collaborator) commented Sep 26, 2020

@ThorbenJensen on #19

- The higher-level purpose of this error calculation is not obvious -> maybe add high-level comments and a more verbose print at the end.
- On a lower level, the variables `squares`, `prediction`, `error`, and `nrmsd` could have more self-explanatory names, I think.

Overall, I very much like benchmarking the model this way.
Maybe an MAE error metric would be easier for end users?
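The renaming and verbose-output suggestions above could look something like this sketch. All names here (`benchmark_model`, `observed`, `predicted`, and the printed report) are hypothetical illustrations, not the repository's actual code; it computes MAE alongside the NRMSD mentioned in the comment:

```python
import numpy as np

def benchmark_model(observed, predicted):
    """Report benchmark metrics for model predictions (hypothetical helper)."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)

    residuals = predicted - observed
    mean_absolute_error = np.mean(np.abs(residuals))
    root_mean_square_deviation = np.sqrt(np.mean(residuals ** 2))
    # Normalize RMSD by the range of observed values to obtain NRMSD.
    normalized_rmsd = root_mean_square_deviation / (observed.max() - observed.min())

    # Verbose, self-explanatory output for end users:
    print(f"Benchmark over {observed.size} samples:")
    print(f"  MAE   = {mean_absolute_error:.4f}")
    print(f"  RMSD  = {root_mean_square_deviation:.4f}")
    print(f"  NRMSD = {normalized_rmsd:.4f} (fraction of observed range)")
    return {
        "mae": mean_absolute_error,
        "rmsd": root_mean_square_deviation,
        "nrmsd": normalized_rmsd,
    }
```

Descriptive names like `residuals` and `normalized_rmsd` replace the terse `squares` / `error` / `nrmsd`, and the printed summary addresses the "more verbose print" point.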

@quassy quassy changed the title Benchmark model Benchmark model in a verbose and transparent way Sep 26, 2020
@quassy quassy mentioned this issue Sep 26, 2020
@top-on top-on self-assigned this Sep 26, 2020
Labels: None yet
Projects: None yet
Development: No branches or pull requests
2 participants