Could you share the link to the Habr post about validation that you mentioned? #1

Open
Sandy4321 opened this issue Jul 12, 2019 · 4 comments

@Sandy4321

Could you share the link to the Habr post about validation that you mentioned?
https://youtu.be/Otw9MAkXQD4?t=15
Myths about model interpretation – Danila Savenkov

Maybe it has a new title?
https://habr.com/en/company/ods/blog/336168/
Kaggle Mercedes and cross-validation

@Danila89
Owner

Danila89 commented Jul 14, 2019

Yes, this is exactly the link I mentioned in the video: https://habr.com/en/company/ods/blog/336168/

@Sandy4321
Author

Great, thanks. By the way, what should we do if we need more folds? Won't shuffling give very similar data chunks?
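
To illustrate the worry, here is a minimal sketch (assuming scikit-learn and an arbitrary 100-sample dataset, both chosen purely for illustration): with 5 folds, the training sets from two differently shuffled runs necessarily share at least 75% of their samples, so the repeats are far from independent.

```python
# Minimal sketch of the overlap concern, assuming scikit-learn.
import numpy as np
from sklearn.model_selection import KFold

indices = np.arange(100)  # stand-in for any dataset of 100 samples

# Two 5-fold splits that differ only in the shuffle seed.
trains_a = [set(tr) for tr, _ in KFold(5, shuffle=True, random_state=0).split(indices)]
trains_b = [set(tr) for tr, _ in KFold(5, shuffle=True, random_state=1).split(indices)]

# Fraction of samples each pair of training sets has in common.
# With 5 folds each training set keeps 80/100 samples, so any two
# training sets must share at least 60/80 = 75% of their members.
overlaps = [len(a & b) / len(a) for a in trains_a for b in trains_b]
print(f"train-set overlap across runs: min={min(overlaps):.0%}, mean={np.mean(overlaps):.0%}")
```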

@Sandy4321
Author

Did you try leave-one-out?
https://statmodeling.stat.columbia.edu/2018/06/05/comments-limitations-bayesian-leave-one-cross-validation-model-selection/
Comments on Limitations of Bayesian Leave-One-Out Cross-Validation for Model Selection

@Danila89
Owner

I can't imagine a situation where we would really need a lot of folds. Very small folds tend to give higher variance, so there is no point in increasing the number of folds too much. In general, the more folds you have, the fewer times you need to run cross-validation. The limit case is leave-one-out, where you need to run it only once.
I have not tried Bayesian leave-one-out.
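
As a minimal sketch of this trade-off (assuming scikit-learn, a Ridge model, and a synthetic regression dataset, all hypothetical choices for illustration): leave-one-out scores each fold on a single point, so its per-fold scores are far noisier than 5-fold scores, but the split is deterministic and only ever needs a single run.

```python
# Sketch: per-fold score variance for 5-fold CV vs. leave-one-out.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold, LeaveOneOut, cross_val_score

X, y = make_regression(n_samples=100, n_features=10, noise=10.0, random_state=0)
model = Ridge()

# 5-fold: each score averages over 20 test points, so fold scores are stable,
# and the split depends on the shuffle seed (hence can be repeated).
kf_scores = cross_val_score(
    model, X, y, scoring="neg_mean_squared_error",
    cv=KFold(n_splits=5, shuffle=True, random_state=0),
)

# Leave-one-out: 100 folds, each scored on a single point, so fold scores are
# much noisier, but the split is deterministic and needs only one run.
loo_scores = cross_val_score(model, X, y, scoring="neg_mean_squared_error", cv=LeaveOneOut())

print(f"5-fold: mean={kf_scores.mean():.1f}, std across folds={kf_scores.std():.1f}")
print(f"LOO   : mean={loo_scores.mean():.1f}, std across folds={loo_scores.std():.1f}")
```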
