Reorganize examples #179
Conversation
Force-pushed from 291638a to 7daf894.
Thanks! Mainly looks good; dropped some comments.
FLAGS.short_seq_prob,
FLAGS.masked_lm_prob,
FLAGS.max_predictions_per_seq,
PREPROCESSING_CONFIG["max_seq_length"],
Seems we are mixing PREPROCESSING_CONFIG with flags. May I ask why we only move part of the flags to the config?
Things related to input and output need to be flags, I think: input files, output files, etc. Preprocessing hyperparameters I was moving into code, per our discussions as a group. Maybe someday we will want to actually pull vocab files automatically from some fixed locations, but not for this PR.
That just leaves do_lowercase. I think that's better to keep as a flag, as it is essentially an entire new variant of a model. bert_size + do_lowercase together decide which "bert" you are building, and then we pull the config that corresponds to that. I think that is the UX we want.
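To make the intended split concrete, here is a minimal sketch of how it could look, assuming the absl flags library this script already uses; the flag names, defaults, and config values are illustrative, not taken from the PR:

# A minimal sketch of the flags-vs-config split described above.
# All names and values here are illustrative, not from the PR.
from absl import flags

FLAGS = flags.FLAGS

# Input and output paths stay as flags.
flags.DEFINE_string("input_files", None, "Comma-separated list of input files.")
flags.DEFINE_string("output_file", None, "Where to write preprocessed records.")

# do_lowercase and bert_size stay as flags because together they
# select an entire model variant.
flags.DEFINE_bool("do_lowercase", True, "Whether to lowercase input text.")
flags.DEFINE_string("bert_size", "base", "Model size, e.g. 'base' or 'large'.")

# Preprocessing hyperparameters move into code.
PREPROCESSING_CONFIG = {
    "max_seq_length": 128,
    "short_seq_prob": 0.1,
    "masked_lm_prob": 0.15,
    "max_predictions_per_seq": 20,
}

# (bert_size, do_lowercase) together pick which "bert" config to pull.
MODEL_CONFIGS = {
    ("base", True): {"vocab_size": 30522, "hidden_size": 768},
    ("base", False): {"vocab_size": 28996, "hidden_size": 768},
}

def get_model_config():
    return MODEL_CONFIGS[(FLAGS.bert_size, FLAGS.do_lowercase)]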
I have the feeling that this definition does not have a clear boundary, so for me there are two places to change to get the right config. But I guess it is hard to find a "best" solution in such a UX discussion, so I will just leave this to you to decide.
Agreed the boundary is not clear, but the idea should be that more flags is always a bad thing, so we should avoid them when possible without breaking workflows.
examples/README.md (outdated)
├── __init__.py
├── modelname_config.py
├── modelname_model.py
├── modelname_pretrain.py
train? (i.e., modelname_train.py?)
Agreed!
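With that rename applied, the tree in examples/README.md would presumably become:

├── __init__.py
├── modelname_config.py
├── modelname_model.py
├── modelname_train.py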
This looks good to me. One thing: have you verified the change by running the quick test?
Yep, though I will need to check again after getting these changes in.
Force-pushed from 3c9d246 to 408a6c1.
- Configuration in code, not json
- Toplevel README
- Clearer script names
- Better directory organization

Fixes keras-team#177
Force-pushed from 408a6c1 to 158793e.
All comments addressed.
Fixes #177