Check the full vocab for grammar only if necessary #4306
Changes from 3 commits

Commits in the pull request: 2e3b4f6, 281e2ba, de454b9, 245de1f, f5f9d96, 115a921, b4377ee, 88fd22c
```diff
@@ -98,10 +98,11 @@ std::string llama_sampling_print(const llama_sampling_params & params);
 // - candidates: vector of candidate tokens
 //
 llama_token llama_sampling_sample(
-        struct llama_sampling_context * ctx_sampling,
-        struct llama_context * ctx_main,
-        struct llama_context * ctx_cfg,
-        int idx = 0);
+        struct llama_sampling_context * ctx_sampling,
+        struct llama_context * ctx_main,
+        struct llama_context * ctx_cfg,
+        const int idx,
+        bool is_resampling = false);  // Add the new parameter with default value
```
Review thread on the `is_resampling` line:

Should hide the

The comment here is also redundant, technically.

Hm, not sure what you mean. The candidates array is part of the sampling context and it is cleared and populated on each call. The proposed code should work - does it not work?

It does seem to work; my concern was that I wasn't sure whether recursively calling the function would actually clear and repopulate the candidates when we are resampling the same token. If my implementation is correct (it seems to work in my brief testing), then this is probably mergeable.
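To make the control flow under discussion concrete, here is a rough sketch (not the exact patch) of how a sampled token can be checked against the grammar on its own, so that the full-vocabulary grammar pass only runs when that token is rejected. It uses llama.cpp API calls of this period (`llama_get_logits_ith`, `llama_sample_grammar`, `llama_token_data_array`) and assumes the sampling context exposes its grammar as `ctx_sampling->grammar`; the helper name is made up for illustration.

```cpp
#include <cmath>

#include "llama.h"
#include "sampling.h"   // common/sampling.h, include path as used inside the repo

// Illustrative helper, not part of the library: returns true if `id` would
// survive the grammar constraint. The grammar is applied to a single-token
// candidate array; a rejected token has its logit forced to -INFINITY.
static bool token_passes_grammar(
        struct llama_sampling_context * ctx_sampling,
        struct llama_context          * ctx_main,
        const llama_token               id,
        const int                       idx) {
    if (ctx_sampling->grammar == NULL) {
        return true;
    }

    const float * logits = llama_get_logits_ith(ctx_main, idx);

    llama_token_data       single       = { id, logits[id], 0.0f };
    llama_token_data_array single_array = { &single, 1, false };

    llama_sample_grammar(ctx_main, &single_array, ctx_sampling->grammar);

    return single_array.data[0].logit != -INFINITY;
}

// Inside llama_sampling_sample, the fast path could then look roughly like:
//
//     if (!is_resampling && !token_passes_grammar(ctx_sampling, ctx_main, id, idx)) {
//         // apply the grammar to the full candidate list, then sample again
//         return llama_sampling_sample(ctx_sampling, ctx_main, ctx_cfg, idx, /*is_resampling=*/true);
//     }
```

Because the candidates vector is cleared and repopulated on each call (as noted above), the recursive call with `is_resampling = true` samples from a freshly grammar-constrained distribution.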
Unchanged context following the hunk:

```cpp
void llama_sampling_accept(
        struct llama_sampling_context * ctx_sampling,
```
Yeah, we could probably expose something more straightforward to check this if we want. It would probably be like `llama_grammar_accept_token` but returning a bool instead of throwing.
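For reference, a non-throwing checker along those lines might look something like the declaration below. The name `llama_grammar_token_is_valid` is hypothetical and not part of the library; it is only meant to illustrate the proposed shape of such an API.

```cpp
// Hypothetical API sketch, not in llama.cpp: like llama_grammar_accept_token,
// but it only reports whether `token` is allowed by the grammar's current
// state, without advancing the grammar stacks and without throwing.
bool llama_grammar_token_is_valid(
              struct llama_context * ctx,
        const struct llama_grammar * grammar,
              llama_token            token);
```

With something like this exposed, the single-token `llama_sample_grammar` workaround sketched earlier would collapse into one boolean check before deciding whether to resample.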