ctransformers: move thread and seed parameters #3543
Conversation
Make it more in line with the webui's way of managing loaders. While ctransformers allows the number of threads and the seed to be set during both the loading and generation phases, this change sets the threads during model load and the seed during the generate phase, to align with llama.cpp's behavior. The seed value is now taken from the Parameters page, where -1 means random.
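For illustration, a minimal sketch of the split this PR describes, assuming a ctransformers version whose from_pretrained() accepts a threads keyword and whose __call__ accepts a seed keyword (exact names and defaults may differ between versions):

    # Sketch only: threads fixed at load time, seed supplied at generation time.
    # Assumes ctransformers exposes `threads` on from_pretrained() and `seed` on __call__().
    from ctransformers import AutoModelForCausalLM

    llm = AutoModelForCausalLM.from_pretrained(
        "path/to/model.bin",   # hypothetical local model path
        model_type="llama",
        threads=8,             # set once at model load, like llama.cpp
    )

    # Seed comes from the Parameters page at generation time; -1 means random.
    print(llm("Roll a 20-sided dice.", seed=-1, temperature=0.7))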
Thank you. I confirm that the […]
A typo? I presumed that was intended behaviour.
Yes, I meant same seeds*
@oobabooga in ctransformers' ctransformers/llm.py:
Eventually it makes its way to LLM::Sample in the C++ code (models/llm.h):
From what I understand, as long as seed is -1, it should be random.
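As a rough Python rendering of that behaviour (a hypothetical helper, not the actual ctransformers code, which handles this on the C++ side in models/llm.h):

    import time

    def resolve_seed(seed: int) -> int:
        # A negative seed (the -1 default) is treated as "pick a random seed";
        # any non-negative value is used as-is, giving reproducible sampling.
        if seed < 0:
            return int(time.time())
        return seed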
I tried it. Make sure to set a high temperature/top_k/top_p to make the replies unpredictable.
From my testing, it seems to be working (using the same model and the simple-1 Parameters preset): Instruction: Roll a 20-sided dice.
Yes. Is […]
@@ -49,6 +47,7 @@ def decode(self, ids):

    def generate(self, prompt, state, callback=None):
        prompt = prompt if type(prompt) is str else prompt.decode()
        # ctransformers uses -1 for random seed
        generator = self.model._stream(
Please use self.model(), as self.model._stream() is an internal method and should not be used directly.
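A hedged sketch of what that change could look like inside generate(), assuming self.model is a ctransformers LLM whose __call__ accepts stream=True along with the usual sampling parameters (the state keys shown are illustrative, not confirmed by this diff):

    # Sketch: call the public API with stream=True instead of the internal _stream().
    generator = self.model(
        prompt,
        max_new_tokens=state['max_new_tokens'],
        temperature=state['temperature'],
        top_p=state['top_p'],
        top_k=state['top_k'],
        repetition_penalty=state['repetition_penalty'],
        seed=int(state['seed']),  # -1 means random
        stream=True,
    )
    for text in generator:
        if callback is not None:
            callback(text)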