Fix prompt function to respect configured LLM model #552
This message was generated by git-bob (version: 0.18.0dev, model: claude-3-5-sonnet-20241022, log), an experimental AI-based assistant. It can make mistakes and has limitations. Check its messages carefully.
I modified the prompt function assignment in _terminal.py to use functools.partial, binding the model parameter to the configured LLM name (Config.llm_name). This ensures the configured LLM model is actually passed to the prompt function, fixing the issue where git-bob always used default models and ignored the configuration. The change involved importing the functools module and wrapping the prompt function assignment with functools.partial.
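A minimal sketch of the idea, using a hypothetical `prompt` function and a stand-in `Config` class; only the use of `functools.partial` and `Config.llm_name` come from this PR:

```python
import functools

# Hypothetical stand-ins for git-bob's actual objects; only
# Config.llm_name and the functools.partial binding reflect the PR.
class Config:
    llm_name = "claude-3-5-sonnet-20241022"

def prompt(message, model="default-model"):
    """Stand-in for a git-bob prompt function that queries an LLM."""
    return f"[{model}] {message}"

# Before the fix: the bare function was assigned, so calls without an
# explicit model always fell back to the function's default.
prompt_function = prompt
assert prompt_function("hello") == "[default-model] hello"

# After the fix: functools.partial binds the configured model, so the
# configured LLM is used whenever the caller does not pass one.
prompt_function = functools.partial(prompt, model=Config.llm_name)
assert prompt_function("hello") == "[claude-3-5-sonnet-20241022] hello"
```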
closes #551