A powerful CLI tool that leverages OpenAI's GPT models to generate high-quality, conventional commit messages from your staged changes.
- 🤖 Uses OpenAI's GPT models to analyze your staged changes
- 📝 Generates conventional commit messages that follow best practices
- 🎯 Interactive selection from multiple commit message suggestions
- ✏️ Edit messages directly or request AI revisions
- 🧠 Advanced reasoning mode for enhanced AI interactions
- 🔍 Comprehensive debugging capabilities with file or stdout logging
- ⚡ Streaming responses for real-time feedback
- 🔄 Auto-update checks to keep you on the latest version
- 🎨 Beautiful terminal UI with color-coded output
- ⚙️ Configurable settings via YAML config file
```shell
cargo install turbocommit
```
Pro tip: Add an alias to your shell configuration for quicker access:
```shell
# Add to your .bashrc, .zshrc, etc.
alias tc='turbocommit'
```
- Stage your changes:

  ```shell
  git add .  # or stage specific files
  ```

- Generate commit messages:

  ```shell
  turbocommit  # or 'tc' if you set up the alias
  ```
After generating commit messages, you can:
- Select your preferred message from multiple suggestions
- Edit the message directly before committing
- Request AI revisions with additional context or requirements
- Commit the message once you're satisfied
- `-n <number>` - Number of commit message suggestions to generate
- `-t <temperature>` - Temperature for the GPT model (0.0 to 2.0; no effect in reasoning mode)
- `-f <frequency_penalty>` - Frequency penalty (-2.0 to 2.0)
- `-m <model>` - Specify the GPT model to use
- `-r, --enable-reasoning` - Enable support for models with reasoning capabilities (like the o-series)
- `--reasoning-effort <level>` - Set reasoning effort for supported models (low/medium/high, default: medium)
- `-d, --debug` - Show basic debug info in the console
- `--debug-file <path>` - Write detailed debug logs to a file (use `-` for stdout)
- `--auto-commit` - Automatically commit with the generated message
- `--api-key <key>` - Provide the API key directly
- `--api-endpoint <url>` - Custom API endpoint URL
- `-p, --print-once` - Disable streaming output
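These options can be combined freely. For example (illustrative invocations built only from the flags listed above; the endpoint URL is a placeholder):

```shell
# Generate 5 suggestions with a higher temperature, logging details to a file
turbocommit -n 5 -t 1.2 --debug-file debug.log

# Point at a custom endpoint and commit automatically with the generated message
turbocommit --api-endpoint https://example.com/v1 --auto-commit
```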
When using models that support reasoning capabilities (like OpenAI's o-series), this mode enables their built-in reasoning features, letting the model apply its own step-by-step reasoning while analyzing your staged changes and generating commit messages.
Example usage:
```shell
turbocommit -r -m o3-mini -n 1                          # Enable reasoning mode with default effort
turbocommit -r --reasoning-effort high -m o3-mini -n 1  # Specify reasoning effort
```
Debug output helps troubleshoot API interactions:
```shell
turbocommit -d                      # Basic info to console
turbocommit --debug-file debug.log  # Detailed logs to file
turbocommit --debug-file -          # Detailed logs to stdout
```
The debug logs include:
- Request details (model, tokens, parameters)
- API responses and errors
- Timing information
- Full request/response JSON (in file mode)
Different models have different capabilities and limitations:

Reasoning models (o-series):
- Support reasoning mode
- Do not support temperature/frequency parameters
- May not support multiple choices (`-n`)
- Optimized for specific tasks

Standard models:
- Support all parameters
- Multiple choices available
- Temperature and frequency tuning
- Standard reasoning capabilities
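As a rule of thumb, use the tuning flags with standard models and `-r` with reasoning models. For example (illustrative invocations using the documented flags; the model names are examples):

```shell
# Standard model: tune temperature/frequency and request several suggestions
turbocommit -m gpt-4 -t 1.2 -f 0.5 -n 3

# Reasoning model: enable reasoning mode and request a single suggestion
turbocommit -r --reasoning-effort high -m o3-mini -n 1
```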
For more options, run:
```shell
turbocommit --help
```
turboCommit creates a config file at `~/.turbocommit.yaml` on first run. You can customize:
- Default model
- API endpoint
- Temperature and frequency penalty
- Number of suggestions
- System message prompt
- Auto-update checks
- Reasoning mode defaults
- And more!
Example configuration:
```yaml
model: "gpt-4"
default_temperature: 1.0
default_frequency_penalty: 0.0
default_number_of_choices: 3
enable_reasoning: true
reasoning_effort: "medium"
disable_print_as_stream: false
disable_auto_update_check: false
```
Contributions are welcome! Feel free to open issues and pull requests.
Licensed under MIT - see the LICENSE file for details.