@@ -40,214 +40,193 @@ graph TD
### Prerequisites

- - Python: Version 3.11 or higher required
- - Dependency Management: Poetry for easy setup
- - Model Files: Ensure you have at least one supported Llama model available, or be prepared to download one
+ - Python 3.11 or higher
+ - Virtual environment (recommended)
+ - CUDA-capable GPU (optional)

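The diff does not include a preflight check for these prerequisites, but a minimal sketch of one is shown below. It assumes PyTorch is the library used for GPU detection, which is an assumption; the guide only states that a CUDA-capable GPU is optional.

```python
"""Preflight check sketch: confirms the interpreter version and optional GPU support."""
import sys

# The guide requires Python 3.11 or higher.
assert sys.version_info >= (3, 11), "LlamaHome requires Python 3.11 or higher"

try:
    import torch  # assumption: PyTorch provides the CUDA check
    print(f"CUDA available: {torch.cuda.is_available()}")
except ImportError:
    print("PyTorch not installed; GPU acceleration unavailable (it is optional)")
```
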
- ### Installation Steps
+ ### Installation

- 1. Install Poetry (if not already installed):
+ 1. Clone and Install:

```bash
- curl -sSL https://install.python-poetry.org | python3 -
+ git clone https://github.com/zachshallbetter/llamahome.git
+ cd llamahome
+ make setup
```

- 2. Clone and Install LlamaHome:
+ 2. Set Up Environment:

```bash
- git clone https://github.com/llamahome/llamahome.git
- cd llamahome
+ # Copy example environment file
+ cp .env.example .env

- poetry install  # Install dependencies
- poetry shell    # Activate virtual environment
+ # Edit environment variables
+ nano .env
```

- 3. Set Up Environment Variables:
+ 3. Verify Installation:

```bash
- # UNIX-like systems
- export LLAMA_HOME_MODEL_PATH=/path/to/llama/model
+ # Activate virtual environment
+ source .venv/bin/activate  # Unix/macOS
+ # or
+ .venv\Scripts\activate  # Windows

- # Windows (PowerShell)
- $Env:LLAMA_HOME_MODEL_PATH="C:\path\to\llama\model"
+ # Run CLI
+ python -m src.interfaces.cli
```

- Adjust these paths and environment variables according to your setup.
-
- ### Launching the CLI
-
- Once the environment is prepared, launch the CLI with:
-
- ```bash
- llamahome
- ```
-
- If you've installed using Poetry, you may need:
-
- ```bash
- poetry run llamahome
- ```
-

## CLI Features

### Command History & Navigation

- - History Recall: Use Up/Down arrows to navigate through previously executed commands
- - Search History: Press Ctrl+R to search through your command history
- - Persisted History: By default, history is saved in `.config/history.txt`, so it's available after restarts
+ - History Recall: Up/Down arrows
+ - Search History: Ctrl+R
+ - History saved in `.config/history.txt`

### Auto-Completion & Suggestions

- - Tab Completion: Press Tab to auto-complete commands, model names, and file paths
- - Dynamic Suggestions: As you type, suggestions appear in gray. Press Right Arrow to accept them
- - Multiple Options: If multiple completions are available, a menu appears. Use arrow keys to navigate and Enter to select
+ - Tab Completion: Commands, models, paths
+ - Dynamic Suggestions: Gray text, Right Arrow to accept
+ - Multiple Options: Arrow keys to navigate

### Key Bindings

- - Ctrl+C: Cancel the current operation (if any)
- - Ctrl+D: Exit the CLI
- - Arrow Keys: Move the cursor left/right through the current line or up/down through history
- - Home/End: Jump to the start or end of the line
- - Ctrl+K/U/W/Y: Edit text inline (cut/paste words or entire lines)
-
- ### Mouse Support
-
- - Click to Position Cursor: Jump to any point in the command line
- - Click to Select Completion Options: Quickly choose suggestions with a mouse click
- - Scroll Completion Menu: If the completion list is long, scroll to find the right option
+ - Ctrl+C: Cancel operation
+ - Ctrl+D: Exit CLI
+ - Arrow Keys: Navigation
+ - Home/End: Line navigation
+ - Ctrl+K/U/W/Y: Text editing

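The diff does not show how the history, suggestions, and key bindings above are implemented. As a hedged sketch, if the CLI is built on prompt_toolkit (an assumption; the library is not named anywhere in this change), the behaviors described would be wired roughly like this, with a partial command list taken from the Basic Commands section below:

```python
"""Sketch of a prompt loop with persisted history, gray suggestions, and tab completion.

Assumes prompt_toolkit is the underlying library; the real CLI may differ.
"""
from prompt_toolkit import PromptSession
from prompt_toolkit.auto_suggest import AutoSuggestFromHistory
from prompt_toolkit.completion import WordCompleter
from prompt_toolkit.history import FileHistory

session = PromptSession(
    history=FileHistory(".config/history.txt"),        # persisted across restarts
    auto_suggest=AutoSuggestFromHistory(),              # gray inline suggestions
    completer=WordCompleter(["help", "models", "download", "chat", "quit"]),
)

while True:
    try:
        command = session.prompt("llamahome> ")
    except (KeyboardInterrupt, EOFError):               # Ctrl+C / Ctrl+D
        break
    print(f"Received: {command}")
```
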
### Basic Commands

- - `help`: Show a list of available commands and usage examples
- - `models`: List all available models, including versions and compatibility info
- - `model <name>`: Select a model for subsequent operations
- - `download <model>`: Download specified model resources
- - `remove <model>`: Remove a previously downloaded model
- - `chat`: Start an interactive chat session with the selected model
- - `train <params>`: Train a model with specified parameters (data paths, epochs, etc.)
- - `quit`: Exit the CLI
-
- Example:
-

```bash
- llamahome> models
- Available Models:
- - llama-3.3-7b (version 3.3-7b)
+ # Show help
+ help

- llamahome> model llama-3.3-7b
- [INFO] Model set to llama-3.3-7b
+ # List models
+ models

- llamahome> download llama-3.3-7b --force
- [INFO] Downloading model...
- [INFO] Download complete.
- ```
+ # Download model
+ download llama-3.3-7b

- ## Advanced Usage
+ # Start chat
+ chat

- ### Multi-Format Output
-
- - Text Output (default): Ideal for direct reading in the terminal
- - JSON Output: Use `--output json` for structured output, perfect for scripting or integration with other tools
- - Progress Indicators: Long-running tasks (like training) show progress bars and estimated completion times
+ # Exit CLI
+ quit
+ ```

- ### Environment Customization
+ ## Configuration

- Set environment variables for customization:
+ ### Environment Variables

```bash
- export LLAMAHOME_CONFIG=./config/custom_config.toml
- export LLAMAHOME_CACHE=./.cache/models
- ```
+ # Core settings
+ LLAMAHOME_ENV=development
+ LLAMAHOME_LOG_LEVEL=INFO

- These variables influence the default paths, caching strategies, and model directories used by the CLI.
+ # Model settings
+ LLAMA_MODEL=llama3.3
+ LLAMA_MODEL_SIZE=13b
+ ```

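These values live in `.env`, but how LlamaHome reads them is not shown in this diff. A hedged sketch of loading them follows, assuming python-dotenv is available (an assumed dependency; plain `os.environ` works the same way once the shell exports the variables):

```python
"""Sketch: load .env values; the real CLI's loading mechanism may differ."""
import os

from dotenv import load_dotenv  # assumption: python-dotenv is installed

load_dotenv()  # reads .env from the current directory

env = os.getenv("LLAMAHOME_ENV", "development")
log_level = os.getenv("LLAMAHOME_LOG_LEVEL", "INFO")
model = os.getenv("LLAMA_MODEL", "llama3.3")
model_size = os.getenv("LLAMA_MODEL_SIZE", "13b")

print(f"env={env} log_level={log_level} model={model}-{model_size}")
```
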
- ### Scripting & Automation
+ ### Project Configuration

- Combine CLI commands in shell scripts to automate tasks. For example:
+ Configuration is managed through `pyproject.toml`:

- ```bash
- #!/usr/bin/env bash
+ ```toml
+ [project]
+ name = "llamahome"
+ version = "0.1.0"
+ requires-python = ">=3.11"

- # Batch download models
- llamahome download llama-3.3-7b
- llamahome download llama-3.3-7b-finetuned
+ [project.scripts]
+ llamahome = "src.interfaces.cli:main"

- # List models to verify
- llamahome models
+ [tool.llamahome.cli]
+ history_file = ".config/history.txt"
+ max_history = 1000
+ completion_style = "fancy"
```

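A hedged sketch of how the `[tool.llamahome.cli]` table could be read at startup, using the standard-library `tomllib` (available because the project requires Python 3.11+); the actual loader is not shown in this diff:

```python
"""Sketch: read CLI settings from pyproject.toml with the stdlib tomllib."""
import tomllib  # standard library on Python 3.11+
from pathlib import Path

with Path("pyproject.toml").open("rb") as f:  # tomllib requires a binary file
    config = tomllib.load(f)

cli_settings = config.get("tool", {}).get("llamahome", {}).get("cli", {})
history_file = cli_settings.get("history_file", ".config/history.txt")
max_history = cli_settings.get("max_history", 1000)

print(f"history: {history_file} (keep last {max_history} entries)")
```
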
- Run `chmod +x script.sh` and `./script.sh` to execute.
-
- ## Configuration & Integration
+ ## Advanced Usage

- ### Model Configuration Files
+ ### Multi-Format Output

- Models are defined in `.config/models.json`:
+ ```bash
+ # JSON output
+ llamahome --output json list-models

- ```json
- {
-   "llama": {
-     "versions": {
-       "3.3-7b": {
-         "url": "https://example.com/llama-3.3-7b",
-         "size": "7B",
-         "type": "base",
-         "format": "meta"
-       }
-     }
-   }
- }
+ # Detailed output
+ llamahome --verbose train
```

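The JSON mode exists for scripting. A hedged sketch of consuming it from Python follows; the output schema is not documented in this diff, so the example only assumes the command prints a single JSON document and pretty-prints whatever comes back:

```python
"""Sketch: capture `llamahome --output json list-models` and parse the result."""
import json
import subprocess

# Assumption: the command prints one JSON document to stdout.
result = subprocess.run(
    ["llamahome", "--output", "json", "list-models"],
    capture_output=True,
    text=True,
    check=True,
)

models = json.loads(result.stdout)
print(json.dumps(models, indent=2))  # schema unknown; just pretty-print
```
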
- When you run `llamahome download llama-3.3-7b`, the CLI reads these definitions to know where to fetch models.
+ ### Scripting & Automation

- ### Plugin Support
+ ```bash
+ #!/usr/bin/env bash

- Extend CLI functionality with plugins that add new commands or integrations:
+ # Activate environment
+ source .venv/bin/activate

- - Install Plugins: Place them in the `plugins/` directory
- - Configure in `.config/plugins.toml`: Enable or disable plugins
- - New Commands: Loaded automatically at CLI startup
+ # Run commands
+ llamahome download llama-3.3-7b
+ llamahome train --data path/to/data
+ ```

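For larger automation jobs, the same commands can be driven from Python. A hedged sketch is below; the command names come from the shell example above, and the stop-on-failure behavior is an illustrative choice rather than anything the CLI mandates:

```python
"""Sketch: batch-run CLI commands with basic failure handling."""
import subprocess
import sys

commands = [
    ["llamahome", "download", "llama-3.3-7b"],
    ["llamahome", "train", "--data", "path/to/data"],
]

for cmd in commands:
    print("Running:", " ".join(cmd))
    completed = subprocess.run(cmd)
    if completed.returncode != 0:
        sys.exit(f"Command failed: {' '.join(cmd)}")
```
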
## Troubleshooting

### Common Issues

1. Command Not Found
- - Check that the CLI is installed properly
- - Confirm your PATH includes Poetry's bin directory if using Poetry
+ ```bash
+ # Ensure virtual environment is activated
+ source .venv/bin/activate
+ ```

- 2. Slow Completion or Response
- - Check for model availability in `.config/models.json`
- - Verify that the model is downloaded and properly configured
- - Consider enabling hardware acceleration or streaming in config files
+ 2. Model Issues
+ ```bash
+ # Verify model installation
+ llamahome verify-model llama-3.3-7b
+ ```

- 3. Network or Download Issues
- - Ensure internet connectivity
- - Verify the model URL and credentials if required
- - Try using `--force` to redownload corrupted files
+ 3. Environment Issues
+ ```bash
+ # Check configuration
+ llamahome doctor
+ ```

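A hedged helper for the first issue: it checks whether a virtual environment is active and whether the `llamahome` entry point is on `PATH`. Both checks use only the standard library; the entry-point name comes from `[project.scripts]` above:

```python
"""Sketch: diagnose "command not found" problems."""
import shutil
import sys

in_venv = sys.prefix != sys.base_prefix  # True when a virtual environment is active
print(f"Virtual environment active: {in_venv}")

cli_path = shutil.which("llamahome")
if cli_path:
    print(f"llamahome found at {cli_path}")
else:
    print("llamahome not on PATH; activate .venv or reinstall with `make setup`")
```
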
- 4. Compatibility Problems
- - Ensure Python 3.11 or higher is installed
- - Check CUDA version and GPU drivers if using GPU acceleration
- - Update dependencies with `poetry update`
+ ### Debug Mode

- ### Logs & Diagnostics
+ ```bash
+ # Enable debug logging
+ export LLAMAHOME_LOG_LEVEL=DEBUG

- - Check logs in `logs/` directory for detailed error reports
- - Increase verbosity using `--verbose` for more detailed output
- - Consult the LlamaHome Community Forum for additional support and best practices
+ # Run with debug output
+ llamahome --debug
+ ```

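How the CLI maps `LLAMAHOME_LOG_LEVEL` onto Python's logging module is not shown in the diff; this is a hedged sketch of one plausible wiring:

```python
"""Sketch: honor the LLAMAHOME_LOG_LEVEL environment variable."""
import logging
import os

# basicConfig accepts level names ("DEBUG", "INFO", ...) as strings.
logging.basicConfig(level=os.getenv("LLAMAHOME_LOG_LEVEL", "INFO"))

logger = logging.getLogger("llamahome")
logger.debug("debug logging enabled")  # visible only when the level is DEBUG
logger.info("CLI starting up")
```
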
## Best Practices

- - Keep Your CLI Updated: Regularly pull the latest changes from the repository and run `poetry install` to get bug fixes and new features
- - Backup Configuration: Keep a backup of `.config/models.json` and `.config/history.txt`
- - Use Versioned Models: Specify exact model versions to ensure reproducibility
- - Prompt Clarity: Provide clear and explicit prompts for better model responses
+ 1. Environment Management
+    - Use virtual environment
+    - Keep dependencies updated
+    - Follow configuration structure
+
+ 2. Command Usage
+    - Use tab completion
+    - Leverage history search
+    - Check command help
+
+ 3. Resource Management
+    - Monitor system resources
+    - Clean up unused models
+    - Manage cache effectively

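As a hedged illustration of the resource-management point, a small script that reports how much disk space each cached model directory uses. The `.cache/models` path comes from the older docs shown above and may differ in a current setup:

```python
"""Sketch: report per-model disk usage under the model cache directory."""
from pathlib import Path

CACHE_DIR = Path(".cache/models")  # assumption: path taken from the old docs above

if CACHE_DIR.exists():
    for model_dir in sorted(CACHE_DIR.iterdir()):
        if model_dir.is_dir():
            size = sum(f.stat().st_size for f in model_dir.rglob("*") if f.is_file())
            print(f"{model_dir.name}: {size / 1e9:.2f} GB")
else:
    print(f"No cache directory at {CACHE_DIR}")
```
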
## Next Steps

- - [GUI Guide](GUI.md): For a graphical interface to LlamaHome features
- - [API Documentation](API.md): Integrate programmatically with LlamaHome
- - [Plugin Development](Plugins.md): Extend the CLI with custom plugins
- - [Advanced Configuration](Config.md): Dive deeper into YAML and JSON configurations
+ 1. [Training Guide](Training.md)
+ 2. [Model Management](Models.md)
+ 3. [API Integration](API.md)
+ 4. [Performance Tuning](Performance.md)