
Commit b1ec0e1

corrected test dependencies
1 parent 9baa269 commit b1ec0e1

File tree

6 files changed: +538 -623 lines


docs/CLI.md

+119 -140
@@ -40,214 +40,193 @@ graph TD

 ### Prerequisites

-- Python: Version 3.11 or higher required
-- Dependency Management: Poetry for easy setup
-- Model Files: Ensure you have at least one supported Llama model available, or be prepared to download one
+- Python 3.11 or higher
+- Virtual environment (recommended)
+- CUDA-capable GPU (optional)

-### Installation Steps
+### Installation

-1. Install Poetry (if not already installed):
+1. Clone and Install:

 ```bash
-curl -sSL https://install.python-poetry.org | python3 -
+git clone https://github.com/zachshallbetter/llamahome.git
+cd llamahome
+make setup
 ```

-2. Clone and Install LlamaHome:
+2. Set Up Environment:

 ```bash
-git clone https://github.com/llamahome/llamahome.git
-cd llamahome
+# Copy example environment file
+cp .env.example .env

-poetry install # Install dependencies
-poetry shell # Activate virtual environment
+# Edit environment variables
+nano .env
 ```
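For orientation, the variables this `.env` is expected to hold are the ones listed under Environment Variables further down. A filled-in file might look like the following sketch (values are illustrative only, not part of this commit):

```bash
# Example .env (values illustrative)
LLAMAHOME_ENV=development
LLAMAHOME_LOG_LEVEL=INFO
LLAMA_MODEL=llama3.3
LLAMA_MODEL_SIZE=13b
```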

-3. Set Up Environment Variables:
+3. Verify Installation:

 ```bash
-# UNIX-like systems
-export LLAMA_HOME_MODEL_PATH=/path/to/llama/model
+# Activate virtual environment
+source .venv/bin/activate # Unix/macOS
+# or
+.venv\Scripts\activate # Windows

-# Windows (PowerShell)
-$Env:LLAMA_HOME_MODEL_PATH="C:\path\to\llama\model"
+# Run CLI
+python -m src.interfaces.cli
 ```

-Adjust these paths and environment variables according to your setup.
-
-### Launching the CLI
-
-Once the environment is prepared, launch the CLI with:
-
-```bash
-llamahome
-```
-
-If you've installed using Poetry, you may need:
-
-```bash
-poetry run llamahome
-```
-
 ## CLI Features

 ### Command History & Navigation

-- History Recall: Use Up/Down arrows to navigate through previously executed commands
-- Search History: Press Ctrl+R to search through your command history
-- Persisted History: By default, history is saved in `.config/history.txt`, so it's available after restarts
+- History Recall: Up/Down arrows
+- Search History: Ctrl+R
+- History saved in `.config/history.txt`

 ### Auto-Completion & Suggestions

-- Tab Completion: Press Tab to auto-complete commands, model names, and file paths
-- Dynamic Suggestions: As you type, suggestions appear in gray. Press Right Arrow to accept them
-- Multiple Options: If multiple completions are available, a menu appears. Use arrow keys to navigate and Enter to select
+- Tab Completion: Commands, models, paths
+- Dynamic Suggestions: Gray text, Right Arrow to accept
+- Multiple Options: Arrow keys to navigate

 ### Key Bindings

-- Ctrl+C: Cancel the current operation (if any)
-- Ctrl+D: Exit the CLI
-- Arrow Keys: Move the cursor left/right through the current line or up/down through history
-- Home/End: Jump to the start or end of the line
-- Ctrl+K/U/W/Y: Edit text inline (cut/paste words or entire lines)
-
-### Mouse Support
-
-- Click to Position Cursor: Jump to any point in the command line
-- Click to Select Completion Options: Quickly choose suggestions with a mouse click
-- Scroll Completion Menu: If the completion list is long, scroll to find the right option
+- Ctrl+C: Cancel operation
+- Ctrl+D: Exit CLI
+- Arrow Keys: Navigation
+- Home/End: Line navigation
+- Ctrl+K/U/W/Y: Text editing

 ### Basic Commands

-- `help`: Show a list of available commands and usage examples
-- `models`: List all available models, including versions and compatibility info
-- `model <name>`: Select a model for subsequent operations
-- `download <model>`: Download specified model resources
-- `remove <model>`: Remove a previously downloaded model
-- `chat`: Start an interactive chat session with the selected model
-- `train <params>`: Train a model with specified parameters (data paths, epochs, etc.)
-- `quit`: Exit the CLI
-
-Example:
-
 ```bash
-llamahome> models
-Available Models:
-- llama-3.3-7b (version 3.3-7b)
+# Show help
+help

-llamahome> model llama-3.3-7b
-[INFO] Model set to llama-3.3-7b
+# List models
+models

-llamahome> download llama-3.3-7b --force
-[INFO] Downloading model...
-[INFO] Download complete.
-```
+# Download model
+download llama-3.3-7b

-## Advanced Usage
+# Start chat
+chat

-### Multi-Format Output
-
-- Text Output (default): Ideal for direct reading in the terminal
-- JSON Output: Use `--output json` for structured output, perfect for scripting or integration with other tools
-- Progress Indicators: Long-running tasks (like training) show progress bars and estimated completion times
+# Exit CLI
+quit
+```

-### Environment Customization
+## Configuration

-Set environment variables for customization:
+### Environment Variables

 ```bash
-export LLAMAHOME_CONFIG=./config/custom_config.toml
-export LLAMAHOME_CACHE=./.cache/models
-```
+# Core settings
+LLAMAHOME_ENV=development
+LLAMAHOME_LOG_LEVEL=INFO

-These variables influence the default paths, caching strategies, and model directories used by the CLI.
+# Model settings
+LLAMA_MODEL=llama3.3
+LLAMA_MODEL_SIZE=13b
+```

-### Scripting & Automation
+### Project Configuration

-Combine CLI commands in shell scripts to automate tasks. For example:
+Configuration is managed through `pyproject.toml`:

-```bash
-#!/usr/bin/env bash
+```toml
+[project]
+name = "llamahome"
+version = "0.1.0"
+requires-python = ">=3.11"

-# Batch download models
-llamahome download llama-3.3-7b
-llamahome download llama-3.3-7b-finetuned
+[project.scripts]
+llamahome = "src.interfaces.cli:main"

-# List models to verify
-llamahome models
+[tool.llamahome.cli]
+history_file = ".config/history.txt"
+max_history = 1000
+completion_style = "fancy"
 ```
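The `[project.scripts]` entry is what exposes `llamahome` as a console command: once the package is installed into the virtual environment (presumably handled by `make setup`), `llamahome` resolves to `src.interfaces.cli:main`, which is why later examples invoke `llamahome` directly rather than `python -m src.interfaces.cli`. A quick way to check this, sketched as an assumption about the setup rather than a documented step:

```bash
# Install the project into the active virtual environment (editable mode)
pip install -e .

# The console script should now be on PATH
which llamahome
```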

-Run `chmod +x script.sh` and `./script.sh` to execute.
-
-## Configuration & Integration
+## Advanced Usage

-### Model Configuration Files
+### Multi-Format Output

-Models are defined in `.config/models.json`:
+```bash
+# JSON output
+llamahome --output json list-models

-```json
-{
-  "llama": {
-    "versions": {
-      "3.3-7b": {
-        "url": "https://example.com/llama-3.3-7b",
-        "size": "7B",
-        "type": "base",
-        "format": "meta"
-      }
-    }
-  }
-}
+# Detailed output
+llamahome --verbose train
 ```
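Because `--output json` is aimed at scripting, the structured output can be piped into a JSON processor. A hedged example using `jq` (the `name` field is an assumption, not taken from this commit):

```bash
# Extract model names from the JSON listing (field name assumed)
llamahome --output json list-models | jq -r '.[].name'
```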

-When you run `llamahome download llama-3.3-7b`, the CLI reads these definitions to know where to fetch models.
+### Scripting & Automation

-### Plugin Support
+```bash
+#!/usr/bin/env bash

-Extend CLI functionality with plugins that add new commands or integrations:
+# Activate environment
+source .venv/bin/activate

-- Install Plugins: Place them in the `plugins/` directory
-- Configure in `.config/plugins.toml`: Enable or disable plugins
-- New Commands: Loaded automatically at CLI startup
+# Run commands
+llamahome download llama-3.3-7b
+llamahome train --data path/to/data
+```
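For unattended runs, the same script can be made a bit more defensive. A sketch only (general bash practice, not part of this commit; the model names are reused from the batch-download example removed above):

```bash
#!/usr/bin/env bash
set -euo pipefail  # abort on the first failing command or unset variable

source .venv/bin/activate

# Batch-download models one after another
for model in llama-3.3-7b llama-3.3-7b-finetuned; do
    llamahome download "$model"
done
```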

 ## Troubleshooting

 ### Common Issues

 1. Command Not Found
-- Check that the CLI is installed properly
-- Confirm your PATH includes Poetry's bin directory if using Poetry
+```bash
+# Ensure virtual environment is activated
+source .venv/bin/activate
+```

-2. Slow Completion or Response
-- Check for model availability in `.config/models.json`
-- Verify that the model is downloaded and properly configured
-- Consider enabling hardware acceleration or streaming in config files
+2. Model Issues
+```bash
+# Verify model installation
+llamahome verify-model llama-3.3-7b
+```

-3. Network or Download Issues
-- Ensure internet connectivity
-- Verify the model URL and credentials if required
-- Try using `--force` to redownload corrupted files
+3. Environment Issues
+```bash
+# Check configuration
+llamahome doctor
+```

-4. Compatibility Problems
-- Ensure Python 3.11 or higher is installed
-- Check CUDA version and GPU drivers if using GPU acceleration
-- Update dependencies with `poetry update`
+### Debug Mode

-### Logs & Diagnostics
+```bash
+# Enable debug logging
+export LLAMAHOME_LOG_LEVEL=DEBUG

-- Check logs in `logs/` directory for detailed error reports
-- Increase verbosity using `--verbose` for more detailed output
-- Consult the LlamaHome Community Forum for additional support and best practices
+# Run with debug output
+llamahome --debug
+```

 ## Best Practices

-- Keep Your CLI Updated: Regularly pull the latest changes from the repository and run `poetry install` to get bug fixes and new features
-- Backup Configuration: Keep a backup of `.config/models.json` and `.config/history.txt`
-- Use Versioned Models: Specify exact model versions to ensure reproducibility
-- Prompt Clarity: Provide clear and explicit prompts for better model responses
+1. Environment Management
+- Use virtual environment
+- Keep dependencies updated
+- Follow configuration structure
+
+2. Command Usage
+- Use tab completion
+- Leverage history search
+- Check command help
+
+3. Resource Management
+- Monitor system resources
+- Clean up unused models
+- Manage cache effectively

 ## Next Steps

-- [GUI Guide](GUI.md): For a graphical interface to LlamaHome features
-- [API Documentation](API.md): Integrate programmatically with LlamaHome
-- [Plugin Development](Plugins.md): Extend the CLI with custom plugins
-- [Advanced Configuration](Config.md): Dive deeper into YAML and JSON configurations
+1. [Training Guide](Training.md)
+2. [Model Management](Models.md)
+3. [API Integration](API.md)
+4. [Performance Tuning](Performance.md)
