A simple Retrieval-Augmented Generation (RAG) plugin for Obsidian that allows you to query your vault using Google's Gemini AI.
- Real-time monitoring of markdown files in your vault
- Automatic embedding generation and updates
- Semantic search using ChromaDB
- Conversation history support
- Markdown-aware text chunking
- File change detection (create, modify, delete)
- Clone the repository into your Obsidian vault's plugins directory:

  ```bash
  cd YOUR_VAULT_PATH/.obsidian/plugins
  git clone [repository-url] rag-plugin
  ```
- Set up the Python environment:

  ```bash
  python -m venv venv
  source venv/bin/activate  # On Windows: venv\Scripts\activate
  ```
- Install the required Python packages:

  ```bash
  pip install flask chromadb google-generativeai python-dotenv watchdog
  ```
- Configure the environment:
  - Add your Google AI API key:

    ```python
    # Configure Google AI
    genai.configure(api_key="YOUR_API_KEY")
    ```

  - Update `ROOT_DIR` in the code to point to your Obsidian vault path, e.g.:

    ```python
    ROOT_DIR = r"C:\Users\SREEHARI\Documents\Obsidian Vault"
    ```
- Start the Python backend server:

  ```bash
  python new.py
  ```
- Install Node.js dependencies and start the frontend:

  ```bash
  npm install
  npm run dev
  ```
- Enable the plugin in Obsidian:
  - Go to Settings → Community Plugins
  - Enable the plugin named "sample pluggin"
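Once the backend and frontend are running, you can smoke-test the backend directly from Python. This is a hypothetical sketch: the port (Flask's default 5000) and the request JSON shape (`query` and `history` fields) are assumptions — check `new.py` for the actual contract.

```python
# Smoke-test the Flask backend. BACKEND_URL and the payload shape are
# assumptions, not taken from the plugin's actual code.
import json
import urllib.request

BACKEND_URL = "http://localhost:5000/arraysum"  # assumed default Flask port


def build_payload(query, history=None):
    """Encode a query (and optional prior exchanges) as a JSON request body."""
    return json.dumps({"query": query, "history": history or []}).encode("utf-8")


def ask(query):
    """POST the query to the backend and return the raw response text."""
    req = urllib.request.Request(
        BACKEND_URL,
        data=build_payload(query),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode("utf-8")


# Example (requires the backend server to be running):
# print(ask("What do my notes say about ChromaDB?"))
```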
- Database: Uses ChromaDB for vector storage (in-memory mode)
- Embedding Strategy:
  - Chunks markdown files by sections and paragraphs
  - Maximum chunk size: 1000 characters
  - Maintains document references and metadata
- AI Model: Uses Gemini 1.5 Flash with configured parameters:
  - Temperature: 0.7
  - Top-p: 0.95
  - Top-k: 64
  - Max output tokens: 1000
- File Processing:
  - Monitors `.md` files only
  - Ignores hidden directories
  - Supports real-time updates
- API Endpoint:
  - POST `/arraysum` for querying the knowledge base (the "arraysum" name is a leftover and will be renamed eventually)
  - Maintains conversation history (last 5 exchanges)
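The markdown-aware chunking described above can be sketched as follows. This is a minimal illustration, not the plugin's actual splitter: it splits on headings, then on blank-line paragraphs, and hard-wraps anything still over the 1000-character cap.

```python
# Minimal markdown-aware chunker: split by sections (headings), then
# paragraphs, with a 1000-character cap per chunk. Illustrative only;
# the plugin's real splitting rules may differ.
import re

MAX_CHUNK_SIZE = 1000


def chunk_markdown(text):
    chunks = []
    # Split into sections at markdown headings (lines starting with '#').
    for section in re.split(r"(?m)^(?=#)", text):
        # Split each section into paragraphs on blank lines.
        for para in re.split(r"\n\s*\n", section):
            para = para.strip()
            if not para:
                continue
            # Hard-wrap paragraphs that still exceed the size cap.
            while len(para) > MAX_CHUNK_SIZE:
                chunks.append(para[:MAX_CHUNK_SIZE])
                para = para[MAX_CHUNK_SIZE:]
            chunks.append(para)
    return chunks
```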
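The "last 5 exchanges" window might be kept like this (a sketch; the backend's actual bookkeeping may differ): append each question/answer pair and keep only the five most recent.

```python
# Keep a rolling window of the last 5 conversation exchanges.
# Illustrative sketch, not the backend's actual implementation.
MAX_EXCHANGES = 5


def update_history(history, question, answer):
    """Append a (question, answer) pair and drop anything older than 5 exchanges."""
    history.append((question, answer))
    return history[-MAX_EXCHANGES:]
```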