PlayWithLLM is a powerful and user-friendly system designed to streamline interactions with Ollama-hosted large language models (LLMs). It lets you send inference requests to Ollama-hosted models through a simple API, making it easy to integrate and experiment with LLMs in your projects.
- API Integration: Exposes Ollama-hosted LLMs through an API, enabling seamless inference requests
- Request History Tracking: Automatically stores all inference requests, including details like token usage, request logs, costs, and duration
- Admin UI: Provides an intuitive interface to monitor and manage all your LLM interactions in one place
- Local Setup: Easy-to-follow instructions to clone, install, and run the system on your local machine
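To illustrate the API-integration feature above, here is a minimal sketch of sending an inference request from Python. The endpoint path (`/api/inference`), port, and payload fields are assumptions for illustration, not documented PlayWithLLM values — check the API documentation in this repository for the actual routes and schema.

```python
import json
import urllib.request


def build_inference_payload(model: str, prompt: str) -> dict:
    """Assemble a request body for a hypothetical /api/inference endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}


def send_inference(base_url: str, model: str, prompt: str) -> dict:
    """POST the payload to the (assumed) inference route and return the JSON response."""
    body = json.dumps(build_inference_payload(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        f"{base_url}/api/inference",  # hypothetical route -- adjust to your deployment
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())


# Example usage (requires a running PlayWithLLM instance):
# result = send_inference("http://localhost:3000", "llama3", "Hello!")
```

Because every request is recorded in the history store, each call like the one above should also appear in the Admin UI alongside its token usage, cost, and duration.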
- Developers: Quickly integrate and test Ollama-hosted LLMs in your applications
- Researchers: Track and analyze inference requests for experiments and studies
- AI Enthusiasts: Explore the capabilities of state-of-the-art language models with minimal setup
To get started with PlayWithLLM, follow the installation and setup instructions below:
For more detailed documentation, please refer to the resources provided in this repository.