- Regensburg, Germany
M-LOOP: Machine-learning online optimization package
An open source AutoML toolkit for automating the machine learning lifecycle, including feature engineering, neural architecture search, model compression, and hyper-parameter tuning.
ACE (Autonomous Cognitive Entities) - 100% local and open source autonomous agents
Framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks.
Accelerate and optimize performance with streamlined training and serving options in JAX.
SWE-agent takes a GitHub issue and tries to automatically fix it, using GPT-4, or your LM of choice. It can also be employed for offensive cybersecurity or competitive coding challenges. [NeurIPS 2…
An efficient implementation of the method proposed in "The Era of 1-bit LLMs"
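The 1.58-bit approach quantizes weights to the ternary set {-1, 0, +1}. A minimal NumPy sketch of the absmean quantization step from the paper — function name and per-tensor scaling are illustrative, not this repository's actual API:

```python
import numpy as np

def absmean_ternary(w, eps=1e-6):
    """Quantize a weight matrix to {-1, 0, +1}: scale by the mean
    absolute value, then round and clip to the ternary set."""
    gamma = np.abs(w).mean() + eps           # per-tensor absmean scale
    w_q = np.clip(np.rint(w / gamma), -1, 1)
    return w_q, gamma                        # dequantize as w_q * gamma

w = np.array([[0.9, -0.05, -1.4],
              [0.2,  0.0,   2.1]])
w_q, gamma = absmean_ternary(w)
```

Every entry of `w_q` is -1, 0, or +1, so each weight needs only ~1.58 bits; the single float `gamma` restores the magnitude at inference time.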
ChatDBG - AI-assisted debugging. Uses AI to answer 'why'
Bitefix is an efficient library designed to streamline Python Runtime error debugging with AI-powered decorators.
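The decorator pattern behind such tools can be sketched in plain Python — this is a generic illustration of intercepting a runtime error and attaching a diagnostic, not Bitefix's actual API:

```python
import functools
import traceback

def diagnose(fn):
    """Illustrative decorator: catch any runtime error in the wrapped
    function, attach a short diagnostic, and re-raise. The diagnostic
    hook is where an AI-generated explanation would be plugged in."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        try:
            return fn(*args, **kwargs)
        except Exception as exc:
            tb = traceback.format_exc(limit=1)
            exc.diagnostic = f"{fn.__name__} failed: {tb.strip()}"
            raise
    return wrapper

@diagnose
def divide(a, b):
    return a / b
```

Calling `divide(1, 0)` still raises `ZeroDivisionError`, but the exception now carries a `diagnostic` attribute describing where it occurred.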
Hello Github! I'm Surf, a friendly trained dolphin assistant here to help you with your coding needs. As an AI, I can understand and execute simple code snippets in multiple languages, including Py…
MLX-VLM is a package for inference and fine-tuning of Vision Language Models (VLMs) on your Mac using MLX.
Build robust LLM applications with true composability 🔗
The llama-cpp-agent framework is a tool designed for easy interaction with Large Language Models (LLMs), allowing users to chat with LLM models, execute structured function calls and get structured…
llama3.np is a pure NumPy implementation of the Llama 3 model.
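A pure-NumPy Llama implementation is built from a handful of small numerical kernels. Two of the standard building blocks, sketched here with illustrative function names (not the repository's own API):

```python
import numpy as np

def rms_norm(x, weight, eps=1e-5):
    # RMSNorm as used in Llama-family models: scale activations by
    # their inverse root-mean-square (no mean subtraction), then
    # apply a learned per-channel gain.
    return x / np.sqrt(np.mean(x**2, axis=-1, keepdims=True) + eps) * weight

def softmax(x):
    # Numerically stable softmax over the last axis, as used for
    # attention scores: subtract the max before exponentiating.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)
```

Both operate on the last axis, so they broadcast over any batch and sequence dimensions without extra code.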
GoalChain for goal-oriented LLM conversation flows
Apache DataFusion Comet Spark Accelerator
Fast parallel LLM inference for MLX
A Python library to support running data quality rules while the Spark job is running ⚡
The code repository for the CURLoRA research paper. Stable LLM continual fine-tuning and catastrophic forgetting mitigation.
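CURLoRA builds on the LoRA idea of freezing the pretrained weight and learning a low-rank update. A minimal NumPy sketch of the standard LoRA forward pass that such methods start from (this is the generic technique, not the paper's CUR-decomposition variant; all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 2                      # model dimension, adapter rank

W = rng.normal(size=(d, d))      # frozen pretrained weight
A = rng.normal(size=(r, d))      # trainable low-rank factor
B = np.zeros((d, r))             # B starts at zero, so the adapter
                                 # initially leaves W's output unchanged
alpha = 4.0                      # LoRA scaling hyper-parameter

def adapted(x):
    # Forward pass with the low-rank update: W x + (alpha / r) * B A x.
    # Only A and B would be trained; W stays frozen.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d)
```

Because `B` is initialized to zero, fine-tuning begins exactly at the pretrained model's behavior and the update stays confined to a rank-`r` subspace, which is what continual-tuning methods exploit to limit catastrophic forgetting.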
FastMLX is a high-performance, production-ready API to host MLX models.
An extremely fast Python package and project manager, written in Rust.