This is a playground for testing out LLM function tools. There are a couple of examples:
- langchain - This is an incomplete example of using langchain to achieve the same thing
- llama-tool - This uses the Ollama API and defines some example tools

The tooling is based on the SQL Tool, and there is an initial prompt that needs to be updated.
A FunctionTool class is defined as a way of extending the Ollama API Tool; it also includes the function to call.
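As an illustration of pairing a tool definition with its callable, here is a minimal self-contained sketch. The `Tool` struct mirrors the shape of an Ollama API tool definition (name, description, parameters) but is defined locally; every name here, including `newBalanceTool`, is an assumption for the example, not the repo's actual code.

```go
package main

import "fmt"

// Tool mirrors the shape of an Ollama API tool definition:
// a name, a description, and parameters describing how it is called.
type Tool struct {
	Name        string
	Description string
	Parameters  map[string]string
}

// FunctionTool extends the tool definition with the Go function
// to invoke when the model requests a call.
type FunctionTool struct {
	Tool
	Fn func(args map[string]string) (string, error)
}

// newBalanceTool builds an illustrative tool; the name matches the
// get-balance tool used elsewhere in this repo, but the body is a stub.
func newBalanceTool() FunctionTool {
	return FunctionTool{
		Tool: Tool{
			Name:        "get-balance",
			Description: "Look up an account balance",
			Parameters:  map[string]string{"account": "string"},
		},
		Fn: func(args map[string]string) (string, error) {
			return "balance for " + args["account"], nil
		},
	}
}

func main() {
	ft := newBalanceTool()
	out, _ := ft.Fn(map[string]string{"account": "acct-1"})
	fmt.Println(out) // balance for acct-1
}
```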
The RunWithTools function sets up a chat with the set of tools and runs it with a given initial prompt. This is just for testing purposes and can be extended to take user input.
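The control flow of such a loop can be sketched as follows. This is a hand-rolled stand-in, not the real Ollama client: `callModel` is a placeholder for the actual chat request, and `ToolCall` approximates the tool-call payload a model reply would contain.

```go
package main

import "fmt"

// ToolCall approximates what a model reply contains when it wants a tool run.
type ToolCall struct {
	Name string
	Args map[string]string
}

// RunWithTools sketches the chat loop: send the prompt, and while the model
// keeps replying with tool calls, execute them and feed the results back.
// callModel stands in for the real Ollama chat request.
func RunWithTools(prompt string, tools map[string]func(map[string]string) string,
	callModel func(history []string) (reply string, call *ToolCall)) string {
	history := []string{prompt}
	for {
		reply, call := callModel(history)
		if call == nil {
			return reply // model produced a final answer
		}
		result := tools[call.Name](call.Args)
		history = append(history, "tool:"+call.Name+" -> "+result)
	}
}

func main() {
	// Fake model: first requests the get-balance tool, then answers.
	step := 0
	callModel := func(history []string) (string, *ToolCall) {
		step++
		if step == 1 {
			return "", &ToolCall{Name: "get-balance", Args: map[string]string{"account": "a1"}}
		}
		return "final: " + history[len(history)-1], nil
	}
	tools := map[string]func(map[string]string) string{
		"get-balance": func(args map[string]string) string { return "42" },
	}
	fmt.Println(RunWithTools("what is a1's balance?", tools, callModel))
	// prints "final: tool:get-balance -> 42"
}
```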
Tools are a feature that allows connecting data sources to an LLM. They have a name, description, and parameters to define their functionality and how they are called. They also have a call method that runs the tool to get data with the given parameters.
Tools should return information that is as specific as possible, e.g. don't return a whole GraphQL schema from an introspection query, as it's too much information.
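To illustrate the "be specific" advice, a tool can take a narrowing parameter and return only the relevant slice of data. The schema map and `typeFields` function below are purely hypothetical stand-ins for a real introspection result:

```go
package main

import "fmt"

// schema is a stand-in for a full introspection result; a real one
// would be far too large to hand to the model wholesale.
var schema = map[string][]string{
	"User":    {"id", "name", "email"},
	"Account": {"id", "balance"},
}

// typeFields is a hypothetical tool call: narrow parameters in,
// narrow answer out, instead of dumping the entire schema.
func typeFields(typeName string) string {
	fields, ok := schema[typeName]
	if !ok {
		return "unknown type: " + typeName
	}
	return fmt.Sprintf("%s fields: %v", typeName, fields)
}

func main() {
	fmt.Println(typeFields("Account")) // Account fields: [id balance]
}
```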
These models have been tested to work, but the output and functionality may vary:
- Llama 3.1 - Seems to be better at following instructions to retry
- Mistral-NeMo - Better at building valid GraphQL queries
- Mistral
- Gemma 2
It would be good to group tools together, e.g. GraphQL introspection followed by constructing and running a query.
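One way such grouping could work is a simple pipeline where each step receives the previous step's output, so an introspection step can feed a query-construction step. Everything here (`Step`, `RunPipeline`, the two closures) is a hypothetical sketch, not existing code:

```go
package main

import "fmt"

// Step is one tool invocation in a group; it receives the
// previous step's output as its input.
type Step func(input string) string

// RunPipeline runs the grouped tools in order, threading the
// output of each step into the next.
func RunPipeline(input string, steps ...Step) string {
	for _, s := range steps {
		input = s(input)
	}
	return input
}

func main() {
	// Stubs standing in for introspection and query construction.
	introspect := Step(func(in string) string { return "schema-for(" + in + ")" })
	buildQuery := Step(func(in string) string { return "query-from(" + in + ")" })
	fmt.Println(RunPipeline("User", introspect, buildQuery))
	// prints "query-from(schema-for(User))"
}
```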
Autogen has code executors that run either locally or in Docker.
Any sort of sandboxing library seems to be no longer maintained; recommendations point to using Docker. RestrictedPython allows custom builds with restrictions added in.
All fully featured VMs/sandboxes that allow arbitrary Node.js code are not secure. isolated-vm provides a JS sandbox, but it is raw V8, so it has no Node.js APIs.
Deno has a well-built permissions feature that is restrictive by default. It's not possible to use a subprocess as a sandbox, but a child process could be used.
Docker could be used to run code, but it has to be done appropriately to avoid container escape; sysbox can build on this.
WASM can be used with many host languages, but it requires host bindings for any I/O such as networking.
```yaml
name: "demo project"
model:
  name: llama3.1
  # URL of the LLM RPC endpoint
  url: http://localhost:1234
tools:
  - name: "get-balance"
    type: "code"
    file: "./get-balance.js"
    arguments:
      # Key/value pairs to specify options for the runner; this could be endpoints, timeouts, etc.
      - key: value
```