HF and agent miners #91

Merged: 37 commits, Mar 1, 2024
Commits (37)
612c9eb
initial changes to hf miner
p-ferreira Jan 29, 2024
afe3993
adds 8bit and 4bit config
p-ferreira Jan 30, 2024
a424e6e
simplifies torch type logic
p-ferreira Jan 30, 2024
843f9a2
adapt zephyr miner to hf miner
p-ferreira Jan 30, 2024
e51ff26
update README for hf miner
p-ferreira Jan 30, 2024
65f3b74
adds should_force_model_loading flag to miners
p-ferreira Jan 30, 2024
1ae7540
miners module refactor
p-ferreira Jan 31, 2024
c2d40f6
runs black on miners code
p-ferreira Jan 31, 2024
9669cab
properly adds miner integration with system prompt args
p-ferreira Jan 31, 2024
dde3840
adds check verification on global logger definition
p-ferreira Feb 1, 2024
9b5b91f
clean main func from hf miner
p-ferreira Feb 1, 2024
a8e3bc8
refactor agent code + adds react agent
p-ferreira Feb 5, 2024
369820b
agents adjustments
p-ferreira Feb 5, 2024
53ca503
adds max iteration to agents
p-ferreira Feb 5, 2024
cc2b5c9
fix load llm issue
p-ferreira Feb 5, 2024
ce160ca
fix max tokens param
p-ferreira Feb 5, 2024
b8dd71b
adds toolminer for experiments
p-ferreira Feb 6, 2024
df7638b
fix error case for non wiki match
p-ferreira Feb 7, 2024
8dc340d
increase wiki retries and fix tool_miner bug
p-ferreira Feb 8, 2024
443f831
black
mccrindlebrian Feb 14, 2024
b7bdf0b
Merge branch 'main' into features/hf-miner
p-ferreira Feb 23, 2024
d1255aa
depreciation on agent miners
mccrindlebrian Feb 28, 2024
e94efae
merge main
mccrindlebrian Feb 28, 2024
46f798e
add depreciation to reqs
mccrindlebrian Feb 28, 2024
51de23f
add log_status
mccrindlebrian Feb 28, 2024
39d8ad0
change version to 1.1.2
mccrindlebrian Feb 28, 2024
cb529ee
Merge branch 'pre-staging' into features/hf-miner
p-ferreira Mar 1, 2024
493e7e4
Update prompting/miners/agents/react_agent.py
p-ferreira Mar 1, 2024
7226fa1
Update prompting/miners/openai_miner.py
p-ferreira Mar 1, 2024
6d34528
update docstring
p-ferreira Mar 1, 2024
70ab932
adds deprecation tags to all agent classes
p-ferreira Mar 1, 2024
1ad9571
Manually remove .DS_Store files
steffencruz Mar 1, 2024
41fd140
Remove .DS_Store in all directories
steffencruz Mar 1, 2024
9186c8b
fix flake8 warnings
p-ferreira Mar 1, 2024
7719a48
drops unnecessary run code for base miner
p-ferreira Mar 1, 2024
10fc0ce
Update prompting/miners/openai_miner.py
p-ferreira Mar 1, 2024
acba249
deprecates tool miner
p-ferreira Mar 1, 2024
2 changes: 2 additions & 0 deletions .gitignore
@@ -3,6 +3,8 @@ __pycache__/
*.py[cod]
*$py.class
.DS_Store
**/.DS_Store


# C extensions
*.so
Empty file removed neurons/miners/__init__.py
Empty file.
47 changes: 47 additions & 0 deletions neurons/miners/huggingface/README.md
@@ -0,0 +1,47 @@
# Hugging Face Bittensor Miner
This repository contains a Bittensor Miner integrated with 🤗 Hugging Face pipelines. The miner connects to the Bittensor network, registers its wallet, and serves a Hugging Face model to the network.

## Prerequisites

- Python 3.8+
- OpenAI Python API (https://github.com/openai/openai)

## Installation
1. Clone the repository
```bash
git clone https://github.com/opentensor/prompting.git
```
2. Install the required packages from the repository [requirements file](../../../requirements.txt) with `pip install -r requirements.txt`


For more configuration options related to the wallet, axon, subtensor, logging, and metagraph, please refer to the Bittensor documentation.

## Example Usage

Here are some example models that can be served by the Hugging Face miner, along with the suggested GPU footprint to run each model comfortably:
| model_id | Default GPU footprint | 8-bit quantization GPU footprint | 4-bit quantization GPU footprint |
| --- | ---- | ---- | ---- |
| HuggingFaceH4/zephyr-7b-beta | 18 GB | 12 GB | 7 GB |
| teknium/OpenHermes-2.5-Mistral-7B | 30 GB | 10 GB | 7 GB |
| upstage/SOLAR-10.7B-Instruct-v1.0 | 42 GB | 14 GB| 8 GB |
| mistralai/Mixtral-8x7B-Instruct-v0.1 | 92 GB* | 64 GB* | 30 GB* |

> \* Large models such as Mixtral are very costly to run and optimize, so always bear in mind the trade-off between model speed, model quality, and infrastructure cost.


To run the Hugging Face Bittensor Miner with default settings, use the following command:
```bash
python3 neurons/miners/huggingface/miner.py \
--wallet.name <<your-wallet-name>> \
--wallet.hotkey <<your-hotkey>> \
--neuron.model_id <<model_id>>
```

You can also enable automatic quantization by adding the flag `--neuron.load_in_8bit` for 8-bit quantization or `--neuron.load_in_4bit` for 4-bit quantization:
```bash
python3 neurons/miners/huggingface/miner.py \
--wallet.name <<your-wallet-name>> \
--wallet.hotkey <<your-hotkey>> \
--neuron.model_id <<model_id>> \
--neuron.load_in_8bit True
```
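
Under the hood, these flags presumably configure bitsandbytes quantization when the model is loaded. The sketch below is illustrative only and is not the PR's actual `HuggingFaceMiner` implementation (which is not shown in this diff); it shows how an 8-bit or 4-bit load typically looks with 🤗 Transformers, assuming `bitsandbytes` and `accelerate` are installed and a GPU is available:

```python
# Illustrative sketch only: the real HuggingFaceMiner wiring may differ.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, pipeline

model_id = "HuggingFaceH4/zephyr-7b-beta"  # any model_id from the table above

# Pick at most one quantization mode, mirroring --neuron.load_in_8bit / --neuron.load_in_4bit.
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
# quantization_config = BitsAndBytesConfig(load_in_4bit=True)

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",                      # shard layers across available GPUs
    quantization_config=quantization_config,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Wrap the quantized model in a text-generation pipeline and run a quick sanity check.
llm = pipeline("text-generation", model=model, tokenizer=tokenizer)
print(llm("What is a Bittensor miner?", max_new_tokens=64)[0]["generated_text"])
```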
31 changes: 31 additions & 0 deletions neurons/miners/huggingface/miner.py
@@ -0,0 +1,31 @@
# The MIT License (MIT)
# Copyright © 2024 Yuma Rao

# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated
# documentation files (the “Software”), to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software,
# and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

# The above copyright notice and this permission notice shall be included in all copies or substantial portions of
# the Software.

# THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO
# THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
import time
import bittensor as bt
from prompting.miners import HuggingFaceMiner


# This is the main function, which runs the miner.
if __name__ == "__main__":
with HuggingFaceMiner() as miner:
while True:
miner.log_status()
time.sleep(5)

if miner.should_exit:
bt.logging.warning("Ending miner...")
break
Empty file removed neurons/miners/openai/__init__.py
Empty file.
133 changes: 1 addition & 132 deletions neurons/miners/openai/miner.py
@@ -14,140 +14,9 @@
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.

import os
import time
import bittensor as bt
import argparse

# Bittensor Miner Template:
import prompting
from prompting.protocol import PromptingSynapse

# import base miner class which takes care of most of the boilerplate
from neurons.miner import Miner

from langchain.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain.chat_models import ChatOpenAI
from dotenv import load_dotenv, find_dotenv
from langchain.callbacks import get_openai_callback


class OpenAIMiner(Miner):
"""Langchain-based miner which uses OpenAI's API as the LLM.

You should also install the dependencies for this miner, which can be found in the requirements.txt file in this directory.
"""

@classmethod
def add_args(cls, parser: argparse.ArgumentParser):
"""
Adds OpenAI-specific arguments to the command line parser.
"""
super().add_args(parser)

def __init__(self, config=None):
super().__init__(config=config)

bt.logging.info(f"Initializing with model {self.config.neuron.model_id}...")

if self.config.wandb.on:
self.identity_tags = ("openai_miner",) + (self.config.neuron.model_id,)

_ = load_dotenv(find_dotenv())
api_key = os.environ.get("OPENAI_API_KEY")

# Set openai key and other args
self.model = ChatOpenAI(
api_key=api_key,
model_name=self.config.neuron.model_id,
max_tokens=self.config.neuron.max_tokens,
temperature=self.config.neuron.temperature,
)

self.system_prompt = "You are a friendly chatbot who always responds concisely and helpfully. You are honest about things you don't know."
self.accumulated_total_tokens = 0
self.accumulated_prompt_tokens = 0
self.accumulated_completion_tokens = 0
self.accumulated_total_cost = 0

def get_cost_logging(self, cb):
bt.logging.info(f"Total Tokens: {cb.total_tokens}")
bt.logging.info(f"Prompt Tokens: {cb.prompt_tokens}")
bt.logging.info(f"Completion Tokens: {cb.completion_tokens}")
bt.logging.info(f"Total Cost (USD): ${round(cb.total_cost,4)}")

self.accumulated_total_tokens += cb.total_tokens
self.accumulated_prompt_tokens += cb.prompt_tokens
self.accumulated_completion_tokens += cb.completion_tokens
self.accumulated_total_cost += cb.total_cost

return {
"total_tokens": cb.total_tokens,
"prompt_tokens": cb.prompt_tokens,
"completion_tokens": cb.completion_tokens,
"total_cost": cb.total_cost,
"accumulated_total_tokens": self.accumulated_total_tokens,
"accumulated_prompt_tokens": self.accumulated_prompt_tokens,
"accumulated_completion_tokens": self.accumulated_completion_tokens,
"accumulated_total_cost": self.accumulated_total_cost,
}

async def forward(self, synapse: PromptingSynapse) -> PromptingSynapse:
"""
Processes the incoming synapse by performing a predefined operation on the input data.
This method should be replaced with actual logic relevant to the miner's purpose.

Args:
synapse (PromptingSynapse): The synapse object containing the 'dummy_input' data.

Returns:
PromptingSynapse: The synapse object with the 'dummy_output' field set to twice the 'dummy_input' value.

The 'forward' function is a placeholder and should be overridden with logic that is appropriate for
the miner's intended operation. This method demonstrates a basic transformation of input data.
"""
try:
with get_openai_callback() as cb:
t0 = time.time()
bt.logging.debug(f"📧 Message received, forwarding synapse: {synapse}")

prompt = ChatPromptTemplate.from_messages(
[("system", self.system_prompt), ("user", "{input}")]
)
chain = prompt | self.model | StrOutputParser()

role = synapse.roles[-1]
message = synapse.messages[-1]

bt.logging.debug(f"💬 Querying openai: {prompt}")
response = chain.invoke({"role": role, "input": message})

synapse.completion = response
synapse_latency = time.time() - t0

if self.config.wandb.on:
self.log_event(
timing=synapse_latency,
prompt=message,
completion=response,
system_prompt=self.system_prompt,
extra_info=self.get_cost_logging(cb),
)

bt.logging.debug(f"✅ Served Response: {response}")
self.step += 1

return synapse
except Exception as e:
bt.logging.error(f"Error in forward: {e}")
synapse.completion = "Error: " + str(e)
finally:
if self.config.neuron.stop_on_forward_exception:
self.should_exit = True
return synapse

from prompting.miners import OpenAIMiner

# This is the main function, which runs the miner.
if __name__ == "__main__":
5 changes: 0 additions & 5 deletions neurons/miners/openai/requirements.txt

This file was deleted.

30 changes: 1 addition & 29 deletions neurons/miners/test/echo.py
@@ -14,37 +14,9 @@
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.

import time
import typing
import bittensor as bt

# Bittensor Miner Template:
import prompting
from prompting.protocol import PromptingSynapse

# import base miner class which takes care of most of the boilerplate
from neurons.miner import Miner


class EchoMiner(Miner):
"""
This little fella just repeats the last message it received.
"""

def __init__(self, config=None):
super().__init__(config=config)

async def forward(self, synapse: PromptingSynapse) -> PromptingSynapse:
synapse.completion = synapse.messages[-1]
self.step += 1
return synapse

async def blacklist(self, synapse: PromptingSynapse) -> typing.Tuple[bool, str]:
return False, "All good here"

async def priority(self, synapse: PromptingSynapse) -> float:
return 1e6
from prompting.miners import EchoMiner


# This is the main function, which runs the miner.
30 changes: 1 addition & 29 deletions neurons/miners/test/mock.py
@@ -14,37 +14,9 @@
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.

import time
import typing
import bittensor as bt

# Bittensor Miner Template:
import prompting
from prompting.protocol import PromptingSynapse

# import base miner class which takes care of most of the boilerplate
from neurons.miner import Miner


class MockMiner(Miner):
"""
This little fella responds with a static message.
"""

def __init__(self, config=None):
super().__init__(config=config)

async def forward(self, synapse: PromptingSynapse) -> PromptingSynapse:
synapse.completion = f"Hey you reached mock miner {self.config.wallet.hotkey!r}. Please leave a message after the tone.. Beep!"
self.step += 1
return synapse

async def blacklist(self, synapse: PromptingSynapse) -> typing.Tuple[bool, str]:
return False, "All good here"

async def priority(self, synapse: PromptingSynapse) -> float:
return 1e6
from prompting.miners import MockMiner


# This is the main function, which runs the miner.
42 changes: 1 addition & 41 deletions neurons/miners/test/phrase.py
@@ -14,49 +14,9 @@
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.

import time
import typing
import argparse
import bittensor as bt

# Bittensor Miner Template:
import prompting
from prompting.protocol import PromptingSynapse

# import base miner class which takes care of most of the boilerplate
from neurons.miner import Miner


class PhraseMiner(Miner):
"""
This little fella responds with whatever phrase you give it.
"""

@classmethod
def add_args(cls, parser: argparse.ArgumentParser):
super().add_args(parser)

parser.add_argument(
"--neuron.phrase",
type=str,
help="The phrase to use when running a phrase (test) miner.",
default="Can you please repeat that?",
)

def __init__(self, config=None):
super().__init__(config=config)

async def forward(self, synapse: PromptingSynapse) -> PromptingSynapse:
synapse.completion = self.config.neuron.phrase
self.step += 1
return synapse

async def blacklist(self, synapse: PromptingSynapse) -> typing.Tuple[bool, str]:
return False, "All good here"

async def priority(self, synapse: PromptingSynapse) -> float:
return 1e6
from prompting.miners import PhraseMiner


# This is the main function, which runs the miner.