
Staging #110

Merged: 85 commits, Feb 21, 2024
a812a8a
Add Date and Math Tasks
bkb2135 Jan 30, 2024
337ffd4
Date Scoring
bkb2135 Jan 30, 2024
d704d21
Update Float Scoring
bkb2135 Feb 4, 2024
c1515d6
cleanup date.py
bkb2135 Feb 5, 2024
25bd780
update requirements
bkb2135 Feb 5, 2024
29711c1
Update requirements.txt
bkb2135 Feb 5, 2024
38427b9
Gaussian Date Scoring
bkb2135 Feb 6, 2024
3f50e30
Merge branch 'features/task-expansion' of https://github.com/opentens…
bkb2135 Feb 6, 2024
980c41e
Add unit tests
bkb2135 Feb 7, 2024
b21d1fb
Update unittests
bkb2135 Feb 7, 2024
ef1ec4f
Update tests/test_scoring.py
bkb2135 Feb 8, 2024
b2ec0fe
Update tests/test_scoring.py
bkb2135 Feb 8, 2024
e0678e2
Update tests/test_scoring.py
bkb2135 Feb 8, 2024
dde74ac
Update tests/test_scoring.py
bkb2135 Feb 8, 2024
5288dce
Update tests/test_scoring.py
bkb2135 Feb 8, 2024
65df744
Update prompting/rewards/float_diff.py
bkb2135 Feb 8, 2024
cf236b9
Update prompting/rewards/float_diff.py
bkb2135 Feb 8, 2024
13b62f6
Update prompting/rewards/float_diff.py
bkb2135 Feb 8, 2024
769d8b5
Update prompting/rewards/float_diff.py
bkb2135 Feb 8, 2024
69ff1f2
Update prompting/rewards/float_diff.py
bkb2135 Feb 8, 2024
2c73013
Update prompting/rewards/float_diff.py
bkb2135 Feb 8, 2024
7731850
Add documentation to date and float
bkb2135 Feb 8, 2024
5d7a525
Add forward_words to unit test
bkb2135 Feb 13, 2024
0ea3480
fix logging in case axon is not set
surcyf123 Feb 13, 2024
382c7e4
Update README.md
dougsillars Feb 16, 2024
01cfa4e
Update mock objects for more control and improved testing
steffencruz Feb 17, 2024
e61c512
Add tests for mock objects
steffencruz Feb 17, 2024
cfae3bb
Remove comment
steffencruz Feb 17, 2024
cc7432f
Improved validator logging
steffencruz Feb 18, 2024
b61168e
Improved miner logging
steffencruz Feb 18, 2024
7617ded
Increment miner step counter on forward
steffencruz Feb 18, 2024
051ace1
Add base dataset
steffencruz Feb 19, 2024
94cda65
Add selector class
steffencruz Feb 19, 2024
5d40ff7
Add wiki datasets (date and normal)
steffencruz Feb 19, 2024
41f1615
Add context class
steffencruz Feb 19, 2024
481632b
Add mock dataset
steffencruz Feb 19, 2024
17873e9
Add code dataset
steffencruz Feb 19, 2024
a6e7b6a
Add math dataset
steffencruz Feb 19, 2024
b2c8425
Add init
steffencruz Feb 19, 2024
3768036
Remove old monolothic dataset file
steffencruz Feb 19, 2024
9fcf494
Update submodule init
steffencruz Feb 19, 2024
ae9769a
Refactor QA task to use new context class, and cleanup
steffencruz Feb 19, 2024
849ab97
Refactor summarization task to use new context class, and cleanup
steffencruz Feb 19, 2024
7171a28
Update base task so that context can be unpacked into state dict
steffencruz Feb 19, 2024
d5ceec4
Refactor date QA task to use new context class, and cleanup
steffencruz Feb 19, 2024
9824cb5
Refactor math task to use new context class, and cleanup
steffencruz Feb 19, 2024
edb0a39
Refactor debugging task to use new context class, and cleanup
steffencruz Feb 19, 2024
b0cc7da
Add TASKS list in submodule init
steffencruz Feb 19, 2024
1f6b2af
Add MaxRetryError exception class
steffencruz Feb 19, 2024
b37b110
Catch MaxRetryError and continue validation
steffencruz Feb 19, 2024
7682f76
Update dependencies: synapse fork of mathegenerator and wiki sections
steffencruz Feb 19, 2024
bfdf7b3
Update fixtures for dataset tests to use updated dataset and context …
steffencruz Feb 19, 2024
73ed922
Update tests for dataset and context
steffencruz Feb 19, 2024
5aa8673
Update tests for tasks
steffencruz Feb 19, 2024
5dca2dd
Add pre-staging to workflows
steffencruz Feb 19, 2024
e282a27
Fix dataset name typos
steffencruz Feb 19, 2024
5543e90
Import REWARD_MODELS dict from pipeline for global access to reward m…
steffencruz Feb 19, 2024
a57a491
Import TASKS from tasks submodule
steffencruz Feb 19, 2024
a58c973
Remove redundant args
steffencruz Feb 19, 2024
9694848
Remove redundant args
steffencruz Feb 19, 2024
9229eac
Remove redundant args
steffencruz Feb 19, 2024
0dbd3f0
Add more task fields to tests
steffencruz Feb 19, 2024
77ee3ed
Add tests for reward and penalty definitions and make test_task_field…
steffencruz Feb 19, 2024
811d93a
Ensure a default wallet exists in test environemt
steffencruz Feb 19, 2024
565212c
Add pre-staging to workflows
steffencruz Feb 19, 2024
6dee0f4
Merge pull request #101 from surcyf123/patch-1
steffencruz Feb 19, 2024
3486027
Add config for decay_alpha
steffencruz Feb 19, 2024
c4ee6c1
Apply score decay
steffencruz Feb 19, 2024
5112b47
Remove hanging reference to score decay
steffencruz Feb 20, 2024
24e434e
Update math.py
bkb2135 Feb 20, 2024
14149c2
Merge pull request #111 from opentensor/features/decay-scores
steffencruz Feb 20, 2024
dd74a4c
fix broken unit test by adding a mock wallet to mock dendrite
p-ferreira Feb 20, 2024
8586a99
Update config.py
bkb2135 Feb 20, 2024
3d7b577
Merge pull request #109 from opentensor/features/context
steffencruz Feb 20, 2024
a5db577
Merge pull request #107 from opentensor/mock/tests
steffencruz Feb 20, 2024
6af33ec
Merge pull request #108 from opentensor/logging/neuron-info
steffencruz Feb 20, 2024
c7ff221
Merge remote-tracking branch 'origin/pre-staging' into features/task-…
bkb2135 Feb 21, 2024
e6ffa46
Update Math Task to use context
bkb2135 Feb 21, 2024
dfca67b
Merge pull request #105 from dougsillars/main
steffencruz Feb 21, 2024
1200586
Merge pull request #88 from opentensor/features/task-expansion
steffencruz Feb 21, 2024
c9d0b02
update versioning
p-ferreira Feb 21, 2024
745f68e
Remove unecessary logging
bkb2135 Feb 21, 2024
85247eb
Merge branch 'pre-staging' of https://github.com/opentensor/prompting…
bkb2135 Feb 21, 2024
f7a02b5
Fix bug when handling disambiguiation errors and replace expections w…
steffencruz Feb 21, 2024
f751e10
Merge pull request #112 from opentensor/pre-staging
steffencruz Feb 21, 2024
4 changes: 2 additions & 2 deletions .github/workflows/python-package.yml
@@ -5,9 +5,9 @@ name: Python package

 on:
   push:
-    branches: [ "main", "staging" ]
+    branches: [ "main", "staging", "pre-staging" ]
   pull_request:
-    branches: [ "main", "staging" ]
+    branches: [ "main", "staging", "pre-staging" ]
 
 jobs:
   build:
4 changes: 2 additions & 2 deletions README.md
@@ -39,7 +39,7 @@ The design of the network's incentive mechanism is based on two important requir

 It is imperative that the validation process engages with miners in the same way as real users. The reasons for this are as follows:
 - Miners will compete and continuously improve at performing the validation task(s), and so this fine tuning should be aligned with the goals of the subnet.
-- It should not be possible to distinguish between validation and API client queries so that miners always serve requests (even when they do not recieve emissions for doing so).
+- It should not be possible to distinguish between validation and API client queries so that miners always serve requests (even when they do not receive emissions for doing so).
 
 In the context of this subnet, miners are required to be intelligent AI assistants that provide helpful and correct responses to a range of queries.

@@ -104,7 +104,7 @@ These validators are designed to run and update themselves automatically. To run
 pm2 start run.sh --name s1_validator_autoupdate -- --wallet.name <your-wallet-name> --wallet.hotkey <your-wallet-hot-key>
 ```
 
-This will run **two** PM2 process: one for the validator which is called `s1_validator_main_process` by default (you can change this in `run.sh`), and one for the run.sh script (in step 4, we named it `s1_validator_autoupdate`). The script will check for updates every 30 minutes, if there is an update then it will pull it, install it, restart `s1_validator_main_process` and then restart itself.
+This will run **two** PM2 processes: one for the validator which is called `s1_validator_main_process` by default (you can change this in `run.sh`), and one for the run.sh script (in step 4, we named it `s1_validator_autoupdate`). The script will check for updates every 30 minutes, if there is an update then it will pull it, install it, restart `s1_validator_main_process` and then restart itself.



6 changes: 3 additions & 3 deletions neurons/miner.py
@@ -176,11 +176,11 @@ def log_event(self, timing: float, prompt: str, completion: str, system_prompt:

 # This is the main function, which runs the miner.
 if __name__ == "__main__":
-    with Miner() as miner:
+    with Miner() as m:
         while True:
-            bt.logging.info("Miner running...", time.time())
+            bt.logging.info(f"Miner running:: network: {m.subtensor.network} | block: {m.block} | step: {m.step} | uid: {m.uid} | last updated: {m.block-m.metagraph.last_update[m.uid]} | trust: {m.metagraph.trust[m.uid]:.3f} | emission {m.metagraph.emission[m.uid]:.3f}")
             time.sleep(5)
 
-            if miner.should_exit:
+            if m.should_exit:
                 bt.logging.warning("Ending miner...")
                 break
16 changes: 9 additions & 7 deletions neurons/miners/openai/miner.py
@@ -53,16 +53,16 @@ def __init__(self, config=None):

         if self.config.wandb.on:
             self.identity_tags = ("openai_miner", ) + (self.config.neuron.model_id, )
-            _ = load_dotenv(find_dotenv())
-            api_key = os.environ.get("OPENAI_API_KEY")
 
+        _ = load_dotenv(find_dotenv())
+        api_key = os.environ.get("OPENAI_API_KEY")
 
         # Set openai key and other args
         self.model = ChatOpenAI(
             api_key=api_key,
             model_name=self.config.neuron.model_id,
             max_tokens = self.config.neuron.max_tokens,
-            temperature = self.config.neuron.temperature,
+            temperature = self.config.neuron.temperature,
         )
 
         self.system_prompt = "You are a friendly chatbot who always responds concisely and helpfully. You are honest about things you don't know."
@@ -122,7 +122,7 @@ async def forward(

             role = synapse.roles[-1]
             message = synapse.messages[-1]
 
             bt.logging.debug(f"💬 Querying openai: {prompt}")
             response = chain.invoke(
                 {"role": role, "input": message}
@@ -133,14 +133,16 @@

             if self.config.wandb.on:
                 self.log_event(
-                    timing=synapse_latency,
+                    timing=synapse_latency,
                     prompt=message,
                     completion=response,
                     system_prompt=self.system_prompt,
                     extra_info=self.get_cost_logging(cb)
                 )
 
             bt.logging.debug(f"✅ Served Response: {response}")
+            self.step += 1
+
             return synapse
         except Exception as e:
             bt.logging.error(f"Error in forward: {e}")
@@ -160,4 +162,4 @@

         if miner.should_exit:
             bt.logging.warning("Ending miner...")
-            break
+            break
2 changes: 1 addition & 1 deletion neurons/miners/test/echo.py
@@ -41,7 +41,7 @@ async def forward(
     ) -> PromptingSynapse:
 
         synapse.completion = synapse.messages[-1]
-
+        self.step += 1
         return synapse
 
     async def blacklist(
2 changes: 1 addition & 1 deletion neurons/miners/test/mock.py
@@ -41,7 +41,7 @@ async def forward(
     ) -> PromptingSynapse:
 
         synapse.completion = f'Hey you reached mock miner {self.config.wallet.hotkey!r}. Please leave a message after the tone.. Beep!'
-
+        self.step += 1
         return synapse
 
     async def blacklist(
2 changes: 1 addition & 1 deletion neurons/miners/test/phrase.py
@@ -54,7 +54,7 @@ async def forward(
     ) -> PromptingSynapse:
 
         synapse.completion = self.config.neuron.phrase
-
+        self.step += 1
         return synapse
 
     async def blacklist(
20 changes: 11 additions & 9 deletions neurons/miners/wiki_agent/miner.py
@@ -29,7 +29,7 @@

 class WikipediaAgentMiner(Miner):
     """Langchain-based miner which uses OpenAI's API as the LLM. This uses the ReAct framework.
 
     You should also install the dependencies for this miner, which can be found in the requirements.txt file in this directory.
     """
     @classmethod
@@ -41,14 +41,14 @@ def add_args(cls, parser: argparse.ArgumentParser):

     def __init__(self, config=None):
         super().__init__(config=config)
 
         bt.logging.info(f"🤖📖 Initializing wikipedia agent with model {self.config.neuron.model_id}...")
 
         if self.config.wandb.on:
             self.identity_tags = ("wikipedia_agent_miner", ) + (self.config.neuron.model_id, )
-            _ = load_dotenv(find_dotenv())
 
+        _ = load_dotenv(find_dotenv())
 
         self.agent = WikiAgent(self.config.neuron.model_id, self.config.neuron.temperature)
         self.accumulated_total_tokens = 0
         self.accumulated_prompt_tokens = 0
@@ -99,26 +99,28 @@ async def forward(
             with get_openai_callback() as cb:
                 t0 = time.time()
                 bt.logging.debug(f"📧 Message received, forwarding synapse: {synapse}")
 
                 message = synapse.messages[-1]
 
                 bt.logging.debug(f"💬 Querying openai and wikipedia: {message}")
 
                 response = self.agent.run(message)
 
                 synapse.completion = response
                 synapse_latency = time.time() - t0
 
                 if self.config.wandb.on:
                     self.log_event(
-                        timing=synapse_latency,
+                        timing=synapse_latency,
                         prompt=message,
                         completion=response,
                         system_prompt='',
                         extra_info=self.get_cost_logging(cb)
                     )
 
                 bt.logging.debug(f"✅ Served Response: {response}")
+                self.step += 1
+
                 return synapse
         except Exception as e:
             bt.logging.error(f"Error in forward: {e}")
1 change: 1 addition & 0 deletions neurons/miners/zephyr/miner.py
@@ -122,6 +122,7 @@ async def forward(self, synapse: PromptingSynapse) -> PromptingSynapse:

             bt.logging.debug(f"✅ Served Response: {response}")
             torch.cuda.empty_cache()
+            self.step += 1
 
         except Exception as e:
             bt.logging.error(f"Error: {e}")
10 changes: 5 additions & 5 deletions neurons/validator.py
@@ -55,7 +55,7 @@ def __init__(self, config=None):
             if p > 0
         ]
         # Load the reward pipeline
-        self.reward_pipeline = RewardPipeline(selected_tasks=self.active_tasks, device=self.device)
+        self.reward_pipeline = RewardPipeline(selected_tasks=self.active_tasks, device=self.device)
 
     async def forward(self):
         """
@@ -100,12 +100,12 @@ def __exit__(self, exc_type, exc_value, traceback):

 # The main function parses the configuration and runs the validator.
 if __name__ == "__main__":
-    with Validator() as validator:
+    with Validator() as v:
         while True:
-            bt.logging.info("Validator running...", time.time())
+            bt.logging.info(f"Validator running:: network: {v.subtensor.network} | block: {v.block} | step: {v.step} | uid: {v.uid} | last updated: {v.block-v.metagraph.last_update[v.uid]} | vtrust: {v.metagraph.validator_trust[v.uid]:.3f} | emission {v.metagraph.emission[v.uid]:.3f}")
             time.sleep(5)
 
-            if validator.should_exit:
+            if v.should_exit:
                 bt.logging.warning("Ending validator...")
                 break

2 changes: 1 addition & 1 deletion prompting/__init__.py
@@ -16,7 +16,7 @@
 # DEALINGS IN THE SOFTWARE.
 
 # Define the version of the template module.
-__version__ = "1.0.4"
+__version__ = "1.1.0"
 version_split = __version__.split(".")
 __spec_version__ = (
     (10000 * int(version_split[0]))
35 changes: 24 additions & 11 deletions prompting/base/validator.py
@@ -29,18 +29,18 @@
 from prompting.base.neuron import BaseNeuron
 from prompting.mock import MockDendrite
 from prompting.utils.config import add_validator_args
-
+from prompting.utils.exceptions import MaxRetryError
 
 class BaseValidatorNeuron(BaseNeuron):
     """
     Base class for Bittensor validators. Your validator should inherit from this class.
     """
 
     @classmethod
     def add_args(cls, parser: argparse.ArgumentParser):
         super().add_args(parser)
-        add_validator_args(cls, parser)
+        add_validator_args(cls, parser)
 
 
     def __init__(self, config=None):
         super().__init__(config=config)
@@ -127,10 +127,15 @@ def run(self):

         # Check that validator is registered on the network.
         self.sync()
 
-        bt.logging.info(
-            f"Running validator {self.axon} on network: {self.config.subtensor.chain_endpoint} with netuid: {self.config.netuid}"
-        )
+        if not self.config.neuron.axon_off:
+            bt.logging.info(
+                f"Running validator {self.axon} on network: {self.config.subtensor.chain_endpoint} with netuid: {self.config.netuid}"
+            )
+        else:
+            bt.logging.info(
+                f"Running validator on network: {self.config.subtensor.chain_endpoint} with netuid: {self.config.netuid}"
+            )
 
         bt.logging.info(f"Validator starting at block: {self.block}")

@@ -140,7 +145,14 @@
                 bt.logging.info(f"step({self.step}) block({self.block})")
 
                 # Run multiple forwards concurrently.
-                self.loop.run_until_complete(self.concurrent_forward())
+                try:
+                    self.loop.run_until_complete(self.concurrent_forward())
+                except torch.cuda.OutOfMemoryError as e:
+                    bt.logging.error(f"Out of memory error: {e}")
+                    continue
+                except MaxRetryError as e:
+                    bt.logging.error(f"MaxRetryError: {e}")
+                    continue

# Check if we should exit.
if self.should_exit:
@@ -161,8 +173,8 @@
         except Exception as err:
             bt.logging.error("Error during validation", str(err))
             bt.logging.debug(print_exception(type(err), err, err.__traceback__))
-            self.should_exit = True
+            self.should_exit = True

def run_in_background_thread(self):
"""
Starts the validator's operations in a background thread upon entering the context.
@@ -323,6 +335,7 @@ def update_scores(self, rewards: torch.FloatTensor, uids: List[int]):
         # shape: [ metagraph.n ]
         alpha = self.config.neuron.moving_average_alpha
         self.scores = alpha * step_rewards + (1 - alpha) * self.scores
+        self.scores = (self.scores - self.config.neuron.decay_alpha).clamp(min=0)
         bt.logging.debug(f"Updated moving avg scores: {self.scores}")

def save_state(self):
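The new `decay_alpha` line turns the score update into an exponential moving average followed by a constant decay clamped at zero, so miners that stop earning rewards drift down to zero rather than holding their score indefinitely. A minimal torch-free sketch of the arithmetic (parameter values hypothetical; the real code operates on torch tensors and reads both values from `self.config.neuron`):

```python
def update_scores(scores, step_rewards, moving_average_alpha=0.1, decay_alpha=0.001):
    # Exponential moving average of rewards (pre-existing behaviour) ...
    ema = [moving_average_alpha * r + (1 - moving_average_alpha) * s
           for s, r in zip(scores, step_rewards)]
    # ... followed by the constant decay added in this PR, clamped at zero,
    # mirroring `(self.scores - decay_alpha).clamp(min=0)`.
    return [max(v - decay_alpha, 0.0) for v in ema]

# A miner that received reward 1.0 gains score; an idle miner stays at zero.
scores = update_scores([0.0, 0.0], [1.0, 0.0])
```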
8 changes: 4 additions & 4 deletions prompting/conversation.py
@@ -8,9 +8,9 @@
 )
 from prompting.tools import (
     WikiDataset,
-    CodingDataset,
+    HFCodingDataset,
     MathDataset,
-    DateQADataset,
+    WikiDateDataset,
 )
 
 from transformers import Pipeline
@@ -26,13 +26,13 @@ def create_task(llm_pipeline: Pipeline, task_name: str) -> Task:
         dataset = WikiDataset()
 
     elif task_name in coding_based_tasks:
-        dataset = CodingDataset()
+        dataset = HFCodingDataset()
 
     elif task_name == "math":
         dataset = MathDataset()
 
     elif task_name == "date_qa":
-        dataset = DateQADataset()
+        dataset = WikiDateDataset()
 
     if task_name == "summarization":
         task = SummarizationTask(llm_pipeline=llm_pipeline, context=dataset.next())
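The renames above (`CodingDataset` to `HFCodingDataset`, `DateQADataset` to `WikiDateDataset`) leave the dispatch logic itself unchanged: each task name still selects one dataset class. A condensed, hypothetical sketch of that mapping (the real `create_task` instantiates the dataset classes and builds a `Task`; here plain strings stand in for the classes, and the task-name groupings are illustrative):

```python
# Hypothetical stand-in for the dispatch in prompting/conversation.py.
# Only the dataset class names come from the diff; the table form is ours.
DATASET_FOR_TASK = {
    "summarization": "WikiDataset",
    "qa": "WikiDataset",
    "debugging": "HFCodingDataset",
    "math": "MathDataset",
    "date_qa": "WikiDateDataset",
}

def dataset_for(task_name: str) -> str:
    # Mirrors the if/elif chain: unknown task names fall through to an error.
    if task_name not in DATASET_FOR_TASK:
        raise ValueError(f"Unknown task: {task_name}")
    return DATASET_FOR_TASK[task_name]
```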