
v2.0 #185
Merged: 205 commits, Apr 4, 2024
Changes from 1 commit
Commits (205)
612c9eb
initial changes to hf miner
p-ferreira Jan 29, 2024
afe3993
adds 8bit and 4bit config
p-ferreira Jan 30, 2024
a424e6e
simplifies torch type logic
p-ferreira Jan 30, 2024
843f9a2
adapt zephyr miner to hf miner
p-ferreira Jan 30, 2024
e51ff26
update README for hf miner
p-ferreira Jan 30, 2024
65f3b74
adds should_force_model_loading flag to miners
p-ferreira Jan 30, 2024
1ae7540
miners module refactor
p-ferreira Jan 31, 2024
c2d40f6
runs black on miners code
p-ferreira Jan 31, 2024
9669cab
properly adds miner integration with system prompt args
p-ferreira Jan 31, 2024
dde3840
adds check verification on global logger definition
p-ferreira Feb 1, 2024
9b5b91f
clean main func from hf miner
p-ferreira Feb 1, 2024
a8e3bc8
refactor agent code + adds react agent
p-ferreira Feb 5, 2024
369820b
agents adjustments
p-ferreira Feb 5, 2024
53ca503
adds max iteration to agents
p-ferreira Feb 5, 2024
cc2b5c9
fix load llm issue
p-ferreira Feb 5, 2024
ce160ca
fix max tokens param
p-ferreira Feb 5, 2024
b8dd71b
adds toolminer for experiments
p-ferreira Feb 6, 2024
df7638b
fix error case for non wiki match
p-ferreira Feb 7, 2024
8dc340d
increase wiki retries and fix tool_miner bug
p-ferreira Feb 8, 2024
32e2fbf
dsstore gitignore, change miner base
mccrindlebrian Feb 13, 2024
cd45132
edit StreamPromptingSynapse, and add _forward to BaseStreamMinerNeuron
mccrindlebrian Feb 13, 2024
4c1902b
OpenAIUtils
mccrindlebrian Feb 13, 2024
c5066ce
change llm pipeline for streaming
mccrindlebrian Feb 13, 2024
76b7806
hf_miner to streaming
mccrindlebrian Feb 13, 2024
0cd3820
openai stream miner
mccrindlebrian Feb 13, 2024
7f983b4
echo miner
mccrindlebrian Feb 13, 2024
2b30a33
phrase miner
mccrindlebrian Feb 13, 2024
898b4b8
fix bugs
mccrindlebrian Feb 13, 2024
85c46a2
black entire workspace
mccrindlebrian Feb 13, 2024
9ad977a
add try except in langchain miners
mccrindlebrian Feb 13, 2024
4b4af19
format_send on HuggingFaceMiner
mccrindlebrian Feb 13, 2024
f1e5e57
mock miner
mccrindlebrian Feb 13, 2024
4438f15
add handle_response for StreamMiners
mccrindlebrian Feb 13, 2024
b6cf4a5
remove streaming docs
mccrindlebrian Feb 13, 2024
0735046
remove streaming docs
mccrindlebrian Feb 14, 2024
5fe465d
add client-side querying
mccrindlebrian Feb 14, 2024
1e5e9cb
black, debugging
mccrindlebrian Feb 14, 2024
5478ecd
remove format send bc broke streaming
mccrindlebrian Feb 14, 2024
41d3b5c
remove format_send
mccrindlebrian Feb 14, 2024
443f831
black
mccrindlebrian Feb 14, 2024
aa58e8d
Merge branch 'features/hf-miner' into features/streaming
mccrindlebrian Feb 14, 2024
ae8392e
remove .DS_Store
mccrindlebrian Feb 14, 2024
a129564
add tool miner
mccrindlebrian Feb 14, 2024
da90d8e
add timeout checking to stop streaming tokens
mccrindlebrian Feb 14, 2024
1b63bcd
add timeout checking to all miners
mccrindlebrian Feb 14, 2024
78a8281
add return_streamer logic
mccrindlebrian Feb 15, 2024
c7b52c6
remove force
mccrindlebrian Feb 15, 2024
5eeabb7
add thread closing and queue clearing for huggingface models
mccrindlebrian Feb 16, 2024
1f5e54f
add docstrings, more logic for cleaning
mccrindlebrian Feb 16, 2024
f3ec3fc
add timeout check to openai
mccrindlebrian Feb 16, 2024
ab81674
add streaming_batch_size to config
mccrindlebrian Feb 21, 2024
723b64c
remove dep
mccrindlebrian Feb 21, 2024
5792dfc
add wandb logging if wandb.on
mccrindlebrian Feb 21, 2024
3b4df22
remove get_openai_callback, black repo
mccrindlebrian Feb 21, 2024
784b100
fix batch size bug in config
mccrindlebrian Feb 21, 2024
516fb25
add try except to hf miners
mccrindlebrian Feb 21, 2024
1abe949
add try except to tool_miner
mccrindlebrian Feb 21, 2024
38b9a85
Merge branch 'features/streaming' of github.com:opentensor/prompting …
mccrindlebrian Feb 21, 2024
e1f6ddb
add try except to tool_miner
mccrindlebrian Feb 21, 2024
c86431f
add back in blacklist and priorty, test working
mccrindlebrian Feb 21, 2024
6bae36a
change back logging time
mccrindlebrian Feb 21, 2024
22d562a
develop mock dendrite for testing
mccrindlebrian Feb 22, 2024
f0de609
remove dep
mccrindlebrian Feb 22, 2024
425b31d
add tests for stream miners
mccrindlebrian Feb 22, 2024
5f31d16
add docstrings
mccrindlebrian Feb 23, 2024
b7bdf0b
Merge branch 'main' into features/hf-miner
p-ferreira Feb 23, 2024
13c4bea
Merge branch 'features/hf-miner' into features/streaming
mccrindlebrian Feb 23, 2024
157a9f9
change process timing
mccrindlebrian Feb 23, 2024
d3bfcfb
add constants
mccrindlebrian Feb 23, 2024
38232b1
add default port/id
mccrindlebrian Feb 23, 2024
07a73f3
merge staging
mccrindlebrian Feb 28, 2024
8f12375
remove todo
mccrindlebrian Feb 28, 2024
d1255aa
depreciation on agent miners
mccrindlebrian Feb 28, 2024
e94efae
merge main
mccrindlebrian Feb 28, 2024
46f798e
add depreciation to reqs
mccrindlebrian Feb 28, 2024
51de23f
add log_status
mccrindlebrian Feb 28, 2024
39d8ad0
change version to 1.1.2
mccrindlebrian Feb 28, 2024
8957ff9
add docstrings
mccrindlebrian Feb 29, 2024
4f573fb
move exception clause
mccrindlebrian Feb 29, 2024
d31ec32
remove all agents from PR to separate from features/hf-miners
mccrindlebrian Feb 29, 2024
64d67e7
merge pre-staging, resolve conflicts
mccrindlebrian Feb 29, 2024
860ec69
black
mccrindlebrian Feb 29, 2024
f9863b7
test precommit hook
mccrindlebrian Mar 1, 2024
4b0d76f
add .DS_Store and test precommit hook
mccrindlebrian Mar 1, 2024
cb529ee
Merge branch 'pre-staging' into features/hf-miner
p-ferreira Mar 1, 2024
493e7e4
Update prompting/miners/agents/react_agent.py
p-ferreira Mar 1, 2024
7226fa1
Update prompting/miners/openai_miner.py
p-ferreira Mar 1, 2024
6d34528
update docstring
p-ferreira Mar 1, 2024
70ab932
adds deprecation tags to all agent classes
p-ferreira Mar 1, 2024
1ad9571
Manually remove .DS_Store files
steffencruz Mar 1, 2024
41fd140
Remove .DS_Store in all directories
steffencruz Mar 1, 2024
9186c8b
fix flake8 warnings
p-ferreira Mar 1, 2024
7719a48
drops unnecessary run code for base miner
p-ferreira Mar 1, 2024
10fc0ce
Update prompting/miners/openai_miner.py
p-ferreira Mar 1, 2024
acba249
deprecates tool miner
p-ferreira Mar 1, 2024
cb99e73
Merge pull request #91 from opentensor/features/hf-miner
steffencruz Mar 1, 2024
e12e2fc
merge pre-staging, resolve conflicts, delete stream_tutorial docs
mccrindlebrian Mar 2, 2024
10aae24
change test output type
mccrindlebrian Mar 2, 2024
e72b60a
fix breaking tests
mccrindlebrian Mar 4, 2024
e826bfa
fix streaming test with percentage comparison on timeout:
mccrindlebrian Mar 4, 2024
15e6c91
rework where process_time is calculated to minimize time inconsistancies
mccrindlebrian Mar 4, 2024
c13bfff
check if process_time >= timeout
mccrindlebrian Mar 4, 2024
3567259
samples uids only on candidate uids
mccrindlebrian Mar 5, 2024
e19229d
add additional test when k > number of available uids
mccrindlebrian Mar 5, 2024
d55e953
Update neuron.py
steffencruz Mar 6, 2024
01255d1
black entire repo
mccrindlebrian Mar 6, 2024
bf71611
Merge pull request #145 from opentensor/hotfix/black-pre-staging
steffencruz Mar 7, 2024
d44228c
black
mccrindlebrian Mar 7, 2024
c72b1e4
Merge pull request #143 from opentensor/hotfix/fix-set-weights
mccrindlebrian Mar 7, 2024
ab5f6b2
merge pre-staging
mccrindlebrian Mar 11, 2024
66c7dbd
remove tokenizer mapping for models that use different tokenizers tha…
mccrindlebrian Mar 11, 2024
acf3f0a
remove unfinished wording is docstring
mccrindlebrian Mar 11, 2024
bbfdcfe
remove debugging
mccrindlebrian Mar 12, 2024
d969897
add new readme for pre-staging experiments
mccrindlebrian Mar 12, 2024
e316e93
remove old docs
mccrindlebrian Mar 12, 2024
b847daa
add docs for streaming, and change output types of miners
mccrindlebrian Mar 12, 2024
fe6782c
fix typo
mccrindlebrian Mar 12, 2024
c7caf1a
fix client script
mccrindlebrian Mar 12, 2024
5f9c591
add more to readme
mccrindlebrian Mar 12, 2024
aaed470
format docs to stream_miner_template
mccrindlebrian Mar 12, 2024
8b0a3aa
Update prompting/mock.py
mccrindlebrian Mar 12, 2024
9885128
Update prompting/miners/hf_miner.py
mccrindlebrian Mar 12, 2024
deacce0
Update prompting/utils/config.py
steffencruz Mar 12, 2024
ef911db
Update prompting/utils/uids.py
steffencruz Mar 12, 2024
ad9e1ff
Update prompting/utils/uids.py
steffencruz Mar 12, 2024
29680ec
Update prompting/utils/uids.py
steffencruz Mar 12, 2024
6095539
Merge branch 'pre-staging' into features/change-uid-sampling
mccrindlebrian Mar 12, 2024
03c1df4
fix bug with brackets
mccrindlebrian Mar 12, 2024
9b490b4
remove partial kwargs
mccrindlebrian Mar 12, 2024
326a910
Merge pull request #142 from opentensor/features/change-uid-sampling
steffencruz Mar 12, 2024
f879b48
Merge branch 'pre-staging' into features/streaming
mccrindlebrian Mar 12, 2024
720ec70
Apply suggestions from code review
mccrindlebrian Mar 12, 2024
439fb15
remove depreciated miners
mccrindlebrian Mar 12, 2024
e6359ff
Merge branch 'features/streaming' of github.com:opentensor/prompting …
mccrindlebrian Mar 12, 2024
91330cf
Merge pull request #103 from opentensor/features/streaming
steffencruz Mar 12, 2024
d8cbc16
remove miner from init
mccrindlebrian Mar 12, 2024
3c17dc7
Merge pull request #155 from opentensor/hotfix/remove-tool-miner
mccrindlebrian Mar 12, 2024
de1f84c
add torch type to load_pipeline
mccrindlebrian Mar 12, 2024
7395643
add torch type to load_pipeline in miner
mccrindlebrian Mar 12, 2024
94b80c5
Merge pull request #156 from opentensor/hotfix/add-torch-type
mccrindlebrian Mar 12, 2024
aebea20
merge prestaging into vllm
mccrindlebrian Mar 14, 2024
51660a6
fix test_tasks
mccrindlebrian Mar 14, 2024
6e72729
fix broken mockpipeline
mccrindlebrian Mar 14, 2024
28d3984
add customstreamiterator and remove docs
mccrindlebrian Mar 14, 2024
cfa4014
remove deps in openai
mccrindlebrian Mar 14, 2024
3d9eaef
remove old miners
mccrindlebrian Mar 14, 2024
ef221ae
add return streamer back into load_hf_pipeline for mocking
mccrindlebrian Mar 14, 2024
438fd93
black
mccrindlebrian Mar 14, 2024
48c8363
add vllm section
mccrindlebrian Mar 14, 2024
1324eeb
merge pre-staging into vllm streaming
mccrindlebrian Mar 14, 2024
663ed07
adds asyncio semaphore
p-ferreira Mar 14, 2024
f53427a
change docs for axon port
mccrindlebrian Mar 16, 2024
5e3e4bc
Merge pull request #161 from opentensor/hotfix/readme
steffencruz Mar 16, 2024
cb79147
remove return streamer from vllm
mccrindlebrian Mar 18, 2024
a245e02
test code for semaphore
mccrindlebrian Mar 18, 2024
6462df8
refactors streaming miner with asyncio loop approach
p-ferreira Mar 20, 2024
ea37a31
adds custom simple_hf miner with pseudo streaming
p-ferreira Mar 21, 2024
96a155a
overall logging adjustments
p-ferreira Mar 21, 2024
e8a92dd
Merge branch 'main' into pre-staging
p-ferreira Mar 21, 2024
6036fae
Merge branch 'pre-staging' into features/thread_lock_stream
p-ferreira Mar 21, 2024
314544c
fix merge conflicts
p-ferreira Mar 21, 2024
a2404e6
minor updates
p-ferreira Mar 21, 2024
53b60c1
merge pre-staging into vllm streaming
mccrindlebrian Mar 21, 2024
76dc85f
Merge branch 'features/thread_lock_stream' into features/vllm-streaming
p-ferreira Mar 21, 2024
6f7280f
improves stream encapsulation
p-ferreira Mar 21, 2024
915593a
blacks the repo
p-ferreira Mar 21, 2024
7764676
Merge branch 'features/thread_lock_stream' into features/vllm-streaming
p-ferreira Mar 21, 2024
4b63675
Merge pull request #159 from opentensor/features/vllm-streaming
p-ferreira Mar 21, 2024
7af0793
pipeline adjustments
p-ferreira Mar 21, 2024
856675c
precommit-hook-test
p-ferreira Mar 21, 2024
cfed494
black
mccrindlebrian Mar 21, 2024
c8c47bb
Merge pull request #171 from opentensor/hotfix/black
p-ferreira Mar 21, 2024
0e9d53f
drops debug from repo
p-ferreira Mar 21, 2024
0a2084b
Merge pull request #170 from opentensor/features/thread_lock_stream
p-ferreira Mar 21, 2024
10cd731
merge main into prestaing
mccrindlebrian Mar 27, 2024
1c3de3d
adapts forward test to streaming + bugfix on mock metagraph
p-ferreira Mar 28, 2024
5372f27
remove unneeded parsing in protocol
mccrindlebrian Mar 28, 2024
e103c2b
Remove <|im_start|> with other roles
bkb2135 Apr 2, 2024
b6499f6
Update all_cleaners.py
bkb2135 Apr 2, 2024
40ff2f7
Update all_cleaners.py
bkb2135 Apr 2, 2024
c5fb529
Update all_cleaners.py
bkb2135 Apr 2, 2024
8e84844
Update all_cleaners.py
bkb2135 Apr 2, 2024
123070e
Merge pull request #184 from opentensor/bkb2135-patch-1
mccrindlebrian Apr 2, 2024
6331f30
quick dirty fix on forward
p-ferreira Apr 2, 2024
7b3d082
Filter CS Math Questions
bkb2135 Apr 3, 2024
8233c45
first attempt on parallelizing stream calls
p-ferreira Apr 3, 2024
601b6b7
Merge pull request #186 from opentensor/hotfix/reduce-cs-hallucinations
mccrindlebrian Apr 3, 2024
411b932
change timing logs from dendrite process_time to be timeout
mccrindlebrian Apr 3, 2024
718daae
Merge pull request #187 from opentensor/fix-dendrite-timing
mccrindlebrian Apr 3, 2024
5f463aa
brians refactoring on dendrite
p-ferreira Apr 3, 2024
f826d2b
parallelize streaming with results wrapper
p-ferreira Apr 3, 2024
778620e
Merge branch 'pre-staging' into from-main
p-ferreira Apr 3, 2024
fa29bd4
updates versioning
p-ferreira Apr 3, 2024
f022a0a
adds safe failure clause on forward
p-ferreira Apr 3, 2024
9551663
adapts unit test + fix typo
p-ferreira Apr 4, 2024
65354fe
add doc strings and typing
mccrindlebrian Apr 4, 2024
18f7496
adds timeout on forward pass
p-ferreira Apr 4, 2024
195d011
runs black on validator code
p-ferreira Apr 4, 2024
ddade62
Merge branch 'staging' into pre-staging
p-ferreira Apr 4, 2024
55c0f2f
Merge branch 'pre-staging' into from-main
p-ferreira Apr 4, 2024
802eaad
replace readme for streaming, simplify
mccrindlebrian Apr 4, 2024
83ad35b
fix variable missing from merge
p-ferreira Apr 4, 2024
c5277ed
Merge pull request #190 from opentensor/features/improve-readme-for-s…
mccrindlebrian Apr 4, 2024
ec10276
Merge pull request #182 from opentensor/from-main
mccrindlebrian Apr 4, 2024
0da2f4b
Merge pull request #136 from opentensor/pre-staging
p-ferreira Apr 4, 2024
merge main into prestaing
mccrindlebrian committed Mar 27, 2024
commit 10cd731bc420cfde0bd1359915d23361ed6eb1f1
2 changes: 1 addition & 1 deletion .github/workflows/python-package.yml
@@ -25,7 +25,7 @@ jobs:
       run: |
         python -m pip install --upgrade pip
         python -m pip install flake8 pytest black
-        pip install -e .
+        bash install.sh
 
     - name: Lint with flake8
       run: |
2 changes: 1 addition & 1 deletion install.sh
@@ -13,4 +13,4 @@ pip install -r requirements.txt
 pip uninstall uvloop -y
 
 # Reinstalling pydantic and transformers with specific versions that work with our repository and vllm
-pip install pydantic==1.10.7 transformers==4.36.2
+pip install pydantic==1.10.7 transformers==4.36.2 angle_emb==0.3.8 peft==0.9.0
2 changes: 1 addition & 1 deletion prompting/__init__.py
@@ -16,7 +16,7 @@
 # DEALINGS IN THE SOFTWARE.
 
 # Define the version of the template module.
-__version__ = "1.2.0"
+__version__ = "1.3.0"
 version_split = __version__.split(".")
 __spec_version__ = (
     (10000 * int(version_split[0]))
30 changes: 24 additions & 6 deletions prompting/conversation.py
@@ -16,7 +16,7 @@
 from transformers import Pipeline
 
 
-def create_task(llm_pipeline: Pipeline, task_name: str) -> Task:
+def create_task(llm_pipeline: Pipeline, task_name: str, create_reference=True) -> Task:
     wiki_based_tasks = ["summarization", "qa"]
     coding_based_tasks = ["debugging"]
     # TODO: Abstract dataset classes into common dynamic interface
@@ -33,20 +33,38 @@ def create_task(llm_pipeline: Pipeline, task_name: str) -> Task:
         dataset = WikiDateDataset()
 
     if task_name == "summarization":
-        task = SummarizationTask(llm_pipeline=llm_pipeline, context=dataset.next())
+        task = SummarizationTask(
+            llm_pipeline=llm_pipeline,
+            context=dataset.next(),
+            create_reference=create_reference,
+        )
 
     elif task_name == "qa":
-        task = QuestionAnsweringTask(llm_pipeline=llm_pipeline, context=dataset.next())
+        task = QuestionAnsweringTask(
+            llm_pipeline=llm_pipeline,
+            context=dataset.next(),
+            create_reference=create_reference,
+        )
 
     elif task_name == "debugging":
-        task = DebuggingTask(llm_pipeline=llm_pipeline, context=dataset.next())
+        task = DebuggingTask(
+            llm_pipeline=llm_pipeline,
+            context=dataset.next(),
+            create_reference=create_reference,
+        )
 
     elif task_name == "math":
-        task = MathTask(llm_pipeline=llm_pipeline, context=dataset.next())
+        task = MathTask(
+            llm_pipeline=llm_pipeline,
+            context=dataset.next(),
+            create_reference=create_reference,
+        )
 
     elif task_name == "date_qa":
         task = DateQuestionAnsweringTask(
-            llm_pipeline=llm_pipeline, context=dataset.next()
+            llm_pipeline=llm_pipeline,
+            context=dataset.next(),
+            create_reference=create_reference,
         )
 
     else:
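The `create_reference` flag threaded through `create_task` lets the validator build a task without paying for its reference answer up front. A minimal self-contained sketch of the pattern (the class and names here are illustrative, not the repo's actual `Task` API):

```python
# Sketch of deferred reference generation: construct the task cheaply,
# then produce the reference later, when it can overlap with other work.

class Task:
    def __init__(self, context, create_reference=True):
        self.context = context
        self.reference = None
        if create_reference:
            self.generate_reference()

    def generate_reference(self):
        # Stand-in for an expensive LLM call.
        self.reference = f"reference for {self.context}"
        return self.reference


# Defer the expensive step at construction time...
task = Task(context="What is 2+2?", create_reference=False)
assert task.reference is None

# ...and run it later.
task.generate_reference()
print(task.reference)  # → reference for What is 2+2?
```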
31 changes: 28 additions & 3 deletions prompting/forward.py
@@ -18,6 +18,7 @@
 
 import time
 import sys
+import asyncio
 import numpy as np
 import bittensor as bt
 
@@ -29,6 +30,22 @@
 from prompting.rewards import RewardResult
 from prompting.utils.uids import get_random_uids
 from prompting.utils.logging import log_event
+from prompting.utils.misc import async_log
+
+
+@async_log
+async def generate_reference(agent):
+    loop = asyncio.get_running_loop()
+    result = await loop.run_in_executor(
+        None, agent.task.generate_reference, agent.llm_pipeline
+    )
+    return result
+
+
+@async_log
+async def execute_dendrite_call(dendrite_call):
+    responses = await dendrite_call
+    return responses
 
 
 async def handle_response(responses: Dict[int, Awaitable]) -> List[bt.Synapse]:
@@ -137,13 +154,12 @@ async def run_step(
         **response_event.__state_dict__(),
     }
 
-    log_event(self, event)
-
     return event
 
 
 async def forward(self):
     bt.logging.info("🚀 Starting forward loop...")
+    forward_start_time = time.time()
 
     while True:
         bt.logging.info(
@@ -155,7 +171,11 @@ async def forward(self):
         )
         bt.logging.info(f"📋 Creating {task_name} task... ")
         try:
-            task = create_task(llm_pipeline=self.llm_pipeline, task_name=task_name)
+            task = create_task(
+                llm_pipeline=self.llm_pipeline,
+                task_name=task_name,
+                create_reference=False,
+            )
             break
         except Exception as e:
             bt.logging.error(
@@ -180,6 +200,11 @@ async def forward(self):
             timeout=self.config.neuron.timeout,
             exclude=exclude_uids,
         )
+
+        # Adds forward time to event and logs it to wandb
+        event["forward_time"] = time.time() - forward_start_time
+        log_event(self, event)
+
         exclude_uids += event["uids"]
         task.complete = True
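The `async_log`-wrapped coroutines above exist so that reference generation can overlap with the network call to miners. A self-contained sketch of the idea using `asyncio.gather` (timings and function bodies are stand-ins, not the repo's code):

```python
import asyncio
import time

# Awaiting the two coroutines concurrently makes a step cost roughly
# max(reference_time, dendrite_time) instead of their sum.

async def generate_reference():
    await asyncio.sleep(0.2)  # stand-in for the LLM call run in an executor
    return "reference"

async def execute_dendrite_call():
    await asyncio.sleep(0.3)  # stand-in for querying miners over the network
    return ["response"]

async def run_step():
    start = time.perf_counter()
    reference, responses = await asyncio.gather(
        generate_reference(), execute_dendrite_call()
    )
    elapsed = time.perf_counter() - start
    return reference, responses, elapsed

reference, responses, elapsed = asyncio.run(run_step())
# Concurrent: elapsed is roughly max(0.2, 0.3) ≈ 0.3 s, not 0.5 s.
print(reference, responses, round(elapsed, 2))
```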
12 changes: 8 additions & 4 deletions prompting/llms/vllm_llm.py
@@ -154,14 +154,18 @@ def _make_prompt(self, messages: List[Dict[str, str]]):
 
         for message in messages:
             if message["role"] == "system":
-                composed_prompt += f'<|system|>{message["content"]} '
+                composed_prompt += (
+                    f'<|im_start|>system\n{message["content"]} <|im_end|>'
+                )
             elif message["role"] == "user":
-                composed_prompt += f'<|user|>{message["content"]} '
+                composed_prompt += f'<|im_start|>user\n{message["content"]} <|im_end|>'
             elif message["role"] == "assistant":
-                composed_prompt += f'<|assistant|>{message["content"]} '
+                composed_prompt += (
+                    f'<|im_start|>assistant\n{message["content"]} <|im_end|>'
+                )
 
         # Adds final tag indicating the assistant's turn
-        composed_prompt += "<|assistant|>"
+        composed_prompt += "<|im_start|>assistant\n"
         return composed_prompt
 
     def forward(self, messages: List[Dict[str, str]]):
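The template change above moves from Zephyr-style role tags to ChatML, the format expected by the new default Nous-Hermes-2 model. A standalone sketch of the resulting prompt builder (the repo's `_make_prompt` branches per role, but since all three branches produce the same shape, a generic loop is equivalent for these roles):

```python
from typing import Dict, List

def make_chatml_prompt(messages: List[Dict[str, str]]) -> str:
    """Compose a ChatML prompt: <|im_start|>role\\ncontent <|im_end|> per turn."""
    composed = ""
    for message in messages:
        role = message["role"]  # "system", "user", or "assistant"
        composed += f'<|im_start|>{role}\n{message["content"]} <|im_end|>'
    # Final tag indicating it is the assistant's turn to generate.
    composed += "<|im_start|>assistant\n"
    return composed

prompt = make_chatml_prompt(
    [
        {"role": "system", "content": "You are helpful."},
        {"role": "user", "content": "Hi!"},
    ]
)
print(prompt)
```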
2 changes: 1 addition & 1 deletion prompting/tools/datasets/math.py
@@ -57,7 +57,7 @@ def get(
         """
         bt.logging.info(f"Getting math problem {name!r}")
         info = mathgenerator.generate_context(name, **kwargs)
-        if info["reward_type"] != "float":
+        if info["reward_type"] != "float" or info["topic"] == "computer_science":
             return None
 
         math_words = [
2 changes: 1 addition & 1 deletion prompting/utils/config.py
@@ -269,7 +269,7 @@ def add_validator_args(cls, parser):
         "--neuron.model_id",
         type=str,
         help="The model to use for the validator.",
-        default="HuggingFaceH4/zephyr-7b-beta",
+        default="NousResearch/Nous-Hermes-2-SOLAR-10.7B",
     )
 
     parser.add_argument(
24 changes: 24 additions & 0 deletions prompting/utils/misc.py
@@ -17,6 +17,9 @@
 # DEALINGS IN THE SOFTWARE.
 
 import time
+import asyncio
+import threading
+import bittensor as bt
 from math import floor
 from typing import Callable, Any
 from functools import lru_cache, update_wrapper
@@ -108,3 +111,24 @@ def ttl_get_block(self) -> int:
     Note: self here is the miner or validator instance
     """
     return self.subtensor.get_current_block()
+
+
+def async_log(func):
+    async def wrapper(*args, **kwargs):
+        start_time = time.time()
+        task_id = id(asyncio.current_task())
+        func_name = func.__name__
+        bt.logging.debug(f"Starting {func_name} on task {task_id} at {start_time}")
+
+        # Execute the wrapped function
+        result = await func(*args, **kwargs)
+
+        end_time = time.time()
+        execution_time = end_time - start_time
+        bt.logging.debug(
+            f"Completed {func_name} on task {task_id} in {execution_time} seconds"
+        )
+
+        return result
+
+    return wrapper
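A usage sketch for the `async_log` decorator added above, with `bt.logging.debug` swapped for `print` so the example runs standalone:

```python
import asyncio
import time

# Same decorator shape as in prompting/utils/misc.py, minus the bittensor
# logger dependency: time an awaited coroutine and report start/end.
def async_log(func):
    async def wrapper(*args, **kwargs):
        start_time = time.time()
        task_id = id(asyncio.current_task())
        print(f"Starting {func.__name__} on task {task_id}")
        result = await func(*args, **kwargs)
        print(f"Completed {func.__name__} in {time.time() - start_time:.3f} seconds")
        return result
    return wrapper

@async_log
async def fetch(value):
    await asyncio.sleep(0.05)  # stand-in for real async work
    return value * 2

result = asyncio.run(fetch(21))
print(result)  # → 42
```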
6 changes: 1 addition & 5 deletions scripts/setup_ubuntu_machine.sh
Original file line number Diff line number Diff line change
Expand Up @@ -23,10 +23,6 @@ git clone https://github.com/opentensor/prompting.git
# Change to the prompting directory
cd prompting

# Install Python dependencies
python3 -m pip install -r requirements.txt

# Install prompting package
python3 -m pip install -e .
bash install.sh

echo "Script completed successfully."
59 changes: 59 additions & 0 deletions tests/test_forward.py
Original file line number Diff line number Diff line change
@@ -0,0 +1,59 @@
import pytest
import time
import asyncio
import sys
from functools import partial
import bittensor as bt
from prompting.forward import run_step
from neurons.validator import Validator
from prompting.tasks import Task, QuestionAnsweringTask
from .fixtures.task import WIKI_CONTEXT
from prompting.agent import HumanAgent
from unittest.mock import patch, Mock
from prompting.protocol import PromptingSynapse

sys.argv = [__file__, "--mock", "--wandb.off", "--neuron.tasks", "qa"]
mock_neuron = Validator()

task = QuestionAnsweringTask(
llm_pipeline=mock_neuron.llm_pipeline, context=WIKI_CONTEXT, create_reference=False
)


def generate_reference(x, delay=1):
time.sleep(delay)
return "Fake reference"


async def mock_dendrite_call(delay=1, **kwargs):
time.sleep(delay)

mock_synapse = PromptingSynapse(roles=["user"], messages=[""])
mock_synapse.completion = "Fake response"

return [mock_synapse]


@pytest.mark.parametrize(
"generate_reference_time, dendrite_time, expected_forward_time",
[(0.5, 0.5, 0.5), (0.5, 0.4, 0.5), (0.4, 0.5, 0.5)],
)
def test_generate_reference_parallel_to_dendrite(
generate_reference_time, dendrite_time, expected_forward_time
):
task.generate_reference = partial(generate_reference, delay=generate_reference_time)
mock_agent = HumanAgent(task, mock_neuron.llm_pipeline)

mock_neuron.dendrite = partial(mock_dendrite_call, delay=dendrite_time)

event = asyncio.run(run_step(mock_neuron, mock_agent, k=4, timeout=0.1))

step_time = event["step_time"]
reward_pipeline_time = sum(
event[key] for key in event if key.endswith("batch_time")
)
network_and_reference_gen_time = step_time - reward_pipeline_time

assert network_and_reference_gen_time == pytest.approx(
expected_forward_time, abs=0.1
)
5 changes: 3 additions & 2 deletions tests/test_mock.py
@@ -76,6 +76,7 @@ async def run():
             timeout=timeout,
         )
 
+    eps = 0.2
     responses = asyncio.run(run())
     for synapse in responses:
         assert (
@@ -89,9 +90,9 @@ async def run():
 
         # check that the dendrite take between min_time and max_time
         assert min_time <= dendrite.process_time
-        assert dendrite.process_time <= max_time + 0.1
+        assert dendrite.process_time <= max_time + eps
         # check that responses which take longer than timeout have 408 status code
-        if dendrite.process_time >= timeout + 0.1:
+        if dendrite.process_time >= timeout + eps:
             assert dendrite.status_code == 408
             assert dendrite.status_message == "Timeout"
             assert synapse.completion == ""