beam search uses tuple endpoints (i.e. `(address, port)`), while the DHT has switched to string endpoints
beam search also needs one extra lookup step, because a prefix key such as `prefix.123.321` is not the same as the expert uid `expert.123.321` (see the first sketch below)
we may no longer need our custom parallel autograd if PyTorch implements it natively (it currently does not)
remove hivemind.utils.autograd in favor of _RemoteExpertCallMany (see the batched-call sketch below)
add a more feature-rich test for moe.py that spins up several DHT nodes and experts (a rough test skeleton is sketched below)
cancel queries that are still in flight once first_k_active has found enough active experts? (see the asyncio sketch below)
when declaring experts, introduce some kind of "grace period": only declare prefixes that have not been updated within that period (rationale: the first prefixes are likely to have been updated by other peers already; sketched below)
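Rough sketches for some of the items above. First, the two beam-search fixes: converting legacy tuple endpoints to the string form the DHT now uses, and the extra step that turns a winning prefix into the corresponding expert uid. The `prefix.`/`expert.` naming follows the issue text; the `dht.get` lookup is a hypothetical stand-in for the real DHT call:

```python
from typing import Optional, Tuple

def to_string_endpoint(endpoint: Tuple[str, int]) -> str:
    """Convert a legacy (address, port) tuple into the 'address:port' string the DHT expects."""
    address, port = endpoint
    return f"{address}:{port}"

def resolve_expert_uid(dht, found_prefix: str) -> Optional[str]:
    """Extra beam-search step: a winning prefix like 'prefix.123.321' is not an expert uid;
    translate it into 'expert.123.321' and verify that such an expert was actually declared."""
    expert_uid = 'expert' + found_prefix[len('prefix'):]
    return expert_uid if dht.get(expert_uid) is not None else None  # dht.get is hypothetical
```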
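To illustrate the `_RemoteExpertCallMany` direction (one autograd node that dispatches to many remote experts in parallel, instead of a separate parallel-autograd helper per expert), here is a heavily simplified sketch; the class name and the `remote_forward`/`remote_backward` stubs are hypothetical, not the actual hivemind interface:

```python
import torch
from concurrent.futures import ThreadPoolExecutor

class _RemoteCallManySketch(torch.autograd.Function):
    """Sketch: forward/backward one batch of inputs through many remote experts at once."""

    @staticmethod
    def forward(ctx, inputs: torch.Tensor, *experts) -> torch.Tensor:
        ctx.experts = experts
        ctx.save_for_backward(inputs)
        with ThreadPoolExecutor(max_workers=len(experts)) as pool:
            # remote_forward is an assumed RPC stub: Tensor -> Tensor
            outputs = list(pool.map(lambda expert: expert.remote_forward(inputs), experts))
        return torch.stack(outputs)  # [num_experts, batch_size, hid_dim]

    @staticmethod
    def backward(ctx, grad_outputs: torch.Tensor):
        (inputs,) = ctx.saved_tensors
        with ThreadPoolExecutor(max_workers=len(ctx.experts)) as pool:
            # remote_backward is an assumed RPC stub: (inputs, grad_output) -> grad w.r.t. inputs
            grads = list(pool.map(lambda pair: pair[0].remote_backward(inputs, pair[1]),
                                  zip(ctx.experts, grad_outputs)))
        return (sum(grads), *[None] * len(ctx.experts))
```

The real implementation would also need to handle failed or slow experts; the point here is only that a single autograd.Function can replace the per-expert parallel autograd utility.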
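For the richer moe.py test, a rough pytest skeleton; `make_dht_swarm` and `spawn_local_experts` are hypothetical helpers standing in for whatever fixtures the test suite ends up using, and the `RemoteMixtureOfExperts` arguments are illustrative only:

```python
import torch

def test_moe_with_several_dht_nodes():
    dht_nodes = make_dht_swarm(num_peers=4)                                  # hypothetical helper
    spawn_local_experts(dht_nodes, grid_size=(4, 4), uid_prefix='expert.')   # hypothetical helper

    moe = RemoteMixtureOfExperts(dht=dht_nodes[0], grid_size=(4, 4),
                                 uid_prefix='expert.', k_best=3, in_features=16)

    outputs = moe(torch.randn(8, 16))
    assert outputs.shape[0] == 8
    outputs.sum().backward()  # gradients should flow through the selected remote experts
```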
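For the first_k_active item, the intent seems to be: once enough active experts have been found, cancel the DHT queries that are still in flight instead of letting them run to completion. A minimal asyncio sketch of that pattern (the `lookup` coroutine is a stand-in for the real per-uid DHT query):

```python
import asyncio
from typing import Awaitable, Callable, List, Optional

async def first_k_active(uids: List[str], k: int,
                         lookup: Callable[[str], Awaitable[Optional[str]]]) -> List[str]:
    """Launch one lookup per uid and return up to k uids that turn out to be active,
    cancelling every query that is still pending once we have enough."""
    tasks = {asyncio.create_task(lookup(uid)): uid for uid in uids}
    pending = set(tasks)
    active: List[str] = []
    try:
        while pending and len(active) < k:
            done, pending = await asyncio.wait(pending, return_when=asyncio.FIRST_COMPLETED)
            for task in done:
                if task.result() is not None:   # lookup returns None for inactive/missing uids
                    active.append(tasks[task])
    finally:
        for task in pending:                    # the proposed change: don't leave
            task.cancel()                       # unused queries running in the background
    return active[:k]
```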
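Finally, the grace-period idea: before (re-)declaring a prefix, check when it was last refreshed and skip it if another peer updated it recently. A rough sketch, with `dht.get_last_updated` and `dht.store` as hypothetical stand-ins for the real DHT calls and an arbitrary grace period:

```python
import time
from typing import List

GRACE_PERIOD = 30.0  # seconds; assumed value, would need tuning

def declare_expert_prefixes(dht, expert_uid: str, endpoint: str) -> List[str]:
    """Declare an expert uid and its prefixes, skipping prefixes that some other peer
    has already refreshed within the grace period."""
    declared = []
    parts = expert_uid.split('.')
    for length in range(len(parts), 0, -1):
        prefix = '.'.join(parts[:length])
        # short prefixes are shared by many experts, so they are the ones most likely
        # to have been refreshed by other peers already
        last_updated = dht.get_last_updated(prefix)            # hypothetical DHT call
        if last_updated is not None and time.time() - last_updated < GRACE_PERIOD:
            continue
        dht.store(prefix, endpoint)                            # hypothetical DHT call
        declared.append(prefix)
    return declared
```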