
1.1.0: libp2p support in hivemind.moe, PowerSGD training, persistent P2P identity

Released by @mryab on 20 Jun 19:02 · 87 commits to release/1.1.x since this release

Release highlights

  • Starting from this release, all components of hivemind.moe use libp2p for communication. This brings the same benefits that averaging and the DHT gained earlier (simplified NAT traversal, better performance, etc.) and marks the end of gRPC usage in hivemind. The user-facing API is mostly unchanged: code that relies on abstractions like RemoteMixtureOfExperts does not need to be modified, although training across releases is not possible.
  • If you need another way to reduce the network footprint during training with hivemind.Optimizer, you can now use PowerSGD for gradient averaging. This method decreases communication costs by factorizing the model's gradients and aggregating the factorized versions. To enable it, pass grad_averager_factory=partial(PowerSGDGradientAverager, averager_rank=RANK) when creating an instance of Optimizer. Here, RANK denotes the factorization rank; lower values give higher compression at the cost of reconstruction quality (see the first sketch after this list).
  • Similarly to hivemind-server, it is now possible to launch a dedicated DHT instance with a command-line tool. The tool, available as hivemind-dht, quickly creates a lightweight peer that mostly serves to connect other peers to the DHT (for example, on a publicly reachable server) or to replicate DHT metadata.
  • Previously, restarting a libp2p instance generated a new P2P identity and thus a new multiaddress, so the same command could not be reused to connect to a peer across repeated launches, which is common during debugging. Now you can store a peer's identity in a file and reuse it between launches by specifying the --identity_path argument, available both in the ALBERT example and in the CLI tools of hivemind (see the second sketch after this list).
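
Below is a minimal sketch of the PowerSGD setup described above. Only the grad_averager_factory argument and averager_rank come from this release; the model, run_id, learning rate, batch sizes, and the PowerSGDGradientAverager import path are illustrative assumptions.

```python
# Minimal sketch: enabling PowerSGD gradient averaging in hivemind.Optimizer.
# Placeholder model, run_id and batch sizes; only grad_averager_factory /
# averager_rank are documented in this release.
from functools import partial

import torch
import hivemind
from hivemind.optim.power_sgd_averager import PowerSGDGradientAverager  # import path may differ

model = torch.nn.Linear(512, 10)   # any torch.nn.Module
dht = hivemind.DHT(start=True)     # in a real run, pass initial_peers=[...] to join other peers

opt = hivemind.Optimizer(
    dht=dht,
    run_id="powersgd_demo",                          # hypothetical experiment name
    params=model.parameters(),
    optimizer=partial(torch.optim.Adam, lr=1e-3),
    target_batch_size=4096,
    batch_size_per_step=32,
    # rank-32 factorization: lower rank = stronger compression, lossier gradients
    grad_averager_factory=partial(PowerSGDGradientAverager, averager_rank=32),
)
```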
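
The second sketch launches a lightweight DHT peer with a persistent identity from Python, roughly what hivemind-dht does from the command line. Passing identity_path and host_maddrs through hivemind.DHT is an assumption about how the library forwards P2P arguments; the release notes themselves only document the --identity_path CLI flag.

```python
# Minimal sketch: a bootstrap DHT peer whose libp2p identity survives restarts.
import hivemind

dht = hivemind.DHT(
    host_maddrs=["/ip4/0.0.0.0/tcp/31337"],  # hypothetical listening address
    identity_path="peer_identity.key",       # reused across restarts -> stable multiaddress
    start=True,
)

# Other peers can join by passing these addresses as initial_peers / --initial_peers.
print([str(addr) for addr in dht.get_visible_maddrs()])
```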

Deprecations

  • The parameters quic, use_relay_hop, and use_relay_discovery of hivemind.P2P are deprecated following the update of the libp2p dependency in the p2p daemon. They will be removed in the 1.2.0 release of hivemind.

What's Changed

  • Pin pytest version in requirements-dev, use file_descriptor in tests by @justheuristic in #454
  • Pin isort version, bump black by @mryab in #456
  • Clean compression/__init__.py by @borzunov in #460
  • Do not use offload_optimizer with local_updates by default by @foksly in #462
  • Add PowerSGD for compressed gradient averaging by @artek0chumak in #432
  • Bump Black to 22.3.0, pin Golang version by @mryab in #466
  • use_local_updates in optimizer by @justheuristic in #468
  • Update p2pd to v0.3.8 (and libp2p to v0.17.0) by @borzunov in #469
  • Generate new private key if identity file doesn't exist by @borzunov in #473
  • Convert hivemind.server to libp2p backend by @GreenFatGuy in #470
  • Implement a CLI for hivemind.DHT by @mryab in #465
  • Use PeerID exclusively to address MoE experts by @justheuristic in #479
  • Remove deprecated code in hivemind.optim and hivemind.averaging before the 1.1.0 release by @mryab in #480
  • Fix shape validation in GradientAverager by @mryab in #481
  • Change expiration time in declare_experts, fix update_period discrepancy by @justheuristic in #482
  • Add identity_path option for MoE.Server runners by @GreenFatGuy in #484
  • Simplify ExpertBackend interface by @justheuristic in #483
  • Clean up imports, remove unused utils by @mryab in #486
  • finish renaming experts -> module_backends in ConnectionHandler by @justheuristic in #487
  • Remove gRPC services and grpcio requirement by @mryab in #485

New Contributors

Full Changelog: 1.0.1...1.1.0