Module zebra_network::peer_set


Abstractions that represent “the rest of the network”.

§Implementation

The PeerSet implementation is adapted from the one in tower::Balance.

As described in Tower’s documentation, it:

  • Distributes requests across inner services using the Power of Two Choices.

As described in the Finagle Guide:

The algorithm randomly picks two services from the set of ready endpoints and selects the least loaded of the two. By repeatedly using this strategy, we can expect a manageable upper bound on the maximum load of any server.

The maximum load variance between any two servers is bound by ln(ln(n)) where n is the number of servers in the cluster.
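
As a concrete illustration, here is a minimal sketch of that selection step, written against the rand crate (0.8 API) rather than the actual tower balancer; the ReadyPeer type and its load field are hypothetical stand-ins for the load metadata a real balancer would track.

```rust
use rand::Rng;

/// Hypothetical stand-in for a ready peer and its load estimate
/// (for example, the number of requests currently in flight).
struct ReadyPeer {
    name: &'static str,
    load: usize,
}

/// Power of Two Choices: pick two distinct ready peers at random and
/// return the index of the less loaded one.
fn power_of_two_choices(peers: &[ReadyPeer]) -> Option<usize> {
    let mut rng = rand::thread_rng();
    match peers.len() {
        0 => None,
        1 => Some(0),
        n => {
            let first = rng.gen_range(0..n);
            // Redraw until the second candidate is distinct from the first.
            let mut second = rng.gen_range(0..n);
            while second == first {
                second = rng.gen_range(0..n);
            }
            // Route the request to the less loaded of the two candidates.
            Some(if peers[first].load <= peers[second].load {
                first
            } else {
                second
            })
        }
    }
}

fn main() {
    let peers = vec![
        ReadyPeer { name: "peer-a", load: 3 },
        ReadyPeer { name: "peer-b", load: 1 },
        ReadyPeer { name: "peer-c", load: 7 },
    ];
    if let Some(index) = power_of_two_choices(&peers) {
        println!("routing request to {}", peers[index].name);
    }
}
```

Tower's p2c balancer applies the same idea, using its own load estimate for each inner service instead of the toy load field above.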

The Power of Two Choices should work well for many network requests, but not all of them. Some requests should only be made to a subset of connected peers. For example, a request for a particular inventory item should be made to a peer that has recently advertised that inventory hash. Other requests require broadcasts, such as transaction diffusion.

Implementing this specialized routing logic inside the PeerSet – so that it continues to abstract away “the rest of the network” into one endpoint – is not a problem, as the PeerSet can simply maintain more information on its peers and route requests appropriately. However, there is a problem with maintaining accurate backpressure information, because the Service trait requires that service readiness is independent of the data in the request.
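
For reference, this is the tower::Service trait as defined in tower 0.4, reproduced with added comments: poll_ready() only receives the task context, never the request, so readiness cannot be conditioned on the contents of a particular request.

```rust
use std::future::Future;
use std::task::{Context, Poll};

// The tower::Service trait (as in tower 0.4), reproduced for reference.
pub trait Service<Request> {
    type Response;
    type Error;
    type Future: Future<Output = Result<Self::Response, Self::Error>>;

    /// Returns `Poll::Ready(Ok(()))` when the service can accept a request.
    /// Note that there is no access to the request here.
    fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll<Result<(), Self::Error>>;

    /// Processes the request; should only be called after `poll_ready` succeeds.
    fn call(&mut self, req: Request) -> Self::Future;
}
```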

For this reason, in the future, this code will probably be refactored to address this backpressure mismatch. One possibility is to refactor the code so that one entity holds and maintains the peer set and metadata on the peers, and each “backpressure category” of request is assigned to a different Service impl with a specialized poll_ready() implementation. Another, less elegant, solution (which might be useful as an intermediate step for the inventory case) is to provide a way to borrow a particular backing service, say by address.
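
A minimal sketch of the first idea, assuming the tower crate and entirely hypothetical types (SharedPeers, BroadcastRequests, InventoryRequests): both services borrow the same shared peer state, but each reports readiness for its own backpressure category only.

```rust
use std::future::{ready, Ready};
use std::sync::{Arc, Mutex};
use std::task::{Context, Poll};
use tower::Service;

/// Hypothetical shared state owned by the single entity that maintains the
/// peer set and the metadata on peers.
#[derive(Default)]
struct SharedPeers {
    /// Number of peers ready for any request.
    ready_peers: usize,
    /// Number of ready peers that recently advertised the relevant inventory.
    ready_peers_with_inventory: usize,
}

/// Hypothetical request types for two different backpressure categories.
struct BroadcastRequest;
struct InventoryRequest;

/// Broadcast-style requests are ready whenever any peer is ready.
struct BroadcastRequests(Arc<Mutex<SharedPeers>>);

/// Inventory requests are only ready when a peer that advertised the
/// inventory is ready, so their backpressure is reported separately.
struct InventoryRequests(Arc<Mutex<SharedPeers>>);

impl Service<BroadcastRequest> for BroadcastRequests {
    type Response = ();
    type Error = ();
    type Future = Ready<Result<(), ()>>;

    fn poll_ready(&mut self, _cx: &mut Context<'_>) -> Poll<Result<(), Self::Error>> {
        // A real implementation must also register _cx.waker() so the task
        // is woken when a peer becomes ready; omitted here for brevity.
        if self.0.lock().unwrap().ready_peers > 0 {
            Poll::Ready(Ok(()))
        } else {
            Poll::Pending
        }
    }

    fn call(&mut self, _request: BroadcastRequest) -> Self::Future {
        ready(Ok(()))
    }
}

impl Service<InventoryRequest> for InventoryRequests {
    type Response = ();
    type Error = ();
    type Future = Ready<Result<(), ()>>;

    fn poll_ready(&mut self, _cx: &mut Context<'_>) -> Poll<Result<(), Self::Error>> {
        // Same waker caveat as above; readiness is specialized to this category.
        if self.0.lock().unwrap().ready_peers_with_inventory > 0 {
            Poll::Ready(Ok(()))
        } else {
            Poll::Pending
        }
    }

    fn call(&mut self, _request: InventoryRequest) -> Self::Future {
        ready(Ok(()))
    }
}
```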

§Behavior During Network Upgrades

ZIP-201 specifies peer behavior during network upgrades:

With scheduled network upgrades, at the activation height, nodes on each consensus branch should disconnect from nodes on other consensus branches and only accept new incoming connections from nodes on the same consensus branch.

Zebra handles this with the help of MinimumPeerVersion, which determines the minimum peer protocol version to accept based on the current best chain tip height. The minimum version is therefore automatically increased when the block height reaches a network upgrade’s activation height. The helper type is then used to reject handshakes with, and disconnect from, peers whose protocol version is below that minimum.
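
A minimal sketch of the idea behind MinimumPeerVersion (not the actual Zebra type): the upgrade heights and protocol version numbers below are placeholders, not real Zcash consensus values.

```rust
/// (activation height, minimum protocol version required from that height on).
/// Placeholder values for illustration, ordered by activation height.
const UPGRADES: &[(u32, u32)] = &[
    (0, 170_002),         // hypothetical base version
    (1_000_000, 170_100), // hypothetical upgrade A
    (2_000_000, 170_120), // hypothetical upgrade B
];

/// Minimum peer protocol version to accept at the given best chain tip height.
/// The result automatically increases once the tip reaches an activation height.
fn min_peer_version(tip_height: u32) -> u32 {
    UPGRADES
        .iter()
        .take_while(|(activation, _)| *activation <= tip_height)
        .map(|(_, version)| *version)
        .last()
        .unwrap_or(UPGRADES[0].1)
}

fn main() {
    assert_eq!(min_peer_version(999_999), 170_002);
    assert_eq!(min_peer_version(1_000_000), 170_100);
    println!("minimum version at height 2_500_000: {}", min_peer_version(2_500_000));
}
```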

§Network Coalescence

ZIP-201 also specifies how Zcashd behaves leading up to an activation height. Since Zcashd limits the number of connections to at most eight peers, it will gradually migrate its connections to up-to-date peers as it approaches the activation height.

The motivation for this behavior is to avoid an abrupt partitioning of the network, which can leave peers isolated and increase the chance of an eclipse attack on some of them.

Zebra does not gradually migrate its peers as it approaches an activation height. This is because Zebra can connect to up to 75 peers by default, as can be seen in Config::default. Since this is much larger than the 8 peers Zcashd connects to, an eclipse attack is much more costly to execute, and the probability of an abrupt network partition that isolates peers is lower.

Even if a Zebra node is manually configured to connect to a smaller number of peers, the AddressBook is configured to hold a large number of peer addresses (MAX_ADDRS_IN_ADDRESS_BOOK). Since the address book prioritizes addresses it trusts (like those that it has successfully connected to before), the node should be able to recover and rejoin the network by itself, as long as the address book is populated with enough entries.

§Structs

  • CancelClientWork: A signal sent by the PeerSet to cancel a Client’s current request or response.
  • MorePeers: A signal sent by the PeerSet when it has no ready peers, and gets a request from Zebra.
  • PeerSet: A tower::Service that abstractly represents “the rest of the network”.