fix: route getdata requests to the peer that sent the inv #527

xdustinface merged 1 commit into v0.42-dev

Conversation
📝 Walkthrough

Adds a per-peer send path: a new `SendMessageToPeer` variant of `NetworkRequest` routes `getdata` requests to the peer that sent the `inv`, falling back to distributed send if that peer disconnects.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    actor Requester
    participant NetworkManager
    participant PeerConn as PeerConnection
    participant Distrib as DistributedSend
    Requester->>NetworkManager: NetworkRequest::SendMessageToPeer(msg, addr)
    NetworkManager->>PeerConn: check connection(addr)
    alt peer connected
        NetworkManager->>PeerConn: send(msg) (spawned task)
        PeerConn-->>NetworkManager: send result / ok or error
    else peer disconnected
        NetworkManager->>Distrib: DistributedSend(msg)
        Distrib-->>NetworkManager: broadcast result
    end
    NetworkManager-->>Requester: ack / log outcome
```
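The routing described above can be sketched in plain Rust. Note that `NetworkMessage`, `NetworkRequest`, and the `route` helper below are simplified hypothetical stand-ins for the real dash-spv types, not the actual API:

```rust
use std::net::SocketAddr;

// Simplified stand-ins for the real dash-spv types (hypothetical).
#[derive(Debug, Clone, PartialEq)]
enum NetworkMessage {
    GetData(Vec<[u8; 32]>), // inventory hashes being requested
}

#[derive(Debug, Clone, PartialEq)]
enum NetworkRequest {
    // Existing broadcast / round-robin path.
    SendMessage(NetworkMessage),
    // New targeted path: deliver to the peer that announced the inv.
    SendMessageToPeer(NetworkMessage, SocketAddr),
}

// Routing decision mirroring the diagram: a targeted request goes directly
// to the peer while it is connected, otherwise it falls back to the
// distributed send; plain broadcasts always take the distributed path.
fn route(request: NetworkRequest, connected: &[SocketAddr]) -> String {
    match request {
        NetworkRequest::SendMessageToPeer(_, addr) if connected.contains(&addr) => {
            format!("direct:{addr}")
        }
        _ => "distributed".to_string(),
    }
}
```

The string return value is only there to make the decision observable; the real manager spawns a send task instead.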
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
dash-spv/src/network/mod.rs (1)
45-90: ⚠️ Potential issue | 🟠 Major: Please add coverage for the new targeted request path.
This introduces new routing behavior, but the provided changes only update an unrelated pipeline test. At minimum, I'd add one unit test proving `request_inventory()` enqueues `NetworkRequest::SendMessageToPeer`, and one network-level test covering the fallback path when the target peer disappears. As per coding guidelines, "Write unit tests for new functionality in Rust code" and "Write integration tests for network operations in Rust code".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dash-spv/src/network/mod.rs` around lines 45 - 90, Add tests covering the new targeted request path: (1) a unit test for RequestSender::request_inventory that creates an mpsc::unbounded_channel, constructs a RequestSender, calls request_inventory(..., peer_address) and asserts the receiver gets NetworkRequest::SendMessageToPeer(NetworkMessage::GetData(...), peer_address) (use pattern matching to validate message payload and address); (2) a network-level/integration test that simulates the target peer disappearing (e.g., drop or unregister the peer in the test network harness) and then verifies the fallback behavior (expect either a broadcast NetworkRequest::SendMessage(NetworkMessage::GetData(...)) or whatever fallback SendMessage variant your code should emit) by observing messages on the network receiver; place the unit test alongside other RequestSender tests and the integration test in the network tests suite so both send_message_to_peer and request_inventory flows are covered.
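The suggested unit test could look roughly like the sketch below, using std's synchronous channel in place of tokio's unbounded channel. `RequestSender`, `NetworkRequest`, and `NetworkMessage` here are simplified hypothetical stand-ins, not the real dash-spv definitions:

```rust
use std::net::SocketAddr;
use std::sync::mpsc;

// Hypothetical stand-ins for the dash-spv types; the real RequestSender
// uses tokio's unbounded channel, modeled here with std::sync::mpsc.
#[derive(Debug, PartialEq)]
enum NetworkMessage {
    GetData(Vec<u32>), // simplified inventory payload
}

#[derive(Debug, PartialEq)]
enum NetworkRequest {
    SendMessageToPeer(NetworkMessage, SocketAddr),
}

struct RequestSender {
    tx: mpsc::Sender<NetworkRequest>,
}

impl RequestSender {
    // The targeted variant under test: enqueue a GetData for `items`,
    // addressed to the peer that announced them.
    fn request_inventory(&self, items: Vec<u32>, peer: SocketAddr) {
        self.tx
            .send(NetworkRequest::SendMessageToPeer(
                NetworkMessage::GetData(items),
                peer,
            ))
            .expect("receiver dropped");
    }
}
```

The assertion then pattern-matches the received request against the expected variant and peer address.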
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@dash-spv/src/network/manager.rs`:
- Around line 794-807: The SendMessageToPeer branch only falls back when
get_peer() returns None, but if the peer disconnects during
send_message_to_peer() the error is logged and the message is dropped; change
the logic in the NetworkRequest::SendMessageToPeer handler (the async block that
calls get_peer, send_message_to_peer, and send_distributed) to treat
send_message_to_peer() errors caused by peer disconnect as a trigger to call
send_distributed(msg). Specifically, after calling
this.send_message_to_peer(&peer_address, &peer, msg).await, inspect the Err
variant and if it indicates the peer disconnected (or any send failure that
should cause redistribution) call this.send_distributed(msg).await before
logging final failure; ensure any ChainLock-related state like
requested_chainlocks is updated/cleared as currently done when using distributed
sends so the hash is not left stuck.
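A minimal sketch of the suggested decision, with a hypothetical `SendError` type standing in for the crate's real error enum:

```rust
// Hypothetical error type: only the disconnect case should trigger the
// distributed-send fallback; other failures are surfaced to the log.
#[derive(Debug, PartialEq)]
enum SendError {
    PeerDisconnected,
    Serialization,
}

// Returns true when a failed targeted send should be retried via
// send_distributed() rather than merely logged and dropped.
fn should_redistribute(result: &Result<(), SendError>) -> bool {
    matches!(result, Err(SendError::PeerDisconnected))
}
```

In the real handler this predicate would gate a call to `this.send_distributed(msg).await` after `send_message_to_peer` returns an error.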
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: c56685ec-ebf8-4a51-bc6a-81349a010f27
📒 Files selected for processing (5)
- dash-spv/src/network/manager.rs
- dash-spv/src/network/mod.rs
- dash-spv/src/sync/chainlock/sync_manager.rs
- dash-spv/src/sync/filters/pipeline.rs
- dash-spv/src/sync/instantsend/sync_manager.rs
We currently just randomly distribute `getdata` requests, which leads to issues where we request inventory `x`, which we received from peer `A`, from peer `B`, which doesn't have it (yet). This PR:

- adds a `SendMessageToPeer` variant to `NetworkRequest` and adjusts `request_inventory` in `RequestSender` to accept a `peer`. The request processing falls back to distributed send if the target peer disconnects.
- adjusts the `instantsend` and `chainlock` `inv`->`getdata` flows.
Force-pushed from b7d2809 to 7b304f9
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@dash-spv/src/network/manager.rs`:
- Around line 797-817: The retry path currently calls send_distributed after
send_message_to_peer fails or the peer is missing, but send_distributed rebuilds
candidates from the full pool and can pick the same stale peer; add a new helper
(e.g., build_distributed_candidates_excluding) that mirrors how send_distributed
builds its candidate list but filters out the original peer_address before
selection, then update the failure branches (the Err branch in the
send_message_to_peer match and the None branch) to call this new helper or a
wrapper that invokes send_distributed with the filtered candidate list so the
failed peer is excluded from the fallback retry.
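A sketch of the suggested helper; the name `build_distributed_candidates_excluding` comes from the comment above, and representing the candidate pool as a slice of socket addresses is an assumption:

```rust
use std::net::SocketAddr;

// Build the distributed-send candidate list while excluding the peer the
// targeted send already failed on, so the fallback retry cannot pick the
// same stale connection again.
fn build_distributed_candidates_excluding(
    pool: &[SocketAddr],
    exclude: SocketAddr,
) -> Vec<SocketAddr> {
    pool.iter().copied().filter(|addr| *addr != exclude).collect()
}
```

The failure branches of the `SendMessageToPeer` handler would then pass this filtered list into the distributed-send selection.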
📒 Files selected for processing (5)
dash-spv/src/network/manager.rsdash-spv/src/network/mod.rsdash-spv/src/sync/chainlock/sync_manager.rsdash-spv/src/sync/filters/pipeline.rsdash-spv/src/sync/instantsend/sync_manager.rs
🚧 Files skipped from review as they are similar to previous changes (2)
- dash-spv/src/sync/filters/pipeline.rs
- dash-spv/src/sync/chainlock/sync_manager.rs
Codecov Report: ❌ Patch coverage is

Additional details and impacted files

```
@@             Coverage Diff             @@
##           v0.42-dev     #527    +/-  ##
=============================================
- Coverage      66.92%   66.86%   -0.07%
=============================================
  Files            313      313
  Lines          64729    64762      +33
=============================================
- Hits           43322    43303      -19
- Misses         21407    21459      +52
=============================================
```

*This pull request uses carry forward flags.*
```rust
                "Target peer {} disconnected, falling back to distributed send",
                peer_address
            );
            this.send_distributed(fallback_msg).await
```
In the SendMessage branch we are using the following logic:
```rust
let result = match &msg {
    // Distribute across peers for parallel sync
    NetworkMessage::GetCFHeaders(_)
    | NetworkMessage::GetCFilters(_)
    | NetworkMessage::GetData(_)
    | NetworkMessage::GetMnListD(_)
    | NetworkMessage::GetQRInfo(_)
    | NetworkMessage::GetHeaders(_)
    | NetworkMessage::GetHeaders2(_) => {
        this.send_distributed(msg).await
    }
    _ => {
        this.send_to_single_peer(msg).await
    }
};
```

More specifically, the `Inv` message triggers the `send_to_single_peer` path, but you are calling `send_distributed`.
Yeah, but there is no reason for the other messages to be sent "to a single peer" instead of following the same round-robin distribution. There are still a few things that I'm working on cleaning up here. It's okay like this for now.
I'm not sure this is correct either... if we ask to send to a peer, but the peer doesn't exist, it seems weird to send to the network.
We currently just randomly distribute `getdata` requests, which leads to issues where we request inventory `x`, which we received from peer `A`, from peer `B`, which doesn't have it (yet). This is especially a problem in the coming mempool support PR.

This PR:

- adds a `SendToPeer` variant to `NetworkRequest` and adjusts `request_inventory` in `RequestSender` to accept a `peer`. The request processing falls back to distributed send if the target peer disconnects.
- adjusts the `instantsend` and `chainlock` `inv`->`getdata` flows.

Summary by CodeRabbit
New Features
Tests