Introduce Padding for Payment and Message Blinded Tlvs #3177
base: main
Conversation
Notes:
Codecov Report. Attention: Patch coverage is …

Additional details and impacted files:

```
@@ Coverage Diff @@
##             main    #3177      +/-   ##
==========================================
+ Coverage   89.62%   89.95%   +0.32%
==========================================
  Files         127      127
  Lines      103517   106639    +3122
  Branches   103517   106639    +3122
==========================================
+ Hits        92780    95926    +3146
- Misses       8041     8072      +31
+ Partials     2696     2641      -55
```

View full report in Codecov by Sentry.
Updated from pr3177.01 to pr3177.02 (diff): Changes:
Updated from pr3177.02 to pr3177.03 (diff): Changes:
Diff post rebase:

```diff
--- a/lightning/src/ln/offers_tests.rs
+++ b/lightning/src/ln/offers_tests.rs
@@ -1789,7 +1789,7 @@ fn test_blinded_path_padding() {
 	let onion_message = charlie.onion_messenger.next_onion_message_for_peer(david_id).unwrap();
 	david.onion_messenger.handle_onion_message(&charlie_id, &onion_message);
-	let invoice = extract_invoice(david, &onion_message);
+	let (invoice, _) = extract_invoice(david, &onion_message);
 	assert_eq!(invoice, expected_invoice);
```
Updated from pr3177.03 to pr3177.04 (diff):
Update: From pr3177.08 to pr3177.09 (diff): Changes based on @jkczyz's feedback:
```diff
@@ -1772,6 +1772,7 @@ fn packet_payloads_and_keys<T: OnionMessageContents, S: secp256k1::Signing + sec
 	if let Some(ss) = prev_control_tlvs_ss.take() {
 		payloads.push((Payload::Forward(ForwardControlTlvs::Unblinded(
 			ForwardTlvs {
+				padding: None,
```
@TheBlueMatt One thing that I'm unsure of is whether we only need to pad the blinded path or if we should also pad the unblinded portion of the onion.
LGTM modulo one minor comment. Feel free to squash after fixing it.
```rust
pub(crate) struct ForwardTlvs {
	/// The padding data used to make all packets of a blinded path the same size
	pub(crate) padding: Option<Padding>,
```
Why are we storing a `Padding` here? It's not a real "thing", so it really seems like we should handle it purely at the serialization layer (i.e., in `ForwardTlvs::write`/`read`) rather than exposing it in `ForwardTlvs` across the crate. This would substantially reduce the size of this patch.
Hmm... there was some earlier discussion around this approach and it was implemented here: b1436f8. The shortcoming was that you could use `WithPadding` to write an invalid TLV stream having two type 1 TLV records. Maybe we don't care so long as we always use it right?
Hmm, that design constraint assumes that we want to pad to make each hop equal (by looking at the max hop length), but I'm not sure that's what we want anyway. Doing so may leak something about the contents still - if we assume the last hop is the max, then we leak the type of context (okay, generally fine), but in the future may leak some info based on if some field is set or not. Instead, we could have a simple `PADDING_TARGET_LENGTH` constant and always pad to that length in both `ForwardTlvs::write` and `ReceiveTlvs::write` (with appropriate debug assertions that by default we never have hops longer than that).

This may not be appropriate for things that go in QR codes, but in those cases we may well not want to pad at all because we're super space constrained (and we'll want to just manually verify that we are always writing paths where forward hops are always the same length and receive hops are always the same length). But there it seems like we should use `WriteableArgs` rather than `Writeable`?
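The constant-target idea above can be sketched as follows; the constant value, the helper name, and the zero-byte fill are illustrative assumptions, not LDK's actual encoding:

```rust
use std::io::{self, Write};

// Sketch of the PADDING_TARGET_LENGTH idea: every hop's TLVs are padded
// out to one fixed length at write time.
const PADDING_TARGET_LENGTH: usize = 64;

fn write_with_target_padding<W: Write>(tlvs: &[u8], w: &mut W) -> io::Result<()> {
    // "appropriate debug assertions that by default we never have hops
    // longer than that"
    debug_assert!(tlvs.len() <= PADDING_TARGET_LENGTH);
    w.write_all(tlvs)?;
    // In the real encoding this would be a type-1 padding TLV record;
    // zero bytes stand in for it here.
    let pad = PADDING_TARGET_LENGTH.saturating_sub(tlvs.len());
    w.write_all(&vec![0u8; pad])
}

fn main() -> io::Result<()> {
    let mut buf = Vec::new();
    write_with_target_padding(&[1, 2, 3], &mut buf)?;
    // Every hop serializes to the same target length.
    assert_eq!(buf.len(), PADDING_TARGET_LENGTH);
    Ok(())
}
```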
> Hmm, that design constraint assumes that we want to pad to make each hop equal (by looking at the max hop length), but I'm not sure that's what we want anyway. Doing so may leak something about the contents still - if we assume the last hop is the max, then we leak the type of context (okay, generally fine), but in the future may leak some info based on if some field is set or not. Instead, we could have a simple `PADDING_TARGET_LENGTH` constant and always pad to that length in both `ForwardTlvs::write` and `ReceiveTlvs::write` (with appropriate debug assertions that by default we never have hops longer than that).
My thoughts regarding padding were more in the context of dummy hops in order to hide which hop is the recipient, not necessarily to prevent leaking the intended use from a forwarder.
Regarding `PADDING_TARGET_LENGTH`, we couldn't use that for `MessageContext::Custom` since its size is arbitrary.
> This may not be appropriate for things that go in QR codes, but in those cases we may well not want to pad at all because we're super space constrained (and we'll want to just manually verify that we are always writing paths where forward hops are always the same length and receive hops are always the same length). But there it seems like we should `WriteableArgs` rather than `Writeable`?
Just so we have some concrete numbers, for an `Offer` using compact blinded paths, we use 10 bytes per forward hop and 28 bytes for the receive hop. For non-compact blinded paths, the forward hops use 35 bytes each. So we would lose much of the savings by padding each to 28 bytes in the compact case.
Regarding `WriteableArgs`, we don't have such a trait (yet) -- only `ReadableArgs`. For use here, the length would need to be piped through `construct_blinded_hops` to `encrypt_payload`, and we'd need to change or have a variation of `ChaChaPolyWriteAdapter` to implement (and wrap a) `WriteableArgs`.
Though, I'm not sure I quite understand what you mean by "manually verify ..." above. Could you give a more concrete example with a blinded path for an offer?
Idle thought: I recall some conversations about concatenating blinded paths. Is it possible to take a blinded path from an offer and concatenate dummy hops to the end of it? That would seem useful when requesting an invoice to help obscure who the final recipient is, assuming all hops were equally padded.
> My thoughts regarding padding were more in the context of dummy hops in order to hide which hop is the recipient, not necessarily to prevent leaking the intended use from a forwarder.
Wait, this PR is just about padding to make individual hops equal-sized, right? Anyone who sees a BP can see how many hops it is.
> Regarding PADDING_TARGET_LENGTH, we couldn't use that for MessageContext::Custom since its size is arbitrary.
Right, we'd want to just pad to some multiple of a round number.
> Just so we have some concrete numbers, for an Offer using compact blinded paths, we use 10 bytes per forward hop and 28 bytes for the receive hop. For non-compact blinded paths, the forward hops use 35 bytes each. So we would lose much of the savings by padding each to 28 bytes in the compact case.
Right, I don't think we can waste that much.
> Regarding WriteableArgs, we don't have such a trait (yet) -- only ReadableArgs. For use here, the length would need to be piped through construct_blinded_hops to encrypt_payload, and we'd need to change or have a variation of ChaChaPolyWriteAdapter to implement (and wrap a) WriteableArgs.
Mmm, yea, that would be a bit annoying code quantity. Another option would be a `WithPadding` wrapper that wraps a `Forward`/`ReceiveTlvs` and specifies the amount of padding we want. We could even drop the direct `Writeable` on `{Forward,Receive}Tlvs` at that point. I think my real complaint here is that the API ends up super surprising - a user might build a `ReceiveTlvs` object manually, add (or not) their own `Padding` and expect that to be used, but instead LDK silently replaces whatever the user told us to do with what LDK thinks it should do. I think LDK is probably right and should make that decision, but having it in the API is really weird.
> Though, I'm not sure I quite understand what you mean by "manually verify ..." above. Could you give a more concrete example with a blinded path for an offer?
As in test outside of test cases but by hand.
> Idle thought: I recall some conversations about concatenating blinded paths. Is it possible to take a blinded path from an offer and concatenate dummy hops to the end of it? That would seem useful when requesting an invoice to help obscure who the final recipient is, assuming all hops were equally padded.
Not sure it's related; if we want to hide the hop count we can just add dummy hops, and this doesn't hide the hop count.
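The "pad to some multiple of a round number" suggestion above is just a round-up on the serialized length; a minimal sketch (the helper name is hypothetical):

```rust
// Hypothetical helper for "pad to some multiple of a round number":
// round a serialized length up to the nearest multiple of `p`.
fn round_up_to_multiple(len: usize, p: usize) -> usize {
    (len + p - 1) / p * p
}

fn main() {
    // With p = 50: a 35-byte hop pads to 50, a 53-byte hop to 100, so
    // arbitrary-sized contexts still land on a small set of sizes.
    assert_eq!(round_up_to_multiple(35, 50), 50);
    assert_eq!(round_up_to_multiple(53, 50), 100);
    // An exact multiple needs no padding at all.
    assert_eq!(round_up_to_multiple(128, 32), 128);
}
```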
> Idle thought: I recall some conversations about concatenating blinded paths. Is it possible to take a blinded path from an offer and concatenate dummy hops to the end of it? That would seem useful when requesting an invoice to help obscure who the final recipient is, assuming all hops were equally padded.
I don't think the sender can add dummy hops, per the spec: "the recipient needs to fully validate each dummy hop's onion payload to detect tampering (and must ensure that these hops have been used and not truncated)."
Ah, right. I was discussing this a bit with @valentinewallace and we concluded there's maybe a reason to do it for onion messages (but not clear if you can concatenate blinded paths without the introduction point having helped build the first BP?), but certainly for payments we wouldn't as it reduces your information available on failures.
Yeah, for onion messages if the sender concatenates a blinded path from themselves to the recipient's intro node using `next_blinding_override`, then said intro node can't tell that they're the intro node. Adding dummy hops doesn't affect whether forwarding nodes can tell where they are along the path, IIUC.
> We could even drop the direct `Writeable` on `{Forward,Receive}Tlvs` at that point.

Not sure if we could drop that because we need it to determine the serialized length without the padding. Maybe it would be implemented on those tupled with `Option<u64>` to determine whether or not to include padding?

Could have a method that calculates the serialized length without implementing the `Writeable` trait, I suppose.
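The "method that calculates the serialized length" could, for instance, be built on a counting writer. This is only a sketch under that assumption; the type and function names are illustrative, not LDK's actual serialization machinery:

```rust
use std::io::{self, Write};

// A writer that discards bytes and only counts how many were written,
// so a payload's serialized length can be measured without requiring a
// Writeable impl on the TLV structs themselves.
struct LengthCalculatingWriter(usize);

impl Write for LengthCalculatingWriter {
    fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
        self.0 += buf.len();
        Ok(buf.len())
    }
    fn flush(&mut self) -> io::Result<()> {
        Ok(())
    }
}

fn serialized_length(
    write_fn: impl Fn(&mut LengthCalculatingWriter) -> io::Result<()>,
) -> usize {
    let mut w = LengthCalculatingWriter(0);
    write_fn(&mut w).expect("counting writer never fails");
    w.0
}

fn main() {
    // Measure a 28-byte payload (e.g., a receive hop) without padding.
    let len = serialized_length(|w| w.write_all(&[0u8; 28]));
    assert_eq!(len, 28);
}
```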
> I don't think the sender can add dummy hops, per the spec: "the recipient needs to fully validate each dummy hop's onion payload to detect tampering (and must ensure that these hops have been used and not truncated)."
Do we do that? Not sure I quite understand what the spec is saying, though. How can the recipient "ensure that these hops have been used"? IIUC, dummy hops come after the recipient hop.
> Yeah, for onion messages if the sender concatenates a blinded path from themselves to the recipient's intro node using `next_blinding_override`, then said intro node can't tell that they're the intro node. Adding dummy hops doesn't affect whether forwarding nodes can tell where they are along the path, IIUC.
Could you explain how that last part is the case?
> Do we do that? Not sure I quite understand what the spec is saying, though. How can the recipient "ensure that these hops have been used"? IIUC, dummy hops come after the recipient hop.
We don't add dummy hops atm, therefore we also don't support validation for them (since dummy hops are created by the recipient for themselves).
I think the intention of the spec is for dummy hops to be processed the same as normal hops, to avoid timing attacks. So IIUC they're supposed to be encoded along the lines of how we do phantom nodes, where we process it as a forwarding hop but the next hop is ourselves, and so on until we get to the innermost onion layer that contains our actual recipient payload. This is gleaned from discussions by eclair/cln, though, I would have to look at the eclair codebase to see how they actually do it.
> Could you explain how that last part is the case?
I'm not sure how dummy hops could affect how intermediate nodes view their position in the path. Could you explain why you think that is the case? lol
Oh, right! The payload is always the same size (i.e., never shrinks), so a forwarder can't tell if the next hop is the recipient or not.
Update: Changes:
Considerations & Shortcomings:
Todo:
Update: From pr3177.20 to pr3177.21 (diff): Changes:
1. This allows setting the length of padding at the time of writing.
2. This will be used in the following commit to allow setting the padding for blinded message paths and blinded payment paths.
Update: From pr3177.22 to pr3177.23 (diff): Changes: Fixed the Padding approach.
Basically LGTM, thanks!
lightning/src/blinded_path/utils.rs
Outdated
```rust
/// Reads padding to the end, ignoring what's read.
pub(crate) struct Padding {}

/// Represents the padding round off size (in bytes) that is used to pad [`BlindedHop`]
pub const PADDING_ROUND_OFF: usize = 50;
```
Why 50? Are common blinded path hops 50 bytes? We should at least write down how big blinded path hops generally are to make sure we're not cutting right in the middle of common sizes here.
> Why 50? Are common blinded path hops 50 bytes?

Seems this would depend on the received TLVs' size (i.e., how big `MessageContext` or `PaymentContext` is for the given use case).

> We should at least write down how big blinded path hops generally are to make sure we're not cutting right in the middle of common sizes here.

Could you elaborate on this concern?
My concern is if there's some feature of a blinded path which an observer wants to detect, we don't want to have a constant here that is exactly on the line between that feature and not that feature, otherwise we have gained nothing.
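To make that boundary concern concrete: with hypothetical payload sizes of 48 bytes without some optional field and 52 bytes with it, a constant of 50 sits exactly on the line between them, so the padded sizes still distinguish the two. The sizes here are illustrative, not taken from real TLVs:

```rust
// Rounds a length up to the next multiple of `p`, the same rule the
// PR's padding uses.
fn padded(len: usize, p: usize) -> usize {
    (len + p - 1) / p * p
}

fn main() {
    let p = 50;
    // A payload without some optional field vs. one with it:
    assert_eq!(padded(48, p), 50);
    assert_eq!(padded(52, p), 100); // still distinguishable after padding
    // Only when both sizes fall in the same bucket is the feature hidden:
    assert_eq!(padded(35, p), padded(48, p));
}
```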
First off, I was able to determine the common sizes for various message and payment TLVs. You can check out the branch where I introduced the test.
After that, I ran some experiments with different padding sizes. It looks like P=50 and P=70 are both solid options. P=50 gave us 3 unique groups, and P=70 reduced it to 2 while keeping the extra padding fairly minimal.
Here's a table for comparison:
Original Sizes | Rounded (P=50) | Rounded (P=70) |
---|---|---|
35 | 50 | 70 |
70 | 100 | 70 |
10 | 50 | 70 |
28 | 50 | 70 |
62 | 100 | 70 |
96 | 100 | 140 |
96 | 100 | 140 |
96 | 100 | 140 |
27 | 50 | 70 |
42 | 50 | 70 |
26 | 50 | 70 |
26 | 50 | 70 |
128 | 150 | 140 |
53 | 100 | 70 |
Let me know which size you think we should go with! I'm happy to make any adjustments.
Thanks so much!
Could you label the rows by their types?
Yea, I thought the question was what was the largest value we'd want to consider when picking the rounding size. I figure we should pick kinda the largest value we expect to send over lightning.
Hello!

I've updated the `htlc_minimum_msat` to 1 billion (1 million sats) and adjusted `quantity` to `None` (instead of `Some(1)`) to better align with expected values.

Here's the branch link for you to check out: branch.

I also ran an analysis to determine the best `P` value with minimal added padding for the payment TLVs, especially focusing on `Payment::ReceiveTlvs`. Here's a quick look at the results:
| Type | Property | Original Sizes | Rounded (P=29) | Rounded (P=30) | Rounded (P=32) |
|---|---|---|---|---|---|
| Payment ForwardTlvs | With no Blinding Override | 29 | 29 x 1 = 29 (No Padding) | 30 x 1 = 30 (+1 size) | 32 x 1 = 32 (+3 size) |
| Payment ForwardTlvs | With Blinding Override | 29 | 29 x 1 = 29 (No Padding) | 30 x 1 = 30 (+1 size) | 32 x 1 = 32 (+3 size) |
| Payment ReceiveTlvs | For Offer | 128 | 29 x 5 = 145 (+17 size) | 30 x 5 = 150 (+22 size) | 32 x 4 = 128 (No Padding) |
| Payment ReceiveTlvs | For Refund | 56 | 29 x 2 = 58 (+2 size) | 30 x 2 = 60 (+4 size) | 32 x 2 = 64 (+8 size) |
From the looks of it, `P=32` seems to be the optimal choice for our case with payment TLVs.

Let me know what you think about it! Thanks a lot!
Wouldn't this depend on how many forward TLVs are included?
Hi!

So, I ran an analysis for different numbers of `ForwardTlvs`. When there are only a few `ForwardTlvs`, the best padding value, P, switches between `Offer` and `Refund`. However, once the count goes over 5, `P=29` generally becomes the optimal choice.

Here's the table I put together for reference. (I removed the P=30 column since it wasn't optimal in any scenario.)
ForwardTlvs Count | ReceiveTlvs Type | Total Size (P=29) | Total Size (P=32) |
---|---|---|---|
1 | Offer | 174 | *160 |
1 | Refund | *87 | 96 |
2 | Offer | 203 | *192 |
2 | Refund | *116 | 128 |
3 | Offer | 232 | *224 |
3 | Refund | *145 | 160 |
4 | Offer | 261 | *256 |
4 | Refund | *174 | 192 |
5 | Offer | 290 | *288 |
5 | Refund | *203 | 224 |
6 | Offer | *319 | 320 |
6 | Refund | *232 | 256 |
7 | Offer | *348 | 352 |
7 | Refund | *261 | 288 |
8 | Offer | *377 | 384 |
8 | Refund | *290 | 320 |
9 | Offer | *406 | 416 |
9 | Refund | *319 | 352 |
10 | Offer | *435 | 448 |
10 | Refund | *348 | 384 |
By the way, in practice, what's the general number of `ForwardTlvs` present in a blinded path?
Thanks a lot!
> So, I ran an analysis for different numbers of `ForwardTlvs`. When there are only a few `ForwardTlvs`, the best padding value, P, switches between `Offer` and `Refund`. However, once the count goes over 5, `P=29` generally becomes the optimal choice.

Thanks for putting this together!

> By the way, in practice, what's the general number of `ForwardTlvs` present in a blinded path? Thanks a lot!

Probably no more than two or three. I'd lean towards using 32 or something slightly higher to prevent a quantity in the `ReceiveTlvs` from adding another block of padding. Though including a payer note will already cause that.
```rust
/// Represents optional padding for encrypted payloads.
/// Padding is used to ensure payloads have a consistent length.
pub(crate) struct Padding {
```
Should we split `Padding` into a write-side one and a read-side one, now that we only use the write-side one here and the read-side one elsewhere in the module? Also because it's weird that `length` is only used on the write side.
That makes sense! I'm just wondering, though: would introducing two separate structs for `Padding` add extra complexity to the code?
Add a generic `WithPadding` struct to handle padding for `ForwardTlvs` and `ReceiveTlvs` used in `BlindedMessagePath` and `BlindedPaymentPath`. This struct applies padding to the contained TLVs, rounding them off to a specified value. This design provides flexibility in padding TLVs of varying sizes. The `PADDING_ROUND_OFF` value is chosen to be sufficiently large to properly mask the data type of the contained TLVs.

A note on Compact Blinded Paths: compact blinded paths are intended to be as short as possible, so to maintain their compactness we don't apply padding to them.
Add test to verify blinded message and payment path padding.
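A minimal sketch of the `WithPadding` idea from the commit message above, operating on pre-serialized bytes and zero-filling for simplicity; the rounding mirrors the description, but the details are assumptions rather than LDK's actual implementation:

```rust
use std::io::{self, Write};

// Round-off constant as described in the commit message; the value 50
// matches the PR discussion but is illustrative here.
const PADDING_ROUND_OFF: usize = 50;

// Wraps TLV bytes and pads their serialized size up to a multiple of
// PADDING_ROUND_OFF.
struct WithPadding<'a> {
    tlvs: &'a [u8],
}

impl<'a> WithPadding<'a> {
    fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
        let target =
            (self.tlvs.len() + PADDING_ROUND_OFF - 1) / PADDING_ROUND_OFF * PADDING_ROUND_OFF;
        w.write_all(self.tlvs)?;
        // Zero bytes stand in for the real padding TLV record.
        w.write_all(&vec![0u8; target - self.tlvs.len()])
    }
}

fn main() -> io::Result<()> {
    let mut buf = Vec::new();
    // A 62-byte payload rounds up to the next multiple of 50.
    WithPadding { tlvs: &[0u8; 62] }.write(&mut buf)?;
    assert_eq!(buf.len(), 100);
    Ok(())
}
```

Per the commit message, compact blinded paths would skip this wrapper entirely to stay small.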
Update: From pr3177.24 to pr3177.25 (diff): Changes:
Description

This PR introduces padding for `Payment` and `Message` Blinded TLVs to ensure that the size of each packet in the path is uniform.