This repository has been archived by the owner on Oct 4, 2019. It is now read-only.

ECIP1011 for Replay Attack Fix Using Change in the Signing Process #10

Open · wants to merge 32 commits into master

Conversation

avtarsehra

See the draft version of the ECIP1011 here:

https://github.com/avtarsehra/ECIPs/blob/master/ECIPs/ECIP1011.md

I believe changing the signing process would be the simplest and cleanest way to fix this. While future-proofing is useful, I personally feel that a robust, secure, clean and quick fix right now, one that can be upgraded later, would be the best option. This approach allows you to use a constant value, which can be made into a dynamic value at a later stage. But I think trying to implement a solution that covers all future forks would be overkill, and would require referencing the blockchain, thus minimising the possibilities for creating offline transactions.
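To make the idea concrete, here is a minimal sketch of the scheme. All names are mine, not from the ECIP, and `hashlib.sha3_256` is only a stand-in for Ethereum's Keccak-256 (the two differ in padding):

```python
import hashlib

# Sketch: append a per-chain constant to the signing payload so that a
# signature produced for one chain fails verification on another.
# NOTE: hashlib.sha3_256 is NIST SHA3, used here only as a stand-in
# for Keccak-256.

def signing_hash(tx_bytes, fork_hash):
    """Hash over the serialized transaction plus the chain's constant."""
    return hashlib.sha3_256(tx_bytes + fork_hash).digest()

tx = b"\xf8\x6b"                           # placeholder for RLP-encoded tx bytes
etc_hash = signing_hash(tx, b"\x11" * 32)  # illustrative ETC constant
eth_hash = signing_hash(tx, b"\x22" * 32)  # illustrative ETH constant
assert etc_hash != eth_hash                # same tx, different signing hash per chain
```

Because the constant never travels with the transaction, the wire size is unchanged; only the preimage of the signing hash grows.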

@elaineo
Member

elaineo commented Oct 3, 2016

Future forks might not even need to worry about replay attacks, right? If we leave difficulty bombs behind...

@splix
Contributor

splix commented Oct 3, 2016

And fork 1920k is already behind us, and it's a well-known block. It's too late to introduce any fix for that particular block; it already has several workarounds, like the SafeSplit contract and similar.

@avtarsehra
Author

Yes, I agree. This is why we could even use the approach suggested in the ECIP, but with an arbitrary constant hash for the change. Maybe something that can be easily adjusted without hardcoding, so that if a change is required in future it can be executed by users directly, even in the case of light clients.

@avtarsehra
Author

I had a detailed chat with @splix on this, around using a dynamic value for the appended value in the signing and validation process. The outcomes of this made sense so I have updated the ECIP based on these discussions.

@marcusrbrown
Contributor

The pros of this algorithm are pretty straightforward:

  • No increase in transaction size
  • Simple signing and verification mechanism
  • Replay prevention against incompatible chains (e.g. ETH)

However, it does not provide replay prevention within the same chain, or on compatible chains (e.g. a private chain based off of ETC).

Another issue is that the client will have to keep track of all previous and current block hashes in order to sync the blockchain from the beginning. This is especially true for light clients - how do you communicate when the hash change occurs, and how do light clients verify transactions at any given point in time? My proposal suffers from the same issue - dependency on a stable block number that can change. Once you have to maintain more than one block hash, you begin increasing the cost of verifying transactions to the receiver, since the receiver has to iterate over the current and previous hashes prior to recovery.

An alternative is to use a pool of block numbers that are frozen at an interval (128 or 256 blocks). The pool could exist inside of a contract. That solves the issue of having a constant, verifiable source of hashes to test against. That still doesn't solve the problem of the receiver having to compute the KEC() of the transactions + block hash for every hash in the pool.
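The receiver-side cost described above can be sketched as a simple loop (my own helper names; signature recovery is elided behind a callback, and `hashlib.sha3_256` stands in for KEC):

```python
import hashlib

def verify_against_pool(tx_bytes, is_valid_sig, hash_pool):
    """Try each candidate block hash until one yields a valid signature.
    Worst case: len(hash_pool) hash computations plus recoveries per tx."""
    for block_hash in hash_pool:
        candidate = hashlib.sha3_256(tx_bytes + block_hash).digest()
        if is_valid_sig(candidate):  # stand-in for ecrecover + address check
            return True
    return False

# Illustrative pool frozen at an interval (e.g. every 128 or 256 blocks).
pool = [b"\x01" * 32, b"\x02" * 32, b"\x03" * 32]
signed_over = hashlib.sha3_256(b"tx" + b"\x02" * 32).digest()
assert verify_against_pool(b"tx", lambda h: h == signed_over, pool)
```

The loop makes the cost growth explicit: each extra hash in the pool adds one more KEC plus one more recovery attempt in the worst case.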

BTW, I don't agree that we are so up against the clock that we need to avoid future-proofing the algorithm against additional forks or cousin chains. We have the opportunity to solve this issue for the foreseeable future. We are close to having an elegant solution; let's spend a bit more time iterating on it.

@avtarsehra
Author

@igetgames with the new modification the following is no longer the case:

However it does not provide replay prevention within the same chain, or compatible chains (e.g a private chain based off of ETC).
We have introduced a dynamic variable that is appended to transactions in the signing process. This dynamic variable is a block hash chosen with a running lead time of 6 epochs (~24 hours). But there is also an option for light clients to use a default variable of 0, so they don't need online access. This would be a compromise for building transactions completely offline.

In this update we included an approach where validation of a signed transaction can be done with a pool of 7 block header hashes from the last 7 days.
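As a sketch of that schedule (helper names are mine; I am assuming an epoch of 1,024 blocks, which is what the draft's "6 epochs ~ 6,144 blocks ~ 24 hours" figure implies):

```python
EPOCH_BLOCKS = 1024  # implied by "6 epochs ~ 6,144 blocks" in the draft
LAG_EPOCHS = 6       # ~24 hours of lead time
WINDOW_DAYS = 7      # validation accepts up to a week of prior values

def reference_block(height):
    """Block whose hash is appended when signing at `height` (sketch):
    the most recent daily boundary at least one full day behind."""
    day = LAG_EPOCHS * EPOCH_BLOCKS
    return max(0, (height // day - 1) * day)

def candidate_blocks(height):
    """The up-to-7 reference blocks a validator must check (one per day)."""
    day = LAG_EPOCHS * EPOCH_BLOCKS
    first = reference_block(height)
    return [first - i * day for i in range(WINDOW_DAYS) if first - i * day >= 0]
```

Signers and validators only need block heights and hashes they already track, so no extra state is introduced beyond the rolling window of 7 header hashes.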

I am not a big fan of relying on a contract for referencing protocol-level parameters. I don't think I can give you a good answer to that, except that if I had to write this up in the Yellow Paper spec (which I am planning to do as we go forward), it would just look very inelegant. But that is just me, and may not be everyone's pragmatic PoV.

Yes, I agree with you. I don't think we should be driven to compromise on a good solution; as you say, we are close to finding something that would be really cool!

@marcusrbrown
Contributor

@avtarsehra and I are discussing this on Slack. We'll update this thread in a bit.


With this approach, for any future forks the replay attack is resolved after a time lag of ```6 epochs``` (or 24 hours, ```~ 6,144 blocks```).

In this dynamic approach, T_a could also be set to an arbitrary value ```0```, which can be used by light clients or for constructing off-line transactions.
Member

Does this make offline txs susceptible to replay attacks later if there is another fork?

Author

Yes, offline transactions would be susceptible to replay attacks. But the arbitrary constant is easy to modify and can be chosen at the point of a fork. However, I can't see how offline wallets would automatically manage forks without some manual change, no matter what approach we go with.

4. Validation can then take place with up to 1 week of values: ```t0, t0-24, t0-48 ... t0-168```, which implies at most 7 checks would need to be performed for transactions delayed by up to 7 days.

With this approach, for any future forks the replay attack is resolved after a time lag of ```6 epochs``` (or 24 hours, ```~ 6,144 blocks```).

Member

I don't understand this. Is this intended to prevent delayed transactions that were waiting in the txpool?

Author

Yes, this approach would make delayed transactions invalid. We can increase the period of validity by increasing how far back the check takes place.

@avtarsehra
Author

avtarsehra commented Oct 4, 2016

I'm happy with your suggestion. The dynamic signing variable is a nice-to-have from my view, but I agree it needs more analysis and simulations to find the optimal parameters. I will continue working on it as I think it can be used in some way...

@marcusrbrown
Contributor

Would you be OK with going forward with a client implementation using the most recent block hash? That value is trivially accessible. If you come up with another solution for the dynamic variable, it will be easy to plug in at that point.

@avtarsehra
Author

Yes, definitely, that would be OK. If it is a fixed value it doesn't matter which one is used, so the most recent block hash would be good. The other can be null (as you mentioned) for offline client support.

@elaineo
Member

elaineo commented Oct 5, 2016

I agree with @igetgames on the dynamic signing var -- this is a separate spec that changes the current functionality, and I don't see the rationale for it.

The general procedure of changing the signing process looks like a good solution to me.

@avtarsehra
Author

I'm good with that. I will remove the last section in the ECIP connected to the dynamic signing variable.

@avtarsehra
Author

avtarsehra commented Oct 5, 2016

@splix @elaineo @igetgames I have made the update as discussed above. Just for reference I have left the part below in, but included some of the points from above in the main body.

On a related note, we will need to get back to the devs for Parity etc. and discuss timelines regarding the Parity changes. If everyone is OK with what we have, we can share the ECIP for the diff bomb delay and the one for the replay fix. We will need to include our timelines, i.e. when we are planning to go live.

@whatisgravity
Contributor

Is this to be stacked with the difficulty bomb delay, so it will not cause unnecessary interruptions for older clients?

@realcodywburns
Contributor

Light clients and hardware wallets shouldn't be affected by the change, as the block header hash can be 0 if the dynamic variable is used. Wallet addresses would be unaffected, but users will be unable to send any transactions with old clients after the switch. Enough time needs to be given to the exchanges and miners to upgrade their systems for a smooth transition. The clients can be released with the fix ASAP, and then the trigger block can engage both changes at once globally.

@marcusrbrown
Contributor

Light clients and hardware wallets shouldn't be affected by the change, as the block header hash can be 0 if the dynamic variable is used.

All clients are affected. Hashing an additional 32 bytes of '0' will make the transaction hash differ on clients that only hash the existing transaction structure.
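A two-line sketch of why (again using `hashlib.sha3_256` as a stand-in for Keccak-256):

```python
import hashlib

tx = b"\x01\x02\x03"  # placeholder serialized transaction
old_style = hashlib.sha3_256(tx).digest()               # existing scheme
new_style = hashlib.sha3_256(tx + b"\x00" * 32).digest()  # with appended zeros
# Even 32 zero bytes change the preimage, so old and new clients
# compute different signing hashes for the same transaction.
assert old_style != new_style
```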

Enough time needs to be given to the exchanges and miners to upgrade their systems for a smooth transition. The clients can be released with the fix ASAP, and then the trigger block can engage both changes at once globally.

I would agree, except that replay attacks are an ongoing and critical issue. The implementation can be written to trigger on a specific block sometime in November (giving enough time for Parity and other implementations to apply and test, and for announcements to be made), and future-proofed, because we know the difficulty bomb delay will occur at 3000000. The code will simply check the block height to retrieve the most recent fork hash.
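That lookup could look something like this (the heights and hashes below are purely illustrative, not a proposed schedule):

```python
# Hypothetical fork schedule: (activation_height, fork_block_hash).
FORK_HASHES = [
    (2_500_000, b"\xaa" * 32),  # illustrative replay-fix activation
    (3_000_000, b"\xbb" * 32),  # difficulty-bomb delay fork
]

def current_fork_hash(height):
    """Return the most recent fork hash at or below `height`,
    or None when pre-fork signing rules still apply."""
    latest = None
    for activation, fork_hash in FORK_HASHES:
        if height >= activation:
            latest = fork_hash
    return latest
```

Adding a future fork then only means appending one `(height, hash)` pair, with no change to the signing or validation logic itself.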

An advantage of fixing replay attacks sooner than later is that it will demonstrate that the Classic developers can coordinate critical fixes and roll them out in a timely fashion.

@realcodywburns
Contributor

realcodywburns commented Oct 5, 2016

This is very true. Is the plan to issue this as an EIP as well? Losing Ledger, Trezor, and whatever else will be unfortunate. Also CEW and CEW-CX will need a patch.

@elaineo
Member

elaineo commented Oct 5, 2016

@igetgames I don't think a November trigger is realistic. I would love to have the replay fixed before ETH, but would prefer to spend more time having our clients fully tested and getting it propagated to everyone with plenty of advance notice.

@marcusrbrown
Contributor

@realcodywburns I definitely think we should issue this as an EIP, but I would want @avtarsehra to sign off on it first.

@elaineo Understood.

@realcodywburns
Contributor

Is this PR ready to be merged?

@elaineo
Member

elaineo commented Nov 3, 2016

Looks like ETH will have a hard fork in about 2 weeks (block 2,642,462) to implement this: ethereum/EIPs#155

@avtarsehra
Author

avtarsehra commented Nov 3, 2016

@elaineo yes, it's great they are doing it. But it would also be good for ETC to implement our own version, so if ETC forks we can prevent replay issues between ETC and other chains that may emerge. We are using this as a test for the new devs. @splix and I were supposed to have a call with them today, but unfortunately it had to be cancelled at the last minute. I will follow up on that tomorrow and provide an update.

@elaineo
Member

elaineo commented Nov 3, 2016

Do we want our own version? If the changes are compatible except for the CHAIN_ID, then it will be easier to pull upstream changes.

@avtarsehra
Author

Yes, that is a good point. We were mainly using it as a dev test, as it utilises some key skills we wanted to identify.

@splix
Contributor

splix commented Jun 11, 2017

@avtarsehra maybe we should merge it with status Rejected, for history and reference?

6 participants