Denial-of-Service Through UTXOs Flooding #546
Comments
DadeKuma marked the issue as duplicate of #402
DadeKuma marked the issue as sufficient quality report
0xean changed the severity to 3 (High Risk) |
0xean marked the issue as satisfactory |
Hi, @0xean. I'd like to point out one detail about why this issue is not a duplicate of #402: both have different root causes and fixes.

As presented above, the root cause for this issue #546 is that the software is vulnerable to a DoS attack because it does not implement a defensive and functional strategy to consolidate and deal with all UTXOs, especially near-dust ones. This attack path is still open for confirmed UTXOs at a negligible cost to the attacker, so accepting only confirmed UTXOs does not fix it. The "Recommended Mitigation Steps" section explains what could be done differently.

UTXO consolidation is a critical strategy for wallets, especially in this context. Bitcoin Core, and similar alternatives, implements a robust strategy to consolidate UTXOs. But, due to very specific needs, ZetaChain foregoes those protections by selecting the UTXOs used to build the transaction without Bitcoin Core's coin selection algorithm. After being flooded with near-dust UTXOs, and without any strategy to deal with them, the DoS presented in the issue is persistent, and consolidating these UTXOs in order to re-establish normal operations with the same TSS address may be prohibitively expensive, calling for more drastic measures. Given that UTXO selection is consensus-critical amongst peers in ZetaChain, the issue is even more sensitive. A similar protection must be implemented, otherwise the risk of unintended or actively malicious DoS is inevitable.

The dust limit is not part of Bitcoin's consensus, but, as shown in the issue above, it is realistic to expect that transactions with outputs above 546 satoshis will be successfully relayed across the network.
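As a back-of-the-envelope check on that figure, here is a minimal sketch of Bitcoin Core's default dust rule for a P2PKH output. The constants are Bitcoin Core defaults; the helper is purely illustrative and is not code from either project:

```go
package main

import "fmt"

// Illustrative reconstruction of Bitcoin Core's dust threshold for a
// P2PKH output: an output is dust if its value is below the cost of
// creating plus later spending it at the dust relay fee rate.
func dustThresholdP2PKH() int64 {
	const outputSize = 34      // 8 (value) + 1 (script length) + 25 (scriptPubKey)
	const spendInputSize = 148 // size of the future input that spends this output
	const dustRelayFee = 3     // sat/vB, Bitcoin Core default (3000 sat/kvB)
	return (outputSize + spendInputSize) * dustRelayFee
}

func main() {
	fmt.Println(dustThresholdP2PKH()) // 546
}
```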
My analysis above focuses on an active attack rather than normal occurrences in the daily management of the wallet. Thank you so much for your consideration and time.
Thanks @ciphermarco - agreed, this is a separate issue. That being said, I do not believe this shows sufficient proof that a DoS will occur here; more evidence would be required to show the number of UTXOs required to actually have the network fail, and the costs or timeline for reaching that number. I DO think this is a very valid QA issue, however, but cannot award it as M, as there is not sufficient evidence to show the feasibility of this attack, or when that state would be reached through "normal" use, for enough resources to be consumed dealing with these UTXOs to actually DoS the chain.
0xean marked the issue as not a duplicate |
0xean changed the severity to QA (Quality Assurance) |
0xean marked the issue as grade-b |
Lines of code
https://github.com/code-423n4/2023-11-zetachain/blob/b237708ed5e86f12c4bddabddfd42f001e81941a/repos/node/zetaclient/bitcoin_client.go#L736-L752
Vulnerability details
Impact
The function `FetchUTXOS` in `bitcoin_client.go` is responsible for fetching and ordering the UTXOs to be used by the Bitcoin TSS address. The way the UTXOs are treated and traversed opens a simple way to perform a denial-of-service attack against the client. The impact is worsened by the small interval scheduled between the potentially intensive loops and by the listing of zero-confirmation UTXOs.

Proof of Concept
When `FetchUTXOS` is called, it lists all the known UTXOs for the current Bitcoin TSS address. The RPC call `listunspent` can be, by itself, a bottleneck for wallets dealing with a big number of transactions and UTXOs; the ~160ms mentioned in the code comments is not guaranteed for such cases. But the main issue emerges when the whole list of UTXOs is looped over and sorted indiscriminately.

Be it through daily customised use of the address or an attacker actively targeting this vulnerability, the list of UTXOs can grow unmanageably large without a robust and recurrent UTXO consolidation mechanism. In this case, listing zero-confirmation UTXOs increases the risk by decreasing the cost of an attack, but even confirmed UTXOs can be exploited by an attacker trying to drain the network participants' resources and disrupt critical operations. Let us focus on an active attack.
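A simplified sketch of the pattern described, with names and shapes approximated from this report rather than copied from the codebase: every tick, the client lists every unspent output with zero minimum confirmations and sorts the full list, so the work grows with the attacker-controlled UTXO count.

```go
package main

import (
	"fmt"
	"sort"
)

type UTXO struct {
	TxID   string
	Vout   uint32
	Amount float64 // BTC
}

// fetchUTXOs approximates the flow described in the report: list every
// unspent output for the TSS address (minConf = 0 accepts unconfirmed,
// attacker-replaceable outputs), then sort the whole list. Both the RPC
// response size and the O(n log n) sort scale with the number of UTXOs
// an attacker has flooded the address with, and this runs on a short
// (~30s) schedule.
func fetchUTXOs(listUnspent func(minConf int) []UTXO) []UTXO {
	utxos := listUnspent(0) // zero confirmations: cheap for an attacker
	sort.Slice(utxos, func(i, j int) bool { return utxos[i].Amount < utxos[j].Amount })
	return utxos
}

func main() {
	// Stand-in for the Bitcoin RPC: an address flooded with near-dust outputs.
	flooded := func(minConf int) []UTXO {
		out := make([]UTXO, 1_000_000)
		for i := range out {
			out[i] = UTXO{TxID: fmt.Sprintf("tx%d", i), Vout: 0, Amount: 0.00000546}
		}
		return out
	}
	fmt.Println(len(fetchUTXOs(flooded))) // every tick pays to list and sort all of these
}
```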
Active DoS
An attacker can generate near-dust UTXOs, just valuable enough to be relayed (see the dust limit concept above), to this address, spending very little to cause a huge resource drain over time. This attack is cost-efficient even if the UTXOs are lost to the attacker, but, as it stands, the cost of the attack is greatly reduced since the listed UTXOs can have zero confirmations and thus be "double-spent" or replaced-by-fee. Besides the near-DoS effect that even one loop and sort over such a huge UTXO list can cause, this potentially malicious and intensive operation is expected to be performed every 30 seconds.
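To put a rough number on the funding side, here is a back-of-the-envelope estimate; the fee rate and flood size are assumptions for illustration, not measurements, and the zero-confirmation listing lets the attacker recover most of this via replace-by-fee.

```go
package main

import "fmt"

func main() {
	// Rough, illustrative attack-cost estimate; fee rate and UTXO count
	// are assumptions, not measurements from the network.
	const (
		dustSats   = 546       // value locked in each near-dust output
		feeRate    = 20        // sat/vB, assumed network fee rate
		vbPerOut   = 34        // vbytes added per extra P2PKH output
		numOutputs = 1_000_000 // flood size, assumed
	)
	totalSats := int64(numOutputs) * (dustSats + feeRate*vbPerOut)
	fmt.Printf("~%.2f BTC to create %d near-dust UTXOs\n",
		float64(totalSats)/1e8, numOutputs)
	// With zero-confirmation listing, the funding transactions can be
	// replaced-by-fee, so the victim does the work even if most of this
	// outlay is later clawed back by the attacker.
}
```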
`SelectUTXOs` mentions consolidated UTXOs and implements an algorithm that performs some consolidation. While it is true that some consolidation happens, it is not nearly enough to deal with the problem, especially when actively exploited, as the rate comparison below illustrates.

This is a clear DoS vector for clients and cannot be deemed safe without a robust and recurrent strategy to consolidate UTXOs. Accepting UTXOs with zero confirmations before spending resources on them only increases the attack's efficiency and its impact.
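A toy throughput comparison of why a fixed per-transaction input cap cannot keep up; every number here is an assumption patterned on this report's description, not a value from ZetaChain's configuration:

```go
package main

import "fmt"

func main() {
	// Toy rate comparison; all numbers are illustrative assumptions.
	const (
		maxInputsPerTx = 20     // assumed fixed consolidation cap per outgoing tx
		txPerHour      = 6      // assumed outgoing txs that get to consolidate
		floodPerHour   = 10_000 // assumed near-dust UTXOs added by an attacker per hour
	)
	consolidatedPerHour := maxInputsPerTx * txPerHour
	fmt.Printf("consolidated/hour: %d, flooded/hour: %d, net growth/hour: %d\n",
		consolidatedPerHour, floodPerHour, floodPerHour-consolidatedPerHour)
	// Whenever flooding outpaces the fixed cap, the list grows without
	// bound, so opportunistic consolidation alone cannot restore normal
	// operation.
}
```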
Tools Used
Manual: code editor.
Recommended Mitigation Steps
I think the best potential solution is to implement a robust and recurrent UTXO consolidation strategy that is not limited by a low and fixed value (e.g., `maxNoOfInputsPerTx`), while being defensive when processing these UTXOs in loops and in any potentially resource-intensive operation.

The consolidation strategy may need to be separated from the main transaction construction and signature. That is, it could be another scheduled operation with the sole purpose of consolidating UTXOs. Still, some consideration could be given to making `FetchUTXOS` a bit more efficient in clearing UTXOs without affecting the main movement of funds.

As for being defensive when processing UTXOs, wasting resources on unsafe zero-confirmation UTXOs must be completely prevented. And, when possible, potentially resource-intensive operations should be avoided below a safely configured UTXO value threshold. But as this is not always possible, and the UTXO list needs to be processed one way or another, a robust and recurrent UTXO consolidation strategy seems to be the main course of action. A minimal sketch of this direction follows below.
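The sketch below combines the two ideas under assumed names, thresholds, and scheduling. This is not ZetaChain's API; it only illustrates filtering by confirmations and value before sorting, plus a separate scheduled sweep, and `buildAndSignSweep` is a hypothetical hook into the TSS signing flow.

```go
package main

import (
	"fmt"
	"sort"
	"time"
)

type UTXO struct {
	TxID          string
	Vout          uint32
	Amount        float64 // BTC
	Confirmations int
}

// fetchSpendableUTXOs filters before sorting: require confirmations so
// attacker outputs cannot be double-spent away, and skip near-dust
// outputs below an operator-configured threshold. Thresholds here are
// illustrative assumptions.
func fetchSpendableUTXOs(all []UTXO, minConf int, minAmount float64) []UTXO {
	spendable := all[:0:0]
	for _, u := range all {
		if u.Confirmations >= minConf && u.Amount >= minAmount {
			spendable = append(spendable, u)
		}
	}
	sort.Slice(spendable, func(i, j int) bool { return spendable[i].Amount < spendable[j].Amount })
	return spendable
}

// consolidationLoop runs independently of the main transaction builder,
// sweeping batches of small-but-confirmed UTXOs back into one output so
// the UTXO set stays bounded even under flooding.
func consolidationLoop(list func() []UTXO, buildAndSignSweep func([]UTXO), batch int, every time.Duration) {
	for range time.Tick(every) {
		small := fetchSpendableUTXOs(list(), 1 /*minConf*/, 0 /*no floor: sweep dust too*/)
		if len(small) > batch {
			small = small[:batch]
		}
		if len(small) > 1 {
			buildAndSignSweep(small) // one tx, many inputs, single output
		}
	}
}

func main() {
	// Example: filter a flooded set down to confirmed, non-dust outputs.
	all := []UTXO{
		{TxID: "a", Amount: 0.00000546, Confirmations: 0}, // unconfirmed near-dust: dropped
		{TxID: "b", Amount: 0.00000546, Confirmations: 3}, // confirmed dust: left for the sweeper
		{TxID: "c", Amount: 0.5, Confirmations: 3},        // spendable now
	}
	fmt.Println(len(fetchSpendableUTXOs(all, 1, 0.0001))) // 1
}
```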
Assessed type
DoS