<p>It was long assumed that anything built on a blockchain could not have ties to the physical world, i.e., that it would be a self-contained system, and since it would be self-contained it could not meaningfully impact anything in the physical world.</p>
<p>The recent growth of DeFi, however, seems to challenge these assumptions. Let’s look at some DeFi assets and why they have value.</p>
<ol><li><p>ETH - has value because it’s a store of value, or it earns cashflow from transaction fees in proof of stake.</p></li><li><p>USDC - has value because we trust Circle to back it with fiat</p></li><li><p>DAI - has value because we trust an oracle system to back it with ETH</p></li><li><p>UNI - has value because it can earn fees from trading of other DeFi coins</p></li><li><p>LINK - has value because it will earn fees from queries in the future (I hope)</p></li><li><p>SNX - has value because it earns fees from users of its synthetic assets.</p></li><li><p>sUSD - has value because we trust Chainlink oracles to back it with SNX</p></li><li><p>FTT - has value because we trust FTX to back it with fees from the FTX exchange.</p></li></ol>
<p>Broadly, assets have value for one of these reasons:</p>
<ol><li><p>They are a store of value - ETH, BTC or fiat</p></li><li><p>They earn fees (cashflow) - UNI, SNX, LINK</p></li><li><p>We trust someone to back them with assets of type 1 or 2 - USDC backed by fiat, FTT backed by FTX exchange fees, sUSD backed by SNX, DAI backed by ETH</p></li></ol>
<p>In the beginning ETH was the only asset that existed on the ethereum blockchain. Assets like UNI or AAVE (cashflow-based) could not have had value at that point because they require at least two different assets to already exist. Therefore our entire ecosystem must rely on assets of the third type, the one where we “trust” someone to back one asset with another. Indeed, the entire DeFi ecosystem was kickstarted by protocols like MakerDAO and Synthetix. Even today, most DeFi volume goes through or indirectly relies on assets like USDC and DAI.</p>
<p>How to do this while minimising trust is the oracle problem.</p>
<h3>The oracle problem</h3>
<p>So far we broadly have two types of oracles - both at extremes. One is a centralised oracle, where a single party - Circle for USDC or FTX for FTT - is trusted. The other is a fully decentralised oracle like Chainlink or MakerDAO’s oracle, where anyone can anonymously and permissionlessly join the process. The collective as a whole uses majority voting and we trust this majority.</p>
<p>What are the reasons for an oracle to cheat?</p>
<ul><li><p><strong>Profit:</strong> The security of some system relies on the oracle acting correctly. Gaming this system allows you to profit. For instance you could short sUSD and then report its price as zero. Or Circle could simply stop users from redeeming their USDC for fiat. (A rough sketch of the profit from such an attack follows this list.)</p></li></ul>
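<p>As a purely illustrative sketch of that profit incentive - the position size and prices below are hypothetical, not taken from any real incident - the one-shot gain from a manipulated price report can be quantified like this:</p>
<pre><code># Illustrative only: the one-shot profit an oracle operator might capture by
# reporting a false price while holding an opposing position. Numbers are made up.

short_position_susd = 1_000_000   # hypothetical short position, in sUSD
true_price = 1.00                 # the honest feed value
reported_price = 0.00             # the manipulated report

# A short position gains as the reported price falls.
attack_profit = short_position_susd * (true_price - reported_price)
print(f"one-shot profit from manipulation: {attack_profit:,.0f}")
</code></pre>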
<p>What are the reasons to not cheat?</p>
<ul><li><p><strong>Loss of all future revenue:</strong> For instance MakerDAO or Chainlink would no longer be able to earn revenue if it became known that their oracles were faulty. Since the value of MKR or LINK depends on the revenue they generate, their prices will go to zero - all future revenue is lost by the oracle providers. The more revenue you stand to gain in the future if you’re honest today, the more incentive for you to be honest today. Clearly this favours protocols that are expected to make more revenue.</p></li><li><p><strong>Loss of future revenue via reputation:</strong> Reputation is important because it leads to loss of revenue in future opportunities. For instance if it is known that someone was involved in manipulating Chainlink’s oracles, then that person won’t be trusted even in future opportunities outside of Chainlink. On the flip side, someone who has acted honestly for a long time can be trusted with more value. This favours those who can carry their identity from one project to the next. At the extreme end we have fully public identities, who attach their reputation to every single action they make and have every action public. There is a tradeoff between trust and privacy.</p></li><li><p><strong>Loss of reputation (psychological): </strong>We tend to assume money is the only incentive to be honest, but there is also a psychological benefit to being perceived as honest or trustworthy rather than scammy or immoral.</p></li><li><p><strong>Moral reasons: </strong>Fairly self-explanatory: many of us inherently don’t want to use “unfair” tactics to gain wealth from others. What is unfair is subjective and governed by consensus. For instance bluffing in poker is not unfair (consensus says lying is allowed), but lying about your tax accounts is unfair. Some perceive bug exploits in DeFi to be unfair, some don’t. Oracle dishonesty is usually assumed to be unfair.</p></li><li><p><strong>Loss of freedoms, access to opportunities: </strong>This is a major source of security in the real world. The court can take away your time or access to opportunities by imprisoning you or banning you from engaging in certain activities. Since it is assumed that your freedom is worth more than any economic incentive or penalty, this is highly effective. Imprisonment requires an active police force, and it also requires consensus. Two police forces operating under conflicting rules (say, one court-operated and one DAO-operated) cannot coexist, as this leads to war. Therefore it is unlikely we will be able to enforce this using the blockchain any time soon. We can however piggyback on top of the existing system, by creating laws regarding fraud on the blockchain. A weaker form of loss of freedoms would be loss of access to specific opportunities. For instance Aave may blacklist addresses and tokens involved in a Chainlink exploit, restricting that user’s ability to get on-chain loans against those tokens.</p></li><li><p><strong>Loss of staked value and layering: </strong>When you stake LINK or UMA or MKR, you only stand to lose future revenue. However if you additionally stake, say, ETH, you stand to lose more than this future revenue. This is a problem of capital efficiency - the more value you have to stake to prove your honesty, the less profit you will be making per unit of value staked. This does not increase security against a malicious majority, since a malicious majority will not slash itself; however it increases security via the incoordination defence (discussed later in the article). Alternatively you need to have a lower, more trusted layer that can slash users in higher, less trusted layers. An example would be courts administering fines. Even if 51% of a company’s board acts maliciously to gain profit, a court can impose fines as long as the court itself is not 51% attacked. (A rough numeric comparison of the gain from cheating against future revenue and staked value follows this list.)</p></li></ul>
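<p>As a minimal, purely hypothetical sketch of how these incentives stack up against the gain from cheating - real protocols differ in how revenue, discount rates and slashing are defined - the comparison looks roughly like this:</p>
<pre><code># Illustrative comparison of the incentives above. All figures are hypothetical.

attack_profit  = 1_000_000   # one-shot gain from reporting dishonestly
annual_revenue = 300_000     # fees earned per year by an honest oracle
discount_rate  = 0.10        # rate used to value future revenue today
staked_value   = 500_000     # outside collateral (e.g. ETH) that can be slashed

# Value today of all future revenue, treated as a perpetuity.
future_revenue_value = annual_revenue / discount_rate

# What a dishonest oracle gains versus what it gives up.
cost_of_cheating = future_revenue_value + staked_value
print(f"gain from cheating: {attack_profit:,.0f}")
print(f"cost of cheating:   {cost_of_cheating:,.0f}")
print("honesty is incentivised" if attack_profit < cost_of_cheating else "cheating pays")
</code></pre>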
<h3>Who does the oracle problem apply to?</h3>
<p>Everyone. More broadly, the oracle problem simply refers to incentivising one set of actors to behave in a certain way to ensure that another system that relies on this can be secure. So it applies to:</p>
<ul><li><p>Blockchains, such as ethereum and bitcoin</p></li><li><p>Oracles such as UMA, Chainlink and MakerDAO’s oracles</p></li><li><p>Centralised trust sources such as Circle for USDC</p></li><li><p>Rollup operators</p></li></ul>
<p>Broadly, we have seen that incentives and penalties are applied, and hopefully the profit obtainable from being dishonest is less than these incentives and penalties. UMA has made <a href="https://docs.umaproject.org/oracle/econ-architecture">some progress</a> in studying this, as has <a href="https://augur.net/blog/v2-resolution/">Augur</a>.</p>
<h3>Isolated box defence</h3>
<p>Blockchains such as ethereum have the peculiar benefit that they are reversible - and the assets they secure hold value only if they are honest. ETH and all ERC20s (USDC, DAI, etc) will no longer hold value if ethereum’s rules are violated. People will hard fork and revert the transaction, and the tokens on this fork will now have value. This is unlike most systems. For instance USDC will continue to hold value if it is stolen using a Chainlink exploit. Gold will continue to have value if it is stolen from a bank.</p>
<p>This isolated box defence ensures that even a colluding majority has no incentive to be dishonest. Even if they make very little revenue and have very little to lose by being dishonest, they cannot profit by being dishonest.</p>
<p>The isolated box defence can be applied to oracles too. For instance if any dapp that relies on Chainlink is only allowed to transact in LINK (and not USDC or ETH etc), then this is an example of an isolated box defence. If Chainlink is ever malicious, LINK will drop in value and people will supposedly get defrauded. However they can deploy a clone, LINK2, on ethereum, fork out all the malicious LINK voters, and give LINK2 to everyone else who owned LINK. The malicious oracle update can be reverted. Now the users of the dapp that owned LINK are protected through social coordination, just the way ETH holders are.</p>
<p>This is also an example of how isolated boxes can be built inside isolated boxes. The disadvantage, of course, is that the inner box cannot interact with anything else. All trades or bets need to be using LINK and not ETH or USDC. Since that is the case, the inner box can even isolate itself from the outer box, i.e., this Chainlink variant being discussed can form an isolated blockchain and have its own consensus mechanism. A blockchain’s consensus mechanism can hard-code any connections to the real world; this includes the full scope of what any oracle can do.</p>
<h3>Incoordination defence</h3>
<p>Decentralised oracles have an additional benefit in that they work well with uncoordinated actors. Consider a system where a single person acts as the oracle. This person stands to gain X by cheating. They also stand to gain Y in discounted cashflows till the end of time. Assume X < Y. A single person oracle does not have an incentive to cheat. If we replace this single person with a hundred people and split the cashflow and profit as Y/100 and X/100 each, still nothing changes.</p>
<p>What changes however is the need for coordination among the actors. So even if X is slightly more than Y, it may not be possible to launch an attack due to lack of coordination. Any minority building their attack cannot do so on-chain or over a period of time, as they would then get slashed. They need to attack as a majority at the same instant, with an already established way to split the profits from manipulation. How difficult or easy this coordination is depends on the system and is an open problem.</p>
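<p>To make this scale-invariance concrete, here is a hypothetical sketch (the figures and the hundred-actor split are illustrative, not taken from any specific protocol):</p>
<pre><code># Illustrative: splitting an oracle among N actors scales both the one-shot gain
# from cheating (X) and the future revenue (Y) by 1/N, so the per-actor trade-off
# is unchanged; what changes is how many actors must coordinate. Numbers are made up.

cheat_gain = 1_000_000            # X: total profit from a successful manipulation
future_revenue_value = 3_000_000  # Y: discounted value of all future fees

for n_actors in (1, 100):
    per_actor_x = cheat_gain / n_actors
    per_actor_y = future_revenue_value / n_actors
    print(f"{n_actors} actor(s): per-actor X = {per_actor_x:,.0f}, per-actor Y = {per_actor_y:,.0f}")

# Even if X slightly exceeds Y, an attack still needs a majority (e.g. 51 of the
# 100 actors) to misreport in the same round and to agree on splitting the proceeds.
</code></pre>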
<p>Most systems implement the incoordination defence via a token that captures cashflow. The token represents profits until the end of time, which are hopefully greater than the profit obtainable by cheating immediately. Tokens make rights to cashflow easily tradeable. For instance if a single-actor oracle is tired of running the oracle, they can sell the right to be the oracle (and the rights to its cashflow) to a second actor. The same can happen in a decentralised setup.</p>
<h3>Layers of trust</h3>
<p>Layering is a powerful way to increase scalability in the average case. For instance, optimistic rollups are a form of layering. We trust the rollup operators to act honestly and do not require every single validator to check them. However if someone finds they’re malicious they can force a check at the lower layer (the ethereum blockchain). UMA seeks to replicate the same concept with price oracles using <a href="https://docs.umaproject.org/getting-started/oracle">optimistic oracles</a>.</p>
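<p>A schematic sketch of that optimistic pattern - not UMA’s actual contract interface, just the shape of the idea - is a proposal that is accepted by default and only escalates to a slower, more trusted layer when someone disputes it:</p>
<pre><code># Schematic sketch of an "optimistic" oracle: a proposed answer is accepted by
# default; only a dispute escalates it to a slower, more trusted layer.
# This mirrors the pattern described above, not any real oracle's API.

from dataclasses import dataclass

@dataclass
class Proposal:
    value: float   # the proposed answer, e.g. a price
    bond: float    # stake the proposer forfeits if a dispute goes against them

def resolve(proposal, dispute_raised, trusted_layer_value=None):
    """Accept the proposal unless disputed; otherwise defer to the trusted layer."""
    if not dispute_raised:
        return proposal.value        # cheap, common case: nobody objects
    # Dispute path: the slower, more trusted layer (e.g. a token-holder vote)
    # decides, and the losing side forfeits its bond.
    return trusted_layer_value

print(resolve(Proposal(value=1.00, bond=100.0), dispute_raised=False))  # 1.0
print(resolve(Proposal(value=0.00, bond=100.0), dispute_raised=True, trusted_layer_value=1.00))  # 1.0
</code></pre>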
<p>The real world uses many layers to attain scalability. For instance consider a company engaging in business with another company. Both parties sign a contract, and then both parties themselves peacefully operate as per the terms of the contract, without anyone else. This is akin to a state channel where two parties operate peacefully because they know that if they don’t, the other party escalates fraud checking to a more trusted layer. Let’s suppose the CEO of one company tries to defraud the other company. There exist protections for other board members or whistleblowers to prevent them from doing this. Let’s suppose now that the entire board is in on the fraud, so the defrauded company goes to court. The judge again is only a single actor who can be bribed using the profits obtained from the fraud. If proof of this bribe is obtained, a second court case can be opened, possibly at a more trusted court. If the supreme court has been 51% attacked too (bribed using profits from the fraud), the legislature can intervene. If the legislature is fraudulent as well, the citizens can demand a new election.</p>
<p>Contrast this system with one like ethereum under proof of stake, where every single transaction goes through direct democratic vote across the entire population with no trust and no representatives. Such a system gives you very high trustlessness but very low scalability.</p>
<h3>Eternal profit streams, self-oracles and courts</h3>
<p>The above discussion broadly assumes profit and future expectation of profit as the primary incentive for being honest. This incentive fails when there is no longer an expectation of profit. Consider MakerDAO’s in-house oracle system. As long as MakerDAO is profitable, this system works - it is an honest “wild west” bank. However if MakerDAO has competition from, say, MakerDAO 2, then it may lose future expectation of profits. Future expectation of profits is fickle and speculative, so this expectation may crash before all DAI has exited the system. There might then exist a point where expectation of profit (MKR market cap) is less than the total wealth that can be stolen by the oracles (total DAI supply or ETH collateral backing DAI).</p>
<p>The above reasoning applies to any system, business or DAO that tries to act as its own oracle.</p>
<p>The best “wild-west” companies are eternal ones - ones that always expect to make stable revenue in the future. This is simply a court that runs on taxes, and is not bound to a single business. Ethereum or an isolated oracle-enabled blockchain would do exactly this. We can also run courts on the ethereum blockchain, but there needs to be sufficient expectation that users will continue to rely on the same courts in the future, i.e., there needs to be limited competition.</p>
<p>Alternatively there needs to be a way to gracefully transition in and out of oracle systems as they gain and lose expected revenue and trust.</p>
<h3>Oracle bootstrap</h3>
<p>Reputation is a great way to bootstrap an oracle. For instance if it is known that the ethereum foundation is operating an oracle, this oracle will be instantly more trusted than most other oracles. The same goes for a blockchain oracle operated by any real-world entity, be it Coca-Cola or a central bank. These parties already have a lot of revenue and reputation to lose outside of ethereum if they operate dishonest oracles on ethereum. Therefore a multisig of such parties can bootstrap an on-chain oracle or court. At the current level of maturity in ethereum, such multisigs would be highly beneficial for most oracle applications.</p>
<p>On-chain protocols that already generate significant revenue can also act as the bootstrap; for instance, Sushiswap could buy up physical property and we would trust it as a self-oracle escrow.</p>
<p>Reputation is even more subjective than expectation of future revenue for a venture. Naively, reputation is the expectation of future revenue across all possible ventures a person (or more strictly, an identity) may undertake. However due to psychological reasons it can even exceed this.</p>
<p>As the oracle becomes more popular and starts generating more revenue, we can rely less on outside-of-oracle revenue and incentives and rely more on those obtained directly from operating the oracle. That does not mean we need to ever eliminate this cross-margining of reputation obtained from various identities and revenue streams. The ultimate form of cross-margining is simply a supreme court. However if we want to free up identities (enable more privacy), we can transition to a more decentralised and pseudonymous system.</p>
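<p>As a minimal sketch of what such a reputation-bootstrapped oracle could look like - the signer names and quorum below are hypothetical, and a real deployment would live in a smart contract rather than a script - a small set of well-known parties report values and the answer is accepted once a quorum is reached:</p>
<pre><code># Illustrative k-of-n "multisig style" oracle: a fixed set of reputable, publicly
# known parties report a value, and the median is accepted once a quorum is met.
# The signer set and quorum are hypothetical.

from statistics import median

SIGNERS = {"foundation", "exchange_a", "audit_firm"}   # hypothetical identities
QUORUM = 2                                             # reports needed to answer

def aggregate(reports):
    """reports: dict mapping signer name to reported value."""
    valid = {who: value for who, value in reports.items() if who in SIGNERS}
    if len(valid) < QUORUM:
        return None                   # not enough trusted parties have reported
    return median(valid.values())     # median limits the influence of one bad report

print(aggregate({"foundation": 101.0, "exchange_a": 99.0}))   # 100.0
print(aggregate({"foundation": 101.0}))                       # None: below quorum
</code></pre>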
<h3>Trust versus privacy (DIDs)</h3>
<p>In the previous section I took care to distinguish between a person and an identity. A blockchain allows for a person to hold multiple identities. A person may operate three ventures with one identity while simultaneously engaging in other questionable activity using a second identity. Such modularity is also supported in the real world to a limited extent. For instance, what you do in the privacy of your home may be of no consequence to your bank credit score. However which shop you buy your food from may be.</p>
<p>Attaching more activity to a single identity increases trust if all this activity is perceived as good and trustworthy. But it decreases trust if it is not. Relative trust is also decreased if one person chooses to attach more activity to their identity and another person doesn’t. At one extreme we have a complete loss of privacy, where people need to attach more and more activity to their identity in order to be perceived as trustworthy. Roles that require you to obtain more trust, such as a politician, may put additional pressure on you to be transparent at the cost of privacy.</p>
<p>There is clearly a tradeoff between trust and privacy. This full spectrum may be better explorable in blockchain designs due to transparent modular design and lack of information leakage. More information gets leaked in physical interactions; in digital interactions you can often obtain cryptographic guarantees of privacy.</p>
<p><code>Exploring this tradeoff might be the most important reason for the existence of blockchains and decentralised systems of trust.</code></p>
<p>Identity of some sort is needed to solve the sybil problem and prevent a system from being spammed - be it a blockchain, an oracle or a court. The more workload the oracle must handle, the tighter the sybil mechanism needs to be.</p>
<h3>What can you do with secure oracles?</h3>
<p>Literally anything you can do in the real world. In fact it is surprising how little has been explored. Some examples:</p>
<ul><li><p><strong>Fiat onramps.</strong> Oracles can match fiat transactions with crypto transactions, acting as the escrow. This is an example of a service with low profit from manipulation and high revenue - since escrow occurs only for a day or two at maximum, whereas revenue will continuously be generated as more users wish to onramp.</p></li><li><p><strong>Overcollateralisation of any real-world asset.</strong> You can generate synthetics the way UMA or Synthetix does - this enables you to recreate the global stock market on the blockchain. Commodities, bonds and stocks can all be created in synthetic form.</p></li><li><p><strong>Synthetic control. </strong>The above point also extends to non-fungible assets. For instance if an anonymous organisation (DAO) wants to operate a shop in the physical world, they could have a representative who owns the shop in the real world. This representative could have ETH bonded - which is slashed if the oracle determines that the representative did not operate the shop in the way the DAO commanded them to. Here we are not just talking about collateralising assets (the shop) for the purpose of price speculation, but actually enacting control over how it is operated in the real world. This control can be exerted in various forms - via people or in a more automated fashion (say managing an electricity grid or mechanical actuators). Control via people is the most powerful - a DAO could, say, anonymously run the top level of Maersk or McDonalds.</p></li><li><p><strong>Salaries. </strong>Completion of any job in any industry, be it freelance or permanent, can be verified by an oracle, following which payment is released. For instance, if in the previous example the DAO wants to hire a painter for the shop, it will need an oracle to confirm that the painting was done, subject to which payment is released. (A toy sketch of such oracle-gated payment follows this list.)</p></li><li><p><strong>Data availability oracles. </strong>Data availability in an off-chain source can be guaranteed by an oracle system - for a plasma chain or sidechain. This enables blockchain scaling.</p></li></ul>
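<p>As a toy illustration of the salaries example above - purely hypothetical, not a real contract - payment sits in escrow and is only released once the oracle attests that the job was completed:</p>
<pre><code># Toy sketch of oracle-gated payment: funds sit in escrow and are released to the
# worker only if the oracle attests that the job was completed; otherwise they are
# returned to the payer. Purely illustrative.

def settle_escrow(amount, oracle_says_job_done):
    """Return (payment_to_worker, refund_to_payer)."""
    if oracle_says_job_done:
        return amount, 0
    return 0, amount

# The DAO from the example above escrows payment for the painter.
print(settle_escrow(500, oracle_says_job_done=True))    # (500, 0)
print(settle_escrow(500, oracle_says_job_done=False))   # (0, 500)
</code></pre>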
<h3>Scope of an oracle</h3>
<p>The scope of an oracle is important to define for a number of reasons.</p>
<ul><li><p><strong>Subjectivity.</strong> Oracles inherently deal with a world more subjective and murky than a pure blockchain. For instance an oracle operating a fiat onramp has to consider what happens if the bank flags the fiat transaction, reverses it, demands more documents or goes to court (in the real world). It is important to pre-define how the oracle handles various situations, and how much error is tolerable in an oracle that is not malicious.</p></li><li><p><strong>Sybil mechanism. </strong>The less sybil control there is, the more workload an oracle will have. Sybil control here refers to how much activity or whitelisting needs to be attached to an identity for it to be able to query the oracle. At one extreme we have an oracle that anyone can query. Next, we can demand a fee to be paid per query. Beyond that we can demand that users can only query on conflicts they are personally involved in. Then we can demand or give preference to those users who have not lost conflicts in the past. (A toy version of such a tiered policy appears after this list.)</p></li><li><p><strong>Verifying cost. </strong>The more work an oracle does, the more resources are required to store and verify its past work. If a user wishes to use an oracle, they would prefer to be able to verify that the work it has done in the past is honest. This problem is parallel to the verifying cost of a blockchain, and involves similar issues such as data size and synchronisation time.</p></li></ul>
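<p>As a toy version of those tiers - the tier names and rules are illustrative only, and real systems combine them differently - the sybil-control policy can be thought of as a gate on who may query:</p>
<pre><code># Toy sketch of the tiered sybil-control policies described above, ordered from
# most open to most restrictive. Tier names and rules are illustrative only.

def may_query(tier, fee_paid=False, party_to_conflict=False, lost_past_conflicts=False):
    if tier == "open":
        return True                              # anyone can query
    if tier == "fee":
        return fee_paid                          # a fee must be paid per query
    if tier == "involved":
        return fee_paid and party_to_conflict    # only parties to the conflict
    if tier == "track_record":                   # prefer those with clean records
        return fee_paid and party_to_conflict and not lost_past_conflicts
    return False

print(may_query("fee", fee_paid=True))                                    # True
print(may_query("involved", fee_paid=True))                               # False: not a party
print(may_query("track_record", fee_paid=True, party_to_conflict=True))   # True
</code></pre>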