Feat: Add Galactic Network doc #21

Merged · 7 commits · Mar 11, 2024
8 changes: 8 additions & 0 deletions docs/ZKPool-1.0/_category_.json
@@ -0,0 +1,8 @@
{
"label": "ZKPool-1.0",
"position": 3,
"link": {
"type": "generated-index",
"description": "Introduction about the ZKPool-1.0"
}
}
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
38 changes: 38 additions & 0 deletions docs/ZKPool-2.0/Galactic-Network-introduction.md
@@ -0,0 +1,38 @@
---
sidebar_label: 'Galactic Network Introduction'
sidebar_position: 1
slug: /ZKPool-2.0/
---
# Galactic Network Introduction
We refer to ZKPool-2.0 as the [Galactic Network](https://en.wikipedia.org/wiki/Intergalactic_Computer_Network) in tribute to J.C.R. Licklider, who first proposed the Galactic Network concept, an early vision of the open internet.

## Background

- Verifiable computation, specifically zero-knowledge proof (ZKP) technology, serves as a cornerstone of the crypto world. However, designing and operating a reliable, low-cost, decentralized, and economically healthy proving network can be very challenging.
- As Vitalik mentioned[1][2], the zkEVM might have bugs. For better security, a multi-prover framework has been proposed, and more and more projects, such as Taiko and Scroll, are adopting this solution. Such a design makes the network more complex.
- Generating ZKPs requires a significant amount of computational power. However, it is crucial to understand that, unlike PoW projects, these requirements are dynamic, not constant. For instance, the total number of transactions in a rollup may fluctuate, and some ZKP projects operate in an optimistic mode, requiring ZKPs only at specific times. Therefore, the utilization of ZKP accelerators varies across independent ZKP networks, increasing the overall cost. Sharing the proving network is essential for fully utilizing this vital computational power.

## Galactic Network

The Galactic network aims to create a Modular Verifiable Computation Layer (MVCL) that is affordable, decentralized, and easily accessible. This network will significantly reduce development costs for ZKP project developers.

Ethereum also plans to use ZKP to verify Layer 1. Vitalik proposed the "Enshrined ZKEVM" to allow L1 and L2 to share the ZKEVM prover[3]. A modular verifiable computation layer will be aligned with Ethereum's long-term vision.

Additionally, it supports not only the Ethereum ZKP proving network but also other types of verifiable computation, including the Bitcoin ecosystem, web2 verifiable computation scenarios, and so on.

![Galactic Network Ecosystem](./images/modular%20ecosystem.png)
The Galactic network comprises the following components:

- Ethereum L3-based Appchain: A decentralized, permissionless network constructed to support protocols for provers, verifiers, and others to schedule proving/verification tasks and distribute rewards.
- Galactic prover node: Nodes responsible for generating proofs.
- Galactic verification node: Nodes that handle verifications.
- Galactic relayer node: Nodes that relay proving tasks from ZKP projects to the Galactic network.
- Galactic oracle node: Nodes that split and schedule proving and verification tasks, and aggregate proofs from multiple provers.

This network offers several unique features:

- Low cost and high performance
- Support for multiple ZKP provers
- Support for a PoS-based verifier

ZKP accelerators can participate in ZKP-proving tasks, which effectively boosts the utilization rate of their accelerators. Meanwhile, the verifier can aid in the validation of these proofs.
8 changes: 8 additions & 0 deletions docs/ZKPool-2.0/_category_.json
@@ -0,0 +1,8 @@
{
"label": "ZKPool-2.0",
"position": 2,
"link": {
"type": "generated-index",
"description": "Introduction about the ZKPool-2.0"
}
}
Binary file added docs/ZKPool-2.0/images/Galactic Contract.png
Binary file added docs/ZKPool-2.0/images/Oracle nodes.png
Binary file added docs/ZKPool-2.0/images/PoS-verifier-flow.png
Binary file added docs/ZKPool-2.0/images/UMP.png
Binary file added docs/ZKPool-2.0/images/flow chart.png
Binary file added docs/ZKPool-2.0/images/flow.png
Binary file added docs/ZKPool-2.0/images/modular ecosystem.png
Binary file added docs/ZKPool-2.0/images/modules.png
Binary file added docs/ZKPool-2.0/images/proof composition.png
Binary file added docs/ZKPool-2.0/images/ump flow.png
35 changes: 35 additions & 0 deletions docs/ZKPool-2.0/technology/PoS-based-verifiers.md
@@ -0,0 +1,35 @@
---
sidebar_label: 'PoS-Based Verifiers'
sidebar_position: 3
---

# PoS-Based Verifiers

Proofs need to be verified within the network before being sent back or retrieved by the requester for the following reasons:

1. If a proof is invalid, the network can reassign the task to another prover.
2. Rewards are only distributed to provers who produce valid proofs.
3. Verifying proofs enhances the network's trustworthiness.

A decentralized network of verifier nodes will be established to reach consensus on proof settlement. A node offers greater flexibility than a verifier contract because not all projects have a Solidity version of their verification code. Verifying a batched proof can further reduce the cost. The mechanism will resemble Ethereum's proof of stake, but likely in a simpler form. The process is as follows:

![PoS-Based Verifiers](./images/PoS-verifier-flow.png)*PoS-Based Verifiers Flow*

1. ZK projects supply the verifier code, which can be in different languages.
2. Either the ZK project or the community operates one or more verifier nodes.
3. To ensure the verifier's good intentions, a minimal deposit is required as a staking asset.
4. The Galactic contract or Oracle node aggregates proofs and produces a batched proof.
5. The Galactic contract or Oracle node uses a VRF to select a committee of verifier nodes for the batched proof.
6. The verifier nodes carry out the proof verification and submit their results to the Galactic contract or Oracle node within a specified timeframe.
7. If a majority of the committee (for example, 2/3) reaches consensus that the batched proof is valid, all proofs in the batch are considered verified; otherwise, each proof in the batch is verified individually.
8. Honest verifier nodes are rewarded equally, while dishonest ones are penalized.
9. Each verifier's reputation is updated based on its actions.
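The committee settlement in steps 6-8 can be sketched in a few lines. This is a minimal illustration, not the protocol's implementation; `VerifierResult`, `settle_batch`, and the outcome strings are hypothetical names.

```python
from dataclasses import dataclass


@dataclass
class VerifierResult:
    node_id: str
    batch_valid: bool  # the node's verdict on the batched proof


def settle_batch(results: list, committee_size: int,
                 threshold: float = 2 / 3) -> str:
    """Decide the batch outcome from committee votes (steps 6-7)."""
    quorum = committee_size * threshold
    if len(results) < quorum:
        return "timeout"             # too few results within the timeframe
    valid_votes = sum(r.batch_valid for r in results)
    if valid_votes >= quorum:
        return "batch_verified"      # every proof in the batch is accepted
    return "verify_individually"     # no 2/3 consensus: check each proof
```

With a committee of 7, five matching "valid" votes clear the 2/3 bar and settle the whole batch at once; anything less falls back to per-proof verification.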

We choose a verifier network over a DAO that handles challenges in an optimistic manner for the following reasons:

1. ZK verification is quick and cost-effective.
2. Verification consensus can be achieved rapidly because it depends on the L3 block proposing speed, which can be very fast.
3. It takes longer for a DAO to settle a proof if a challenge arises.
4. In both scenarios, verification codes from various ZK projects are required.

Building a verifier network could be approached in two ways: by starting from scratch with a simplified Proof of Stake (PoS) version, or by utilizing an existing platform like EigenLayer. If EigenLayer's staking, slashing, operator, and Actively Validated Services (AVS) capabilities are leveraged, less development is required as the already developed PoS platform can be reused. However, it's necessary to ensure that all verifier AVS nodes have stakes for the network to function as expected. Since EigenLayer runs on Ethereum Layer 1, interactions with smart contracts can be expensive and relatively slow. The cost and transactions per second (TPS) are factors to consider.
8 changes: 8 additions & 0 deletions docs/ZKPool-2.0/technology/_category_.json
@@ -0,0 +1,8 @@
{
"label": "Technology",
"position": 2,
"link": {
"type": "generated-index",
"description": "Technology of ZKPool-2.0"
}
}
119 changes: 119 additions & 0 deletions docs/ZKPool-2.0/technology/graph-based-computation-tasks-scheduling.md
@@ -0,0 +1,119 @@
---
sidebar_label: 'Graph-Based Computation Tasks Scheduling'
sidebar_position: 2
---
# Graph-Based Computation Tasks Scheduling
## ZKP Proving Task Description

In AI frameworks such as TensorFlow, computation is described as a graph. A graph contains many operations, such as conv, pooling, and so on.

In ZKP, we have similar requirements, because:

1. For zkVMs, continuation technology splits large proving tasks into smaller ones.
2. Recursion/composition/aggregation technology is widely used.

![Proof Composition](./images/proof%20composition.png)*Proof Composition (Source: Figment Capital)*

Each ZKP proving task can be defined as an operation in a computation graph, and each device works as a computation node that completes part of the proving work.

Thus, we can use a graph to describe the overall proving job.
![Graph-Based Computation](./images/graph-based%20computation.png)
*Graph-Based Computation*

For each operation, we can define these properties:

- input operations (null, one input, or multiple inputs)
- name (supports customization):
  - "zkp-singleton"
  - "zkp-continuation"
  - "zkp-recursion-A"
  - "zkp-recursion-B"
  - "zkp-aggregation"
  - ...
- device requirements:
  - OS type
  - CPU requirements
  - GPU requirements
  - memory requirements
- device id (null when it's not assigned)
- output operations
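The operation descriptor above could be modeled roughly as follows. This is a sketch under assumptions: the class and field names are illustrative, not part of any published schema.

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class DeviceRequirements:
    os_type: str = "linux"
    cpu: str = ""            # e.g. minimum core count or architecture
    gpu: str = ""            # e.g. required CUDA compute capability
    memory_gb: int = 0


@dataclass
class Operation:
    name: str                # e.g. "zkp-continuation", "zkp-aggregation"
    inputs: list = field(default_factory=list)       # empty = graph source
    requirements: DeviceRequirements = field(default_factory=DeviceRequirements)
    device_id: Optional[str] = None                  # None until scheduled
    outputs: list = field(default_factory=list)
```

A continuation proof feeding an aggregation step is then just two linked `Operation` nodes, with `device_id` left as `None` until the scheduler assigns a device.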

## Scheduler

We will use the Oracle node in the role of scheduler.

The Oracle node receives the status of all connected provers and records each prover's liveness and busy/idle status. Once a new task is published, the scheduler starts to work.

Here, we define a task-node matching mechanism.

First, after analyzing a computation graph, the scheduler produces:

- A descriptor of the tasks (related to the optimized binary)
  - Graph
  - Task mode:
    - Performance priority: redundancy of provers
    - Cost priority: only one prover for each operation
- A descriptor of each device
  - OS type
  - CPU type
  - GPU type
  - Memory

Then we find a candidate device list for each operation.

If there is more than one candidate for an operation, we choose a device according to its reputation and a random mechanism:

1. Generate a random number R in [1, 100].
2. Let the reputation scores of the candidate devices be [s1, s2, …, si, …, sn].
3. Normalize each reputation as si' = si × 100 / sum(s1…sn), giving the vector [s1', s2', …, si', …, sn'].
4. Walk through the candidates, comparing R with the cumulative sum s1' + … + si'; the first device i whose cumulative sum reaches R is chosen.

When more than one device is needed, we exclude the already assigned devices and repeat the method above to choose the others.
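A minimal sketch of this reputation-weighted selection, assuming a continuous random draw in [0, 100] in place of the integer in [1, 100]; the function names are illustrative only.

```python
import random


def pick_device(candidates: dict, rng: random.Random) -> str:
    """Reputation-weighted choice following steps 1-4 above."""
    total = sum(candidates.values())
    r = rng.uniform(0, 100)                  # step 1
    cumulative = 0.0
    for device_id, score in candidates.items():
        cumulative += score * 100 / total    # steps 2-3: normalized scores
        if cumulative >= r:                  # step 4
            return device_id
    return device_id  # fall through only on floating-point edge cases


def pick_devices(candidates: dict, k: int, rng: random.Random = None) -> list:
    """Pick k distinct devices, excluding each device once it is assigned."""
    rng = rng or random.Random()
    pool, chosen = dict(candidates), []
    for _ in range(min(k, len(pool))):
        device_id = pick_device(pool, rng)
        chosen.append(device_id)
        del pool[device_id]                  # exclude assigned devices
    return chosen
```

Devices with higher reputation scores occupy a larger slice of the [0, 100] range and are therefore chosen proportionally more often, while every candidate retains a nonzero chance.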

Finally, we will fill in the device id for each operation of the computation graph.

## Galactic Universal Modular Prover

UMP (Universal Modular Prover) means that each ZKP accelerator can support different kinds of ZKP proving tasks.

![UMP](./images/UMP.png)
*Universal Modular Prover*

The Oracle node features a plug-in service. This allows provers to connect and determine the types of tasks each prover can manage. The corresponding proving binary is then downloaded as a Docker image, enabling the node to handle such tasks.

In this manner, a single accelerator can support multiple ZKP proving binaries.

![UMP Flow](./images/ump%20flow.png)
*Universal Modular Prover Flow*

The local scheduler/Galactic SDK connects to the proving binary plugin via an RPC call.

The protocol includes:

1. init
2. start computation
3. stop

Together, these calls form a mechanism to trigger different kinds of computation.
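The three-call protocol could look like the interface below. This is a hypothetical sketch: the real plugin is an RPC service, and the class and method names here are assumptions, not a published API.

```python
import abc


class ProvingPlugin(abc.ABC):
    """Interface the local scheduler/Galactic SDK would invoke via RPC."""

    @abc.abstractmethod
    def init(self, config: dict) -> None:
        """Load the proving binary's parameters (e.g. proving keys)."""

    @abc.abstractmethod
    def start_computation(self, task: dict) -> bytes:
        """Run the proving task and return the serialized proof."""

    @abc.abstractmethod
    def stop(self) -> None:
        """Shut down the proving service."""
```

Each proving binary Docker image would expose this same three-call surface, which is what lets a single accelerator switch among different ZKP proving binaries.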

The computation node has two modes:

- High-efficiency mode (default): the computation service is restarted for each task, so it can easily switch among different tasks.
- High-performance mode: the computation service stays in memory, so a new task does not require restarting the service. It is used for high-throughput tasks.

Each kind of requester project can define its ideal mode.

## Power of Computation

The platform’s incentives are measured by the Power of Computation. Without accurate computation power measurement, we can’t effectively incentivize the devices.

For GPUs, we will benchmark the real number of operations to measure each device's contribution, similar to what Nsight Compute tools report.

For any GPU, the most important acceleration engines are the CUDA cores and Tensor cores. ZKP workloads usually use CUDA cores for acceleration, while AI workloads use Tensor cores.

A benchmark will be used to measure the amount of computation required by a given task.

We will define different Galactic-gas rates for different kinds of computation, such as:

- tensor core: xx gas/TOPs
- cuda core: xx gas/TOPs
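As a toy illustration of this metering, with placeholder rates (the real gas/TOPs values, shown as "xx" above, are not yet specified):

```python
# Hypothetical gas rates; the actual "xx gas/TOPs" values are undefined.
GAS_PER_TOPS = {
    "cuda_core": 10,    # assumed rate for CUDA-core work (ZKP proving)
    "tensor_core": 4,   # assumed rate for Tensor-core work (AI)
}


def galactic_gas(measured_tops: dict) -> float:
    """Convert benchmarked TOPs per engine into a Galactic-gas total."""
    return sum(GAS_PER_TOPS[engine] * tops
               for engine, tops in measured_tops.items())
```

A task benchmarked at 2 TOPs of CUDA-core work and 5 TOPs of Tensor-core work would then be credited 10 × 2 + 4 × 5 = 40 gas under these assumed rates.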
Binary file added docs/ZKPool-2.0/technology/images/L3.png
Binary file added docs/ZKPool-2.0/technology/images/UMP.png
Binary file added docs/ZKPool-2.0/technology/images/flow chart.png
Binary file added docs/ZKPool-2.0/technology/images/flow.png
Binary file added docs/ZKPool-2.0/technology/images/modules.png
Binary file added docs/ZKPool-2.0/technology/images/ump flow.png
89 changes: 89 additions & 0 deletions docs/ZKPool-2.0/technology/moludar-Galactic-Network-overview.md
@@ -0,0 +1,89 @@
---
sidebar_label: 'Modular Galactic Network Overview'
sidebar_position: 1
---
# Modular Galactic Network Overview
## Ethereum L3-Based Appchain

We expect the network to have high-performance requirements due to the frequent chain interactions and token transactions associated with the lifecycle of proof tasks. As such, cost is a significant consideration.

To reduce the overall cost, we will deploy our protocol layer on an Ethereum L3 appchain (an L2 built on top of another L2 rollup), which offers the following advantages:

1. Compatibility with the Ethereum ecosystem
2. Low transaction fees, essential for the numerous interactions with smart contracts
3. Customizable block proposing speed (for instance, every second) and block finalization frequency (for instance, every minute) to further reduce cost
4. Its own ecosystem including bridge, DEX, etc. which could potentially expand in the future
5. Decentralization

We will start with the Taiko chain, and consider other chains as a backup.

Taiko allows for setting the ratio between optimistic proofs (no ZK computation) and ZK proofs (requires ZK computation) of blocks, significantly reducing the cost. We will also batch blocks to decrease the transaction fee of settling block data into L2. Furthermore, we will consider using a Data Availability Layer to lessen the cost of writing data to L2.

However, we will not compromise on security. The protocol's main contract and token contract will be deployed on Taiko L2, which are used to settle the protocol's vault and reward pool. Using Taiko's built-in cross-chain messaging infrastructure, we can seamlessly settle funds from Galactic network’s protocol layer to L2. Storing funds on L2 is safer as it is more decentralized compared to our protocol layer, which is more application-specific.

On our protocol layer, we will deploy frequently operated contract logic, such as the circulation of proof fees, streaming payment, and the lifecycle of proving tasks. This will help maintain a low overall cost, allowing network users to focus on their tasks, not the cost of network transactions.

![Ethereum L3-Based Appchain](./images/L3.png)
*Ethereum L3-Based Appchain*

L2 is an extension of Ethereum's performance, while L3 is an extension of L2's performance. We estimate that an app-specific L3 can have a gas limit of 0.5-1B per block, equivalent to each block containing 1000 ERC20 transfers, and that it can achieve sub-second block times. In summary, it can reach 1000-5000 TPS or higher.

The main transaction cost of L3 is the block data storage written to L2. On average, each L3 transaction consumes 2000-3000 L2 gas. At 1000 TPS, this consumes about 0.002 ETH per second at an L2 gas price of 1 gwei. So if L2 is used as the data availability layer, the cost will be high. Another choice is to use a dedicated data availability layer, such as Celestia, EigenDA, or Avail. According to the [calculations](https://medium.com/@numia.data/the-impact-of-celestias-modular-da-layer-on-ethereum-l2s-a-first-look-8321bd41ff25) here, the cost can be reduced by roughly 300-500 times, a huge improvement in cost reduction.
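The cost figure above can be reproduced with a few lines of arithmetic (the function name is illustrative):

```python
def l2_settlement_cost_eth(tps: float, l2_gas_per_tx: float,
                           l2_gas_price_gwei: float) -> float:
    """ETH spent per second settling L3 block data to L2.

    1 gwei = 1e-9 ETH, so cost/s = tps * gas/tx * price_gwei * 1e-9.
    """
    return tps * l2_gas_per_tx * l2_gas_price_gwei * 1e-9


# The paragraph's estimate: 1000 TPS at ~2000 L2 gas per tx and 1 gwei
cost_per_second = l2_settlement_cost_eth(1000, 2000, 1.0)  # 0.002 ETH/s
```

At the upper bound of 3000 gas per transaction the same formula gives 0.003 ETH per second, which shows why a dedicated data availability layer with a 300-500x cost reduction is attractive.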

## Galactic Network Modules

The Galactic contract will act as the central hub for essential network records, including projects, provers, tasks, rewards, bonds, and staking, among others. This will be the foundation for the entire decentralized network. The Oracle node, a component of the network, will handle complex task scheduling, reward distribution, and proof aggregation. Additionally, it will provide utility tools like a data explorer and a front-end. This node could be further decentralized in the future. Prover, relayer, and verifier nodes can all function in a decentralized manner externally, using the Galactic contract as their source of truth.

![modules](./images/modules.png)
*Galactic Network Modules*

### What does the Galactic Contract contain?

![Galactic Contract](./images/Galactic%20Contract.png)
*Galactic Contracts*

1. Task events for actions: submission, proof, and verification.
2. Users, projects, provers, verifiers and tasks.
3. Provers' rewards: these are updated periodically, for instance, daily.
4. The default reward calculation is conducted on-chain.
5. Provers' bonds.
6. Users' staking.

### What does the Oracle Node contain?

![Galactic Oracle nodes](./images/Oracle%20nodes.png)
*Galactic Oracle nodes*

1. Task scheduling: This involves complex logic. The final scheduling results are written to the Galactic contract, while intermediate task statuses are stored in a local DB. Since task data are huge and could potentially overload the contract, only the most necessary data are written to the chain.
   1. Prover statuses are collected to facilitate the task scheduling process.
2. Task split and aggregation: A task might be divided into multiple smaller subtasks and proved in parallel to increase efficiency.
3. Proof aggregation: Generate a batched proof for a group of proofs.
4. Complex reward calculation can be done off-chain if the default version in the Galactic contract cannot handle it or involves too many steps.
5. Utility tools such as the front-end and data explorer.

### What does the Relayer Node do?

A relayer node is an external node that integrates with a specific ZK project by retrieving that project's active tasks from its chain.

### What’s the task flow?

![Galactic Network Flow](./images/flow.png)
*Galactic Network Flow*

1. The task request is directly submitted to the Galactic contract (Active mode).
2. Alternatively, the relayer retrieves the task from another chain (Passive mode). Then, the relayer submits the task with the bond to the Galactic contract.
3. The oracle node schedules the task based on the prover's attributes and availability.
4. The prover node syncs the task from the Galactic contract.
5. The prover node generates the proof and sends it back to the Galactic contract.
6. The Oracle node aggregates proofs and submits a batched proof to the Galactic contract.
7. The Galactic contract publishes the verifier task.
8. Once the Galactic contract evaluates that the majority of verification results pass, it marks all the proofs in the batch as verified.
9. The Galactic contract or the oracle node calculates the reward.
10. The Galactic contract returns the bond and shares the reward with the prover.
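The task lifecycle implied by the ten steps above could be modeled as a small state machine. The state names below are hypothetical, chosen only to illustrate the flow; they are not defined by the Galactic contract.

```python
from enum import Enum, auto


class TaskState(Enum):
    """Illustrative lifecycle states for a proving task."""
    SUBMITTED = auto()        # steps 1-2: submitted directly or via relayer
    SCHEDULED = auto()        # step 3: oracle node assigns a prover
    PROVING = auto()          # step 4: prover synced the task
    PROOF_SUBMITTED = auto()  # step 5: proof on-chain, awaiting batching
    VERIFYING = auto()        # steps 6-7: batched proof sent to verifiers
    VERIFIED = auto()         # step 8: majority of verification results pass
    SETTLED = auto()          # steps 9-10: reward shared, bond returned
```

Each on-chain interaction in the sequence diagram advances a task through these states in order, which is also why the number of contract interactions per task makes a low-fee L3 attractive.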

Here is a detailed sequence diagram:

![Galactic Network Flow Sequence](./images/flow%20chart.png)
*Galactic Network Flow Sequence*

In the architecture above, there will be many interactions between the various parties and the Galactic contract. This drives our choice of an Ethereum L3, which further reduces the transaction cost.
5 changes: 2 additions & 3 deletions docs/overview.md
@@ -55,6 +55,5 @@ The design principles of ZKPool include:

The ZKPool will have the following milestones:

- Connect one ZKP project (Done)
- Connect multiple ZKP projects with the UMP (Universal Modular Prover) (Ongoing)
- Fully decentralized ZKP computing pool via Super-UMP (TBD)
- ZKPool-1.0: Connect one ZKP project (Done)
- ZKPool-2.0: Fully decentralized ZKP computing pool via UMP (Universal Modular Prover) (Ongoing)