Precompiled contracts for elliptic curve operations are required in order to perform zkSNARK verification.
The operations that you need to be able to perform are elliptic curve point addition, elliptic curve point scalar multiplication, and elliptic curve pairing.
This document explains the precompiles responsible for elliptic curve point addition and scalar multiplication, and the design decisions behind them. You can read the specification here.
On top of having a set of opcodes to choose from, the EVM also offers a set of more advanced functionalities through precompiled contracts. These are a special kind of contract that is bundled with the EVM at fixed addresses and can be called at a predetermined gas cost. The addresses start from 1 and increment for each contract. New hard forks may introduce new precompiled contracts. They are called like regular contracts, with instructions such as CALL. The gas cost mentioned here is purely the cost of the contract and does not consider the cost of the call itself nor the instructions needed to place the parameters in memory.
For Go-Ethereum, the code being run is written in Go, and the gas costs are defined in each precompile spec.
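As an illustration (a sketch rather than code from the precompiles themselves), a Yul caller invokes the ecAdd precompile at its fixed address 0x06 like any other external call, writing the two affine points as four 32-byte words and reading back the 64-byte result; memory management is elided for brevity:

```
// Sketch: adding two BN254 points P = (x1, y1) and Q = (x2, y2) via the
// ecAdd precompile at address 0x06.
let input := mload(0x40)                  // scratch space at the free memory pointer
mstore(input, x1)
mstore(add(input, 0x20), y1)
mstore(add(input, 0x40), x2)
mstore(add(input, 0x60), y2)
let output := add(input, 0x80)
// 0x80 bytes of input, 0x40 bytes of output (the sum in affine form)
let success := staticcall(gas(), 0x06, input, 0x80, output, 0x40)
if iszero(success) { revert(0, 0) }
let x3 := mload(output)
let y3 := mload(add(output, 0x20))
```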
In the case of zkSync Era, the ecAdd and ecMul precompiles are written as smart contracts for two reasons:
- The zkEVM needs to be able to prove their execution (and at the moment it cannot do so for code executed outside the VM).
- Writing custom circuits for elliptic curve operations is hard and time-consuming, and such code would be harder to maintain and audit.
The BN254 curve (also known as alt-BN128) is an elliptic curve defined by the equation $y^{2} = x^{3} + 3$. Its prime modulus fits in 256 bits, so field elements are represented as `uint256` values.
The arithmetic is carried out with the field elements encoded in Montgomery form. This is done not only because operating in Montgomery form speeds up the computation, but also because the native modular multiplication, which is carried out by Yul's `mulmod` opcode, is very inefficient.
The instruction sets of zkSync and the EVM are different, so the same Yul/Solidity code can be efficient on the EVM but not on the zkEVM, and vice versa.

One particularly inefficient operation is `mulmod`. On the EVM there is a native opcode that performs modular multiplication and costs only 8 gas, which is only 2-3 times more expensive than other opcodes. On the zkEVM there is no native `mulmod` opcode; instead, the compiler performs a full-width multiplication (i.e. it multiplies two `uint256`s and gets a `uint512` as a result) and then a long division for the reduction (of which only the remainder is kept). In its generic form this is an expensive operation that costs many opcode executions and cannot be compared to the cost of a single opcode execution. Worse still, `mulmod` is used heavily in the modular inversion, so optimizing this single operation gives a huge benefit to the precompiles.
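Concretely, for already-reduced operands the product no longer fits in one machine word, and the reduction amounts to a 512-by-256-bit division of which only the remainder is kept:

$$ a, b < 2^{256} \implies a \cdot b < 2^{512}, \qquad ab \bmod N = ab - \left\lfloor \frac{ab}{N} \right\rfloor N. $$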
As said before, multiplication was carried out by implementing the Montgomery reduction, which works with general moduli and provides a significant speedup compared to the naïve approach.
The squaring operation is obtained by multiplying a number by itself. However, it can be sped up further by implementing the SOS Montgomery squaring.
Inversion was performed using the extended binary Euclidean algorithm (also known as the extended binary greatest common divisor). This algorithm is a modification of Algorithm 3 (`MontInvbEEA`) from Montgomery inversion.
The exponentiation was carried out using the square and multiply algorithm, which is a standard technique for this operation.
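As a sketch of what this looks like on top of the Montgomery helpers shown later (not the precompile's actual code; `montgomeryOne()`, returning $R \bmod N$, is a hypothetical helper assumed here):

```
/// Hypothetical sketch: right-to-left square-and-multiply exponentiation.
/// `base` is in Montgomery form, `exponent` is a plain integer.
function montgomeryExp(base, exponent) -> pow {
    pow := montgomeryOne()                      // Montgomery form of 1, i.e. R mod N
    for { } gt(exponent, 0) { exponent := shr(1, exponent) } {
        // multiply into the accumulator whenever the current exponent bit is set
        if and(exponent, 1) {
            pow := montgomeryMul(pow, base)
        }
        base := montgomeryMul(base, base)       // square for the next bit
    }
}
```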
Let’s take a number `R` such that `gcd(N, R) == 1` and such that it is efficient to divide and take the modulus by `R` (for example a power of two, ideally the machine word, i.e. 2^256). We then transform every number into the form `x * R mod N` / `y * R mod N`, and in this form modular addition and multiplication become efficient. The only catch is that before working with the numbers we need to transform them from `x mod N` into `x * R mod N`, and after performing the operations transform the result back.
In what follows, we will assume that `N` is the modulus used in the computations and that `R` is 2^256, since we can efficiently divide and take the modulus by this number, and it satisfies `gcd(N, R) == 1`.
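A toy example with small numbers (rather than the actual BN254 parameters) may help illustrate the transformation. Take $N = 7$ and $R = 16$, so that $\gcd(N, R) = 1$:

$$ \begin{gather*} x = 3 \rightarrow \bar{x} = 3 \cdot 16 \bmod 7 = 6, \qquad y = 5 \rightarrow \bar{y} = 5 \cdot 16 \bmod 7 = 3, \\ (\bar{x} + \bar{y}) \bmod 7 = 9 \bmod 7 = 2 = (3 + 5) \cdot 16 \bmod 7. \end{gather*} $$

Adding the transformed values directly yields the transformed sum; multiplication additionally requires the reduction described next.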
Reference: https://en.wikipedia.org/wiki/Montgomery_modular_multiplication#The_REDC_algorithm
/// @notice Implementation of the Montgomery reduction algorithm (a.k.a. REDC).
/// @dev See <https://en.wikipedia.org/wiki/Montgomery_modular_multiplication#The_REDC_algorithm>
/// @param lowestHalfOfT The lowest half of the value T.
/// @param higherHalfOfT The higher half of the value T.
/// @return S The result of the Montgomery reduction.
function REDC(lowestHalfOfT, higherHalfOfT) -> S {
let q := mul(lowestHalfOfT, N_PRIME())
let aHi := add(higherHalfOfT, getHighestHalfOfMultiplication(q, P()))
let aLo, overflowed := overflowingAdd(lowestHalfOfT, mul(q, P()))
if overflowed {
aHi := add(aHi, 1)
}
S := aHi
if iszero(lt(aHi, P())) {
S := sub(aHi, P())
}
}
By choosing $R = 2^{256}$, reduction modulo $R$ and division by $R$ become trivial: they amount to taking the lower and the upper 256-bit halves of a value, which is why `REDC` receives the two halves of `T` as separate arguments.
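To sanity-check the algorithm, reuse the toy parameters $N = 7$, $R = 16$ from above, for which $N' = -N^{-1} \bmod R = 9$. Multiplying the transformed values $\bar{x} = 6$ and $\bar{y} = 3$ gives $T = 18$, and

$$ \begin{gather*} q = (T \bmod R) \cdot N' \bmod R = 2 \cdot 9 \bmod 16 = 2, \\ S = \frac{T + qN}{R} = \frac{18 + 14}{16} = 2 < N, \end{gather*} $$

which is indeed the Montgomery form of $x \cdot y = 15 \equiv 1 \pmod 7$, since $1 \cdot 16 \bmod 7 = 2$.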
Addition and subtraction in Montgomery form are the same as ordinary modular addition and subtraction because of the distributive law:

$$ \begin{gather*} aR + bR = (a + b)R \\ aR - bR = (a - b)R \end{gather*} $$
/// @notice Computes the Montgomery addition.
/// @dev See <https://en.wikipedia.org/wiki/Montgomery_modular_multiplication#The_REDC_algorithm> for further details on the Montgomery multiplication.
/// @param augend The augend in Montgomery form.
/// @param addend The addend in Montgomery form.
/// @return ret The result of the Montgomery addition.
function montgomeryAdd(augend, addend) -> ret {
ret := add(augend, addend)
if iszero(lt(ret, P())) {
ret := sub(ret, P())
}
}
/// @notice Computes the Montgomery subtraction.
/// @dev See <https://en.wikipedia.org/wiki/Montgomery_modular_multiplication#The_REDC_algorithm> for further details on the Montgomery multiplication.
/// @param minuend The minuend in Montgomery form.
/// @param subtrahend The subtrahend in Montgomery form.
/// @return ret The result of the Montgomery subtraction.
function montgomerySub(minuend, subtrahend) -> ret {
ret := montgomeryAdd(minuend, sub(P(), subtrahend))
}
We do not use `addmod`: since both operands are already reduced modulo $P$ and $P$ has fewer than 255 bits, their sum cannot overflow 256 bits, so a plain `add` followed by a single conditional subtraction is sufficient and cheaper than `addmod` on the zkEVM.
The product of $aR \bmod N$ and $bR \bmod N$ is $abR^{2} \bmod N$; applying the Montgomery reduction to it, $REDC\big((aR \bmod N)(bR \bmod N)\big) = abR \bmod N$, which is exactly the Montgomery form of the product $ab$.
/// @notice Computes the Montgomery multiplication using the Montgomery reduction algorithm (REDC).
/// @dev See <https://en.wikipedia.org/wiki/Montgomery_modular_multiplication#The_REDC_algorithm> for further details on the Montgomery multiplication.
/// @param multiplicand The multiplicand in Montgomery form.
/// @param multiplier The multiplier in Montgomery form.
/// @return ret The result of the Montgomery multiplication.
function montgomeryMul(multiplicand, multiplier) -> ret {
let hi := getHighestHalfOfMultiplication(multiplicand, multiplier)
let lo := mul(multiplicand, multiplier)
ret := REDC(lo, hi)
}
/// @notice Computes the Montgomery modular inverse skipping the Montgomery reduction step.
/// @dev The Montgomery reduction step is skipped because a modification in the binary extended Euclidean algorithm is used to compute the modular inverse.
/// @dev See the function `binaryExtendedEuclideanAlgorithm` for further details.
/// @param a The field element in Montgomery form to compute the modular inverse of.
/// @return invmod The result of the Montgomery modular inverse (in Montgomery form).
function montgomeryModularInverse(a) -> invmod {
invmod := binaryExtendedEuclideanAlgorithm(a)
}
As said before, we use a modified version of the bEE algorithm that lets us “skip” the Montgomery reduction step.
The regular algorithm would be the classic binary extended Euclidean algorithm, which computes the plain modular inverse of its input.
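The reason a plain modular inverse is not enough comes down to the forms involved: the input is in Montgomery form, and the output should be too,

$$ (aR)^{-1} \equiv a^{-1} R^{-1} \pmod N, \qquad \text{whereas the desired result is} \qquad a^{-1} R \equiv (aR)^{-1} \cdot R^{2} \pmod N, $$

so the algorithm has to absorb the extra factor of $R^{2}$ along the way, which is what makes it possible to skip the Montgomery reduction step mentioned above.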
Precompile for computing elliptic curve point addition. The points are represented in affine form, given by a pair of coordinates $(x,\ y)$.

Affine coordinates are the conventional way of expressing elliptic curve points, using 2 coordinates. The math is concise and easy to follow.
For a pair of constants $a$ and $b$, an elliptic curve is defined as the set of points $(x,\ y)$ that satisfy the equation $y^{2} = x^{3} + ax + b$, together with a distinguished point at infinity $O$ that acts as the identity of the group (for BN254, $a = 0$ and $b = 3$).
To compute $2P$ (point doubling):

- If $P = O$, then $2P = O$.
- Else $P = (x_{p},\ y_{p})$:
  - If $y_{p} = 0$, then $2P = O$.
  - Else $y_{p} \neq 0$, then $$ \begin{gather*} \lambda = \frac{3x_{p}^{2} + a}{2y_{p}} \\ x_{r} = \lambda^{2} - 2x_{p} \\ y_{r} = \lambda(x_{p} - x_{r}) - y_{p}\end{gather*} $$ and $2P = R = (x_{r},\ y_{r})$.
The complicated case involves approximately 6 multiplications, 4 additions/subtractions, and 1 division. Alternatively, it can be done with 4 multiplications, 6 additions/subtractions, and 1 division, and a multiplication can be traded for 2 more additions.
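A hedged sketch of this case on top of the Montgomery helpers from this document (not the precompile's actual code), specialised to BN254 where the curve coefficient $a$ is 0, and assuming $P \neq O$ and $y \neq 0$ with both coordinates in Montgomery form:

```
/// Hypothetical sketch of affine point doubling in Montgomery form.
function affineDouble(x, y) -> xr, yr {
    let xx := montgomeryMul(x, x)
    // lambda = 3*x^2 / (2*y)   (a = 0 for BN254)
    let numerator := montgomeryAdd(xx, montgomeryAdd(xx, xx))
    let denominator := montgomeryAdd(y, y)
    let lambda := montgomeryMul(numerator, montgomeryModularInverse(denominator))
    // xr = lambda^2 - 2*x
    xr := montgomerySub(montgomeryMul(lambda, lambda), montgomeryAdd(x, x))
    // yr = lambda * (x - xr) - y
    yr := montgomerySub(montgomeryMul(lambda, montgomerySub(x, xr)), y)
}
```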
To compute $P + Q$ (point addition):

- If $P = O$ and $Q \neq O$, then $P + Q = Q$.
- If $Q = O$ and $P \neq O$, then $P + Q = P$.
- Else $P = (x_{p},\ y_{p})$ and $Q = (x_{q},\ y_{q})$:
  - If $x_{p} = x_{q}$ (and necessarily $y_{p} \neq y_{q}$), then $P + Q = O$.
  - Else $x_{p} \neq x_{q}$, then $$ \begin{gather*} \lambda = \frac{y_{q} - y_{p}}{x_{q} - x_{p}} \\ x_{r} = \lambda^{2} - x_{p} - x_{q} \\ y_{r} = \lambda(x_{p} - x_{r}) - y_{p}\end{gather*} $$ and $P + Q = R = (x_{r},\ y_{r})$.
The complicated case involves approximately 2 multiplications, 6 additions/subtractions, and 1 division.
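A hedged sketch of this case using the same Montgomery helpers (again not the exact precompile code), assuming the generic situation $x_{p} \neq x_{q}$ with all coordinates in Montgomery form:

```
/// Hypothetical sketch of affine point addition in Montgomery form.
function affineAdd(x1, y1, x2, y2) -> xr, yr {
    // lambda = (y2 - y1) / (x2 - x1)
    let numerator := montgomerySub(y2, y1)
    let denominator := montgomerySub(x2, x1)
    let lambda := montgomeryMul(numerator, montgomeryModularInverse(denominator))
    // xr = lambda^2 - x1 - x2
    xr := montgomerySub(montgomerySub(montgomeryMul(lambda, lambda), x1), x2)
    // yr = lambda * (x1 - xr) - y1
    yr := montgomerySub(montgomeryMul(lambda, montgomerySub(x1, xr)), y1)
}
```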
Precompile for computing elliptic curve point scalar multiplication. The points are represented in homogeneous projective coordinates, given by a triple of coordinates $(x,\ y,\ z)$.
The key idea of projective coordinates is that instead of performing every division immediately, we defer the divisions by multiplying them into a denominator. The denominator is represented by a new coordinate. Only at the very end do we perform a single division to convert from projective coordinates back to affine coordinates.

In affine form, each elliptic curve point has 2 coordinates, like $(x,\ y)$. In homogeneous projective form, each point is represented by 3 coordinates $(x,\ y,\ z)$, with $z \neq 0$ for finite points; the corresponding affine point is $(x / z,\ y / z)$, and any nonzero scalar multiple $(\lambda x,\ \lambda y,\ \lambda z)$ represents the same point.
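Substituting $x \mapsto x / z$ and $y \mapsto y / z$ into the affine curve equation and clearing denominators gives the projective form of the curve, which the formulas below implicitly work with:

$$ y^{2} z = x^{3} + a x z^{2} + b z^{3}. $$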
The affine special cases carry over: the point at infinity $O$ is represented by any point with $z = 0$, and the case $y_{p} = 0$ again gives $2P = O$. For the interesting case where $P \neq O$ and $y_{p} \neq 0$, expanding and simplifying the affine doubling equations (demonstration here) produces a set of division-free substitutions; using them, we can write the doubled point directly in projective coordinates.
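One standard division-free set of substitutions consistent with the operation count below (the intermediate names $t$, $u$, $v$, $w$ are chosen here for illustration and may differ from the linked demonstration) is

$$ \begin{gather*} t = 3x_{p}^{2} + a z_{p}^{2} \\ u = 2 y_{p} z_{p} \\ v = 2 u x_{p} y_{p} \\ w = t^{2} - 2v \end{gather*} $$

with which the doubled point $2P = (x_{r},\ y_{r},\ z_{r})$ reads

$$ \begin{gather*} x_{r} = u w \\ y_{r} = t (v - w) - 2 (u y_{p})^{2} \\ z_{r} = u^{3}. \end{gather*} $$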
As we can see, the complicated case involves approximately 18 multiplications, 4 additions/subtractions, and 0 divisions.
The affine special cases again carry over, with the point at infinity represented by $z = 0$. For the interesting case where $P \neq O$, $Q \neq O$, and their $x$ coordinates differ ($x_{p} z_{q} \neq x_{q} z_{p}$), expanding and simplifying the affine addition equations (demonstration here) produces a set of division-free substitutions; using them, we can write the sum directly in projective coordinates.
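Again, one standard division-free set of substitutions consistent with the operation count below (the intermediate names $u$, $v$, $A$ are chosen here for illustration and may differ from the linked demonstration) is

$$ \begin{gather*} u = y_{q} z_{p} - y_{p} z_{q} \\ v = x_{q} z_{p} - x_{p} z_{q} \\ A = u^{2} z_{p} z_{q} - v^{3} - 2 v^{2} x_{p} z_{q} \end{gather*} $$

with which the sum $P + Q = (x_{r},\ y_{r},\ z_{r})$ reads

$$ \begin{gather*} x_{r} = v A \\ y_{r} = u (v^{2} x_{p} z_{q} - A) - v^{3} y_{p} z_{q} \\ z_{r} = v^{3} z_{p} z_{q}. \end{gather*} $$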
As we can see, the complicated case involves approximately 15 multiplications, 6 additions/subtractions, and 0 divisions.