The question is whether we want to keep the current model, where everything is basically computed the way the user writes it (e.g., `g.pow(x).pow(y)` is computed with two exponentiations, `e.apply(g,h).pow(x)` exponentiates in GT), or whether we want to auto-optimize these (e.g., in `g.pow(x).pow(y)` the second `.pow()` call would return `g.pow(x*y)` instead, and `e.apply(g,h).pow(x)` would become `e.apply(g.pow(x),h)`).
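To make the auto-optimizing alternative concrete, here is a minimal sketch, assuming a lazy, expression-based element (class and method names like `LazyPowElement` are made up for illustration, not the library's API): chained `pow()` calls merely accumulate the exponent, so `g.pow(x).pow(y)` ends up as a single exponentiation with exponent `x*y`. The pairing case would work analogously by moving the exponent onto one pairing argument before the comparatively expensive GT exponentiation.

```java
import java.math.BigInteger;

// Hypothetical sketch, not the library's actual classes: an element that records
// its defining expression so chained operations can be rewritten before evaluation.
final class LazyPowElement {
    final String base;          // symbolic stand-in for a concrete group element
    final BigInteger exponent;  // accumulated exponent

    LazyPowElement(String base, BigInteger exponent) {
        this.base = base;
        this.exponent = exponent;
    }

    // pow(x).pow(y) collapses into one exponentiation with exponent x*y
    LazyPowElement pow(BigInteger x) {
        return new LazyPowElement(base, exponent.multiply(x));
    }

    @Override
    public String toString() {
        return base + "^" + exponent;
    }

    public static void main(String[] args) {
        LazyPowElement g = new LazyPowElement("g", BigInteger.ONE);
        // represented as g^35: one exponentiation when eventually evaluated
        System.out.println(g.pow(BigInteger.valueOf(5)).pow(BigInteger.valueOf(7)));
    }
}
```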
The question may be how far this should go (though I'd ideally like an all-or-nothing approach). What about `(g1.pow(x1).op(g2.pow(x2)).op(g3.pow(x3))...).pow(y)`? Should this automatically become `(g1.pow(x1.mul(y)).op(g2.pow(x2.mul(y))).op(g3.pow(x3.mul(y)))...)`? Should we decide this dynamically depending on whether `x1`, `x2`, ... are small numbers or not?
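One possible dynamic heuristic for that last question, purely as an illustration (the cutoff and all names are my assumptions): distribute the outer exponent only when the inner exponents are already full-size, since multiplying `y` into small exponents would turn cheap exponentiations into full-size ones, while folding `y` into already-large exponents costs little and saves the separate outer exponentiation.

```java
import java.math.BigInteger;
import java.util.List;

// Hypothetical heuristic (names and threshold are assumptions, not library API):
// given a product g1^x1 * g2^x2 * ..., should an outer .pow(y) be folded into the
// individual exponents, or kept as one extra exponentiation of the product?
final class DistributeExponentHeuristic {
    static final int SMALL_EXPONENT_BITS = 80; // arbitrary cutoff for "small" exponents

    static boolean shouldDistribute(List<BigInteger> innerExponents) {
        // If all xi are short, multiplying y into each of them turns cheap
        // exponentiations into full-size ones, so a single outer exponentiation wins.
        // If the xi are full-size anyway, folding y in costs (almost) nothing extra.
        return innerExponents.stream().allMatch(x -> x.bitLength() > SMALL_EXPONENT_BITS);
    }

    public static void main(String[] args) {
        List<BigInteger> small = List.of(BigInteger.valueOf(3), BigInteger.valueOf(17));
        List<BigInteger> large = List.of(new BigInteger("123456789012345678901234567890"),
                                         new BigInteger("987654321098765432109876543211"));
        System.out.println(shouldDistribute(small)); // false: keep the single outer pow
        System.out.println(shouldDistribute(large)); // true: fold y into every term
    }
}
```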
At some point, my thinking was that we should have the user make useful computational decisions statically instead of dynamically rewriting everything. The question is whether this is the right way to go.
Alternatively, as an idea out of left field, we could implement some sort of `OpinionatedGroup` that would give feedback on how to (statically) optimize operations. This has the advantage that we won't clutter the (normal) runtime with dynamic optimization decisions.
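A minimal sketch of what such an `OpinionatedGroup` could look like, under the assumption that it only detects patterns like pow-after-pow and prints advice instead of rewriting anything (all names are hypothetical):

```java
import java.math.BigInteger;

// Hypothetical sketch of the "OpinionatedGroup" idea: the wrapper does NOT rewrite
// anything at runtime; it only detects a pow-after-pow pattern and prints a hint
// so the developer can restructure the code statically.
final class OpinionatedElement {
    private final String expression;     // symbolic stand-in for the wrapped group element
    private final boolean resultOfPow;   // did the last operation already exponentiate?

    OpinionatedElement(String expression, boolean resultOfPow) {
        this.expression = expression;
        this.resultOfPow = resultOfPow;
    }

    OpinionatedElement pow(BigInteger x) {
        if (resultOfPow)
            System.err.println("[OpinionatedGroup] pow() called on a pow() result (" + expression
                    + "); consider multiplying the exponents and exponentiating once.");
        return new OpinionatedElement("(" + expression + ")^" + x, true);
    }

    public static void main(String[] args) {
        OpinionatedElement g = new OpinionatedElement("g", false);
        g.pow(BigInteger.valueOf(5)).pow(BigInteger.valueOf(7)); // second pow() triggers the hint
    }
}
```

The normal evaluation path would stay untouched; such a wrapper could be enabled only in tests or debug builds.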
After having used the API a little bit for the protocol implementation, I'd say it's very freeing knowing that a lot of optimization is done automatically. It makes things easier to write down.
It also has the advantage of enabling optimizations that are virtually impossible to do in user space: for example, if you get `h = g.pow(x)` as input and your method computes `h.pow(y)`, your code would never be able to optimize this to `g.pow(x.mul(y))` by hand. Having to optimize manually would effectively discourage splitting a computation across multiple methods.
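To illustrate that point with the lazy-exponent sketch from above (again, all names are hypothetical): the method receiving `h` never sees `g` or `x`, yet the combined rewrite still happens because `h` carries its defining expression.

```java
import java.math.BigInteger;

// Hypothetical continuation of the lazy-element sketch: cross-method optimization
// works because the element remembers how it was built.
final class CrossMethodExample {
    record Lazy(String base, BigInteger exponent) {
        Lazy pow(BigInteger y) { return new Lazy(base, exponent.multiply(y)); }
        @Override public String toString() { return base + "^" + exponent; }
    }

    // This method only ever sees h; it has no idea that h was produced as g.pow(x).
    static Lazy subProtocolStep(Lazy h, BigInteger y) {
        return h.pow(y);
    }

    public static void main(String[] args) {
        Lazy h = new Lazy("g", BigInteger.valueOf(5));                 // h = g^5, handed in as input
        System.out.println(subProtocolStep(h, BigInteger.valueOf(7))); // prints g^35: one exponentiation
    }
}
```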