Allow domains to register for events and update states #183
Conversation
Force-pushed from 64b436f to b92e300
func (d *domain) handleEventBatchForContract(ctx context.Context, batchID uuid.UUID, contractAddress tktypes.EthAddress, events []*blockindexer.EventWithData) (*prototk.HandleEventBatchResponse, error) {
	var res *prototk.HandleEventBatchResponse
	eventsJSON, err := json.Marshal(events)
I couldn't decide on the best format for sending the events. For the moment I've just directly encoded []*blockindexer.EventWithData, but I'm not sure if the blockindexer types should be exposed to domains. As an alternative, I could re-encode into a different Go/JSON struct here, or more natively as a protobuf message.
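For illustration, the "different Go/JSON struct" alternative could look something like the sketch below; all of the field names here are hypothetical, not the actual blockindexer or prototk wire format.

```go
package domainmgr

import "encoding/json"

// Hypothetical shape for a domain-facing event, decoupled from the blockindexer
// types. Field names are illustrative only, not the actual Paladin wire format.
type DomainEvent struct {
	Address           string          `json:"address"`           // emitting contract
	BlockNumber       int64           `json:"blockNumber"`       // block containing the event
	TransactionHash   string          `json:"transactionHash"`   // source blockchain transaction
	LogIndex          int64           `json:"logIndex"`          // position within the transaction
	SoliditySignature string          `json:"soliditySignature"` // resolved event signature
	Data              json.RawMessage `json:"data"`              // decoded event payload as JSON
}
```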
Ideally the format should be spelled out as a protobuf, with only the data-payload itself being a JSON string inside.
e.g. all this gets re-spelled in proto: https://github.com/kaleido-io/paladin/blob/c166cac5a387e8b53863e51729611c19201158da/core/go/pkg/blockindexer/index_types.go#L57-L65
Thanks, I thought this was the case - but didn't want to do all the spelling work without a second opinion.
Note for the record - this came in as a new OnChainEvent protobuf type in #268
@@ -205,40 +218,49 @@ func (dc *domainContext) mergedUnFlushed(schema Schema, dbStates []*State, query
	// Get the list of new un-flushed states, which are not already locked for spend
	allUnFlushedStates := dc.unFlushed.states
	allUnFlushedStateLocks := dc.unFlushed.stateLocks
	allUnFlushedStateSpends := dc.unFlushed.stateSpends
Note that I merge in unflushed spends, but I don't merge in any info from unflushed confirmations. Couldn't decide if they were useful in this context (as they don't affect the state availability either way).
Great comment, and it got me thinking. No disagreement, but here's where I got to...
I note that when we return states to the domain, one important thing they need to know is whether a state is confirmed or not. Zeto in particular wants to avoid selecting any unconfirmed states when building transactions.
However, only seeing a state as confirmed once the confirmation has been flushed to the DB "feels right" (which I think is the behavior you're describing that we have).
Yes, that describes the behavior, and I think it's sensible enough. I was struggling with it because even unflushed confirms are deterministically confirmed (because the blockchain has said they are confirmed). But I'm happy to leave this as-is until we find a good reason to make it more complex.
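To make the rule concrete, here is a minimal Go sketch of the availability/confirmation semantics discussed above; the type and field names are hypothetical, not the actual domainContext code.

```go
package statestore

// stateView is a hypothetical in-memory view of a state during the merge;
// these names do not match the real domainContext structures.
type stateView struct {
	SpentInDB      bool // spend already flushed to the database
	SpentUnflushed bool // spend recorded in memory but not yet flushed
	ConfirmedInDB  bool // confirmation flushed to the database
}

// available applies the behavior described above: un-flushed spends are merged
// in (so the state is no longer offered), but a state only reports as confirmed
// once its confirmation has been flushed to the database.
func available(s stateView) (ok bool, confirmed bool) {
	if s.SpentInDB || s.SpentUnflushed {
		return false, false
	}
	return true, s.ConfirmedInDB
}
```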
core/go/internal/domainmgr/domain.go (outdated)
	if err != nil {
		return nil, err
	}
	transactionsComplete = append(transactionsComplete, res.TransactionsComplete...)
Should we write anything to the database here? Currently transaction completion is just a back-and-forth between Paladin and the domain - it doesn't result in anything getting persisted to the database.
	res, err = d.api.HandleEventBatch(ctx, &prototk.HandleEventBatchRequest{
		BatchId:    batchID.String(),
		JsonEvents: string(eventsJSON),
	})
Note that there's no guarantee about how events will be broken up into batches. We've discussed offline that it would be nice to ensure all events from a given transaction always end up in the same batch (i.e. batches won't split up events from a single blockchain transaction). This could be either a default or a configurable behavior.
I can look into this as a follow-up. It may tie into the question as to how events should be formatted as well - should they continue to be a flat list, or should they be hierarchically organized by transaction...
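As a rough sketch of the "don't split a transaction across batches" idea, grouping a flat event list by transaction hash could look like this; it is illustrative only, not the actual blockindexer batching code.

```go
package domainmgr

// event is a pared-down, illustrative stand-in for the indexed event type.
type event struct {
	TransactionHash string
	LogIndex        int64
}

// groupByTransaction splits an ordered event list into per-transaction groups,
// preserving the ordering of transactions and of events within each one. A
// batching policy built on top of this could then avoid splitting a single
// blockchain transaction across batches.
func groupByTransaction(events []event) [][]event {
	var groups [][]event
	index := map[string]int{} // transaction hash -> position in groups
	for _, ev := range events {
		i, seen := index[ev.TransactionHash]
		if !seen {
			index[ev.TransactionHash] = len(groups)
			groups = append(groups, []event{ev})
			continue
		}
		groups[i] = append(groups[i], ev)
	}
	return groups
}
```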
I wonder overall if it's worth capturing the things coming out of this review in an issue once we've closed out the discussions?
... when you add them up it's likely more than a day's work, which we need to make sure we don't defer too long (but probably do want to defer for a little while as we're getting to the full E2E here)
Force-pushed from f00327e to 67f4e96
This reverts commit cb53d9b.
This reverts commit 65c2fe2.
This reverts commit 2e06542.
SIMDomain and Noto are fully wired up to support completing transactions, but Zeto and Pente will require more work.
Force-pushed from 67f4e96 to 1b2da14
domains/pente/src/main/java/io/kaleido/paladin/pente/domain/PenteConfiguration.java (outdated; resolved)
@@ -48,6 +48,8 @@ var NotoConfigOutputABI_V0 = &abi.ParameterArray{
	{Name: "variant", Type: "bytes32"},
}

var NotoTransactionData_V0 = ethtypes.MustNewHexBytes0xPrefix("0x00010000")
I adopted the same paradigm here as with the domain config data... using a 4-byte prefix on the data payload to give it a version in case we ever need to change the format.
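A minimal sketch of how a decoder might use that prefix; the function and its behavior here are illustrative, not the actual Noto decoding code.

```go
package noto

import (
	"bytes"
	"fmt"
)

// 4-byte version prefix on the Noto transaction data payload (0x00010000, per the diff above).
var notoTransactionDataV0 = []byte{0x00, 0x01, 0x00, 0x00}

// parseTransactionData checks the 4-byte version prefix, then returns the
// remainder of the payload for the decoder that understands that version.
func parseTransactionData(data []byte) ([]byte, error) {
	if len(data) < 4 {
		return nil, fmt.Errorf("data too short to contain a version prefix")
	}
	switch {
	case bytes.Equal(data[:4], notoTransactionDataV0):
		return data[4:], nil // V0: the rest of the payload
	default:
		return nil, fmt.Errorf("unrecognized transaction data version %x", data[:4])
	}
}
```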
	Bytes32[] outputs,
	@JsonProperty
	Bytes data
) {}
Not sure if there's a "better" place these records should live... I just stuck them here near the point I needed them.
I think overall some work will be beneficial on the organization of the files in Pente. FWIW I'd been trying to keep the amount of non-core code in the PenteDomain.java file to a minimum, pushing all the decoding/encoding gubbins out into PenteConfiguration.java... but that means that other file is quite messy.
Don't know if a PenteTypes class would make sense, or maybe PenteJSON and PenteABI.
... not needed for this PR though
@@ -444,6 +444,7 @@ EVMExecutionResult executeEVM(long chainId, Address fromAddr, AccountLoader acco
	}
	// Note we only increment the nonce after successful executions
	sender.setNonce(nonce+1);
	evm.getWorld().getUpdater().commit();
I found a bug where the sender nonce was never actually getting incremented (just kept using nonce 0 for every transaction). This ensures it is persisted properly - hopefully there are no side effects to having it here.
Thanks for working that through. Yes, agree this is correct from how I think about the integration between the Pente code and the Besu EVM APIs right now.
It might be that we're missing a trick in introducing our own type of frame that achieves this as an automatic part of executing the transaction, but that would take some more investigation of how Besu works during block mining I think (looking at how the nonce gets updated for a public transaction in a revert situation would be the path I think).
@@ -122,7 +122,7 @@ void testSimpleStorage() throws Exception {
		JsonHex.addressFrom(contractAddr),
		simpleStorageDeployABI,
		deployValues
-	));
+	), true);
	assertFalse(contractAddr.isBlank());

	// TODO: This is a hack because we need to define a way for domain transactions to
I couldn't quite parse this comment. It feels like this PR should be providing the thing this needs (or at least a step toward it).
So what I had to do here was have the test calculate the address that we expect the deployment to go to inside the PrivacyGroup, because there's not yet a logical equivalent of "wait for the contractAddress in the eth_getTransactionReceipt" that you'd do in the public EVM.
We need to define that kind of event in a way that applications coding against the private EVMs can consume.
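For reference, the calculation the test has to do is the standard CREATE address derivation: the deployed address is a function of only the sender address and its nonce. The Go sketch below uses go-ethereum's helper purely to illustrate the math; the test itself is Java, and the sender address and nonce here are hypothetical.

```go
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/crypto"
)

func main() {
	// Hypothetical in-group sender and its first deployment (nonce 0).
	sender := common.HexToAddress("0x5b38da6a701c568545dcfcb03fcb875f56beddc4")
	nonce := uint64(0)

	// CREATE semantics: address = keccak256(rlp(sender, nonce))[12:]
	expected := crypto.CreateAddress(sender, nonce)
	fmt.Println("expected deployment address:", expected.Hex())
}
```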
	// Note: hits will be cached, but events from unrecognized contracts will always
	// result in a cache miss and a database lookup
	// TODO: revisit if we should optimize this
	psc, err := d.dm.getSmartContractCached(ctx, tx, ev.Address)
I think a cache could safely include a negative result in this case, as long as we invalidate it in this function too when we get a notification of a new contract.
The event processing is a single-threaded thing per domain (currently even across domains).
Rather than deferring, I wonder if it's worth doing the work "now" for this, as it's pretty small?
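A quick sketch of the negative-caching shape being suggested; the names are hypothetical, not the actual domain manager code.

```go
package domainmgr

// smartContract is a stand-in for the resolved private smart contract record.
type smartContract struct{}

// contractCache caches both hits and "not a recognized contract" results, and
// invalidates the negative entry when a registration event for that address is
// processed. Because event processing is single-threaded (per the note above),
// a plain map is enough for this sketch.
type contractCache struct {
	entries map[string]*smartContract // address -> entry; a stored nil is a cached negative
}

func newContractCache() *contractCache {
	return &contractCache{entries: map[string]*smartContract{}}
}

// lookup returns (entry, true) on any cache hit - including negative hits,
// where entry is nil - and (nil, false) when a database lookup is still needed.
func (c *contractCache) lookup(addr string) (*smartContract, bool) {
	sc, cached := c.entries[addr]
	return sc, cached
}

// store caches a lookup result; passing sc == nil caches a negative result.
func (c *contractCache) store(addr string, sc *smartContract) { c.entries[addr] = sc }

// invalidate is called when a new-contract registration event arrives, so the
// next lookup for this address falls through to the database again.
func (c *contractCache) invalidate(addr string) { delete(c.entries, addr) }
```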
OK... writing this made me look at a wider problem.
This handler registration:
d.eventStream, err = d.dm.blockIndexer.AddEventStream
I believe it must be the case that it includes the domain registration event, for the factory address of this domain, as part of the same registration.
Once we've done that, the caching becomes trivial - because the data/threading model is assured, which is needed regardless of the cache.
This is a change from us registering those on startup, and it probably means updating the event stream interface to allow pairs of (sketched below):
- Address selector
- ABI selector
Because the one event stream we need is:
- Registration events from contract XYZ
- Domain-specific events from any contract
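As a sketch, the registration options could pair the two selectors per source; these type and field names are hypothetical, not the actual blockindexer API.

```go
package blockindexer

// EventSelector pairs an address selector with an ABI selector. These names
// are illustrative only - the real AddEventStream options may differ.
type EventSelector struct {
	Address *string  // optional contract address filter; nil means "any contract"
	ABI     []string // event signatures (the ABI selector) to match
}

// EventStreamOptions lets one stream combine multiple sources, e.g.
// "registration events from the factory contract" plus
// "domain-specific events from any contract".
type EventStreamOptions struct {
	Sources []EventSelector
}
```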
Sorry to add to the work list @awrichar - can be deferred (maybe until after #179 works its way through because it also has blockindexer API changes), but it can't be forgotten
Right - this was on the edge of my mind but didn't make it into this PR. I'll open a follow-up issue to come back to it.
Opened #193
This is great @awrichar - a significant step forwards.
I've put some important comments on here, along with a suggestion that we need to track some/all of them in a way that ensures they don't get lost in the coming weeks. But I didn't come across anything that should stop this merging.
Closes the loop on letting domains process blockchain events and interpret them as things that update states and/or complete Paladin transactions.
- abi_events_json parameter in ConfigureDomain, for domain to specify the events that it cares about
- HandleEventBatch callback where domain will receive batches of events and respond with:
  - transactions_complete - IDs of Paladin transactions which are now complete
  - spent_states - states which are now confirmed to be spent
  - confirmed_states - states which are now confirmed to exist (unspent)
  - new_states - new states to be created (only if event happens to contain actual state data)
- NewTransactionComplete callback where domain will receive notification that Paladin has successfully processed the transaction/state changes supplied in the event batch response above

Noto is fully updated to leverage these new features, and the Noto E2E test is therefore able to query for available/spent states and get the expected results.
Pente also updated.
Zeto requires a release with hyperledger-labs/zeto#75 before it can be updated.
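To summarize the callback shape described above, here is a rough Go sketch of a domain-side handler; the request/response types and all field names other than TransactionsComplete (which appears in the diff earlier) are hypothetical stand-ins for the real prototk messages.

```go
package domain

import "context"

// Hypothetical stand-ins for the prototk request/response messages.
type HandleEventBatchRequest struct {
	BatchId    string // identifier for this batch of events
	JsonEvents string // the events, JSON-encoded
}

type HandleEventBatchResponse struct {
	TransactionsComplete []string // IDs of Paladin transactions which are now complete
	SpentStates          []string // states which are now confirmed to be spent
	ConfirmedStates      []string // states which are now confirmed to exist (unspent)
	NewStates            []string // new states to be created from event data
}

// handleEventBatch outlines what a domain does with a batch: decode the events,
// match them against its own contract semantics, and report the resulting
// transaction completions and state transitions back to Paladin.
func handleEventBatch(ctx context.Context, req *HandleEventBatchRequest) (*HandleEventBatchResponse, error) {
	res := &HandleEventBatchResponse{}
	// A real domain would unmarshal req.JsonEvents here and populate the
	// response fields based on which events it recognizes.
	return res, nil
}
```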