-
Thanks for the summary Adrian! For future context, the relevant issues are #635 & #662. A different way of looking at the semantics is that using a

We're stating that we're using the value of

One way to reify this notion of the "ghost done signal of a group" is using these indices. Using this, we can say that reading a
-
Compiling Groups and
-
Here is just a quick note to capture some of the insights that @rachitnigam revealed during a synchronous meeting earlier today, as I interpreted them:
-
On Isolating Groups
We were discussing (synchronously) a problem today that I will chicken out of summarizing directly, because it's super tricky to describe, but I want to summarize some thoughts about the problem here.
The underlying issue is about isolating the execution of groups. An important concept in the idea of Calyx groups is that they are supposed to have semantics that are isolated from the execution of other groups. In other words, a group can do whatever it wants "inside" without worrying about interfering effects from "outside." The upshot is that we can reason about programs compositionally: you can understand what each group does once, and then draw conclusions about what the overall program does based on that understanding about the individual groups.
A consequence of this isolation is that it should always be the case that two groups activated in a `seq` control structure should be able to share cells willy-nilly without interfering with one another. This is the property, for example, that enables the program-equivalence reasoning that powers our register-sharing pass.

Consider a program whose control looks like this:
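A minimal sketch of the kind of control in question, using the `foo` and `bar` groups described below (component boilerplate elided):

```
control {
  seq {
    foo;  // uses foo_reg: writes it before reading it
    bar;  // uses bar_reg: writes it before reading it
  }
}
```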
…where `foo` and `bar` are groups that use two different registers, `foo_reg` and `bar_reg`. But they don't actually have data flow between them, in the sense that both of them write to their register before reading from it, so they can't possibly "communicate" through the register's state. Then this program is equivalent to one where both groups share a single register, call it `reg`, instead. The reason these programs are equivalent relies on the isolation of groups: we know that `foo`'s use of `reg` has absolutely no bearing on `bar`'s use of `reg`. This story is all just to emphasize that we really like this isolation property and the compositional reasoning that it enables.

The unnamed problem that we've been discussing has to do with a violation of this isolation. Namely, groups often (always??) use registers to drive their `[done]` holes—they frequently use assignments along the lines of `foo[done] = reg.done;`. The problem with this is that the control logic we generate uses `foo[done]` to decide when to start the execution of the next group. So if we want `bar` to start doing its thing immediately on the cycle that `foo[done]` becomes true, then `bar` actually ends up using `reg` to decide when to start! Which means that `foo` and `bar` are actually "sharing" `reg` during that one cycle, in a kind of invisible way, and the groups can't really rely on isolation. Most perniciously, `bar` can't rely on `reg` being useful for driving its own `[done]` signal, because it needs to be 0 when `foo`'s is 1.

A Modest Proposal
I have no idea if the following proposal will actually solve the problem, but I think discussing it would at least be illustrative. The proposal is to give every group its own ghostly "register" to use just for signaling done-ness. Philosophically, the idea is to acknowledge that our (non-combinational) groups typically (always?) need to declare and use such a register anyway to make sure that they take at least one cycle. So why don't we bake that into the semantics of groups instead of making all groups reinvent the "register for driving `[done]`" pattern? That way, we can offer an abstraction that ensures that every group appears to own an isolated state element for this purpose and avoid any unpleasantness about accidentally sharing these registers.

How this plays out concretely is left deliberately a little fuzzy, but one way would be to redefine `[done]` to work more like a register's `in` port. Namely, outside observers don't see the done signal become true until the cycle after the group assigns 1 to `[done]`.

A straightforward/correct way to compile groups in this hypothetical, proposed world would be to literally generate a single new register for every single group. A basic approach to generating FSMs would read this register from each group, increment the FSM state, and then write back to the FSM state register. This would introduce an extra cycle between group executions—that is, `foo` would finish up, its doneness register would become true in the next cycle, and the FSM state would only increment (activating `bar`) in the following cycle. This is wasteful but at least it's correct in the sense that the groups stay isolated. (And I think it's the same number of cycles as our current dynamic control compilation??)

A better way, of course, would somehow let the FSM state increment immediately on the cycle after `foo` writes 1 to its "doneness register." To do that, we can take advantage of the fact that `foo[done]` is not a real register and instead implement it using the FSM's existing state register. The very rough idea is to "inline" the `[done]` assignments into the FSM's logic. So if `foo` has an assignment like `foo[done] = whatever;`, then the FSM logic might include something like this:
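A rough sketch of that inlined FSM update, with illustrative state values (`fsm` is the hypothetical state register and `whatever` stands for `foo`'s done condition):

```
// "whatever" is inlined directly into the FSM's state-update logic,
// taking the place of a physical register implementing foo[done].
fsm.in = fsm.out == 2'd0 & whatever ? 2'd1;
fsm.write_en = fsm.out == 2'd0 & whatever ? 1'd1;
bar[go] = fsm.out == 2'd1 ? 1'd1;  // bar can go on the very next cycle
```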
The point is that the computation `whatever` now "goes into" the FSM state register, which takes the place of a literal physical register implementing `foo[done]`. Instead, `bar` can now activate immediately after this assignment happens (because its `[go]` hole is hooked up to the same FSM state register), saving one cycle relative to the naive FSM generation approach above.