x_mem_result_ready needs to be added for subsystem sharing #56
Comments
I would add that, if you want, you can instantiate multiple FIFOs or so; however, this won't solve the problem even with a single CPU, as it forces the accelerator to pay the area overhead of the FIFOs, especially for FPUs which are slow and take multiple cycles to be ready. So I suggest adding the READY signal anyway; accelerators that want to go faster can simply keep READY tied to 1 all the time.
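For illustration, a minimal sketch of the coprocessor-side receiver such a READY signal would enable, assuming the x_mem_result_ready proposed in this issue existed; the module name and the res_* consumer ports are made up. A slow unit (e.g. a multi-cycle FPU) back-pressures through a one-entry buffer, while a fast accelerator could tie the ready high and omit the buffer entirely.

```systemverilog
// Sketch only: x_mem_result_ready does not exist in the current CV-X-IF spec;
// it is the signal proposed in this issue. All other names are illustrative.
module coproc_mem_result_rx #(
  parameter int unsigned XLEN = 32
) (
  input  logic            clk_i,
  input  logic            rst_ni,
  // memory result interface with the proposed ready
  input  logic            x_mem_result_valid_i,
  output logic            x_mem_result_ready_o,  // proposed signal
  input  logic [XLEN-1:0] x_mem_result_rdata_i,
  // consumer inside the accelerator (e.g. a multi-cycle FPU)
  output logic            res_valid_o,
  input  logic            res_ready_i,
  output logic [XLEN-1:0] res_rdata_o
);
  // One-entry buffer: full_q marks a result the consumer has not taken yet.
  logic            full_q;
  logic [XLEN-1:0] data_q;

  // Accept a new result whenever the buffer is empty or is being drained in
  // the same cycle. A fast accelerator could instead tie this to 1'b1 and
  // drop the buffer, as suggested above.
  assign x_mem_result_ready_o = ~full_q | res_ready_i;

  assign res_valid_o = full_q;
  assign res_rdata_o = data_q;

  always_ff @(posedge clk_i or negedge rst_ni) begin
    if (!rst_ni) begin
      full_q <= 1'b0;
    end else if (x_mem_result_valid_i && x_mem_result_ready_o) begin
      full_q <= 1'b1;
      data_q <= x_mem_result_rdata_i;
    end else if (res_valid_o && res_ready_i) begin
      full_q <= 1'b0;
    end
  end
endmodule
```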
@moimfeld, @Silabs-ArjanB, @michael-platzer what do you think?
Hi @andermattmarco. Adding a ready on this interface would be really difficult to support on a core like the CV32E40P or CV32E40X, as there is also no ready signal related to its data_rvalid_i (which gets forwarded onto x_mem_result_valid). If we were to add a ready signal on the memory result interface, we would need a ready signal on the OBI data interface, then one on the interconnect, etc. Of course the X interface has been defined as a point-to-point protocol, so in principle the issue you describe cannot occur. If you want to build an interconnect nonetheless, then you can use the memory (request/response) interface (which has both valid and ready signals) to limit the number of x_mem_result_valid transactions that can possibly occur, and you can provide the necessary buffering inside your interconnect to make sure you can always immediately accept an x_mem_result_valid.
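A rough sketch of this interconnect-side approach, showing a single core port. Only the valid/ready naming mirrors the XIF memory interfaces; everything else, including cop_mem_result_pop_i, is made up for the sketch, SLOTS is assumed to be a power of two so the FIFO pointers wrap naturally, and every accepted memory request is assumed to produce exactly one memory result.

```systemverilog
// Hypothetical interconnect fragment: forward coprocessor memory requests only
// while a result-buffer slot is guaranteed to be free, so an incoming
// core_mem_result_valid_i can always be accepted immediately.
module xif_mem_result_limiter #(
  parameter int unsigned XLEN  = 32,
  parameter int unsigned SLOTS = 4
) (
  input  logic            clk_i,
  input  logic            rst_ni,
  // memory request path: coprocessor -> core, gated by free result slots
  input  logic            cop_mem_valid_i,
  output logic            cop_mem_ready_o,
  output logic            core_mem_valid_o,
  input  logic            core_mem_ready_i,
  // memory result path: core -> interconnect (no ready toward the core)
  input  logic            core_mem_result_valid_i,
  input  logic [XLEN-1:0] core_mem_result_rdata_i,
  // buffered results toward the shared subsystem, drained one at a time
  output logic            cop_mem_result_valid_o,
  output logic [XLEN-1:0] cop_mem_result_rdata_o,
  input  logic            cop_mem_result_pop_i
);
  // Requests accepted but whose results have not been drained yet. Because a
  // new request is only forwarded while pending_q < SLOTS, the FIFO below can
  // never overflow.
  logic [$clog2(SLOTS+1)-1:0] pending_q, cnt_q;
  logic [$clog2(SLOTS)-1:0]   wr_q, rd_q;
  logic [XLEN-1:0]            fifo_q [SLOTS];
  logic slot_free, req_hs, push, pop;

  assign slot_free        = (pending_q < SLOTS);
  assign core_mem_valid_o = cop_mem_valid_i && slot_free;
  assign cop_mem_ready_o  = core_mem_ready_i && slot_free;

  assign req_hs = core_mem_valid_o && core_mem_ready_i;
  assign push   = core_mem_result_valid_i;               // always accepted
  assign pop    = cop_mem_result_valid_o && cop_mem_result_pop_i;

  assign cop_mem_result_valid_o = (cnt_q != '0);
  assign cop_mem_result_rdata_o = fifo_q[rd_q];

  always_ff @(posedge clk_i or negedge rst_ni) begin
    if (!rst_ni) begin
      pending_q <= '0;
      cnt_q     <= '0;
      wr_q      <= '0;
      rd_q      <= '0;
    end else begin
      pending_q <= pending_q + req_hs - pop; // reserve on request, free on drain
      cnt_q     <= cnt_q + push - pop;
      if (push) begin
        fifo_q[wr_q] <= core_mem_result_rdata_i;
        wr_q         <= wr_q + 1'b1;
      end
      if (pop) begin
        rd_q <= rd_q + 1'b1;
      end
    end
  end
endmodule
```

The pop signal would come from whatever arbiter serializes the buffered results of the individual core ports onto the subsystem's single memory result interface.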
Hi @Silabs-ArjanB, I like your solution of handling the buffering / adding the handshaking at the interconnect level, such that the core-side implementation stays lightweight.
Hi, I agree with @moimfeld that it would be possible to implement an x_mem_result_ready, but there is a trade-off in the size of the memory result buffer it requires. If the memory result buffer within the main core is small, then the main core will only be able to accept a few memory requests before de-asserting x_mem_ready. If the memory result buffer within the main core is large, such that a series of contiguous memory requests can be accepted without de-asserting x_mem_ready, then the buffer itself becomes an area cost for the core.

@andermattmarco I am wondering in which situation it is beneficial to connect the XIF memory interface to multiple cores. I guess that the memory requests of the individual cores will end up being routed to the same memory bus, so maybe it makes more sense to have only one core attached to the XIF memory interface that takes care of all XIF memory requests. I understand that in a multi-core system you might want to have several cores that use the XIF issue, commit and result interfaces, but the memory and memory result interfaces could be connected to one core only.
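To make the buffer-size trade-off concrete, a hypothetical core-side throttle (no such logic exists in CV32E40P/CV32E40X today; all names are illustrative, and result_buf_pop_i is assumed to be asserted only while at least one buffered result is present): the core accepts at most RESULT_BUF_DEPTH coprocessor memory requests before de-asserting its memory request ready, and returns a credit whenever a buffered result leaves the core.

```systemverilog
// Hypothetical sketch of a core that supports back-pressure on its memory
// result path by bounding the number of outstanding memory requests.
module core_mem_request_throttle #(
  parameter int unsigned RESULT_BUF_DEPTH = 2
) (
  input  logic clk_i,
  input  logic rst_ni,
  // XIF memory request handshake (coprocessor -> core)
  input  logic x_mem_valid_i,
  output logic x_mem_ready_o,
  input  logic lsu_ready_i,       // core LSU can take the request this cycle
  // one credit is returned when a buffered memory result leaves the core
  input  logic result_buf_pop_i
);
  logic [$clog2(RESULT_BUF_DEPTH+1)-1:0] used_q;
  logic accept;

  // Small RESULT_BUF_DEPTH: the ready de-asserts after a few requests and
  // throttles throughput. Large RESULT_BUF_DEPTH: contiguous requests flow,
  // but the result buffer itself costs area inside the core.
  assign x_mem_ready_o = lsu_ready_i && (used_q < RESULT_BUF_DEPTH);
  assign accept        = x_mem_valid_i && x_mem_ready_o;

  always_ff @(posedge clk_i or negedge rst_ni) begin
    if (!rst_ni) used_q <= '0;
    else         used_q <= used_q + accept - result_buf_pop_i;
  end
endmodule
```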
What you suggest actually makes sense, @michael-platzer! However, it still makes sense to me to also explore sharing of the mem interface, but I invite @andermattmarco to take your option into account.
If one core were to handle the XIF memory requests that are related to the issue interface of another core, then we would be completely changing the protocol. I still think that the most logical place to address the issue is within the interconnect (which introduced the issue).
In order to support the sharing of a subsystem between multiple cores using the x-interface, an x_mem_result_ready would have to be added. The current spec states that there is no ready because the subsystem must be ready to accept the memory result at any time. With multiple cores, this cannot be guaranteed, as it would then even be possible for multiple cores to raise x_mem_result_valid at the same time. The subsystem would in this case only be able to react to one of those data packets, violating the property that it must be ready to accept the memory result at any time for every core raising the valid.
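For completeness, a sketch of the sharing scenario described here, assuming the proposed x_mem_result_ready existed: a simple fixed-priority arbiter forwards one core's memory result per cycle to the shared subsystem and back-pressures the other cores via the proposed ready. Without that ready, the results of the non-granted cores would have nowhere to go, which is exactly the violation described above. All module and port names are illustrative.

```systemverilog
// Hypothetical arbiter in front of a shared subsystem's single memory result
// port. Even a subsystem that ties sub_result_ready_i high can only take one
// result per cycle, so the per-core ready is what makes sharing safe.
module xif_mem_result_arb #(
  parameter int unsigned N_CORES = 2,
  parameter int unsigned XLEN    = 32
) (
  // per-core memory result interfaces (core -> shared subsystem)
  input  logic [N_CORES-1:0]           x_mem_result_valid_i,
  output logic [N_CORES-1:0]           x_mem_result_ready_o,  // proposed signal
  input  logic [N_CORES-1:0][XLEN-1:0] x_mem_result_rdata_i,
  // single result port of the shared subsystem
  output logic                         sub_result_valid_o,
  input  logic                         sub_result_ready_i,
  output logic [XLEN-1:0]              sub_result_rdata_o
);
  logic [N_CORES-1:0] grant;

  // fixed priority: the lowest-indexed core with a valid result wins this cycle
  always_comb begin
    grant = '0;
    for (int unsigned i = 0; i < N_CORES; i++) begin
      if (x_mem_result_valid_i[i]) begin
        grant[i] = 1'b1;
        break;
      end
    end
  end

  assign sub_result_valid_o   = |x_mem_result_valid_i;
  // only the granted core sees its result accepted; the others must hold it
  assign x_mem_result_ready_o = sub_result_ready_i ? grant : '0;

  // route the granted core's result data to the subsystem
  always_comb begin
    sub_result_rdata_o = '0;
    for (int unsigned i = 0; i < N_CORES; i++) begin
      if (grant[i]) sub_result_rdata_o = x_mem_result_rdata_i[i];
    end
  end
endmodule
```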