Lsu allow spec load exec #92
Conversation
Can we add some documentation on the purpose of the various in and out ports of the LSU, at least for the LSU / MMU and LSU / DCache interfaces?
The LSU / MMU interface has 3 input ports and 1 output port:
- output:
  - out_mmu_lookup_req
- inputs:
  - in_mmu_lookup_req
  - in_mmu_lookup_ack
  - in_mmu_free_req

Similarly, the LSU / DCache interface has 3 input ports and 1 output port:
- output:
  - out_cache_lookup_req
- inputs:
  - in_cache_lookup_req
  - in_cache_lookup_ack
  - in_cache_free_req
Can you please explain what these priorities / states mean and when they would occur? I'm not sure I understand what is meant by 'mss ack' and 'another outstanding miss finish'.
```cpp
HIGHEST = 0,
__FIRST = HIGHEST,
CACHE_RELOAD,  // Receive mss ack, waiting for cache re-access
CACHE_PENDING, // Wait for another outstanding miss finish
MMU_RELOAD,    // Receive mss ack, waiting for mmu re-access
MMU_PENDING,   // Wait for another outstanding miss finish
NEW_DISP,      // Wait for new issue
LOWEST,
```
Does `NEW_DISP` simply mean dispatched and waiting to be issued?
One drawback of having an IssuePriority enumeration is that it somewhat ties us to a single issue-priority policy. If we want to allow multiple issue-priority policies as a parameter, that may be difficult with the IssuePriority enumerator.
Instead, we should keep the CACHE/MMU RELOAD/PENDING info as states of the LoadStoreInfo class and allow different policies to decide which one to issue.
This can be done in a different PR after this is merged in.
This is a great start. A few items here and there to tackle, obviously
The priority values are used to select the instruction that needs to be issued.

How would we implement an issue policy that prioritizes requests with MMU pending over requests with CACHE pending? How about, instead of the IssuePriority enum, we keep something equivalent to bitmasks, specifying for each instance of LoadStoreInstInfo whether MMU_LOOKUP is needed and has been satisfied, and whether CACHE_LOOKUP is needed and has been satisfied. This would enable implementation of various issue policies without changing the enum. But this can be done as part of another MR.
Did you test this with Kunal's L2Cache stuff? Does it work?
I haven't been able to test it out; I tried to keep the PR in sync with master.
@klingaard I rebuilt the project (cleaned the cache and the cmake build folder), then ran both CoreMark and Dhrystone, and both of them ran successfully.
…src/riscv-perf-model into lsu_allow_spec_load_exec
I'll take a look this evening.
I forgot one more thing. 😄 Can you run clang-format on the LSU, since you're the last person to make major changes to it?
@h0lyalg0rithm, I hijacked your PR to update the names of the CI regressions (and merged with master). FYI...
@klingaard I ran the formatter on most of the files I touched. I left some like
Thanks for running the formatter. One last, last thing, promise! Can you update the Description field with what this PR did -- the changes it implemented and the new expected behaviors? That will be the final commit message.
I updated the PR description |
This PR introduces the following:
- Allows loads to perform virtual-to-physical address translation before older loads complete. This can be configured using a parameter in the LSU (`allow_speculative_load_exec`).
- Allows non-blocking cache lookup requests. Depends on riscv-software-src#91 for the cache to support this feature.
- Implements a ready queue, a simulator-only structure to speed up instruction lookup in the LSU.
- Mitigates data hazards introduced by instructions running out of order.
- The length of the different stages of the LSU pipeline can be configured through the parameters.

Co-authored-by: Knute Lingaard <[email protected]>
```cpp
PARAMETER(uint32_t, replay_issue_delay, 3, "Replay Issue delay")

// LSU microarchitecture parameters
PARAMETER(
    bool, allow_speculative_load_exec, true,
```
@h0lyalg0rithm is this parameter's boolean meaning inverted? I'm looking at the code, and if this is set to `true`, then the LSU will NOT allow a load to bypass an older store.
@klingaard This parameter is set to true by default, so it should allow a load to complete before an older store.