ee92d6ff | 25-Apr-2025 |
Yanqin Li <[email protected]> |
fix(StoreQueue): add nc_req_ack state to avoid duplicated request (#4625)
## Bug Discovery

The Svpbmt CI of master at https://github.com/OpenXiangShan/XiangShan/actions/runs/14639358525/job/41077890352 reported the following error in its output:
```
check_misa_h PASSED
test_pbmt_perf TEST: read 4 Bytes 1000 times
Svpbmt IO test... addr:0x10006d000 start: 8589, end: 59845, ticks: 51256
Svpbmt NC test... addr:0x10006c000 start: 67656, end: 106762, ticks: 39106
Svpbmt NC OUTSTANDING test... smblockctl = 0x3f7 addr:0x10006c000 start: 118198, end: 134513, ticks: 16315
Svpbmt PMA test... addr:0x100000000 start: 142696, end: 144084, ticks: 1388 PASSED
test_pbmt_ldld_violate ERROR: untested exception! cause NO: 5 (mhandler, 219)
[FORK_INFO pid(1251274)] clear processes...
Core 0: HIT GOOD TRAP at pc = 0x80005d64
Core-0 instrCnt = 174,141, cycleCnt = 240,713, IPC = 0.723438
```
## Design Background

For NC (Non-Cacheable) store operations, the handshake logic between the StoreQueue and Uncache is as follows:
1. **Without Outstanding Enabled:** In the `nc_idle` state, when an executable `nc store` is encountered, it transitions to the `nc_req` state. After `req.fire`, it moves to the `nc_resp` state. Once `resp.fire` is triggered, it returns to `nc_idle`, and both `rdataPtrExtNext` and `deqPtrExtNext` are updated to handle the next request.
2. **With Outstanding Enabled:** In the `nc_idle` state, upon encountering an executable `nc store`, it transitions to the `nc_req` state. After `req.fire`, it **returns to `nc_idle`** (Point A). Once the request is fully written into Uncache, i.e., upon receiving `ncSlaveAck` (Point B), it updates `rdataPtrExtNext` and `deqPtrExtNext` to handle the next request.
## Bug Description

In the above scenario, since the transition to `nc_idle` at Point A occurs two cycles earlier than Point B due to timing differences, `rdataPtr` at Point A still points to the location of the previous uncache request (let's call it NC1). The condition for sending an uncache request is still met at this moment, leading Point A to issue a **duplicate `uncache` request** for NC1.
By the time Point B occurs, **two identical requests for NC1** have already been sent. At Point B, `rdataPtr` is updated to proceed to the next request. However, when the **second `ncSlaveAck`** for NC1 returns, `rdataPtr` is updated **again**, causing it to move forward **twice** for a single request. This eventually results in one of the following requests never being executed.
## Bug Fix

Given that multiple cycles are required to ensure that a request is fully written to Uncache, a new state called `nc_req_ack` is introduced. The revised handshake logic with outstanding enabled is as follows:
In the `nc_idle` state, when an executable `nc store` is encountered, it transitions to the `nc_req` state. After `req.fire`, it moves to the `nc_req_ack` state. Once the request is fully written to Uncache and `ncSlaveAck` is received, it transitions back to `nc_idle`, and updates `rdataPtrExtNext` and `deqPtrExtNext` to handle the next request.
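A minimal Chisel sketch of the revised state machine, assuming illustrative signal names (`ncStoreValid`, `reqFire`, `respFire`, `ncSlaveAck`, `uncacheOutstanding`) rather than the exact StoreQueue identifiers:

```scala
import chisel3._
import chisel3.util._

// Sketch only: the real StoreQueue logic has more states and conditions.
class NcStoreFsmSketch extends Module {
  val io = IO(new Bundle {
    val ncStoreValid       = Input(Bool())  // an executable nc store is ready to issue
    val uncacheOutstanding = Input(Bool())  // outstanding mode enabled
    val reqFire            = Input(Bool())  // uncache req.fire
    val respFire           = Input(Bool())  // uncache resp.fire (non-outstanding path)
    val ncSlaveAck         = Input(Bool())  // request fully written into Uncache
    val advancePtr         = Output(Bool()) // update rdataPtrExtNext / deqPtrExtNext
  })

  val nc_idle :: nc_req :: nc_req_ack :: nc_resp :: Nil = Enum(4)
  val state = RegInit(nc_idle)

  switch(state) {
    is(nc_idle) {
      when(io.ncStoreValid) { state := nc_req }
    }
    is(nc_req) {
      // With outstanding enabled, wait for ncSlaveAck in nc_req_ack instead of
      // returning to nc_idle immediately (the source of the duplicated request).
      when(io.reqFire) { state := Mux(io.uncacheOutstanding, nc_req_ack, nc_resp) }
    }
    is(nc_req_ack) {
      when(io.ncSlaveAck) { state := nc_idle }
    }
    is(nc_resp) {
      when(io.respFire) { state := nc_idle }
    }
  }

  // Pointers advance only once the current request is fully accepted, so a
  // second request for the same entry can no longer slip in between req.fire
  // and the slave ack.
  io.advancePtr := Mux(io.uncacheOutstanding,
    state === nc_req_ack && io.ncSlaveAck,
    state === nc_resp    && io.respFire)
}
```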
|
99a48a76 | 21-Apr-2025 |
cz4e <[email protected]> |
timing(LoadQueueUncache): adjust s1 enq and s2 enq valid generate logic (#4603) |
ce78e60c | 21-Apr-2025 |
Anzo <[email protected]> |
fix(StoreQueue): remove `cboZeroUop` saved `sqptr` (#4591) |
4a02bbda | 15-Apr-2025 |
Anzo <[email protected]> |
fix(LSU): misalign writeback aligned raw rollback (#4476)
By convention, we need to make `rollback` and `writeback` happen at the same time, and not make `writeback` earlier than `rollback`.
Currently, the `rollback` generated by RAW occurs at `s4`. A normal store takes an extra N beats after the end of `s3` (based on the number of RAWQueue entries, currently 1 beat), which is equivalent to a `writeback` at `s4`. A misaligned store writes back at `s2` and then writes back again after switching to the `s_wb` state, which is equivalent to a `writeback` at `s3`.
---
This PR adjusts the misaligned `writeback` logic to align with the `StoreUnit`. At the same time, it unifies the way the number of beats is calculated.
|
35bb7796 | 14-Apr-2025 |
Anzo <[email protected]> |
fix(LSU): fix exception for misalign access to `nc` space (#4526)
For misaligned accesses, if an access after the split goes to `nc` space, a misaligned exception should also be generated.
Co-authored-by: Yanqin Li <[email protected]>
|
724e3eb4 | 10-Apr-2025 |
Yanqin Li <[email protected]> |
fix(StoreQueue): keep readPtr until slave ack when outstanding (#4531) |
4ec1f462 | 09-Apr-2025 |
cz4e <[email protected]> |
timing(StoreMisalignBuffer): fix misalign buffer enq timing (#4493)
* A misaligned store enqueues the misalign buffer at s1, and is revoked at s2 if needed. |
1592abd1 | 08-Apr-2025 |
Yan Xu <[email protected]> |
feat: support inst lifetime trace (#4007)
PerfCCT (performance counter commit trace) is an instruction-level-granularity perf counter like GEM5's. How to use it:
1. Build with the `WITH_CHISELDB=1` argument.
2. Run with `--dump-db --dump-select-db lifetime` to get the database.
3. To visualize instruction lifetimes, run `python3 scripts/perfcct.py "the-db-file-path" -p 1 -v | less`.
4. The analysis script is now in the XS-GEM5 repo; see https://github.com/OpenXiangShan/GEM5/blob/xs-dev/util/ClockAnalysis.py

How it works:
1. Allocate one unique tag `seqNum` (like GEM5) for each instruction at the fetch stage.
2. Pass the `seqNum` through each pipeline stage.
3. Record perf data through the DPI-C interface.
|
522c7f99 | 07-Mar-2025 |
Anzo <[email protected]> |
fix(LSU): misaligned violation detection stuck (#4369)
Since a load instruction that crosses a 16-byte boundary needs to be split and accessed twice, it needs to enter the `RAR Queue` twice but occupies only one `virtual load queue` entry. In the extreme case, 36 load instructions that span 16 bytes can fill all 72 `RAR Queue` entries.
---
There was a problem with our previous handling: if the oldest load instruction spanning 16 bytes enters the `replayqueue` and, at the same time, there is an instruction in the `loadmisalignbuffer` that cannot finish executing because the `RAR Queue` is full, then the oldest load instruction can never be issued because the `loadmisalignbuffer` always has instructions in it.
---
Therefore, we use a more aggressive scheme: when the RAR queue is full, we let the misaligned load generate a rollback, and the next load instruction that the `loadmisalignbuffer` receives must then be the oldest one (if it is misaligned).
|
90f8d3cf | 06-Mar-2025 |
cz4e <[email protected]> |
fix(LoadUnit): exclude prefetch requests (#4367)
* In order to ensure timing, the RAR enqueue conditions need to be simplified; the worst timing paths come from `pmp` and `missQueue`.
* If `LoadQueueRARSize` == `VirtualLoadQueueSize`, we just need to exclude prefetches.
* If `LoadQueueRARSize` < `VirtualLoadQueueSize`, we also need to consider `s2_can_query` (see the sketch below).
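A hedged sketch of the resulting enqueue condition; apart from `s2_can_query`, which the message names, the signal and parameter spellings are illustrative:

```scala
// Sketch only: LoadQueueRARSize and VirtualLoadQueueSize are elaboration-time
// Scala parameters, so the extra s2_can_query term is generated only when the
// RAR queue is smaller than the virtual load queue.
val s2_rarCanEnq: Bool =
  if (LoadQueueRARSize == VirtualLoadQueueSize) !s2_isPrefetch
  else                                          !s2_isPrefetch && s2_can_query
```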
|
0d55e1db | 28-Feb-2025 |
cz4e <[email protected]> |
timing(LoadQueueRAR, LoadUnit): adjust rar/raw query logic (#4297)
* Because `LoadQueueRARSize == VirtualLoadQueueSize`, there is no need to add additional logic for RAR enqueue.
* When no fast replay is needed, the LoadUnit allocates a RAW entry.
|
4e7fa708 | 27-Feb-2025 |
zhanglinjuan <[email protected]> |
fix(StoreQueue): cbo.zero is written to sbuffer only if allocated (#4316)
For a misaligned store that crosses a 16-byte boundary, the store writes the sbuffer twice in one cycle but only takes up one SQ entry. If there is only one misaligned store in the SQ, `isCboZeroToSbVec`, which is used to check whether any cbo.zero has been written to the sbuffer based on the `fuOpType` in `uop`, may apply the wrong `fuOpType` from an empty SQ entry, or lead to X-state propagation in VCS simulation.
|
afa1262c | 24-Feb-2025 |
Yanqin Li <[email protected]> |
fix(LoadQueueUncache): exhaust the various cases of flush (#4300)
**Bug trigger point:**
The flush occurs during the `s_wait` phase. The entry has already passed the flush trigger condition of `io.uncache.resp.fire`, so no flush happens. As a result, `needFlushReg` remains in the register until the next new entry's `io.uncache.resp.fire`, at which point the normal entry is flushed, causing the program to get stuck.
**Bug analysis:** The granularity of flush handling is too coarse.
In the original calculation:
```
val flush = (needFlush && uncacheState === s_idle) || (io.uncache.resp.fire && needFlushReg)
```
flush is only handled for two cases: `s_idle` and non-`s_idle`. This distinction makes the handling of the three non-`s_idle` states very coarse. In fact, for those three states, the response needs to depend on when `needFlush` is generated and when `needFlushReg` is held in the register:
1. In the `s_req` state, before the uncache request is sent, the flush can be performed in time, using `needFlush` to prevent the request from being sent.
2. If the request has been sent and the state reaches `s_resp`, to avoid a mismatch between the uncache request and response, the flush can only be performed after receiving the uncache response, i.e., use `needFlush || needFlushReg` to flush on `io.uncache.resp.fire`.
3. If a flush occurs during the `s_wait` state, it can also prevent the write-back, using `needFlush` to flush in time.
**Bug Fix:**
For better code readability, the `uncacheState` state machine update is used here to drive the `flush` wire, where `flush` means actually executing the flush, `needFlush` is the signal that triggers the flush, and `needFlushReg` holds the flush signal for delayed processing.
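A hedged sketch of what the per-state flush computation could look like inside the existing state machine; the state and signal names follow the analysis above, but the actual fix may be written differently:

```scala
// Sketch only: lives inside the existing LoadQueueUncache entry logic
// (requires chisel3.util._ for switch/is).
val flush = Wire(Bool())
flush := false.B
switch(uncacheState) {
  is(s_idle) { flush := needFlush }  // not yet issued: flush immediately
  is(s_req)  { flush := needFlush }  // before req.fire: suppress the request
  is(s_resp) { flush := (needFlush || needFlushReg) && io.uncache.resp.fire } // wait for the matching response
  is(s_wait) { flush := needFlush }  // before write-back: flush in time
}
```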
|
1eb8dd22 | 24-Feb-2025 |
Kunlin You <[email protected]> |
submodule(utility), XSDebug: support collecting missing XSDebug (#4251)
Previously, in PR #3982, we supported collecting XSLogs to LogPerfEndpoint.
However, with --enable-log, we should also collect some missing XSDebug.
This change moves these missing XSDebug calls outside WhenContext and adds
WireInit to LogUtils' apply, to enable probing some sub-accessed data,
like a Vec element with a dynamic index.
|
a7904e27 | 24-Feb-2025 |
Anzo <[email protected]> |
fix(StoreQueue): fix threshold condition for force write sbuffer (#4306)
Previously, the `ForceWrite` thresholds were hard-coded (60, 55), which no longer applies after we adjusted `StoreQueueSize`.
---
Now a more reasonable parameterized setting is used. However, the conditions for optimal performance still need to be tested.
|
a94ed9a2 | 20-Feb-2025 |
cz4e <[email protected]> |
timing(STU, StoreMisalignBuffer): adjust misalign buffer enq logic (#4254)
Only misaligned store instructions without exceptions enter the misalign buffer. Due to the exception timing difference introduced by the `PMA` check, the misalign buffer's rejection condition has bad timing, which in turn leads to bad timing on `feedback_slow.hit`.
|
3c808de0 | 17-Feb-2025 |
Anzo <[email protected]> |
fix(LSU): fix cbo instr exceptions and implementation (#4262)
1. Fix a typo.
2. `cbo` instructions do not produce misaligned exceptions.
3. `cbo zero` needs to flush the `sbuffer`.
4. `cbo zero` sets the mask correctly.
5. Add RAW checks for `cbo zero`.
6. Add trigger (Debug Mode) checks for `cbo zero`.
7. Fix several issues with the CBO instructions in NEMU.
----
In order not to create ambiguity with `io.mmioStout`, a new `StoreQueue` port
is introduced for writing back `cbo zero` after the sbuffer flush.
Arbitration is performed in `MemBlock`, and currently `cbo zero` has
higher priority by default.
`cbo zero` should not be written back at the same time as `mmio`.
---
A check on the cache line has been added to `RAWQueue` to ensure memory
consistency when executing `cbo zero`.
See https://github.com/OpenXiangShan/XiangShan/issues/4240 for details.
---
The `cbo` instruction requires a trigger check.
---------
Co-authored-by: zhanglinjuan <[email protected]>
|
638f3d84 | 17-Feb-2025 |
Yanqin Li <[email protected]> |
fix(uncache): uncache load fails to replay (#4275)
Fixed the situation where `nc_with_data` was not replayed correctly. |
9e12e8ed | 08-Feb-2025 |
cz4e <[email protected]> |
style(Bundles): move bundles to Bundles.scala (#4247) |
c590fb32 | 08-Feb-2025 |
cz4e <[email protected]> |
refactor(MemBlock): move MemBlock.scala from backend to mem (#4221) |
75efee3d | 27-Jan-2025 |
Anzo <[email protected]> |
fix(StoreMisalignBuffer): fix state transition when writeback (#4227)
An assignment was overwritten because a `.otherwise` was missing. |
74050fc0 | 26-Jan-2025 |
Yanqin Li <[email protected]> |
perf(Uncache): add merge policy when entering (#4154)
# Background
## Problem
How to design a more efficient entry rule for a new load/store request when a load/store with the same address already exists in the `ubuffer`?
* **Old Design**: Always **reject** the new request.
* **New Design**: Consider **merging** requests.
## Merge Scenarios
‼️ If the new request can be merged into the existing one, both need to be `NC`.
1. **New Store Request:**
   1. **Existing Store:** Merge (the new store is younger).
   2. **Existing Load:** Reject.
2. **New Load Request:**
   1. **Existing Load:** Merge (the new load may be younger or older; both are ok to merge).
   2. **Existing Store:** Reject.
# What does this PR do?
## 1. Entry Actions
1. **Allocate** a new entry and mark it as `valid`:
   1. When there is no matching address.
2. **Allocate** a new entry and mark it as `valid` and `waitSame`:
   1. When there is a matching address, and:
      * The virtual addresses and attributes are the same.
      * The older entry is either selected to issue or issued.
3. **Merge** into an existing entry:
   1. When there is a matching address, and:
      * The virtual addresses and attributes are the same.
      * The older entry is **not** selected to issue or issued.
4. **Reject** the new request:
   1. When the ubuffer is full.
   2. When there is a matching address, but:
      * The virtual addresses or attributes are **different**.
**NOTE:** According to the definition in the TL-UL SPEC, the `mask` must be continuous and naturally aligned, and the `addr` must correspond to the mask. Therefore, the "**same attributes**" here introduces a new condition: the merged `mask` must meet the requirements of being continuous and naturally aligned (function `continueAndAlign`). During merging, the block offset of addr must be synchronously updated in `UncacheEntry.update`.
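As an illustration of the mask requirement, here is a hedged sketch of what a `continueAndAlign` check could look like, assuming an 8-byte ubuffer beat; it is not necessarily the actual XiangShan implementation:

```scala
import chisel3._

object UbufferMaskCheckSketch {
  // A mask is accepted only if it is a contiguous run of ones whose length is
  // a power of two and whose offset is a multiple of that length, i.e. the
  // byte lanes a single naturally aligned TL-UL beat may carry.
  def continueAndAlign(mask: UInt): Bool = {
    val legal = Seq(
      0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40, 0x80, // 1 byte
      0x03, 0x0c, 0x30, 0xc0,                         // aligned 2 bytes
      0x0f, 0xf0,                                     // aligned 4 bytes
      0xff                                            // full 8 bytes
    )
    legal.map(m => mask === m.U(8.W)).reduce(_ || _)
  }
}
```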
## 2. Handshake Mechanism Between `LoadQueueUncache (M)` and `Uncache (S)`
> `mid`: master id
>
> `sid`: slave id
**Old Design:**
- `M` sends a `req` with a **`mid`**.
- `S` receives the `req`, records the **`mid`**.
- `S` sends a `resp` with the **`mid`**.
- `M` receives the `resp` and matches it with the recorded **`mid`**.
**New Design:**
- `M` sends a `req` with a **`mid`**.
- `S` receives the `req` and responds with `{mid, sid}`.
- `M` matches it with the **`mid`** and updates its record with the received **`sid`**.
- `S` sends a `resp` with its **`sid`**.
- `M` receives the `resp` and matches it with the recorded **`sid`**.
**Benefit:** The new design allows `S` to merge requests when a new request enters.
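A hedged sketch of the id flow in the new handshake; the bundle and field names (and the 4-bit widths) are illustrative, not the exact LoadQueueUncache/Uncache interface:

```scala
import chisel3._

// M -> S: request tagged with the master id.
class UncacheReqIdSketch  extends Bundle { val mid = UInt(4.W) }
// S -> M: early acknowledge pairing the mid with the slave-side id.
class UncacheAckIdSketch  extends Bundle { val mid = UInt(4.W); val sid = UInt(4.W) }
// S -> M: the final response carries only the sid.
class UncacheRespIdSketch extends Bundle { val sid = UInt(4.W) }
// M matches the ack against its outstanding mid, records the sid, and later
// matches the response against that recorded sid. Because the sid is assigned
// by S, S is free to merge a new request into an existing entry.
```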
## 3. Forwarding Mechanism
**Old Design:** Each address in the `ubuffer` is **unique**, so forwarding is straightforward based on a match.
**New Design:**
* A single address may have up to two matched entries in the `ubuffer`.
* If there are two matched entries, one must be marked `inflight` and the other `waitSame`. In this case, the forwarded data comes from the merged data of the two entries, with the `inflight` entry being the older one.
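A hedged sketch of merging the forward data from the (at most) two matched entries, assuming an 8-byte data width and illustrative entry/field names:

```scala
// Sketch only: the waitSame entry is the younger one, so its valid bytes
// override the older inflight entry byte by byte.
val fwdMask = inflightMask | waitSameMask
val fwdData = VecInit((0 until 8).map { i =>
  Mux(waitSameMask(i), waitSameData(8 * i + 7, 8 * i),
                       inflightData(8 * i + 7, 8 * i))
})
```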
## 4. Bug Fixes
1. In the `loadUnit`, `!tlbMiss` cannot be directly used as `tlbHit`, because when `tlbValid` is false, `!tlbMiss` can still be true.
2. `Uncache` state machine transition: the state indicating "**able to send requests**" (previously `s_refill_req`, now `s_inflight`) should not be triggered by `reqFire` but rather by `acquireFire`.
<img width="747" alt="image" src="https://github.com/user-attachments/assets/75fbc761-1da8-43d9-a0e6-615cc58cefef" />
# Evaluation
- ✅ timing
- ✅ performance
| Type | 4B*1000 | Speedup1-IO | 1B*4096 | Speedup2-IO |
| -------------- | ------- | ----------- | ------- | ----------- |
| IO | 51026 | 1 | 208149 | 1.00 |
| NC | 42343 | 1.21 | 169248 | 1.23 |
| NC+OT | 20379 | 2.50 | 160101 | 1.30 |
| NC+OT+mergeOpt | 16308 | 3.13 | 126369 | 1.65 |
| cache | 1298 | 39.31 | 4410 | 47.20 |
|
1abade56 | 22-Jan-2025 |
Anzo <[email protected]> |
fix(LSU): fix cbo instruction exception handling logic (#4215) |
e836c770 | 16-Jan-2025 |
Zhaoyang You <[email protected]> |
feat(TopDown): add TopDown PMU Events (#4122)
This PR adds hardware-synthesizable, three-level categorized TopDown performance counters.
Level-1: Retiring, Frontend Bound, Bad Speculation, Backend Bound.
Level-2: Fetch Latency Bound, Fetch Bandwidth Bound, Branch Misprediction, Machine Clears, Core Bound, Memory Bound.
Level-3: L1 Bound, L2 Bound, L3 Bound, Mem Bound, Store Bound.
|
0b4afd34 | 15-Jan-2025 |
cz4e <[email protected]> |
timing(LoadUnit): optimization load unit writeback data generate logic (#4167)
Optimize the load unit writeback data generation logic:
* merge multi-source data at `s2`, then select and expand data at `s3`;
* select data using one-hot instead of a shifter (see the sketch below).
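A hedged sketch of the one-hot selection idea, assuming 8 data bytes and illustrative names; the real LoadUnit data path is more involved:

```scala
// Sketch only: Mux1H collapses the byte select into an AND-OR tree, which is
// friendlier for timing than a variable shifter such as data >> (offset << 3).
val byteOH  = UIntToOH(addrOffset, 8)                               // one-hot byte offset (chisel3.util._)
val byteVec = VecInit((0 until 8).map(i => data(8 * i + 7, 8 * i))) // split data into byte lanes
val selByte = Mux1H(byteOH, byteVec)                                // lowest selected byte; expansion happens at s3
```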
|