PSCI Performance Measurements on Arm Juno Development Platform
==============================================================

This document summarises the findings of performance measurements of key
operations in the Trusted Firmware-A Power State Coordination Interface (PSCI)
implementation, using the in-built Performance Measurement Framework (PMF) and
runtime instrumentation timestamps.

Method
------

We used the `Juno R1 platform`_ for these tests, which has a cluster of 4 x
Cortex-A53 cores and a cluster of 2 x Cortex-A57 cores, running at the
following frequencies:

+-----------------+--------------------+
| Domain          | Frequency (MHz)    |
+=================+====================+
| Cortex-A57      | 900 (nominal)      |
+-----------------+--------------------+
| Cortex-A53      | 650 (underdrive)   |
+-----------------+--------------------+
| AXI subsystem   | 533                |
+-----------------+--------------------+

Juno supports CPU, cluster and system power down states, corresponding to power
levels 0, 1 and 2 respectively. It does not support any retention states.

Given that runtime instrumentation using PMF is invasive, there is a small
(unquantified) overhead on the results. PMF uses the generic counter for
timestamps, which runs at 50MHz on Juno.
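
As a rough illustration of that resolution, a 50MHz counter gives 50 ticks per
microsecond, so converting a raw timestamp delta to microseconds is a single
division (a host-side sketch, not TF-A or TFTF code):

```python
# Convert generic counter ticks to microseconds on Juno, where the
# counter runs at 50 MHz (i.e. 50 ticks per microsecond, 20 ns per tick).
CNTFRQ_HZ = 50_000_000

def ticks_to_us(ticks: int) -> float:
    """Convert a raw generic counter delta to microseconds."""
    return ticks * 1_000_000 / CNTFRQ_HZ

# 1350 ticks correspond to 27.0 µs; a single tick is 0.02 µs, which is
# the resolution floor of the latency figures reported below.
print(ticks_to_us(1350))  # 27.0
```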

The following source trees and binaries were used:

- `TF-A v2.11-rc0`_
- `TFTF v2.11-rc0`_

Please see the Runtime Instrumentation :ref:`Testing Methodology
<Runtime Instrumentation Methodology>`
page for more details.

Procedure
---------

#. Build TFTF with runtime instrumentation enabled:

    .. code:: shell

        make CROSS_COMPILE=aarch64-none-elf- PLAT=juno \
            TESTS=runtime-instrumentation all

#. Fetch Juno's SCP binary from TF-A's archive:

    .. code:: shell

        curl --fail --connect-timeout 5 --retry 5 -sLS -o scp_bl2.bin \
            https://downloads.trustedfirmware.org/tf-a/css_scp_2.12.0/juno/release/juno-bl2.bin

#. Build TF-A with the following build options:

    .. code:: shell

        make CROSS_COMPILE=aarch64-none-elf- PLAT=juno \
            BL33="/path/to/tftf.bin" SCP_BL2="scp_bl2.bin" \
            ENABLE_RUNTIME_INSTRUMENTATION=1 fiptool all fip

#. Load the following images onto the development board: ``fip.bin``,
   ``scp_bl2.bin``.

Results
-------

``CPU_SUSPEND`` to deepest power level
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. table:: ``CPU_SUSPEND`` latencies (µs) to deepest power level in
        parallel (v2.11)

    +---------+------+-------------------+--------------------+-------------+
    | Cluster | Core |     Powerdown     |       Wakeup       | Cache Flush |
    +---------+------+-------------------+--------------------+-------------+
    |    0    |  0   |  112.98 (-53.44%) |  26.16 (-89.33%)   |     5.48    |
    +---------+------+-------------------+--------------------+-------------+
    |    0    |  1   |       411.18      | 438.88 (+1572.56%) |    138.54   |
    +---------+------+-------------------+--------------------+-------------+
    |    1    |  0   | 261.82 (+150.88%) | 474.06 (+1649.30%) |     5.6     |
    +---------+------+-------------------+--------------------+-------------+
    |    1    |  1   |  714.76 (+86.84%) |       26.44        |     4.48    |
    +---------+------+-------------------+--------------------+-------------+
    |    1    |  2   |       862.66      |  149.34 (-45.00%)  |     4.38    |
    +---------+------+-------------------+--------------------+-------------+
    |    1    |  3   |      1045.12      |  98.12 (-55.76%)   |    79.74    |
    +---------+------+-------------------+--------------------+-------------+

.. table:: ``CPU_SUSPEND`` latencies (µs) to deepest power level in
        parallel (v2.10)

    +---------+------+-------------------+--------+-------------+
    | Cluster | Core |     Powerdown     | Wakeup | Cache Flush |
    +---------+------+-------------------+--------+-------------+
    |    0    |  0   | 242.66 (+132.03%) | 245.1  |     5.4     |
    +---------+------+-------------------+--------+-------------+
    |    0    |  1   |  522.08 (+35.87%) | 26.24  |    138.32   |
    +---------+------+-------------------+--------+-------------+
    |    1    |  0   |  104.36 (-57.33%) |  27.1  |     5.32    |
    +---------+------+-------------------+--------+-------------+
    |    1    |  1   |  382.56 (-42.95%) | 23.34  |     4.42    |
    +---------+------+-------------------+--------+-------------+
    |    1    |  2   |       807.74      | 271.54 |     4.64    |
    +---------+------+-------------------+--------+-------------+
    |    1    |  3   |       981.36      | 221.8  |    79.48    |
    +---------+------+-------------------+--------+-------------+

.. table:: ``CPU_SUSPEND`` latencies (µs) to deepest power level in
        serial (v2.11)

    +---------+------+-----------+--------+-------------+
    | Cluster | Core | Powerdown | Wakeup | Cache Flush |
    +---------+------+-----------+--------+-------------+
    |    0    |  0   |   244.42  | 27.42  |    138.12   |
    +---------+------+-----------+--------+-------------+
    |    0    |  1   |   245.02  | 27.34  |    138.08   |
    +---------+------+-----------+--------+-------------+
    |    1    |  0   |   297.66  |  26.2  |    77.68    |
    +---------+------+-----------+--------+-------------+
    |    1    |  1   |   108.02  | 21.94  |     4.52    |
    +---------+------+-----------+--------+-------------+
    |    1    |  2   |   107.48  | 21.88  |     4.46    |
    +---------+------+-----------+--------+-------------+
    |    1    |  3   |   107.52  | 21.86  |     4.46    |
    +---------+------+-----------+--------+-------------+

.. table:: ``CPU_SUSPEND`` latencies (µs) to deepest power level in
        serial (v2.10)

    +---------+------+-----------+--------+-------------+
    | Cluster | Core | Powerdown | Wakeup | Cache Flush |
    +---------+------+-----------+--------+-------------+
    |    0    |  0   |   236.84  |  27.1  |    138.36   |
    +---------+------+-----------+--------+-------------+
    |    0    |  1   |   236.96  |  27.1  |    138.32   |
    +---------+------+-----------+--------+-------------+
    |    1    |  0   |   280.06  | 26.94  |     77.5    |
    +---------+------+-----------+--------+-------------+
    |    1    |  1   |   100.76  | 23.42  |     4.36    |
    +---------+------+-----------+--------+-------------+
    |    1    |  2   |   100.02  | 23.42  |     4.44    |
    +---------+------+-----------+--------+-------------+
    |    1    |  3   |   100.08  |  23.2  |     4.4     |
    +---------+------+-----------+--------+-------------+

``CPU_SUSPEND`` to power level 0
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. table:: ``CPU_SUSPEND`` latencies (µs) to power level 0 in
        parallel (v2.11)

    +---------+------+-------------------+--------+-------------+
    | Cluster | Core |     Powerdown     | Wakeup | Cache Flush |
    +---------+------+-------------------+--------+-------------+
    |    0    |  0   |       704.46      | 19.28  |     7.86    |
    +---------+------+-------------------+--------+-------------+
    |    0    |  1   |       853.66      | 18.78  |     7.82    |
    +---------+------+-------------------+--------+-------------+
    |    1    |  0   | 556.52 (+425.51%) | 19.06  |     7.82    |
    +---------+------+-------------------+--------+-------------+
    |    1    |  1   |  113.28 (-70.47%) | 19.28  |     7.48    |
    +---------+------+-------------------+--------+-------------+
    |    1    |  2   |  260.62 (-50.22%) |  19.8  |     7.26    |
    +---------+------+-------------------+--------+-------------+
    |    1    |  3   |  408.16 (+66.94%) | 19.82  |     7.38    |
    +---------+------+-------------------+--------+-------------+

.. table:: ``CPU_SUSPEND`` latencies (µs) to power level 0 in
        parallel (v2.10)

    +---------+------+-------------------+--------+-------------+
    | Cluster | Core |     Powerdown     | Wakeup | Cache Flush |
    +---------+------+-------------------+--------+-------------+
    |    0    |  0   |       801.04      | 18.66  |     8.22    |
    +---------+------+-------------------+--------+-------------+
    |    0    |  1   |       661.28      | 19.08  |     7.88    |
    +---------+------+-------------------+--------+-------------+
    |    1    |  0   |  105.9 (-72.51%)  |  20.3  |     7.58    |
    +---------+------+-------------------+--------+-------------+
    |    1    |  1   | 383.58 (+261.32%) |  20.4  |     7.42    |
    +---------+------+-------------------+--------+-------------+
    |    1    |  2   |       523.52      |  20.1  |     7.74    |
    +---------+------+-------------------+--------+-------------+
    |    1    |  3   |       244.5       | 20.16  |     7.56    |
    +---------+------+-------------------+--------+-------------+

.. table:: ``CPU_SUSPEND`` latencies (µs) to power level 0 in serial (v2.11)

    +---------+------+-----------+--------+-------------+
    | Cluster | Core | Powerdown | Wakeup | Cache Flush |
    +---------+------+-----------+--------+-------------+
    |    0    |  0   |   106.78  |  19.2  |     5.32    |
    +---------+------+-----------+--------+-------------+
    |    0    |  1   |   107.44  | 19.64  |     5.44    |
    +---------+------+-----------+--------+-------------+
    |    1    |  0   |   295.82  | 19.14  |     4.34    |
    +---------+------+-----------+--------+-------------+
    |    1    |  1   |   104.34  | 19.18  |     4.28    |
    +---------+------+-----------+--------+-------------+
    |    1    |  2   |   103.96  | 19.34  |     4.4     |
    +---------+------+-----------+--------+-------------+
    |    1    |  3   |   104.32  | 19.18  |     4.34    |
    +---------+------+-----------+--------+-------------+

.. table:: ``CPU_SUSPEND`` latencies (µs) to power level 0 in serial (v2.10)

    +---------+------+-----------+--------+-------------+
    | Cluster | Core | Powerdown | Wakeup | Cache Flush |
    +---------+------+-----------+--------+-------------+
    |    0    |  0   |   99.84   | 18.86  |     5.54    |
    +---------+------+-----------+--------+-------------+
    |    0    |  1   |   100.2   | 18.82  |     5.66    |
    +---------+------+-----------+--------+-------------+
    |    1    |  0   |   278.12  | 20.56  |     4.48    |
    +---------+------+-----------+--------+-------------+
    |    1    |  1   |   96.68   | 20.62  |     4.3     |
    +---------+------+-----------+--------+-------------+
    |    1    |  2   |   96.94   | 20.14  |     4.42    |
    +---------+------+-----------+--------+-------------+
    |    1    |  3   |   96.68   | 20.46  |     4.32    |
    +---------+------+-----------+--------+-------------+

``CPU_OFF`` on all non-lead CPUs
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

``CPU_OFF`` is called on all non-lead CPUs in sequence, then ``CPU_SUSPEND`` is
called on the lead core to the deepest power level.

.. table:: ``CPU_OFF`` latencies (µs) on all non-lead CPUs (v2.11)

    +---------+------+-----------+--------+-------------+
    | Cluster | Core | Powerdown | Wakeup | Cache Flush |
    +---------+------+-----------+--------+-------------+
    |    0    |  0   |   243.62  | 29.84  |    137.66   |
    +---------+------+-----------+--------+-------------+
    |    0    |  1   |   243.88  | 29.54  |    137.8    |
    +---------+------+-----------+--------+-------------+
    |    1    |  0   |   183.26  | 26.22  |    77.76    |
    +---------+------+-----------+--------+-------------+
    |    1    |  1   |   107.64  | 26.74  |     4.34    |
    +---------+------+-----------+--------+-------------+
    |    1    |  2   |   107.52  |  25.9  |     4.32    |
    +---------+------+-----------+--------+-------------+
    |    1    |  3   |   107.74  |  25.8  |     4.34    |
    +---------+------+-----------+--------+-------------+

.. table:: ``CPU_OFF`` latencies (µs) on all non-lead CPUs (v2.10)

    +---------+------+-----------+--------+-------------+
    | Cluster | Core | Powerdown | Wakeup | Cache Flush |
    +---------+------+-----------+--------+-------------+
    |    0    |  0   |   236.04  | 30.02  |    137.9    |
    +---------+------+-----------+--------+-------------+
    |    0    |  1   |   235.38  |  29.7  |    137.72   |
    +---------+------+-----------+--------+-------------+
    |    1    |  0   |   175.18  | 26.96  |    77.26    |
    +---------+------+-----------+--------+-------------+
    |    1    |  1   |   100.56  | 28.34  |     4.32    |
    +---------+------+-----------+--------+-------------+
    |    1    |  2   |   100.38  | 26.82  |     4.3     |
    +---------+------+-----------+--------+-------------+
    |    1    |  3   |   100.86  | 26.98  |     4.42    |
    +---------+------+-----------+--------+-------------+

``PSCI_VERSION`` in parallel
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. table:: ``PSCI_VERSION`` latency (µs) in parallel on all cores (v2.11)

    +-------------+--------+--------------+
    |   Cluster   |  Core  |   Latency    |
    +-------------+--------+--------------+
    |      0      |   0    |     1.26     |
    +-------------+--------+--------------+
    |      0      |   1    |     0.96     |
    +-------------+--------+--------------+
    |      1      |   0    |     0.54     |
    +-------------+--------+--------------+
    |      1      |   1    |     0.94     |
    +-------------+--------+--------------+
    |      1      |   2    |     0.92     |
    +-------------+--------+--------------+
    |      1      |   3    |     1.02     |
    +-------------+--------+--------------+

.. table:: ``PSCI_VERSION`` latency (µs) in parallel on all cores (v2.10)

    +-------------+--------+----------------------+
    |   Cluster   |  Core  |       Latency        |
    +-------------+--------+----------------------+
    |      0      |   0    |    1.1 (-25.68%)     |
    +-------------+--------+----------------------+
    |      0      |   1    |         1.06         |
    +-------------+--------+----------------------+
    |      1      |   0    |         0.58         |
    +-------------+--------+----------------------+
    |      1      |   1    |         0.88         |
    +-------------+--------+----------------------+
    |      1      |   2    |         0.92         |
    +-------------+--------+----------------------+
    |      1      |   3    |         0.9          |
    +-------------+--------+----------------------+

Annotated Historic Results
--------------------------

The following results are based on the upstream `TF master as of 31/01/2017`_.
TF-A was built using the same build instructions as detailed in the procedure
above.

In the results below, CPUs 0-3 refer to CPUs in the little cluster (A53) and
CPUs 4-5 refer to CPUs in the big cluster (A57). In all cases CPU 4 is the lead
CPU.

``PSCI_ENTRY`` corresponds to the powerdown latency, ``PSCI_EXIT`` to the
wakeup latency, and ``CFLUSH_OVERHEAD`` to the latency of the cache flush
operation.
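
Each of these figures is a simple difference of per-CPU timestamps captured at
the instrumentation points (PSCI entry/exit, the cache flush window, and the
hand-off to the hardware low-power state). The sketch below illustrates the
arithmetic; the timestamp names are illustrative, modelled on the
instrumentation points rather than the literal TFTF identifiers:

```python
# Sketch: deriving the three reported latencies from per-CPU runtime
# instrumentation timestamps (all input values in generic counter ticks).
TICKS_PER_US = 50  # Juno generic counter runs at 50 MHz

def latencies_us(ts: dict) -> dict:
    return {
        # Entering the PSCI implementation to requesting the hardware
        # low-power state.
        "psci_entry": (ts["enter_hw_low_pwr"] - ts["enter_psci"]) / TICKS_PER_US,
        # Leaving the low-power state to returning from PSCI.
        "psci_exit": (ts["exit_psci"] - ts["exit_hw_low_pwr"]) / TICKS_PER_US,
        # Time spent in the cache maintenance (flush) window.
        "cflush_overhead": (ts["exit_cflush"] - ts["enter_cflush"]) / TICKS_PER_US,
    }

# Hypothetical tick values chosen to reproduce CPU 0's row (27 / 20 / 5 µs)
# in the first table below.
sample = {
    "enter_psci": 0, "enter_cflush": 100, "exit_cflush": 350,
    "enter_hw_low_pwr": 1350, "exit_hw_low_pwr": 2000, "exit_psci": 3000,
}
print(latencies_us(sample))
```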

``CPU_SUSPEND`` to deepest power level on all CPUs in parallel
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+-------+---------------------+--------------------+--------------------------+
| CPU   | ``PSCI_ENTRY`` (µs) | ``PSCI_EXIT`` (µs) | ``CFLUSH_OVERHEAD`` (µs) |
+=======+=====================+====================+==========================+
| 0     | 27                  | 20                 | 5                        |
+-------+---------------------+--------------------+--------------------------+
| 1     | 114                 | 86                 | 5                        |
+-------+---------------------+--------------------+--------------------------+
| 2     | 202                 | 58                 | 5                        |
+-------+---------------------+--------------------+--------------------------+
| 3     | 375                 | 29                 | 94                       |
+-------+---------------------+--------------------+--------------------------+
| 4     | 20                  | 22                 | 6                        |
+-------+---------------------+--------------------+--------------------------+
| 5     | 290                 | 18                 | 206                      |
+-------+---------------------+--------------------+--------------------------+

A large variance in ``PSCI_ENTRY`` and ``PSCI_EXIT`` times across CPUs is
observed due to TF PSCI lock contention. In the worst case, CPU 3 has to wait
for the 3 other CPUs in the cluster (0-2) to complete ``PSCI_ENTRY`` and release
the lock before proceeding.

The ``CFLUSH_OVERHEAD`` times for CPUs 3 and 5 are higher because they are the
last CPUs in their respective clusters to power down, so both the L1 and L2
caches are flushed.

The ``CFLUSH_OVERHEAD`` time for CPU 5 is a lot larger than that for CPU 3
because the L2 cache for the big cluster (2MB) is much larger than that for the
little cluster (1MB).

``CPU_SUSPEND`` to power level 0 on all CPUs in parallel
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+-------+---------------------+--------------------+--------------------------+
| CPU   | ``PSCI_ENTRY`` (µs) | ``PSCI_EXIT`` (µs) | ``CFLUSH_OVERHEAD`` (µs) |
+=======+=====================+====================+==========================+
| 0     | 116                 | 14                 | 8                        |
+-------+---------------------+--------------------+--------------------------+
| 1     | 204                 | 14                 | 8                        |
+-------+---------------------+--------------------+--------------------------+
| 2     | 287                 | 13                 | 8                        |
+-------+---------------------+--------------------+--------------------------+
| 3     | 376                 | 13                 | 9                        |
+-------+---------------------+--------------------+--------------------------+
| 4     | 29                  | 15                 | 7                        |
+-------+---------------------+--------------------+--------------------------+
| 5     | 21                  | 15                 | 8                        |
+-------+---------------------+--------------------+--------------------------+

There is no lock contention in TF generic code at power level 0, but the large
variance in ``PSCI_ENTRY`` times across CPUs is due to lock contention in Juno
platform code. The platform lock is used to mediate access to a single SCP
communication channel. This is compounded by the SCP firmware waiting for each
AP CPU to enter WFI before making the channel available to other CPUs, which
effectively serializes the SCP power down commands from all CPUs.

On platforms with a more efficient CPU power down mechanism, it should be
possible to make the ``PSCI_ENTRY`` times smaller and more consistent.

The ``PSCI_EXIT`` times are consistent across all CPUs because TF does not
require locks at power level 0.

The ``CFLUSH_OVERHEAD`` times for all CPUs are small and consistent since only
the cache associated with power level 0 is flushed (L1).

``CPU_SUSPEND`` to deepest power level on all CPUs in sequence
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+-------+---------------------+--------------------+--------------------------+
| CPU   | ``PSCI_ENTRY`` (µs) | ``PSCI_EXIT`` (µs) | ``CFLUSH_OVERHEAD`` (µs) |
+=======+=====================+====================+==========================+
| 0     | 114                 | 20                 | 94                       |
+-------+---------------------+--------------------+--------------------------+
| 1     | 114                 | 20                 | 94                       |
+-------+---------------------+--------------------+--------------------------+
| 2     | 114                 | 20                 | 94                       |
+-------+---------------------+--------------------+--------------------------+
| 3     | 114                 | 20                 | 94                       |
+-------+---------------------+--------------------+--------------------------+
| 4     | 195                 | 22                 | 180                      |
+-------+---------------------+--------------------+--------------------------+
| 5     | 21                  | 17                 | 6                        |
+-------+---------------------+--------------------+--------------------------+

The ``CFLUSH_OVERHEAD`` times for lead CPU 4 and all CPUs in the non-lead
cluster are large because all other CPUs in the cluster are powered down during
the test. The ``CPU_SUSPEND`` call powers down to the cluster level, requiring a
flush of both L1 and L2 caches.

The ``CFLUSH_OVERHEAD`` time for CPU 4 is a lot larger than those for the little
CPUs because the L2 cache for the big cluster (2MB) is much larger than that for
the little cluster (1MB).

The ``PSCI_ENTRY`` and ``CFLUSH_OVERHEAD`` times for CPU 5 are low because lead
CPU 4 continues to run while CPU 5 is suspended. Hence CPU 5 only powers down to
level 0, which only requires an L1 cache flush.

``CPU_SUSPEND`` to power level 0 on all CPUs in sequence
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+-------+---------------------+--------------------+--------------------------+
| CPU   | ``PSCI_ENTRY`` (µs) | ``PSCI_EXIT`` (µs) | ``CFLUSH_OVERHEAD`` (µs) |
+=======+=====================+====================+==========================+
| 0     | 22                  | 14                 | 5                        |
+-------+---------------------+--------------------+--------------------------+
| 1     | 22                  | 14                 | 5                        |
+-------+---------------------+--------------------+--------------------------+
| 2     | 21                  | 14                 | 5                        |
+-------+---------------------+--------------------+--------------------------+
| 3     | 22                  | 14                 | 5                        |
+-------+---------------------+--------------------+--------------------------+
| 4     | 17                  | 14                 | 6                        |
+-------+---------------------+--------------------+--------------------------+
| 5     | 18                  | 15                 | 6                        |
+-------+---------------------+--------------------+--------------------------+

Here the times are small and consistent since there is no contention and it is
only necessary to flush the cache to power level 0 (L1). This is the best case
scenario.

The ``PSCI_ENTRY`` times for CPUs in the big cluster are slightly smaller than
those for the CPUs in the little cluster due to greater CPU performance.

The ``PSCI_EXIT`` times are generally lower than in the last test because the
cluster remains powered on throughout the test and there is less code to execute
on power on (for example, no need to enter CCI coherency).

``CPU_OFF`` on all non-lead CPUs in sequence then ``CPU_SUSPEND`` on lead CPU to deepest power level
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The test sequence here is as follows:

1. Call ``CPU_ON`` and ``CPU_OFF`` on each non-lead CPU in sequence.

2. Program the wake up timer and suspend the lead CPU to the deepest power
   level.

3. Call ``CPU_ON`` on each non-lead CPU to collect the timestamps from each
   CPU.

+-------+---------------------+--------------------+--------------------------+
| CPU   | ``PSCI_ENTRY`` (µs) | ``PSCI_EXIT`` (µs) | ``CFLUSH_OVERHEAD`` (µs) |
+=======+=====================+====================+==========================+
| 0     | 110                 | 28                 | 93                       |
+-------+---------------------+--------------------+--------------------------+
| 1     | 110                 | 28                 | 93                       |
+-------+---------------------+--------------------+--------------------------+
| 2     | 110                 | 28                 | 93                       |
+-------+---------------------+--------------------+--------------------------+
| 3     | 111                 | 28                 | 93                       |
+-------+---------------------+--------------------+--------------------------+
| 4     | 195                 | 22                 | 181                      |
+-------+---------------------+--------------------+--------------------------+
| 5     | 20                  | 23                 | 6                        |
+-------+---------------------+--------------------+--------------------------+

The ``CFLUSH_OVERHEAD`` times for all little CPUs are large because all other
CPUs in that cluster are powered down during the test. The ``CPU_OFF`` call
powers down to the cluster level, requiring a flush of both L1 and L2 caches.

The ``PSCI_ENTRY`` and ``CFLUSH_OVERHEAD`` times for CPU 5 are small because
lead CPU 4 is running and CPU 5 only powers down to level 0, which only requires
an L1 cache flush.

The ``CFLUSH_OVERHEAD`` time for CPU 4 is a lot larger than those for the little
CPUs because the L2 cache for the big cluster (2MB) is much larger than that for
the little cluster (1MB).

The ``PSCI_EXIT`` times for CPUs in the big cluster are slightly smaller than
those for CPUs in the little cluster due to greater CPU performance. These times
are generally greater than the ``PSCI_EXIT`` times in the ``CPU_SUSPEND`` tests
because there is more code to execute in the "on finisher" compared to the
"suspend finisher" (for example, GIC redistributor register programming).

``PSCI_VERSION`` on all CPUs in parallel
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Since very little code is associated with ``PSCI_VERSION``, this test
approximates the round trip latency for handling a fast SMC at EL3 in TF.
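
Since the generic counter ticks at 50MHz, these nanosecond figures are
quantised to 20 ns per tick. A quick sanity check (host-side, not part of the
test suite) confirms the values in the table below are exact multiples of the
counter period:

```python
# With the generic counter at 50 MHz, each tick is 20 ns, so every
# reported round-trip time should be a whole number of ticks.
TICK_NS = 1_000_000_000 // 50_000_000  # 20 ns per tick

times_ns = [3020, 2940, 2980, 3060, 520, 720]  # values from the table below

assert all(t % TICK_NS == 0 for t in times_ns)
ticks = [t // TICK_NS for t in times_ns]
print(ticks)  # [151, 147, 149, 153, 26, 36]
```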

+-------+-------------------+
| CPU   | TOTAL TIME (ns)   |
+=======+===================+
| 0     | 3020              |
+-------+-------------------+
| 1     | 2940              |
+-------+-------------------+
| 2     | 2980              |
+-------+-------------------+
| 3     | 3060              |
+-------+-------------------+
| 4     | 520               |
+-------+-------------------+
| 5     | 720               |
+-------+-------------------+

The times for the big CPUs are less than those for the little CPUs due to
greater CPU performance.

We suspect the time for lead CPU 4 is shorter than that for CPU 5 due to subtle
cache effects, given that these measurements are at the nanosecond level.

--------------

*Copyright (c) 2019-2024, Arm Limited and Contributors. All rights reserved.*

.. _Juno R1 platform: https://developer.arm.com/documentation/100122/latest/
.. _TF master as of 31/01/2017: https://git.trustedfirmware.org/TF-A/trusted-firmware-a.git/tree/?id=c38b36d
.. _TF-A v2.11-rc0: https://git.trustedfirmware.org/TF-A/trusted-firmware-a.git/tree/?h=v2.11-rc0
.. _TFTF v2.11-rc0: https://git.trustedfirmware.org/TF-A/tf-a-tests.git/tree/?h=v2.11-rc0