/linux-6.14.4/tools/perf/pmu-events/arch/x86/amdzen4/ |
D | floating-point.json |
11 "BriefDescription": "Retired x87 floating-point multiply ops.",
35 "BriefDescription": "Retired SSE and AVX floating-point multiply ops.",
47 …"BriefDescription": "Retired SSE and AVX floating-point multiply-accumulate ops (each operation is…
53 …"BriefDescription": "Retired SSE and AVX floating-point bfloat multiply-accumulate ops (each opera…
149 "BriefDescription": "Retired scalar floating-point multiply ops.",
155 "BriefDescription": "Retired scalar floating-point multiply-accumulate ops.",
215 "BriefDescription": "Retired vector floating-point multiply ops.",
221 "BriefDescription": "Retired vector floating-point multiply-accumulate ops.",
299 "BriefDescription": "Retired MMX integer multiply ops.",
305 "BriefDescription": "Retired MMX integer multiply-accumulate ops.",
[all …]
|
/linux-6.14.4/tools/perf/pmu-events/arch/x86/amdzen1/ |
D | floating-point.json |
94 "BriefDescription": "Multiply Ops.",
95 … Ops that have retired. The number of events logged per cycle can vary from 0 to 8. Multiply Ops.",
115 "BriefDescription": "Double precision multiply-add FLOPS. Multiply-add counts as 2 FLOPS.",
116 …from 0 to 64. This event can count above 15. Double precision multiply-add FLOPS. Multiply-add cou…
129 "BriefDescription": "Double precision multiply FLOPS.",
130 … per cycle can vary from 0 to 64. This event can count above 15. Double precision multiply FLOPS.",
143 "BriefDescription": "Single precision multiply-add FLOPS. Multiply-add counts as 2 FLOPS.",
144 …from 0 to 64. This event can count above 15. Single precision multiply-add FLOPS. Multiply-add cou…
157 "BriefDescription": "Single-precision multiply FLOPS.",
158 … per cycle can vary from 0 to 64. This event can count above 15. Single-precision multiply FLOPS.",
|
/linux-6.14.4/tools/perf/pmu-events/arch/x86/amdzen5/ |
D | floating-point.json |
11 "BriefDescription": "Retired x87 floating-point multiply ops.",
35 "BriefDescription": "Retired SSE and AVX floating-point multiply ops.",
47 …"BriefDescription": "Retired SSE and AVX floating-point multiply-accumulate ops (each operation is…
143 "BriefDescription": "Retired scalar floating-point multiply ops.",
149 "BriefDescription": "Retired scalar floating-point multiply-accumulate ops.",
209 "BriefDescription": "Retired vector floating-point multiply ops.",
215 "BriefDescription": "Retired vector floating-point multiply-accumulate ops.",
293 "BriefDescription": "Retired MMX integer multiply ops.",
299 "BriefDescription": "Retired MMX integer multiply-accumulate ops.",
341 "BriefDescription": "Retired MMX integer multiply ops of other types.",
[all …]
|
/linux-6.14.4/arch/parisc/math-emu/ |
D | fmpyfadd.c |
15 * Double Floating-point Multiply Fused Add
16 * Double Floating-point Multiply Negate Fused Add
17 * Single Floating-point Multiply Fused Add
18 * Single Floating-point Multiply Negate Fused Add
41 * Double Floating-point Multiply Fused Add
68 * set sign bit of result of multiply in dbl_fmpyfadd()
75 * Generate multiply exponent in dbl_fmpyfadd()
100 * sign opposite of the multiply result in dbl_fmpyfadd()
178 * invalid since multiply operands are in dbl_fmpyfadd()
191 * sign opposite of the multiply result in dbl_fmpyfadd()
[all …]
|
D | sfmpy.c |
15 * Single Precision Floating-point Multiply
33 * Single Precision Floating-point Multiply
192 /* Multiply two source mantissas together */ in sgl_fmpy()
198 * simple shift and add multiply algorithm is used. in sgl_fmpy()
|
/linux-6.14.4/tools/perf/pmu-events/arch/arm64/fujitsu/monaka/ |
D | sve.json |
32 …This event counts architecturally executed floating-point fused multiply-add and multiply-subtract…
64 …escription": "This event counts architecturally executed Advanced SIMD integer multiply operation."
68 … "BriefDescription": "This event counts architecturally executed SVE integer multiply operation."
72 …n": "This event counts architecturally executed Advanced SIMD and SVE integer multiply operations."
76 …tion": "This event counts architecturally executed SVE integer 64-bit x 64-bit multiply operation."
80 …"This event counts architecturally executed SVE integer 64-bit x 64-bit multiply returning high pa…
252 …rchitecturally executed microarchitectural Advanced SIMD or SVE integer matrix multiply operation."
|
D | fp_operation.json |
123 …"BriefDescription": "This event counts architecturally executed floating-point multiply operations…
127 …ion": "This event counts architecturally executed Advanced SIMD floating-point multiply operation."
131 …"BriefDescription": "This event counts architecturally executed SVE floating-point multiply operat…
135 …is event counts architecturally executed Advanced SIMD and SVE floating-point multiply operations."
207 …turally executed microarchitectural Advanced SIMD or SVE floating-point matrix multiply operation."
|
D | spec_operation.json |
149 "BriefDescription": "This event counts architecturally executed integer multiply operation."
153 …cription": "This event counts architecturally executed integer 64-bit x 64-bit multiply operation."
157 …n": "This event counts architecturally executed integer 64-bit x 64-bit multiply returning high pa…
|
/linux-6.14.4/arch/m68k/include/asm/ |
D | delay.h |
50 * multiply instruction. So we need to handle them a little differently.
51 * We use a bit of shifting and a single 32*32->32 multiply to get close.
109 * multiply instruction. So we need to handle them a little differently.
110 * We use a bit of shifting and a single 32*32->32 multiply to get close.
112 * multiply and shift.
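A rough C sketch of the trick those delay.h comments describe: scale by a compile-time constant, shift both factors down, then do a single runtime 32*32->32 multiply. HZSCALE, the shift amounts and usecs_to_loops() are illustrative stand-ins here, not the kernel's actual macros:

#include <stdint.h>

#define HZ 100
/* 2^28 * HZ / 10^6: scales microseconds into a binary fraction of a jiffy */
#define HZSCALE (268435456 / (1000000 / HZ))

extern uint32_t loops_per_jiffy;        /* calibrated at boot */

static inline uint32_t usecs_to_loops(uint32_t usecs)
{
        /*
         * ((usecs * 2^28 * HZ / 10^6) >> 11) * (loops_per_jiffy >> 11) >> 6
         * ~= usecs * HZ * loops_per_jiffy / 10^6, with the pre-shifts keeping
         * the one runtime multiply of two variables inside 32 bits.
         */
        return (((usecs * HZSCALE) >> 11) * (loops_per_jiffy >> 11)) >> 6;
}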
|
D | hash.h |
13 * entirely, let's keep it simple and just use an optimized multiply
16 * The best way to do that appears to be to multiply by 0x8647 with
17 * shifts and adds, and use mulu.w to multiply the high half by 0x61C8.
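The quoted comment spells out the trick: x * GOLDEN_RATIO_32 is split so that the low half of the constant (0x8647) is handled with shifts and adds and the high half (0x61C8) with a cheap 16-bit multiply. A hedged C rendering of that decomposition; the particular shift/add factoring below is one possibility, not necessarily what the m68k code generates:

#include <stdint.h>
#include <assert.h>

#define GOLDEN_RATIO_32 0x61C88647u

static uint32_t mul_golden_ratio_32(uint32_t x)
{
        /* x * 0x8647: 0x8647 = 0x8000 + 0x600 + 0x40 + 0x7 */
        uint32_t lo = (x << 15) + (x << 10) + (x << 9) + (x << 6) +
                      (x << 2) + (x << 1) + x;
        /* x * 0x61C8 shifted into the high half; m68k would use mulu.w here */
        uint32_t hi = (x * 0x61C8u) << 16;

        return hi + lo;         /* everything is mod 2^32, so carries don't matter */
}

int main(void)
{
        for (uint32_t x = 0; x < 1000000; x++)
                assert(mul_golden_ratio_32(x) == x * GOLDEN_RATIO_32);
        return 0;
}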
|
/linux-6.14.4/arch/microblaze/lib/ |
D | mulsi3.S |
5 * Multiply operation for 32 bit integers.
18 beqi r5, result_is_zero /* multiply by zero */
19 beqi r6, result_is_zero /* multiply by zero */
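mulsi3.S is the software multiply used on MicroBlaze configurations without a hardware multiplier; the listing shows its early exits for a zero operand. A minimal C equivalent of the underlying shift-and-add loop (unsigned shown for brevity; with two's-complement wraparound the same loop also yields the low 32 bits of a signed product):

#include <stdint.h>

uint32_t soft_mulsi3(uint32_t a, uint32_t b)
{
        uint32_t result = 0;

        if (a == 0 || b == 0)
                return 0;               /* multiply by zero */

        while (b) {
                if (b & 1)              /* add the shifted multiplicand for each set bit */
                        result += a;
                a <<= 1;
                b >>= 1;
        }
        return result;
}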
|
/linux-6.14.4/lib/crypto/mpi/ |
D | mpih-mul.c |
37 /* Multiply the natural numbers u (pointed to by UP) and v (pointed to by VP),
61 /* Multiply by the first limb in V separately, as the result can be in mul_n_basecase()
76 /* For each iteration in the outer loop, multiply one limb from in mul_n_basecase()
100 * Multiply the least significant (size - 1) limbs with a recursive in mul_n()
213 /* Multiply by the first limb in V separately, as the result can be in mpih_sqr_n_basecase()
228 /* For each iteration in the outer loop, multiply one limb from in mpih_sqr_n_basecase()
249 * Multiply the least significant (size - 1) limbs with a recursive in mpih_sqr_n()
411 /* Multiply the natural numbers u (pointed to by UP, with USIZE limbs)
443 /* Multiply by the first limb in V separately, as the result can be in mpihelp_mul()
458 /* For each iteration in the outer loop, multiply one limb from in mpihelp_mul()
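The comments quoted above outline the basecase algorithm: handle the first limb of V separately (a plain store instead of an add), then for every further limb of V multiply it by all of U and accumulate the partial product one limb position further left, with a recursive routine taking over for larger operands. The sketch below folds the first-limb special case into the general loop for brevity; mpi_limb_t and the function name are simplified stand-ins, not the MPI library's API:

#include <stdint.h>
#include <stddef.h>

typedef uint32_t mpi_limb_t;

/* prodp must have room for usize + vsize limbs; operands are little-endian limb arrays */
static void sketch_mul_basecase(mpi_limb_t *prodp,
                                const mpi_limb_t *up, size_t usize,
                                const mpi_limb_t *vp, size_t vsize)
{
        for (size_t i = 0; i < usize + vsize; i++)
                prodp[i] = 0;

        for (size_t j = 0; j < vsize; j++) {
                uint64_t carry = 0;

                /* one limb of V times all of U, accumulated at offset j */
                for (size_t i = 0; i < usize; i++) {
                        uint64_t t = (uint64_t)up[i] * vp[j] + prodp[i + j] + carry;

                        prodp[i + j] = (mpi_limb_t)t;
                        carry = t >> 32;
                }
                prodp[usize + j] = (mpi_limb_t)carry;
        }
}

The recursive variant mentioned in the mul_n() and mpih_sqr_n() matches exists because this schoolbook loop is quadratic in the number of limbs.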
|
/linux-6.14.4/arch/mips/lib/ |
D | multi3.c |
14 /* multiply 64-bit values, low 64-bits returned */
23 /* multiply 64-bit unsigned values, high 64-bits of 128-bit result returned */
32 /* multiply 128-bit values, low 128-bits returned */
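multi3.c provides a __multi3-style helper: the low 128 bits of a 128 x 128 product, assembled from 64-bit multiplies as the quoted comments describe. A hedged, generic C sketch of that decomposition (the struct layout and helper names are illustrative, not the kernel's):

#include <stdint.h>

struct u128 {
        uint64_t lo;
        uint64_t hi;
};

/* high 64 bits of an unsigned 64 x 64 multiply, built from 32-bit halves */
static uint64_t mul_u64_hi(uint64_t a, uint64_t b)
{
        uint64_t a_lo = (uint32_t)a, a_hi = a >> 32;
        uint64_t b_lo = (uint32_t)b, b_hi = b >> 32;
        uint64_t mid  = a_hi * b_lo + ((a_lo * b_lo) >> 32);
        uint64_t mid2 = a_lo * b_hi + (uint32_t)mid;

        return a_hi * b_hi + (mid >> 32) + (mid2 >> 32);
}

/* low 128 bits of a 128 x 128 multiply: only a.lo * b.lo needs a full
 * 64x64->128 product, the cross terms land entirely in the high half */
static struct u128 mul_u128_lo(struct u128 a, struct u128 b)
{
        struct u128 r;

        r.lo = a.lo * b.lo;
        r.hi = mul_u64_hi(a.lo, b.lo) + a.lo * b.hi + a.hi * b.lo;
        return r;
}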
|
/linux-6.14.4/tools/perf/pmu-events/arch/s390/cf_z16/ |
D | pai_crypto.json |
839 "BriefDescription": "PCC SCALAR MULTIPLY P256",
840 "PublicDescription": "PCC-Scalar-Multiply-P256 function ending with CC=0"
846 "BriefDescription": "PCC SCALAR MULTIPLY P384",
847 "PublicDescription": "PCC-Scalar-Multiply-P384 function ending with CC=0"
853 "BriefDescription": "PCC SCALAR MULTIPLY P521",
854 "PublicDescription": "PCC-Scalar-Multiply-P521 function ending with CC=0"
860 "BriefDescription": "PCC SCALAR MULTIPLY ED25519",
861 "PublicDescription": "PCC-Scalar-Multiply-Ed25519 function ending with CC=0"
867 "BriefDescription": "PCC SCALAR MULTIPLY ED448",
868 "PublicDescription": "PCC-Scalar-Multiply-Ed448 function ending with CC=0"
[all …]
|
/linux-6.14.4/include/crypto/internal/ |
D | ecc.h |
245 * @left: vli number to multiply with @right
246 * @right: vli number to multiply with @left
294 * @x: scalar to multiply with @p
295 * @p: point to multiply with @x
296 * @y: scalar to multiply with @q
297 * @q: point to multiply with @y
|
/linux-6.14.4/arch/m68k/fpsp040/ |
D | binstr.S |
28 | A3. Multiply the fraction in d2:d3 by 8 using bit-field
32 | A4. Multiply the fraction in d4:d5 by 2 using shifts. The msb
87 | A3. Multiply d2:d3 by 8; extract msbs into d1.
95 | A4. Multiply d4:d5 by 2; add carry out to d1.
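Steps A3 and A4 quoted above are the two halves of a multiply-by-ten: the fraction times 8 (a 3-bit shift) plus the fraction times 2 (a 1-bit shift), with the bits that spill out becoming the next decimal digit. A small C illustration of the same idea on a single 64-bit fixed-point fraction (the real routine works on 68040 register pairs with bit-field instructions; this sketch uses the GCC/Clang __int128 extension just to capture the carry-out):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
        /* 0.1 as a 0.64 fixed-point fraction: round(0.1 * 2^64) */
        uint64_t frac = 0x199999999999999AULL;

        for (int i = 0; i < 8; i++) {
                /* frac * 10 = frac * 8 + frac * 2, done with shifts */
                unsigned __int128 wide = ((unsigned __int128)frac << 3) +
                                         ((unsigned __int128)frac << 1);
                unsigned digit = (unsigned)(wide >> 64);    /* integer part = next digit */

                frac = (uint64_t)wide;                      /* keep the fractional part */
                putchar('0' + digit);
        }
        putchar('\n');          /* prints 10000000, i.e. 0.1 to eight digits */
        return 0;
}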
|
/linux-6.14.4/tools/include/linux/ |
D | hash.h |
38 * which is very slightly easier to multiply by and makes no
77 /* 64x64-bit multiply is efficient on all 64-bit processors */ in hash_64_generic()
80 /* Hash 64 bits using only 32x32-bit multiply. */ in hash_64_generic()
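Both copies of hash.h (this one under tools/ and the include/linux/ one below) implement multiplicative hashing: multiply by a constant derived from the golden ratio and keep the top bits. The quoted lines are about picking between the 64 x 64 multiply (cheap on 64-bit CPUs) and a fallback built from 32 x 32 multiplies. A minimal sketch of the scheme using the constants the header names (the helper names are ours):

#include <stdint.h>

#define GOLDEN_RATIO_32 0x61C88647u
#define GOLDEN_RATIO_64 0x61C8864680B583EBull

static inline uint32_t sketch_hash_32(uint32_t val, unsigned int bits)
{
        return (val * GOLDEN_RATIO_32) >> (32 - bits);
}

static inline uint32_t sketch_hash_64(uint64_t val, unsigned int bits)
{
        /* on 32-bit CPUs the kernel instead folds the halves together and
         * reuses the 32-bit multiply above */
        return (uint32_t)((val * GOLDEN_RATIO_64) >> (64 - bits));
}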
|
/linux-6.14.4/include/linux/ |
D | hash.h |
38 * which is very slightly easier to multiply by and makes no
77 /* 64x64-bit multiply is efficient on all 64-bit processors */ in hash_64_generic()
80 /* Hash 64 bits using only 32x32-bit multiply. */ in hash_64_generic()
|
/linux-6.14.4/arch/parisc/include/asm/ |
D | hash.h |
6 * HP-PA only implements integer multiply in the FPU. However, for
19 * This is a multiply by GOLDEN_RATIO_32 = 0x61C88647 optimized for the
109 * Multiply by GOLDEN_RATIO_64 = 0x0x61C8864680B583EB using a heavily
112 * Without the final shift, the multiply proper is 19 instructions,
|
/linux-6.14.4/arch/sparc/include/asm/ |
D | elf_64.h |
73 #define AV_SPARC_MUL32 0x00000100 /* 32x32 multiply is efficient */
81 #define AV_SPARC_FMAF 0x00010000 /* fused multiply-add */
86 #define AV_SPARC_FJFMAU 0x00200000 /* unfused multiply-add */
87 #define AV_SPARC_IMA 0x00400000 /* integer multiply-add */
|
/linux-6.14.4/arch/xtensa/lib/ |
D | umulsidi3.S |
47 /* a0 and a8 will be clobbered by calling the multiply function
97 #else /* no multiply hardware */
118 #endif /* no multiply hardware */
190 /* For Xtensa processors with no multiply hardware, this simplified
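umulsidi3.S builds a widening 32 x 32 -> 64 unsigned multiply for Xtensa cores whose multiply hardware is partial or absent. A hedged, generic C illustration of one way to do that, from four 16 x 16 -> 32 partial products (a core with no multiplier at all would replace those with shift-and-add loops); this is not the Xtensa assembly itself:

#include <stdint.h>

uint64_t sketch_umulsidi3(uint32_t a, uint32_t b)
{
        uint32_t a_lo = a & 0xffff, a_hi = a >> 16;
        uint32_t b_lo = b & 0xffff, b_hi = b >> 16;

        uint64_t lo  = (uint64_t)(a_lo * b_lo);         /* bits  0..31 */
        uint64_t mid = (uint64_t)(a_lo * b_hi) +        /* bits 16..47 */
                       (uint64_t)(a_hi * b_lo);
        uint64_t hi  = (uint64_t)(a_hi * b_hi);         /* bits 32..63 */

        return lo + (mid << 16) + (hi << 32);
}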
|
/linux-6.14.4/arch/m68k/ifpsp060/ |
D | ilsp.doc |
34 module can be used to emulate 64-bit divide and multiply,
78 For example, to use a 64-bit multiply instruction,
81 for unsigned multiply could look like:
90 bsr.l _060LISP_TOP+0x18 # branch to multiply routine
|
/linux-6.14.4/Documentation/arch/arm/nwfpe/ |
D | notes.rst |
22 emulator sees a multiply of a double and extended, it promotes the double to
23 extended, then does the multiply in extended precision.
|
/linux-6.14.4/tools/perf/pmu-events/arch/arm64/ |
D | common-and-microarch.json |
701 "BriefDescription": "Floating-point Operation speculatively executed, multiply."
706 … "BriefDescription": "Floating-point Operation speculatively executed, Advanced SIMD multiply."
711 "BriefDescription": "Floating-point Operation speculatively executed, SVE multiply."
716 …riefDescription": "Floating-point Operation speculatively executed, Advanced SIMD or SVE multiply."
844 "BriefDescription": "Integer Operation speculatively executed, multiply."
849 "BriefDescription": "Integer Operation speculatively executed, Advanced SIMD multiply."
854 "BriefDescription": "Integer Operation speculatively executed, SVE multiply."
859 … "BriefDescription": "Integer Operation speculatively executed, Advanced SIMD or SVE multiply."
864 "BriefDescription": "Integer Operation speculatively executed, 64\u00d764 multiply."
869 "BriefDescription": "Integer Operation speculatively executed, SVE 64\u00d764 multiply."
[all …]
|
/linux-6.14.4/arch/arc/include/asm/ |
D | delay.h |
43 * -Mathematically if we multiply and divide a number by same value the
50 * -We simply need to ensure that the multiply per above eqn happens in
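The ARC comment leans on the multiply-then-divide identity (a number multiplied and divided by the same value is unchanged, e.g. N = (N * 2^32) / 2^32): do the microseconds-to-loops scaling as a 64-bit multiply by a pre-computed constant, and the divide falls out as taking the upper 32 bits. A hedged C rendering, where 4295 ~= 2^32 / 10^6; the kernel's exact constants and macro names may differ:

#include <stdint.h>

#define HZ 100
extern uint32_t loops_per_jiffy;        /* calibrated at boot */

static inline uint32_t usecs_to_loops(uint32_t usecs)
{
        /*
         * loops = usecs * loops_per_second / 10^6
         *       = (usecs * HZ * loops_per_jiffy * (2^32 / 10^6)) >> 32
         * the multiply is done in 64-bit precision; the ">> 32" is the divide
         */
        return (uint32_t)(((uint64_t)usecs * 4295 * HZ * loops_per_jiffy) >> 32);
}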
|