/* SPDX-License-Identifier: GPL-2.0 */
/* Because kmalloc only guarantees 8-byte alignment for kmalloc'd data,
   and GCC only guarantees 8-byte alignment for stack locals, we can't
   be assured of 16-byte alignment for atomic lock data even if we
   specify "__attribute ((aligned(16)))" in the type declaration. So,
   we use a struct containing an array of four ints for the atomic lock
   type and dynamically select the 16-byte aligned int from the array
   for the semaphore. */
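/* For illustration, a sketch of the lock type this comment implies
   (the real arch_spinlock_t lives in spinlock_types.h, not here, and
   may carry extra debug fields): four 4-byte ints span 16 bytes, so
   whatever the struct's start address, one element is 16-byte
   aligned. */
typedef struct {
	volatile unsigned int lock[4];	/* one element is 16-byte aligned */
} demo_arch_spinlock_t;			/* hypothetical name for this sketch */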
/* From: "Jim Hull" <jim.hull of hp.com>
   For PA 2.0, as long as the ",CO" (coherent operation) completer is
   specified, the 16-byte alignment requirement for ldcw and ldcd is
   relaxed, and instead they only require "natural" alignment (4-byte
   for ldcw, 8-byte for ldcd).

   However, the cache control hint is only implemented on PA8800/PA8900
   CPUs; prior PA8X00 CPUs still require 16-byte alignment. If the
   address is unaligned, the operation of the instruction is undefined,
   and because ldcw does not raise an unaligned-data-reference trap,
   misaligned accesses go undetected. This hid the problem for years.
   So, restore the 16-byte alignment dropped by Kyle McMartin in
   "Remove __ldcw_align for PA-RISC 2.0 processors". */
#define __PA_LDCW_ALIGNMENT	16
#define __ldcw_align(a) ({					\
	unsigned long __ret = (unsigned long) &(a)->lock[0];	\
	__ret = (__ret + __PA_LDCW_ALIGNMENT - 1)		\
		& ~(__PA_LDCW_ALIGNMENT - 1);			\
	(volatile unsigned int *) __ret;			\
})
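/* A self-contained userspace sketch (not kernel code; all names here
   are made up) showing why the round-up above always lands inside the
   array: lock[] spans 16 bytes and starts 4-byte aligned, so rounding
   its start address up to a multiple of 16 advances at most 12 bytes,
   i.e. onto one of lock[0..3]. */

#include <stdio.h>

struct demo_lock { volatile unsigned int lock[4]; };

static volatile unsigned int *demo_align(struct demo_lock *a)
{
	unsigned long ret = (unsigned long) &a->lock[0];

	ret = (ret + 16 - 1) & ~(16UL - 1);	/* round up to 16 bytes */
	return (volatile unsigned int *) ret;
}

int main(void)
{
	struct demo_lock l;

	printf("array at %p, 16-byte aligned word at %p\n",
	       (void *) l.lock, (void *) demo_align(&l));
	return 0;
}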
/* LDCW, the only atomic read-write operation PA-RISC has. *sigh*.
   We don't explicitly expose that "*a" may be written, as reload fails
   to find a register in class R1_REGS when "a" needs to be reloaded
   when generating 64-bit PIC code. Instead, we clobber memory to tell
   the compiler that the asm reads or writes items beyond those in the
   operand list. */
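/* The macro this comment documents looks essentially like the sketch
   below; __LDCW is the mnemonic string ("ldcw", possibly with a ",co"
   completer) defined elsewhere in this file, and "memory" is the
   clobber discussed above: */
#define __ldcw(a) ({						\
	unsigned __ret;						\
	__asm__ __volatile__(__LDCW " 0(%1),%0"		\
		: "=r" (__ret) : "r" (a) : "memory");		\
	__ret;							\
})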
#ifdef CONFIG_SMP
# define __lock_aligned __section(".data..lock_aligned") __aligned(16)
#endif
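/* Usage sketch (hypothetical helper, modelled on the arch spinlock
   code): ldcw atomically loads the word and stores 0, so reading 0
   means the lock was already held; a plain store of a nonzero value
   releases it. */
struct demo_spinlock { volatile unsigned int lock[4]; };

static struct demo_spinlock demo_lock __lock_aligned = {
	{ 1, 1, 1, 1 }			/* nonzero == unlocked */
};

static inline void demo_spin_lock(struct demo_spinlock *x)
{
	volatile unsigned int *a = __ldcw_align(x);

	while (__ldcw(a) == 0)		/* try to take the lock */
		while (*a == 0)		/* spin on a plain load */
			;
}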