/* SPDX-License-Identifier: GPL-2.0-or-later */
Copyright (C) 2003-2008, LINBIT Information Technologies GmbH.
Copyright (C) 2003-2008, Philipp Reisner <[email protected]>.
Copyright (C) 2003-2008, Lars Ellenberg <[email protected]>.
This header file (and its .c file; see the latter for the kernel-doc of the functions)
We use an LRU policy if it is necessary to "cool down" a region currently in
As it actually Tracks Objects in an Active SeT, we could also call it
we need to resync all regions that have been the target of in-flight WRITE IO
(in use, or "hot", regions), as we don't know whether or not those WRITEs
This is known as a "write intent log", and can be implemented as on-disk
in-flight WRITE IO, e.g. by only lazily clearing the on-disk write-intent
in the meantime, or, if both replicas have been changed independently [*],
all blocks that have been changed on either replica in the meantime.
[*] usually as a result of a cluster split-brain and insufficient protection.
Having it fine-grained reduces the amount of resync traffic.
The on-disk "dirty bitmap" may be re-used as "write-intent" bitmap as well.
To reduce the frequency of bitmap updates for write-intent log purposes,
on-disk bitmap, while keeping the in-memory "dirty" bitmap as clean as
possible, flushing it to disk again when a previously "hot" (and on-disk
dirtied as a full chunk) area "cools down" again (no IO in flight anymore,
and none expected in the near future either).
for write-intent log purposes, in addition to the fine-grained dirty bitmap.
not changing members of the set in a round-robin fashion. To do so, we use a
change itself (index: -old_label, +new_label), and which index is associated
/* this defines an element in a tracked set
 * region number (label) easily. To do the label -> object lookup without a
 * in_use: currently in use (refcnt > 0, lc_number != LC_FREE)
 * an element is said to be "in the active set",
 * them, as the change "index: -old_label, +LC_FREE" would need a transaction
 * But it avoids high order page allocations in kmalloc.
/* back "pointer" into lc_cache->element[index],
/* the least recently used item is kept at lru->prev */
/* the pre-created kmem cache to allocate the objects from */
/* size of tracked objects, used to memset(,0,) them in lc_reset */
/* offset of struct lc_element member in the tracked object */
 * 8 high bits of .lc_index to be overloaded with flags in the future. */
/* see below: flag-bits for lru_cache */
/* flag-bits for lru_cache */
 * lc_try_lock_for_transaction - can be used to stop lc_get() from changing the tracked set
	return !test_and_set_bit(__LC_LOCKED, &lc->flags);
 * lc_try_lock - variant to stop lc_get() from changing the tracked set
 * lc_unlock - unlock @lc, allow lc_get() to change the set again
	clear_bit(__LC_DIRTY, &lc->flags);
	clear_bit_unlock(__LC_LOCKED, &lc->flags);