Lines matching "writeback" in fs/fs-writeback.c
3 * fs/fs-writeback.c
9 * pages against inodes. ie: data writeback. Writeout of the
14 * Additions for address_space-based writeback
26 #include <linux/writeback.h>
50 unsigned int for_sync:1; /* sync(2) WB_SYNC_ALL writeback */
52 enum wb_reason reason; /* why was writeback initiated? */
81 #include <trace/events/writeback.h>
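The for_sync/WB_SYNC_ALL flag above is what separates sync(2)-style data-integrity writeback from best-effort WB_SYNC_NONE passes. As a minimal sketch of how callers elsewhere build a writeback_control for an integrity pass (roughly what the fdatawrite path in mm/filemap.c does, with the cgroup attach/detach omitted; the helper name is made up for illustration):

#include <linux/writeback.h>
#include <linux/pagemap.h>

/* Illustrative only: push every dirty page of @mapping with
 * WB_SYNC_ALL (data-integrity) semantics. */
static int example_write_whole_mapping(struct address_space *mapping)
{
        struct writeback_control wbc = {
                .sync_mode   = WB_SYNC_ALL,     /* data-integrity writeback */
                .nr_to_write = LONG_MAX,        /* no page-count limit */
                .range_start = 0,
                .range_end   = LLONG_MAX,       /* whole file */
        };

        return do_writepages(mapping, &wbc);
}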
229 * The current cgroup writeback is built on the assumption that multiple
419 * folios actually under writeback. in inode_do_switch_wbs()
691 * In addition to the inodes that have completed writeback, also switch in cleanup_offline_cgwb()
695 * bandwidth restrictions, as writeback of inode metadata is not in cleanup_offline_cgwb()
729 * Record @inode's writeback context into @wbc and unlock the i_lock. On
730 * writeback completion, wbc_detach_inode() should be called. This is used
731 * to track the cgroup writeback context.
772 * alternative entry point into writeback code, and first ensures @inode is
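The attach/detach pair documented above is the cgroup-writeback bookkeeping a data-writeback path wraps around its work. A hedged sketch of the expected call sequence, loosely modeled on how the fdatawrite path drives do_writepages(); example_fdatawrite() is a made-up name:

/* Sketch: associate @wbc with @inode's cgroup, do the writeback, detach. */
static int example_fdatawrite(struct inode *inode,
                              struct writeback_control *wbc)
{
        int ret;

        /* Takes i_lock itself, then records the inode's wb context in @wbc. */
        wbc_attach_fdatawrite_inode(wbc, inode);

        ret = do_writepages(inode->i_mapping, wbc);

        /* Undo the association and feed the foreign-inode detection logic. */
        wbc_detach_inode(wbc);

        return ret;
}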
786 * @wbc: writeback_control of the just finished writeback
788 * To be called after a writeback attempt of an inode finishes and undoes
793 * the usefulness of such sharing, cgroup writeback tracks ownership
798 * behaviors (single foreign page can lead to gigabytes of writeback to be
801 * To resolve this issue, cgroup writeback detects the majority dirtier of
805 * a certain amount of time and/or writeback attempts.
807 * On each writeback attempt, @wbc tries to detect the majority writer
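The majority-writer detection described here is a byte-weighted variant of Boyer-Moore majority voting over successive writeback attempts. Purely as an illustration of the underlying idea (not the kernel's actual weighted implementation), a generic majority-vote sketch:

/* Classic Boyer-Moore majority vote: returns the value that would be the
 * majority element of @ids[0..n-1], if one exists.  wbc_detach_inode()
 * applies the same idea, weighted by bytes written, to pick the cgroup
 * that dirtied most of the inode. */
static int majority_candidate(const int *ids, size_t n)
{
        int candidate = -1;
        size_t votes = 0;
        size_t i;

        for (i = 0; i < n; i++) {
                if (votes == 0) {
                        candidate = ids[i];
                        votes = 1;
                } else if (ids[i] == candidate) {
                        votes++;
                } else {
                        votes--;
                }
        }
        return candidate;
}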
908 * wbc_account_cgroup_owner - account writeback to update inode cgroup ownership
909 * @wbc: writeback_control of the writeback in progress
913 * @bytes from @folio are about to be written out during the writeback
927 * regular writeback instead of writing things out itself. in wbc_account_cgroup_owner()
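A filesystem's writepages path is expected to report, per folio, how many bytes it writes so that the majority detection above has data to work with. A minimal hedged sketch (the surrounding iteration and bio handling are omitted; the helper name is invented):

/* Sketch: inside a ->writepages loop, credit the bytes of each folio
 * being written to the cgroup that owns it. */
static void example_account_folio(struct writeback_control *wbc,
                                  struct folio *folio)
{
        /* No-op unless cgroup writeback is enabled for this inode/bdi. */
        wbc_account_cgroup_owner(wbc, folio, folio_size(folio));
}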
989 * @skip_if_busy: skip wb's which already have writeback in progress
1068 * cgroup_writeback_by_id - initiate cgroup writeback from bdi and memcg IDs
1071 * @reason: reason why some writeback work was initiated
1125 /* issue the writeback work */ in cgroup_writeback_by_id()
1270 * All callers of this function want to start writeback of all in wb_start_writeback()
1286 * wb_start_background_writeback - start background writeback
1290 * This makes sure WB_SYNC_NONE background writeback happens. When
1299 * writeback as soon as there is no other work to do. in wb_start_background_writeback()
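Background writeback is normally kicked from the dirty-throttling path once the background threshold is crossed; a hedged sketch of that pattern (mirroring what balance_dirty_pages() in mm/page-writeback.c does):

/* Sketch: wake the per-wb flusher for background writeback if it is idle. */
static void example_kick_background(struct bdi_writeback *wb)
{
        if (!writeback_in_progress(wb))
                wb_start_background_writeback(wb);
}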
1306 * Remove the inode from the writeback list it is on.
1325 * mark an inode as under writeback on the sb
1343 * clear an inode as under writeback on the sb
1428 * from permanently stopping the whole bdi writeback. in inode_dirtied_after()
1475 * Inode is already marked as I_SYNC_QUEUED so writeback list handling is in move_expired_inodes()
1533 * Wait for writeback on an inode to complete. Called with i_lock held.
1584 * Find proper writeback list for the inode depending on its current state and
1585 * possibly also change of its state while we were doing writeback. Here we
1586 * handle things such as livelock prevention or fairness of writeback among
1588 * processes all inodes in writeback lists and requeueing inodes behind flusher
1609 * Writeback is not making progress due to locked buffers. in requeue_inode()
1632 * Writeback blocked by something other than in requeue_inode()
1635 * retrying writeback of the dirty page/inode in requeue_inode()
1642 * Filesystems can dirty the inode during writeback operations, in requeue_inode()
1652 /* The inode is clean. Remove from writeback lists. */ in requeue_inode()
1661 * This doesn't remove the inode from the writeback list it is on, except
1663 * expiration. The caller is otherwise responsible for writeback list handling.
1685 * I/O completion. We don't do it for sync(2) writeback because it has a in __writeback_single_inode()
1756 * the regular batched writeback done by the flusher threads in
1777 * Writeback is already running on the inode. For WB_SYNC_NONE, in writeback_single_inode()
1779 * must wait for the existing writeback to complete, then do in writeback_single_inode()
1780 * writeback again if there's anything left. in writeback_single_inode()
1791 * still under writeback, e.g. due to prior WB_SYNC_NONE writeback. If in writeback_single_inode()
1814 * removed from its writeback list (if any). Otherwise the in writeback_single_inode()
1815 * flusher threads are responsible for the writeback lists. in writeback_single_inode()
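Filesystems rarely call writeback_single_inode() directly; the usual entry points are the exported wrappers write_inode_now() and sync_inode_metadata(), which build a writeback_control and funnel into it. A hedged sketch of those call sites (the helper name is invented):

/* Sketch: force out one inode's dirty data and metadata synchronously,
 * e.g. from an eviction or fsync-like path. */
static int example_flush_one_inode(struct inode *inode)
{
        int err;

        /* Write data pages and the inode itself, waiting for completion. */
        err = write_inode_now(inode, 1);
        if (err)
                return err;

        /* Alternatively, push only the inode metadata and wait. */
        return sync_inode_metadata(inode, 1);
}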
1848 * The intended call sequence for WB_SYNC_ALL writeback is: in writeback_chunk_size()
1938 * If this inode is locked for writeback and we are not in writeback_sb_inodes()
1939 * doing writeback-for-data-integrity, move it to in writeback_sb_inodes()
1940 * b_more_io so that writeback can proceed with the in writeback_sb_inodes()
1955 * are doing WB_SYNC_NONE writeback. So this catches only the in writeback_sb_inodes()
2085 * Explicit flushing or periodic writeback of "old" data.
2088 * dirtying-time in the inode's address_space. So this periodic writeback code
2092 * Try to run once per dirty_writeback_interval. But if a writeback event
2112 * Stop writeback when nr_pages has been consumed in wb_writeback()
2118 * Background writeout and kupdate-style writeback may in wb_writeback()
2164 * Dirty inodes are moved to b_io for writeback in batches. in wb_writeback()
2184 * become available for writeback. Otherwise in wb_writeback()
2240 * When set to zero, disable periodic writeback in wb_check_old_data_flush()
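The interval this kupdate-style pass runs at is exposed to userspace as vm.dirty_writeback_centisecs, and writing 0 disables the periodic flush entirely. A small userspace sketch of that tunable (the procfs path is the standard location; run as root):

#include <stdio.h>

/* Sketch: disable periodic (kupdate-style) writeback from userspace by
 * zeroing vm.dirty_writeback_centisecs. */
int main(void)
{
        FILE *f = fopen("/proc/sys/vm/dirty_writeback_centisecs", "w");

        if (!f) {
                perror("dirty_writeback_centisecs");
                return 1;
        }
        fprintf(f, "0\n");
        fclose(f);
        return 0;
}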
2293 * Retrieve work items and do the writeback they describe
2313 * Check for periodic writeback, kupdated() style in wb_do_writeback()
2323 * Handle writeback of dirty data for the device backed by this bdi. Also
2364 * Start writeback of all dirty pages on this bdi.
2387 * Wakeup the flusher threads to start writeback of all currently dirty pages
2394 * If we are expecting writeback progress we must submit plugged IO. in wakeup_flusher_threads()
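wakeup_flusher_threads() is the system-wide "start writing everything" hook; memory reclaim, for instance, uses it when it wants dirty pages cleaned. A one-call sketch of that usage (reason value as used by vmscan; the wrapper name is invented):

/* Sketch: ask every bdi's flusher to start writing back dirty pages,
 * as memory reclaim does when it encounters too many dirty folios. */
static void example_nudge_flushers(void)
{
        wakeup_flusher_threads(WB_REASON_VMSCAN);
}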
2560 * If the inode is queued for writeback by flush worker, just in __mark_inode_dirty()
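__mark_inode_dirty() is how the machinery above gets fed: filesystems flag an inode dirty with the appropriate I_DIRTY_* bits (usually via the mark_inode_dirty*() wrappers) and the flusher picks it up later. A short sketch of the common patterns (the helper name is invented):

/* Sketch: typical dirtying calls after modifying an inode. */
static void example_dirty_inode(struct inode *inode)
{
        /* Metadata change that fdatasync() may skip (e.g. a timestamp). */
        __mark_inode_dirty(inode, I_DIRTY_SYNC);

        /* Inode and its data pages dirty: what mark_inode_dirty() expands to. */
        __mark_inode_dirty(inode, I_DIRTY);
}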
2644 * Splice the writeback list onto a temporary list to avoid waiting on in wait_sb_inodes()
2645 * inodes that have started writeback after this point. in wait_sb_inodes()
2649 * the local list because inodes can be dropped from either by writeback in wait_sb_inodes()
2657 * Data integrity sync. Must wait for all pages under writeback, because in wait_sb_inodes()
2671 * writeback tag. Writeback completion is responsible for removing in wait_sb_inodes()
2672 * the inode from either list once the writeback tag is cleared. in wait_sb_inodes()
2699 * applications can catch the writeback error using fsync(2). in wait_sb_inodes()
2739 * writeback_inodes_sb_nr - writeback dirty inodes from given super_block
2742 * @reason: reason why some writeback work was initiated
2744 * Start writeback on some inodes on this super_block. No guarantees are made
2757 * writeback_inodes_sb - writeback dirty inodes from given super_block
2759 * @reason: reason why some writeback work was initiated
2761 * Start writeback on some inodes on this super_block. No guarantees are made
2772 * try_to_writeback_inodes_sb - try to start writeback if none underway
2774 * @reason: reason why some writeback work was initiated
2776 * Invoke __writeback_inodes_sb_nr if no writeback is currently underway.
2811 * inodes under writeback and I_DIRTY_TIME inodes ignored by in sync_inodes_sb()
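Taken together, the super_block-level entry points quoted above form the coarse API most of the kernel uses: opportunistic flushing when a filesystem is short on space, and the full data-integrity pass behind sync(2). A hedged sketch of both (WB_REASON_FS_FREE_SPACE is the reason such callers typically pass; the wrapper names are invented):

/* Sketch: best-effort flush to free up space, skipped if writeback is
 * already running on this super_block. */
static void example_flush_for_space(struct super_block *sb)
{
        try_to_writeback_inodes_sb(sb, WB_REASON_FS_FREE_SPACE);
}

/* Sketch: the sync(2)-style pass - start writeback on all dirty inodes
 * and wait for it; this also picks up lazytime (I_DIRTY_TIME) inodes. */
static void example_sync_sb(struct super_block *sb)
{
        sync_inodes_sb(sb);
}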