.. SPDX-License-Identifier: GPL-2.0

Hibernating Guest VMs
=====================

Background
----------
Hibernation is sometimes called suspend-to-disk, as it writes a
memory image to disk and puts the hardware into a powered-off state.
Upon resume, the memory image is restored from disk so that it can
resume execution where it left off. See
Documentation/admin-guide/pm/sleep-states.rst.

Hibernation can be initiated within Linux by writing "disk" to
/sys/power/state.
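
As a concrete illustration, the request can be made from a small user
space program. This is a minimal sketch that must run as root;
wrappers such as "systemctl hibernate" are the more typical interface:

.. code-block:: c

   #include <fcntl.h>
   #include <stdio.h>
   #include <unistd.h>

   /* Request hibernation by writing "disk" to /sys/power/state. */
   int main(void)
   {
           int fd = open("/sys/power/state", O_WRONLY);

           if (fd < 0) {
                   perror("open /sys/power/state");
                   return 1;
           }
           if (write(fd, "disk", 4) != 4) {
                   perror("write /sys/power/state");
                   close(fd);
                   return 1;
           }
           close(fd);
           return 0;
   }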

Considerations for Guest VM Hibernation
---------------------------------------
Linux guests on Hyper-V can also be hibernated, in which case the
hardware is the virtual hardware provided by Hyper-V to the guest VM.
Only the hibernating guest VM is affected; other guest VMs and the
underlying Hyper-V host continue to run normally. While the
underlying Windows Hyper-V and physical hardware on which it is
based can themselves be hibernated via the Windows host, that case is
independent of guest VM hibernation and is not described here.

Resuming a hibernated guest VM can be more challenging than with
physical hardware because a VM's virtual hardware configuration can
easily be changed while the VM is hibernated, and the resume steps
assume a configuration that matches the hibernation image.

Additional complexity can ensue because the disks of the hibernated VM
can be moved to another newly created VM that otherwise has the same
virtual hardware configuration (see "Resuming on a Different VM"
below).

Hyper-V also provides ways to move a VM from one Hyper-V host to
another. Hyper-V tries to ensure processor model and Hyper-V version
compatibility across such a move. But if a VM hibernates on one host
and is then resumed on a host with a different processor
model or Hyper-V version, settings recorded in the hibernation image
may no longer match the new host, and undefined behavior or failures
can result.

Enabling Guest VM Hibernation
-----------------------------
Hibernation of a Hyper-V guest VM is disabled by default because
hibernation is incompatible with memory hot-add, as provided by the
Hyper-V balloon driver. If hot-add is used and the VM hibernates, it
hibernates with more memory than it was originally assigned. But when
the VM resumes from hibernation, Hyper-V gives the VM only the originally
assigned memory, so the hot-added memory captured in the hibernation
image is missing.

To enable a Hyper-V VM for hibernation, the Hyper-V administrator must
enable the virtual ACPI S4 sleep state in the virtual hardware that
Hyper-V provides to the guest VM. Such enablement is accomplished by
host-side configuration of the VM. Enabling the S4 state also
prioritizes Linux hibernation in the VM over hot-add, so the Hyper-V
balloon driver in Linux disables hot-add. Enablement is indicated to
Linux via the ACPI tables that Hyper-V presents to the guest; in the
kernel, hv_is_hibernation_supported() reflects this.

Guest VM hibernation is not available on Hyper-V for arm64.
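
From user space, one heuristic check is whether /sys/power/disk lists
the "platform" option, which the kernel exposes only when ACPI
platform hibernation support (the S4 sleep state) is present. This is
a sketch based on that assumption, not a definitive enablement test:

.. code-block:: c

   #include <stdio.h>
   #include <string.h>

   /* Heuristic: /sys/power/disk lists "platform" only when ACPI
    * platform hibernation support is available, which on a Hyper-V
    * guest suggests the virtual S4 state is enabled. */
   int main(void)
   {
           char buf[128] = "";
           FILE *f = fopen("/sys/power/disk", "r");

           if (!f) {
                   perror("fopen /sys/power/disk");
                   return 2;
           }
           if (!fgets(buf, sizeof(buf), f))
                   buf[0] = '\0';
           fclose(f);
           printf("platform hibernation %savailable\n",
                  strstr(buf, "platform") ? "" : "not ");
           return strstr(buf, "platform") ? 0 : 1;
   }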

Initiating Guest VM Hibernation
-------------------------------
Guest VMs can self-initiate hibernation using the standard Linux
mechanisms, such as writing "disk" to /sys/power/state or invoking
the reboot system call with the appropriate arguments. As an
additional layer, Linux guests on Hyper-V support the
"Shutdown" integration service, via which a Hyper-V administrator can
request that a guest VM hibernate. Such a
command generates a request to the Hyper-V shutdown driver in Linux,
which notifies user space to carry out the hibernation, as sketched
below.
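
The hand-off to user space can be sketched as follows. The function
below follows the uevent pattern used by the hv_utils shutdown driver,
but the surrounding driver context is assumed; a udev rule matching
the event can then run a command such as "systemctl hibernate":

.. code-block:: c

   #include <linux/device.h>
   #include <linux/kobject.h>

   /* Sketch: notify user space that the host requested hibernation. */
   static void notify_hibernate_request(struct device *dev)
   {
           char *uevent_env[2] = { "EVENT=hibernate", NULL };

           kobject_uevent_env(&dev->kobj, KOBJ_CHANGE, uevent_env);
   }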

Handling VMBus Devices During Hibernation & Resume
--------------------------------------------------
The VMBus bus driver, and the individual VMBus device drivers,
implement suspend and resume functions that operate on the
primary VMBus channels and their associated Linux devices, such as
SCSI controllers and NICs. When the VM hibernates, the primary
channels are closed, but their data structures are retained in the
hibernation image. When the VM
resumes, the devices are re-offered by Hyper-V and are connected to
the channel data structures that exist in the resumed hibernation
image. Because the VM configuration is assumed to be unchanged,
the resume functions expect that the devices offered by Hyper-V have
the same class and instance GUIDs as before hibernation, allowing the
devices to be matched to the primary VMBus channel data structures in
the hibernation image. Devices that are
offered that don't match primary VMBus channel data structures that
exist in the image are treated as newly added devices, while
primary VMBus channels that exist in the resumed hibernation image are
all expected to be re-offered and matched.

When resuming existing primary VMBus channels, the newly offered
relids might be different because relids can change on each VM boot,
even when the VM configuration is unchanged. The channel data
structures in the hibernation image are updated with the newly
offered relids.

VMBus sub-channels are not persisted in the hibernation image. Each
VMBus device driver's suspend function must close any sub-channels
prior to hibernation. Closing a sub-channel causes Hyper-V to send a
rescind message, which Linux processes by freeing the
channel data structures so that all vestiges of the sub-channel are
removed. By contrast, primary channels are marked closed and their
ring buffers are freed, but Hyper-V does not send a rescind message,
so the channel data structures persist. Upon resume, the
device driver's resume function re-allocates the ring buffer and
re-opens the existing channel. It then communicates with Hyper-V to
re-open sub-channels from scratch.
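
A VMBus device driver participates in these steps through the suspend
and resume members of struct hv_driver. The sketch below assumes a
hypothetical device with a single primary channel, no open
sub-channels, and a made-up channel callback; real drivers also
quiesce I/O and tear down sub-channels in their suspend functions:

.. code-block:: c

   #include <linux/hyperv.h>

   /* Hypothetical per-channel interrupt callback. */
   static void sample_onchannelcallback(void *context)
   {
           /* Read and process packets from the ring buffer. */
   }

   /* "freeze"/"poweroff": close the primary channel. Its data
    * structures persist in the hibernation image; only the ring
    * buffer is freed. */
   static int sample_suspend(struct hv_device *dev)
   {
           vmbus_close(dev->channel);
           return 0;
   }

   /* "thaw"/"restore": re-allocate the ring buffer and re-open the
    * existing primary channel. Sub-channels, if needed, would then
    * be re-created from scratch. */
   static int sample_resume(struct hv_device *dev)
   {
           return vmbus_open(dev->channel, 4 * PAGE_SIZE, 4 * PAGE_SIZE,
                             NULL, 0, sample_onchannelcallback, dev);
   }

   static struct hv_driver sample_drv = {
           .name    = "sample",
           .suspend = sample_suspend,
           .resume  = sample_resume,
   };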

The Linux ends of Hyper-V sockets are forced closed at the time of
hibernation. The guest can't force closing the host end of a socket,
but any host-side actions on the host end will produce an error.

See Documentation/driver-api/pm/devices.rst for the sequencing of the
"freeze", "thaw", "poweroff", and "restore" phases referenced in the
sequences below.
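
All of these phases can be mapped onto a single suspend/resume pair
using a standard Linux PM helper, in the style of the VMBus code. A
minimal sketch with hypothetical callbacks:

.. code-block:: c

   #include <linux/device.h>
   #include <linux/pm.h>

   static int sample_dev_suspend(struct device *dev)
   {
           return 0;   /* close channels, quiesce the device */
   }

   static int sample_dev_resume(struct device *dev)
   {
           return 0;   /* re-open channels, restart the device */
   }

   /* Points the "freeze" and "poweroff" phases at the suspend
    * function, and the "thaw" and "restore" phases at the resume
    * function. */
   static const struct dev_pm_ops sample_pm_ops = {
           SET_SYSTEM_SLEEP_PM_OPS(sample_dev_suspend, sample_dev_resume)
   };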

Detailed Hibernation Sequence
-----------------------------
1. Linux PM freezes user space processes and allocates memory to
   hold the hibernation image.
2. Linux PM invokes the "freeze" callback of each VMBus device
   driver. The driver's suspend function removes sub-channels, and
   leaves the primary channel in a closed state.
3. Linux PM invokes the "freeze" callback of the VMBus bus driver,
   which closes any Hyper-V socket channels and unloads the top-level
   VMBus connection with the Hyper-V host.
4. Linux PM disables non-boot CPUs, creates the hibernation image in
   the previously allocated memory, then re-enables non-boot CPUs.
5. The hibernation image captures the state at this point, with
   closed primary channels, but no sub-channels.
6. Linux PM invokes the "thaw" callback for the VMBus bus, which
   re-establishes the top-level VMBus connection and requests that
   Hyper-V re-offer the VMBus devices. As offers are received for the
   primary channels, the relids are updated as described above.
7. Linux PM invokes the "thaw" callback of each VMBus device driver.
   Each device re-opens its primary channel, and communicates with
   Hyper-V to re-establish sub-channels if appropriate. The
   sub-channels are re-created as new channels since they were
   previously removed entirely.
8. Linux PM writes the hibernation image to disk, then repeats the
   "freeze" processing as part of the "poweroff" phase. VMBus
   channels are closed and the top-level VMBus connection is
   unloaded.
9. Linux PM disables non-boot CPUs, and then enters ACPI sleep state
   S4, powering off the VM.

Detailed Resume Sequence
------------------------
1. The hibernated VM boots as a fresh instance of Linux. During this
   boot, the top-level VMBus connection is established, and synthetic
   devices are enumerated and set up as on any normal boot.
2. User space detects the hibernation image on disk, initiates the
   resume, and the image is read into memory.
3. The fresh instance quiesces in preparation for transferring
   control to the hibernation image: Linux PM invokes "freeze"
   callbacks to shutdown VMBus devices and unload the top-level VMBus
   connection.
4. Linux PM disables non-boot CPUs, and transfers control to the
   read-in hibernation image. In the now-running hibernation image,
   non-boot CPUs are restarted.
5. Linux PM invokes the "restore" callbacks, which mirror the "thaw"
   steps from the hibernation sequence. The top-level VMBus connection
   is re-established, and offers are received and matched to primary
   channels in the image. The VMBus device drivers' resume
   functions re-open primary channels and re-create sub-channels.
   Execution then continues from where the hibernation image left off.

Key-Value Pair (KVP) Pseudo-Device Anomalies
--------------------------------------------
The VMBus KVP device behaves differently from other pseudo-devices
offered by Hyper-V. When the KVP primary channel is closed, Hyper-V
sends a rescind message, causing all vestiges of the device to be
removed. But Hyper-V then re-offers the device, causing it to be newly
re-created. The removal and re-creation occurs during the "freeze"
phase of hibernation, so the hibernation image contains the re-created
KVP device. The same behavior occurs in the "poweroff" phase. In both
cases, the top-level VMBus connection is subsequently unloaded, which
causes the device to be discarded on the Hyper-V side. So no harm is
done.

Virtual PCI devices
-------------------
Virtual PCI (vPCI) devices are physical PCI devices that are mapped
directly into the VM's physical address space so the VM can interact
directly with the hardware. vPCI devices include those accessed via
what Hyper-V calls "Discrete Device Assignment" (DDA), as well as
SR-IOV NIC Virtual Functions (VFs).

Hyper-V DDA devices are offered to guest VMs after the top-level VMBus
connection is established, and they are re-offered when a hibernated
VM resumes. The resume works correctly
unless the Hyper-V administrator makes changes to the configuration
while the VM is hibernated.

SR-IOV NIC VFs also have a VMBus identity as well as a PCI
identity, and are handled somewhat differently. A VF is not offered
until the Linux synthetic NIC driver is
operating and communicates to Hyper-V that it is prepared to accept a
VF. The top-level VMBus connection
might later be unloaded and then re-established without the VM being
rebooted, as happens during hibernation. When the top-level
VMBus connection is re-established, the VFs are offered on the
re-established connection without intervention by the synthetic NIC
driver.

UIO Devices
-----------
A VMBus device can be exposed to user space using the Hyper-V UIO
driver (uio_hv_generic.c) so that a user space driver can control and
operate the device. However, the Hyper-V UIO driver does not implement
the suspend function, so an attempt to hibernate a VM with such a
device in use fails, and Linux continues to run normally. The most
common use of the Hyper-V UIO driver is with DPDK networking.
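
For reference, a user space driver interacts with a device bound to
uio_hv_generic through the standard UIO interface. This sketch blocks
waiting for channel interrupts; the /dev/uio0 name is an assumption,
as the actual number depends on binding order:

.. code-block:: c

   #include <fcntl.h>
   #include <stdint.h>
   #include <stdio.h>
   #include <unistd.h>

   int main(void)
   {
           uint32_t count;
           int fd = open("/dev/uio0", O_RDWR);   /* assumed node */

           if (fd < 0) {
                   perror("open /dev/uio0");
                   return 1;
           }
           /* Each read blocks until the next interrupt and returns
            * the cumulative interrupt count. */
           while (read(fd, &count, sizeof(count)) == sizeof(count))
                   printf("interrupt #%u\n", count);
           close(fd);
           return 0;
   }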

Resuming on a Different VM
--------------------------
This scenario most commonly arises in the Azure public cloud, where a
hibernated customer VM only exists as saved configuration and disks --
the VM no longer exists on any Hyper-V host. When the customer VM is
resumed, a new Hyper-V VM with identical configuration is created,
likely on a different Hyper-V host. That new Hyper-V VM becomes the
resumed customer VM.

For the resumed customer VM to function correctly, the
Hyper-V-provided VMBus instance GUIDs of the disk controllers and
other synthetic devices must match those recorded in the hibernation
image.

For certain devices, Hyper-V always assigns the same instance GUIDs.
For example, the Hyper-V mouse, the shutdown pseudo-device, the time
sync pseudo device, and others have fixed instance GUIDs, both in
local Hyper-V installs as well as in the Azure cloud.

Other devices, such as the synthetic SCSI controllers, do not have
fixed instance GUIDs, and Azure code overrides the normal Hyper-V
behavior so that the instance GUIDs on the newly created VM match
those in the hibernation image. This guarantee does not
hold for local Hyper-V installs.

Similarly, for synthetic NICs, Azure code
overrides the normal Hyper-V behavior so that the instance GUID
stays the same when a
customer VM is deallocated or hibernated, and then re-constituted
as a new Hyper-V VM. Again, this guarantee
does not hold for local Hyper-V installs.

Azure does not support hibernation for VMs with most vPCI devices,
such as NVMe controllers or GPUs. For SR-IOV NIC VFs, Azure removes
the VF from the VM as part of initiating hibernation, and adds a VF
back as part of the resume, so the hibernation image never contains
an active VF.

Azure performs this orchestration when hibernation is initiated
through the Azure API or portal, using the "Shutdown" integration
service to tell Linux to do the hibernation. If hibernation is
self-initiated within the guest VM, this orchestration does not
occur, and vPCI devices may not be handled correctly.

This orchestration allows hibernation to work for most general-purpose
Azure VM sizes. While the same approach could be followed
on a local Hyper-V install, orchestrating such actions is not provided
out-of-the-box by local Hyper-V and so requires custom scripting.