migration: Add documentation for fdset with multifd + file

With the last few changes to the fdset infrastructure, we now allow
multifd to use an fdset when migrating to a file. This is useful for
the scenario where the management layer wants to have control over the
migration file.

By receiving the file descriptors directly, QEMU can delegate some
high level operating system operations to the management layer (such
as mandatory access control). The management layer might also want to
add its own headers before the migration stream.

Document the "file:/dev/fdset/#" syntax for the multifd migration with
mapped-ram. The requirements for the fdset mechanism are:

- the fdset must contain two fds that are not duplicates between
  themselves;

- if direct-io is to be used, exactly one of the fds must have the
  O_DIRECT flag set;

- the file must be opened with WRONLY on the migration source side;

- the file must be opened with RDONLY on the migration destination
  side.
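The fdset requirements above can be illustrated with a short sketch. This is not QEMU code; `fdset_ok` and `o_direct_count` are hypothetical helpers showing the checks a management layer could perform on the two file descriptors before handing them to QEMU:

```python
import fcntl
import os

# os.O_DIRECT is Linux-only; fall back to 0 elsewhere so the flag
# check simply never matches (hypothetical, illustration only).
O_DIRECT = getattr(os, "O_DIRECT", 0)

def o_direct_count(fds):
    """Return how many of the given fds have O_DIRECT set."""
    return sum(
        1 for fd in fds
        if fcntl.fcntl(fd, fcntl.F_GETFL) & O_DIRECT
    )

def fdset_ok(fds, direct_io):
    """Check the documented constraints on a two-fd fdset."""
    # The fdset must contain exactly two fds that are not duplicates
    # of each other.
    if len(fds) != 2 or fds[0] == fds[1]:
        return False
    # With direct-io enabled, exactly one fd must carry O_DIRECT;
    # without it, none should.
    want = 1 if direct_io else 0
    return o_direct_count(fds) == want
```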

Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
Fabiano Rosas 2024-06-17 15:57:30 -03:00
parent 99c147e2f5
commit 8d60280e4f
2 changed files with 24 additions and 6 deletions


@@ -47,11 +47,25 @@ over any transport.
   QEMU interference. Note that QEMU does not flush cached file
   data/metadata at the end of migration.
 
-  In addition, support is included for migration using RDMA, which
-  transports the page data using ``RDMA``, where the hardware takes care of
-  transporting the pages, and the load on the CPU is much lower. While the
-  internals of RDMA migration are a bit different, this isn't really visible
-  outside the RAM migration code.
+  The file migration also supports using a file that has already been
+  opened. A set of file descriptors is passed to QEMU via an "fdset"
+  (see add-fd QMP command documentation). This method allows a
+  management application to have control over the migration file
+  opening operation. There are, however, strict requirements to this
+  interface if the multifd capability is enabled:
+
+    - the fdset must contain two file descriptors that are not
+      duplicates between themselves;
+    - if the direct-io capability is to be used, exactly one of the
+      file descriptors must have the O_DIRECT flag set;
+    - the file must be opened with WRONLY on the migration source side
+      and RDONLY on the migration destination side.
+
+- rdma migration: support is included for migration using RDMA, which
+  transports the page data using ``RDMA``, where the hardware takes
+  care of transporting the pages, and the load on the CPU is much
+  lower. While the internals of RDMA migration are a bit different,
+  this isn't really visible outside the RAM migration code.
 
 All these migration protocols use the same infrastructure to
 save/restore state devices. This infrastructure is shared with the


@@ -16,7 +16,7 @@ location in the file, rather than constantly being added to a
 sequential stream. Having the pages at fixed offsets also allows the
 usage of O_DIRECT for save/restore of the migration stream as the
 pages are ensured to be written respecting O_DIRECT alignment
-restrictions (direct-io support not yet implemented).
+restrictions.
 
 Usage
 -----
@@ -35,6 +35,10 @@ Use a ``file:`` URL for migration:
 Mapped-ram migration is best done non-live, i.e. by stopping the VM on
 the source side before migrating.
 
+For best performance enable the ``direct-io`` parameter as well:
+
+  ``migrate_set_parameter direct-io on``
+
 Use-cases
 ---------
 
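Putting the pieces together, a hypothetical HMP sequence on the migration source side might look like the following, assuming an fdset with id 1 has already been populated with the two file descriptors via the add-fd QMP command:

```
(qemu) migrate_set_capability mapped-ram on
(qemu) migrate_set_capability multifd on
(qemu) migrate_set_parameter direct-io on
(qemu) migrate file:/dev/fdset/1
```

The fdset id (1) and the exact ordering of the commands are illustrative; only the capability and parameter names come from the documentation above.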