Currently, if an array of pointers contains a NULL pointer, that
pointer will be encoded as '0' in the stream. Since the JSON writer
doesn't define a "pointer" type, that '0' will now be an uint8, which
is different from the original type being pointed to, e.g. struct.
(we're further calling uint8 "nullptr", but that's irrelevant to the
issue)
That mixed-type array shouldn't be compressed, otherwise data is lost,
because the code currently gives the whole array the type of its first
element:
css = {NULL, NULL, ..., 0x5555568a7940, NULL};
{"name": "s390_css", "instance_id": 0, "vmsd_name": "s390_css",
 "version": 1, "fields": [
    ...,
    {"name": "css", "array_len": 256, "type": "nullptr", "size": 1},
    ...,
]}
In the above, the valid pointer at position 254 was lost in the
compressed array of nullptrs.
While we could disable the array compression whenever a NULL pointer
is found, the JSON part of the stream is still written during
downtime, so we should avoid adding unnecessary bytes to it.
Keep the array compression in place, but when NULL and non-NULL
pointers are mixed, break the array into several type-contiguous
pieces:
css = {NULL, NULL, ..., 0x5555568a7940, NULL};
{"name": "s390_css", "instance_id": 0, "vmsd_name": "s390_css",
 "version": 1, "fields": [
    ...,
    {"name": "css", "array_len": 254, "type": "nullptr", "size": 1},
    {"name": "css", "type": "struct", "struct": {"vmsd_name": "s390_css_img", ... }, "size": 768},
    {"name": "css", "type": "nullptr", "size": 1},
    ...,
]}
Now each type-discontiguous region will become a new JSON entry. The
reader should interpret this as a concatenation of values, all part of
the same field.
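For instance, a reader could reassemble the field along these lines (a
minimal sketch; the "values" lists are an illustrative representation,
not the script's actual data model):

    def merge_split_field(entries):
        # Concatenate the values of consecutive entries sharing a name, so
        # a field split into type-contiguous pieces reads back as one array.
        merged = {}
        for entry in entries:
            merged.setdefault(entry["name"], []).extend(entry["values"])
        return merged

    entries = [
        {"name": "css", "values": ["nullptr"] * 254},               # compressed run
        {"name": "css", "values": [{"vmsd_name": "s390_css_img"}]}, # the struct
        {"name": "css", "values": ["nullptr"]},
    ]
    assert len(merge_split_field(entries)["css"]) == 256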
Parsing the JSON with analyze-migration.py now shows the proper data
being pointed to at the places where the pointer is valid, and
"nullptr" where it is NULL:
"s390_css (14)": {
...
"css": [
"nullptr",
"nullptr",
...
"nullptr",
{
"chpids": [
{
"in_use": "0x00",
"type": "0x00",
"is_virtual": "0x00"
},
...
]
},
"nullptr",
}
Reviewed-by: Peter Xu <peterx@redhat.com>
Message-Id: <20250109185249.23952-7-farosas@suse.de>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
(cherry picked from commit 35049eb0d2fc72bb8c563196ec75b4d6c13fce02)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
Rename vmstate_info_nullptr from "uint64_t" to "nullptr". This vmstate
actually reads and writes just a byte, so the proper name would be
uint8. However, since this is a marker for a NULL pointer, it's
convenient to have a more explicit name that can be identified by the
consumers of the JSON part of the stream.
Change the name to "nullptr" and add support for it in the
analyze-migration.py script. Arbitrarily use the name of the type as
the value of the field to avoid the script showing 0x30 or '0', which
could be confusing for readers.
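In the script this amounts to mapping the marker byte to the type name;
a minimal sketch (the helper is illustrative, only the 0x30/'0' marker
value comes from the text above):

    VMS_NULLPTR_MARKER = ord("0")   # 0x30, the byte written for a NULL pointer

    def format_nullptr(raw_byte):
        # Render the type name instead of the raw 0x30/'0', which a reader
        # of the JSON could mistake for a meaningful value.
        if raw_byte != VMS_NULLPTR_MARKER:
            raise Exception("Invalid nullptr marker: 0x%02x" % raw_byte)
        return "nullptr"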
Reviewed-by: Peter Xu <peterx@redhat.com>
Message-Id: <20250109185249.23952-5-farosas@suse.de>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
(cherry picked from commit f52965bf0eeee28e89933264f1a9dbdcdaa76a7e)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
The parsing for the S390StorageAttributes section is currently leaving
an unconsumed token that is later interpreted by the generic code as
QEMU_VM_EOF, cutting the parsing short.
The migration will issue a STATTR_FLAG_DONE between iterations, which
the script consumes correctly, but there's a final STATTR_FLAG_EOS at
.save_complete that the script ignores. Since the EOS flag is a u64
0x1ULL and the stream is big endian, the first byte read from it will
be 0x0, the same as QEMU_VM_EOF.
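The arithmetic is easy to check, since QEMU_VM_EOF is 0x00:

    import struct

    QEMU_VM_EOF = 0x00

    eos = struct.pack(">Q", 0x1)   # STATTR_FLAG_EOS as it appears on the wire
    assert eos[0] == QEMU_VM_EOF   # a one-byte read sees QEMU_VM_EOF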
Fixes: 81c2c9dd5d ("tests/qtest/migration-test: Fix analyze-migration.py for s390x")
Reviewed-by: Peter Xu <peterx@redhat.com>
Message-Id: <20250109185249.23952-4-farosas@suse.de>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
(cherry picked from commit 69d1f784569fdb950f2923c3b6d00d7c1b71acc1)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
The analyze-migration script was seen failing on s390x in mysterious
ways. It seems we're reaching the VMSDFieldStruct constructor without
any fields, which would indicate an empty .subsection entry, a
VMSTATE_STRUCT with no fields, or a vmsd with no fields. We don't have
any of those, at least not without the unmigratable flag set, so this
should never happen.
Add some debug statements so that we can see what's going on the next
time the issue happens.
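The debug aid can be as simple as failing loudly with the offending
description attached (a sketch of the idea; the helper name is made up):

    def check_struct_desc(desc):
        # Fail loudly with the offending description attached, instead of
        # dying later in mysterious ways on the empty field list.
        if not desc or not desc.get("struct", {}).get("fields"):
            raise Exception("No fields in struct. VMSD:\n%s" % desc)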
Reviewed-by: Peter Xu <peterx@redhat.com>
Message-Id: <20250109185249.23952-2-farosas@suse.de>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
(cherry picked from commit 86bee9e0c761a3d0e67c43b44001fd752f894cb0)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
RAM_SAVE_FLAG_MEM_SIZE contains the total length of the ramblock
idstr, so the reader can use that to know whether the scanning of
ramblocks is complete. Drop the trick.
Reviewed-by: Fabiano Rosas <farosas@suse.de>
Link: https://lore.kernel.org/r/20240117075848.139045-4-peterx@redhat.com
Signed-off-by: Peter Xu <peterx@redhat.com>
The migration stream on s390x contains data for the storage_attributes
which analyze-migration.py cannot handle yet. Add the basic code
for handling this, so we can re-enable the check in the migration-test.
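A sketch of the basic shape of such parsing, assuming the flag values
from hw/s390x/s390-stattrib.c and simplifying the payload layout
(read64/readvar are the script's file helpers; the EOS branch is the
one the fix above adds):

    STATTR_FLAG_EOS   = 0x01
    STATTR_FLAG_MORE  = 0x02
    STATTR_FLAG_ERROR = 0x04
    STATTR_FLAG_DONE  = 0x08

    def read_storage_attributes(file):
        while True:
            addr_flags = file.read64()     # big-endian u64: address | flags
            flags = addr_flags & 0xfff
            if flags & STATTR_FLAG_DONE:   # end of one migration pass
                continue
            if flags & STATTR_FLAG_EOS:    # final token at .save_complete
                return                     # (the branch the fix above adds)
            count = file.read64()          # payload layout simplified here
            file.readvar(count)            # the attribute values themselves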
Message-ID: <20231120113951.162090-1-thuth@redhat.com>
Reviewed-by: Fabiano Rosas <farosas@suse.de>
Signed-off-by: Thomas Huth <thuth@redhat.com>
The migration code uses unsigned values for 16, 32 and 64-bit
operations. Fix the script to do the same.
This was causing an issue when parsing the migration stream generated
on the ppc64 target, because one of the instance_ids was larger than
the 32-bit signed maximum:
Traceback (most recent call last):
File "/home/fabiano/kvm/qemu/build/scripts/analyze-migration.py", line 658, in <module>
dump.read(dump_memory = args.memory)
File "/home/fabiano/kvm/qemu/build/scripts/analyze-migration.py", line 592, in read
classdesc = self.section_classes[section_key]
KeyError: ('spapr_iommu', -2147483648)
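The fix presumably boils down to using struct's unsigned format
characters; a sketch (the script wraps such reads in its file-helper
class):

    import struct

    def read_u16(f): return struct.unpack(">H", f.read(2))[0]   # was ">h"
    def read_u32(f): return struct.unpack(">I", f.read(4))[0]   # was ">i"
    def read_u64(f): return struct.unpack(">Q", f.read(8))[0]   # was ">q"

    # With ">i", an instance_id of 0x80000000 decodes to -2147483648 and
    # produces the failed ('spapr_iommu', -2147483648) section key above.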
Signed-off-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-ID: <20231009184326.15777-6-farosas@suse.de>
The script is currently broken when the x-ignore-shared capability is
used:
Traceback (most recent call last):
File "./scripts/analyze-migration.py", line 656, in <module>
dump.read(dump_memory = args.memory)
File "./scripts/analyze-migration.py", line 593, in read
section.read()
File "./scripts/analyze-migration.py", line 163, in read
self.name = self.file.readstr(len = namelen)
File "./scripts/analyze-migration.py", line 53, in readstr
return self.readvar(len).decode('utf-8')
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x82 in position 55: invalid start byte
We're currently adding data to the middle of the ram section depending
on the presence of the capability. As a consequence, any code loading
the ram section needs to know about capabilities so it can interpret
the stream.
Fix the script by skipping the data that's added when x-ignore-shared
is used.
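Schematically, while parsing each ramblock of the RAM_SAVE_FLAG_MEM_SIZE
record (a sketch; that the extra data is one big-endian 64-bit value per
ramblock is an assumption here, and read8/read64/readstr are the
script's file helpers):

    def read_ramblock(file, capabilities):
        name_len = file.read8()
        name = file.readstr(len = name_len)
        used_len = file.read64()
        if "x-ignore-shared" in capabilities:
            file.read64()   # extra value we don't interpret; just consume it
        return name, used_len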
Signed-off-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-ID: <20231009184326.15777-5-farosas@suse.de>
The script is broken when the configuration/capabilities section is
present. Add support for parsing the capabilities so we can fix it in
the next patch.
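A sketch of such parsing, assuming the layout written by
migration/savevm.c's vmstate_capabilites: a u32 count followed by
length-prefixed capability names:

    def read_capabilities(file):
        caps = []
        caps_count = file.read32()          # number of enabled capabilities
        for _ in range(caps_count):
            name_len = file.read8()         # each name is length-prefixed
            caps.append(file.readstr(len = name_len))
        return caps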
Signed-off-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-ID: <20231009184326.15777-4-farosas@suse.de>
The 'configuration' state subsections are currently not being parsed
and the script fails when analyzing an aarch64 stream:
Traceback (most recent call last):
File "./scripts/analyze-migration.py", line 625, in <module>
dump.read(dump_memory = args.memory)
File "./scripts/analyze-migration.py", line 571, in read
raise Exception("Unknown section type: %d" % section_type)
Exception: Unknown section type: 5
Signed-off-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-ID: <20231009184326.15777-3-farosas@suse.de>
When we try to use 'analyze-migration.py -x' with Python 3,
we get the following errors:
Traceback (most recent call last):
File "scripts/analyze-migration.py", line 593, in <module>
f.write(jsonenc.encode(dump.vmsd_desc))
TypeError: a bytes-like object is required, not 'str'
Traceback (most recent call last):
File "scripts/analyze-migration.py", line 601, in <module>
f.write(jsonenc.encode(dict))
TypeError: a bytes-like object is required, not 'str'
This happens because the file 'f' is opened in binary mode while
jsonenc.encode() returns a string.
The resulting files, 'desc.json' and 'state.json', are human-readable,
so there is no reason to use binary mode.
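A self-contained illustration of the fix (the encoder options are
illustrative):

    import json

    jsonenc = json.JSONEncoder(indent=4, separators=(',', ': '))

    # JSONEncoder.encode() returns a str, so open the file in text mode;
    # "wb" would require bytes and raise the TypeError above.
    with open("desc.json", "w") as f:
        f.write(jsonenc.encode({"example": "vmsd description"}))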
Signed-off-by: Laurent Vivier <lvivier@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20211015131645.501281-3-lvivier@redhat.com>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
The parameter of '-d' can be either 'state' or 'desc', not 'dump'
as reported in the error message.
Fixes: b17425701d ("Add migration stream analyzation script")
Signed-off-by: Laurent Vivier <lvivier@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20211015131645.501281-2-lvivier@redhat.com>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
There is no "version 2" of the "Lesser" General Public License.
It is either "GPL version 2.0" or "Lesser GPL version 2.1".
This patch replaces all occurrences of "Lesser GPL version 2" with
"Lesser GPL version 2.1" in comment section.
This patch contains all the files, whose maintainer I could not get
from ‘get_maintainer.pl’ script.
Signed-off-by: Chetan Pant <chetan4windows@gmail.com>
Message-Id: <20201023124424.20177-1-chetan4windows@gmail.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
[thuth: Adapted exec.c and qdev-monitor.c to new location]
Signed-off-by: Thomas Huth <thuth@redhat.com>
Since we use the result of read_migration_debug_json() as a
JSON-formatted string, we must provide the proper type. Before
Python 3.6, the json.loads() method supports only str-typed input.
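Schematically (a sketch; the actual change belongs inside
read_migration_debug_json()):

    import json

    data = b'{ "name": "example" }'   # raw bytes from the migration dump

    # json.loads() before Python 3.6 accepts only str, so decode first.
    vmsd_json = json.loads(data.decode("utf-8"))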
Signed-off-by: Alexey Kirillov <lekiravi@yandex-team.ru>
Message-Id: <20200715152135.20287-1-lekiravi@yandex-team.ru>
[ehabkost: added comment explaining why decode() is needed]
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
We guarantee 3.5+ everywhere; remove more dead checks. In general, try
to avoid using version checks and instead prefer to attempt behavior
when possible.
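For example (an illustrative pattern, not a hunk from this patch):

    import sys

    data = b"hello"

    # Discouraged: gate on the interpreter version.
    if sys.version_info >= (3,):
        text = data.decode("utf-8")

    # Preferred: attempt the behavior and handle the fallback.
    try:
        text = data.decode("utf-8")
    except AttributeError:
        text = data   # already a str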
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20200514035230.25756-1-jsnow@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Under Python 3 the migration dump is read as bytes, so searching it
with a str pattern fails:
Traceback (most recent call last):
File "../scripts/analyze-migration.py", line 611, in <module>
dump.read(desc_only = True)
File "../scripts/analyze-migration.py", line 513, in read
self.load_vmsd_json(file)
File "../scripts/analyze-migration.py", line 556, in load_vmsd_json
vmsd_json = file.read_migration_debug_json()
File "../scripts/analyze-migration.py", line 89, in read_migration_debug_json
nulpos = data.rfind("\0")
TypeError: argument should be integer or bytes-like object, not 'str'
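Presumably the one-line fix is to search with a bytes pattern (a
sketch):

    data = b'QEVM...\x00{ "devices": {} }'   # tail of the dump (illustrative)

    # Under Python 3 the buffer is bytes, so the pattern must be bytes too.
    nulpos = data.rfind(b"\0")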
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20191127101038.327080-2-marcandre.lureau@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Cleber Rosa <crosa@redhat.com>
Since commit 3a38429748 ("Add a "no HPT" encoding to HTAB migration stream")
the HTAB migration stream contains a header set to "-1", meaning there
is no HPT. Teach analyze-migration.py to ignore the section in this case.
Without this fix, the script fails with a dump from a POWER9 guest:
Traceback (most recent call last):
File "./qemu/scripts/analyze-migration.py", line 602, in <module>
dump.read(dump_memory = args.memory)
File "./qemu/scripts/analyze-migration.py", line 539, in read
section.read()
File "./qemu/scripts/analyze-migration.py", line 250, in read
self.file.readvar(n_valid * self.HASH_PTE_SIZE_64)
File "./qemu/scripts/analyze-migration.py", line 64, in readvar
raise Exception("Unexpected end of %s at 0x%x" % (self.filename, self.file.tell()))
Exception: Unexpected end of migrate.dump at 0x1d4763ba
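The guard itself is small (a sketch in the script's terms; read32 is
the script's signed big-endian read):

    def read_htab_section(file):
        header = file.read32()
        if header == -1:
            # "no HPT" encoding from commit 3a38429748: no hash page table
            # data follows, so there is nothing more to read here.
            return
        # otherwise, consume the (index, n_valid, n_invalid) chunks and
        # their n_valid * HASH_PTE_SIZE_64 bytes of HPTEs as before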
Fixes: 3a38429748 ("Add a "no HPT" encoding to HTAB migration stream")
Signed-off-by: Laurent Vivier <lvivier@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Commit 61964 "Add configuration section" broke the analyze-migration.py
script, which terminates due to the unrecognised section. Fix the
script by parsing
the contents of the configuration section directly into a new
ConfigurationSection object (although nothing is done with it yet).
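A sketch of that object, reconstructed from the script's file API (the
exact fields are assumptions):

    class ConfigurationSection(object):
        def __init__(self, file):
            self.file = file

        def read(self):
            name_len = self.file.read32()
            self.name = self.file.readstr(len = name_len)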
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
It fixes the following error:
Traceback (most recent call last):
File "./scripts/analyze-migration.py", line 584, in <module>
dump.read(dump_memory = args.memory)
File "./scripts/analyze-migration.py", line 528, in read
self.sections[section_id].read()
File "./scripts/analyze-migration.py", line 250, in read
self.file.readvar(n_valid * HASH_PTE_SIZE_64)
NameError: global name 'HASH_PTE_SIZE_64' is not defined
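The fix is to qualify the class attribute (a sketch; the 16-byte HPTE
size is the script's constant):

    class HTABSection(object):
        HASH_PTE_SIZE_64 = 16   # bytes per hash PTE

        def read_chunk(self, file, n_valid):
            # A bare HASH_PTE_SIZE_64 inside a method raises NameError:
            # class attributes must be reached via self (or the class name).
            file.readvar(n_valid * self.HASH_PTE_SIZE_64)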
Signed-off-by: Greg Kurz <gkurz@linux.vnet.ibm.com>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
This patch adds a Python tool to the scripts directory that can read
a dumped migration stream if it contains the JSON description of the
device states. It constructs a human-readable JSON stream out of it.
It's very simple to use:
$ qemu-system-x86_64
(qemu) migrate "exec:cat > mig"
$ ./scripts/analyze-migration.py -f mig
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Amit Shah <amit.shah@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>