Yoshinori said [*] URL references on OSDN were stable, but they
appear not to be. Mirror the artifacts on GitHub to avoid failures
while testing on CI.
[*] https://www.mail-archive.com/qemu-devel@nongnu.org/msg686487.html
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Reported-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-ID: <20200630202631.7345-1-f4bug@amsat.org>
[huth: Adapt the patch to the new version in the functional framework]
Message-ID: <20241229083419.180423-1-huth@tuxfamily.org>
Signed-off-by: Thomas Huth <thuth@redhat.com>
(cherry picked from commit ec2dfb7c389b94d71ee825caa20b709d5df6c166)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
(Mjt: fixup for missing v9.2.0-421-g65d35a4e27a8 "tests/functional: convert tests to new uncompress helper")
When QPL compression is enabled on the migration channel and the same
dirty page changes from a normal page to a zero page in the iterative
memory copy, the dirty page will not be updated to a zero page again
on the target side, resulting in incorrect memory data on the source
and target sides.
The root cause is that the target side does not record the normal pages
in the receivedmap.
The solution is to call ramblock_recv_bitmap_set_offset on the target side
to record the normal pages.
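As a rough standalone illustration (plain C, not the actual migration code;
the receivedmap handling here is heavily simplified), a normal page that is
never recorded in the receive bitmap prevents a later zero page for the same
offset from being applied:
#include <stdbool.h>
#include <stdio.h>
#include <string.h>
#define NR_PAGES 4
#define PAGE_SIZE 8
static unsigned char ram[NR_PAGES][PAGE_SIZE];
static bool receivedmap[NR_PAGES];   /* pages already touched by migration */
static void recv_normal_page(int idx, unsigned char fill, bool record)
{
    memset(ram[idx], fill, PAGE_SIZE);
    if (record) {
        receivedmap[idx] = true;     /* the fix: record normal pages too */
    }
}
static void recv_zero_page(int idx)
{
    /* zero pages are only cleared if the page was seen before */
    if (receivedmap[idx]) {
        memset(ram[idx], 0, PAGE_SIZE);
    }
    receivedmap[idx] = true;
}
int main(void)
{
    recv_normal_page(0, 0xaa, false);  /* buggy path: not recorded */
    recv_zero_page(0);                 /* page 0 stays 0xaa -> corruption */
    recv_normal_page(1, 0xaa, true);   /* fixed path: recorded */
    recv_zero_page(1);                 /* page 1 is properly zeroed */
    printf("page0[0]=0x%02x page1[0]=0x%02x\n", ram[0][0], ram[1][0]);
    return 0;
}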
Signed-off-by: Yuan Liu <yuan1.liu@intel.com>
Reviewed-by: Jason Zeng <jason.zeng@intel.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Message-Id: <20241218091413.140396-4-yuan1.liu@intel.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
(cherry picked from commit a523bc52166c80d8a04d46584f9f3868bd53ef69)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
When QPL compression is enabled on the migration channel and the same
dirty page changes from a normal page to a zero page in the iterative
memory copy, the dirty page will not be updated to a zero page again
on the target side, resulting in incorrect memory data on the source
and target sides.
The root cause is that the target side does not record the normal pages
in the receivedmap.
The solution is to call ramblock_recv_bitmap_set_offset on the target side
to record the normal pages.
Signed-off-by: Yuan Liu <yuan1.liu@intel.com>
Reviewed-by: Jason Zeng <jason.zeng@intel.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Message-Id: <20241218091413.140396-3-yuan1.liu@intel.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
(cherry picked from commit 2588a5f99b0c3493b4690e3ff01ed36f80e830cc)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
When compression is enabled on the migration channel and
the pages processed are all zero pages, these pages will
not be sent and thus will not be updated on the target side, resulting
in incorrect memory data on the source and target sides.
The root cause is that all compression methods call
multifd_send_prepare_common to determine whether to compress
dirty pages, but multifd_send_prepare_common does not update
the IOV of MultiFDPacket_t when all dirty pages are zero pages.
The solution is to always update the IOV of MultiFDPacket_t
regardless of whether the dirty pages are all zero pages.
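As a rough sketch in plain C with a POSIX iovec (not the actual multifd code,
where the names MultiFDPacket_t and multifd_send_prepare_common come from),
the idea is that the packet header must always be queued for sending, even
when every page in the batch turned out to be a zero page and there is no
payload to attach:
#include <stdio.h>
#include <sys/uio.h>
struct packet_hdr { unsigned int normal_pages, zero_pages; };
/* Returns the number of iovec entries to send for this batch. */
static int prepare_send(struct packet_hdr *hdr, struct iovec *iov,
                        void *payload, size_t payload_len)
{
    int n = 0;
    /* Always queue the header, even for a zero-pages-only batch,
     * so the destination learns about those zero pages. */
    iov[n].iov_base = hdr;
    iov[n].iov_len = sizeof(*hdr);
    n++;
    if (payload_len) {               /* only normal pages carry payload */
        iov[n].iov_base = payload;
        iov[n].iov_len = payload_len;
        n++;
    }
    return n;
}
int main(void)
{
    struct packet_hdr hdr = { .normal_pages = 0, .zero_pages = 128 };
    struct iovec iov[2];
    printf("iov entries for zero-only batch: %d\n",
           prepare_send(&hdr, iov, NULL, 0));
    return 0;
}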
Fixes: 303e6f54f9 ("migration/multifd: Implement zero page transmission on the multifd thread.")
Cc: qemu-stable@nongnu.org #9.0+
Signed-off-by: Yuan Liu <yuan1.liu@intel.com>
Reviewed-by: Jason Zeng <jason.zeng@intel.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Message-Id: <20241218091413.140396-2-yuan1.liu@intel.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
(cherry picked from commit cdc3970f8597ebdc1a4c2090cfb4d11e297329ed)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
Currently, if an array of pointers contains a NULL pointer, that
pointer will be encoded as '0' in the stream. Since the JSON writer
doesn't define a "pointer" type, that '0' will now be an uint8, which
is different from the original type being pointed to, e.g. struct.
(we're further calling uint8 "nullptr", but that's irrelevant to the
issue)
That mixed-type array shouldn't be compressed, otherwise data is lost
as the code currently makes the whole array have the type of the first
element:
css = {NULL, NULL, ..., 0x5555568a7940, NULL};
{"name": "s390_css", "instance_id": 0, "vmsd_name": "s390_css",
"version": 1, "fields": [
...,
{"name": "css", "array_len": 256, "type": "nullptr", "size": 1},
...,
]}
In the above, the valid pointer at position 254 got lost among the
compressed array of nullptr.
While we could disable the array compression when a NULL pointer is
found, the JSON part of the stream still contributes to downtime, so we
should avoid writing unnecessary bytes to it.
Keep the array compression in place, but if NULL and non-NULL pointers
are mixed, break the array into several type-contiguous pieces:
css = {NULL, NULL, ..., 0x5555568a7940, NULL};
{"name": "s390_css", "instance_id": 0, "vmsd_name": "s390_css",
"version": 1, "fields": [
...,
{"name": "css", "array_len": 254, "type": "nullptr", "size": 1},
{"name": "css", "type": "struct", "struct": {"vmsd_name": "s390_css_img", ... }, "size": 768},
{"name": "css", "type": "nullptr", "size": 1},
...,
]}
Now each type-discontiguous region will become a new JSON entry. The
reader should interpret this as a concatenation of values, all part of
the same field.
Parsing the JSON with analyze-script.py now shows the proper data
being pointed to at the places where the pointer is valid and
"nullptr" where there's NULL:
"s390_css (14)": {
...
"css": [
"nullptr",
"nullptr",
...
"nullptr",
{
"chpids": [
{
"in_use": "0x00",
"type": "0x00",
"is_virtual": "0x00"
},
...
]
},
"nullptr",
}
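The splitting itself is simple run detection; a standalone C sketch of the
idea (simplified, not the real vmstate/JSON-writer code): walk the pointer
array and emit one entry per run of NULL or non-NULL elements, so compression
is only applied within a type-contiguous run:
#include <stddef.h>
#include <stdio.h>
static void emit_runs(void **arr, size_t len)
{
    size_t i = 0;
    while (i < len) {
        int is_null = (arr[i] == NULL);
        size_t run = 1;
        while (i + run < len && (arr[i + run] == NULL) == is_null) {
            run++;
        }
        /* One JSON entry per type-contiguous run; runs of length 1
         * would simply omit "array_len" in the real writer. */
        printf("{\"type\": \"%s\", \"array_len\": %zu}\n",
               is_null ? "nullptr" : "struct", run);
        i += run;
    }
}
int main(void)
{
    int obj;
    void *css[6] = { NULL, NULL, NULL, NULL, &obj, NULL };
    emit_runs(css, 6);   /* -> nullptr x4, struct x1, nullptr x1 */
    return 0;
}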
Reviewed-by: Peter Xu <peterx@redhat.com>
Message-Id: <20250109185249.23952-7-farosas@suse.de>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
(cherry picked from commit 35049eb0d2fc72bb8c563196ec75b4d6c13fce02)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
QEMU plays a trick with null pointers inside an array of pointers in a VMSD
field. See 07d4e69147 ("migration/vmstate: fix array of ptr with
nullptrs") for more details on why. The idea makes sense in general, but
it may overlooked the JSON writer where it could write nothing in a
"struct" in the JSON hints section.
We hit some analyze-migration.py issues on s390 recently, showing that some
of the struct field contains nothing, like:
{"name": "css", "array_len": 256, "type": "struct", "struct": {}, "size": 1}
As described in details by Fabiano:
https://lore.kernel.org/r/87pll37cin.fsf@suse.de
It could be that we hit some null pointers there, and the JSON output was
lost for those null pointers.
To fix it, instead of hacking around only at the VMStateInfo level, do it
at the VMStateField level, so that the JSON writer can also be involved. In
this case, the JSON writer will replace the pointer array (which used to be
a "struct") with the real representation of the nullptr field.
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20250109185249.23952-6-farosas@suse.de>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
(cherry picked from commit 9867c3a7ced12dd7519155c047eb2c0098a11c5f)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
Rename vmstate_info_nullptr from "uint64_t" to "nullptr". This vmstate
actually reads and writes just a byte, so the proper name would be
uint8. However, since this is a marker for a NULL pointer, it's
convenient to have a more explicit name that can be identified by the
consumers of the JSON part of the stream.
Change the name to "nullptr" and add support for it in the
analyze-migration.py script. Arbitrarily use the name of the type as
the value of the field to avoid the script showing 0x30 or '0', which
could be confusing for readers.
Reviewed-by: Peter Xu <peterx@redhat.com>
Message-Id: <20250109185249.23952-5-farosas@suse.de>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
(cherry picked from commit f52965bf0eeee28e89933264f1a9dbdcdaa76a7e)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
The parsing for the S390StorageAttributes section is currently leaving
an unconsumed token that is later interpreted by the generic code as
QEMU_VM_EOF, cutting the parsing short.
The migration will issue a STATTR_FLAG_DONE between iterations, which
the script consumes correctly, but there's a final STATTR_FLAG_EOS at
.save_complete that the script is ignoring. Since the EOS flag is a
u64 0x1ULL and the stream is big endian, on little endian hosts a byte
read from it will be 0x0, the same as QEMU_VM_EOF.
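The effect is easy to see in a few lines of standalone C (QEMU_VM_EOF is
0x00 in the stream format): the first byte of a big-endian u64 holding 0x1
is 0x00, so a one-byte read at that position mistakes the EOS flag for EOF:
#include <stdint.h>
#include <stdio.h>
int main(void)
{
    uint64_t eos_flag = 0x1ULL;
    unsigned char wire[8];
    /* serialize big-endian, as the migration stream does */
    for (int i = 0; i < 8; i++) {
        wire[i] = (unsigned char)(eos_flag >> (8 * (7 - i)));
    }
    /* a one-byte read at this point sees 0x00 == QEMU_VM_EOF */
    printf("first byte on the wire: 0x%02x\n", wire[0]);
    return 0;
}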
Fixes: 81c2c9dd5d ("tests/qtest/migration-test: Fix analyze-migration.py for s390x")
Reviewed-by: Peter Xu <peterx@redhat.com>
Message-Id: <20250109185249.23952-4-farosas@suse.de>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
(cherry picked from commit 69d1f784569fdb950f2923c3b6d00d7c1b71acc1)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
The analyze-migration script was seen failing in s390x in mysterious
ways. It seems we're reaching the VMSDFieldStruct constructor without
any fields, which would indicate an empty .subsection entry, a
VMSTATE_STRUCT with no fields or a vmsd with no fields. We don't have
any of those, at least not without the unmigratable flag set, so this
should never happen.
Add some debug statements so that we can see what's going on the next
time the issue happens.
Reviewed-by: Peter Xu <peterx@redhat.com>
Message-Id: <20250109185249.23952-2-farosas@suse.de>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
(cherry picked from commit 86bee9e0c761a3d0e67c43b44001fd752f894cb0)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
Commit f5f48a7891 ("migration/multifd: Separate SYNC request with
normal jobs") changed the multifd source side to stop sending data
along with the MULTIFD_FLAG_SYNC, effectively introducing the concept
of a SYNC-only packet. Relying on that, commit d7e58f412c
("migration/multifd: Don't send ram data during SYNC") later came
along and skipped reading data from SYNC packets.
In a version timeline like this:
8.2 ... f5f48a7 ... 9.0 ... 9.1 ... d7e58f41 ... 9.2
The issue arises that QEMUs < 9.0 still send data along with SYNC, but
QEMUs > 9.1 don't gather that data anymore. This leads to various
kinds of migration failures due to desync/missing data.
Stop checking for a SYNC packet on the destination and unconditionally
unfill the packet.
From now on:
old -> new:
the source sends data + sync, destination reads normally
new -> new:
source sends only sync, destination reads zeros
new -> old:
source sends only sync, destination reads zeros
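A simplified standalone sketch of the destination-side idea (not the real
multifd_recv_unfill_packet): always parse the page bookkeeping from the
packet, with no special case for SYNC:
#include <stdint.h>
#include <stdio.h>
#define MULTIFD_FLAG_SYNC 1u
struct packet { uint32_t flags, normal_pages, zero_pages; };
static void recv_unfill_packet(const struct packet *p)
{
    /* No early return on MULTIFD_FLAG_SYNC: old senders attach data to
     * SYNC packets, new senders send counts of zero, so parsing is
     * always safe and never leaves stale data behind in the stream. */
    printf("flags=%u normal=%u zero=%u\n",
           p->flags, p->normal_pages, p->zero_pages);
}
int main(void)
{
    struct packet old_src = { MULTIFD_FLAG_SYNC, 64, 0 };  /* QEMU < 9.0 */
    struct packet new_src = { MULTIFD_FLAG_SYNC, 0, 0 };   /* QEMU >= 9.2 */
    recv_unfill_packet(&old_src);
    recv_unfill_packet(&new_src);
    return 0;
}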
CC: qemu-stable@nongnu.org
Fixes: d7e58f412c ("migration/multifd: Don't send ram data during SYNC")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2720
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
Message-Id: <20241213160120.23880-2-farosas@suse.de>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
(cherry picked from commit b93d897ea2f0abbe7fc341a9ac176b5ecd0f3c93)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
From commit 90fa121c6c ("migration/multifd: Inline page_size and
page_count") onwards, page_size is not part of MultiFD*Params but uses
an inline constant instead.
However, it missed updating an old usage, causing a compile error.
Fixes: 90fa121c6c ("migration/multifd: Inline page_size and page_count")
Signed-off-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
Reviewed-by: Fabiano Rosas <farosas@suse.de>
Message-Id: <20241203124943.52572-1-shameerali.kolothum.thodi@huawei.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
(cherry picked from commit d127294f265e6a17f8d614f2bef7df8455e81f56)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
Fixes: 644e3c5d81 ("missing vmx features for Skylake-Server and Cascadelake-Server")
Signed-off-by: Han Han <hhan@redhat.com>
Reviewed-by: Chenyi Qiang <chenyi.qiang@intel.com>
Reviewed-by: Michael Tokarev <mjt@tls.msk.ru>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
(cherry picked from commit 93dcc9390e5ad0696ae7e9b7b3a5b08c2d1b6de6)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
TCG trace-events were deprecated before the v6.2 release,
and removed for v7.0.
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Michael Tokarev <mjt@tls.msk.ru>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
(cherry picked from commit b4859e8f33a7d9c793a60395f792c10190cb4f78)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
Use the same style for deprecated / removed commands.
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Michael Tokarev <mjt@tls.msk.ru>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
(cherry picked from commit 916f50172baa91ddf0e669a9d6d2747055c0e610)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
A hardcoded 32 bytes is used for the vbsrl emulation check, which is a
problem when the options lsx=on,lasx=off are used for the vbsrl.v
instruction in TCG mode: it injects a LASX exception rather than an LSX
exception. Use the actual operand size for the check instead.
Cc: qemu-stable@nongnu.org
Fixes: df97f33807 ("target/loongarch: Implement xvreplve xvinsve0 xvpickve")
Signed-off-by: Bibo Mao <maobibo@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
(cherry picked from commit d41989e7548397b469ec9c7be4cee699321a317e)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
KVM is not happy when starting a VM with weird RAM sizes:
# qemu-system-s390x --enable-kvm --nographic -m 1234K
qemu-system-s390x: kvm_set_user_memory_region: KVM_SET_USER_MEMORY_REGION
failed, slot=0, start=0x0, size=0x244000: Invalid argument
kvm_set_phys_mem: error registering slot: Invalid argument
Aborted (core dumped)
Let's handle that in a better way by rejecting such weird RAM sizes
right from the start:
# qemu-system-s390x --enable-kvm --nographic -m 1234K
qemu-system-s390x: ram size must be multiples of 1 MiB
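The check itself is simple; a hedged standalone sketch of the same
validation (the real code uses QEMU's error reporting machinery rather
than stderr):
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#define MiB (1024ULL * 1024ULL)
static void validate_ram_size(uint64_t ram_size)
{
    if (ram_size % MiB) {
        fprintf(stderr, "ram size must be multiples of 1 MiB\n");
        exit(1);
    }
}
int main(void)
{
    validate_ram_size(1234ULL * 1024);   /* -m 1234K: rejected up front */
    return 0;
}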
Message-ID: <20241219144115.2820241-2-david@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Eric Farman <farman@linux.ibm.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Acked-by: Janosch Frank <frankja@linux.ibm.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
(cherry picked from commit 14e568ab4836347481af2e334009c385f456a734)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
In the section "4.7 Precise effects on interrupt-pending bits"
of the RISC-V AIA specification defines that:
"If the source mode is Level1 or Level0 and the interrupt domain
is configured in MSI delivery mode (domaincfg.DM = 1):
The pending bit is cleared whenever the rectified input value is
low, when the interrupt is forwarded by MSI, or by a relevant
write to an in_clrip register or to clripnum."
Update the riscv_aplic_set_pending() to match the spec.
Fixes: bf31cf06eb ("hw/intc/riscv_aplic: Fix setipnum_le write emulation for APLIC MSI-mode")
Signed-off-by: Yong-Xuan Wang <yongxuan.wang@sifive.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20241029085349.30412-1-yongxuan.wang@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
(cherry picked from commit 0d0141fadc9063e527865ee420b2baf34e306093)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
Since commit 5286c36622 ("target/i386: properly reset TSC on reset")
QEMU writes the special value of "1" to each online vCPU TSC on VM reset
to reset it.
However, parked vCPUs don't get that handling, and because of that their
TSCs get desynchronized when the VM is reset.
This in turn causes KVM to turn off PVCLOCK_TSC_STABLE_BIT in its exported
PV clock.
Note that KVM has no understanding of vCPU being currently parked.
Without PVCLOCK_TSC_STABLE_BIT the sched clock is marked unstable in
the guest's kvm_sched_clock_init().
This causes performance regressions to show up in some tests.
Fix this issue by writing the special value of "1" also to TSCs of parked
vCPUs on VM reset.
Reproducing the issue:
1) Boot a VM with "-smp 2,maxcpus=3" or similar
2) device_add host-x86_64-cpu,id=vcpu,node-id=0,socket-id=0,core-id=2,thread-id=0
3) Wait a few seconds
4) device_del vcpu
5) Inside the VM run:
# echo "t" >/proc/sysrq-trigger; dmesg | grep sched_clock_stable
Observe the sched_clock_stable() value is 1.
6) Reboot the VM
7) Once the VM boots once again run inside it:
# echo "t" >/proc/sysrq-trigger; dmesg | grep sched_clock_stable
Observe the sched_clock_stable() value is now 0.
Fixes: 5286c36622 ("target/i386: properly reset TSC on reset")
Signed-off-by: Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
Link: https://lore.kernel.org/r/5a605a88e9a231386dc803c60f5fed9b48108139.1734014926.git.maciej.szmigiero@oracle.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 3f2a05b31ee9ce2ddb6c75a9bc3f5e7f7af9a76f)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
The macOS builds in our CI (and possibly other very recent distros)
are currently broken since the update to libnfs version 6 there.
That version apparently comes with a big API breakage. v5.0.3 was
the final release of the old API (see the libnfs commit here:
https://github.com/sahlberg/libnfs/commit/4379837 ).
Disallow version 6.x for now to get the broken CI job working
again. Once somebody has had enough time to adapt our code in
block/nfs.c, we can revert this change.
Message-ID: <20241218065157.209020-1-thuth@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Thomas Huth <thuth@redhat.com>
(cherry picked from commit e2d98f257138b83b6a492d1da5847a7fe0930d10)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
In the GICv3 ITS model, we have a common coding pattern which has a
local C struct like "DTEntry dte", which is a C representation of an
in-guest-memory data structure, and we call a function such as
get_dte() to read guest memory and fill in the C struct. These
functions to read in the struct sometimes have cases where they will
leave early and not fill in the whole struct (for instance get_dte()
will set "dte->valid = false" and nothing else for the case where it
is passed an entry_addr implying that there is no L2 table entry for
the DTE). This then causes potential use of uninitialized memory
later, for instance when we call a trace event which prints all the
fields of the struct. Sufficiently advanced compilers may produce
-Wmaybe-uninitialized warnings about this, especially if LTO is
enabled.
Rather than trying to carefully separate out these trace events into
"only the 'valid' field is initialized" and "all fields can be
printed", zero-init all the structs when we define them. None of
these structs are large (the biggest is 24 bytes) and having
consistent behaviour is less likely to be buggy.
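The change amounts to initializing the local structs at their definition;
an illustrative standalone example with a stand-in struct (not the real
DTEntry definition):
#include <inttypes.h>
#include <stdbool.h>
#include <stdio.h>
typedef struct DTEntry {
    bool valid;
    uint64_t ittaddr;
    uint32_t size;
} DTEntry;
static void get_dte_no_l2(DTEntry *dte)
{
    /* early-out path: only 'valid' is written */
    dte->valid = false;
}
int main(void)
{
    DTEntry dte = { 0 };   /* zero-init: other fields are never garbage */
    get_dte_no_l2(&dte);
    /* safe to trace every field even on the early-out path */
    printf("valid=%d ittaddr=0x%" PRIx64 " size=%u\n",
           dte.valid, dte.ittaddr, dte.size);
    return 0;
}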
Cc: qemu-stable@nongnu.org
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2718
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20241213182337.3343068-1-peter.maydell@linaro.org
(cherry picked from commit 9678b9c505725732353baefedb88b53c2eb8a184)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
Without a descriptor, libvirt cannot discover the EDK II binaries via
the qemu:///system connection.
Signed-off-by: Heinrich Schuchardt <heinrich.schuchardt@canonical.com>
Message-ID: <20241212090059.94167-1-heinrich.schuchardt@canonical.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
(cherry picked from commit 74dc38d0c6c15fd57a5dee94125d13ac5b00491d)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
If the binary loaded via -kernel is *not* a linux kernel (in which
case protocol == 0), do not patch the linux kernel header fields.
It's (a) pointless, (b) might break binaries through random patching,
and (c) changes the binary hash, which in turn breaks secure boot
verification.
Background: OVMF happily loads and runs not only linux kernels but
any efi binary via direct kernel boot.
Note: Breaking the secure boot verification is a problem for linux
kernels too, but fixing that is left for another day ...
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Message-ID: <20240905141211.1253307-3-kraxel@redhat.com>
(cherry picked from commit 57e2cc9abf5da38f600354fe920ff20e719607b4)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
When allocating new temps during tcg_optimize, do not re-use
any EBB temps that were used within the TB. We have no idea in
what span of the TB the temp was live.
Introduce tcg_temp_ebb_reset_freed and use it before tcg_optimize,
as well as replacing the equivalent code in plugin_gen_inject and
tcg_func_start.
Cc: qemu-stable@nongnu.org
Fixes: fb04ab7ddd ("tcg/optimize: Lower TCG_COND_TST{EQ,NE} if unsupported")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2711
Reported-by: wannacu <wannacu2049@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
(cherry picked from commit 04e006ab36a8565b92d4e21dd346367fbade7d74)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
The released fix for this CVE:
f6b0de53fb ("9pfs: prevent opening special files (CVE-2023-2861)")
caused a regression with security_model=passthrough. When handling a
'Tmknod' request there was a side effect that the 'Tmknod' request could
fail while the 9p server was trying to adjust permissions:
#6 close_if_special_file (fd=30) at ../hw/9pfs/9p-util.h:140
#7 openat_file (mode=<optimized out>, flags=2228224,
name=<optimized out>, dirfd=<optimized out>) at
../hw/9pfs/9p-util.h:181
#8 fchmodat_nofollow (dirfd=dirfd@entry=31,
name=name@entry=0x5555577ea6e0 "mysocket", mode=493) at
../hw/9pfs/9p-local.c:360
#9 local_set_cred_passthrough (credp=0x7ffbbc4ace10, name=0x5555577ea6e0
"mysocket", dirfd=31, fs_ctx=0x55555811f528) at
../hw/9pfs/9p-local.c:457
#10 local_mknod (fs_ctx=0x55555811f528, dir_path=<optimized out>,
name=0x5555577ea6e0 "mysocket", credp=0x7ffbbc4ace10) at
../hw/9pfs/9p-local.c:702
#11 v9fs_co_mknod (pdu=pdu@entry=0x555558121140,
fidp=fidp@entry=0x5555574c46c0, name=name@entry=0x7ffbbc4aced0,
uid=1000, gid=1000, dev=<optimized out>, mode=49645,
stbuf=0x7ffbbc4acef0) at ../hw/9pfs/cofs.c:205
#12 v9fs_mknod (opaque=0x555558121140) at ../hw/9pfs/9p.c:3711
That's because the server was opening the special file to adjust
permissions; however, it was using O_PATH and would not have returned the
file descriptor to the guest. So the call to close_if_special_file() on
that branch was incorrect.
Let's lift the restriction introduced by f6b0de53fb such that it allows
opening special files on the host if the O_PATH flag is supplied, not only
for the 9p server's own operations as described above, but also for any
client 'Topen' request.
It is safe to allow opening special files with O_PATH on host, because
O_PATH only allows path based operations on the resulting file descriptor
and prevents I/O such as read() and write() on that file descriptor.
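The relaxed rule can be sketched in plain POSIX C (open_checked is a
hypothetical stand-in; the real logic lives in hw/9pfs/9p-util.h): a special
file may only be opened when O_PATH is among the flags, because an O_PATH
descriptor cannot be used for read()/write():
#define _GNU_SOURCE
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>
/* Open 'name', but refuse to hand out an I/O-capable fd for special files. */
static int open_checked(const char *name, int flags)
{
    int fd = open(name, flags);
    struct stat st;
    if (fd < 0) {
        return -1;
    }
    if (!(flags & O_PATH)) {
        /* only plain I/O opens need the special-file check */
        if (fstat(fd, &st) == 0 &&
            !S_ISREG(st.st_mode) && !S_ISDIR(st.st_mode)) {
            close(fd);
            errno = ENXIO;
            return -1;
        }
    }
    return fd;
}
int main(void)
{
    int fd = open_checked("/dev/null", O_PATH);  /* allowed: path-only fd */
    if (fd >= 0) {
        close(fd);
    }
    return 0;
}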
Fixes: f6b0de53fb ("9pfs: prevent opening special files (CVE-2023-2861)")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2337
Reported-by: Dirk Herrendorfer <d.herrendoerfer@de.ibm.com>
Signed-off-by: Christian Schoenebeck <qemu_oss@crudebyte.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
Tested-by: Dirk Herrendorfer <d.herrendoerfer@de.ibm.com>
Message-Id: <E1tJWbk-007BH4-OB@kylie.crudebyte.com>
(cherry picked from commit d06a9d843fb65351e0e4dc42ba0c404f01ea92b3)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
Merge tag 'pull-or1k-20241203' of https://github.com/stffrdhrn/qemu into staging
OpenRISC updates for 9.2.0
This series has 2 fixes:
- Fix to keep serial@90000000 as default
- Fixed undercounting of TTCR in continuous mode
# -----BEGIN PGP SIGNATURE-----
#
# iQIzBAABCAAdFiEE2cRzVK74bBA6Je/xw7McLV5mJ+QFAmdO56EACgkQw7McLV5m
# J+T8BRAAxZMH4ykdRJBmYiFVOsYKagcdT6GGBHL44FGeQSr1lNyoU0Rn5r6v5GHe
# Nwq7DTeZlKoVji5GXki53mGrwENXr00m+xfc9ACMoWr5IM6McQUPXlQAQ/50fIGs
# lzXMZH/4EdPIVkkpCi+y8FLYw02oQg61U9G0HW02lQJy4Y4mudtvQFGzJ7f3SIZ3
# EkKn5YLG0bqszq/amFNLQXlbnq3yI5zfcMHhHx0KuDsm2yNhrNA+AJP8tLI3JlxL
# +0YIA+fWuxQzz8Zu9+ckc8VAV83HIgQpXVzI6rQxdSwbmRgUu9ITO09ZmxaDHZF6
# sDI6K3VouyaHJVkvu4coDajpYTjHLE26c9LAlaVBpgdnmnYy4vlndEqbfaBqOouX
# n0N2wJ3IGouIw7AnB9dTaZhM/Uo09hZKDr6kCm3hLfPn2+vi3yxsbwVwLaOpH3G3
# kQ5ZFKjoA7XWOaXGOUMcmhByXkSxja+pSBppB58vJAFyVp73HYIpea3/q1Zd8S4S
# noJoqxDtD2zf26bDBIe83pUEnSnL8fAcsh3rlQP8HrWYhU7ZulVSE1ZvPkPgDpkY
# LVCPautTElsMp2Mg4a2oODGvSDN4/5h2dp6TaK4Qep92HHFOwPZQBQW607VwWR5N
# II8dB/l8PluKkgZ3ymhP1p9JAAZFe9a2cMmegRIiM74PkPty0kk=
# =guIi
# -----END PGP SIGNATURE-----
# gpg: Signature made Tue 03 Dec 2024 11:12:33 GMT
# gpg: using RSA key D9C47354AEF86C103A25EFF1C3B31C2D5E6627E4
# gpg: Good signature from "Stafford Horne <shorne@gmail.com>" [unknown]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: D9C4 7354 AEF8 6C10 3A25 EFF1 C3B3 1C2D 5E66 27E4
* tag 'pull-or1k-20241203' of https://github.com/stffrdhrn/qemu:
hw/openrisc: Fixed undercounting of TTCR in continuous mode
hw/openrisc/openrisc_sim: keep serial@90000000 as default
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Peter points out double underscore prefix names tend to be reserved
for the system. Clean these up.
Suggested-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-ID: <20240828043337.14587-3-npiggin@gmail.com>
qemu_chardev_set_replay() was being called in chardev creation to
set up replay parameters even if the chardev is NULL.
A segfault can be reproduced by specifying '-serial chardev:bad' with
an rr=record mode.
Fix this with a NULL pointer check.
Reported-by: Peter Maydell <peter.maydell@linaro.org>
Resolves: Coverity CID 1559470
Fixes: 4c193bb129 ("chardev: set record/replay on the base device of a muxed device")
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-ID: <20240828043337.14587-2-npiggin@gmail.com>
When testing with a HVF-only binary, we get:
3/12 qemu:func-quick+func-aarch64 / func-aarch64-version ERROR 0.29s exit status 1
stderr:
Traceback (most recent call last):
File "tests/functional/test_version.py", line 22, in test_qmp_human_info_version
self.vm.launch()
File "machine/machine.py", line 461, in launch
raise VMLaunchFailure(
qemu.machine.machine.VMLaunchFailure: ConnectError: Failed to establish session: EOFError
Exit code: 1
Command: build/qemu-system-aarch64 -display none -vga none -chardev socket,id=mon,fd=5 -mon chardev=mon,mode=control -machine none -nodefaults
Output: qemu-system-aarch64: No accelerator selected and no default accelerator available
Fix by checking for HVF in configure_accelerators() and using
it by default when no other accelerator is available.
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20241203094232.62232-1-philmd@linaro.org>
This test would have identified the crash caused by the addition of new
balloon stats fields.
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Fabiano Rosas <farosas@suse.de>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Message-ID: <20241129135507.699030-4-berrange@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
This test file is expected to be extended for arbitrary virtio-balloon
related tests, not merely those discovered by fuzzing.
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Fabiano Rosas <farosas@suse.de>
Message-ID: <20241129135507.699030-3-berrange@redhat.com>
[PMD: Update MAINTAINERS]
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
balloon_stats_get_all will iterate over guest stats up to the max
VIRTIO_BALLOON_S_NR value, calling visit_type_uint64 to populate
the QObject dict. The dict keys are obtained from the static
array balloon_stat_names which is VIRTIO_BALLOON_S_NR in size.
Unfortunately the way that array is declared results in any
unassigned stats getting a NULL name, which will then cause
visit_type_uint64 to trigger an assert in qobject_output_add_obj.
The balloon_stat_names array was fortunately fully populated with
names until recently:
commit 0d2eeef77a
Author: Bibo Mao <maobibo@loongson.cn>
Date: Mon Oct 28 10:38:09 2024 +0800
linux-headers: Update to Linux v6.12-rc5
pulled a change to include/standard-headers/linux/virtio_balloon.h
which increased VIRTIO_BALLOON_S_NR by 6, and failed to add the new
names to balloon_stat_names.
This commit fills in the missing names, and uses a static assert to
guarantee that any future changes to VIRTIO_BALLOON_S_NR will cause
a build failure until balloon_stat_names is updated.
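The guard can be illustrated with standard C11 (QEMU itself has a
QEMU_BUILD_BUG_ON macro for this; the enum and names below are stand-ins,
not the real virtio-balloon header):
#include <stdio.h>
enum {
    STAT_SWAP_IN,
    STAT_SWAP_OUT,
    STAT_FREE_MEM,
    STAT_NR          /* stands in for VIRTIO_BALLOON_S_NR */
};
static const char *const stat_names[] = {
    [STAT_SWAP_IN]  = "stat-swap-in",
    [STAT_SWAP_OUT] = "stat-swap-out",
    [STAT_FREE_MEM] = "stat-free-memory",
};
/* Growing the enum without adding a name now breaks the build instead
 * of leaving a NULL entry that trips an assert at runtime. */
_Static_assert(sizeof(stat_names) / sizeof(stat_names[0]) == STAT_NR,
               "stat_names[] is out of sync with the stat enum");
int main(void)
{
    for (int i = 0; i < STAT_NR; i++) {
        printf("%s\n", stat_names[i]);
    }
    return 0;
}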
This problem was detected by the Cockpit Project's automated
integration tests on QEMU 9.2.0-rc1.
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=2329448
Fixes: 0d2eeef77a ("linux-headers: Update to Linux v6.12-rc5")
Reported-by: Martin Pitt <mpitt@redhat.com>
Reviewed-by: Richard W.M. Jones <rjones@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Michael Tokarev <mjt@tls.msk.ru>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Message-ID: <20241129135507.699030-2-berrange@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
The 'pci-vga' device allow setting a 'big-endian-framebuffer'
property since commit 3c2784fc86 ("vga: Expose framebuffer
byteorder as a QOM property"). Similarly, the 'virtio-vga'
device since commit 8be61ce2ce ("virtio-vga: implement
big-endian-framebuffer property").
Both call vga_common_reset() in their reset handler, respectively
pci_secondary_vga_reset() and virtio_vga_base_reset_hold(), which
reset 'big_endian_fb', overwriting the property. This is not
correct: the hardware is expected to keep its configured
endianness during resets.
Move 'big_endian_fb' assignment from vga_common_reset() to
vga_common_init() which is called once when the common VGA state
is initialized.
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Message-Id: <20241129101721.17836-2-philmd@linaro.org>
In riscv_cpu_do_interrupt() we use the 'cause' value we got out of
cs->exception as a shift value. However this value can be larger
than 31, which means that "1 << cause" is undefined behaviour,
because we do the shift on an 'int' type.
This causes the undefined behaviour sanitizer to complain
on one of the check-tcg tests:
$ UBSAN_OPTIONS=print_stacktrace=1:abort_on_error=1:halt_on_error=1 ./build/clang/qemu-system-riscv64 -M virt -semihosting -display none -device loader,file=build/clang/tests/tcg/riscv64-softmmu/issue1060
../../target/riscv/cpu_helper.c:1805:38: runtime error: shift exponent 63 is too large for 32-bit type 'int'
#0 0x55f2dc026703 in riscv_cpu_do_interrupt /mnt/nvmedisk/linaro/qemu-from-laptop/qemu/build/clang/../../target/riscv/cpu_helper.c:1805:38
#1 0x55f2dc3d170e in cpu_handle_exception /mnt/nvmedisk/linaro/qemu-from-laptop/qemu/build/clang/../../accel/tcg/cpu-exec.c:752:9
In this case cause is RISCV_EXCP_SEMIHOST, which is 0x3f.
Use 1ULL instead to ensure that the shift is in range.
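A tiny standalone illustration of the difference (cause 0x3f here
corresponds to RISCV_EXCP_SEMIHOST):
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
int main(void)
{
    int cause = 0x3f;                 /* RISCV_EXCP_SEMIHOST */
    /* 1 << cause would shift a 32-bit int by 63: undefined behaviour. */
    uint64_t bit = 1ULL << cause;     /* well-defined for 0..63 */
    printf("mask bit: 0x%016" PRIx64 "\n", bit);
    return 0;
}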
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Fixes: 1697837ed9 ("target/riscv: Add M-mode virtual interrupt and IRQ filtering support.")
Fixes: 40336d5b1d ("target/riscv: Add HS-mode virtual interrupt and IRQ filtering support.")
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-ID: <20241128103831.3452572-1-peter.maydell@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
The 'maxmem' parameter parsed on the command line is held in a uint64_t
and then assigned to the MachineState field that is a 'ram_addr_t'. This
assignment will wrap on 32-bit hosts, silently changing the user's
config request if it were over-sized.
Improve the existing diagnostics for validating 'size', and add the
same diagnostics for 'maxmem'.
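A hedged sketch of the wrap check in standalone C (here ram_addr_t is
typedef'd to a 32-bit type purely to mimic a 32-bit host; real QEMU reports
the error via its Error API):
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
typedef uint32_t ram_addr_t;   /* what a 32-bit host effectively uses */
static int set_maxmem(uint64_t requested, ram_addr_t *out)
{
    if ((ram_addr_t)requested != requested) {
        fprintf(stderr,
                "maxmem size 0x%" PRIx64 " exceeds this host's address space\n",
                requested);
        return -1;
    }
    *out = (ram_addr_t)requested;
    return 0;
}
int main(void)
{
    ram_addr_t maxmem;
    /* an 8 GiB request would silently wrap to 0 without the check */
    return set_maxmem(8ULL * 1024 * 1024 * 1024, &maxmem) ? 1 : 0;
}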
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Tested-by: Ani Sinha <anisinha@redhat.com>
Reviewed-by: Ani Sinha <anisinha@redhat.com>
Message-ID: <20241127114057.255995-1-berrange@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Also: add mapping for "quic_bcain@quicinc.com" which was ~briefly
used for some replies to mailing list traffic.
Signed-off-by: Brian Cain <bcain@quicinc.com>
Signed-off-by: Brian Cain <brian.cain@oss.qualcomm.com>
Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-ID: <20241123164641.364748-2-bcain@quicinc.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
When building QEMU configure with --disable-gtk --disable-cocoa
on macOS we get:
User interface
Cocoa support : NO
SDL support : YES 2.30.5
SDL image support : NO
GTK support : NO
pixman : YES 0.42.2
VTE support : NO
PNG support : YES 1.6.43
VNC support : YES
VNC SASL support : YES
VNC JPEG support : YES 3.0.3
spice protocol support : YES 0.14.4
spice server support : NO
curses support : YES
brlapi support : NO
User defined options
cocoa : disabled
docs : disabled
gtk : disabled
../system/main.c:30:10: fatal error: 'SDL.h' file not found
30 | #include <SDL.h>
| ^~~~~~~
1 error generated.
Fix by adding the SDL dependency to main.c so its CFLAGS contain
the SDL include directory.
Fixes: 64ed6f92ff ("meson: link emulators without Makefile.target")
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20241120114943.85080-1-philmd@linaro.org>
Song Gao will be on sick leave for a long time. I apply to be the
maintainer of the LoongArch virt machine during this period; LoongArch TCG
stays unchanged since I am not familiar with it. The maintainer duty will
transfer back to him after he comes back to work.
Signed-off-by: Bibo Mao <maobibo@loongson.cn>
Acked-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-ID: <20241112073714.1953481-1-maobibo@loongson.cn>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
These warnings have been breaking some build configurations for 2 months
now (https://gitlab.com/qemu-project/qemu/-/issues/2575):
ui/cocoa.m:662:14: error: 'CVDisplayLinkCreateWithCGDisplay' is deprecated: first deprecated in macOS 15.0 - use NSView.displayLink(target:selector:), NSWindow.displayLink(target:selector:), or NSScreen.displayLink(target:selector:) [-Werror,-Wdeprecated-declarations]
662 | if (!CVDisplayLinkCreateWithCGDisplay(display, &displayLink)) {
| ^
/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/System/Library/Frameworks/CoreVideo.framework/Headers/CVDisplayLink.h:89:20: note: 'CVDisplayLinkCreateWithCGDisplay' has been explicitly marked deprecated here
89 | CV_EXPORT CVReturn CVDisplayLinkCreateWithCGDisplay(
| ^
ui/cocoa.m:663:29: error: 'CVDisplayLinkGetNominalOutputVideoRefreshPeriod' is deprecated: first deprecated in macOS 15.0 - use NSView.displayLink(target:selector:), NSWindow.displayLink(target:selector:), or NSScreen.displayLink(target:selector:) [-Werror,-Wdeprecated-declarations]
663 | CVTime period = CVDisplayLinkGetNominalOutputVideoRefreshPeriod(displayLink);
| ^
/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/System/Library/Frameworks/CoreVideo.framework/Headers/CVDisplayLink.h:182:18: note: 'CVDisplayLinkGetNominalOutputVideoRefreshPeriod' has been explicitly marked deprecated here
182 | CV_EXPORT CVTime CVDisplayLinkGetNominalOutputVideoRefreshPeriod( CVDisplayLinkRef CV_NONNULL displayLink );
| ^
ui/cocoa.m:664:13: error: 'CVDisplayLinkRelease' is deprecated: first deprecated in macOS 15.0 - use NSView.displayLink(target:selector:), NSWindow.displayLink(target:selector:), or NSScreen.displayLink(target:selector:) [-Werror,-Wdeprecated-declarations]
664 | CVDisplayLinkRelease(displayLink);
| ^
/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/System/Library/Frameworks/CoreVideo.framework/Headers/CVDisplayLink.h:249:16: note: 'CVDisplayLinkRelease' has been explicitly marked deprecated here
249 | CV_EXPORT void CVDisplayLinkRelease( CV_RELEASES_ARGUMENT CVDisplayLinkRef CV_NULLABLE displayLink );
| ^
3 errors generated.
For the next release, ignore the warnings using #pragma directives,
at least until we figure out the correct new API usage.
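The suppression pattern is the standard clang one; a self-contained
illustration with a deliberately deprecated function (the real change wraps
the CVDisplayLink* calls in ui/cocoa.m the same way):
#include <stdio.h>
__attribute__((deprecated("use something_new() instead")))
static int something_old(void)
{
    return 42;
}
int main(void)
{
    int v;
/* Silence just this call site; the warning stays enabled elsewhere. */
#pragma clang diagnostic push
#pragma clang diagnostic ignored "-Wdeprecated-declarations"
    v = something_old();
#pragma clang diagnostic pop
    printf("%d\n", v);
    return 0;
}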
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Phil Dennis-Jordan <phil@philjordan.eu>
Tested-by: Phil Dennis-Jordan <phil@philjordan.eu>
Message-Id: <20241121131954.98949-1-philmd@linaro.org>
We used to only have a single UART on the platform and it was located at
address 0x90000000. When the number of UARTs was increased to 4, the
first UART remained at its location, but instead of being the first one
to be registered, it became the last.
This caused QEMU to pick 0x90000300 as the default UART, which broke
software that hardcoded the address of 0x90000000 and expected its
output to be visible when the user configured only a single console.
This caused regressions[1] in the barebox test suite when updating to a
newer QEMU. As there seems to be no good reason to register the UARTs in
inverse order, let's register them by ascending address, so existing
software can remain oblivious to the additional UART ports.
Changing the order of UART registration alone breaks Linux, which
was choosing the UART at 0x90000300 as the default for ttyS0. To fix
Linux we fix three things in the device tree:
1. Define stdout-path only one time for the first registered UART
instead of incorrectly defining for each UART.
2. Change the UART alias name from 'uart0' to 'serial0' as almost all
Linux tty drivers look for an alias starting with "serial".
3. Add the UART nodes so they appear in the final DTB in the
order starting with the lowest address and working upwards.
In summary these changes mean that the QEMU default UART (serial_hd(0))
is now setup where:
* serial_hd(0) is the lowest-address UART
* serial_hd(0) is listed first in the DTB
* serial_hd(0) is the /chosen/stdout-path one
* the /aliases/serial0 alias points at serial_hd(0)
[1]: https://lore.barebox.org/barebox/707e7c50-aad1-4459-8796-0cc54bab32e2@pengutronix.de/T/#m5da26e8a799033301489a938b5d5667b81cef6ad
[stafford: Change to serial0 alias and update change message, reverse
uart registration order]
Fixes: 777784bda4 ("hw/openrisc: support 4 serial ports in or1ksim")
Cc: qemu-stable@nongnu.org
Signed-off-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
Signed-off-by: Stafford Horne <shorne@gmail.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-ID: <20241203110536.402131-2-shorne@gmail.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
In the existing design, TTCR is prone to undercounting when running in
continuous mode. This manifests as a timer interrupt appearing to
trigger a few cycles prior to the deadline set in SPR_TTMR_TP.
When the timer triggers, the virtual time delta in nanoseconds between
the time when the timer was set, and when it triggers is calculated.
This nanoseconds value is then divided by TIMER_PERIOD (50) to compute
an increment of cycles to apply to TTCR.
However, this calculation rounds down the number of cycles causing the
undercounting.
A simplistic solution would be to instead round up the number of cycles;
however, this would result in the accumulation of timing error over time.
This patch corrects the issue by calculating the time delta in
nanoseconds between when the timer was last reset and the timer event.
This approach allows the TTCR value to be rounded up, but without
accumulating error over time.
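The arithmetic can be sketched standalone (TIMER_PERIOD of 50 ns as in the
timer model): deriving TTCR from the absolute delta since the last reset and
rounding up once avoids both the undercount and the drift that per-update
round-up would accumulate:
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#define TIMER_PERIOD 50   /* ns per cycle, as in the or1k timer model */
/* TTCR derived from the absolute time since the last timer reset. */
static uint64_t ttcr_from_reset(uint64_t now_ns, uint64_t reset_ns)
{
    uint64_t delta = now_ns - reset_ns;
    return (delta + TIMER_PERIOD - 1) / TIMER_PERIOD;   /* round up once */
}
int main(void)
{
    uint64_t reset_ns = 0;
    /* incremental round-down: 3 updates of 130 ns lose cycles each time */
    uint64_t incremental = (130 / TIMER_PERIOD) * 3;
    /* absolute round-up: no undercount, no accumulated error */
    uint64_t absolute = ttcr_from_reset(3 * 130, reset_ns);
    printf("incremental=%" PRIu64 " absolute=%" PRIu64 "\n",
           incremental, absolute);
    return 0;
}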
Signed-off-by: Joel Holdsworth <jholdsworth@nvidia.com>
[stafford: Incremented version in vmstate_or1k_timer, checkpatch fixes]
Signed-off-by: Stafford Horne <shorne@gmail.com>
Message-ID: <20241203110536.402131-3-shorne@gmail.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>