This support was limited to the Malta board; drop it.
I do not have a machine that can run VZ KVM, so I am assuming
that it works for -M malta as well.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221221091718.71844-1-philmd@linaro.org>
VRCPSS, VRSQRTSS and VCVTSx2Sx have a 32-bit or 64-bit memory operand,
which is represented in the decoding tables by X86_VEX_REPScalar. Add it
to the tables, and make validate_vex() handle the case of an instruction
that is in exception type 4 without the REP prefix and exception type 5
with it; this is the case of VRCP and VRSQRT.
Reported-by: yongwoo <https://gitlab.com/yongwoo36>
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1377
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
To avoid compilation errors when -Werror=maybe-uninitialized is used,
add a default case with g_assert_not_reached().
Otherwise with GCC 11.3.1 "cc (GCC) 11.3.1 20220421 (Red Hat 11.3.1-2)"
we get:
../target/i386/ops_sse.h: In function ‘helper_vpermdq_ymm’:
../target/i386/ops_sse.h:2495:13: error: ‘r3’ may be used
uninitialized in this function [-Werror=maybe-uninitialized]
2495 | d->Q(3) = r3;
| ~~~~~~~~^~~~
../target/i386/ops_sse.h:2494:13: error: ‘r2’ may be used
uninitialized in this function [-Werror=maybe-uninitialized]
2494 | d->Q(2) = r2;
| ~~~~~~~~^~~~
../target/i386/ops_sse.h:2493:13: error: ‘r1’ may be used
uninitialized in this function [-Werror=maybe-uninitialized]
2493 | d->Q(1) = r1;
| ~~~~~~~~^~~~
../target/i386/ops_sse.h:2492:13: error: ‘r0’ may be used
uninitialized in this function [-Werror=maybe-uninitialized]
2492 | d->Q(0) = r0;
| ~~~~~~~~^~~~
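As a rough sketch of the pattern described above (not the actual
helper_vpermdq_ymm code; names below are made up for illustration),
assigning every output in each reachable switch arm and asserting on
impossible selector values is enough to satisfy the compiler:

    #include <glib.h>
    #include <stdint.h>

    /* Hypothetical permute helper: r0..r3 are assigned on every reachable
     * path, and the default case documents that other selector values
     * cannot occur. */
    static void permute_quads(uint64_t *d, const uint64_t *s, unsigned sel)
    {
        uint64_t r0, r1, r2, r3;

        switch (sel & 3) {
        case 0:
            r0 = s[0]; r1 = s[1]; r2 = s[2]; r3 = s[3];
            break;
        case 1:
            r0 = s[2]; r1 = s[3]; r2 = s[0]; r3 = s[1];
            break;
        case 2:
        case 3:
            r0 = s[1]; r1 = s[0]; r2 = s[3]; r3 = s[2];
            break;
        default:
            /* sel & 3 can only be 0..3; this silences the warning. */
            g_assert_not_reached();
        }
        d[0] = r0; d[1] = r1; d[2] = r2; d[3] = r3;
    }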
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Message-Id: <20221222140158.1260748-1-eric.auger@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
When in 64-bit mode, IDT entries are 16 bytes, so `intno * 16` is used
for base/limit/offset calculations. However, even in 64-bit mode, the
exception error code still uses bits [3,16) for the invalid interrupt
index.
This means the error code should still be `intno * 8 + 2` even in 64-bit
mode.
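A minimal illustration of the distinction (hypothetical helpers, not the
actual QEMU variables): the descriptor lookup scales with the gate size,
while the error-code format does not:

    #include <stdint.h>

    /* 64-bit mode gate descriptors are 16 bytes; legacy gates are 8. */
    static inline uint32_t idt_desc_offset(int intno, int is_64bit_mode)
    {
        return intno * (is_64bit_mode ? 16 : 8);
    }

    /* The error code always encodes the vector in bits [3,16), with bit 1
     * set to mark an IDT-sourced fault, regardless of mode. */
    static inline uint32_t idt_error_code(int intno)
    {
        return intno * 8 + 2;
    }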
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1382
Signed-off-by: Joe Richey <joerichey@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* Rework and improvements of the EINTR handling by Nikita
* Deprecate the -no-hpet command line option
* Disable the qtests in the 32-bit Windows CI job again
* Some other misc fixes here and there
-----BEGIN PGP SIGNATURE-----
iQJFBAABCAAvFiEEJ7iIR+7gJQEY8+q5LtnXdP5wLbUFAmO8It8RHHRodXRoQHJl
ZGhhdC5jb20ACgkQLtnXdP5wLbUwbA//dXgfHy95C1r2nTMDekk09+KkmNB1f6M8
3HK4ROmmrMT/aP9FwfqMBT7JHM/m4bwOGw0Sula8vfjg9NYGPWuSYjdObWKnrIq/
YORoTxqak9c98Co06EQbAfWn3Pj0ifQkX+FIyzcNGhu4856FWdBsMuyq52VLi36q
Z8ruSOmclzluoIB3mVYY/s5J7ED2A3K0h39frKLE9FGsKObX10KWj+MZyDHi9oGZ
ucTHai12OXgNghjlrwI0BqJziih4NxfIWs0JovSo3cN0at7m57G5JChjR38zTMNT
2Q46tDKoIXesY1GUmVuIgJ5F1Uoshc8Pz5qBSQ5mUbZUQMpivhFrEB666wsYmPd1
M/YwnZ+PFhWjem7p28fKmnmkeATvE0S+vMDifTVZ880nmAbyUm1vFKfqV6r2mBrT
p4iXfh/9easFfJWHueU4fBwyMndDGRaCRJnP8KQ5I9yb0WZbt+/0k/y8CQD8Oxr7
dNFFFoY3KnIO9DCRO5Wr+3OqUgtSAQyhBDf5V2wSMCFrwPHKsvWKSbdiWR3Qe4ck
41InWgawB3xx57+vXraDUA10+nBZ1VrM92ObqfLPTFqjLCom6Fm85cG4YFRLIvRt
rdlOC+ScpeVpec7MwcHrScGL0HmUgPnShDAo07pRy4oKK+c89sXzdAFf2nYJTAWS
WCuChrn7VFM=
=D+Yw
-----END PGP SIGNATURE-----
Merge tag 'pull-request-2023-01-09' of https://gitlab.com/thuth/qemu into staging
* s390x header clean-ups from Philippe
* Rework and improvements of the EINTR handling by Nikita
* Deprecate the -no-hpet command line option
* Disable the qtests in the 32-bit Windows CI job again
* Some other misc fixes here and there
# gpg: Signature made Mon 09 Jan 2023 14:21:19 GMT
# gpg: using RSA key 27B88847EEE0250118F3EAB92ED9D774FE702DB5
# gpg: issuer "thuth@redhat.com"
# gpg: Good signature from "Thomas Huth <th.huth@gmx.de>" [full]
# gpg: aka "Thomas Huth <thuth@redhat.com>" [full]
# gpg: aka "Thomas Huth <huth@tuxfamily.org>" [full]
# gpg: aka "Thomas Huth <th.huth@posteo.de>" [unknown]
# Primary key fingerprint: 27B8 8847 EEE0 2501 18F3 EAB9 2ED9 D774 FE70 2DB5
* tag 'pull-request-2023-01-09' of https://gitlab.com/thuth/qemu:
.gitlab-ci.d/windows: Do not run the qtests in the msys2-32bit job
error handling: Use RETRY_ON_EINTR() macro where applicable
Refactoring: refactor TFR() macro to RETRY_ON_EINTR()
docs/interop: Change the vnc-ledstate-Pseudo-encoding doc into .rst
i386: Deprecate the -no-hpet QEMU command line option
tests/qtest/bios-tables-test: Replace -no-hpet with hpet=off machine parameter
tests/readconfig: spice doesn't support unix socket on windows yet
target/s390x: Restrict sysemu/reset.h to system emulation
target/s390x/tcg/excp_helper: Restrict system headers to sysemu
target/s390x/tcg/misc_helper: Remove unused "memory.h" include
hw/s390x/pv: Restrict Protected Virtualization to sysemu
exec/memory: Expose memory_region_access_valid()
MAINTAINERS: Add MIPS-related docs and configs to the MIPS architecture section
tests/vm: Update get_default_jobs() to work on non-x86_64 non-KVM hosts
qemu-iotests/stream-under-throttle: do not shutdown QEMU
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
In user emulation, threads -- implemented as CPUs -- are
created/destroyed, but never reset. There is no point in
allowing user emulation to access the sysemu/reset API.
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20221220145625.26392-5-philmd@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20221217152454.96388-6-philmd@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20221217152454.96388-5-philmd@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Protected Virtualization is irrelevant in user emulation.
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20221217152454.96388-4-philmd@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Thomas Huth <thuth@redhat.com>
* Always send errors to logfile when daemonized (Greg)
* Add support for IDE CompactFlash card (Lubomir)
* First round of build system cleanups (myself)
* First round of feature removals (myself)
* Reduce "qemu/accel.h" inclusion (Philippe)
-----BEGIN PGP SIGNATURE-----
iQFIBAABCAAyFiEE8TM4V0tmI4mGbHaCv/vSX3jHroMFAmO3Ym0UHHBib256aW5p
QHJlZGhhdC5jb20ACgkQv/vSX3jHroNYmwf+LHEw+4T0fk1+2NfgIzH3+8s1EqDm
Ai56EjxO/p5NUptflXAnhn4P3LawswmmNE0ZIFFFBgwG5E9L+Jj/u5efuLu4uYPg
bboEBDn8nxSNN2l08u9TyS6kSWSxbwwrs7i2+V+4uQIlVIcCHu+A0vpXns4vWwY0
zZGF8CgJKDQdPIxdXrH8+6/xtadQ8uDkYsAWDiY/nhozCsCUTAZGTXWEQbHJLARI
Z4X+Cmz/NFB9G4ka6K/y0HbQw99KA8G/EMPUSglN0ya10yjpyzrmeI7IlIves+5U
8lhCZXyBhaV9GXlIK1vIgEXlHf83C19a+v0DpW0bpxK631n2VR5y3CArBg==
=2Koq
-----END PGP SIGNATURE-----
Merge tag 'for-upstream' of https://gitlab.com/bonzini/qemu into staging
* Atomic memslot updates for KVM (Emanuele, David)
* Always send errors to logfile when daemonized (Greg)
* Add support for IDE CompactFlash card (Lubomir)
* First round of build system cleanups (myself)
* First round of feature removals (myself)
* Reduce "qemu/accel.h" inclusion (Philippe)
# gpg: Signature made Thu 05 Jan 2023 23:51:09 GMT
# gpg: using RSA key F13338574B662389866C7682BFFBD25F78C7AE83
# gpg: issuer "pbonzini@redhat.com"
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" [full]
# gpg: aka "Paolo Bonzini <pbonzini@redhat.com>" [full]
# Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4 E2F7 7E15 100C CD36 69B1
# Subkey fingerprint: F133 3857 4B66 2389 866C 7682 BFFB D25F 78C7 AE83
* tag 'for-upstream' of https://gitlab.com/bonzini/qemu: (24 commits)
i386: SGX: remove deprecated member of SGXInfo
target/i386: Add SGX aex-notify and EDECCSSA support
util: remove support -chardev tty and -chardev parport
util: remove support for hex numbers with a scaling suffix
KVM: remove support for kernel-irqchip=off
docs: do not talk about past removal as happening in the future
meson: accept relative symlinks in "meson introspect --installed" data
meson: cleanup compiler detection
meson: support meson 0.64 -Doptimization=plain
configure: test all warnings
tests/qapi-schema: remove Meson workaround
meson: cleanup dummy-cpus.c rules
meson: tweak hardening options for Windows
configure: remove backwards-compatibility and obsolete options
configure: preserve qemu-ga variables
configure: cleanup $cpu tests
configure: remove dead function
configure: remove useless write_c_skeleton
ide: Add "ide-cf" driver, a CompactFlash card
ide: Add 8-bit data mode
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* Fix PMP propagation for tlb
* Collection of bug fixes
* Bump the OpenTitan supported version
* Add smstateen support
* Support native debug icount trigger
* Remove the redundant ipi-id property in the virt machine
* Support cache-related PMU events in virtual mode
* Add some missing PolarFire SoC io regions
* Fix mret exception cause when no pmp rule is configured
* Fix bug where disabling compressed instructions would crash QEMU
* Add Zawrs ISA extension support
* A range of code refactoring and cleanups
-----BEGIN PGP SIGNATURE-----
iQEzBAABCAAdFiEE9sSsRtSTSGjTuM6PIeENKd+XcFQFAmO3b5sACgkQIeENKd+X
cFTD8Af+L0OaGzE4C0jil7LdITFKk7ltdTg3yw84ZBvIxrEWMWlt5Yj3Ez5OIPbY
WpPmCLoJ9XM/5CV5PuPjxcFFExWjXLNeMEvaaT/3/3jPPnA/M/AbZa4hefKRluTg
kkvBiOuRMPXiYLki5xAHmjD/1j6SQ8ghylPLxYQqyGq87WZt4Dx9msOTZLVzmmYl
8y9oC8j6yO2UBZYC1TxSkKxFbj+Cp2HmtBZ09tDzwRL6DpRvIlfftiLl8X3awMlK
YTzCIrmllc38/+cV9IpQhdXzDUZ1kM7Zy56JbJl3XOsS4VnUYGmEtrKYpYQ2CKLY
/tcmrDKNw1ArWcP1axNN8FHfhy1FyQ==
=SH+C
-----END PGP SIGNATURE-----
Merge tag 'pull-riscv-to-apply-20230106' of https://github.com/alistair23/qemu into staging
First RISC-V PR for QEMU 8.0
* Fix PMP propagation for tlb
* Collection of bug fixes
* Bump the OpenTitan supported version
* Add smstateen support
* Support native debug icount trigger
* Remove the redundant ipi-id property in the virt machine
* Support cache-related PMU events in virtual mode
* Add some missing PolarFire SoC io regions
* Fix mret exception cause when no pmp rule is configured
* Fix bug where disabling compressed instructions would crash QEMU
* Add Zawrs ISA extension support
* A range of code refactoring and cleanups
# gpg: Signature made Fri 06 Jan 2023 00:47:23 GMT
# gpg: using RSA key F6C4AC46D4934868D3B8CE8F21E10D29DF977054
# gpg: Good signature from "Alistair Francis <alistair@alistair23.me>" [full]
# Primary key fingerprint: F6C4 AC46 D493 4868 D3B8 CE8F 21E1 0D29 DF97 7054
* tag 'pull-riscv-to-apply-20230106' of https://github.com/alistair23/qemu: (43 commits)
hw/intc: sifive_plic: Fix the pending register range check
hw/riscv: opentitan: Drop "hartid-base" and "priority-base" initialization
hw/intc: sifive_plic: Change "priority-base" to start from interrupt source 0
hw/riscv: virt: Fix the value of "riscv,ndev" in the dtb
hw/riscv: sifive_u: Avoid using magic number for "riscv,ndev"
hw/riscv: sifive_e: Fix the number of interrupt sources of PLIC
hw/riscv: microchip_pfsoc: Fix the number of interrupt sources of PLIC
hw/intc: sifive_plic: Update "num-sources" property default value
hw/intc: sifive_plic: Use error_setg() to propagate the error up via errp in sifive_plic_realize()
hw/intc: sifive_plic: Improve robustness of the PLIC config parser
hw/intc: sifive_plic: Drop PLICMode_H
hw/riscv: spike: Remove misleading comments
hw/riscv: Sort machines Kconfig options in alphabetical order
hw/riscv: Fix opentitan dependency to SIFIVE_PLIC
hw/intc: Select MSI_NONBROKEN in RISC-V AIA interrupt controllers
hw/riscv: Select MSI_NONBROKEN in SIFIVE_PLIC
RISC-V: Add Zawrs ISA extension support
target/riscv: Clear mstatus.MPRV when leaving M-mode for priv spec 1.12+
target/riscv: Simplify helper_sret() a little bit
target/riscv: Set pc_succ_insn for !rvc illegal insn
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Add tcg backend paired register allocation.
Cleanup tcg backend function call abi.
-----BEGIN PGP SIGNATURE-----
iQFRBAABCgA7FiEEekgeeIaLTbaoWgXAZN846K9+IV8FAmO3kZEdHHJpY2hhcmQu
aGVuZGVyc29uQGxpbmFyby5vcmcACgkQZN846K9+IV/JpwgAj9kwpiWehGWrpQp9
rbEL+Fsx+SDhnoLVpF6nmSB1nkDqdgkdnhyRaLX9wM69bnocsGppZ5sd57J/cH3m
WiODVVbWP80WHonx5EN4htQv99TZWqVmXVl11DwOfsRUmINl4GG4kvHOOABd8hdc
39eRgGBBMyMShc6MUJiToyjEAcZPcGAiHkSW9YDGbvzhlloNWh46eLP1bdW3UJWK
UiEwPpXqg+L0V8nuuQnSFoPr5FIJmmoTeiGCRHXtvgOT7J8/6eKUESpfcKkHq1ye
dwcJQATuZip3+hyCCVveiZ86TQ81RMp9en1qw+HVzfed1Ial3Tk+tqiDqZJFm25b
GMpa5g==
=OjPl
-----END PGP SIGNATURE-----
Merge tag 'pull-tcg-20230105' of https://gitlab.com/rth7680/qemu into staging
Fix race conditions in new user-only vma tracking.
Add tcg backend paired register allocation.
Cleanup tcg backend function call abi.
# gpg: Signature made Fri 06 Jan 2023 03:12:17 GMT
# gpg: using RSA key 7A481E78868B4DB6A85A05C064DF38E8AF7E215F
# gpg: issuer "richard.henderson@linaro.org"
# gpg: Good signature from "Richard Henderson <richard.henderson@linaro.org>" [full]
# Primary key fingerprint: 7A48 1E78 868B 4DB6 A85A 05C0 64DF 38E8 AF7E 215F
* tag 'pull-tcg-20230105' of https://gitlab.com/rth7680/qemu: (47 commits)
tests/tcg/multiarch: add vma-pthread.c
accel/tcg: Handle false negative lookup in page_check_range
accel/tcg: Use g_free_rcu for user-exec interval trees
accel/tcg: Fix tb_invalidate_phys_page_unwind
tcg: Add TCGHelperInfo argument to tcg_out_call
tcg/aarch64: Merge tcg_out_callr into tcg_out_call
tcg: Move ffi_cif pointer into TCGHelperInfo
tcg: Factor init_ffi_layouts() out of tcg_context_init()
tcg: Convert typecode_to_ffi from array to function
tcg: Reorg function calls
tcg: Use output_pref wrapper function
tcg: Vary the allocation size for TCGOp
tcg: Pass number of arguments to tcg_emit_op() / tcg_op_insert_*()
accel/tcg/plugin: Use copy_op in append_{udata,mem}_cb
accel/tcg/plugin: Avoid duplicate copy in copy_call
accel/tcg/plugin: Don't search for the function pointer index
tcg: Use TCG_CALL_ARG_EVEN for TCI special case
tcg: Replace TCG_TARGET_EXTEND_ARGS with TCG_TARGET_CALL_ARG_I32
tcg: Replace TCG_TARGET_CALL_ALIGN_ARGS with TCG_TARGET_CALL_ARG_I64
tcg: Introduce TCGCallReturnKind and TCGCallArgumentKind
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-----BEGIN PGP SIGNATURE-----
iQEzBAABCgAdFiEENjXHiM5iuR/UxZq0ewJE+xLeRCIFAmO3Cl8ACgkQewJE+xLe
RCKopQf/eVpmA04C792MIYHJnAaASbXZ1FA2Q33l9zrPt9V5oL0cer+rNPlqwVIb
jUdFLrT6ppe3jgkzeZVzLTGoNiLO1BnpH3+NV5ZpnMBON4g6/uyagRJekvb5xcDw
a832LM77zWrSQbV+Z3C0sn87j7u0YFYiXtya3mJUv6iSfKZnR4bGZH+LW2dOrnXn
+uMGnjjUQ2Ac7mvHTnrtooVUNhgRpTq4tMBwl1mE/hacUuejBjDgMLHDb6e4yPC7
g0/BuMB1ygBYFDINJ9El5oD0JtYHZjHOX4TKs6i0oXntoeveut62oGRwgLrk8eRe
lwKiEvFNrz/RYmCIy8Pz7s+5HQUgqA==
=/i3r
-----END PGP SIGNATURE-----
Merge tag 'pull-hex-20230105' of https://github.com/quic/qemu into staging
Hexagon update: patches from several folks
# gpg: Signature made Thu 05 Jan 2023 17:35:27 GMT
# gpg: using RSA key 3635C788CE62B91FD4C59AB47B0244FB12DE4422
# gpg: Good signature from "Taylor Simpson (Rock on) <tsimpson@quicinc.com>" [unknown]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 3635 C788 CE62 B91F D4C5 9AB4 7B02 44FB 12DE 4422
* tag 'pull-hex-20230105' of https://github.com/quic/qemu:
Update scripts/meson-buildoptions.sh
Hexagon (target/hexagon) implement mutability mask for GPRs
target/hexagon: suppress unused variable warning
target/hexagon/idef-parser: fix two typos in README
tests/tcg/hexagon: fix underspecifed asm constraints
target/hexagon: rename aliased register HEX_REG_P3_0
linux-user/hexagon: fix signal context save & restore
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This patch adds support for the Zawrs ISA extension.
Given the current (incomplete) implementation of reservation sets
there seems to be no way to provide a full emulation of the WRS
instruction (wake on reservation set invalidation or timeout or
interrupt). Therefore, we just exit the TB and return to the main loop.
The specification can be found here:
https://github.com/riscv/riscv-zawrs/blob/main/zawrs.adoc
Note that the Zawrs extension is frozen but not yet ratified.
Changes since v3:
* Remove "RFC" since the extension is frozen
* Rebase on master and fix integration issues
* Fix entry ordering in extension list
Changes since v2:
* Rebase on master and resolve conflicts
* Adjustments according to a specification change
* Inline REQUIRE_ZAWRS() since it has only one user
Changes since v1:
* Adding zawrs to the ISA string that is passed to the kernel
Signed-off-by: Christoph Müllner <christoph.muellner@vrull.eu>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20221005144948.3421504-1-christoph.muellner@vrull.eu>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Since priv spec v1.12, MRET and SRET now clear mstatus.MPRV when
leaving M-mode.
Signed-off-by: Bin Meng <bmeng@tinylab.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20221207090037.281452-2-bmeng@tinylab.org>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
There are 2 paths in helper_sret() and the same mstatus update codes
are replicated. Extract the common parts to simplify it a little bit.
Signed-off-by: Bin Meng <bmeng@tinylab.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20221207090037.281452-1-bmeng@tinylab.org>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Failure to set pc_succ_insn may result in a TB covering zero bytes,
which triggers an assert within the code generator.
Cc: qemu-stable@nongnu.org
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1224
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20221203175744.151365-1-richard.henderson@linaro.org>
[ Changes by AF:
- Add missing run-plugin-test-noc-% line
]
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The priv spec v1.12 says:
If no PMP entry matches an M-mode access, the access succeeds. If
no PMP entry matches an S-mode or U-mode access, but at least one
PMP entry is implemented, the access fails. Failed accesses generate
an instruction, load, or store access-fault exception.
At present the exception cause is set to 'illegal instruction' but
should have been 'instruction access fault'.
Fixes: d102f19a20 ("target/riscv/pmp: Raise exception if no PMP entry is configured")
Signed-off-by: Bin Meng <bmeng@tinylab.org>
Reviewed-by: Wilfred Mallawa <wilfred.mallawa@wdc.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20221205065303.204095-1-bmeng@tinylab.org>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
sstatus register dump is currently missing in riscv_cpu_dump_state().
As sstatus is a copy of mstatus, which is described in the priv spec,
it seems redundant to print the same information twice.
Add some comments for this to let people know this is intentional.
Signed-off-by: Bin Meng <bmeng@tinylab.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20221125050354.3166023-1-bmeng@tinylab.org>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Let the tlb_fill() function also increment the PMU counter when the fill
comes from two-stage translation, so that QEMU can also monitor these PMU
events when the CPU runs in VS/VU mode (e.g. when running a guest OS).
Signed-off-by: Jim Shu <jim.shu@sifive.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20221123090635.6574-1-jim.shu@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
We should use "&&" instead of "&" when checking hcounteren.TM and
henvcfg.STCE bits.
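As a small illustration of why this matters (the bit positions below are
assumptions for the sketch, not copied from the QEMU source), bitwise '&'
on two masks that test different bit positions evaluates to 0 even when
both conditions hold:

    #include <stdbool.h>
    #include <stdint.h>

    #define COUNTEREN_TM  (1u << 1)     /* assumed hcounteren.TM position */
    #define HENVCFG_STCE  (1ull << 63)  /* assumed henvcfg.STCE position  */

    static bool stce_access_allowed(uint32_t hcounteren, uint64_t henvcfg)
    {
        /* Buggy: (hcounteren & COUNTEREN_TM) & (henvcfg & HENVCFG_STCE)
         * ANDs unrelated bit positions and is always 0 here.
         * Correct: combine the two boolean tests with '&&'. */
        return (hcounteren & COUNTEREN_TM) && (henvcfg & HENVCFG_STCE);
    }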
Fixes: 3ec0fe18a3 ("target/riscv: Add vstimecmp suppor")
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20221108125703.1463577-2-apatel@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Avoid calling riscv_itrigger_enabled() when calculating the tbflags.
As the itrigger enable status can only change on a tdata1 write, a
migration load, or when the itrigger fires, update env->itrigger_enabled
at those places.
Signed-off-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20221013062946.7530-5-zhiwei_liu@linux.alibaba.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
When QEMU is not in icount mode, instructions are executed one by one, so
tdata1 can be read directly.
When QEMU is in icount mode, a timer is used to simulate the itrigger, so
tdata1 may be stale because the count stored in it is updated lazily. Thus,
we should pack the adjusted count into tdata1 before reading it back.
Signed-off-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20221013062946.7530-4-zhiwei_liu@linux.alibaba.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The max count in itrigger can be 0x3FFF, which would cause non-trivial
translation and execution overhead.
When icount is enabled, QEMU provides an API that can fetch the number of
executed guest instructions. Thus, we can set a timer for the itrigger with
the count as the deadline.
Only when the timer expires or the privilege mode changes do we lazily
update the count.
Signed-off-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20221013062946.7530-3-zhiwei_liu@linux.alibaba.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
When icount is not enabled, there is no API in QEMU that can return the
number of executed guest instructions.
Translate the guest code in a way that each TB only has one instruction.
After executing the instruction, decrease the count by 1 until it reaches 0,
at which point the itrigger fires.
Note that the count only decreases when the privilege mode matches the
itrigger configuration.
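A hedged sketch of the counting scheme (the struct and helper names below
are made up for illustration): each TB holds a single instruction, and
after it executes a helper decrements the remaining count, firing the
trigger when it reaches zero:

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct ITriggerState {
        uint64_t count;      /* remaining instructions before the trigger */
        bool enabled;
        bool priv_matches;   /* privilege mode matches the trigger config */
    } ITriggerState;

    /* Called once per executed instruction (one instruction per TB). */
    static bool itrigger_tick(ITriggerState *it)
    {
        if (!it->enabled || !it->priv_matches) {
            return false;    /* non-matching privilege does not count */
        }
        if (it->count > 0 && --it->count == 0) {
            return true;     /* caller raises the debug exception */
        }
        return false;
    }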
Signed-off-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20221013062946.7530-2-zhiwei_liu@linux.alibaba.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
This patch adds a mechanism to generate a virtual instruction
exception instead of an illegal instruction exception during
instruction decode when virt is enabled.
Signed-off-by: Mayuresh Chitale <mchitale@ventanamicro.com>
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20221016124726.102129-4-mchitale@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Accesses to henvcfg, henvcfgh and senvcfg are allowed only if the corresponding
bit in mstateen0/hstateen0 is enabled. Otherwise an illegal instruction trap is
generated.
Signed-off-by: Mayuresh Chitale <mchitale@ventanamicro.com>
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20221016124726.102129-3-mchitale@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The Smstateen extension specifies a mechanism to close
potential covert channels that could cause security issues.
This patch adds the CSRs defined in the specification and
the corresponding predicates and read/write functions.
Signed-off-by: Mayuresh Chitale <mchitale@ventanamicro.com>
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20221016124726.102129-2-mchitale@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Only the PMP index that has been checked by pmp_hart_has_privs() should be
used by pmp_get_tlb_size(), to avoid using a wrong PMP index.
Before this change, we may use a wrong PMP index. For example, we check
address 0x4fc with size 0x4 in pmp_hart_has_privs(). If there is a PMP rule
whose valid range is [0x4fc, 0x500), then pmp_hart_has_privs() will return
true.
However, the matching PMP index is discarded because pmp_hart_has_privs()
returns a bool. pmp_is_range_in_tlb() then traverses all PMP rules with
tlb_sa = 0x0 and tlb_ea = 0xfff, so an unrelated rule such as [0x10, 0x14)
would be misused because it looks legal to pmp_get_tlb_size().
As we already know the correct PMP index, just remove pmp_is_range_in_tlb()
and get the TLB size directly from pmp_get_tlb_size().
Signed-off-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20221012060016.30856-1-zhiwei_liu@linux.alibaba.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The new SGX Asynchronous Exit (AEX) notification mechanism (AEX-notify)
allows an enclave to receive a notification in the ERESUME after the
enclave exits due to an AEX. EDECCSSA is a new SGX user leaf function
(ENCLU[EDECCSSA]) to facilitate the AEX notification handling.
Whether the hardware supports creating enclaves with AEX-notify is
enumerated via CPUID.(EAX=0x12,ECX=0x1):EAX[10]. The new EDECCSSA user
leaf function is enumerated via CPUID.(EAX=0x12,ECX=0x0):EAX[11].
Add support to expose the new SGX AEX-notify feature and the new
EDECCSSA user leaf function to KVM guests.
Link: https://lore.kernel.org/lkml/166760360549.4906.809756297092548496.tip-bot2@tip-bot2/
Link: https://lore.kernel.org/lkml/166760360934.4906.2427175408052308969.tip-bot2@tip-bot2/
Reviewed-by: Yang Zhong <yang.zhong@linux.intel.com>
Signed-off-by: Kai Huang <kai.huang@intel.com>
Message-Id: <20221109024834.172705-1-kai.huang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-machine kernel-irqchip=off is broken for many guest OSes; kernel-irqchip=split
is the replacement that works, so remove the deprecated support for the former.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Use the official extend/extract functions instead of routines
that will shortly be internal to tcg.
Cc: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Some registers are defined to have immutable bits; this commit
implements that behavior.
Signed-off-by: Marco Liebel <quic_mliebel@quicinc.com>
Reviewed-by: Taylor Simpson <tsimpson@quicinc.com>
Signed-off-by: Taylor Simpson <tsimpson@quicinc.com>
Message-Id: <20230105102349.2181856-1-quic_mliebel@quicinc.com>
This patch manually suppresses a warning for an unused variable
(yynerrs) emitted by bison.
This warning has been triggered for the first time by clang 15.
This patch also disables `-Wextra`, which is not usually adopted in
QEMU. However, clang 15 triggers the warning fixed in this patch even in
the absence of `-Wextra`.
Signed-off-by: Alessandro Di Federico <ale@rev.ng>
Signed-off-by: Taylor Simpson <tsimpson@quicinc.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Taylor Simpson <tsimpson@quicinc.com>
Tested-by: Taylor Simpson <tsimpson@quicinc.com>
Message-Id: <20221221155327.1504117-1-ale@rev.ng>
The patch renames the identifier of the 32-bit register
HEX_REG_P3_0 to HEX_REG_P3_0_ALIASED.
This change is intended to provide some warning that
HEX_REG_P3_0 is an aliased register which has multiple
representations in CPU state and therefore might require
special handling in some contexts. The hope is to prevent
accidental misuse of this register, e.g. the issue reported
for the signals test failure [here][1].
[1]: https://lists.gnu.org/archive/html/qemu-devel/2021-11/msg01102.html
Signed-off-by: Mukilan Thiyagarajan <quic_mthiyaga@quicinc.com>
Signed-off-by: Taylor Simpson <tsimpson@quicinc.com>
Reviewed-by: Taylor Simpson <tsimpson@quicinc.com>
Message-Id: <20221229092006.10709-3-quic_mthiyaga@quicinc.com>
In CPUID registers exposed to userspace, some registers were missing
and some fields were not exposed. This patch aligns exposed ID
registers and their fields with what the upstream kernel currently
exposes.
Specifically, the following new ID registers/fields are exposed to
userspace:
ID_AA64PFR1_EL1.BT: bits 3-0
ID_AA64PFR1_EL1.MTE: bits 11-8
ID_AA64PFR1_EL1.SME: bits 27-24
ID_AA64ZFR0_EL1.SVEver: bits 3-0
ID_AA64ZFR0_EL1.AES: bits 7-4
ID_AA64ZFR0_EL1.BitPerm: bits 19-16
ID_AA64ZFR0_EL1.BF16: bits 23-20
ID_AA64ZFR0_EL1.SHA3: bits 35-32
ID_AA64ZFR0_EL1.SM4: bits 43-40
ID_AA64ZFR0_EL1.I8MM: bits 47-44
ID_AA64ZFR0_EL1.F32MM: bits 55-52
ID_AA64ZFR0_EL1.F64MM: bits 59-56
ID_AA64SMFR0_EL1.F32F32: bit 32
ID_AA64SMFR0_EL1.B16F32: bit 34
ID_AA64SMFR0_EL1.F16F32: bit 35
ID_AA64SMFR0_EL1.I8I32: bits 39-36
ID_AA64SMFR0_EL1.F64F64: bit 48
ID_AA64SMFR0_EL1.I16I64: bits 55-52
ID_AA64SMFR0_EL1.FA64: bit 63
ID_AA64MMFR0_EL1.ECV: bits 63-60
ID_AA64MMFR1_EL1.AFP: bits 47-44
ID_AA64MMFR2_EL1.AT: bits 35-32
ID_AA64ISAR0_EL1.RNDR: bits 63-60
ID_AA64ISAR1_EL1.FRINTTS: bits 35-32
ID_AA64ISAR1_EL1.BF16: bits 47-44
ID_AA64ISAR1_EL1.DGH: bits 51-48
ID_AA64ISAR1_EL1.I8MM: bits 55-52
ID_AA64ISAR2_EL1.WFxT: bits 3-0
ID_AA64ISAR2_EL1.RPRES: bits 7-4
ID_AA64ISAR2_EL1.GPA3: bits 11-8
ID_AA64ISAR2_EL1.APA3: bits 15-12
The code is also refactored to use symbolic names for ID register fields
for better readability and maintainability.
The test case in tests/tcg/aarch64/sysregs.c is also updated to match
the intended behavior.
Signed-off-by: Zhuojia Shen <chaosdefinition@hotmail.com>
Message-id: DS7PR12MB6309FB585E10772928F14271ACE79@DS7PR12MB6309.namprd12.prod.outlook.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
[PMM: use Sn_n_Cn_Cn_n syntax to work with older assemblers
that don't recognize id_aa64isar2_el1 and id_aa64mmfr2_el1]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Remove some unused headers.
Signed-off-by: Claudio Fontana <cfontana@suse.de>
Acked-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Claudio Fontana <cfontana@suse.de>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
Message-id: 20221213190537.511-7-farosas@suse.de
[added back some includes that are still needed at this point]
Signed-off-by: Fabiano Rosas <farosas@suse.de>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Fix the following:
ERROR: spaces required around that '|' (ctx:VxV)
ERROR: space required before the open parenthesis '('
ERROR: spaces required around that '+' (ctx:VxB)
ERROR: space prohibited between function name and open parenthesis '('
(the last two still have some occurrences in macros which I left
behind because it might impact readability)
Signed-off-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Claudio Fontana <cfontana@suse.de>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Message-id: 20221213190537.511-3-farosas@suse.de
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Fix these:
WARNING: Block comments use a leading /* on a separate line
WARNING: Block comments use * on subsequent lines
WARNING: Block comments use a trailing */ on a separate line
Signed-off-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Claudio Fontana <cfontana@suse.de>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Message-id: 20221213190537.511-2-farosas@suse.de
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The check semihosting_enabled() wants to know if the guest is
currently in user mode. Unlike the other cases, the test was inverted,
causing us to block semihosting calls in non-EL0 modes.
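A sketch of the inverted condition, assuming the
semihosting_enabled(bool is_userspace) interface implied above; the exact
call site is not reproduced here:

    #include <stdbool.h>

    bool semihosting_enabled(bool is_userspace);   /* assumed prototype */

    static bool may_do_semihosting(int current_el)
    {
        /* The buggy test passed 'current_el != 0', so privileged (non-EL0)
         * code was treated as userspace and its semihosting calls were
         * blocked; passing 'current_el == 0' restores the intent. */
        return semihosting_enabled(current_el == 0);
    }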
Cc: qemu-stable@nongnu.org
Fixes: 19b26317e9 (target/arm: Honour -semihosting-config userspace=on)
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
All constants are taken from the ARM Cortex-R52 Processor TRM Revision: r1p3
Signed-off-by: Tobias Röhmel <tobias.roehmel@rwth-aachen.de>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20221206102504.165775-8-tobias.roehmel@rwth-aachen.de
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
ARMv8-R AArch32 CPUs behave as if TTBCR.EAE is always 1 even
though they don't have the TTBCR register.
See ARM Architecture Reference Manual Supplement - ARMv8, for the ARMv8-R
AArch32 architecture profile Version:A.c section C1.2.
Signed-off-by: Tobias Röhmel <tobias.roehmel@rwth-aachen.de>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20221206102504.165775-5-tobias.roehmel@rwth-aachen.de
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The v8R PMSAv8 has a two-stage MPU translation process, but, unlike
VMSAv8, the stage 2 attributes are in the same format as the stage 1
attributes (8-bit MAIR format). Rather than converting the MAIR
format to the format used for VMSA stage 2 (bits [5:2] of a VMSA
stage 2 descriptor) and then converting back to do the attribute
combination, allow combined_attrs_nofwb() to accept s2 attributes
that are already in the MAIR format.
We move the assert() to combined_attrs_fwb(), because that function
really does require a VMSA stage 2 attribute format. (We will never
get there for v8R, because PMSAv8 does not implement FEAT_S2FWB.)
Signed-off-by: Tobias Röhmel <tobias.roehmel@rwth-aachen.de>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20221206102504.165775-4-tobias.roehmel@rwth-aachen.de
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
RVBAR shadows RVBAR_ELx where x is the highest exception
level if the highest EL is not EL3. This patch also allows
ARMv8 CPUs to change the reset address with
the rvbar property.
Signed-off-by: Tobias Röhmel <tobias.roehmel@rwth-aachen.de>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20221206102504.165775-3-tobias.roehmel@rwth-aachen.de
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Cores with PMSA have the MPUIR register which has the
same encoding as the MIDR alias with opc2=4. So we only
add that alias if we are not realizing a core that
implements PMSA.
Signed-off-by: Tobias Röhmel <tobias.roehmel@rwth-aachen.de>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221206102504.165775-2-tobias.roehmel@rwth-aachen.de
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
In get_phys_addr_twostage() we set the lg_page_size of the result to
the maximum of the stage 1 and stage 2 page sizes. This works for
the case where we do want to create a TLB entry, because we know the
common TLB code only creates entries of the TARGET_PAGE_SIZE and
asking for a size larger than that only means that invalidations
invalidate the whole larger area. However, if lg_page_size is
smaller than TARGET_PAGE_SIZE this effectively means "don't create a
TLB entry"; in this case if either S1 or S2 said "this covers less
than a page and can't go in a TLB" then the final result also should
be marked that way. Set the resulting page size to 0 if either
stage asked for a less-than-a-page entry, and expand the comment
to explain what's going on.
This has no effect for VMSA because currently the VMSA lookup always
returns results that cover at least TARGET_PAGE_SIZE; however when we
add v8R support it will reuse this code path, and for v8R the S1 and
S2 results can be smaller than TARGET_PAGE_SIZE.
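A hypothetical sketch of the combination rule described above (constant
and parameter names are illustrative, not the actual
get_phys_addr_twostage() code):

    #include <stdint.h>

    #define TARGET_PAGE_BITS 12   /* assumed 4 KiB target page size */

    static uint8_t combine_lg_page_size(uint8_t s1_lg, uint8_t s2_lg)
    {
        if (s1_lg < TARGET_PAGE_BITS || s2_lg < TARGET_PAGE_BITS) {
            /* Either stage covers less than a page: signal "do not create
             * a TLB entry" by returning 0. */
            return 0;
        }
        /* Otherwise take the larger region so that invalidations of the
         * larger area also invalidate this entry. */
        return s1_lg > s2_lg ? s1_lg : s2_lg;
    }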
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221212142708.610090-1-peter.maydell@linaro.org
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
In addition, use tcg_enabled instead of !kvm_enabled.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Add checks to the hashst and hashchk instructions so that they only execute
if enabled by the relevant aspect in the DEXCR and HDEXCR.
This behaviour is guarded behind TARGET_PPC64 since Power10 is
currently the only implementation which has the DEXCR.
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Signed-off-by: Nicholas Miehlbradt <nicholas@linux.ibm.com>
Message-Id: <20221220042330.2387944-3-nicholas@linux.ibm.com>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Define the DEXCR and HDEXCR as special purpose registers.
Each register occupies two SPR indices, one which can be read in an
unprivileged state and one which can be modified in the appropriate
privileged state; however, both indices refer to the same underlying
value.
Note that the ISA uses the abbreviation UDEXCR in two different
contexts: the userspace DEXCR, the SPR index which can be read from
userspace (implemented in this patch), and the ultravisor DEXCR, the
equivalent register for the ultravisor state (not implemented).
Signed-off-by: Nicholas Miehlbradt <nicholas@linux.ibm.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20221220042330.2387944-2-nicholas@linux.ibm.com>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Let's not leave developers in the dark about where this log message comes
from.
Signed-off-by: Bernhard Beschow <shentey@gmail.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20221216145709.271940-2-shentey@gmail.com>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
kvm_ppc.h is missing various declarations from "cpu.h":
target/ppc/kvm_ppc.h:128:40: error: unknown type name 'CPUPPCState'; did you mean 'CPUState'?
static inline int kvmppc_get_hypercall(CPUPPCState *env,
^~~~~~~~~~~
CPUState
include/qemu/typedefs.h:45:25: note: 'CPUState' declared here
typedef struct CPUState CPUState;
^
target/ppc/kvm_ppc.h:134:40: error: unknown type name 'PowerPCCPU'
static inline int kvmppc_set_interrupt(PowerPCCPU *cpu, int irq, int level)
^
target/ppc/kvm_ppc.h:285:38: error: unknown type name 'hwaddr'
hwaddr ptex, int n)
^
target/ppc/kvm_ppc.h:220:15: error: unknown type name 'target_ulong'
static inline target_ulong kvmppc_configure_v3_mmu(PowerPCCPU *cpu,
^
target/ppc/kvm_ppc.h:286:38: error: unknown type name 'ppc_hash_pte64_t'
static inline void kvmppc_read_hptes(ppc_hash_pte64_t *hptes,
^
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20221213123550.39302-2-philmd@linaro.org>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
The fid instruction (Floating-Point Identify) puts the FPU model and
revision into the Status Register. Since those values shouldn't be 0,
store values there which a PCX-L2 (for 32-bit) or a PCX-W2 (for 64-bit)
would return. Noticed while trying to install MPE/iX.
Signed-off-by: Helge Deller <deller@gmx.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
QEMU currently emulates a 32-bit CPU only, and crashes with this error
when it faces a 64-bit load (e.g. "ldd 0(r26),r0") or a 64-bit store
(e.g. "std r26,0(r26)") instruction in the guest:
ERROR:../qemu/tcg/tcg-op.c:2822:tcg_canonicalize_memop: code should not be reached
Add checks for 64-bit sizes and generate an illegal instruction
exception if necessary.
Signed-off-by: Helge Deller <deller@gmx.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Typo had double-writes to data registers.
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1363
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The values in env->flags are a subset of tb->flags.
Restore only the bits that belong.
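A minimal sketch of the masking idea (the mask name is a placeholder, not
necessarily the identifier used in target/sh4):

    #include <stdint.h>

    #define TB_FLAG_ENVFLAGS_MASK 0x00ffffffu   /* placeholder mask */

    static void sync_env_flags(uint32_t *env_flags, uint32_t tb_flags)
    {
        /* Copy only the bits that actually live in env->flags; the rest of
         * tb->flags is translation-time state that must not leak in. */
        *env_flags = tb_flags & TB_FLAG_ENVFLAGS_MASK;
    }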
Cc: qemu-stable@nongnu.org
Fixes: ab419fd8a0 ("target/sh4: Fix TB_FLAG_UNALIGN")
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Message-ID: <20221212011345.GA2235238@roeck-us.net>
[rth: Reduce to only the superh_cpu_synchronize_from_tb change]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Performance improvement
Add pkt and insn to DisasContext
Many functions need information from all 3 structures, so merge
them together.
2)
Bug fix
Fix predicated assignment to .tmp and .cur
3)
Performance improvement
Add overrides for S2_asr_r_r_sat/S2_asl_r_r_sat
These functions will not be handled by idef-parser
4-11)
The final 8 patches improve change-of-flow handling.
Currently, we set the PC to a new address before exiting a TB. The
ultimate goal is to use direct block chaining. However, several steps
are needed along the way.
4)
When a packet has more than one change-of-flow (COF) instruction, only
the first one taken is considered. The runtime bookkeeping is only
needed when there is more than one COF instruction in a packet.
5, 6)
Remove PC and next_PC from the runtime state and always use a
translation-time constant. Note that next_PC is used by call instructions
to set LR and by conditional COF instructions to set the fall-through
address.
7, 8, 9)
Add helper overrides for COF instructions. In particular, we must
distinguish those that use a PC-relative address for the destination.
These are candidates for direct block chaining later.
10)
Use direct block chaining for packets that have a single PC-relative
COF instruction. Instead of generating the code while processing the
instruction, we record the effect in DisasContext and generate the code
during gen_end_tb.
11)
Use direct block chaining for tight loops. We look for TBs that end
with an endloop0 that will branch back to the TB start address.
12-21)
Instruction definition parser (idef-parser) from rev.ng
Parses the instruction semantics and generates TCG
-----BEGIN PGP SIGNATURE-----
iQEzBAABCgAdFiEENjXHiM5iuR/UxZq0ewJE+xLeRCIFAmOc2BEACgkQewJE+xLe
RCKqFwf/U/uWaQiF59OXyLHj9PR/bTf7PmZL12g8MTrntzmtIpRiTQb7ajJaLwyn
TcCG9j9Ss6kWBq+LH5TBvstnSN9/3qEgnj2b26y6EAn85mSh6fai4foUPjXFUy7m
2Of0kuc2WKmwxN9C2iw6Hm6pbL3FSnYzKtBuSFzYyAIS0doLFT97zE97XnBtTQ4C
49JdNgQW9CKt7cCpKTcQA4N3ZO8LdARdvOtTShX1++qd4Trm0haTGRdaygSrTlS7
Eeqs4nbakKEE6VH2iltPGKX+KHbMCf2ZW7lefxHi+EuzE0DBIVoM64UnalyFfcSU
hVMGF15HgAIAjecim0Y4AbPB/zVlEw==
=PC9+
-----END PGP SIGNATURE-----
Merge tag 'pull-hex-20221216-1' of https://github.com/quic/qemu into staging
1)
Performance improvement
Add pkt and insn to DisasContext
Many functions need information from all 3 structures, so merge
them together.
2)
Bug fix
Fix predicated assignment to .tmp and .cur
3)
Performance improvement
Add overrides for S2_asr_r_r_sat/S2_asl_r_r_sat
These functions will not be handled by idef-parser
4-11)
The final 8 patches improve change-of-flow handling.
Currently, we set the PC to a new address before exiting a TB. The
ultimate goal is to use direct block chaining. However, several steps
are needed along the way.
4)
When a packet has more than one change-of-flow (COF) instruction, only
the first one taken is considered. The runtime bookkeeping is only
needed when there is more than one COF instruction in a packet.
5, 6)
Remove PC and next_PC from the runtime state and always use a
translation-time constant. Note that next_PC is used by call instructions
to set LR and by conditional COF instructions to set the fall-through
address.
7, 8, 9)
Add helper overrides for COF instructions. In particular, we must
distinguish those that use a PC-relative address for the destination.
These are candidates for direct block chaining later.
10)
Use direct block chaining for packets that have a single PC-relative
COF instruction. Instead of generating the code while processing the
instruction, we record the effect in DisasContext and generate the code
during gen_end_tb.
11)
Use direct block chaining for tight loops. We look for TBs that end
with an endloop0 that will branch back to the TB start address.
12-21)
Instruction definition parser (idef-parser) from rev.ng
Parses the instruction semantics and generates TCG
# gpg: Signature made Fri 16 Dec 2022 20:41:53 GMT
# gpg: using RSA key 3635C788CE62B91FD4C59AB47B0244FB12DE4422
# gpg: Good signature from "Taylor Simpson (Rock on) <tsimpson@quicinc.com>" [undefined]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 3635 C788 CE62 B91F D4C5 9AB4 7B02 44FB 12DE 4422
* tag 'pull-hex-20221216-1' of https://github.com/quic/qemu: (21 commits)
target/hexagon: import additional tests
target/hexagon: call idef-parser functions
target/hexagon: import parser for idef-parser
target/hexagon: import lexer for idef-parser
target/hexagon: prepare input for the idef-parser
target/hexagon: introduce new helper functions
target/hexagon: make helper functions non-static
target/hexagon: make slot number an unsigned
target/hexagon: import README for idef-parser
target/hexagon: update MAINTAINERS for idef-parser
Hexagon (target/hexagon) Use direct block chaining for tight loops
Hexagon (target/hexagon) Use direct block chaining for direct jump/branch
Hexagon (target/hexagon) Add overrides for various forms of jump
Hexagon (target/hexagon) Add overrides for compound compare and jump
Hexagon (target/hexagon) Add overrides for direct call instructions
Hexagon (target/hexagon) Remove next_PC from runtime state
Hexagon (target/hexagon) Remove PC from the runtime state
Hexagon (target/hexagon) Only use branch_taken when packet has multi cof
Hexagon (target/hexagon) Add overrides for S2_asr_r_r_sat/S2_asl_r_r_sat
Hexagon (target/hexagon) Fix predicated assignment to .tmp and .cur
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Extend gen_tcg_funcs.py in order to emit calls to the functions emitted
by the idef-parser, if available.
Signed-off-by: Alessandro Di Federico <ale@rev.ng>
Signed-off-by: Anton Johansson <anjo@rev.ng>
Signed-off-by: Taylor Simpson <tsimpson@quicinc.com>
Reviewed-by: Taylor Simpson <tsimpson@quicinc.com>
Message-Id: <20220923173831.227551-11-anjo@rev.ng>
Signed-off-by: Alessandro Di Federico <ale@rev.ng>
Signed-off-by: Paolo Montesel <babush@rev.ng>
Signed-off-by: Anton Johansson <anjo@rev.ng>
Signed-off-by: Taylor Simpson <tsimpson@quicinc.com>
Reviewed-by: Taylor Simpson <tsimpson@quicinc.com>
Message-Id: <20220923173831.227551-10-anjo@rev.ng>
Signed-off-by: Alessandro Di Federico <ale@rev.ng>
Signed-off-by: Paolo Montesel <babush@rev.ng>
Signed-off-by: Anton Johansson <anjo@rev.ng>
Signed-off-by: Taylor Simpson <tsimpson@quicinc.com>
Reviewed-by: Taylor Simpson <tsimpson@quicinc.com>
Message-Id: <20220923173831.227551-9-anjo@rev.ng>
Introduce infrastructure necessary to produce a file suitable for being
parsed by the idef-parser. A build option is also added to fully disable
the output of idef-parser, which is useful for debugging.
Signed-off-by: Alessandro Di Federico <ale@rev.ng>
Signed-off-by: Anton Johansson <anjo@rev.ng>
Signed-off-by: Taylor Simpson <tsimpson@quicinc.com>
Reviewed-by: Taylor Simpson <tsimpson@quicinc.com>
Message-Id: <20220923173831.227551-8-anjo@rev.ng>
These helpers will be employed by the idef-parser generated code, to
correctly implement instruction semantics. "Helper" functions, in the
context of this patch, refers to functions which provide a manual TCG
implementation of certain features.
Signed-off-by: Alessandro Di Federico <ale@rev.ng>
Signed-off-by: Niccolò Izzo <nizzo@rev.ng>
Signed-off-by: Anton Johansson <anjo@rev.ng>
Signed-off-by: Taylor Simpson <tsimpson@quicinc.com>
Reviewed-by: Taylor Simpson <tsimpson@quicinc.com>
Message-Id: <20220923173831.227551-6-anjo@rev.ng>
Make certain helper functions non-static, making them available outside
genptr.c. These functions are required by code generated by the
idef-parser.
This commit also makes some functions in op_helper.c non-static in order
to avoid having them marked as unused when using the idef-parser
generated code.
Signed-off-by: Alessandro Di Federico <ale@rev.ng>
Signed-off-by: Paolo Montesel <babush@rev.ng>
Signed-off-by: Taylor Simpson <tsimpson@quicinc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Taylor Simpson <tsimpson@quicinc.com>
Message-Id: <20220923173831.227551-5-anjo@rev.ng>
Signed-off-by: Alessandro Di Federico <ale@rev.ng>
Signed-off-by: Paolo Montesel <babush@rev.ng>
Signed-off-by: Taylor Simpson <tsimpson@quicinc.com>
Acked-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Taylor Simpson <tsimpson@quicinc.com>
Message-Id: <20220923173831.227551-4-anjo@rev.ng>
Signed-off-by: Alessandro Di Federico <ale@rev.ng>
Signed-off-by: Anton Johansson <anjo@rev.ng>
Signed-off-by: Taylor Simpson <tsimpson@quicinc.com>
Reviewed-by: Taylor Simpson <tsimpson@quicinc.com>
Message-Id: <20220923173831.227551-3-anjo@rev.ng>
Direct block chaining is documented here
https://qemu.readthedocs.io/en/latest/devel/tcg.html#direct-block-chaining
Hexagon inner loops end with the endloop0 instruction.
To go back to the beginning of the loop, this instruction writes to PC
from register SA0 (start address 0). To use direct block chaining, we
have to assign PC a constant value, so we specialize the code
generation when the start of the translation block is equal to SA0.
When this is the case, we defer the compare/branch from endloop0 to
gen_end_tb, where we can assign the start address of the TB to PC.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Taylor Simpson <tsimpson@quicinc.com>
Message-Id: <20221108162906.3166-12-tsimpson@quicinc.com>
Direct block chaining is documented here
https://qemu.readthedocs.io/en/latest/devel/tcg.html#direct-block-chaining
Recall that Hexagon allows packets with multiple jumps where only the
first one with a true predicate will actually jump. We can use
tcg_gen_goto_tb/tcg_gen_exit_tb when the packet contains a single
PC-relative branch or jump. If not, we use tcg_gen_lookup_and_goto_ptr.
We add the following to DisasContext in order to delay the branching
until the end of packet commit (in gen_end_tb):
    branch_cond
        The TCGCond condition under which the branch is taken.
        When branch_cond == TCG_COND_NEVER, there isn't a single
        direct branch in this packet.
        When branch_cond != TCG_COND_ALWAYS, the value is in
        hex_branch_taken.
    branch_dest
        The destination of the branch.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Taylor Simpson <tsimpson@quicinc.com>
Message-Id: <20221108162906.3166-11-tsimpson@quicinc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Taylor Simpson <tsimpson@quicinc.com>
Message-Id: <20221108162906.3166-10-tsimpson@quicinc.com>
Acked-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Taylor Simpson <tsimpson@quicinc.com>
Message-Id: <20221108162906.3166-9-tsimpson@quicinc.com>
The imported files don't properly mark all CONDEXEC instructions, so
we add some logic to hex_common.py to add the attribute.
Acked-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Taylor Simpson <tsimpson@quicinc.com>
Message-Id: <20221108162906.3166-7-tsimpson@quicinc.com>
Add pc field to Packet structure
For helpers that need PC, pass an extra argument
Remove slot arg from conditional jump helpers
On a trap0, copy pkt->pc into hex_gpr[HEX_REG_PC]
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Taylor Simpson <tsimpson@quicinc.com>
Message-Id: <20221108162906.3166-6-tsimpson@quicinc.com>
When a packet has more than one change-of-flow instruction, only the first
one to branch is considered. We use the branch_taken variable to keep
track of this.
However, when there is a single cof instruction, we don't need the same
amount of bookkeeping.
We add the pkt_has_multi_cof member to the Packet structure, and pass this
information to the needed functions.
When there is a generated helper function with cof, the generator will
pass this pkt_has_multi_cof as a runtime value.
Acked-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Taylor Simpson <tsimpson@quicinc.com>
Message-Id: <20221108162906.3166-5-tsimpson@quicinc.com>
These instructions will not be generated by idef-parser, so we override
them manually.
Test cases added to tests/tcg/hexagon/usr.c
Co-authored-by: Matheus Tavares Bernardino <quic_mathbern@quicinc.com>
Signed-off-by: Matheus Tavares Bernardino <quic_mathbern@quicinc.com>
Signed-off-by: Taylor Simpson <tsimpson@quicinc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221108162906.3166-4-tsimpson@quicinc.com>
Here are example instructions with a predicated .tmp/.cur assignment
if (p1) v12.tmp = vmem(r7 + #0)
if (p0) v12.cur = vmem(r9 + #0)
The .tmp/.cur indicates that references to v12 in the same packet
take the result of the load. However, when the predicate is false,
the value at the start of the packet should be used. After the packet
commits, the .tmp value is dropped, but the .cur value is maintained.
To fix this bug, we preload the original value from the HVX register
into the temporary used for the result.
Test cases added to tests/tcg/hexagon/hvx_misc.c
Acked-by: Richard Henderson <richard.henderson@linaro.org>
Co-authored-by: Matheus Tavares Bernardino <quic_mathbern@quicinc.com>
Signed-off-by: Matheus Tavares Bernardino <quic_mathbern@quicinc.com>
Signed-off-by: Taylor Simpson <tsimpson@quicinc.com>
Message-Id: <20221108162906.3166-3-tsimpson@quicinc.com>
This enables us to reduce the number of parameters to many functions.
In particular, the generated functions previously took all 3 as arguments.
Not only does this simplify the code, it improves the translation time.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Taylor Simpson <tsimpson@quicinc.com>
Message-Id: <20221108162906.3166-2-tsimpson@quicinc.com>
Convert the xtensa CPU class to use 3-phase reset, so it doesn't
need to use device_class_set_parent_reset() any more.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Edgar E. Iglesias <edgar@zeroasic.com>
Reviewed-by: Taylor Simpson <tsimpson@quicinc.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Message-id: 20221124115023.2437291-20-peter.maydell@linaro.org
Convert the tricore CPU class to use 3-phase reset, so it doesn't
need to use device_class_set_parent_reset() any more.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Edgar E. Iglesias <edgar@zeroasic.com>
Reviewed-by: Taylor Simpson <tsimpson@quicinc.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Message-id: 20221124115023.2437291-19-peter.maydell@linaro.org
Convert the sparc CPU class to use 3-phase reset, so it doesn't
need to use device_class_set_parent_reset() any more.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Edgar E. Iglesias <edgar@zeroasic.com>
Reviewed-by: Taylor Simpson <tsimpson@quicinc.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Message-id: 20221124115023.2437291-18-peter.maydell@linaro.org
Convert the sh4 CPU class to use 3-phase reset, so it doesn't
need to use device_class_set_parent_reset() any more.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Edgar E. Iglesias <edgar@zeroasic.com>
Reviewed-by: Taylor Simpson <tsimpson@quicinc.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
Message-id: 20221124115023.2437291-17-peter.maydell@linaro.org
Convert the rx CPU class to use 3-phase reset, so it doesn't
need to use device_class_set_parent_reset() any more.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Edgar E. Iglesias <edgar@zeroasic.com>
Reviewed-by: Taylor Simpson <tsimpson@quicinc.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
Message-id: 20221124115023.2437291-16-peter.maydell@linaro.org
Convert the riscv CPU class to use 3-phase reset, so it doesn't
need to use device_class_set_parent_reset() any more.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Edgar E. Iglesias <edgar@zeroasic.com>
Reviewed-by: Taylor Simpson <tsimpson@quicinc.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
Message-id: 20221124115023.2437291-15-peter.maydell@linaro.org
Convert the ppc CPU class to use 3-phase reset, so it doesn't
need to use device_class_set_parent_reset() any more.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Edgar E. Iglesias <edgar@zeroasic.com>
Reviewed-by: Taylor Simpson <tsimpson@quicinc.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
Message-id: 20221124115023.2437291-14-peter.maydell@linaro.org
Convert the openrisc CPU class to use 3-phase reset, so it doesn't
need to use device_class_set_parent_reset() any more.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Edgar E. Iglesias <edgar@zeroasic.com>
Reviewed-by: Taylor Simpson <tsimpson@quicinc.com>
Message-id: 20221124115023.2437291-13-peter.maydell@linaro.org
Convert the nios2 CPU class to use 3-phase reset, so it doesn't
need to use device_class_set_parent_reset() any more.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Edgar E. Iglesias <edgar@zeroasic.com>
Reviewed-by: Taylor Simpson <tsimpson@quicinc.com>
Message-id: 20221124115023.2437291-12-peter.maydell@linaro.org
Convert the mips CPU class to use 3-phase reset, so it doesn't
need to use device_class_set_parent_reset() any more.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Edgar E. Iglesias <edgar@zeroasic.com>
Reviewed-by: Taylor Simpson <tsimpson@quicinc.com>
Message-id: 20221124115023.2437291-11-peter.maydell@linaro.org
Convert the microblaze CPU class to use 3-phase reset, so it doesn't
need to use device_class_set_parent_reset() any more.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Edgar E. Iglesias <edgar@zeroasic.com>
Reviewed-by: Taylor Simpson <tsimpson@quicinc.com>
Message-id: 20221124115023.2437291-10-peter.maydell@linaro.org
Convert the m68k CPU class to use 3-phase reset, so it doesn't
need to use device_class_set_parent_reset() any more.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Edgar E. Iglesias <edgar@zeroasic.com>
Reviewed-by: Taylor Simpson <tsimpson@quicinc.com>
Message-id: 20221124115023.2437291-9-peter.maydell@linaro.org
Convert the loongarch CPU class to use 3-phase reset, so it doesn't
need to use device_class_set_parent_reset() any more.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Edgar E. Iglesias <edgar@zeroasic.com>
Reviewed-by: Taylor Simpson <tsimpson@quicinc.com>
Message-id: 20221124115023.2437291-8-peter.maydell@linaro.org
Convert the i386 CPU class to use 3-phase reset, so it doesn't
need to use device_class_set_parent_reset() any more.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Edgar E. Iglesias <edgar@zeroasic.com>
Reviewed-by: Taylor Simpson <tsimpson@quicinc.com>
Message-id: 20221124115023.2437291-7-peter.maydell@linaro.org
Convert the hexagon CPU class to use 3-phase reset, so it doesn't
need to use device_class_set_parent_reset() any more.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Edgar E. Iglesias <edgar@zeroasic.com>
Reviewed-by: Taylor Simpson <tsimpson@quicinc.com>
Message-id: 20221124115023.2437291-6-peter.maydell@linaro.org
Convert the cris CPU class to use 3-phase reset, so it doesn't
need to use device_class_set_parent_reset() any more.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Edgar E. Iglesias <edgar@zeroasic.com>
Message-id: 20221124115023.2437291-5-peter.maydell@linaro.org
Convert the avr CPU class to use 3-phase reset, so it doesn't
need to use device_class_set_parent_reset() any more.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Message-id: 20221124115023.2437291-4-peter.maydell@linaro.org
Convert the Arm CPU class to use 3-phase reset, so it doesn't
need to use device_class_set_parent_reset() any more.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Message-id: 20221124115023.2437291-3-peter.maydell@linaro.org
* Fix emulated MVCP and MVCS s390x instructions
* Clean-ups for the e1000e qtest
* Enable qtests on Windows
* Update FreeBSD CI to version 12.4
* Check --disable-tcg for ppc64 in the CI
* Improve scripts/make-releases a little bit
* Many other misc small clean-ups and fixes here and there
-----BEGIN PGP SIGNATURE-----
iQJFBAABCAAvFiEEJ7iIR+7gJQEY8+q5LtnXdP5wLbUFAmObN8gRHHRodXRoQHJl
ZGhhdC5jb20ACgkQLtnXdP5wLbVzWQ/+OzpXQODK/Xp0UUwre4kxm1Ju15tHdIPr
lb5GqxcxZQay2hH1OIr5RjCEODDCzO+uPPq9VPh+a8qrx+nypcJDXlNDX4y9e7yK
wz8pZ8G8E6OmflHjojbKESTUbdDkCY3c+Ge86Jdot0UMtvwSkmU8VZ9EiFRW+9MF
i6NDgEhpHeiur8834+7BMyYc5YDE5TPaNsbzpSulKZQUsPJ6gntKwA15YPgZobXX
s2CnCV6E48zUaxhtZs5plFQKIkUS2TAl15Zvr4OVKjAXpSk86ysBUWKZjZDQup/l
G6dh8yF1Y724l4oq7rpVcEG02vXQSlsUIZlpUWWmGyNzt6npJ71aWQbPN2we9C1j
pa3hZDak/g9B8wDlWJQMYNcKKHUtp6LW+qqJxxujKUjzGlsXWgflpBjeXBH8TE3z
vAi+iTXsQJSEwvhMboHzogZjLpyG1PXcrlksRyRPvgKyW00CzVTjclz58Yd+m/cm
Bfc2T/PWr+OsxYXRZBZPu0RebQW1s3XnG0bx8jctcEty3ZsPs0eyPNvYTkT+sEnl
e1BkXN6C1ImB214psH4mDzgtQx0/zz4vVMH+4JeeX5Wrfmc+FTjr083oiWsFJRsn
JE+CkRbK74iN3fY6HhH9XiD60OwVMOgNe/NMhjH9eIokez8D15lOeKFMuR1vPwqi
Rcuip8uENZk=
=u/jz
-----END PGP SIGNATURE-----
Merge tag 'pull-request-2022-12-15' of https://gitlab.com/thuth/qemu into staging
* s390x PCI fixes and improvements (for the ISM device)
* Fix emulated MVCP and MVCS s390x instructions
* Clean-ups for the e1000e qtest
* Enable qtests on Windows
* Update FreeBSD CI to version 12.4
* Check --disable-tcg for ppc64 in the CI
* Improve scripts/make-releases a little bit
* Many other misc small clean-ups and fixes here and there
# gpg: Signature made Thu 15 Dec 2022 15:05:44 GMT
# gpg: using RSA key 27B88847EEE0250118F3EAB92ED9D774FE702DB5
# gpg: issuer "thuth@redhat.com"
# gpg: Good signature from "Thomas Huth <th.huth@gmx.de>" [full]
# gpg: aka "Thomas Huth <thuth@redhat.com>" [full]
# gpg: aka "Thomas Huth <huth@tuxfamily.org>" [full]
# gpg: aka "Thomas Huth <th.huth@posteo.de>" [unknown]
# Primary key fingerprint: 27B8 8847 EEE0 2501 18F3 EAB9 2ED9 D774 FE70 2DB5
* tag 'pull-request-2022-12-15' of https://gitlab.com/thuth/qemu: (23 commits)
tests/qtest/vhost-user-blk-test: don't abort all qtests on missing envar
.gitlab/issue_templates: Move suggestions into comments
gitlab-ci: Check building ppc64 without TCG
FreeBSD: Upgrade to 12.4 release
tests/qtest: Enable qtest build on Windows
.gitlab-ci.d/windows.yml: Exclude qTests from 64-bit CI job for now
.gitlab-ci.d/windows.yml: Keep 64-bit and 32-bit build scripts consistent
.gitlab-ci.d/windows.yml: Unify the prerequisite packages
tests/qtest/libqos/e1000e: Correctly group register accesses
tests/qtest/e1000e-test: De-duplicate constants
tests/qtest/libqos/e1000e: Remove "other" interrupts
hw: Include the VMWare devices only in the x86 targets
MAINTAINERS: Add documentation files to the corresponding sections
util/oslib-win32: Remove obsolete reference to g_poll code
util/qemu-config: Fix "query-command-line-options" to provide the right values
scripts/make-release: Only clone single branches to speed up the script
scripts/make-release: Add a simple help text for the script
monitor/misc: Remove superfluous include statements
target/s390x: The MVCP and MVCS instructions are not privileged
target/s390x/tcg/mem_helper: Test the right bits in psw_key_valid()
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* hw/arm/virt: Add properties to allow more granular
configuration of use of highmem space
* target/arm: Add Cortex-A55 CPU
* hw/intc/arm_gicv3: Fix GICD_TYPER ITLinesNumber advertisement
* Implement FEAT_EVT
* Some 3-phase-reset conversions for Arm GIC, SMMU
* hw/arm/boot: set initrd with #address-cells type in fdt
* hw/misc: Move some arm-related files from specific_ss into softmmu_ss
* Restrict arm_cpu_exec_interrupt() to TCG accelerator
-----BEGIN PGP SIGNATURE-----
iQJNBAABCAA3FiEE4aXFk81BneKOgxXPPCUl7RQ2DN4FAmObW5wZHHBldGVyLm1h
eWRlbGxAbGluYXJvLm9yZwAKCRA8JSXtFDYM3nWUD/4xKR9eAX9NTgASnvW3MffZ
GMkCW8TVFJdJy7rNq7TvPfdBEieVwrAGUmsTpkqN41/CHIIwIfssLzq6k1iliCOa
/BrqA6p/vff+gRfV0rEgy7i/VsHmWzD5TU6qR/Jm899HA1ZsHGUFHzjs5p+V9Uzz
r3awYTYsYbmuUgeoCaAQhbsyLCRmSi7gVbxeEc6UZst36xZEVwG8tPTwnhZUk55E
KkZkSPvGiQ+1VKeGnL2yvjGS8blZnJL55oJ7wI7yAypCxy6gudB1HGegaNPnNcxf
9lwph5sYaXbc0vyyAK3EJ+KUaJV9ZDU/ipjt3o88egYSYlB40IRVH1d+3ckZKSdG
EBlD2vIx4vHjaIUXorB9U6BuktBsPtDmJ6jEsTT+cbb8roGDMB/opDZMNQB4WMIU
kVEGZg7o3qkNDruAQOjakz8kOLq1J/ZKXBkLKWAsTTzdfMlnOMxAbFuFMxut7a+q
JULMPjX9dBJDhsjE841QkAbYst1phDg8hCc00JB6Rbrl39jr4GZWfvaLKnci17ts
nskPa/ElXvQj5WLhNZmn6vAco8mtGeDHzSNPnAoCCcDHd+hwfXkXKkweWZzzAr3f
/PprDZy4hrksYqyYa9YMMQjHOlXQrJs5GntptnZwQQR2hhuQcCnBSNKxF+G6tk+K
7Q1FjpE2d8NvL8Ar+WaX8w==
=YAjF
-----END PGP SIGNATURE-----
Merge tag 'pull-target-arm-20221215-1' of https://git.linaro.org/people/pmaydell/qemu-arm into staging
target-arm queue:
* hw/arm/virt: Add properties to allow more granular
configuration of use of highmem space
* target/arm: Add Cortex-A55 CPU
* hw/intc/arm_gicv3: Fix GICD_TYPER ITLinesNumber advertisement
* Implement FEAT_EVT
* Some 3-phase-reset conversions for Arm GIC, SMMU
* hw/arm/boot: set initrd with #address-cells type in fdt
* hw/misc: Move some arm-related files from specific_ss into softmmu_ss
* Restrict arm_cpu_exec_interrupt() to TCG accelerator
# gpg: Signature made Thu 15 Dec 2022 17:38:36 GMT
# gpg: using RSA key E1A5C593CD419DE28E8315CF3C2525ED14360CDE
# gpg: issuer "peter.maydell@linaro.org"
# gpg: Good signature from "Peter Maydell <peter.maydell@linaro.org>" [ultimate]
# gpg: aka "Peter Maydell <pmaydell@gmail.com>" [ultimate]
# gpg: aka "Peter Maydell <pmaydell@chiark.greenend.org.uk>" [ultimate]
# gpg: aka "Peter Maydell <peter@archaic.org.uk>" [ultimate]
# Primary key fingerprint: E1A5 C593 CD41 9DE2 8E83 15CF 3C25 25ED 1436 0CDE
* tag 'pull-target-arm-20221215-1' of https://git.linaro.org/people/pmaydell/qemu-arm: (28 commits)
target/arm: Restrict arm_cpu_exec_interrupt() to TCG accelerator
hw/misc: Move some arm-related files from specific_ss into softmmu_ss
hw/arm/boot: set initrd with #address-cells type in fdt
hw/intc: Convert TYPE_KVM_ARM_ITS to 3-phase reset
hw/intc: Convert TYPE_ARM_GICV3_ITS to 3-phase reset
hw/intc: Convert TYPE_ARM_GICV3_ITS_COMMON to 3-phase reset
hw/intc: Convert TYPE_KVM_ARM_GICV3 to 3-phase reset
hw/intc: Convert TYPE_ARM_GICV3_COMMON to 3-phase reset
hw/intc: Convert TYPE_ARM_GIC_KVM to 3-phase reset
hw/intc: Convert TYPE_ARM_GIC_COMMON to 3-phase reset
hw/arm: Convert TYPE_ARM_SMMUV3 to 3-phase reset
hw/arm: Convert TYPE_ARM_SMMU to 3-phase reset
target/arm: Report FEAT_EVT for TCG '-cpu max'
target/arm: Implement HCR_EL2.TID4 traps
target/arm: Implement HCR_EL2.TICAB,TOCU traps
target/arm: Implement HCR_EL2.TTLBOS traps
target/arm: Implement HCR_EL2.TTLBIS traps
target/arm: Allow relevant HCR bits to be written for FEAT_EVT
hw/intc/arm_gicv3: Fix GICD_TYPER ITLinesNumber advertisement
target/arm: Add Cortex-A55 CPU
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
When building with --disable-tcg on Darwin we get:
target/arm/cpu.c:725:16: error: incomplete definition of type 'struct TCGCPUOps'
cc->tcg_ops->do_interrupt(cs);
~~~~~~~~~~~^
Commit 083afd18a9 ("target/arm: Restrict cpu_exec_interrupt()
handler to sysemu") limited this block to system emulation,
but neglected to also limit it to TCG.
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Fabiano Rosas <farosas@suse.de>
Message-id: 20221209110823.59495-1-philmd@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The header target/arm/kvm-consts.h checks CONFIG_KVM which is marked as
poisoned in common code, so the files that include this header have to
be added to specific_ss and recompiled for each, qemu-system-arm and
qemu-system-aarch64. However, since the kvm headers are only optionally
used in kvm-consts.h for some sanity checks, we can additionally
check the NEED_CPU_H macro first to avoid the poisoned CONFIG_KVM macro,
so kvm-consts.h can also be used from "common" files (without the
sanity checks - which should be OK since they are still done from other
target-specific files instead). This way, and by adjusting some other
include statements in the related files here and there, we can move some
files from specific_ss into softmmu_ss, so that they only need to be
compiled once during the build process.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20221202154023.293614-1-thuth@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The "MOVE TO PRIMARY/SECONDARY" instructions can also be called
from problem state. We should just properly check whether the
secondary-space access key is valid here, too, and inject a
privileged program exception if it is invalid.
Message-Id: <20221205125852.81848-1-thuth@redhat.com>
Reviewed-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
The PSW key mask is a 16 bit field, and the psw_key variable is
in the range from 0 to 15, so it does not make sense to use
"0x80 >> psw_key" for testing the bits here. We should use 0x8000
instead.
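For illustration, a standalone model of the corrected test (the helper name and the main() harness are made up for this example; only the mask arithmetic is the point):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* The PSW key mask is 16 bits wide and psw_key is 0..15, so select the
     * bit with 0x8000 >> psw_key; 0x80 >> psw_key can never match keys 8..15. */
    static bool psw_key_valid_model(uint16_t pkm, uint8_t psw_key)
    {
        return (pkm & (0x8000 >> psw_key)) != 0;
    }

    int main(void)
    {
        printf("key 15, mask 0x0001: %d\n", psw_key_valid_model(0x0001, 15)); /* 1 */
        printf("key 0,  mask 0x0001: %d\n", psw_key_valid_model(0x0001, 0));  /* 0 */
        return 0;
    }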
Message-Id: <20221205142043.95185-1-thuth@redhat.com>
Reviewed-by: Nina Schoetterl-Glausch <nsg@linux.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Update the ID registers for TCG's '-cpu max' to report the
FEAT_EVT Enhanced Virtualization Traps support.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
For FEAT_EVT, the HCR_EL2.TID4 trap allows trapping of the cache ID
registers CCSIDR_EL1, CCSIDR2_EL1, CLIDR_EL1 and CSSELR_EL1 (and
their AArch32 equivalents). This is a subset of the registers
trapped by HCR_EL2.TID2, which includes all of these and also the
CTR_EL0 register.
Our implementation already uses a separate access function for
CTR_EL0 (ctr_el0_access()), so all of the registers currently using
access_aa64_tid2() should also be checking TID4. Make that function
check both TID2 and TID4, and rename it appropriately.
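As a rough standalone model of the combined check (the bit positions and the helper name are illustrative; the real access function takes CPUARMState and uses the effective HCR_EL2 value):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define HCR_TID2 (1ULL << 17)   /* illustrative bit positions */
    #define HCR_TID4 (1ULL << 49)

    /* Trap EL1 accesses to the cache ID registers to EL2 if either bit is set. */
    static bool trap_cache_id_regs(int current_el, uint64_t hcr_el2)
    {
        return current_el == 1 && (hcr_el2 & (HCR_TID2 | HCR_TID4)) != 0;
    }

    int main(void)
    {
        printf("EL1, TID4 set: %d\n", trap_cache_id_regs(1, HCR_TID4)); /* 1: trap */
        printf("EL2, TID4 set: %d\n", trap_cache_id_regs(2, HCR_TID4)); /* 0: no trap */
        return 0;
    }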
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
For FEAT_EVT, the HCR_EL2.TICAB bit allows trapping of the ICIALLUIS
and IC IALLUIS cache maintenance instructions.
The HCR_EL2.TOCU bit traps all the other cache maintenance
instructions that operate to the point of unification:
AArch64 IC IVAU, IC IALLU, DC CVAU
AArch32 ICIMVAU, ICIALLU, DCCMVAU
The two trap bits between them cover all of the cache maintenance
instructions which must also check the HCR_TPU flag. Turn the old
aa64_cacheop_pou_access() function into a helper function which takes
the set of HCR_EL2 flags to check as an argument, and call it from
new access_ticab() and access_tocu() functions as appropriate for
each cache op.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
For FEAT_EVT, the HCR_EL2.TTLBOS bit allows trapping on EL1
use of TLB maintenance instructions that operate on the
outer shareable domain:
TLBI VMALLE1OS, TLBI VAE1OS, TLBI ASIDE1OS,TLBI VAAE1OS,
TLBI VALE1OS, TLBI VAALE1OS, TLBI RVAE1OS, TLBI RVAAE1OS,
TLBI RVALE1OS, and TLBI RVAALE1OS.
(There are no AArch32 outer-shareable TLB maintenance ops.)
Implement the trapping.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
For FEAT_EVT, the HCR_EL2.TTLBIS bit allows trapping on EL1 use of
TLB maintenance instructions that operate on the inner shareable
domain:
AArch64:
TLBI VMALLE1IS, TLBI VAE1IS, TLBI ASIDE1IS, TLBI VAAE1IS,
TLBI VALE1IS, TLBI VAALE1IS, TLBI RVAE1IS, TLBI RVAAE1IS,
TLBI RVALE1IS, and TLBI RVAALE1IS.
AArch32:
TLBIALLIS, TLBIMVAIS, TLBIASIDIS, TLBIMVAAIS, TLBIMVALIS,
and TLBIMVAALIS.
Add the trapping support.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
FEAT_EVT adds five new bits to the HCR_EL2 register: TTLBIS, TTLBOS,
TICAB, TOCU and TID4. These allow the guest to enable trapping of
various EL1 instructions to EL2. In this commit, add the necessary
code to allow the guest to set these bits if the feature is present;
because the bit is always zero when the feature isn't present we
won't need to use explicit feature checks in the "trap on condition"
tests in the following commits.
Note that although full implementation of the feature (mandatory from
Armv8.5 onward) requires all five trap bits, the ID registers permit
a value indicating that only TICAB, TOCU and TID4 are implemented,
which might be the case for CPUs between Armv8.2 and Armv8.5.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
The Cortex-A55 is one of the newer armv8.2+ CPUs; in particular
it supports the Privileged Access Never (PAN) feature. Add
a model of this CPU, so you can use a CPU type on the virt
board that models a specific real hardware CPU, rather than
having to use the QEMU-specific "max" CPU type.
Signed-off-by: Timofey Kutergin <tkutergin@gmail.com>
Message-id: 20221121150819.2782817-1-tkutergin@gmail.com
[PMM: tweaked commit message]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-----BEGIN PGP SIGNATURE-----
iQJGBAABCAAwFiEENUvIs9frKmtoZ05fOHC0AOuRhlMFAmOZ6lYSHGFybWJydUBy
ZWRoYXQuY29tAAoJEDhwtADrkYZT6VEQAKynjWh3AIZ4/qOgrVqsP0oRspevLmfH
BbuGoldjYpEE7RbwuCaZalZ7iy7TcSySxnPfUDVsFHd7NWffJVjwKHifGC0D/Ez0
+Ggyb1CBebN+mS7t+BNFUHdMM+wxFIlHwg4f4aTFbn2o0HKgj2a8tcNzNRonZbfa
xURnvbD4G4u0VZEc3Jak+x193xbOJFsuuWq0BZnDuNk+XqjyW2RwfpXLPJVk+82a
4uy/YgYuqXUqBeULwcJj+shBL4SXR9GyajTFMS64przSUle0ADUmXkPtaS2agV7e
Pym/UQuAcxvNyw34fJsiMZxx6rZI9YU30jQUMRLoYcPRR/Q/aiPeiiHtiD6Kaid7
IfOeH/EArXaQRFpD89xj4YcaTnRLQOEj0NXgXvAbQf6eD8JYyao/S/0lCsPZEoA2
nibLqEQ25ncDNXoSomuwtfjVff3w68lODFbhwqfA0gf3cPtCgVZ6xQ8P/McNY6K6
wqFHXMWTDHk1LOCTucjYz1z2TGzTnSG4iWi5Yt6FSxAc958AO+v5ALn/1pcYun+E
azM/MF0AInKj2aJCT530zT0tpCs/Jo07YKC8k6ubi77S0ZdmGS1XLeXkRXfk1+yI
OhuUgiVlSTHxD69DagT2vbnx1mDMM9X+OBIMvEi5nwvD9A/ghaCgkDeGFvbA1ud0
t0mxPBZJ+tiZ
=JJjG
-----END PGP SIGNATURE-----
Merge tag 'pull-misc-2022-12-14' of https://repo.or.cz/qemu/armbru into staging
Miscellaneous patches for 2022-12-14
# gpg: Signature made Wed 14 Dec 2022 15:23:02 GMT
# gpg: using RSA key 354BC8B3D7EB2A6B68674E5F3870B400EB918653
# gpg: issuer "armbru@redhat.com"
# gpg: Good signature from "Markus Armbruster <armbru@redhat.com>" [full]
# gpg: aka "Markus Armbruster <armbru@pond.sub.org>" [full]
# Primary key fingerprint: 354B C8B3 D7EB 2A6B 6867 4E5F 3870 B400 EB91 8653
* tag 'pull-misc-2022-12-14' of https://repo.or.cz/qemu/armbru:
ppc4xx_sdram: Simplify sdram_ddr_size() to return
block/vmdk: Simplify vmdk_co_create() to return directly
cleanup: Tweak and re-run return_directly.cocci
io: Tidy up fat-fingered parameter name
qapi: Use returned bool to check for failure (again)
sockets: Use ERRP_GUARD() where obviously appropriate
qemu-config: Use ERRP_GUARD() where obviously appropriate
qemu-config: Make config_parse_qdict() return bool
monitor: Use ERRP_GUARD() in monitor_init()
monitor: Simplify monitor_fd_param()'s error handling
error: Move ERRP_GUARD() to the beginning of the function
error: Drop a few superfluous ERRP_GUARD()
error: Drop some obviously superfluous error_propagate()
Drop more useless casts from void * to pointer
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The has_FOO for pointer-valued FOO are redundant, except for arrays.
They are also a nuisance to work with. Recent commit "qapi: Start to
elide redundant has_FOO in generated C" provided the means to elide
them step by step. This is the step for qapi/machine*.json.
Said commit explains the transformation in more detail. The invariant
violations mentioned there do not occur here.
Cc: Eduardo Habkost <eduardo@habkost.net>
Cc: Marcel Apfelbaum <marcel.apfelbaum@gmail.com>
Cc: Philippe Mathieu-Daudé <f4bug@amsat.org>
Cc: Yanan Wang <wangyanan55@huawei.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20221104160712.3005652-16-armbru@redhat.com>
Tweak the semantic patch to drop redundant parenthesis around the
return expression.
Coccinelle drops a comment in hw/rdma/vmw/pvrdma_cmd.c; restored
manually.
Coccinelle messes up vmdk_co_create(), not sure why. Change dropped,
will be done manually in the next commit.
Line breaks in target/avr/cpu.h and hw/rdma/vmw/pvrdma_cmd.c tidied up
manually.
Whitespace in tools/virtiofsd/fuse_lowlevel.c tidied up manually.
checkpatch.pl complains "return of an errno should typically be -ve"
two times for hw/9pfs/9p-synth.c. Preexisting, the patch merely makes
it visible to checkpatch.pl.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20221122134917.1217307-2-armbru@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Acked-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Commit 012d4c96e2 changed the visitor functions taking Error ** to
return bool instead of void, and the commits following it used the new
return value to simplify error checking. Since then a few more uses
in need of the same treatment crept in. Do that. All pretty
mechanical except for
* balloon_stats_get_all()
This is basically the same transformation commit 012d4c96e2 applied
to the virtual walk example in include/qapi/visitor.h.
* set_max_queue_size()
Additionally replace "goto end of function" by return.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20221121085054.683122-10-armbru@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
The SET ADDRESS SPACE CONTROL FAST instruction is not privileged, it can be
used from problem space, too. Just the switching to the home address space
is privileged and should still generate a privilege exception. This bug is
e.g. causing programs like Java that use the "getcpu" vdso kernel function
to crash (see https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=990417#26 ).
While we're at it, also check if DAT is not enabled. In that case the
instruction is supposed to generate a special operation exception.
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/655
Message-Id: <20221201184443.136355-1-thuth@redhat.com>
Reviewed-by: Ilya Leoshkevich <iii@linux.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Thomas Huth <thuth@redhat.com>
In get_physical_address, the canonical address check failed to
set TranslateFault.stage2, which resulted in an uninitialized
read from the struct when reporting the fault in x86_cpu_tlb_fill.
Adjust all error paths to use structure assignment so that the
entire struct is always initialized.
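For illustration, a minimal standalone sketch of the pattern the fix relies on (the struct below is a stand-in, not the real TranslateFault layout): assigning a compound literal to the whole struct zero-initializes every member that is not named explicitly.

    #include <stdbool.h>
    #include <stdio.h>

    typedef struct {
        int exception_index;
        int error_code;
        bool stage2;         /* stand-in fields, not the actual layout */
    } Fault;

    static void report_fault(Fault *err)
    {
        /* structure assignment: .stage2 is not named, so it becomes false
         * instead of keeping whatever garbage was in the caller's variable */
        *err = (Fault) {
            .exception_index = 14,
            .error_code = 2,
        };
    }

    int main(void)
    {
        Fault f = { .stage2 = true };   /* simulate stale stack contents */
        report_fault(&f);
        printf("stage2 = %d\n", f.stage2);   /* always 0 now */
        return 0;
    }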
Reported-by: Daniel Hoffman <dhoff749@gmail.com>
Fixes: 9bbcf37219 ("target/i386: Reorg GET_HPHYS")
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221201074522.178498-1-richard.henderson@linaro.org>
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1324
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
MMX state is saved/restored by FSAVE/FRSTOR so the instructions are
not illegal opcodes even if CR4.OSFXSR=0. Make sure that validate_vex
takes into account the prefix and only checks HF_OSFXSR_MASK in the
presence of an SSE instruction.
Fixes: 20581aadec ("target/i386: validate VEX prefixes via the instructions' exception classes", 2022-10-18)
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1350
Reported-by: Helge Konetzka (@hejko on gitlab.com)
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The LPA2 extension implements 52-bit virtual addressing for 4k and 16k
translation granules, and for the former, this means an additional level
of translation is needed. This means we start counting at -1 instead of
0 when doing a walk, and so 'level' is now a signed quantity, and should
be typed as such. So turn it from uint32_t into int32_t.
This avoids a level of -1 getting misinterpreted as being >= 3, and
terminating a page table walk prematurely with a bogus output address.
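A standalone illustration of the failure mode (plain C, not the QEMU walker): with an unsigned type a starting level of -1 wraps around and compares as >= 3.

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t level_unsigned = (uint32_t)-1;   /* wraps to 0xffffffff */
        int32_t  level_signed   = -1;             /* stays -1 */

        printf("unsigned: level >= 3 ? %d\n", level_unsigned >= 3); /* 1: bogus abort */
        printf("signed:   level >= 3 ? %d\n", level_signed >= 3);   /* 0: extra level walked */
        return 0;
    }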
Cc: Peter Maydell <peter.maydell@linaro.org>
Cc: Philippe Mathieu-Daudé <f4bug@amsat.org>
Cc: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
In get_phys_addr_with_struct(), we call get_phys_addr_twostage() if
the CPU supports EL2. However, we don't check here that stage 2 is
actually enabled. Instead we only check that inside
get_phys_addr_twostage() to skip stage 2 translation. This means
that even if stage 2 is disabled we still tell the stage 1 lookup to
do its page table walks via stage 2.
This works by luck for normal CPU accesses, but it breaks for debug
accesses, which are used by the disassembler and also by semihosting
file reads and writes, because the debug case takes a different code
path inside S1_ptw_translate().
This means that setups that use semihosting for file loads are broken
(a regression since 7.1, introduced in recent ptw refactoring), and
that sometimes disassembly in debug logs reports "unable to read
memory" rather than showing the guest insns.
Fix the bug by hoisting the "is stage 2 enabled?" check up to
get_phys_addr_with_struct(), so that we handle S2 disabled the same
way we do the "no EL2" case, with a simple single stage lookup.
Reported-by: Jens Wiklander <jens.wiklander@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20221121212404.1450382-1-peter.maydell@linaro.org
With LPA2, the effective output address size is at most 48 bits when
TCR.DS == 0. This case is currently unhandled in the page table walker,
where we happily assume LVA/64k granule when outputsize > 48 and
param.ds == 0, resulting in the wrong conversion to be used from a
page table descriptor to a physical address.
    if (outputsize > 48) {
        if (param.ds) {
            descaddr |= extract64(descriptor, 8, 2) << 50;
        } else {
            descaddr |= extract64(descriptor, 12, 4) << 48;
        }
    }
So cap the outputsize to 48 when TCR.DS is cleared, as per the
architecture.
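A standalone sketch of the resulting cap (variable names follow the snippet above; the surrounding harness is invented for the example):

    #include <stdio.h>

    int main(void)
    {
        int outputsize = 52;   /* e.g. PA size advertised by the CPU */
        int param_ds = 0;      /* TCR.DS clear: 52-bit descriptor format not in use */

        /* without TCR.DS the effective output size is at most 48 bits, so
         * clamp it before the descriptor-to-PA conversion quoted above */
        if (!param_ds && outputsize > 48) {
            outputsize = 48;
        }
        printf("effective outputsize = %d\n", outputsize);   /* 48 */
        return 0;
    }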
Cc: Peter Maydell <peter.maydell@linaro.org>
Cc: Philippe Mathieu-Daudé <f4bug@amsat.org>
Cc: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221116170316.259695-1-ardb@kernel.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Short queue with a build regression fix when using --disable-tcg.
-----BEGIN PGP SIGNATURE-----
iHUEABYKAB0WIQQX6/+ZI9AYAK8oOBk82cqW3gMxZAUCY3ZP3wAKCRA82cqW3gMx
ZB7uAPwN3C8R0YeL+NR44ZlzJVM75TjT1sDYD3oZjEraIfAz9AD7BKRNMPXmmF3A
sIHlbhVWquFU07ZDq5UmMfa2WudFZgc=
=z2+u
-----END PGP SIGNATURE-----
Merge tag 'pull-ppc-20221117' of https://gitlab.com/danielhb/qemu into staging
ppc patch queue for 2022-11-17:
Short queue with a build regression fix when using --disable-tcg.
# -----BEGIN PGP SIGNATURE-----
#
# iHUEABYKAB0WIQQX6/+ZI9AYAK8oOBk82cqW3gMxZAUCY3ZP3wAKCRA82cqW3gMx
# ZB7uAPwN3C8R0YeL+NR44ZlzJVM75TjT1sDYD3oZjEraIfAz9AD7BKRNMPXmmF3A
# sIHlbhVWquFU07ZDq5UmMfa2WudFZgc=
# =z2+u
# -----END PGP SIGNATURE-----
# gpg: Signature made Thu 17 Nov 2022 10:14:39 EST
# gpg: using EDDSA key 17EBFF9923D01800AF2838193CD9CA96DE033164
# gpg: Good signature from "Daniel Henrique Barboza <danielhb413@gmail.com>" [unknown]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 17EB FF99 23D0 1800 AF28 3819 3CD9 CA96 DE03 3164
* tag 'pull-ppc-20221117' of https://gitlab.com/danielhb/qemu:
target/ppc: Fix build warnings when building with 'disable-tcg'
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Kowshik reported that building qemu with GCC 12.2.1 for 'ppc64-softmmu'
target is failing due to the following build warnings:
<snip>
../target/ppc/cpu_init.c:7018:13: error: 'ppc_restore_state_to_opc' defined but not used [-Werror=unused-function]
7018 | static void ppc_restore_state_to_opc(CPUState *cs,
<snip>
Fix this by wrapping these function definitions in '#ifdef CONFIG_TCG' so that
they are only defined if qemu is compiled with '--enable-tcg'.
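The shape of the guard is roughly the following (a compilable toy where CONFIG_TCG is defined by hand; in QEMU it comes from the build system and the guarded functions are the TCG callbacks named in the warning):

    #include <stdio.h>

    #define CONFIG_TCG 1   /* stand-in: normally set by the build system */

    #ifdef CONFIG_TCG
    /* only compiled when TCG is enabled, so --disable-tcg builds never see
     * an unused static function */
    static void tcg_only_helper(void)
    {
        puts("TCG path");
    }
    #endif

    int main(void)
    {
    #ifdef CONFIG_TCG
        tcg_only_helper();
    #endif
        return 0;
    }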
Reported-by: Kowshik Jois B S <kowsjois@linux.ibm.com>
Fixes: 61bd1d2942 ("target/ppc: Convert to tcg_ops restore_state_to_opc")
Fixes: 670f1da374 ("target/ppc: Implement hashst and hashchk")
Fixes: 53ae2aeb94 ("target/ppc: Implement hashstp and hashchkp")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1319
Signed-off-by: Vaibhav Jain <vaibhav@linux.ibm.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Tested-by: Kowshik Jois B S <kowsjois@linux.vnet.ibm.com>
Message-Id: <20221116131743.658708-1-vaibhav@linux.ibm.com>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Fix typos (discovered with the 'codespell' utility).
Note: Though "migrateable" still seems to be a valid spelling, we change
it to "migratable" since this is the way more common spelling here.
Message-Id: <20221111182828.282251-1-thuth@redhat.com>
Reviewed-by: Stefan Weil <sw@weilnetz.de>
Reviewed-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
When translating code that is using LAHF and SAHF in combination with the
REX prefix, the instructions should not use any other register than AH;
however, QEMU selects SPL (SP being register 4, just like AH) if the
REX prefix is present. To fix this, use deposit directly without
going through gen_op_mov_v_reg and gen_op_mov_reg_v.
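As an illustration of why a direct deposit sidesteps the register-numbering problem (the helper below is a standalone re-implementation for this example; QEMU's deposit64() has the same start/length behaviour): LAHF always writes AH, i.e. bits 8..15 of RAX, regardless of any REX prefix.

    #include <stdint.h>
    #include <stdio.h>

    /* write 'val' into 'len' bits of 'dst' starting at bit 'pos' */
    static uint64_t deposit64_model(uint64_t dst, int pos, int len, uint64_t val)
    {
        uint64_t mask = (~0ULL >> (64 - len)) << pos;
        return (dst & ~mask) | ((val << pos) & mask);
    }

    int main(void)
    {
        uint64_t rax = 0x1122334455667788ULL;

        /* LAHF: deposit the flags byte into AH (bits 8..15), never into SPL */
        rax = deposit64_model(rax, 8, 8, 0xD7);
        printf("0x%016llx\n", (unsigned long long)rax);  /* 0x112233445566d788 */
        return 0;
    }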
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/130
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Unlike the memory case, where "the destination operand receives a write
cycle without regard to the result of the comparison", rm must not be
touched at all if the write fails, including not zero-extending
it on 64-bit processors. This is not how the movcond currently works,
because it is always followed by a gen_op_mov_reg_v to rm.
To fix it, introduce a new function that is similar to gen_op_mov_reg_v
but writes to a TCG temporary.
Considering that gen_extu(ot, oldv) is not needed in the memory case
either, the two cases for register and memory destinations are different
enough that one might as well fuse the two "if (mod == 3)" into one.
So do that too.
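A worked model of the register-destination case on a 64-bit CPU (plain C showing the architectural semantics being fixed; the actual change is in the TCG translator):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* cmpxchg %ecx, %ebx with 32-bit operand size, compare fails:
         * rbx must not be written at all, so its upper half must survive.
         * The old translation wrote the unchanged 32-bit value back, which
         * zero-extended and clobbered those bits. */
        uint64_t rax = 1;
        uint64_t rbx = 0xdeadbeef00000002ULL;   /* destination register */
        uint64_t rcx = 0x55;                    /* source register */

        if ((uint32_t)rax == (uint32_t)rbx) {
            rbx = (uint32_t)rcx;                /* success: 32-bit write zero-extends */
        } else {
            rax = (uint32_t)rbx;                /* failure: only EAX is loaded */
            /* rbx deliberately left untouched */
        }
        printf("rbx = 0x%016llx\n", (unsigned long long)rbx); /* upper bits intact */
        return 0;
    }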
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/508
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
[rth: Add a test case ]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Some implementations (e.g. Loongson-2F) may decide to implement
a 64 bit FPU without implementing COP1X instructions.
As the eligibility of 64 bit FP instructions is already determined
by CP0St_FR, there is no need to check for COP1X again.
Signed-off-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20221102165719.190378-1-jiaxun.yang@flygoat.com>
[PMD: Add missing trailing parenthesis (buildfix)]
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
I don't have access to Octeon68XX hardware but according
to my investigation Octeon never had DSP ASE support.
As per "Cavium Networks OCTEON Plus CN50XX Hardware Reference
Manual" CP0C3_DSPP is reserved bit and read as 0. Also I do have
access to a Ubiquiti Edgerouter 4 which has Octeon CN7130 processor
and I can confirm CP0C3_DSPP is read as 0 on that processor.
Further more, in linux kernel:
arch/mips/include/asm/mach-cavium-octeon/cpu-feature-overrides.h
cpu_has_dsp is overridden as 0.
So I believe we shouldn't emulate DSP in QEMU as well.
Signed-off-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Acked-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
Message-Id: <20221031132531.18122-4-jiaxun.yang@flygoat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
This patch changes the condition and function name used for enabling
indexed load instructions for Octeon vCPUs. Octeons do not have the
DSP extension, but they do implement LBX and the other indexed loads.
Signed-off-by: Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <166728058455.229236.13834649461181619195.stgit@pasha-ThinkPad-X280>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
As per "Cavium Networks OCTEON Plus CN50XX Hardware Reference
Manual" offset field is signed 16 bit value. However arg_BBIT.offset
is unsigned. We need to cast it as signed to do address calculation.
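A standalone demonstration of the difference (the *4 scaling and the variable names are illustrative of a typical MIPS branch offset, not the exact decoder code):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t pc = 0x1000;
        uint16_t raw_offset = 0xfffe;   /* encodes -2 in the 16-bit offset field */

        /* unsigned interpretation: a huge forward branch */
        uint64_t wrong = pc + 4 + (uint64_t)raw_offset * 4;
        /* signed interpretation: a short backward branch, as intended */
        uint64_t right = pc + 4 + (int64_t)(int16_t)raw_offset * 4;

        printf("unsigned: 0x%llx\nsigned:   0x%llx\n",
               (unsigned long long)wrong, (unsigned long long)right);
        return 0;
    }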
Signed-off-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Acked-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20221031132531.18122-3-jiaxun.yang@flygoat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
As per an unpublished document, in later revisions of the chips
CP0St_{KX, SX, UX} are not writable and are hardcoded to 1.
Without those bits set, the kernel is unable to access the XKPHYS address
segment. So just set them up on CPU reset.
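A sketch of the reset-time change (the Status bit positions are the architectural MIPS64 ones; the harness is invented, the real patch does this in the CPU reset path):

    #include <stdint.h>
    #include <stdio.h>

    #define CP0St_UX 5   /* architectural MIPS64 Status bit positions */
    #define CP0St_SX 6
    #define CP0St_KX 7

    int main(void)
    {
        uint32_t cp0_status = 0;

        /* these revisions hardwire the bits to 1, so force them on at reset */
        cp0_status |= (1u << CP0St_KX) | (1u << CP0St_SX) | (1u << CP0St_UX);
        printf("CP0.Status = 0x%08x\n", cp0_status);   /* 0x000000e0 */
        return 0;
    }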
Signed-off-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Acked-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221031132531.18122-2-jiaxun.yang@flygoat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
-----BEGIN PGP SIGNATURE-----
iLMEAAEIAB0WIQS4/x2g0v3LLaCcbCxAov/yOSY+3wUCY2hz5gAKCRBAov/yOSY+
31ESA/9ppQVu70y8AnA/aYCzC6koQxxAsaD/lvwc7M1xzeZnjwOsVnsFLpZITTJj
2+2O/BlEO3uRvAtV6E7Vtu9D/+Cc+HQ7yOFnwbY0jZmePThp3YYtTg6o+2T7/9ya
RMicgny1SYjsOjG8/Uam9+dRbH5QheNFwid0BWFhMts9MDB3Mg==
=zZ8m
-----END PGP SIGNATURE-----
Merge tag 'pull-loongarch-20221107' of https://gitlab.com/gaosong/qemu into staging
pull-loongarch-20221107
# -----BEGIN PGP SIGNATURE-----
#
# iLMEAAEIAB0WIQS4/x2g0v3LLaCcbCxAov/yOSY+3wUCY2hz5gAKCRBAov/yOSY+
# 31ESA/9ppQVu70y8AnA/aYCzC6koQxxAsaD/lvwc7M1xzeZnjwOsVnsFLpZITTJj
# 2+2O/BlEO3uRvAtV6E7Vtu9D/+Cc+HQ7yOFnwbY0jZmePThp3YYtTg6o+2T7/9ya
# RMicgny1SYjsOjG8/Uam9+dRbH5QheNFwid0BWFhMts9MDB3Mg==
# =zZ8m
# -----END PGP SIGNATURE-----
# gpg: Signature made Sun 06 Nov 2022 21:56:38 EST
# gpg: using RSA key B8FF1DA0D2FDCB2DA09C6C2C40A2FFF239263EDF
# gpg: Good signature from "Song Gao <m17746591750@163.com>" [unknown]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: B8FF 1DA0 D2FD CB2D A09C 6C2C 40A2 FFF2 3926 3EDF
* tag 'pull-loongarch-20221107' of https://gitlab.com/gaosong/qemu:
target/loongarch: Fix return value of CHECK_FPE
target/loongarch: Separate the hardware flags into MMU index and PLV
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Since patchset v3 was merged into the main line before it was approved,
this patch updates the code to match patchset v4.
Fixes: 2419978c ("target/loongarch: Fix emulation of float-point disable exception")
Link: https://lists.nongnu.org/archive/html/qemu-devel/2022-11/msg00808.html
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Rui Wang <wangrui@loongson.cn>
Message-Id: <20221107024526.702297-3-wangrui@loongson.cn>
Signed-off-by: Song Gao <gaosong@loongson.cn>
Since patchset v3 was merged into the main line before it was approved,
this patch updates the code to match patchset v4.
Fixes: b4bda200 ("target/loongarch: Adjust the layout of hardware flags bit fields")
Link: https://lists.nongnu.org/archive/html/qemu-devel/2022-11/msg00808.html
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Rui Wang <wangrui@loongson.cn>
Message-Id: <20221107024526.702297-2-wangrui@loongson.cn>
Signed-off-by: Song Gao <gaosong@loongson.cn>
We use the .h.inc extension to include C headers. To be consistent
with the rest of the codebase, rename the C headers that currently use
the .def extension.
IDE/tools using our .editorconfig / .gitattributes will leverage
this consistency.
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20221025235006.7215-4-philmd@linaro.org>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
We use the .h.inc extension to include C headers. To be consistent
with the rest of the codebase, rename the C headers that currently use
the .def extension.
IDE/tools using our .editorconfig / .gitattributes will leverage
this consistency.
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20221025235006.7215-3-philmd@linaro.org>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
We use the .h.inc extension to include C headers. To be consistent
with the rest of the codebase, rename the C headers that currently use
the .def extension.
IDE/tools using our .editorconfig / .gitattributes will leverage
this consistency.
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20221025235006.7215-2-philmd@linaro.org>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
* Fix regression booting Trusted Firmware
* Honor HCR_E2H and HCR_TGE in ats_write64()
* Copy the entire vector in DO_ZIP
* Fix Privileged Access Never (PAN) for aarch32
* Make TLBIOS and TLBIRANGE ops trap on HCR_EL2.TTLB
* Set SCR_EL3.HXEn when direct booting kernel
* Set SME and SVE EL3 vector lengths when direct booting kernel
-----BEGIN PGP SIGNATURE-----
iQJNBAABCAA3FiEE4aXFk81BneKOgxXPPCUl7RQ2DN4FAmNk+KkZHHBldGVyLm1h
eWRlbGxAbGluYXJvLm9yZwAKCRA8JSXtFDYM3vUsD/9SYZP3ne2OZxBe8he98jJ5
6apODiBksBLUM+1bKEoYW8Kw4XpS10I1Tbnxe7n0bNAfIiZlsZ7HJAJaYWy4MX4k
Bq0v1EIFo+Obumocc14ZzWcw9yPpHOGavKHXfPxTtIw0amtOmh3aMBPuOZKiMSaq
TdI/8654DbAOY3Hp/r6WnXwEgAc23kx/PtGhQFdU4iWhzTdeQeFkgCCsVMO02zFQ
ZM4wiAATpfNfgf5+Wxoin6RQ8nI9PF+Xf7HhN3d1CiXju3vOl+geYNkubJzIopv1
itLcnvduYE6+5oJsnXZ4FDNO6/nnqWRNqtyDf0/NjLROfj84BPJpZqMX+FR6Q0I0
d+4/oEw4A46qfaS5b4/YelbJOiUgiViWU1Xs3g2dkeTMT8CyGfDrJ2HRDKN7AaHo
llL7s1calkX2oSs+gU0BAw8xRETGwMBSOpF6JmPVh277LjvWfN1vsJzVUG3wrSXL
G7qa2h+fHV5Xu876sc/i0+d4qHuqcE/EU86VQ6X40f+dRzN02rkSCPAxzGFwLXOr
8fl5MsX6z5pqcubnzxkhi66ZHc6fXsvtUjKBxyrVpMyjMlV9PTJ2Q1RCgVctErXk
lDzsLuplzPSjZBy3Peib/rLnmYUxJHyPe0RFYIumzZv/UHwL4GjZgkI842UVBpAL
FvIGblcCXHhdP4UFvqgZhw==
=Fcb4
-----END PGP SIGNATURE-----
Merge tag 'pull-target-arm-20221104' of https://git.linaro.org/people/pmaydell/qemu-arm into staging
target-arm queue:
* Fix regression booting Trusted Firmware
* Honor HCR_E2H and HCR_TGE in ats_write64()
* Copy the entire vector in DO_ZIP
* Fix Privileged Access Never (PAN) for aarch32
* Make TLBIOS and TLBIRANGE ops trap on HCR_EL2.TTLB
* Set SCR_EL3.HXEn when direct booting kernel
* Set SME and SVE EL3 vector lengths when direct booting kernel
# -----BEGIN PGP SIGNATURE-----
#
# iQJNBAABCAA3FiEE4aXFk81BneKOgxXPPCUl7RQ2DN4FAmNk+KkZHHBldGVyLm1h
# eWRlbGxAbGluYXJvLm9yZwAKCRA8JSXtFDYM3vUsD/9SYZP3ne2OZxBe8he98jJ5
# 6apODiBksBLUM+1bKEoYW8Kw4XpS10I1Tbnxe7n0bNAfIiZlsZ7HJAJaYWy4MX4k
# Bq0v1EIFo+Obumocc14ZzWcw9yPpHOGavKHXfPxTtIw0amtOmh3aMBPuOZKiMSaq
# TdI/8654DbAOY3Hp/r6WnXwEgAc23kx/PtGhQFdU4iWhzTdeQeFkgCCsVMO02zFQ
# ZM4wiAATpfNfgf5+Wxoin6RQ8nI9PF+Xf7HhN3d1CiXju3vOl+geYNkubJzIopv1
# itLcnvduYE6+5oJsnXZ4FDNO6/nnqWRNqtyDf0/NjLROfj84BPJpZqMX+FR6Q0I0
# d+4/oEw4A46qfaS5b4/YelbJOiUgiViWU1Xs3g2dkeTMT8CyGfDrJ2HRDKN7AaHo
# llL7s1calkX2oSs+gU0BAw8xRETGwMBSOpF6JmPVh277LjvWfN1vsJzVUG3wrSXL
# G7qa2h+fHV5Xu876sc/i0+d4qHuqcE/EU86VQ6X40f+dRzN02rkSCPAxzGFwLXOr
# 8fl5MsX6z5pqcubnzxkhi66ZHc6fXsvtUjKBxyrVpMyjMlV9PTJ2Q1RCgVctErXk
# lDzsLuplzPSjZBy3Peib/rLnmYUxJHyPe0RFYIumzZv/UHwL4GjZgkI842UVBpAL
# FvIGblcCXHhdP4UFvqgZhw==
# =Fcb4
# -----END PGP SIGNATURE-----
# gpg: Signature made Fri 04 Nov 2022 07:34:01 EDT
# gpg: using RSA key E1A5C593CD419DE28E8315CF3C2525ED14360CDE
# gpg: issuer "peter.maydell@linaro.org"
# gpg: Good signature from "Peter Maydell <peter.maydell@linaro.org>" [full]
# gpg: aka "Peter Maydell <pmaydell@gmail.com>" [full]
# gpg: aka "Peter Maydell <pmaydell@chiark.greenend.org.uk>" [full]
# gpg: aka "Peter Maydell <peter@archaic.org.uk>" [unknown]
# Primary key fingerprint: E1A5 C593 CD41 9DE2 8E83 15CF 3C25 25ED 1436 0CDE
* tag 'pull-target-arm-20221104' of https://git.linaro.org/people/pmaydell/qemu-arm:
target/arm: Two fixes for secure ptw
target/arm: Honor HCR_E2H and HCR_TGE in ats_write64()
target/arm: Copy the entire vector in DO_ZIP
target/arm: Fix Privileged Access Never (PAN) for aarch32
target/arm: Make TLBIOS and TLBIRANGE ops trap on HCR_EL2.TTLB
hw/arm/boot: Set SCR_EL3.HXEn when booting kernel
hw/arm/boot: Set SME and SVE EL3 vector lengths when booting kernel
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Commit 48da29e4 reversed the sense of non-secure in get_phys_addr_lpae,
and failed to initialize attrs.secure for ARMMMUIdx_Phys_S.
Fixes: 48da29e4 ("target/arm: Add ptw_idx to S1Translate")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1293
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We need to check HCR_E2H and HCR_TGE to select the right MMU index for
the correct translation regime.
To check for EL2&0 translation regime:
- For S1E0*, S1E1* and S12E* ops, check both HCR_E2H and HCR_TGE
- For S1E2* ops, check only HCR_E2H
Signed-off-by: Ake Koomsin <ake@igel.co.jp>
Message-id: 20221101064250.12444-1-ake@igel.co.jp
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
With odd_ofs set, we weren't copying enough data.
Fixes: 09eb6d7025 ("target/arm: Move sve zip high_ofs into simd_data")
Reported-by: Idan Horowitz <idan.horowitz@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20221031054144.3574-1-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
When we implemented the PAN support we theoretically wanted
to support it for both AArch32 and AArch64, but in practice
several bugs made it essentially unusable with an AArch32
guest. Fix all those problems:
- Use CPSR.PAN to check for PAN state in aarch32 mode
- throw a permission fault during address translation when PAN is
enabled and the kernel tries to access a user-accessible page
- ignore the SCTLR_XP bit for armv7 and armv8 (it conflicts with SCTLR_SPAN).
Signed-off-by: Timofey Kutergin <tkutergin@gmail.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20221027112619.2205229-1-tkutergin@gmail.com
[PMM: tweak commit message]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The HCR_EL2.TTLB bit is supposed to trap all EL1 execution of TLB
maintenance instructions. However we have added new TLB insns for
FEAT_TLBIOS and FEAT_TLBIRANGE, and forgot to set their accessfn to
access_ttlb. Add the missing accessfns.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
We need to emulate it to generate a floating point disable exception
when CSR.EUEN.FPE is zero.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Song Gao <gaosong@loongson.cn>
Signed-off-by: Rui Wang <wangrui@loongson.cn>
Message-Id: <20221104040517.222059-3-wangrui@loongson.cn>
Signed-off-by: Song Gao <gaosong@loongson.cn>
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Song Gao <gaosong@loongson.cn>
Signed-off-by: Rui Wang <wangrui@loongson.cn>
Message-Id: <20221104040517.222059-2-wangrui@loongson.cn>
Signed-off-by: Song Gao <gaosong@loongson.cn>
When the address is an invalid address, we should set exception_index
according to the MMUAccessType, and EXCCODE_ADEF needn't update badinstr.
Otherwise, the system enters an infinite loop. E.g.:
run test.c in system mode
test.c:
    #include <stdio.h>
    void (*func)(int *);
    int main()
    {
        int i = 8;
        void *ptr = (void *)0x4000000000000000;
        func = ptr;
        func(&i);
        return 0;
    }
Signed-off-by: Song Gao <gaosong@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-ID: <20221101073210.3934280-2-gaosong@loongson.cn>
We need subcodes to distinguish exceptions that share the same excode in
cs->exception_index, such as EXCCODE_ADEF/EXCCODE_ADEM.
Signed-off-by: Song Gao <gaosong@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-ID: <20221101073210.3934280-1-gaosong@loongson.cn>
1. When a cpu reads or writes the extioi COREISR reg, it should access
the reg belonging to itself, so the cpu index into 's->coreisr'
is the current cpu number. Use the MemTxAttrs requester_id to get
the cpu index.
2. There is no need to mask with 0x1f when calculating the coreisr array index.
Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221021015307.2570844-3-yangxiaojuan@loongson.cn>
Signed-off-by: Song Gao <gaosong@loongson.cn>
If CR0.PG is unset, pg_mode will be zero, but it will also be zero
for non-PAE/non-PSE page tables with CR0.WP=0. Restore the
correct test for paging enabled.
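A small standalone model of the distinction (CR0_PG_MASK mirrors the architectural CR0.PG bit; the rest is invented for the example):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define CR0_PG_MASK (1u << 31)

    int main(void)
    {
        /* 32-bit non-PAE/non-PSE paging with CR0.WP = 0: every pg_mode flag
         * is clear even though paging is enabled */
        uint32_t cr0 = CR0_PG_MASK;
        uint32_t pg_mode = 0;

        bool wrong_test = (pg_mode != 0);              /* concludes "paging off" */
        bool right_test = (cr0 & CR0_PG_MASK) != 0;    /* paging is on */

        printf("pg_mode test: %d, CR0.PG test: %d\n", wrong_test, right_test);
        return 0;
    }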
Fixes: 98281984a3 ("target/i386: Add MMU_PHYS_IDX and MMU_NESTED_IDX")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1269
Reported-by: Andreas Gustafsson <gson@gson.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221102091232.1092552-1-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
- cleanup win32/64 docker files
- update test-mingw test
- add flex/bison to debian-all-test
- handle --enable-static/--disable-pie in config
- extend timeouts on x86_64 avocado tests
- add flex/bison to debian-hexagon-cross
- use regular semihosting for nios2 check-tcg
- fix obscure linker error to nios2 softmmu tests
- various windows portability fixes for tests
- clean-up of MAINTAINERS
- use -machine none when appropriate in avocado
- make raspi2_initrd test detect shutdown
- disable sh4 rd2 tests on gitlab
- re-enable threadcount/linux-test for sh4
- clean-up s390x handling of "ex" instruction
- better handle new CPUs in execlog plugin
- pass CONFIG_DEBUG_TCG to plugin builds
- try and avoid races in test-io-channel-command
- speed up ssh key checking for tests/vm
-----BEGIN PGP SIGNATURE-----
iQEzBAABCgAdFiEEZoWumedRZ7yvyN81+9DbCVqeKkQFAmNhI/MACgkQ+9DbCVqe
KkSFXggAg0HIpBDcNz0V5Mh5p69F14pwbDSygKqGDFBebdOHeL7f+WCvQPUGEWxp
814zjvRY3SC4Mo4mtzguRvNu0styaUpemvRw5FDYK48GpEjg2eVxTnAFD4nr7ud0
dhw3iaHP+RjA6s3EpPUqQ5nlZEgFJ+Tvkckk3wKSpksBYA4tJra6Uey5kpZ27x0T
KOzB2P6w+9B/B11n/aeSxvRPZdnXt2MyfS/3pwwfoFYioEyaEQ3Ie6ooachtdSL3
PEvnJVK0VVYbZQwBXJlycNLlK/D++s4AEwmnZ5GmvDFuXlkRO9YMy9Wa5TKJl7gz
76Aw1KHsE03SyAPvH4bE7eGkIwhJOQ==
=6hXE
-----END PGP SIGNATURE-----
Merge tag 'pull-testing-for-7.2-011122-3' of https://github.com/stsquad/qemu into staging
testing and plugin updates for 7.2:
- cleanup win32/64 docker files
- update test-mingw test
- add flex/bison to debian-all-test
- handle --enable-static/--disable-pie in config
- extend timeouts on x86_64 avocado tests
- add flex/bison to debian-hexagon-cross
- use regular semihosting for nios2 check-tcg
- fix obscure linker error to nios2 softmmu tests
- various windows portability fixes for tests
- clean-up of MAINTAINERS
- use -machine none when appropriate in avocado
- make raspi2_initrd test detect shutdown
- disable sh4 rd2 tests on gitlab
- re-enable threadcount/linux-test for sh4
- clean-up s390x handling of "ex" instruction
- better handle new CPUs in execlog plugin
- pass CONFIG_DEBUG_TCG to plugin builds
- try and avoid races in test-io-channel-command
- speed up ssh key checking for tests/vm
# -----BEGIN PGP SIGNATURE-----
#
# iQEzBAABCgAdFiEEZoWumedRZ7yvyN81+9DbCVqeKkQFAmNhI/MACgkQ+9DbCVqe
# KkSFXggAg0HIpBDcNz0V5Mh5p69F14pwbDSygKqGDFBebdOHeL7f+WCvQPUGEWxp
# 814zjvRY3SC4Mo4mtzguRvNu0styaUpemvRw5FDYK48GpEjg2eVxTnAFD4nr7ud0
# dhw3iaHP+RjA6s3EpPUqQ5nlZEgFJ+Tvkckk3wKSpksBYA4tJra6Uey5kpZ27x0T
# KOzB2P6w+9B/B11n/aeSxvRPZdnXt2MyfS/3pwwfoFYioEyaEQ3Ie6ooachtdSL3
# PEvnJVK0VVYbZQwBXJlycNLlK/D++s4AEwmnZ5GmvDFuXlkRO9YMy9Wa5TKJl7gz
# 76Aw1KHsE03SyAPvH4bE7eGkIwhJOQ==
# =6hXE
# -----END PGP SIGNATURE-----
# gpg: Signature made Tue 01 Nov 2022 09:49:39 EDT
# gpg: using RSA key 6685AE99E75167BCAFC8DF35FBD0DB095A9E2A44
# gpg: Good signature from "Alex Bennée (Master Work Key) <alex.bennee@linaro.org>" [unknown]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 6685 AE99 E751 67BC AFC8 DF35 FBD0 DB09 5A9E 2A44
* tag 'pull-testing-for-7.2-011122-3' of https://github.com/stsquad/qemu: (31 commits)
tests/vm: use -o IdentitiesOnly=yes for ssh
tests/unit: cleanups for test-io-channel-command
contrib/plugins: protect execlog's last_exec expansion
contrib/plugins: enable debug on CONFIG_DEBUG_TCG
tests/tcg: include CONFIG_PLUGIN in config-host.mak
target/s390x: fake instruction loading when handling 'ex'
target/s390x: don't probe next pc for EXecuted insns
target/s390x: don't use ld_code2 to probe next pc
tests/tcg: re-enable threadcount for sh4
tests/tcg: re-enable linux-test for sh4
tests/avocado: disable sh4 rd2 tests on Gitlab
tests/avocado: raspi2_initrd: Wait for guest shutdown message before stopping
tests/avocado: set -machine none for userfwd and vnc tests
MAINTAINERS: fix-up for check-tcg Makefile changes
MAINTAINERS: add features_to_c.sh to gdbstub files
MAINTAINERS: add entries for the key build bits
hw/usb: dev-mtp: Use g_mkdir()
block/vvfat: Unify the mkdir() call
tcg: Avoid using hardcoded /tmp
semihosting/arm-compat-semi: Avoid using hardcoded /tmp
...
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
The helpers for reset_rf, cli, sti, clac, stac are
completely trivial; implement them inline.
Drop some nearby #if 0 code.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The value passed is always true, and if the target's
synchronize_from_tb hook is non-trivial, not exiting
may be erroneous.
Reviewed-by: Claudio Fontana <cfontana@suse.de>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Since we do not plan to exit, use cpu_unwind_state_data
and extract exactly the data requested.
This is a bug fix, in that we no longer clobber dflag.
Consider:
        l.j     L2          // branch
        l.mfspr r1, ppc     // delay
    L1: boom
    L2: l.lwa   r3, (r4)
Here, dflag would be set by cpu_restore_state (because that is the current
state of the cpu), but not cleared by tb_stop on exiting the TB
(because DisasContext has recorded the current value as zero).
The next TB begins at L2 with dflag incorrectly set. If the load has a
tlb miss, then the exception will be delivered as per a delay slot:
with DSX set in the status register and PC decremented (delay slots
restart by re-executing the branch). This will cause the return from
interrupt to go to L1, and boom!
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
We have called cpu_restore_state asserting will_exit.
Do not go back on that promise. This affects icount.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Avoid cpu_restore_state, and modifying env->eip out from
underneath the translator with TARGET_TB_PCREL. There is
some slight duplication from x86_restore_state_to_opc,
but it's just a few lines.
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1269
Reviewed-by: Claudio Fontana <cfontana@suse.de>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The s390x EXecute instruction is a bit weird as we synthesise the
executed instruction from what we have stored in memory. This missed
the plugin instrumentation.
Work around this with a special helper to inform the rest of the
translator about the instruction so things stay consistent.
Reviewed-by: David Hildenbrand <david@redhat.com>
Acked-by: Ilya Leoshkevich <iii@linux.ibm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Cc: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221027183637.2772968-26-alex.bennee@linaro.org>
We have finished the TB anyway so we can shortcut the other tests by
checking dc->ex_value first.
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20221027183637.2772968-25-alex.bennee@linaro.org>
This isn't the translator picking up an instruction, so we shouldn't use
the translator_lduw function which has side effects for plugins.
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20221027183637.2772968-24-alex.bennee@linaro.org>
This queue has the second part of the ppc4xx_sdram cleanups, doorbell
instructions for POWER8, new pflash handling for the e500 machine and a
Radix MMU regression fix.
It also has a lot of performance optimizations in the PowerPC emulation
done by the researchers of the Eldorado institute. Between using gvec
for VMX/VSX instructions, a full rework of the interrupt model and PMU
optimizations, they managed to drastically speed up the emulation of
powernv8/9/10 machines. Here's an example with avocado tests:
- with master:
tests/avocado/boot_linux_console.py:BootLinuxConsole.test_ppc_powernv8:
PASS (38.89 s)
tests/avocado/boot_linux_console.py:BootLinuxConsole.test_ppc_powernv9:
PASS (43.89 s)
- with this queue applied:
tests/avocado/boot_linux_console.py:BootLinuxConsole.test_ppc_powernv8:
PASS (21.23 s)
tests/avocado/boot_linux_console.py:BootLinuxConsole.test_ppc_powernv9:
PASS (22.58 s)
Other ppc machines, like pseries, also had a noticeable performance
boost.
-----BEGIN PGP SIGNATURE-----
iHUEABYKAB0WIQQX6/+ZI9AYAK8oOBk82cqW3gMxZAUCY10J/gAKCRA82cqW3gMx
ZAbjAPwKNbE1wE2POJbMALBQAM5MewwLMV/UKGjE6jA7HAbb/AEA9e3o11FoUmSJ
rZkmTvMzBQZ81mMGRlS0cnqbrr4ADgc=
=gnKY
-----END PGP SIGNATURE-----
Merge tag 'pull-ppc-20221029' of https://gitlab.com/danielhb/qemu into staging
ppc patch queue for 2022-10-29:
This queue has the second part of the ppc4xx_sdram cleanups, doorbell
instructions for POWER8, new pflash handling for the e500 machine and a
Radix MMU regression fix.
It also has a lot of performance optimizations in the PowerPC emulation
done by the researchers of the Eldorado institute. Between using gvec
for VMX/VSX instructions, a full rework of the interrupt model and PMU
optimizations, they managed to drastically speed up the emulation of
powernv8/9/10 machines. Here's an example with avocado tests:
- with master:
tests/avocado/boot_linux_console.py:BootLinuxConsole.test_ppc_powernv8:
PASS (38.89 s)
tests/avocado/boot_linux_console.py:BootLinuxConsole.test_ppc_powernv9:
PASS (43.89 s)
- with this queue applied:
tests/avocado/boot_linux_console.py:BootLinuxConsole.test_ppc_powernv8:
PASS (21.23 s)
tests/avocado/boot_linux_console.py:BootLinuxConsole.test_ppc_powernv9:
PASS (22.58 s)
Other ppc machines, like pseries, also had a noticeable performance
boost.
# -----BEGIN PGP SIGNATURE-----
#
# iHUEABYKAB0WIQQX6/+ZI9AYAK8oOBk82cqW3gMxZAUCY10J/gAKCRA82cqW3gMx
# ZAbjAPwKNbE1wE2POJbMALBQAM5MewwLMV/UKGjE6jA7HAbb/AEA9e3o11FoUmSJ
# rZkmTvMzBQZ81mMGRlS0cnqbrr4ADgc=
# =gnKY
# -----END PGP SIGNATURE-----
# gpg: Signature made Sat 29 Oct 2022 07:09:50 EDT
# gpg: using EDDSA key 17EBFF9923D01800AF2838193CD9CA96DE033164
# gpg: Good signature from "Daniel Henrique Barboza <danielhb413@gmail.com>" [unknown]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 17EB FF99 23D0 1800 AF28 3819 3CD9 CA96 DE03 3164
* tag 'pull-ppc-20221029' of https://gitlab.com/danielhb/qemu: (63 commits)
target/ppc: Fix regression in Radix MMU
hw/ppc/e500: Implement pflash handling
hw/sd/sdhci: Rename ESDHC_* defines to USDHC_*
hw/sd/sdhci-internal: Unexport ESDHC defines
hw/block/pflash_cfi0{1, 2}: Error out if device length isn't a power of two
docs/system/ppc/ppce500: Use qemu-system-ppc64 across the board(s)
target/ppc: Increment PMC5 with inline insns
target/ppc: Add new PMC HFLAGS
ppc4xx_sdram: Add errp parameter to ppc4xx_sdram_banks()
ppc4xx_sdram: Convert DDR SDRAM controller to new bank handling
ppc4xx_sdram: Generalise bank setup
ppc4xx_sdram: Rename local state variable for brevity
ppc4xx_sdram: Use hwaddr for memory bank size
ppc4xx_sdram: Move ppc4xx_sdram_banks() to ppc4xx_sdram.c
ppc4xx_devs.c: Move DDR SDRAM controller model to ppc4xx_sdram.c
ppc440_uc.c: Move DDR2 SDRAM controller model to ppc4xx_sdram.c
target/ppc: move the p*_interrupt_powersave methods to excp_helper.c
target/ppc: unify cpu->has_work based on cs->interrupt_request
target/ppc: introduce ppc_maybe_interrupt
target/ppc: remove ppc_store_lpcr from CONFIG_USER_ONLY builds
...
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
* Some more small s390x fixes and maintainer updates
* Make sure to remove all temporary files from qtests
* OpenBSD VM test update to version 7.2
* Add sndio to FreeBSD tests
* More patches to enable the qtests on Windows
-----BEGIN PGP SIGNATURE-----
iQJFBAABCAAvFiEEJ7iIR+7gJQEY8+q5LtnXdP5wLbUFAmNb1x8RHHRodXRoQHJl
ZGhhdC5jb20ACgkQLtnXdP5wLbXmcA//TCliiFkhprVxzIqy7zb9uz2Odu+sS4dT
azUSlXvC14fECm/Rb/rd2VLqCu5x2er8CYauxKQ4VhRImzcDta4kvpt/HKIppN2t
sqw5tipJL0DYcWBwYL1llvfutM26M+Oh0igwR8uV7b+W1FjojEZdcOr9IZ6E6V55
wQCE5OHm0VCr61QeI5IBfZTsiPo+DFomUCpj7w66j6i0CVDvmpoe36tCmvGgrcpZ
SP7ep7/Iq+dnGh2YnJyoUOPlXeeiBCxAygOVnIRXptDeniGoliCFn7ksLdKDQ9qY
69pSPR/W7mTZB/HkCRalAbYuYrI9Rcqxdu6c9vcyB8Pr0snQLTf8qThY+BJ2oC4w
JSGgWVniAk5MmrDazwNRkSbgngYLYf+CcT1h5AANuU5Kt50Bdy9Y3TuL5YVmofEp
N4bypV0ICImQyDECz76+i5/iJOcWiRyjMfLT6y00dspeuy983xHakrsHGD8xj0U/
3IVxnF9bDnUSVg6lFhYrgCB3dRG1TNPJoYQOM7raS5MAPRrDtIuSabwtyn84jo4+
9kZRPJBriMBHNsCjGVlJ9CATmaK1SKVAbRcabjgOKoIwhZTpAe6JalykREUJlTys
hB2V//lWWYPaSpzwY+OkvxoOmJIziixEskOmx6hPcoxID5v/bqlR69W15aUlKuLq
VWFb+/yMvaE=
=h0Ep
-----END PGP SIGNATURE-----
Merge tag 'pull-request-2022-10-28' of https://gitlab.com/thuth/qemu into staging
* Fix and test the VISTR instruction on s390x
* Some more small s390x fixes and maintainer updates
* Make sure to remove all temporary files from qtests
* OpenBSD VM test update to version 7.2
* Add sndio to FreeBSD tests
* More patches to enable the qtests on Windows
# -----BEGIN PGP SIGNATURE-----
#
# iQJFBAABCAAvFiEEJ7iIR+7gJQEY8+q5LtnXdP5wLbUFAmNb1x8RHHRodXRoQHJl
# ZGhhdC5jb20ACgkQLtnXdP5wLbXmcA//TCliiFkhprVxzIqy7zb9uz2Odu+sS4dT
# azUSlXvC14fECm/Rb/rd2VLqCu5x2er8CYauxKQ4VhRImzcDta4kvpt/HKIppN2t
# sqw5tipJL0DYcWBwYL1llvfutM26M+Oh0igwR8uV7b+W1FjojEZdcOr9IZ6E6V55
# wQCE5OHm0VCr61QeI5IBfZTsiPo+DFomUCpj7w66j6i0CVDvmpoe36tCmvGgrcpZ
# SP7ep7/Iq+dnGh2YnJyoUOPlXeeiBCxAygOVnIRXptDeniGoliCFn7ksLdKDQ9qY
# 69pSPR/W7mTZB/HkCRalAbYuYrI9Rcqxdu6c9vcyB8Pr0snQLTf8qThY+BJ2oC4w
# JSGgWVniAk5MmrDazwNRkSbgngYLYf+CcT1h5AANuU5Kt50Bdy9Y3TuL5YVmofEp
# N4bypV0ICImQyDECz76+i5/iJOcWiRyjMfLT6y00dspeuy983xHakrsHGD8xj0U/
# 3IVxnF9bDnUSVg6lFhYrgCB3dRG1TNPJoYQOM7raS5MAPRrDtIuSabwtyn84jo4+
# 9kZRPJBriMBHNsCjGVlJ9CATmaK1SKVAbRcabjgOKoIwhZTpAe6JalykREUJlTys
# hB2V//lWWYPaSpzwY+OkvxoOmJIziixEskOmx6hPcoxID5v/bqlR69W15aUlKuLq
# VWFb+/yMvaE=
# =h0Ep
# -----END PGP SIGNATURE-----
# gpg: Signature made Fri 28 Oct 2022 09:20:31 EDT
# gpg: using RSA key 27B88847EEE0250118F3EAB92ED9D774FE702DB5
# gpg: issuer "thuth@redhat.com"
# gpg: Good signature from "Thomas Huth <th.huth@gmx.de>" [full]
# gpg: aka "Thomas Huth <thuth@redhat.com>" [full]
# gpg: aka "Thomas Huth <huth@tuxfamily.org>" [full]
# gpg: aka "Thomas Huth <th.huth@posteo.de>" [unknown]
# Primary key fingerprint: 27B8 8847 EEE0 2501 18F3 EAB9 2ED9 D774 FE70 2DB5
* tag 'pull-request-2022-10-28' of https://gitlab.com/thuth/qemu: (21 commits)
tests/qtest: libqtest: Correct the timeout unit of blocking receive calls for win32
tests/qtest: libqos: Do not build virtio-9p unconditionally
tests/qtest: migration-test: Make sure QEMU process "to" exited after migration is canceled
tests/qtest: libqtest: Introduce qtest_wait_qemu()
tests/qtest: Use EXIT_FAILURE instead of magic number
tests/qtest: device-plug-test: Reverse the usage of double/single quotes
tests/qtest: Support libqtest to build and run on Windows
tests/qtest: Use send/recv for socket communication
accel/qtest: Support qtest accelerator for Windows
tests: Add sndio to the FreeBSD CI containers / VM
tests/vm: update openbsd to release 7.2
tests/qtest/libqos/e1000e: Use e1000_regs.h
tests/qtest/cxl-test: Remove temporary directories after testing
tests/qtest/tpm: Clean up remainders of swtpm
MAINTAINERS: target/s390x/: add Ilya as reviewer
tests/tcg/s390x: Add a test for the vistr instruction
target/s390x: Fix emulation of the VISTR instruction
tests/tcg/s390x: Test compiler flags only once, not every time
s390x/tod-kvm: don't save/restore the TOD in PV guests
s390x: step down as general arch maintainer
...
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Specify the maximum possible APIC ID assigned for the current VM session
to KVM prior to the creation of vCPUs. With this setting, KVM can set up
VM-scoped data structures indexed by the APIC ID, e.g. the
Posted-Interrupt Descriptor pointer table used for Intel IPI
virtualization, with an optimal memory footprint.
This is achieved by calling KVM_ENABLE_CAP for the KVM_CAP_MAX_VCPU_ID
capability once KVM has enabled it. The return error is ignored if KVM
doesn't support this capability yet.
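As a rough sketch of the idea (not the literal patch; max_apic_id stands
in for whatever limit the machine code computes):
    /* Advertise the largest APIC ID the VM will use before any vCPU is
     * created.  kvm_vm_enable_cap() fails on kernels without the
     * capability; that error is deliberately ignored. */
    kvm_vm_enable_cap(s, KVM_CAP_MAX_VCPU_ID, 0, max_apic_id);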
Signed-off-by: Zeng Guang <guang.zeng@intel.com>
Acked-by: Peter Xu <peterx@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Message-Id: <20220825025246.26618-1-guang.zeng@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Commit 47e83d9107 ended up unintentionally changing the control flow
of ppc_radix64_process_scoped_xlate(). When guest_visible is false,
it must not raise an exception, even if the radix configuration is
not valid.
This regression prevented Linux boot in a nested environment with
L1 using TCG and emulating KVM (cap-nested-hv=on) and L2 using
KVM. L2 would hang on Linux's futex_init(), when it tested how a
futex_atomic_cmpxchg_inatomic() handled a fault, because L1 would
start a loop of trying to perform partition scoped translations
and raising exceptions.
Fixes: 47e83d9107 ("target/ppc: Improve Radix xlate level validation")
Reported-by: Victor Colombo <victor.colombo@eldorado.org.br>
Signed-off-by: Leandro Lupori <leandro.lupori@eldorado.org.br>
Tested-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20221028183617.121786-1-leandro.lupori@eldorado.org.br>
[danielhb: use %"PRIu64" to print 'nls']
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Profiling QEMU during a Fedora 35 for PPC64 boot revealed that
6.39% of total time was being spent in helper_insns_inc(), on a
POWER9 machine. To avoid calling this helper every time PMCs had
to be incremented, an inline implementation of PMC5 increment and
check for overflow was developed. This led to a reduction of
about 12% in Fedora's boot time.
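A minimal sketch of the inline fast path, assuming the usual TCG load/store
helpers and the SPR_POWER_PMC5 offset (num_insns is the per-TB instruction
count; the real overflow handling is more involved):
    TCGv t = tcg_temp_new();
    /* PMC5 += number of instructions in this TB, without a helper call */
    tcg_gen_ld_tl(t, cpu_env, offsetof(CPUPPCState, spr[SPR_POWER_PMC5]));
    tcg_gen_addi_tl(t, t, num_insns);
    tcg_gen_st_tl(t, cpu_env, offsetof(CPUPPCState, spr[SPR_POWER_PMC5]));
    /* a check of the counter-negative bit then decides whether to take
     * the slow path and raise a PMU exception */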
Signed-off-by: Leandro Lupori <leandro.lupori@eldorado.org.br>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20221025202424.195984-4-leandro.lupori@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Add 2 new PMC related HFLAGS:
- HFLAGS_PMCJCE - value of MMCR0 PMCjCE bit
- HFLAGS_PMC_OTHER - set if a PMC other than PMC5-6 is enabled
These flags allow further optimization of the PMC5 update code by
allowing frequently tested conditions to be checked at translation
time.
Signed-off-by: Leandro Lupori <leandro.lupori@eldorado.org.br>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20221025202424.195984-3-leandro.lupori@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Move the methods to excp_helper.c and make them static.
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20221021142156.4134411-4-matheus.ferst@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Now that cs->interrupt_request indicates if there is any unmasked
interrupt, checking if the CPU has work to do can be simplified to a
single check that works for all CPU models.
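In other words, the per-model hooks collapse into something like the
following single check (a sketch, not the verbatim patch):
    static bool ppc_cpu_has_work(CPUState *cs)
    {
        return cs->interrupt_request & CPU_INTERRUPT_HARD;
    }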
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20221021142156.4134411-3-matheus.ferst@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
This new method will check if any pending interrupt was unmasked and
then call cpu_interrupt/cpu_reset_interrupt accordingly. Code that
raises/lowers or masks/unmasks interrupts should call this method to
keep CPU_INTERRUPT_HARD coherent with env->pending_interrupts.
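A sketch of the shape of the method, assuming the unmasked-interrupt query
introduced elsewhere in this series:
    void ppc_maybe_interrupt(CPUPPCState *env)
    {
        CPUState *cs = env_cpu(env);

        if (ppc_next_unmasked_interrupt(env)) {
            cpu_interrupt(cs, CPU_INTERRUPT_HARD);
        } else {
            cpu_reset_interrupt(cs, CPU_INTERRUPT_HARD);
        }
    }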
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20221021142156.4134411-2-matheus.ferst@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Export p7_interrupt_powersave and use it in p7_next_unmasked_interrupt.
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20221011204829.1641124-26-matheus.ferst@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Move the interrupt masking logic out of cpu_has_work_POWER7 in a new
method, p7_interrupt_powersave, that only returns an interrupt if it can
wake the processor from power-saving mode.
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20221011204829.1641124-25-matheus.ferst@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Remove the following unused interrupts from the POWER7 interrupt
processing method:
- PPC_INTERRUPT_RESET: only raised for 6xx, 7xx, 970 and POWER5p;
- Hypervisor Virtualization: introduced in Power ISA v3.0;
- Hypervisor Doorbell and Event-Based Branch: introduced in
Power ISA v2.07;
- Critical Input, Watchdog Timer, and Fixed Interval Timer: only defined
for embedded CPUs;
- Doorbell and Critical Doorbell Interrupt: processor does not implement
the Embedded.Processor Control category;
- Programmable Interval Timer: 40x-only;
- PPC_INTERRUPT_THERM: only raised for 970 and POWER5p;
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20221011204829.1641124-23-matheus.ferst@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
The new method is identical to ppc_deliver_interrupt; processor-specific
code will be added/removed in the following patches.
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20221011204829.1641124-22-matheus.ferst@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Remove the following unused interrupts from the POWER7 interrupt masking
method:
- PPC_INTERRUPT_RESET: only raised for 6xx, 7xx, 970 and POWER5p;
- Hypervisor Virtualization: introduced in Power ISA v3.0;
- Hypervisor Doorbell and Event-Based Branch: introduced in
Power ISA v2.07;
- Critical Input, Watchdog Timer, and Fixed Interval Timer: only defined
for embedded CPUs;
- Doorbell and Critical Doorbell Interrupt: processor does not implement
the Embedded.Processor Control category;
- Programmable Interval Timer: 40x-only;
- PPC_INTERRUPT_THERM: only raised for 970 and POWER5p;
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20221011204829.1641124-21-matheus.ferst@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
The new method is identical to ppc_next_unmasked_interrupt_generic;
processor-specific code will be added/removed in the following patches.
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20221011204829.1641124-20-matheus.ferst@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Export p8_interrupt_powersave and use it in p8_next_unmasked_interrupt.
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20221011204829.1641124-19-matheus.ferst@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Move the interrupt masking logic out of cpu_has_work_POWER8 in a new
method, p8_interrupt_powersave, that only returns an interrupt if it can
wake the processor from power-saving mode.
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20221011204829.1641124-18-matheus.ferst@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Remove the following unused interrupts from the POWER8 interrupt
processing method:
- PPC_INTERRUPT_RESET: only raised for 6xx, 7xx, 970 and POWER5p;
- Debug Interrupt: removed in Power ISA v2.07;
- Hypervisor Virtualization: introduced in Power ISA v3.0;
- Critical Input, Watchdog Timer, and Fixed Interval Timer: only defined
for embedded CPUs;
- Critical Doorbell: processor does not implement the
"Embedded.Processor Control" category;
- Programmable Interval Timer: 40x-only;
- PPC_INTERRUPT_THERM: only raised for 970 and POWER5p;
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20221011204829.1641124-16-matheus.ferst@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
The new method is identical to ppc_deliver_interrupt; processor-specific
code will be added/removed in the following patches.
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20221011204829.1641124-15-matheus.ferst@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Remove the following unused interrupts from the POWER8 interrupt masking
method:
- PPC_INTERRUPT_RESET: only raised for 6xx, 7xx, 970, and POWER5p;
- Debug Interrupt: removed in Power ISA v2.07;
- Hypervisor Virtualization: introduced in Power ISA v3.0;
- Critical Input, Watchdog Timer, and Fixed Interval Timer: only defined
for embedded CPUs;
- Critical Doorbell: processor does not implement the "Embedded.Processor
Control" category;
- Programmable Interval Timer: 40x-only;
- PPC_INTERRUPT_THERM: only raised for 970 and POWER5p;
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20221011204829.1641124-14-matheus.ferst@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
The new method is identical to ppc_next_unmasked_interrupt_generic;
processor-specific code will be added/removed in the following patches.
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20221011204829.1641124-13-matheus.ferst@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Export p9_interrupt_powersave and use it in p9_next_unmasked_interrupt.
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20221011204829.1641124-12-matheus.ferst@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Move the interrupt masking logic out of cpu_has_work_POWER9 in a new
method, p9_interrupt_powersave, that only returns an interrupt if it can
wake the processor from power-saving mode.
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20221011204829.1641124-11-matheus.ferst@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Remove the following unused interrupts from the POWER9 interrupt
processing method:
- PPC_INTERRUPT_RESET: only raised for 6xx, 7xx, 970 and POWER5p;
- Debug Interrupt: removed in Power ISA v2.07;
- Critical Input, Watchdog Timer, and Fixed Interval Timer: only defined
for embedded CPUs;
- Critical Doorbell Interrupt: removed in Power ISA v3.0;
- Programmable Interval Timer: 40x-only.
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20221011204829.1641124-9-matheus.ferst@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
The new method is identical to ppc_deliver_interrupt; processor-specific
code will be added/removed in the following patches.
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20221011204829.1641124-8-matheus.ferst@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Remove the following unused interrupts from the POWER9 interrupt masking
method:
- PPC_INTERRUPT_RESET: only raised for 6xx, 7xx, 970 and POWER5p;
- Debug Interrupt: removed in Power ISA v2.07;
- Critical Input, Watchdog Timer, and Fixed Interval Timer: only defined
for embedded CPUs;
- Critical Doorbell Interrupt: removed in Power ISA v3.0;
- Programmable Interval Timer: 40x-only.
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20221011204829.1641124-7-matheus.ferst@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
The new method is identical to ppc_next_unmasked_interrupt_generic;
processor-specific code will be added/removed in the following patches.
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20221011204829.1641124-6-matheus.ferst@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Use ppc_set_irq to raise/clear interrupts to ensure CPU_INTERRUPT_HARD
will be set/reset accordingly.
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20221011204829.1641124-3-matheus.ferst@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
This enum defines the bit positions in env->pending_interrupts for each
interrupt. However, except for the comparison in kvmppc_set_interrupt,
the values are always used as (1 << PPC_INTERRUPT_*). Define them
directly like that to save some clutter. No functional change intended.
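Illustratively (the names are kept, the values below only convey the
pattern rather than the exact final assignments), the change is:
    /* before: bit positions, always used as (1 << PPC_INTERRUPT_FOO) */
    enum { PPC_INTERRUPT_RESET = 0, PPC_INTERRUPT_MCK, /* ... */ };

    /* after: the mask itself, usable as PPC_INTERRUPT_FOO directly */
    enum {
        PPC_INTERRUPT_RESET = 0x00000001,
        PPC_INTERRUPT_MCK   = 0x00000002,
        /* ... */
    };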
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20221011204829.1641124-2-matheus.ferst@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Used gvec to translate XVTSTDCSP and XVTSTDCDP.
xvtstdcsp:
rept loop imm master version prev version current version
25 4000 0 0,206200 0,040730 (-80.2%) 0,040740 (-80.2%)
25 4000 1 0,205120 0,053650 (-73.8%) 0,053510 (-73.9%)
25 4000 3 0,206160 0,058630 (-71.6%) 0,058570 (-71.6%)
25 4000 51 0,217110 0,191490 (-11.8%) 0,192320 (-11.4%)
25 4000 127 0,206160 0,191490 (-7.1%) 0,192640 (-6.6%)
8000 12 0 1,234719 0,418833 (-66.1%) 0,386365 (-68.7%)
8000 12 1 1,232417 1,435979 (+16.5%) 1,462792 (+18.7%)
8000 12 3 1,232760 1,766073 (+43.3%) 1,743990 (+41.5%)
8000 12 51 1,239281 1,319562 (+6.5%) 1,423479 (+14.9%)
8000 12 127 1,231708 1,315760 (+6.8%) 1,426667 (+15.8%)
xvtstdcdp:
rept loop imm master version prev version current version
25 4000 0 0,159930 0,040830 (-74.5%) 0,040610 (-74.6%)
25 4000 1 0,160640 0,053670 (-66.6%) 0,053480 (-66.7%)
25 4000 3 0,160020 0,063030 (-60.6%) 0,062960 (-60.7%)
25 4000 51 0,160410 0,128620 (-19.8%) 0,127470 (-20.5%)
25 4000 127 0,160330 0,127670 (-20.4%) 0,128690 (-19.7%)
8000 12 0 1,190365 0,422146 (-64.5%) 0,388417 (-67.4%)
8000 12 1 1,191292 1,445312 (+21.3%) 1,428698 (+19.9%)
8000 12 3 1,188687 1,980656 (+66.6%) 1,975354 (+66.2%)
8000 12 51 1,191250 1,264500 (+6.1%) 1,355083 (+13.8%)
8000 12 127 1,197313 1,266729 (+5.8%) 1,349156 (+12.7%)
Overall, these are the hardest instructions to measure performance for,
as the gvec implementation is affected by the immediate. Above there are
5 different immediate scenarios and 2 rept/loop combinations. The
immediate scenarios are: all bits 0, so the target register is simply
set to 0; 1 bit set; 2 bits set in a combination the new implementation
can handle with gvec; 4 bits set, which the new implementation cannot
handle with gvec; and all bits set. The rept/loop scenarios are high
loop with low rept (more time spent executing than translating) and
high rept with low loop (more time spent translating than executing).
These comparisons are between the upstream version, a previous similar
implementation and one with cleaner code (this one).
For a comparison with a previous, different implementation:
<20221010191356.83659-13-lucas.araujo@eldorado.org.br>
Signed-off-by: Lucas Mateus Castro (alqotel) <lucas.araujo@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221019125040.48028-13-lucas.araujo@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Moved XSTSTDCSP, XSTSTDCDP and XSTSTDCQP to decodetree and moved some of
their decoding away from the helper: previously DCMX, XB and BF were
calculated in the helper with the help of cpu_env; that part is now done
in the decodetree along with the rest.
xststdcsp:
rept loop master patch
8 12500 1,85393600 1,94683600 (+5.0%)
25 4000 1,78779800 1,92479000 (+7.7%)
100 1000 2,12775000 2,28895500 (+7.6%)
500 200 2,99655300 3,23102900 (+7.8%)
2500 40 6,89082200 7,44827500 (+8.1%)
8000 12 17,50585500 18,95152100 (+8.3%)
xststdcdp:
rept loop master patch
8 12500 1,39043100 1,33539800 (-4.0%)
25 4000 1,35731800 1,37347800 (+1.2%)
100 1000 1,51514800 1,56053000 (+3.0%)
500 200 2,21014400 2,47906000 (+12.2%)
2500 40 5,39488200 6,68766700 (+24.0%)
8000 12 13,98623900 18,17661900 (+30.0%)
xststdcqp:
rept loop master patch
8 12500 1,35123800 1,34455800 (-0.5%)
25 4000 1,36441200 1,36759600 (+0.2%)
100 1000 1,49763500 1,54138400 (+2.9%)
500 200 2,19020200 2,46196400 (+12.4%)
2500 40 5,39265700 6,68147900 (+23.9%)
8000 12 14,04163600 18,19669600 (+29.6%)
As some values are now decoded outside the helper and passed to it as
arguments, the number of helper arguments increased, and so did the
number of TCG ops needed to load them. I suspect that's why there is a
slow-down in the tests with a high REPT but low LOOP.
Signed-off-by: Lucas Mateus Castro (alqotel) <lucas.araujo@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221019125040.48028-12-lucas.araujo@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Moved XVTSTDCSP and XVTSTDCDP to decodetree and restructured the helper
to be simpler and do all decoding in the decodetree (so XB, XT and DCMX
are all calculated outside the helper).
Note: the tests for this one are slightly different; they are the sum of
these instructions over all possible immediates, and those instructions
are repeated 10 times.
xvtstdcsp:
rept loop master patch
8 12500 2,76402100 2,70699100 (-2.1%)
25 4000 2,64867100 2,67884100 (+1.1%)
100 1000 2,73806300 2,78701000 (+1.8%)
500 200 3,44666500 3,61027600 (+4.7%)
2500 40 5,85790200 6,47475500 (+10.5%)
8000 12 15,22102100 17,46062900 (+14.7%)
xvtstdcdp:
rept loop master patch
8 12500 2,11818000 1,61065300 (-24.0%)
25 4000 2,04573400 1,60132200 (-21.7%)
100 1000 2,13834100 1,69988100 (-20.5%)
500 200 2,73977000 2,48631700 (-9.3%)
2500 40 5,05067000 5,25914100 (+4.1%)
8000 12 14,60507800 15,93704900 (+9.1%)
Signed-off-by: Lucas Mateus Castro (alqotel) <lucas.araujo@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221019125040.48028-11-lucas.araujo@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Moved VPRTYBW and VPRTYBD to use gvec, and moved both of them plus
VPRTYBQ to decodetree. VPRTYBW and VPRTYBD now also use .fni4 and .fni8,
respectively.
vprtybw:
rept loop master patch
8 12500 0,01198900 0,00703100 (-41.4%)
25 4000 0,01070100 0,00571400 (-46.6%)
100 1000 0,01123300 0,00678200 (-39.6%)
500 200 0,01601500 0,01535600 (-4.1%)
2500 40 0,03872900 0,05562100 (43.6%)
8000 12 0,10047000 0,16643000 (65.7%)
vprtybd:
rept loop master patch
8 12500 0,00757700 0,00788100 (4.0%)
25 4000 0,00652500 0,00669600 (2.6%)
100 1000 0,00714400 0,00825400 (15.5%)
500 200 0,01211000 0,01903700 (57.2%)
2500 40 0,03483800 0,07021200 (101.5%)
8000 12 0,09591800 0,21036200 (119.3%)
vprtybq:
rept loop master patch
8 12500 0,00675600 0,00667200 (-1.2%)
25 4000 0,00619400 0,00643200 (3.8%)
100 1000 0,00707100 0,00751100 (6.2%)
500 200 0,01199300 0,01342000 (11.9%)
2500 40 0,03490900 0,04092900 (17.2%)
8000 12 0,09588200 0,11465100 (19.6%)
I wasn't expecting such a performance loss in both VPRTYBD and VPRTYBQ,
and I'm not sure it's worth moving those instructions. Comparing the
assembly of the helper with the TCG ops, they are pretty similar, so
I'm not sure why vprtybd took so much more time.
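For reference, the per-word (.fni4) expansion boils down to folding the
least-significant bit of each byte of the word into bit 0, roughly like
this sketch (the function name is assumed):
    static void gen_vprtybw_i32(TCGv_i32 t, TCGv_i32 b)
    {
        TCGv_i32 tmp = tcg_temp_new_i32();

        /* XOR the lsb of all four bytes of the word into bit 0 */
        tcg_gen_shri_i32(tmp, b, 16);
        tcg_gen_xor_i32(tmp, tmp, b);
        tcg_gen_shri_i32(t, tmp, 8);
        tcg_gen_xor_i32(t, t, tmp);
        tcg_gen_andi_i32(t, t, 1);
        tcg_temp_free_i32(tmp);
    }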
Signed-off-by: Lucas Mateus Castro (alqotel) <lucas.araujo@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221019125040.48028-6-lucas.araujo@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
This patch moves VADDCUW and VSUBCUW to decodetree with gvec, using an
implementation based on the helper, with the main difference being that
the -1 (aka all bits set to 1) result returned by the compare when true
is changed to +1. It also implements a .fni4 version of those
instructions and drops the helper.
vaddcuw:
rept loop master patch
8 12500 0,01008200 0,00612400 (-39.3%)
25 4000 0,01091500 0,00471600 (-56.8%)
100 1000 0,01332500 0,00593700 (-55.4%)
500 200 0,01998500 0,01275700 (-36.2%)
2500 40 0,04704300 0,04364300 (-7.2%)
8000 12 0,10748200 0,11241000 (+4.6%)
vsubcuw:
rept loop master patch
8 12500 0,01226200 0,00571600 (-53.4%)
25 4000 0,01493500 0,00462100 (-69.1%)
100 1000 0,01522700 0,00455100 (-70.1%)
500 200 0,02384600 0,01133500 (-52.5%)
2500 40 0,04935200 0,03178100 (-35.6%)
8000 12 0,09039900 0,09440600 (+4.4%)
Overall there was a gain in performance, but the generated TCG op count
was still slightly bigger in the new version (it went from 4 to 5 ops).
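As an illustration of the .fni4 form for VADDCUW (the subtract case is
symmetrical), the carry-out test is where the +1 result mentioned above
comes from; the names here are assumptions:
    static void gen_vaddcuw_i32(TCGv_i32 t, TCGv_i32 a, TCGv_i32 b)
    {
        TCGv_i32 sum = tcg_temp_new_i32();

        tcg_gen_add_i32(sum, a, b);
        /* carry out of the 32-bit add: setcond yields +1 when true */
        tcg_gen_setcond_i32(TCG_COND_LTU, t, sum, a);
        tcg_temp_free_i32(sum);
    }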
Signed-off-by: Lucas Mateus Castro (alqotel) <lucas.araujo@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221019125040.48028-4-lucas.araujo@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
This patch moves VMLADDUHM to decodetree and creates a gvec
implementation using mul_vec and add_vec.
rept loop master patch
8 12500 0,01810500 0,00903100 (-50.1%)
25 4000 0,01739400 0,00747700 (-57.0%)
100 1000 0,01843600 0,00901400 (-51.1%)
500 200 0,02574600 0,01971000 (-23.4%)
2500 40 0,05921600 0,07121800 (+20.3%)
8000 12 0,15326700 0,21725200 (+41.7%)
I think the significant difference in performance when REPT is low and
LOOP is high is due to the new implementation having a higher
translation time: when using a helper only 5 TCG ops are needed, but
with the patch a total of 10 TCG ops are needed (the Power host lacks a
direct mul_vec equivalent, so this operation is implemented with the
help of 5 others: vmuleu, vmulou, vmrgh, vmrgl and vpkum).
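The vector expansion itself is tiny; the cost is in what the host backend
has to emit for mul_vec. A sketch of the .fniv-style expansion (names are
assumed, the element size would be MO_16):
    static void gen_vmladduhm_vec(unsigned vece, TCGv_vec t, TCGv_vec a,
                                  TCGv_vec b, TCGv_vec c)
    {
        /* t = (a * b + c) modulo the element size, per halfword */
        tcg_gen_mul_vec(vece, t, a, b);
        tcg_gen_add_vec(vece, t, t, c);
    }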
Signed-off-by: Lucas Mateus Castro (alqotel) <lucas.araujo@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221019125040.48028-2-lucas.araujo@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
The macro is missing a '{' after the if condition. Any use of REQUIRE_HV
would cause a compilation error.
Fixes: fc34e81acd ("target/ppc: add macros to check privilege level")
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20221006200654.725390-4-matheus.ferst@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
This instruction was added in Power ISA v3.0; using PPC2_PRCNTL makes it
available for older processors, like the e5500 and e6500.
Fixes: 7af1e7b022 ("target/ppc: add support for hypervisor doorbells on book3s CPUs")
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20221006200654.725390-3-matheus.ferst@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
On Power ISA v2.07, the category for these instructions became
"Embedded.Processor Control" or "Book S".
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20221006200654.725390-2-matheus.ferst@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
We had only been reporting the stage2 page size. This causes
problems if stage1 is using a larger page size (16k, 2M, etc),
but stage2 is using a smaller page size, because cputlb does
not set large_page_{addr,mask} properly.
Fix by using the max of the two page sizes.
Reported-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221024051851.3074715-15-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Perform the atomic update for hardware management of the dirty bit.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221024051851.3074715-14-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Perform the atomic update for hardware management of the access flag.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221024051851.3074715-13-richard.henderson@linaro.org
[PMM: Fix accidental PROT_WRITE to PAGE_WRITE; add missing
main-loop.h include]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Replace some gotos with some nested if statements.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20221024051851.3074715-12-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Both GP and DBM are in the upper attribute block.
Extend the computation of attrs to include them,
then simplify the setting of guarded.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20221024051851.3074715-11-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Leave the upper and lower attributes in the place they originate
from in the descriptor. Shifting them around is confusing, since
one cannot read the bit numbers out of the manual. Also, new
attributes have been added which would alter the shifts.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20221024051851.3074715-10-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Always overriding fi->type was incorrect, as we would not properly
propagate the fault type from S1_ptw_translate, or arm_ldq_ptw.
Simplify things by providing a new label for a translation fault.
For other faults, store into fi directly.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20221024051851.3074715-9-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The unconditional loop was used both to iterate over levels
and to control parsing of attributes. Use an explicit goto
in both cases.
While this appears less clean for iterating over levels, we
will need to jump back into the middle of this loop for
atomic updates, which is even uglier.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221024051851.3074715-8-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This fault type is to be used with FEAT_HAFDBS when
the guest enables hw updates, but places the tables
in memory where atomic updates are unsupported.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20221024051851.3074715-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Separate S1 translation from the actual lookup.
Will enable lpae hardware updates.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221024051851.3074715-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20221024051851.3074715-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The MMFR1 field may indicate support for hardware update of
access flag alone, or access flag and dirty bit.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221024051851.3074715-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Hoist the computation of the mmu_idx for the ptw up to
get_phys_addr_with_struct and get_phys_addr_twostage.
This removes the duplicate check for stage2 disabled
from the middle of the walk, performing it only once.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20221024051851.3074715-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reduce the amount of typing required for this check.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221024051851.3074715-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
An exception targeting EL2 from a lower EL is actually maskable when
HCR_E2H and HCR_TGE are both set. This applies to both the Secure and
Non-secure security states.
We can remove the conditions that try to suppress masking of
interrupts when we are Secure and the exception targets EL2 and
Secure EL2 is disabled. This is OK because in that situation
arm_phys_excp_target_el() will never return 2 as the target EL. The
'not if secure' check in this function was originally written before
arm_hcr_el2_eff(), and back then the target EL returned by
arm_phys_excp_target_el() could be 2 even if we were in Secure
EL0/EL1; but it is no longer needed.
Signed-off-by: Ake Koomsin <ake@igel.co.jp>
Message-id: 20221017092432.546881-1-ake@igel.co.jp
[PMM: Add commit message paragraph explaining why it's OK to
remove the checks on secure and SCR_EEL2]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
FEAT_E0PD adds new bits E0PD0 and E0PD1 to TCR_EL1, which allow the
OS to forbid EL0 access to half of the address space. Since this is
an EL0-specific variation on the existing TCR_ELx.{EPD0,EPD1}, we can
implement it entirely in aa64_va_parameters().
This requires moving the existing regime_is_user() to internals.h
so that the code in helper.c can get at it.
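The gist of the change in aa64_va_parameters() is a sketch like the
following; the surrounding variable names are assumptions, and only the
bit positions (E0PD0 is TCR_EL1 bit 55, E0PD1 bit 56) come from the
architecture:
    /* Inside aa64_va_parameters(), once the FEAT_E0PD isar check has
     * passed (omitted here). */
    if (regime_is_user(env, mmu_idx)) {
        if (select ? extract64(tcr, 56, 1) : extract64(tcr, 55, 1)) {
            epd = true;   /* treat this half as if EPDx were set */
        }
    }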
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20221021160131.3531787-1-peter.maydell@linaro.org
The element size is encoded in the M3 field, not in the M4
field.
Fixes: be6324c6b7 ("s390x/tcg: Implement VECTOR ISOLATE STRING")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1248
Message-Id: <20221012182755.1014853-3-thuth@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Sometimes dumping a guest from the outside is the only way to get the
data that is needed. This can be the case if a dumping mechanism like
KDUMP hasn't been configured or data needs to be fetched at a specific
point. Dumping a protected guest from the outside without help from
fw/hw doesn't yield sufficient data to be useful. Hence we now
introduce PV dump support.
The PV dump support works by integrating the firmware into the dump
process. New Ultravisor calls are used to initiate the dump process,
dump cpu data, dump memory state and lastly complete the dump process.
The UV calls are exposed by KVM via the new KVM_PV_DUMP command and
its subcommands. The guest's data is fully encrypted and can only be
decrypted by the entity that owns the customer communication key for
the dumped guest. Also dumping needs to be allowed via a flag in the
SE header.
On the QEMU side of things we store the PV dump data in the newly
introduced architecture ELF sections (storage state and completion
data) and the cpu notes (for cpu dump data).
Users can use the zgetdump tool to convert the encrypted QEMU dump to an
unencrypted one.
Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
Reviewed-by: Steffen Eiden <seiden@linux.ibm.com>
Message-Id: <20221017083822.43118-11-frankja@linux.ibm.com>
Since the only user, Arm MTE, always requires allocation,
merge the get and alloc functions to always produce a
non-null result. Also assume that the user has already
checked page validity.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Use a constant target data allocation size for all pages.
This will be necessary to reduce overhead of page tracking.
Since TARGET_PAGE_DATA_SIZE is now required, we can use this
to omit data tracking for targets that don't require it.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
These memory allocation functions return void *, and casting to
another pointer type is useless clutter. Drop these casts.
If you really want another pointer type, consider g_new().
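For example (illustrative only):
    Foo *a = (Foo *) g_malloc(sizeof(Foo));   /* before: useless cast */
    Foo *b = g_malloc(sizeof(Foo));           /* after: no cast needed */
    Foo *c = g_new(Foo, 1);                   /* or spell out the type */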
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Message-Id: <20220923120025.448759-3-armbru@redhat.com>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
The only issue with FMA instructions is that there are _a lot_ of them (30
opcodes, each of which comes in up to 4 versions depending on VEX.W and
VEX.L; a total of 96 possibilities). However, they can be implemented with
only 6 helpers, two for scalar operations and four for packed operations.
(Scalar versions do not do any merging; they only affect the bottom 32
or 64 bits of the output operand. Therefore, there is no separate XMM
and YMM version of the scalar helpers).
First, we can reduce the number of helpers to one third by passing four
operands (one output and three inputs); the reordering of which operands
go to the multiply and which go to the add is done in emit.c.
Second, the different instructions also dispatch to the same softfloat
function, so the flags for float32_muladd and float64_muladd are passed
in the helper as int arguments, with a little extra complication to
handle FMADDSUB and FMSUBADD.
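At the core, each element of every FMA variant reduces to one softfloat
call; a sketch of the single-precision case (the wrapper name is
illustrative, and the flags would be built from float_muladd_negate_c /
float_muladd_negate_product as needed for FMSUB, FNMADD, etc.):
    static inline float32 fma32_round(CPUX86State *env, float32 a,
                                      float32 b, float32 c, int flags)
    {
        /* flags select the FMADD/FMSUB/FNMADD/FNMSUB behaviour */
        return float32_muladd(a, b, c, flags, &env->sse_status);
    }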
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
F16C only consists of two instructions, which are a bit peculiar
nevertheless.
First, they access only the low half of a YMM or XMM register for the
packed-half operand; the exact size still depends on the VEX.L flag.
This is similar to the existing avx_movx flag, but not exactly because
avx_movx is hardcoded to affect operand 2. To this end I added a "ph"
format name; it's possible to reuse this approach for the VPMOVSX and
VPMOVZX instructions, though that would also require adding two more
formats for the low-quarter and low-eighth of an operand.
Second, VCVTPS2PH is somewhat weird because it *stores* the result of
the instruction into memory rather than loading it.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
VROUND, FSTCW and STMXCSR all have to perform the same conversion from
x86 rounding modes to softfloat constants. Since the ISA is consistent
on the meaning of the two-bit rounding modes, extract the common code
into a wrapper for set_float_rounding_mode.
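A sketch of such a wrapper (the function name is an assumption; the
two-bit encoding is the same in the VROUND immediate, the FPU control
word and MXCSR):
    static void set_x86_rounding_mode(unsigned mode, float_status *s)
    {
        static const FloatRoundMode map[4] = {
            float_round_nearest_even,  /* 00: round to nearest (even) */
            float_round_down,          /* 01: round toward -inf */
            float_round_up,            /* 10: round toward +inf */
            float_round_to_zero,       /* 11: truncate */
        };
        set_float_rounding_mode(map[mode & 3], s);
    }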
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
If the destination is a memory operand, op->n is -1. Going through
tcg_gen_gvec_dup_imm path is both useless (the value has been stored
by the gen_* function already) and wrong because of the out-of-bounds
access.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221020030641.2066807-10-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
In preparation for TARGET_TB_PCREL, reduce reliance on absolute values.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221020030641.2066807-9-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
In preparation for TARGET_TB_PCREL, reduce reliance on absolute values.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221020030641.2066807-8-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
In preparation for TARGET_TB_PCREL, reduce reliance on absolute values.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221020030641.2066807-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
In preparation for TARGET_TB_PCREL, reduce reliance on absolute values.
Since we always pass dc->pc_curr, fold the arithmetic to zero displacement.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221020030641.2066807-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
In preparation for TARGET_TB_PCREL, reduce reliance on absolute values.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221020030641.2066807-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
In preparation for TARGET_TB_PCREL, reduce reliance on
absolute values by passing in pc difference.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221020030641.2066807-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
In preparation for TARGET_TB_PCREL, reduce reliance on absolute values.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221020030641.2066807-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
A simple helper to retrieve the length of the current insn.
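Something along these lines (a sketch; the field names are the ones the
Arm translator normally uses):
    static inline int curr_insn_len(DisasContext *s)
    {
        return s->base.pc_next - s->pc_curr;
    }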
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221020030641.2066807-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The return type of the functions is already bool, but in a few
instances we used an integer type with the return statement.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221011031911.2408754-13-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221011031911.2408754-12-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
So far, limit the change to S1_ptw_translate, arm_ldl_ptw, and
arm_ldq_ptw. Use probe_access_full to find the host address,
and if one is found use a host load. If the probe fails, we've got our
fault info already. On the off chance that page tables are not
in RAM, continue to use the address_space_ld* functions.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221011031911.2408754-11-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Hoist this test out of arm_ld[lq]_ptw into S1_ptw_translate.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221011031911.2408754-10-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Before using softmmu page tables for the ptw, plumb down
a debug parameter so that we can query page table entries
from gdbstub without modifying cpu state.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221011031911.2408754-9-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Consolidate most of the inputs and outputs of S1_ptw_translate
into a single structure. Plumb this through arm_ld*_ptw from
the controlling get_phys_addr_* routine.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20221011031911.2408754-8-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Compare only the VMID field when considering whether we need to flush.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20221011031911.2408754-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We had been marking this ARM_MMU_IDX_NOTLB; move it to a real tlb.
Flush the tlb when invalidating stage 1+2 translations. Re-use
alle1_tlbmask() for other instances of EL1&0 + Stage2.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20221011031911.2408754-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Not yet used, but add mmu indexes for 1-1 mapping
to physical addresses.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221011031911.2408754-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Add a field to TARGET_PAGE_ENTRY_EXTRA to hold the guarded bit.
In is_guarded_page, use probe_access_full instead of just guessing
that the tlb entry is still present. This also handles the FIXME about
executing from device memory.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221011031911.2408754-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The CPUTLBEntryFull structure now stores the original pte attributes, as
well as the physical address. Therefore, we no longer need a separate
bit in MemTxAttrs, nor do we need to walk the tree of memory regions.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221011031911.2408754-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Copy attrs and shareability into the TLB. This will eventually
be used by S1_ptw_translate to report stage1 translation failures,
and by do_ats_write to fill in PAR_EL1.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221011031911.2408754-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
QEMU doesn't model micro-architectural details, which includes most
chip errata. The ARM_ERRATA_798181 workaround in the Linux
kernel (see erratum_a15_798181_init) currently detects QEMU's
cortex-a15 as broken and triggers additional expensive TLB flushes as
a result.
Change the MIDR to report what the latest silicon would (r4p0). We
explicitly set the IMPDEF revidr bits to 0 because we don't need to
set anything other than the silicon revision to indicate these flushes
are not needed. This cuts about 5s from my Debian kernel boot with the
latest 6.0rc1 kernel (29s->24s).
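Concretely this amounts to a couple of assignments in the cortex-a15 init
function; the exact constant below is illustrative of the encoding
(implementer 0x41, variant r4, part 0xc0f, revision p0):
    cpu->midr = 0x414fc0f0;   /* Cortex-A15 r4p0 */
    cpu->revidr = 0;          /* no IMPDEF erratum-fix bits needed */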
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Anders Roxell <anders.roxell@linaro.org>
Message-id: 20221010153225.506394-1-alex.bennee@linaro.org
Cc: Arnd Bergmann <arnd@linaro.org>
Cc: Anders Roxell <anders.roxell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Anders Roxell <anders.roxell@linaro.org>
Message-Id: <20220906172257.2776521-1-alex.bennee@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This queue contains improvements in the e500 and ppc4xx boards, changes
in the maintainership of the project, a new QMP/HMP command and bug
fixes:
- Cedric is stepping back from qemu-ppc maintainership;
- ppc4xx_sdram: QOMification and clean ups;
- e500: add new types of flash and clean ups;
- QMP/HMP: introduce dumpdtb command;
- spapr_pci, booke doorbell interrupt and xvcmp* bit fixes;
The 'dumpdtb' implementation is also making changes to RISC-V files that
were acked by Alistair Francis and are being included in this queue.
-----BEGIN PGP SIGNATURE-----
iHUEABYKAB0WIQQX6/+ZI9AYAK8oOBk82cqW3gMxZAUCY02qEgAKCRA82cqW3gMx
ZIadAQCYY9f+NFrSJBm3z4JjUaP+GmbgEjibjZW05diyKwbqzQEAjE1KXFCcd40D
3Brs2Dm4YruaJCwb68vswVQAYteXaQ8=
=hl94
-----END PGP SIGNATURE-----
Merge tag 'pull-ppc-20221017' of https://gitlab.com/danielhb/qemu into staging
ppc patch queue for 2022-10-18:
This queue contains improvements in the e500 and ppc4xx boards, changes
in the maintainership of the project, a new QMP/HMP command and bug
fixes:
- Cedric is stepping back from qemu-ppc maintainership;
- ppc4xx_sdram: QOMification and clean ups;
- e500: add new types of flash and clean ups;
- QMP/HMP: introduce dumpdtb command;
- spapr_pci, booke doorbell interrupt and xvcmp* bit fixes;
The 'dumpdtb' implementation is also making changes to RISC-V files that
were acked by Alistair Francis and are being included in this queue.
# -----BEGIN PGP SIGNATURE-----
#
# iHUEABYKAB0WIQQX6/+ZI9AYAK8oOBk82cqW3gMxZAUCY02qEgAKCRA82cqW3gMx
# ZIadAQCYY9f+NFrSJBm3z4JjUaP+GmbgEjibjZW05diyKwbqzQEAjE1KXFCcd40D
# 3Brs2Dm4YruaJCwb68vswVQAYteXaQ8=
# =hl94
# -----END PGP SIGNATURE-----
# gpg: Signature made Mon 17 Oct 2022 15:16:34 EDT
# gpg: using EDDSA key 17EBFF9923D01800AF2838193CD9CA96DE033164
# gpg: Good signature from "Daniel Henrique Barboza <danielhb413@gmail.com>" [unknown]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 17EB FF99 23D0 1800 AF28 3819 3CD9 CA96 DE03 3164
* tag 'pull-ppc-20221017' of https://gitlab.com/danielhb/qemu: (38 commits)
hw/riscv: set machine->fdt in spike_board_init()
hw/riscv: set machine->fdt in sifive_u_machine_init()
hw/ppc: set machine->fdt in spapr machine
hw/ppc: set machine->fdt in pnv_reset()
hw/ppc: set machine->fdt in pegasos2_machine_reset()
hw/ppc: set machine->fdt in xilinx_load_device_tree()
hw/ppc: set machine->fdt in sam460ex_load_device_tree()
hw/ppc: set machine->fdt in bamboo_load_device_tree()
hw/nios2: set machine->fdt in nios2_load_dtb()
qmp/hmp, device_tree.c: introduce dumpdtb
hw/ppc/spapr_pci.c: Use device_cold_reset() rather than device_legacy_reset()
target/ppc: Fix xvcmp* clearing FI bit
hw/ppc/e500: Remove if statement which is now always true
hw/ppc/mpc8544ds: Add platform bus
hw/ppc/mpc8544ds: Rename wrongly named method
hw/ppc/e500: Reduce usage of sysbus API
docs/system/ppc/ppce500: Add heading for networking chapter
hw/gpio/meson: Introduce dedicated config switch for hw/gpio/mpc8xxx
hw/ppc/meson: Allow e500 boards to be enabled separately
ppc440_uc.c: Remove unneeded parenthesis
...
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
With all SSE (and AVX!) instructions now implemented in disas_insn_new,
it's possible to remove gen_sse, as well as the helpers for instructions
that now use gvec.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This adds another kind of weirdness when you thought you had seen it all:
an opcode byte that comes _after_ the address, not before. It's not
worth adding a new X86_SPECIAL_* constant for it, but it's actually
not unlike VCMP; so, forgive me for exploiting the similarity and just
deciding to dispatch to the right gen_helper_* call in a single code
generation function.
In fact, the old decoder had a bug where s->rip_offset should have
been set to 1 for 3DNow! instructions, and it's fixed now.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Include AVX, AVX2 and VAES in the guest cpuid features supported by TCG.
Signed-off-by: Paul Brook <paul@nowt.org>
Message-Id: <20220424220204.2493824-40-paul@nowt.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
These are exactly the same as the non-VEX version, but one has to be careful
that only VEX.L=0 is allowed.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Here the code is a bit uglier due to the truncation and extension
of registers to and from 32-bit. There is also a mistake in the
manual with respect to the size of the memory operand of CVTPS2PI
and CVTTPS2PI, reported by Ricky Zhou.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
These are mostly moves, and yet are a total pain. The main issue
is that:
1) some instructions are selected by mod==11 (register operand)
vs. mod=00/01/10 (memory operand)
2) stores to memory are two-operand operations, while the 3-register
and load-from-memory versions operate on the entire contents of the
destination; this makes it easier to separate the gen_* function for
the store case
3) it's inefficient to load into xmm_T0 only to move the value out
again, so the gen_* function for the load case is separated too
The manual also has various mistakes in the operands here, for example
the store case of MOVHPS operates on a 128-bit source (albeit discarding
the bottom 64 bits) and therefore should be Mq,Vdq rather than Mq,Vq.
Likewise for the destination and source of MOVHLPS.
VUNPCK?PS and VUNPCK?PD are the same as VUNPCK?DQ and VUNPCK?QDQ,
but encoded as prefixes rather than separate operands. The helpers
can be reused however.
For MOVSLDUP, MOVSHDUP and MOVDDUP I chose to reimplement them as
helpers. I named the helper for MOVDDUP "movdldup" in preparation
for possible future introduction of MOVDHDUP and to clarify the
similarity with MOVSLDUP.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Nothing special going on here, for once.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
There are several special cases here:
1) extending moves have different widths for the helpers vs. for the
memory loads, and the width for memory loads depends on VEX.L too.
This is represented by X86_SPECIAL_AVXExtMov.
2) some instructions, such as variable-width shifts, select the vector element
size via REX.W.
3) VSIB instructions (VGATHERxPy, VPGATHERxy) are also part of this group,
and they have (among other things) two output operands.
4) the macros for 4-operand blends (which are under 0x0f 0x3a) have to be
extended to support 2-operand blends. The 2-operand variant actually
came a few years earlier, but it is clearer to implement them in the
opposite order.
X86_TYPE_WM, introduced earlier for unaligned loads, is reused for helpers
that accept a Reg* but have an M argument.
These three-byte opcodes also include AVX new instructions, for which
the helpers were originally implemented by Paul Brook <paul@nowt.org>.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
As pmovmskb is used by strlen et al, this is the third
highest overhead SSE operation, at 0.8%.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
[Reorganize to generate code for any vector size. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The more complicated operations here are insertions and extractions.
Otherwise, there are just more entries than usual because the PS/PD/SS/SD
variations are encoded in the opcode rather than in the prefixes.
These three-byte opcodes also include AVX new instructions, whose
implementation in the helpers was originally done by Paul Brook
<paul@nowt.org>.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Three-byte opcodes from the 0F3Ah area all have an immediate byte which
is usually unsigned. Clarify in the helper code that it is unsigned;
the new decoder treats immediates as signed by default, and seeing
an intN_t in the prototype might give the wrong impression that one
can use decode->immediate directly.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The more complicated ones here are d6-d7, e6-e7, f7. The others
are trivial.
For LDDQU, using gen_load_sse directly might corrupt the register if
the second part of the load fails. Therefore, add a custom X86_TYPE_WM
value; like X86_TYPE_W it does call gen_load(), but it also rejects a
value of 11 in the ModRM field like X86_TYPE_M.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This includes shifts by immediate, which use bits 3-5 of the ModRM byte
as an opcode extension. With the exception of 128-bit shifts, they are
implemented using gvec.
This also covers VZEROALL and VZEROUPPER, which use the same opcode
as EMMS. If we were wanting to optimize out gen_clear_ymmh then this
would be one of the starting points. The implementation of the VZEROALL
and VZEROUPPER helpers is by Paul Brook.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
These are a mixed batch, including the first two horizontal
(66 and F2 only) operations, more moves, and SSE4a extract/insert.
Because SSE4a is pretty rare, I chose to leave the helpers as they are,
but it is possible to unify them by loading index and length from the
source XMM register and generating deposit or extract TCG ops.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
These are mostly floating-point SSE operations. The odd ones out
are MOVMSK and CVTxx2yy, the others are straightforward.
Unary operations are a bit special in AVX because they have 2 operands
for PD/PS operands (VEX.vvvv must be 1111b), and 3 operands for SD/SS.
They are handled using X86_OP_GROUP3 for compactness.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
These are more simple integer instructions present in both MMX and SSE/AVX,
with no holes that were later occupied by newer instructions.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
These are both MMX and SSE/AVX instructions, except for vmovdqu. In both
cases the inputs and output are in s->ptr{0,1,2}, so the only difference
between MMX, SSE, and AVX is which helper to call.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The new implementation of SSE will cover AVX from the get go, because
all the work for the helper functions is already done. We just need to
build them.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The new implementation of SSE will cover AVX from the get go, so include
the 24 extra comparison operators that are only available with the VEX
prefix.
Based on a patch by Paul Brook <paul@nowt.org>.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Compared to Paul's implementation, the new decoder will use a different approach
to implement AVX's merging of dst with src1 on scalar operations. Adjust the
old SSE decoder to be compatible with new-style helpers.
The affected instructions are CVTSx2Sx, ROUNDSx, RSQRTSx, SQRTSx, RCPSx.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Compared to Paul's implementation, the new decoder will use a different approach
to implement AVX's merging of dst with src1 on scalar operations. Adjust the
helpers to provide this functionality.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Add to the helpers all the operands that are needed to implement AVX.
Extracted from a patch by Paul Brook <paul@nowt.org>.
Message-Id: <20220424220204.2493824-26-paul@nowt.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Adjust all #ifdefs to match the ones in ops_sse.h.
Signed-off-by: Paul Brook <paul@nowt.org>
Message-Id: <20220424220204.2493824-23-paul@nowt.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Because these are the only VEX instructions that QEMU supports, the
new decoder is entered on the first byte of a valid VEX prefix, and VEX
decoding only needs to be done in decode-new.c.inc.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Many SSE and AVX instructions are only valid with specific prefixes
(none, 66, F3, F2). Introduce a direct way to encode this in the
decoding table to avoid using decode groups too much.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Add a new hflag bit to determine whether AVX instructions are allowed.
Signed-off-by: Paul Brook <paul@nowt.org>
Message-Id: <20220424220204.2493824-4-paul@nowt.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
TCG will shortly implement VAES instructions, so add the relevant feature
word to the DisasContext.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Add generic code generation that takes care of preparing operands
around calls to decode.e.gen in a table-driven manner, so that ALU
operations need not take care of that.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The new decoder is based on three principles:
- use mostly table-driven decoding, using tables derived as much as possible
from the Intel manual. Centralizing the decoding of the operands makes it
more homogeneous, for example all immediates are signed. All modrm
handling is in one function, and can be shared between SSE and ALU
instructions (including XMM<->GPR instructions). The SSE/AVX decoder
will also not have duplicated code between the 0F, 0F38 and 0F3A tables.
- keep the code as "non-branchy" as possible. Generally, the code for
the new decoder is more verbose, but the control flow is simpler.
Conditionals are not nested and have small bodies. All instruction
groups are resolved even before operands are decoded, and code
generation is separated as much as possible within small functions
that only handle one instruction each.
- keep address generation and (for ALU operands) memory loads and writeback
as much in common code as possible. All ALU operations for example
are implemented as T0=f(T0,T1). For non-ALU instructions,
read-modify-write memory operations are rare, but registers do not
have TCGv equivalents: therefore, the common logic sets up pointer
temporaries with the operands, while load and writeback are handled
by gvec or by helpers.
These principles make future code review and extensibility simpler, at
the cost of having a relatively large amount of code in the form of this
patch. Even EVEX should not be _too_ hard to implement (it's just a crazy
large amount of possibilities).
This patch introduces the main decoder flow, and integrates the old
decoder with the new one. The old decoder takes care of parsing
prefixes and then optionally drops to the new one. The changes to the
old decoder are minimal and allow it to be replaced incrementally with
the new one.
There is a debugging mechanism through a "LIMIT" environment variable.
In user-mode emulation, the variable is the number of instructions
decoded by the new decoder before permanently switching to the old one.
In system emulation, the variable is the highest opcode that is decoded
by the new decoder (this is less friendly, but it's the best that can
be done without requiring deterministic execution).
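As a rough illustration of the table-driven, common-operand idea described above (a sketch only; the struct and entries below are made up, not the actual decode-new.c.inc types):
```
#include <stdint.h>

/* One table slot per opcode; operand decode, memory load and writeback are
 * done once in common code, so ALU entries only provide T0 = f(T0, T1). */
typedef struct DecodeEntry {
    const char *name;
    uint64_t (*gen)(uint64_t t0, uint64_t t1);
} DecodeEntry;

static uint64_t gen_ADD(uint64_t t0, uint64_t t1) { return t0 + t1; }
static uint64_t gen_XOR(uint64_t t0, uint64_t t1) { return t0 ^ t1; }

static const DecodeEntry opcodes[256] = {
    [0x01] = { "add", gen_ADD },
    [0x31] = { "xor", gen_XOR },
};
```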
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
REX.W can be used even in 32-bit mode by AVX instructions, where it is retroactively
renamed to VEX.W. Make the field available even in 32-bit mode but keep the REX_W()
macro as it was; this way, the handling of dflag does not use it by mistake, and
the AVX code more clearly points at the special VEX behavior of the bit.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
ldq takes a pointer to the first byte to load the 64-bit word in;
ldo takes a pointer to the first byte of the ZMMReg. Make them
consistent, which will be useful in the new SSE decoder's
load/writeback routines.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This will be used for emission and endian adjustments of gvec operations.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220822223722.1697758-2-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Rather than recurse directly on mmu_translate, go through the
same softmmu lookup that we did for the page table walk.
This centralizes all knowledge of MMU_NESTED_IDX, with respect
to setup of TranslationParams, to get_physical_address.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221002172956.265735-10-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Use probe_access_full in order to resolve to a host address,
which then lets us use a host cmpxchg to update the pte.
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/279
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221002172956.265735-9-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
We don't need one variable set per translation level,
which requires copying into pte/pte_addr for huge pages.
Standardize on pte/pte_addr for all levels.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221002172956.265735-8-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Use MMU_NESTED_IDX for each memory access, rather than
just a single translation to physical. Adjust svm_save_seg
and svm_load_seg to pass in mmu_idx.
This removes the last use of get_hphys so remove it.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221002172956.265735-7-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
These new mmu indexes will be helpful for improving
paging and code throughout the target.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221002172956.265735-6-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Replace with PTE_HPHYS for the page table walk, and a direct call
to mmu_translate for the final stage2 translation. Hoist the check
for HF2_NPT_MASK out to get_physical_address, which avoids the
recursive call when stage2 is disabled.
We can now return all the way out to x86_cpu_tlb_fill before raising
an exception, which means probe works.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221002172956.265735-5-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Create TranslateParams for inputs, TranslateResults for successful
outputs, and TranslateFault for error outputs; return true on success.
Move stage1 error paths from handle_mmu_fault to x86_cpu_tlb_fill;
reorg the rest of handle_mmu_fault into get_physical_address.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221002172956.265735-4-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Use a boolean to control the call to get_hphys instead
of passing a null function pointer.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221002172956.265735-3-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Replace int is_write1 and magic numbers with the proper
MMUAccessType access_type and enumerators.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221002172956.265735-2-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Restore pc_save while undoing any state change that may have
happened while decoding the instruction. Leave a TODO about
removing all of that when the table-based decoder is complete.
Cc: Paolo Bonzini <pbonzini@redhat.com>
Suggested-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221016222303.288551-1-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The semantic difference between the deprecated device_legacy_reset()
function and the newer device_cold_reset() function is that the new
function resets both the device itself and any qbuses it owns,
whereas the legacy function resets just the device itself and nothing
else.
The x86_cpu_after_reset() function uses device_legacy_reset() to reset
the APIC; this is an APICCommonState and does not have any qbuses, so
for this purpose the two functions behave identically and we can stop
using the deprecated one.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Message-Id: <20221013171926.1447899-1-peter.maydell@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Resetting a guest that has Hyper-V VMBus support enabled triggers a QEMU
assertion failure:
hw/hyperv/hyperv.c:131: synic_reset: Assertion `QLIST_EMPTY(&synic->sint_routes)' failed.
This happens both on normal guest reboot or when using "system_reset" HMP
command.
The failing assertion was introduced by commit 64ddecc88b ("hyperv: SControl is optional to enable SynIc")
to catch dangling SINT routes on SynIC reset.
The root cause of this problem is that the SynIC itself is reset before
devices using SINT routes have chance to clean up these routes.
Since there seems to be no existing mechanism to force reset callbacks (or
methods) to be executed in a specific order, let's use a method similar to
the one already used to reset another interrupt controller (APIC) after devices
have been reset - by invoking the SynIC reset from the machine reset
handler via a new x86_cpu_after_reset() function co-located with
the existing x86_cpu_reset() in target/i386/cpu.c.
Opportunistically move the APIC reset handler there, too.
Fixes: 64ddecc88b ("hyperv: SControl is optional to enable SynIc") # exposed the bug
Signed-off-by: Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
Message-Id: <cb57cee2e29b20d06f81dce054cbcea8b5d497e8.1664552976.git.maciej.szmigiero@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Vector instructions in general are not supposed to change the FI bit.
However, xvcmp* instructions are calling gen_helper_float_check_status,
which is leading to a cleared FI flag where it should be kept
unchanged.
As helper_float_check_status only affects inexact, overflow and
underflow, and the xvcmp* instructions don't change these flags, this
issue can be fixed by removing the call to helper_float_check_status.
By doing this, the FI bit in FPSCR will be preserved as expected.
Fixes: 00084a25ad ("target/ppc: introduce separate VSX_CMP macro for xvcmp* instructions")
Signed-off-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221005121551.27957-1-victor.colombo@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
This partially reverts commit 9dc20cc37d ("target/ppc: Simplify
powerpc_excp_booke"), which removed DOORI and DOORCI interrupts.
Without this patch, a -cpu e5500 -smp 2 machine booting Linux
crashes with:
qemu: fatal: Invalid PowerPC exception 36. Aborting
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20220924114436.1422786-1-npiggin@gmail.com>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Song Gao <gaosong@loongson.cn>
Message-Id: <20220930024510.800005-3-gaosong@loongson.cn>
Use gen_bstrins/gen_bstrpic to replace gen_rr_ms_ls.
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Song Gao <gaosong@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220930024510.800005-2-gaosong@loongson.cn>
Since commit 4047368938 "accel/tcg: Introduce tlb_set_page_full" we
have been seeing this assert:
../accel/tcg/cputlb.c:1294: tlb_set_page_with_attrs: Assertion `is_power_of_2(size)' failed.
when running Tock on the OpenTitan machine.
The issue is that pmp_get_tlb_size() would return a TLB size that wasn't
a power of 2. The size was also smaller than TARGET_PAGE_SIZE.
This patch ensures that any TLB size less than TARGET_PAGE_SIZE is
rounded down to 1 to ensure it's a valid size.
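A minimal sketch of the clamping described above (the helper name is made up; target_ulong and TARGET_PAGE_SIZE come from the QEMU build):
```
/* Sizes below TARGET_PAGE_SIZE may not be powers of 2; fall back to a
 * 1-byte translation that tlb_set_page_full() accepts. */
static target_ulong clamp_pmp_tlb_size(target_ulong tlb_size)
{
    if (tlb_size < TARGET_PAGE_SIZE) {
        return 1;
    }
    return tlb_size;
}
```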
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: LIU Zhiwei<zhiwei_liu@linux.alibaba.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221012011449.506928-1-alistair.francis@opensource.wdc.com
Message-Id: <20221012011449.506928-1-alistair.francis@opensource.wdc.com>
Add support for saving/restoring extended save states when signals
are delivered. This allows using AVX, MPX or PKRU registers in
signal handlers.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The MSR_CORE_THREAD_COUNT MSR describes CPU package topology, such as number
of threads and cores for a given package. This is information that QEMU has
readily available and can provide through the new user space MSR deflection
interface.
This patch propagates the existing hvf logic from patch 027ac0cb51
("target/i386/hvf: add rdmsr 35H MSR_CORE_THREAD_COUNT") to KVM.
Signed-off-by: Alexander Graf <agraf@csgraf.de>
Message-Id: <20221004225643.65036-4-agraf@csgraf.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
KVM has grown support to deflect arbitrary MSRs to user space since
Linux 5.10. For now we don't expect to make a lot of use of this
feature, so let's expose it the easiest way possible: With up to 16
individually maskable MSRs.
This patch adds a kvm_filter_msr() function that other code can call
to install a hook on KVM MSR reads or writes.
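A sketch of how a caller might use such a hook; the handler shape and the installer call below are assumptions for illustration, not quoted from the QEMU headers:
```
/* Hypothetical read handler: satisfy reads of one MSR entirely in user space. */
static bool example_rdmsr(X86CPU *cpu, uint32_t msr, uint64_t *val)
{
    *val = 0;        /* value computed by QEMU for this MSR */
    return true;     /* handled in user space */
}

static void install_example_filter(KVMState *s)
{
    /* 0x10a is just an example MSR index for the sketch. */
    kvm_filter_msr(s, 0x10a, example_rdmsr, NULL);
}
```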
Signed-off-by: Alexander Graf <agraf@csgraf.de>
Message-Id: <20221004225643.65036-3-agraf@csgraf.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Intel CPUs starting with Haswell-E implement a new MSR called
MSR_CORE_THREAD_COUNT which exposes the number of threads and cores
inside of a package.
This MSR is used by XNU to populate internal data structures and not
implementing it prevents virtual machines with more than 1 vCPU from
booting if the emulated CPU generation is at least Haswell-E.
This patch propagates the existing hvf logic from patch 027ac0cb51
("target/i386/hvf: add rdmsr 35H MSR_CORE_THREAD_COUNT") to TCG.
Signed-off-by: Alexander Graf <agraf@csgraf.de>
Message-Id: <20221004225643.65036-2-agraf@csgraf.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20221001140935.465607-27-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Expand this function at each of its callers.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20221001140935.465607-26-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Create a tcg global temp for this, and use it instead of explicit stores.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20221001140935.465607-25-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20221001140935.465607-24-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
These functions have only one caller, and the logic is more
obvious this way.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20221001140935.465607-23-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
These functions are always passed aflag, so we might as well
read it from DisasContext directly. While we're at it, use
a common subroutine for these two functions.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20221001140935.465607-22-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
With gen_jmp_rel, we may chain between two translation blocks
which may only be separated because of TB size limits.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221001140935.465607-21-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20221001140935.465607-20-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
With gen_jmp_rel, we may chain to the next tb instead of merely
writing to eip and exiting. For repz, subtract cur_insn_len to
restart the current insn.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20221001140935.465607-19-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Create a common helper for pc-relative branches. The jmp jb insn
was missing a mask for CODE32. In all cases the CODE64 check was
incorrectly placed, allowing PREFIX_DATA to truncate %rip to 16 bits.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20221001140935.465607-18-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
We can set is_jmp early, using only one if, and let that
be overwritten by gen_rep*'s calls to gen_jmp_tb.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221001140935.465607-17-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Create helpers for loading the address of the next insn.
Use tcg_constant_* in adjacent code where convenient.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221001140935.465607-16-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Use i32 not int or tl for eip and cs arguments.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221001140935.465607-15-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Drop the unused dest argument to gen_jr().
Remove most of the calls to gen_jr, and use DISAS_JUMP.
Remove some unused loads of eip for lcall and ljmp.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221001140935.465607-14-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
All callers pass s->base.pc_next and s->pc, which we can just
as well compute within the functions. Pull out common helpers
and reduce the amount of code under macros.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221001140935.465607-13-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Create common routines for computing the length of the insn.
Use tcg_constant_i32 in the new function, while we're at it.
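The shape of such a routine is roughly the following (a sketch; the field accesses follow the surrounding discussion, the names are illustrative):
```
/* Length of the insn being translated: distance between the current decode
 * pointer and the start of the instruction. */
static int cur_insn_len(DisasContext *s)
{
    return s->pc - s->base.pc_next;
}

static TCGv_i32 cur_insn_len_i32(DisasContext *s)
{
    return tcg_constant_i32(cur_insn_len(s));
}
```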
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221001140935.465607-12-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Replace lone calls to gen_eob() with the new enumerator.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221001140935.465607-11-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Replace sequences of gen_update_cc_op, gen_update_eip_next,
and gen_eob with the new is_jmp enumerator.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221001140935.465607-10-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Set is_jmp properly in gen_movl_seg_T0, so that the callers
need do nothing special.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221001140935.465607-9-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Add a few DISAS_TARGET_* aliases to reduce the number of
calls to gen_eob() and gen_eob_inhibit_irq(). So far,
only update i386_tr_translate_insn for exiting the block
because of single-step or previous inhibit irq.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221001140935.465607-8-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Sync EIP before exiting a translation block.
Replace all gen_jmp_im that use s->pc.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221001140935.465607-7-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Like gen_update_cc_op, sync EIP before doing something
that could raise an exception. Replace all gen_jmp_im
that use s->base.pc_next.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221001140935.465607-6-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
All callers pass s->base.pc_next and s->pc, which we can just as
well compute within the function. Adjust to use tcg_constant_i32
while we're at it.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221001140935.465607-5-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
All callers pass s->base.pc_next - s->cs_base, which we can just
as well compute within the function. Note the special case of
EXCP_VSYSCALL in which s->cs_base wasn't subtracted, but cs_base
is always zero in 64-bit mode, when vsyscall is used.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221001140935.465607-4-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Instead of returning the new pc, which is present in
DisasContext, return true if an insn was translated.
This is false when we detect a page crossing and must
undo the insn under translation.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20221001140935.465607-3-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The DisasContext member and the disas_insn local variable of
the same name are identical to DisasContextBase.pc_next.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221001140935.465607-2-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
There are cases where a malicious virtual machine can cause the CPU to get
stuck (because event windows do not open up), e.g. an infinite loop in
microcode when handling a nested #AC (CVE-2015-5307). No event window means
no event (NMI, SMI or IRQ) can be delivered, leaving the CPU unavailable to
the host or other VMs. Notify VM exit is introduced to mitigate such
attacks: it generates a VM exit if no event window occurs in VM non-root
mode for a specified amount of time (the notify window).
A new KVM capability KVM_CAP_X86_NOTIFY_VMEXIT is exposed to user space
so that the user can query the capability and set the expected notify
window when creating VMs. The format of the argument when enabling this
capability is as follows:
Bit 63:32 - notify window specified in qemu command
Bit 31:0 - some flags (e.g. KVM_X86_NOTIFY_VMEXIT_ENABLED is set to
enable the feature.)
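A sketch of packing that argument (the flag is the kernel UAPI constant; the helper itself is just an illustration):
```
#include <stdint.h>

#define KVM_X86_NOTIFY_VMEXIT_ENABLED (1ULL << 0)

/* Bits 63:32 carry the notify window, bits 31:0 carry the flags. */
static uint64_t notify_vmexit_cap_arg(uint32_t notify_window)
{
    return ((uint64_t)notify_window << 32) | KVM_X86_NOTIFY_VMEXIT_ENABLED;
}
```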
Users can configure the feature by a new (x86 only) accel property:
qemu -accel kvm,notify-vmexit=run|internal-error|disable,notify-window=n
The default option of notify-vmexit is run, which will enable the
capability and do nothing if the exit happens. The internal-error option
raises a KVM internal error if it happens. The disable option does not
enable the capability. The default value of notify-window is 0. It is valid
only when notify-vmexit is not disabled. The valid range of notify-window
is non-negative. It is even safe to set it to zero, since an internal
hardware threshold is added to ensure there are no false positives.
A notify VM exit may happen with VM_CONTEXT_INVALID set in the exit
qualification (no cases are anticipated that would set this bit), which
means the VM context is corrupted. This is reflected in the flags of the
KVM_EXIT_NOTIFY exit: if the KVM_NOTIFY_CONTEXT_INVALID bit is set, raise a
KVM internal error unconditionally.
Acked-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Chenyi Qiang <chenyi.qiang@intel.com>
Message-Id: <20220929072014.20705-5-chenyi.qiang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Now we have an enum for the granule size, use it in the
ARMVAParameters struct instead of the using16k/using64k bools.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221003162315.2833797-3-peter.maydell@linaro.org
Arm CPUs support some subset of the granule (page) sizes 4K, 16K and
64K. The guest selects the one it wants using bits in the TCR_ELx
registers. If it tries to program these registers with a value that
is either reserved or which requests a size that the CPU does not
implement, the architecture requires that the CPU behaves as if the
field was programmed to some size that has been implemented.
Currently we don't implement this, and instead let the guest use any
granule size, even if the CPU ID register fields say it isn't
present.
Make aa64_va_parameters() check against the supported granule size
and force use of a different one if it is not implemented.
(A subsequent commit will make ARMVAParameters use the new enum
rather than the current pair of using16k/using64k bools.)
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20221003162315.2833797-2-peter.maydell@linaro.org
Adjust GetPhysAddrResult to fill in CPUTLBEntryFull,
so that it may be passed directly to tlb_set_page_full.
The change is large, but mostly mechanical. The major
non-mechanical change is page_size -> lg_page_size.
Most of the time this is obvious, and is related to
TARGET_PAGE_BITS.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20221001162318.153420-21-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Do not apply memattr or shareability for Stage2 translations.
Make sure to apply HCR_{DC,DCT} only to Regime_EL10, per the
pseudocode in AArch64.S1DisabledOutput.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20221001162318.153420-20-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-19-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Use arm_hcr_el2_eff_secstate instead of arm_hcr_el2_eff, so
that we use is_secure instead of the current security state.
These AT* operations have been broken since arm_hcr_el2_eff
gained a check for "el2 enabled" for Secure EL2.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-18-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
These subroutines did not need ENV for anything except
retrieving the effective value of HCR anyway.
We have computed the effective value of HCR in the callers,
and this will be especially important for interpreting HCR
in a non-current security state.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-17-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This value is unused.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20221001162318.153420-16-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Rename the argument to is_secure_ptr, and introduce a
local variable is_secure with the value. We only write
back to the pointer toward the end of the function.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-15-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
For page walking, we may require HCR for a security state
that is not "current".
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-14-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The effect of TGE does not only apply to non-secure state,
now that Secure EL2 exists.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-13-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Use a switch on mmu_idx for the a-profile indexes, instead of
three different if's vs regime_el and arm_mmu_idx_is_stage1_of_2.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-12-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
For a-profile aarch64, which does not bank system registers, it takes
quite a lot of code to switch between security states. In the process,
registers such as TCR_EL{1,2} must be swapped, which in itself requires
the flushing of softmmu tlbs. Therefore it doesn't buy us anything to
separate tlbs by security state.
Retain the distinction between Stage2 and Stage2_S.
This will be important as we implement FEAT_RME, and do not wish to
add a third set of mmu indexes for Realm state.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-11-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Use get_phys_addr_with_secure directly. For a-profile, this is the
one place where the value of is_secure may not equal arm_is_secure(env).
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-10-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This is the last use of regime_is_secure; remove it
entirely before changing the layout of ARMMMUIdx.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-9-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Remove the use of regime_is_secure from arm_tr_init_disas_context.
Instead, provide the value of v8m_secure directly from tb_flags.
Rather than use regime_is_secure, use the env->v7m.secure directly,
as per arm_mmu_idx_el.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-8-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Remove the use of regime_is_secure from v7m_read_half_insn, using
the new parameter instead.
As it happens, both callers pass true, propagated from the argument
to arm_v7m_mmu_idx_for_secstate which created the mmu_idx argument,
but that is a detail of v7m_handle_execute_nsc we need not expose
to the callee.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Retain the existing get_phys_addr interface using the security
state derived from mmu_idx. Move the kerneldoc comments to the
header file where they belong.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Remove the use of regime_is_secure from regime_translation_disabled,
using the new parameter instead.
This fixes a bug in S1_ptw_translate and get_phys_addr where we had
passed ARMMMUIdx_Stage2 and not ARMMMUIdx_Stage2_S to determine if
Stage2 is disabled, affecting FEAT_SEL2.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Pass the correct stage2 mmu_idx to regime_translation_disabled,
which we computed afterward.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20221001162318.153420-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Remove the use of regime_is_secure from get_phys_addr_lpae,
using the new parameter instead.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
While the stage2 call to get_phys_addr_lpae should never set
attrs.secure when given a non-secure input, it's just as easy
to make the final update to attrs.secure be unconditional and
false in the case of non-secure input.
Suggested-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221007152159.1414065-1-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The starting security state comes with the translation regime,
not the current state of arm_is_secure_below_el3().
Create a new local variable, s2walk_secure, which does not need
to be written back to result->attrs.secure -- we compute that
value later, after the S2 walk is complete.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20221001162318.153420-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Updates write_scr() to allow setting SCR_EL3.EnTP2 when FEAT_SME is
implemented. SCR_EL3 being a 64-bit register, valid_mask is changed
to uint64_t and the SCR_* constants in target/arm/cpu.h are extended
to 64-bit so that masking and bitwise not (~) behave as expected.
This enables booting Linux with Trusted Firmware-A at EL3 with
"-M virt,secure=on -cpu max".
Cc: qemu-stable@nongnu.org
Fixes: 78cb977666 ("target/arm: Enable SME for -cpu max")
Signed-off-by: Jerome Forissier <jerome.forissier@linaro.org>
Reviewed-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221004072354.27037-1-jerome.forissier@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Occasionally the KVM_CREATE_VM ioctl can return EINTR, even though
there is no pending signal to be taken. In commit 94ccff1338
we added a retry-on-EINTR loop to the KVM_CREATE_VM call in the
generic KVM code. Adopt the same approach for the use of the
ioctl in the Arm-specific KVM code (where we use it to create a
scratch VM for probing for various things).
For more information, see the mailing list thread:
https://lore.kernel.org/qemu-devel/8735e0s1zw.wl-maz@kernel.org/
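A sketch of the retry loop in question (error handling trimmed; not the verbatim QEMU code):
```
#include <errno.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

static int create_scratch_vm(int kvm_fd)
{
    int vmfd;

    /* KVM_CREATE_VM can fail with EINTR without any signal pending: retry. */
    do {
        vmfd = ioctl(kvm_fd, KVM_CREATE_VM, 0);
    } while (vmfd == -1 && errno == EINTR);

    return vmfd;
}
```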
Reported-by: Vitaly Chikunov <vt@altlinux.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Vitaly Chikunov <vt@altlinux.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Message-id: 20220930113824.1933293-1-peter.maydell@linaro.org
Several hypervisor capabilities in KVM are target-specific. When exposed
to QEMU users as accelerator properties (i.e. -accel kvm,prop=value), they
should not be available for all targets.
Add a hook for targets to add their own properties to -accel kvm; for
now no such property is defined.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20220929072014.20705-3-chenyi.qiang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
For the direct triple faults, i.e. hardware detected and KVM morphed
to VM-Exit, KVM will never lose them. But for triple faults synthesized
by KVM, e.g. the RSM path, if KVM exits to userspace before the request
is serviced, userspace could migrate the VM and lose the triple fault.
A new flag KVM_VCPUEVENT_VALID_TRIPLE_FAULT is defined to signal that
the event.triple_fault_pending field contains a valid state if the
KVM_CAP_X86_TRIPLE_FAULT_EVENT capability is enabled.
Acked-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Chenyi Qiang <chenyi.qiang@intel.com>
Message-Id: <20220929072014.20705-2-chenyi.qiang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
It's always better to convey the type of a pointer if at all
possible. So let's add the DumpState typedef to typedefs.h and move
the dump note functions from the opaque pointers to DumpState
pointers.
Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
CC: Peter Maydell <peter.maydell@linaro.org>
CC: Cédric Le Goater <clg@kaod.org>
CC: Daniel Henrique Barboza <danielhb413@gmail.com>
CC: David Gibson <david@gibson.dropbear.id.au>
CC: Greg Kurz <groug@kaod.org>
CC: Palmer Dabbelt <palmer@dabbelt.com>
CC: Alistair Francis <alistair.francis@wdc.com>
CC: Bin Meng <bin.meng@windriver.com>
CC: Cornelia Huck <cohuck@redhat.com>
CC: Thomas Huth <thuth@redhat.com>
CC: Richard Henderson <richard.henderson@linaro.org>
CC: David Hildenbrand <david@redhat.com>
Acked-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20220811121111.9878-2-frankja@linux.ibm.com>
This helps us construct strings elsewhere before echoing to the
monitor. It avoids having to jump through hoops like:
monitor_printf(mon, "%s", s->str);
It will be useful in following patches but for now convert all
existing plain "%s" printfs to use the _puts api.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220929114231.583801-33-alex.bennee@linaro.org>
Merge tag 'pull-hex-20221003' of https://github.com/quic/qemu into staging
Make store handling faster and more robust
Bug fix in gen_tcg_funcs.py
* tag 'pull-hex-20221003' of https://github.com/quic/qemu:
Hexagon (gen_tcg_funcs.py): avoid duplicated tcg code on A_CVI_NEW
Hexagon (target/hexagon) move store size tracking to translation
Hexagon (target/hexagon) Change decision to set pkt_has_store_s[01]
Hexagon (target/hexagon) add instruction attributes from archlib
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
The value previously chosen overlaps GUSA_MASK.
Rename all DELAY_SLOT_* and GUSA_* defines to emphasize
that they are included in TB_FLAGS. Add aliases for the
FPSCR and SR bits that are included in TB_FLAGS, so that
we don't accidentally reassign those bits.
Fixes: 4da06fb306 ("target/sh4: Implement prctl_unalign_sigbus")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/856
Reviewed-by: Yoshinori Sato <ysato@users.sourceforge.jp>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The availability of tb->pc will shortly be conditional.
Introduce accessor functions to minimize ifdefs.
Pass around a known pc to places like tcg_gen_code,
where the caller must already have the value.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
When PAGE_WRITE_INV is set when calling tlb_set_page,
we immediately set TLB_INVALID_MASK in order to force
tlb_fill to be called on the next lookup. Here in
probe_access_internal, we have just called tlb_fill
and eliminated true misses, thus the lookup must be valid.
This allows us to remove a warning comment from s390x.
There doesn't seem to be a reason to change the code though.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
This structure will shortly contain more than just
data for accessing MMIO. Rename the 'addr' member
to 'xlat_section' to more clearly indicate its purpose.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
There is no need to guard g_free(P) with if (P): g_free(NULL) is safe.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220923090428.93529-1-armbru@redhat.com>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Hexagon instructions with the A_CVI_NEW attribute produce a vector value
that can be used in the same packet. The python function responsible for
generating code for such instructions has a typo ("if" instead of
"elif"), which makes genptr_dst_write_ext() be executed twice, thus also
generating the same tcg code twice. Fortunately, this doesn't cause any
problems for correctness, but it is less efficient than it could be. Fix
it by using an "elif" and avoiding the unnecessary extra code gen.
Signed-off-by: Matheus Tavares Bernardino <quic_mathbern@quicinc.com>
Signed-off-by: Taylor Simpson <tsimpson@quicinc.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Taylor Simpson <tsimpson@quicinc.com>
Message-Id: <fa706b192b2a3a0ffbd399fa8dbf0d5b2c5b82d9.1664568492.git.quic_mathbern@quicinc.com>
New KVM_CLOCK flags were added in the kernel (commit c68dc1b577eabd5605c6c7c08f3e07ae18d30d5d):
```
+ #define KVM_CLOCK_VALID_FLAGS \
+ (KVM_CLOCK_TSC_STABLE | KVM_CLOCK_REALTIME | KVM_CLOCK_HOST_TSC)
case KVM_CAP_ADJUST_CLOCK:
- r = KVM_CLOCK_TSC_STABLE;
+ r = KVM_CLOCK_VALID_FLAGS;
```
kvm_has_adjust_clock_stable needs to handle additional flags,
so that s->clock_is_reliable can be true and kvmclock_current_nsec doesn't need to be called.
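A sketch of the adjusted check (an illustration of the flag test, not necessarily the exact function in kvm.c):
```
static bool adjust_clock_is_stable(KVMState *s)
{
    int ret = kvm_check_extension(s, KVM_CAP_ADJUST_CLOCK);

    /* Accept any returned flag set that includes TSC_STABLE, rather than
     * requiring TSC_STABLE to be the only flag. */
    return ret & KVM_CLOCK_TSC_STABLE;
}
```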
Signed-off-by: Ray Zhang <zhanglei002@gmail.com>
Message-Id: <20220922100523.2362205-1-zhanglei002@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The store width is needed for packet commit, so it is stored in
ctx->store_width. Currently, it is set when a store has a TCG
override instead of a QEMU helper. In the QEMU helper case, the
ctx->store_width is not set, so we invoke a helper during packet commit
that uses the runtime store width.
This patch ensures ctx->store_width is set for all store instructions,
so performance is improved because packet commit can generate the proper
TCG store rather than the generic helper.
We do this by
- Use the attributes from the instructions during translation to
set ctx->store_width
- Remove setting of ctx->store_width from genptr.c
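A sketch of what the translation-time bookkeeping might look like (the attribute names below are illustrative stand-ins for whatever the imported archlib attributes provide):
```
/* Derive the store width from instruction attributes at translation time so
 * packet commit can emit a correctly sized TCG store. */
if (GET_ATTRIB(opcode, A_STORE)) {
    if (GET_ATTRIB(opcode, A_MEMSIZE_1B)) {
        ctx->store_width = 1;
    } else if (GET_ATTRIB(opcode, A_MEMSIZE_2B)) {
        ctx->store_width = 2;
    } else if (GET_ATTRIB(opcode, A_MEMSIZE_4B)) {
        ctx->store_width = 4;
    } else if (GET_ATTRIB(opcode, A_MEMSIZE_8B)) {
        ctx->store_width = 8;
    }
}
```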
Signed-off-by: Taylor Simpson <tsimpson@quicinc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220920080746.26791-3-tsimpson@quicinc.com>
We have found cases where pkt_has_store_s[01] is set incorrectly.
This leads to generating an unnecessary store that is left over
from a previous packet.
Add an attribute to determine if an instruction is a scalar store.
The attribute is attached to the fSTORE macro (hex_common.py).
Update the logic in decode.c that sets pkt_has_store_s[01].
Signed-off-by: Taylor Simpson <tsimpson@quicinc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220920080746.26791-4-tsimpson@quicinc.com>
The imported files from the architecture library have added some
instruction attributes. Some of these will be used in a subsequent
patch for determining the size of a store.
Signed-off-by: Taylor Simpson <tsimpson@quicinc.com>
Acked-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220920080746.26791-2-tsimpson@quicinc.com>
SP_EL1 must be kept when EL3 is present but EL2 is not. Therefore mark
it with ARM_CP_EL3_NO_EL2_KEEP.
Cc: qemu-stable@nongnu.org
Fixes: 696ba37718 ("target/arm: Handle cpreg registration for missing EL")
Signed-off-by: Jerome Forissier <jerome.forissier@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220927120058.670901-1-jerome.forissier@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
cpu64.c has ended up in a slightly odd order -- it starts with the
initfns for most of the models-real-hardware CPUs; after that comes a
bunch of support code for SVE, SME, pauth and LPA2 properties. Then
come the initfns for the 'host' and 'max' CPU types, and then after
that one more models-real-hardware CPU initfn, for a64fx. (This
ordering is partly historical and partly required because a64fx needs
the SVE properties.)
Reorder the file into:
* CPU property support functions
* initfns for real hardware CPUs
* initfns for host and max
* class boilerplate
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Our SDCR_VALID_MASK doesn't include all of the bits which are defined
by the current architecture. In particular in commit 0b42f4fab9 we
forgot to add SCCD, which meant that an AArch32 guest couldn't
actually use the SCCD bit to disable counting in Secure state.
Add all the currently defined bits; we don't implement all of them,
but this makes them be reads-as-written, which is architecturally
valid and matches how we currently handle most of the others in the
mask.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220923123412.1214041-4-peter.maydell@linaro.org
In commit 01765386a8 we fixed a bug where we weren't correctly
bracketing changes to some registers with pmu_op_start() and
pmu_op_finish() calls for changes which affect whether the PMU
counters might be enabled. However, we missed the case of writes to
the AArch64 MDCR_EL3 register, because (unlike its AArch32
counterpart) they are currently done directly to the CPU state struct
without going through the sdcr_write() function.
Give MDCR_EL3 a writefn which handles the PMU start/finish calls.
The SDCR writefn then simplifies to "call the MDCR_EL3 writefn after
masking off the bits which don't exist in the AArch32 register".
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220923123412.1214041-3-peter.maydell@linaro.org
In commit 01765386a8 we made some system register write functions
call pmu_op_start()/pmu_op_finish(). This means that they now touch
timers, so for icount to work these registers must have the ARM_CP_IO
flag set.
This fixes a bug where when icount is enabled a guest that touches
MDCR_EL3, MDCR_EL2, PMCNTENSET_EL0 or PMCNTENCLR_EL0 would cause
QEMU to print an error message and exit, for example:
[ 2.495971] TCP: Hash tables configured (established 1024 bind 1024)
[ 2.496213] UDP hash table entries: 256 (order: 1, 8192 bytes)
[ 2.496386] UDP-Lite hash table entries: 256 (order: 1, 8192 bytes)
[ 2.496917] NET: Registered protocol family 1
qemu-system-aarch64: Bad icount read
Reported-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220923123412.1214041-2-peter.maydell@linaro.org
Include the IIR register (which holds the opcode of the failing
instruction) when dumping the hppa registers.
Signed-off-by: Helge Deller <deller@gmx.de>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220918194555.83535-7-deller@gmx.de>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Starting with RVV1.0, the original vf[w]redsum_vs instruction was renamed
to vf[w]redusum_vs. The distinction between ordered and unordered is also
more consistent with other instructions, although there is no difference
in implementation between the two for QEMU.
Signed-off-by: Yang Liu <liuyang22@iscas.ac.cn>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Message-Id: <20220817074802.20765-2-liuyang22@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Remove duplicate code by wrapping vfwredsum_vs's OP function.
Signed-off-by: Yang Liu <liuyang22@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Message-Id: <20220817074802.20765-1-liuyang22@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The type 6 trigger is similar to the type 2 trigger, but provides
additional functionality and should be used instead of type 2 in newer
implementations.
Signed-off-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Signed-off-by: Bin Meng <bmeng.cn@gmail.com>
Message-Id: <20220909134215.1843865-9-bmeng.cn@gmail.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The type 2 trigger cannot be fired in VU/VS modes.
Signed-off-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Signed-off-by: Bin Meng <bmeng.cn@gmail.com>
Message-Id: <20220909134215.1843865-8-bmeng.cn@gmail.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Trigger actions are shared among all triggers. Extract to a common
function.
Signed-off-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Signed-off-by: Bin Meng <bmeng.cn@gmail.com>
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
[bmeng: handle the DBG_ACTION_NONE case]
Signed-off-by: Bin Meng <bmeng.cn@gmail.com>
Message-Id: <20220909134215.1843865-7-bmeng.cn@gmail.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
tinfo.info:
One bit for each possible type enumerated in tdata1.
If the bit is set, then that type is supported by the currently
selected trigger.
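For example, under that encoding a trigger supporting only type 2 and type 6 would report tinfo.info as in this hypothetical sketch:
```
#include <stdint.h>

/* Bit N set means tdata1.type == N is supported by the currently
 * selected trigger.  Illustrative values only. */
static uint32_t tinfo_for_type2_and_type6(void)
{
    return (UINT32_C(1) << 2) | (UINT32_C(1) << 6);   /* 0x44 */
}
```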
Signed-off-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Signed-off-by: Bin Meng <bmeng.cn@gmail.com>
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Message-Id: <20220909134215.1843865-6-bmeng.cn@gmail.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The value that can be written to the tselect CSR should be limited to
the number of supported triggers.
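A minimal sketch of that write limit, assuming out-of-range values leave the selection unchanged (illustrative, not the actual CSR write function):
```
#include <stdint.h>

#define RV_MAX_TRIGGERS 2   /* illustrative trigger count */

/* Only accept values that index an implemented trigger. */
static void write_tselect(uint64_t *tselect, uint64_t val)
{
    if (val < RV_MAX_TRIGGERS) {
        *tselect = val;
    }
}
```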
Signed-off-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Signed-off-by: Bin Meng <bmeng.cn@gmail.com>
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Message-Id: <20220909134215.1843865-5-bmeng.cn@gmail.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Replace type2_trigger_t with the real tdata1, tdata2, and tdata3 CSRs,
which allows us to support more types of triggers in the future.
Signed-off-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Signed-off-by: Bin Meng <bmeng.cn@gmail.com>
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Message-Id: <20220909134215.1843865-4-bmeng.cn@gmail.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Introduce build_tdata1() to build tdata1 register content, which can be
shared among all types of triggers.
Signed-off-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Signed-off-by: Bin Meng <bmeng.cn@gmail.com>
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
[bmeng: moved RV{32,64}_DATA_MASK definition to this patch]
Signed-off-by: Bin Meng <bmeng.cn@gmail.com>
Message-Id: <20220909134215.1843865-3-bmeng.cn@gmail.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Current RISC-V debug assumes that only type 2 trigger is supported.
To allow more types of triggers to be supported in the future
(e.g. type 6 trigger, which is similar to type 2 trigger with additional
functionality), we should determine the trigger type from tdata1.type.
RV_MAX_TRIGGERS is also introduced in replacement of TRIGGER_TYPE2_NUM.
Signed-off-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Signed-off-by: Bin Meng <bmeng.cn@gmail.com>
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
[bmeng: fixed MXL_RV128 case, and moved macros to the following patch]
Signed-off-by: Bin Meng <bmeng.cn@gmail.com>
Message-Id: <20220909134215.1843865-2-bmeng.cn@gmail.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Now that M68K_FEATURE_M68000 has been renamed to M68K_FEATURE_M68K it is easier
to see that the privilege exception check is wrong: it is currently only generated
for ColdFire CPUs when in fact it should also be generated for Motorola CPUs from
the 68010 onwards.
Introduce a new M68K_FEATURE_MOVEFROMSR_PRIV feature which is set for all non-
Motorola CPUs, and for all Motorola CPUs from the 68010 onwards and use it to
determine whether a privilege exception should be generated for the MOVE-from-SR
instruction.
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220925134804.139706-3-mark.cave-ayland@ilande.co.uk>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
There are already 32 feature bits in use, so change the size of the m68k
CPU features to uint64_t (along with the associated m68k_feature()
functions) to allow up to 64 feature bits to be used.
At the same time make use of the BIT_ULL() macro when reading/writing
the CPU feature bits to improve readability, and also update m68k_feature()
to return a bool rather than an int.
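A sketch of the resulting shape, with assumed names for the struct and macro (BIT_ULL mirrors the usual kernel-style helper; this is not the actual cpu.h code):
```
#include <stdbool.h>
#include <stdint.h>

#define BIT_ULL(n) (1ULL << (n))

typedef struct {
    uint64_t features;              /* widened from 32 to 64 bits */
} M68kCPUFeaturesSketch;

/* Now returns bool instead of int, and tests with 64-bit arithmetic. */
static bool m68k_feature(const M68kCPUFeaturesSketch *s, unsigned feature)
{
    return (s->features & BIT_ULL(feature)) != 0;
}
```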
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220925134804.139706-2-mark.cave-ayland@ilande.co.uk>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
After RISCVException enum is introduced, riscv_csrrw_debug() returns
RISCV_EXCP_NONE to indicate there's no error. RISC-V vector GDB stub
should check the result against RISCV_EXCP_NONE instead of value 0.
Otherwise, an 'E14' packet would be incorrectly reported for vector CSRs
when using "info reg vector" GDB command.
Signed-off-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Jim Shu <jim.shu@sifive.com>
Reviewed-by: Tommy Wu <tommy.wu@sifive.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Message-Id: <20220918083245.13028-1-frank.chang@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Instead of using our properties to set a config value which then might
be used to set the resetvec (depending on your timing), let's just set
the resetvec directly in the env struct.
This allows us to set the reset vec from the command line with:
-global driver=riscv.hart_array,property=resetvec,value=0x20000400
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220914101108.82571-2-alistair.francis@wdc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
While testing some changes to GDB's handling for the RISC-V registers
fcsr, fflags, and frm, I spotted that QEMU includes these registers
twice in the target description it sends to GDB, once in the fpu
feature, and once in the csr feature.
Right now things basically work OK: QEMU maps these registers onto two
different register numbers, e.g. fcsr maps to both 68 and 73, and GDB
can use either of these to access the register.
However, GDB's target descriptions don't really work this way, each
register should appear just once in a target description, mapping the
register name onto the number GDB should use when accessing the
register on the target. Duplicate register names actually result in
duplicate registers on the GDB side; however, as the registers have
the same name, the user can only access one of these registers.
Currently GDB has a hack in place, specifically for RISC-V, to spot
the duplicate copies of these three registers, and hide them from the
user, ensuring the user only ever sees a single copy of each.
In this commit I propose fixing this issue on the QEMU side, and in
the process, simplify the fpu register handling a little.
I think we should remove fflags, frm, and fcsr from the two (32-bit
and 64-bit) fpu feature xml files. These files will only contain the
32 core floating point register f0 to f31. The fflags, frm, and fcsr
registers will continue to be advertised in the csr feature as they
currently are.
With that change made, I will simplify riscv_gdb_get_fpu and
riscv_gdb_set_fpu, removing the extra handling for the 3 status
registers.
Signed-off-by: Andrew Burgess <aburgess@redhat.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <0fbf2a5b12e3210ff3867d5cf7022b3f3462c9c8.1661934573.git.aburgess@redhat.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
- modify the check for mcounteren to work in all lower-privilege modes
- modify the check for scounteren to work only when S mode is enabled
- distinguish the exception type raised by the check for scounteren
between U and VU mode
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20220817083756.12471-1-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The sideleg and sedeleg CSRs are no longer part of the RISC-V ISA
spec; these CSRs were part of the N extension, which has been removed
from the RISC-V ISA specification.
These commits removed all traces of these csrs from
riscv spec (https://github.com/riscv/riscv-isa-manual) -
commit f8d27f805b65 ("Remove or downgrade more references to N extension (#674)")
commit b6cade07034d ("Remove N extension chapter for now")
Signed-off-by: Rahul Pathak <rpathak@ventanamicro.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20220824145255.400040-1-rpathak@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
If the ZPCI_OP ioctl reports that it is available and usable, then the
underlying KVM host will enable load/store interpretation for any guest
device without a SHM bit in the guest function handle. For a device that
will be using interpretation support, ensure the guest function handle
matches the host function handle; this value is re-checked every time the
guest issues a SET PCI FN to enable the guest device as it is the only
opportunity to reflect function handle changes.
By default, unless interpret=off is specified, interpretation support will
always be assumed and exploited if the necessary ioctl and features are
available on the host kernel. When these are unavailable, we will silently
revert to the interception model; this allows existing guest configurations
to work unmodified on hosts with and without zPCI interpretation support,
allowing QEMU to choose the best support model available.
Signed-off-by: Matthew Rosato <mjrosato@linux.ibm.com>
Acked-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20220902172737.170349-4-mjrosato@linux.ibm.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
In order for hosts running inside of TCG to initialize the kernel's
random number generator, we should support the PRNO_TRNG instruction,
backed in the usual way with the qemu_guest_getrandom helper. This is
confirmed working on Linux 5.19.
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Message-Id: <20220921100729.2942008-2-Jason@zx2c4.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
[thuth: turn prno-trng off in avocado test to avoid breaking it]
Signed-off-by: Thomas Huth <thuth@redhat.com>
In order to fully support MSA_EXT_5, we have to support the SHA-512
special instructions. So implement those.
The implementation began as something TweetNacl-like, and then was
adjusted to be useful here. It's not very beautiful, but it is quite
short and compact, which is what we're going for.
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
[ restructure, add missing exception, add comments, fixup CPU model ]
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20220922153820.221811-1-david@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
remove unused encodings
add fmin/fmax tests for signed zero
-----BEGIN PGP SIGNATURE-----
iQEzBAABCgAdFiEENjXHiM5iuR/UxZq0ewJE+xLeRCIFAmMou7IACgkQewJE+xLe
RCIYbQgAgjFujecgbbCJfBPVMmpTXNOgk+Jt3w+jfg7/WJRZuhxAU3xB2qpismUH
5MntMlFHAGOjlPXfg6U5AZFSw3RhlanH/RChHpVKuL6peOXFImIfEqdyVXHXfCuu
FlpQFGwJ3Rs50UJhd7lVdlx0I7lup4E4X77hFvFcZQP6aNrt6Ic1Zq5eXhEq9k2A
NnXol1R416JRT/senujYVvcTpgYVHlQCS+4dJEzKUqvFlTdo7lnAbPdjO8MPrz7B
0NgPUGjGZJ70Dcqvd1n8HePIU1YyKTlHJNaWyTlAmw4MECyHyAJnd64jEMNECDb5
0BrpHcY1HCt1Rh4QratemTfJglAJlA==
=UUyr
-----END PGP SIGNATURE-----
Merge tag 'pull-hex-20220919' of https://github.com/quic/qemu into staging
Hexagon update
remove unused encodings
add fmin/fmax tests for signed zero
# -----BEGIN PGP SIGNATURE-----
#
# iQEzBAABCgAdFiEENjXHiM5iuR/UxZq0ewJE+xLeRCIFAmMou7IACgkQewJE+xLe
# RCIYbQgAgjFujecgbbCJfBPVMmpTXNOgk+Jt3w+jfg7/WJRZuhxAU3xB2qpismUH
# 5MntMlFHAGOjlPXfg6U5AZFSw3RhlanH/RChHpVKuL6peOXFImIfEqdyVXHXfCuu
# FlpQFGwJ3Rs50UJhd7lVdlx0I7lup4E4X77hFvFcZQP6aNrt6Ic1Zq5eXhEq9k2A
# NnXol1R416JRT/senujYVvcTpgYVHlQCS+4dJEzKUqvFlTdo7lnAbPdjO8MPrz7B
# 0NgPUGjGZJ70Dcqvd1n8HePIU1YyKTlHJNaWyTlAmw4MECyHyAJnd64jEMNECDb5
# 0BrpHcY1HCt1Rh4QratemTfJglAJlA==
# =UUyr
# -----END PGP SIGNATURE-----
# gpg: Signature made Mon 19 Sep 2022 14:57:54 EDT
# gpg: using RSA key 3635C788CE62B91FD4C59AB47B0244FB12DE4422
# gpg: Good signature from "Taylor Simpson (Rock on) <tsimpson@quicinc.com>" [unknown]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 3635 C788 CE62 B91F D4C5 9AB4 7B02 44FB 12DE 4422
* tag 'pull-hex-20220919' of https://github.com/quic/qemu:
Hexagon (tests/tcg/hexagon): add fmin/fmax tests for signed zero
Hexagon (target/hexagon) remove unused encodings
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Remove the use of regime_is_secure from get_phys_addr_pmsav5.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220822152741.1617527-21-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Remove the use of regime_is_secure from get_phys_addr_pmsav7,
using the new parameter instead.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220822152741.1617527-19-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Remove the use of regime_is_secure from pmsav7_use_background_region,
using the new parameter instead.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220822152741.1617527-17-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Remove the use of regime_is_secure from get_phys_addr_pmsav8.
Since we already had a local variable named secure, use that.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220822152741.1617527-16-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Remove the use of regime_is_secure from get_phys_addr_v6,
passing the new parameter to the lookup instead.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220822152741.1617527-15-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Remove the use of regime_is_secure from get_phys_addr_v5,
passing the new parameter to the lookup instead.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
[PMM: Folded in definition of local is_secure in get_phys_addr(),
since I dropped the earlier patch that would have provided it]
Message-id: 20220822152741.1617527-14-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Remove the use of regime_is_secure from pmsav8_mpu_lookup,
passing the new parameter to the lookup instead.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220822152741.1617527-13-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Remove the use of regime_is_secure from v8m_security_lookup,
passing the new parameter to the lookup instead.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220822152741.1617527-12-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This can be made redundant with result->page_size, by moving the basic
setting of page_size from get_phys_addr_pmsav8. We still need to overwrite
set of page_size from get_phys_addr_pmsav8. We still need to overwrite
page_size when v8m_security_lookup signals a subpage.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220822152741.1617527-11-richard.henderson@linaro.org
[PMM: Update a comment that used to refer to is_subpage]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220822152741.1617527-10-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220822152741.1617527-9-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220822152741.1617527-8-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220822152741.1617527-7-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220822152741.1617527-6-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220822152741.1617527-5-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220822152741.1617527-4-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Combine 5 output pointer arguments from get_phys_addr
into a single struct. Adjust all callers.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220822152741.1617527-2-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
When requested, the alignment for VLD4.32 is 8 and not 16.
See ARM documentation about VLD4 encoding:
ebytes = 1 << UInt(size);
if size == '10' then
    alignment = if a == '0' then 1 else 8;
else
    alignment = if a == '0' then 1 else 4*ebytes;
Signed-off-by: Clément Chigot <chigot@adacore.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220914105058.2787404-1-chigot@adacore.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This queue contains an implementation of PowerISA 3.1B hash insns, ppc
TCG insns cleanups and fixes, and miscellaneous fixes in the spapr and
pnv_phb models.
-----BEGIN PGP SIGNATURE-----
iHUEABYKAB0WIQQX6/+ZI9AYAK8oOBk82cqW3gMxZAUCYyoWlAAKCRA82cqW3gMx
ZDYhAP0eQMeA4NS3hiw7WMcAVg0pei3ZJL9oEh1UE3+MfK7MhQEA0q8qExWnQJAA
a0hfnFH9pLjI+v0f/FbFK6QJBpu/bg8=
=qT+H
-----END PGP SIGNATURE-----
Merge tag 'pull-ppc-20220920' of https://gitlab.com/danielhb/qemu into staging
ppc patch queue for 2022-09-20:
This queue contains an implementation of PowerISA 3.1B hash insns, ppc
TCG insns cleanups and fixes, and miscellaneous fixes in the spapr and
pnv_phb models.
# -----BEGIN PGP SIGNATURE-----
#
# iHUEABYKAB0WIQQX6/+ZI9AYAK8oOBk82cqW3gMxZAUCYyoWlAAKCRA82cqW3gMx
# ZDYhAP0eQMeA4NS3hiw7WMcAVg0pei3ZJL9oEh1UE3+MfK7MhQEA0q8qExWnQJAA
# a0hfnFH9pLjI+v0f/FbFK6QJBpu/bg8=
# =qT+H
# -----END PGP SIGNATURE-----
# gpg: Signature made Tue 20 Sep 2022 15:37:56 EDT
# gpg: using EDDSA key 17EBFF9923D01800AF2838193CD9CA96DE033164
# gpg: Good signature from "Daniel Henrique Barboza <danielhb413@gmail.com>" [unknown]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 17EB FF99 23D0 1800 AF28 3819 3CD9 CA96 DE03 3164
* tag 'pull-ppc-20220920' of https://gitlab.com/danielhb/qemu:
hw/ppc/spapr: Fix code style problems reported by checkpatch
hw/pci-host: pnv_phb{3, 4}: Fix heap out-of-bound access failure
hw/ppc: spapr: Use qemu_vfree() to free spapr->htab
target/ppc: Clear fpstatus flags on helpers missing it
target/ppc: Zero second doubleword of VSR registers for FPR insns
target/ppc: Set OV32 when OV is set
target/ppc: Zero second doubleword for VSX madd instructions
target/ppc: Set result to QNaN for DENBCD when VXCVI occurs
target/ppc: Zero second doubleword in DFP instructions
target/ppc: Remove unused xer_* macros
target/ppc: Remove extra space from s128 field in ppc_vsr_t
target/ppc: Merge fsqrt and fsqrts helpers
target/ppc: Move fsqrts to decodetree
target/ppc: Move fsqrt to decodetree
target/ppc: Implement hashstp and hashchkp
target/ppc: Implement hashst and hashchk
target/ppc: Add HASHKEYR and HASHPKEYR SPRs
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Any write to SR can change the security state, so always call gen_exit_tb() when
this occurs. In particular MacOS makes use of andiw/oriw in a few places to
handle the switch between user and supervisor mode.
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220917112515.83905-5-mark.cave-ayland@ilande.co.uk>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
The M68K_FEATURE_M68000 feature is misleading in that its name suggests the feature
is defined just for Motorola 68000 CPUs, whilst in fact it is defined for all
Motorola 680X0 CPUs.
In order to avoid confusion with the other M68K_FEATURE_M680X0 constants which
define the features available for specific Motorola CPU models, rename
M68K_FEATURE_M68000 to M68K_FEATURE_M68K and add comments to clarify its usage.
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220917112515.83905-2-mark.cave-ayland@ilande.co.uk>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Writes to SR may change security state, which may involve
a swap of %ssp with %usp as reflected in %a7. Finish the
writeback of %sp@+ before swapping stack pointers.
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1206
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Message-Id: <20220913142818.7802-3-richard.henderson@linaro.org>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
First, we were writing to the entire SR register, instead
of only the flags portion. Second, we were not clearing C
as per the documentation (X was cleared via the 0xf mask).
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Message-Id: <20220913142818.7802-2-richard.henderson@linaro.org>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
This is slightly more complicated than cas,
because tas is allowed on data registers.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Message-Id: <20220829051746.227094-1-richard.henderson@linaro.org>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
In ppc emulation, exception flags are not cleared at the end of an
instruction. Instead, the next instruction is responsible to clear
it before its emulation. However, some helpers are not doing it,
causing an issue where the previously set exception flags are being
used and leading to incorrect values being set in FPSCR.
Fix this by clearing fp_status before doing the instruction 'real' work
for the following helpers that were missing this behavior:
- VSX_CVT_INT_TO_FP_VECTOR
- VSX_CVT_FP_TO_FP
- VSX_CVT_FP_TO_INT_VECTOR
- VSX_CVT_FP_TO_INT2
- VSX_CVT_FP_TO_INT
- VSX_CVT_FP_TO_FP_HP
- VSX_CVT_FP_TO_FP_VECTOR
- VSX_CMP
- VSX_ROUND
- xscvqpdp
- xscvdpsp[n]
Signed-off-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20220906125523.38765-9-victor.colombo@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
FPR registers are mapped to the first doubleword of the VSR registers.
Since PowerISA v3.1, the second doubleword of the target register
must be zeroed for FP instructions.
This patch does it by writing 0 to the second dw every time the
first dw is being written using set_fpr.
Signed-off-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20220906125523.38765-8-victor.colombo@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
According to PowerISA: "OV32 is set whenever OV is implicitly set, and
is set to the same value that OV is defined to be set to in 32-bit
mode".
This patch changes helper_update_ov_legacy to set/clear ov32 when
applicable.
Signed-off-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20220906125523.38765-7-victor.colombo@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
In 205eb5a89e we updated most VSX instructions to zero the
second doubleword, as is requested by PowerISA since v3.1.
However, the VSX_MADD helper was left behind unchanged, although it
is also affected and should be fixed as well.
This patch applies the fix for MADD instructions.
Fixes: 205eb5a89e ("target/ppc: Change VSX instructions behavior to fill with zeros")
Signed-off-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20220906125523.38765-6-victor.colombo@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
According to the ISA, for instruction DENBCD:
"If an invalid BCD digit or sign code is detected in the source
operand, an invalid-operation exception (VXCVI) occurs."
In the Invalid Operation Exception section, there is the situation:
"When Invalid Operation Exception is disabled (VE=0) and Invalid
Operation occurs (...) If the operation is an (...) or format the
target FPR is set to a Quiet NaN". This was not being done in
QEMU.
This patch sets the result to QNaN when the instruction DENBCD causes
an Invalid Operation Exception.
Signed-off-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20220906125523.38765-5-victor.colombo@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Starting at PowerISA v3.1, the second doubleword of the registers
used to store results in DFP instructions are supposed to be zeroed.
From the ISA, chapter 7.2.1.1 Floating-Point Registers:
"""
Chapter 4. Floating-Point Facility provides 32 64-bit
FPRs. Chapter 5. Decimal Floating-Point also employs
FPRs in decimal floating-point (DFP) operations. When
VSX is implemented, the 32 FPRs are mapped to
doubleword 0 of VSRs 0-31. (...)
All instructions that operate on an FPR are redefined
to operate on doubleword element 0 of the
corresponding VSR. (...)
and the contents of doubleword element 1 of the
VSR corresponding to the target FPR or FPR pair for these
instructions are set to 0.
"""
Before, the result stored at doubleword 1 was said to be undefined.
With that, this patch changes the DFP facility to zero doubleword 1
when using set_dfp64 and set_dfp128. This fixes the behavior for ISA
3.1 while keeping the behavior correct for previous ones.
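A minimal sketch of the new store behavior, using an assumed VSR layout (not QEMU's actual set_dfp64 helper):
```
#include <stdint.h>

typedef struct {
    uint64_t dw[2];   /* dw[0]: FPR view, dw[1]: the half that must be zeroed */
} VsrSketch;

static void set_dfp64_sketch(VsrSketch *vsr, uint64_t result)
{
    vsr->dw[0] = result;
    vsr->dw[1] = 0;   /* was previously left undefined */
}
```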
Signed-off-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20220906125523.38765-4-victor.colombo@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
The macros xer_ov, xer_ca, xer_ov32, and xer_ca32 are both unused and
hiding the usage of env. Remove them.
Signed-off-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20220906125523.38765-3-victor.colombo@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Very trivial rogue space removal. There are two spaces between Int128
and s128 in the ppc_vsr_t struct, where there should be only one.
Signed-off-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20220906125523.38765-2-victor.colombo@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
These two helpers are almost identical, differing only by the softfloat
operation it calls. Merge them into one using a macro.
Also, take this opportunity to capitalize the helper name as we moved
the instruction to decodetree in a previous patch.
Signed-off-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220905123746.54659-4-victor.colombo@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Implementation for instructions hashstp and hashchkp, the privileged
versions of hashst and hashchk, which were added in Power ISA 3.1B.
Signed-off-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Reviewed-by: Lucas Mateus Castro <lucas.araujo@eldorado.org.br>
Message-Id: <20220715205439.161110-4-victor.colombo@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Implementation for instructions hashst and hashchk, which were added
in Power ISA 3.1B.
It was decided to implement the hash algorithm from the ground up in this
patch, exactly as described in the Power ISA.
Signed-off-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Reviewed-by: Lucas Mateus Castro <lucas.araujo@eldorado.org.br>
Message-Id: <20220715205439.161110-3-victor.colombo@eldorado.org.br>
[danielhb: fix block comment in excp_helper.c]
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Add the Special Purpose Registers HASHKEYR and HASHPKEYR, which were
introduced by the Power ISA 3.1B. They are used by the new instructions
hashchk(p) and hashst(p).
The ISA states that the Operating System should generate the value for
these registers when creating a process, so it is its responsibility to
do so. We initialize them with 0 for qemu-softmmu, and set a random
64-bit value for linux-user.
Signed-off-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Reviewed-by: Lucas Mateus Castro <lucas.araujo@eldorado.org.br>
Message-Id: <20220715205439.161110-2-victor.colombo@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Remove encodings guarded by an ifdef that is not defined
Signed-off-by: Taylor Simpson <tsimpson@quicinc.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220606222327.7682-4-tsimpson@quicinc.com>
The "O" operand type in the Intel SDM needs to load an 8- to 64-bit
unsigned value, while insn_get is limited to 32 bits. Extract the code
out of disas_insn and into a separate function.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The later prefix wins if both are present; make it show in s->prefix too.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
INSERTQ is defined to not modify any bits in the lower 64 bits of the
destination, other than the ones being replaced with bits from the
source operand. QEMU instead is using unshifted bits from the source
for those bits.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The SSE4a instructions EXTRQ and INSERTQ have two bit-index operands that can be
immediates or taken from an XMM register. In both cases, the fields are
6 bits wide and the top two bits in the byte are ignored. translate.c is
doing that correctly for the immediate case, but not for the XMM case, so
fix it.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Many instructions which load/store 128-bit values are supposed to
raise #GP when the memory operand isn't 16-byte aligned. This includes:
- Instructions explicitly requiring memory alignment (Exceptions Type 1
in the "AVX and SSE Instruction Exception Specification" section of
the SDM)
- Legacy SSE instructions that load/store 128-bit values (Exceptions
Types 2 and 4).
This change sets MO_ALIGN_16 on 128-bit memory accesses that require
16-byte alignment. It adds cpu_record_sigbus and cpu_do_unaligned_access
hooks that simulate a #GP exception in qemu-user and qemu-system,
respectively.
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/217
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Ricky Zhou <ricky@rzhou.org>
Message-Id: <20220830034816.57091-2-ricky@rzhou.org>
[Do not bother checking PREFIX_VEX, since AVX is not supported. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Update the ID registers for TCG's '-cpu max' to report a FEAT_PMUv3p5
compliant PMU.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220822132358.3524971-11-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
With FEAT_PMUv3p5, the event counters are now 64 bit, rather than 32
bit. (Previously, only the cycle counter could be 64 bit, and other
event counters were always 32 bits). For any given event counter,
whether the overflow event is noted for overflow from bit 31 or from
bit 63 is controlled by a combination of PMCR.LP, MDCR_EL2.HLP and
MDCR_EL2.HPMN.
Implement the 64-bit event counter handling. We choose to make our
counters always 64 bits, and mask out the top 32 bits on read or
write of PMXEVCNTR for CPUs which don't have FEAT_PMUv3p5.
(Note that the changes to pmenvcntr_op_start() and
pmenvcntr_op_finish() bring their logic closer into line with that of
pmccntr_op_start() and pmccntr_op_finish(), which already had to cope
with the overflow being either at 32 or 64 bits.)
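A sketch of the read-side masking described above (assumed shape, not the actual register accessor):
```
#include <stdbool.h>
#include <stdint.h>

/* Counters are always kept as 64-bit values internally; without
 * FEAT_PMUv3p5 the guest-visible value is truncated to 32 bits. */
static uint64_t pmevcntr_read_sketch(uint64_t counter, bool feat_pmuv3p5)
{
    return feat_pmuv3p5 ? counter : (counter & 0xffffffffULL);
}
```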
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220822132358.3524971-10-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
FEAT_PMUv3p5 introduces new bits which disable the cycle
counter from counting:
* MDCR_EL2.HCCD disables the counter when in EL2
* MDCR_EL3.SCCD disables the counter when Secure
Add the code to support these bits.
(Note that there is a third documented counter-disable
bit, MDCR_EL3.MCCD, which disables the counter when in
EL3. This is not present until FEAT_PMUv3p7, so is
out of scope for now.)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220822132358.3524971-9-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Our feature test functions that check the PMU version are named
isar_feature_{aa32,aa64,any}_pmu_8_{1,4}. This doesn't match the
current Arm ARM official feature names, which are FEAT_PMUv3p1 and
FEAT_PMUv3p4. Rename these functions to _pmuv3p1 and _pmuv3p4.
This commit was created with:
sed -i -e 's/pmu_8_/pmuv3p/g' target/arm/*.[ch]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220822132358.3524971-8-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
In pmccntr_op_finish() and pmevcntr_op_finish() we calculate the next
point at which we will get an overflow and need to fire the PMU
interrupt or set the overflow flag. We do this by calculating the
number of nanoseconds to the overflow event and then adding it to
qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL). However, we don't check
whether that signed addition overflows, which can happen if the next
PMU interrupt would happen massively far in the future (250 years or
more).
Since QEMU assumes that "when the QEMU_CLOCK_VIRTUAL rolls over" is
"never", the sensible behaviour in this situation is simply to not
try to set the timer if it would be beyond that point. Detect the
overflow, and skip setting the timer in that case.
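A sketch of such an overflow check, under the assumption that the next event is "now + delta" on QEMU_CLOCK_VIRTUAL (illustrative, not the actual helper):
```
#include <stdbool.h>
#include <stdint.h>

/* True if now + delta still fits in a signed 64-bit timestamp, i.e. the
 * timer can be armed without the addition wrapping around. */
static bool next_overflow_representable(int64_t now_ns, uint64_t delta_ns)
{
    return delta_ns <= (uint64_t)(INT64_MAX - now_ns);
}
```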
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220822132358.3524971-7-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The logic in pmu_counter_enabled() for handling the 'prohibit event
counting' bits MDCR_EL2.HPMD and MDCR_EL3.SPME is written in a way
that assumes that EL2 is never Secure. This used to be true, but the
architecture now permits Secure EL2, and QEMU can emulate this.
Refactor the prohibit logic so that we effectively OR together
the various prohibit bits when they apply, rather than trying to
construct an if-else ladder where any particular state of the CPU
ends up in exactly one branch of the ladder.
This fixes the Secure EL2 case and also is a better structure for
adding the PMUv8.5 bits MDCR_EL2.HCCD and MDCR_EL3.SCCD.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220822132358.3524971-6-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The architecture requires that if PMCR.LC is set (for a 64-bit cycle
counter) then PMCR.D (which enables the clock divider so the counter
ticks every 64 cycles rather than every cycle) should be ignored. We
were always honouring PMCR.D; fix the bug so we correctly ignore it
in this situation.
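The rule can be stated as a one-line predicate (a sketch of the condition, with assumed flag plumbing):
```
#include <stdbool.h>

/* The divide-by-64 behaviour of PMCR.D only applies when PMCR.LC has not
 * selected the 64-bit cycle counter. */
static bool cycle_counter_uses_divider(bool pmcr_d, bool pmcr_lc)
{
    return pmcr_d && !pmcr_lc;
}
```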
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220822132358.3524971-5-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The PMU cycle and event counter infrastructure design requires that
operations on the PMU register fields are wrapped in pmu_op_start()
and pmu_op_finish() calls (or their more specific pmmcntr and
pmevcntr equivalents). This includes any changes to registers which
affect whether the counter should be enabled or disabled, but we
forgot to do this.
The effect of this bug is that in sequences like:
* disable the cycle counter (PMCCNTR) using the PMCNTEN register
* write a value such as 0xfffff000 to the PMCCNTR
* restart the counter by writing to PMCNTEN
the value written to the cycle counter is corrupted, and it starts
counting from the wrong place. (Essentially, we fail to record that
the QEMU_CLOCK_VIRTUAL timestamp when the counter should be considered
to have started counting is the point when PMCNTEN is written to enable
the counter.)
Add the necessary bracketing calls, so that updates to the various
registers which affect whether the PMU is counting are handled
correctly.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220822132358.3524971-4-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
pmu_counter_mask() accidentally returns a value with bits [63:32]
set, because the expression it returns is evaluated as a signed value
that gets sign-extended to 64 bits. Force the whole expression to be
evaluated with 64-bit arithmetic with ULL suffixes.
The main effect of this bug was that a guest could write to the bits
in the high half of registers like PMCNTENSET_EL0 that are supposed
to be RES0.
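The pitfall can be reproduced with a small standalone example (the expression shape is assumed, not copied from QEMU):
```
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Bit 31 plus the low event-counter bits.  The (1 << 31) term is a plain
 * signed int (shifting into the sign bit is itself problematic); on typical
 * targets the negative result sign-extends into bits [63:32]. */
static uint64_t mask_buggy(unsigned counters)
{
    return (1 << 31) | ((1 << counters) - 1);
}

/* ULL suffixes force the whole expression into 64-bit unsigned arithmetic. */
static uint64_t mask_fixed(unsigned counters)
{
    return (1ULL << 31) | ((1ULL << counters) - 1);
}

int main(void)
{
    printf("buggy: 0x%016" PRIx64 "\n", mask_buggy(4));   /* high half set */
    printf("fixed: 0x%016" PRIx64 "\n", mask_fixed(4));   /* high half clear */
    return 0;
}
```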
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220822132358.3524971-3-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
When the cycle counter overflows, we are intended to set bit 31 in PMOVSR
to indicate this. However a missing ULL suffix means that we end up
setting all of bits 63-31. Fix the bug.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220822132358.3524971-2-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Fix a missing space before a comment terminator.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220819110052.2942289-7-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The architectural feature FEAT_ETS (Enhanced Translation
Synchronization) is a set of tightened guarantees about memory
ordering involving translation table walks:
* if memory access RW1 is ordered-before memory access RW2 then it
is also ordered-before any translation table walk generated by RW2
that generates a translation fault, address size fault or access
fault
* TLB maintenance on non-exec-permission translations is guaranteed
complete after a DSB (i.e. it does not need the context
synchronization event that you have to have if you don’t have
FEAT_ETS)
For QEMU’s implementation we don’t reorder translation table walk
accesses, and we guarantee to finish the TLB maintenance as soon as
the TLB op is done (the tlb_flush functions will complete at the end
of the TLB, and TLB ops always end the TB because they’re sysreg
writes).
So we’re already compliant and all we need to do is say so in the ID
registers for the 'max' CPU.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220819110052.2942289-6-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
In Armv8.6, a new AArch32 ID register ID_DFR1 is defined; implement
it. We don't have any CPUs with features that they need to advertise
here yet, but plumbing in the ID register gives it the right name
when debugging and will help in future when we do add a CPU that
has non-zero ID_DFR1 fields.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220819110052.2942289-5-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
In Armv8.6 a new AArch32 ID register ID_MMFR5 is defined.
Implement this; we want to be able to use it to report to
the guest that we implement FEAT_ETS.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220819110052.2942289-4-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The code that reads the AArch32 ID registers from KVM in
kvm_arm_get_host_cpu_features() does so almost but not quite in
encoding order. Move the read of ID_PFR2 down so it's really in
encoding order.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220819110052.2942289-3-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
In the AArch32 ID register scheme, coprocessor registers with
encoding cp15, 0, c0, c{0-7}, {0-7} are all in the space covered by
what in v6 and v7 was called the "CPUID scheme", and are supposed to
RAZ if they're not allocated to a specific ID register. For our
pre-v8 CPUs we get this right, because the regdefs in
id_pre_v8_midr_cp_reginfo[] cover these RAZ requirements. However
for v8 we failed to put in the necessary patterns to cover this, so
we end up UNDEFing on everything we didn't have an ID register for.
This is a problem because in Armv8 some encodings in 0, c0, c3, {0-7}
are now being used for new ID registers, and guests might thus start
trying to read them. (We already have one of these: ID_PFR2.)
For v8 CPUs, we already have regdefs for 0, c0, c{0-2}, {0-7} (that
is, the space is completely allocated with no reserved spaces). Add
entries to v8_idregs[] covering 0, c0, c3, {0-7}:
* c3, {0-2} is the reserved AArch32 space corresponding to the
AArch64 MVFR[012]_EL1
* c3, {3,5,6,7} are reserved RAZ for both AArch32 and AArch64
(in fact some of these are given defined meanings in Armv8.6,
but we don't implement them yet)
* c3, 4 is ID_PFR2 (already defined)
We then programmatically add RAZ patterns for AArch32 for
0, c0, c{4..15}, {0-7}:
* c4-c7 are unused, and not shared with AArch64 (these
are the encodings corresponding to where the AArch64
specific ID registers live in the system register space)
* c8-c15 weren't required to RAZ in v6/v7, but v8 extends
the AArch32 reserved-should-RAZ space to cover these;
the equivalent area of the AArch64 sysreg space is not
defined as must-RAZ
Note that the architecture allows some registers in this space
to return an UNKNOWN value; we always return 0.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220819110052.2942289-2-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Add cortex A35 core and enable it for virt board.
Signed-off-by: Hao Wu <wuhaotsh@google.com>
Reviewed-by: Joe Komlodi <komlodi@google.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20220819002015.1663247-1-wuhaotsh@google.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The riscv target incorrectly enabled semihosting always, whether the
user asked for it or not. Call semihosting_enabled() passing the
correct value to the is_userspace argument, which fixes this and also
handles the userspace=on argument. Because we do this at translate
time, we no longer need to check the privilege level in
riscv_cpu_do_interrupt().
Note that this is a behaviour change: we used to default to
semihosting being enabled, and now the user must pass
"-semihosting-config enable=on" if they want it.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20220822141230.3658237-8-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Honour the commandline -semihosting-config userspace=on option,
instead of always permitting userspace semihosting calls in system
emulation mode, by passing the correct value to the is_userspace
argument of semihosting_enabled().
Note that this is a behaviour change: if the user wants to
do semihosting calls from userspace they must now specifically
enable them on the command line.
xtensa semihosting is not implemented for linux-user builds.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Max Filippov <jcmvbkbc@gmail.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220822141230.3658237-7-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Honour the commandline -semihosting-config userspace=on option,
instead of always permitting userspace semihosting calls in system
emulation mode, by passing the correct value to the is_userspace
argument of semihosting_enabled().
Note that this is a behaviour change: if the user wants to
do semihosting calls from userspace they must now specifically
enable them on the command line.
nios2 semihosting is not implemented for linux-user builds.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220822141230.3658237-6-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Honour the commandline -semihosting-config userspace=on option,
instead of always permitting userspace semihosting calls in system
emulation mode, by passing the correct value to the is_userspace
argument of semihosting_enabled().
Note that this is a behaviour change: if the user wants to
do semihosting calls from userspace they must now specifically
enable them on the command line.
MIPS semihosting is not implemented for linux-user builds.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220822141230.3658237-5-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Honour the commandline -semihosting-config userspace=on option,
instead of never permitting userspace semihosting calls in system
emulation mode, by passing the correct value to the is_userspace
argument of semihosting_enabled(), instead of manually checking and
always forbidding semihosting if the guest is in userspace.
(Note that target/m68k doesn't support semihosting at all
in the linux-user build.)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220822141230.3658237-4-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Honour the commandline -semihosting-config userspace=on option,
instead of never permitting userspace semihosting calls in system
emulation mode, by passing the correct value to the is_userspace
argument of semihosting_enabled(), instead of manually checking and
always forbidding semihosting if the guest is in userspace and this
isn't the linux-user build.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220822141230.3658237-3-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Currently our semihosting implementations generally prohibit use of
semihosting calls in system emulation from the guest userspace. This
is a very long standing behaviour justified originally "to provide
some semblance of security" (since code with access to the
semihosting ABI can do things like read and write arbitrary files on
the host system). However, it is sometimes useful to be able to run
trusted guest code which performs semihosting calls from guest
userspace, notably for test code. Add a command line suboption to
the existing semihosting-config option group so that you can
explicitly opt in to semihosting from guest userspace with
-semihosting-config userspace=on
(There is no equivalent option for the user-mode emulator, because
there by definition all code runs in userspace and has access to
semihosting already.)
This commit adds the infrastructure for the command line option and
adds a bool 'is_user' parameter to the function
semihosting_enabled() that target code can use to check
whether it should be permitting the semihosting call for userspace.
It mechanically makes all the callsites pass 'false', so they
continue checking "is semihosting enabled in general". Subsequent
commits will make each target that implements semihosting honour the
userspace=on option by passing the correct value and removing
whatever "don't do this for userspace" checking they were doing by
hand.
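As an illustration only (the exact call sites differ per target and this is
not the verbatim patch), a translator goes from hard-coding 'false' to passing
its own notion of "currently executing in guest userspace", e.g. Arm-style:
    /* Hypothetical Arm-style call site after this series: the argument says
     * whether the guest is currently in userspace (EL0). */
    if (semihosting_enabled(s->current_el == 0)) {
        gen_exception_internal(EXCP_SEMIHOST);
    } else {
        /* otherwise treat the insn as an ordinary breakpoint/undef */
    }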
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220822141230.3658237-2-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The semihosting abi used by m68k uses the gdb remote
protocol filesys errnos.
Acked-by: Laurent Vivier <laurent@vivier.eu>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
This separates guest file descriptors from host file descriptors,
and utilizes shared infrastructure for integration with gdbstub.
Acked-by: Laurent Vivier <laurent@vivier.eu>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
This separates guest file descriptors from host file descriptors,
and utilizes shared infrastructure for integration with gdbstub.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The sscofpmf extension was ratified as a part of priv spec v1.12.
Mark the csr_ops accordingly.
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
Message-Id: <20220824221701.41932-6-atishp@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The QEMU virt machine can support a few cache events and cycle/instret counters.
It also supports counter overflow for these events.
Add a DT node so that OpenSBI/Linux kernel is aware of the virt machine
capabilities. There are some dummy nodes added for testing as well.
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Atish Patra <atish.patra@wdc.com>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
Message-Id: <20220824221701.41932-5-atishp@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
QEMU can monitor the following cache-related PMU events through the
tlb_fill functions:
1. DTLB load/store miss
2. ITLB prefetch miss
Increment the PMU counter in the tlb_fill function.
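As a rough sketch (the helper and event names here are assumptions for
illustration, not verified against the final series), the tlb_fill path would
bump the matching event counter:
    /* Sketch only: increment a cache-related PMU event from the TLB miss path */
    if (access_type == MMU_DATA_LOAD) {
        riscv_pmu_incr_ctr(cpu, RISCV_PMU_EVENT_CACHE_DTLB_READ_MISS);
    } else if (access_type == MMU_DATA_STORE) {
        riscv_pmu_incr_ctr(cpu, RISCV_PMU_EVENT_CACHE_DTLB_WRITE_MISS);
    } else if (access_type == MMU_INST_FETCH) {
        riscv_pmu_incr_ctr(cpu, RISCV_PMU_EVENT_CACHE_ITLB_PREFETCH_MISS);
    }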
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Tested-by: Heiko Stuebner <heiko@sntech.de>
Signed-off-by: Atish Patra <atish.patra@wdc.com>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
Message-Id: <20220824221701.41932-4-atishp@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
All the hpmcounters and the fixed counters (CY, IR, TM) can be represented
as a unified counter. Thus, the predicate function doesn't need to handle each
case separately.
Simplify the predicate function so that we just handle things differently
between RV32/RV64 and S/HS mode.
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
Message-Id: <20220824221701.41932-3-atishp@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The Sscofpmf ('Ss' for Privileged arch and Supervisor-level extensions,
and 'cofpmf' for Count OverFlow and Privilege Mode Filtering)
extension allows perf to handle overflow interrupts and provides filtering
support. This patch provides a framework for programmable
counters to leverage the extension. As the extension doesn't have any
provision for the overflow bit for fixed counters, the fixed events
can also be monitored using programmable counters. The underlying
counters for cycle and instruction counters are always running. Thus,
a separate timer device is programmed to handle the overflow.
Tested-by: Heiko Stuebner <heiko@sntech.de>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Atish Patra <atish.patra@wdc.com>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
Message-Id: <20220824221701.41932-2-atishp@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The vstimecmp CSR allows the guest OS to program the next guest timer
interrupt directly. Thus, the hypervisor no longer needs to inject the
timer interrupt into the guest if vstimecmp is used. This was ratified
as a part of the Sstc extension.
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
Message-Id: <20220824221357.41070-4-atishp@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
stimecmp allows supervisor mode to update the stimecmp CSR directly
to program the next timer interrupt. This CSR is part of the Sstc
extension which was ratified recently.
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
Message-Id: <20220824221357.41070-3-atishp@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Historically, mtime/mtimecmp have been part of the CPU because
they are per-hart entities. However, they actually belong to the ACLINT,
which is an MMIO device.
Move them to the ACLINT device. This also emulates the real hardware
more closely.
Reviewed-by: Anup Patel <anup@brainfault.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
Message-Id: <20220824221357.41070-2-atishp@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The arch review of AIA spec is completed and we now have official
extension names for AIA: Smaia (M-mode AIA CSRs) and Ssaia (S-mode
AIA CSRs).
Refer to section 1.6 of the latest AIA v0.3.1 stable specification at
https://github.com/riscv/riscv-aia/releases/download/0.3.1-draft.32/riscv-interrupts-032.pdf
Based on the above, we update QEMU RISC-V to:
1) Have separate config options for Smaia and Ssaia extensions
which replace RISCV_FEATURE_AIA in CPU features
2) Not generate AIA INTC compatible string in virt machine
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20220820042958.377018-1-apatel@ventanamicro.com
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
XVentanaCondOps is a Ventana custom extension. Add
its extension entry to the ISA Ext array.
Signed-off-by: Rahul Pathak <rpathak@ventanamicro.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20220816045408.1231135-1-rpathak@ventanamicro.com
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Normally, riscv_csrrw_check is called when executing Zicsr instructions,
and we can only do access control for CSRs that exist. So the priority of
the CSR-related checks, from highest to lowest, should be as follows:
1) check whether Zicsr is supported: raise RISCV_EXCP_ILLEGAL_INST if not
2) check whether the CSR exists: raise RISCV_EXCP_ILLEGAL_INST if not
3) do access control: raise RISCV_EXCP_ILLEGAL_INST or RISCV_EXCP_VIRT_
INSTRUCTION_FAULT if not allowed
The predicates implement parts of both 2) and 3), so they need
to be placed in the middle of riscv_csrrw_check.
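A minimal sketch of the intended ordering (field and helper names are
illustrative, not the exact QEMU code):
    /* 1) Zicsr support */
    if (!cpu->cfg.ext_icsr) {                      /* config name assumed */
        return RISCV_EXCP_ILLEGAL_INST;
    }
    /* 2) CSR existence */
    if (csrno >= CSR_TABLE_SIZE || !csr_ops[csrno].predicate) {
        return RISCV_EXCP_ILLEGAL_INST;
    }
    /* 3) access control: the predicate picks between ILLEGAL_INST and
     * VIRT_INSTRUCTION_FAULT */
    ret = csr_ops[csrno].predicate(env, csrno);
    if (ret != RISCV_EXCP_NONE) {
        return ret;
    }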
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20220803123652.3700-1-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Added support for the RISC-V PAUSE instruction from the Zihintpause extension,
enabled by default.
Tested-by: Heiko Stuebner <heiko@sntech.de>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Dao Lu <daolu@rivosinc.com>
Message-Id: <20220725034728.2620750-2-daolu@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
According to v-spec, mask agnostic behavior can be either kept as
undisturbed or set elements' bits to all 1s. To distinguish the
difference of mask policies, QEMU should be able to simulate the mask
agnostic behavior as "set mask elements' bits to all 1s".
There are multiple possibilities for agnostic elements according to the
v-spec. The main intent of this patch set is to add an option that
can distinguish between mask policies. Setting agnostic elements to
all 1s allows QEMU to express this.
This commit adds the option 'rvv_ma_all_1s' to enable the
behavior; it is disabled by default.
Signed-off-by: eop Chen <eop.chen@sifive.com>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <165570784143.17634.35095816584573691-10@git.sr.ht>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
According to v-spec, mask agnostic behavior can be either kept as
undisturbed or set elements' bits to all 1s. To distinguish the
difference of mask policies, QEMU should be able to simulate the mask
agnostic behavior as "set mask elements' bits to all 1s".
There are multiple possibilities for agnostic elements according to the
v-spec. The main intent of this patch set is to add an option that
can distinguish between mask policies. Setting agnostic elements to
all 1s allows QEMU to express this.
This is the first commit regarding the optional mask agnostic
behavior. Follow-up commits will add this optional behavior
for all rvv instructions.
Signed-off-by: eop Chen <eop.chen@sifive.com>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <165570784143.17634.35095816584573691-1@git.sr.ht>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Just add 1 to the effective privilege level when in HS mode, then reuse
the check of 'effective_priv < csr_priv' in riscv_csrrw_check to replace
the privilege level related check in hmode. Then, hmode will only check
whether H extension is supported.
When accessing Hypervisor CSRs:
1) If accessing from M privilege level, the check of
'effective_priv < csr_priv' passes, returns hmode(...) which will return
RISCV_EXCP_ILLEGAL_INST when H extension is not supported and return
RISCV_EXCP_NONE otherwise.
2) If accessing from HS privilege level, effective_priv will add 1,
the check passes and also returns hmode(...) too.
3) If accessing from VS/VU privilege level, the check fails, and
returns RISCV_EXCP_VIRT_INSTRUCTION_FAULT
4) If accessing from U privilege level, the check fails, and returns
RISCV_EXCP_ILLEGAL_INST
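Illustratively (not the literal patch), the check described above boils down
to something like:
    /* Sketch: HS-mode is treated as one level above S for Hypervisor CSRs */
    int effective_priv = env->priv;
    if (riscv_has_ext(env, RVH) &&
        env->priv == PRV_S && !riscv_cpu_virt_enabled(env)) {
        effective_priv++;
    }
    if (effective_priv < csr_priv) {
        return riscv_cpu_virt_enabled(env) ?
               RISCV_EXCP_VIRT_INSTRUCTION_FAULT : RISCV_EXCP_ILLEGAL_INST;
    }
    /* M and HS reach this point and fall through to hmode(), which only
     * checks whether the H extension is supported. */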
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Message-Id: <20220718130955.11899-7-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Add a check for the implicit dependence between H and S.
CSRs that only exist in RV32 will not trigger a virtual instruction fault
when not in RV32, based on section 8.6.1 of the riscv-privileged spec
(draft-20220717).
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20220718130955.11899-6-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Add umode/umode32 predicate for mcounteren, menvcfg/menvcfgh
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Message-Id: <20220718130955.11899-5-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Fix the lines with over 80 characters.
Fix the lines which are obviously misaligned with other lines in the
same group.
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Message-Id: <20220718130955.11899-4-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Add check for "H depends on an I base integer ISA with 32 x registers"
which is stated at the beginning of chapter 8 of the riscv-privileged
spec (draft-20220717)
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Message-Id: <20220718130955.11899-3-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
There are 3 suggested privilege mode combinations listed in section 1.2
of the riscv-privileged spec (draft-20220717):
1) M  2) M, U  3) M, S, U
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Message-Id: <20220718130955.11899-2-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
- Zmmul is ratified and is now version 1.0
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20220710101546.3907-1-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
For rv128c shifts, a shamt of 0 is a shamt of 64, while for rv32c/rv64c
it stays 0 and is a hint instruction that does not change processor state.
For rv128c right shifts, the 6-bit shamt is additionally sign-extended to
7 bits.
Signed-off-by: Frédéric Pétrot <frederic.petrot@univ-grenoble-alpes.fr>
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220710110451.245567-1-frederic.petrot@univ-grenoble-alpes.fr>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
We should disable extensions in riscv_cpu_realize() if the minimum required
priv spec version is not satisfied. This also ensures that machines with
priv spec v1.11 (or lower) cannot enable H, V, and various multi-letter
extensions.
Fixes: a775398be2 ("target/riscv: Add isa extenstion strings to the device tree")
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Signed-off-by: Rahul Pathak <rpathak@ventanamicro.com>
Message-Id: <20220630061150.905174-3-apatel@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
We should write the transformed instruction encoding of the trapped
instruction into the [m|h]tinst CSR at the time of taking a trap, as defined
by the RISC-V privileged specification v1.12.
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Acked-by: dramforever <dramforever@live.com>
Message-Id: <20220630061150.905174-2-apatel@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Right now the translator stops right *after* the end of a page, which
breaks reporting of fault locations when the last instruction of a
multi-insn translation block crosses a page boundary.
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1155
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Acked-by: Ilya Leoshkevich <iii@linux.ibm.com>
Tested-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
These will be useful in properly ending the TB.
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Acked-by: Ilya Leoshkevich <iii@linux.ibm.com>
Tested-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Right now translator stops right *after* the end of a page, which
breaks reporting of fault locations when the last instruction of a
multi-insn translation block crosses a page boundary.
An implementation, like the one arm and s390x have, would require an
i386 length disassembler, which is burdensome to maintain. Another
alternative would be to single-step at the end of a guest page, but
this may come with a performance impact.
Fix by snapshotting disassembly state and restoring it after we figure
out we crossed a page boundary. This includes rolling back cc_op
updates and emitted ops.
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1143
Message-Id: <20220817150506.592862-4-iii@linux.ibm.com>
[rth: Simplify end-of-insn cross-page checks.]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Right now translator stops right *after* the end of a page, which
breaks reporting of fault locations when the last instruction of a
multi-insn translation block crosses a page boundary.
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220817150506.592862-3-iii@linux.ibm.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Pass these along to translator_loop -- pc may be used instead
of tb->pc, and host_pc is currently unused. Adjust all targets
at one time.
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Acked-by: Ilya Leoshkevich <iii@linux.ibm.com>
Tested-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The only user can easily use translator_lduw and
adjust the type to signed during the return.
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Acked-by: Ilya Leoshkevich <iii@linux.ibm.com>
Tested-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
When running SMP systems we sometimes were seeing lockups where
IPI interrupts were being raised but never handled.
This looks to be caused by 2 issues in the openrisc interrupt handling
logic.
1. After clearing an interrupt the openrisc_cpu_set_irq handler will
always clear PICSR. This is not correct as masked interrupts
should still be visible in PICSR.
2. After setting PICMR (mask register), newly exposed interrupts should
cause an interrupt to be raised. This was not being done, so add it.
This patch fixes both issues.
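A rough sketch of the second fix (register and field names follow the
description above; this is not the verbatim patch):
    /* Sketch: on a PICMR update, re-evaluate pending sources against the new
     * mask and raise the CPU interrupt line if anything is now exposed. */
    env->picmr = new_mask;
    if (env->picsr & env->picmr) {
        cpu_interrupt(cs, CPU_INTERRUPT_HARD);
    }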
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Stafford Horne <shorne@gmail.com>
This patch enables multithread TCG for OpenRISC. Since the or1k shared
synchronized timer can be updated from each vCPU via helpers we use a
mutex to synchronize updates.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Stafford Horne <shorne@gmail.com>
When we are tracing it's helpful to know which CPUs are getting
interrupted; add that detail to the log line.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Stafford Horne <shorne@gmail.com>
In commit f0655423ca ("target/openrisc: Reorg tlb lookup") data and
instruction TLB reads were combined. This broke debugger reads, where
we first tried to map using the data TLB and then fell back to the
instruction TLB.
This patch replicates this logic by first requesting a PAGE_READ
protection mapping then falling back to PAGE_EXEC.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Stafford Horne <shorne@gmail.com>
Make the AES vector helpers AVX ready
No functional changes to existing helpers
Signed-off-by: Paul Brook <paul@nowt.org>
Message-Id: <20220424220204.2493824-22-paul@nowt.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Make the pclmulqdq helper AVX ready
Signed-off-by: Paul Brook <paul@nowt.org>
Message-Id: <20220424220204.2493824-21-paul@nowt.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Rewrite the blendv helpers so that they can easily be extended to support
the AVX encodings, which make all 4 arguments explicit.
No functional changes to the existing helpers
Signed-off-by: Paul Brook <paul@nowt.org>
Message-Id: <20220424220204.2493824-20-paul@nowt.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Fixup various vector helpers that either trivially extend to 256 bit,
or don't have 256 bit variants.
No functional changes to existing helpers
Signed-off-by: Paul Brook <paul@nowt.org>
Message-Id: <20220424220204.2493824-19-paul@nowt.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Prepare the horizontal arithmetic vector helpers for AVX
These currently use a dummy Reg typed variable to store the result then
assign the whole register. This will cause 128 bit operations to corrupt
the upper half of the register, so replace it with explicit temporaries
and element assignments.
Signed-off-by: Paul Brook <paul@nowt.org>
Message-Id: <20220424220204.2493824-18-paul@nowt.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Make the dpps and dppd helpers AVX-ready
I can't see any obvious reason why dppd shouldn't work on 256 bit ymm
registers, but both AMD and Intel agree that it's xmm only.
Signed-off-by: Paul Brook <paul@nowt.org>
Message-Id: <20220424220204.2493824-17-paul@nowt.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
AVX includes an additional set of comparison predicates, some of which
our softfloat implementation does not expose as separate functions.
Rewrite the helpers in terms of floatN_compare for future extensibility.
Signed-off-by: Paul Brook <paul@nowt.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220424220204.2493824-24-paul@nowt.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Prepare the "easy" floating point vector helpers for AVX
No functional changes to existing helpers.
Signed-off-by: Paul Brook <paul@nowt.org>
Message-Id: <20220424220204.2493824-16-paul@nowt.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
These helpers need to take special care to avoid overwriting source values
before the whole result has been calculated. Currently they use a dummy
Reg typed variable to store the result then assign the whole register.
This will cause 128 bit operations to corrupt the upper half of the register,
so replace it with explicit temporaries and element assignments.
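The shape of the rewrite, as a hedged sketch (accessor names follow the usual
ops_sse.h conventions, but this is not the exact patch):
    /* Sketch (haddps-like case): compute into locals first, then write back,
     * so a 128-bit op neither clobbers its own inputs nor the upper lanes. */
    float32 r0 = float32_add(v->ZMM_S(0), v->ZMM_S(1), &env->sse_status);
    float32 r1 = float32_add(v->ZMM_S(2), v->ZMM_S(3), &env->sse_status);
    float32 r2 = float32_add(s->ZMM_S(0), s->ZMM_S(1), &env->sse_status);
    float32 r3 = float32_add(s->ZMM_S(2), s->ZMM_S(3), &env->sse_status);
    d->ZMM_S(0) = r0;
    d->ZMM_S(1) = r1;
    d->ZMM_S(2) = r2;
    d->ZMM_S(3) = r3;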
Signed-off-by: Paul Brook <paul@nowt.org>
Message-Id: <20220424220204.2493824-14-paul@nowt.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
More preparatory work for AVX support in various integer vector helpers
No functional changes to existing helpers.
Signed-off-by: Paul Brook <paul@nowt.org>
Message-Id: <20220424220204.2493824-13-paul@nowt.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Rewrite the "simple" vector integer helpers in preperation for AVX support.
While the current code is able to use the same prototype for unary
(a = F(b)) and binary (a = F(b, c)) operations, future changes will cause
them to diverge.
No functional changes to existing helpers
Signed-off-by: Paul Brook <paul@nowt.org>
Message-Id: <20220424220204.2493824-12-paul@nowt.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Rewrite the vector shift helpers in preparation for AVX support (3 operand
form and 256 bit vectors).
For now keep the existing two operand interface.
No functional changes to existing helpers.
Signed-off-by: Paul Brook <paul@nowt.org>
Message-Id: <20220424220204.2493824-11-paul@nowt.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Use a union to store the various possible kinds of function pointers, and
access the correct one based on the flags.
SSEOpHelper_table6 and SSEOpHelper_table7 right now only have one case,
but this would change with AVX's 3- and 4-argument operations. Use
unions there too, to keep the code more similar for the three tables.
Extracted from a patch by Paul Brook <paul@nowt.org>.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
For AVX we're going to need both 128 bit (xmm) and 256 bit (ymm) variants of
floating point helpers. Add the register type suffix to the existing
*PS and *PD helpers (SS and SD variants are only valid on 128 bit vectors)
No functional changes.
Signed-off-by: Paul Brook <paul@nowt.org>
Message-Id: <20220424220204.2493824-15-paul@nowt.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Extracted from a patch by Paul Brook <paul@nowt.org>.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Put more flags to work to avoid hardcoding lists of opcodes. The op7 case
for SSE_OPF_CMP is included for homogeneity and because AVX needs it, but
it is never used by SSE or MMX.
Extracted from a patch by Paul Brook <paul@nowt.org>.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Handle 3DNOW instructions early to avoid complicating the MMX/SSE logic.
Signed-off-by: Paul Brook <paul@nowt.org>
Message-Id: <20220424220204.2493824-25-paul@nowt.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Add a flags field to each row in sse_op_table6 and sse_op_table7.
Initially this is only used as a replacement for the magic SSE41_SPECIAL
pointer. The other flags are mostly relevant for the AVX implementation
but can be applied to SSE as well.
Signed-off-by: Paul Brook <paul@nowt.org>
Message-Id: <20220424220204.2493824-6-paul@nowt.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Add a flags field to each row in sse_op_table1.
Initially this is only used as a replacement for the magic
SSE_SPECIAL and SSE_DUMMY pointers, the other flags are mostly
relevant for the AVX implementation but can be applied to SSE as well.
Signed-off-by: Paul Brook <paul@nowt.org>
Message-Id: <20220424220204.2493824-5-paul@nowt.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Add a convenience macro to get the address of an xmm_regs element within
CPUX86State.
This was originally going to be the basis of an implementation that broke
operations into 128 bit chunks. I scrapped that idea, so this is now a purely
cosmetic change. But I think a worthwhile one - it reduces the number of
function calls that need to be split over multiple lines.
No functional changes.
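Such a macro would plausibly look like the following (the name is an
assumption made for illustration, not taken from the patch):
    /* Sketch: address of an xmm register inside CPUX86State, handy for
     * tcg_gen_*(..., cpu_env, offset) calls. */
    #define ZMM_OFFSET(reg) offsetof(CPUX86State, xmm_regs[reg])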
Signed-off-by: Paul Brook <paul@nowt.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220424220204.2493824-9-paul@nowt.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Extracted from a patch by Paul Brook <paul@nowt.org>.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Write down explicitly the load/store sequence.
Extracted from a patch by Paul Brook <paul@nowt.org>.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
In the first 7.2 queue we have changes in the powernv pnv-phb handling,
the start of the QOMification of the ppc405 model, the removal of the
taihu machine, a new SLOF image and others.
-----BEGIN PGP SIGNATURE-----
iHUEABYKAB0WIQQX6/+ZI9AYAK8oOBk82cqW3gMxZAUCYw/AFgAKCRA82cqW3gMx
ZI6XAP0d8m6r1JqKXPSfCwVYy+AfrwY7oZWYbeTqdamK6xHcUQD+JyCcFcogY4Vz
YwvHLd9W2cqvoWiZ4tmkK4Mb0Xt0Xg4=
=0uL/
-----END PGP SIGNATURE-----
Merge tag 'pull-ppc-20220831' of https://gitlab.com/danielhb/qemu into staging
ppc patch queue for 2022-08-31:
In the first 7.2 queue we have changes in the powernv pnv-phb handling,
the start of the QOMification of the ppc405 model, the removal of the
taihu machine, a new SLOF image and others.
# -----BEGIN PGP SIGNATURE-----
#
# iHUEABYKAB0WIQQX6/+ZI9AYAK8oOBk82cqW3gMxZAUCYw/AFgAKCRA82cqW3gMx
# ZI6XAP0d8m6r1JqKXPSfCwVYy+AfrwY7oZWYbeTqdamK6xHcUQD+JyCcFcogY4Vz
# YwvHLd9W2cqvoWiZ4tmkK4Mb0Xt0Xg4=
# =0uL/
# -----END PGP SIGNATURE-----
# gpg: Signature made Wed 31 Aug 2022 16:09:58 EDT
# gpg: using EDDSA key 17EBFF9923D01800AF2838193CD9CA96DE033164
# gpg: Good signature from "Daniel Henrique Barboza <danielhb413@gmail.com>" [unknown]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 17EB FF99 23D0 1800 AF28 3819 3CD9 CA96 DE03 3164
* tag 'pull-ppc-20220831' of https://gitlab.com/danielhb/qemu: (60 commits)
ppc4xx: Fix code style problems reported by checkpatch
ppc/ppc4xx: Fix sdram trace events
hw/ppc/Kconfig: Move imply before select
hw/ppc/sam460ex: Remove PPC405 dependency from sam460ex
ppc405: Move machine specific code to ppc405_boards.c
ppc/ppc405: QOM'ify FPGA
ppc/ppc405: Use an explicit I2C object
hw/intc/ppc-uic: Convert ppc-uic to a PPC4xx DCR device
ppc/ppc405: Use an embedded PPCUIC model in SoC state
ppc4xx: Rename ppc405-ebc to ppc4xx-ebc
ppc4xx: Move EBC model to ppc4xx_devs.c
ppc4xx: Rename ppc405-plb to ppc4xx-plb
ppc4xx: Move PLB model to ppc4xx_devs.c
ppc/ppc405: QOM'ify MAL
ppc/ppc405: QOM'ify PLB
ppc/ppc405: QOM'ify POB
ppc/ppc405: QOM'ify OPBA
ppc/ppc405: QOM'ify EBC
ppc/ppc405: QOM'ify DMA
ppc/ppc405: QOM'ify GPIO
...
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
The DPPS (Dot Product) instruction is defined to first sum pairs of
intermediate results, then sum those values to get the final result.
i.e. (A+B)+(C+D)
We incrementally sum the results, i.e. ((A+B)+C)+D, which can result
in incorrect rounding.
For consistency, also change the variable names to the ones used
in the Intel SDM and implement DPPD following the manual.
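To make the rounding difference concrete, a sketch of the summation order the
SDM prescribes (plain C, illustration only):
    /* Sum pairs first, then the pair sums: (A+B)+(C+D).
     * The old incremental order ((A+B)+C)+D can round differently. */
    float tmp0 = a0 + a1;      /* A + B */
    float tmp1 = a2 + a3;      /* C + D */
    float dp   = tmp0 + tmp1;  /* final dot-product term */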
Based on a patch by Paul Brook <paul@nowt.org>.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The computation must overwrite neither the destination
nor the source before the last element has been computed.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
kvm_put_sregs2() fails to reset 'locked' CR4/CR0 bits upon vCPU reset when
it is in VMX root operation. Do kvm_put_msr_feature_control() before
kvm_put_sregs2() to (possibly) kick vCPU out of VMX root operation. It also
seems logical to do kvm_put_msr_feature_control() before
kvm_put_nested_state() and not after it, especially when 'real' nested
state is set.
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20220818150113.479917-3-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Make sure env->nested_state is cleaned up when a vCPU is reset; it may
be stale after an incoming migration, and kvm_arch_put_registers() may
end up failing or putting the vCPU in a weird state.
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20220818150113.479917-2-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This bit is not saved across interrupts, so we must
delay delivering the interrupt until the skip has
been processed.
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1118
Reviewed-by: Michael Rolnik <mrolnik@gmail.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
We cannot deliver two interrupts simultaneously;
the first interrupt handler must execute first.
Reviewed-by: Michael Rolnik <mrolnik@gmail.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
There is no need to go through cc->tcg_ops when
we know what value that must have.
Reviewed-by: Michael Rolnik <mrolnik@gmail.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
While there are no target-specific nonfaulting probes,
generic code may grow some uses at some point.
Note that the attrs argument was incorrect -- it should have
been MEMTXATTRS_UNSPECIFIED. Just use the simpler interface.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
When an overflow exception occurs and OE is set, the intermediate result
should be adjusted (by subtracting from the exponent) to avoid rounding
to inf. The same applies to an underflow exception and UE (but adding
to the exponent). To do this, set fp_status.rebias_overflow when OE
is set and fp_status.rebias_underflow when UE is set, as the FPU will
recalculate in case of an overflow/underflow if the corresponding rebias*
flag is set.
Signed-off-by: Lucas Mateus Castro (alqotel) <lucas.araujo@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220805141522.412864-3-lucas.araujo@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
ppc_cpu_compare_class_pvr_mask() should match the best CPU class in the
family, because it is used by the KVM subsystem to find the host CPU
class. Since commit 03ae4133ab ("target-ppc: Add pvr_match()
callback"), it matches any class in the family (the first one in the
comparison list).
Since commit f30c843ced ("ppc/pnv: Introduce PowerNV machines with
fixed CPU models"), pnv has relied on pnv_match having these new
semantics to check machine compatibility with a CPU family.
Resolve this by adding a parameter to the pvr_match function to select
the best or any match, and restore the old behaviour for the KVM case.
Prior to this fix, e.g., a POWER9 DD2.3 KVM host matches to the
power9_v1.0 class (because that happens to be the first POWER9 family
CPU compared). After the patch, it matches the power9_v2.0 class.
This approach requires pnv_match contain knowledge of the CPU classes
implemented in the same family, which feels ugly. But pushing the 'best'
match down to the class would still require they know about one another
which is not obviously much better. For now this gets things working.
Fixes: 03ae4133ab ("target-ppc: Add pvr_match() callback")
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20220731013358.170187-1-npiggin@gmail.com>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
I2 is 16 bits, not 32.
Found by running valgrind's none/tests/s390x/traps.
Fixes: 1c26875182 ("target-s390: Implement COMPARE AND TRAP")
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Message-Id: <20220817161529.597414-1-iii@linux.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Add stfle 197 (processor-activity-instrumentation extension 1) to the
gen16 default model and fence it off for 7.1 and older.
Signed-off-by: Christian Borntraeger <borntraeger@linux.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Message-Id: <20220727135120.12784-1-borntraeger@linux.ibm.com>
Acked-by: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
The proberi assembler instruction checks the read/write access rights
for the page of a given address and shall return a value of 1 if the
test succeeds and a value of 0 on failure in the target register.
But when run in linux-user mode, qemu currently simply returns the
return code of page_check_range() which returns 0 on success and -1 on
failure, which is the opposite of what proberi should return.
Fix it by checking the return code of page_check_range() and returning the
expected return value.
The easiest way to reproduce the issue is by running
"/lib/ld.so.1 --version" in a chroot which fails without this patch.
At startup of ld.so the __canonicalize_funcptr_for_compare() function is
used to resolve the function address out of a function descriptor, which
fails because proberi (due to the wrong return code) seems to indicate
that the given address isn't accessible.
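A hedged sketch of the kind of conversion described (not the literal patch):
    /* page_check_range() returns 0 on success and -1 on failure, while the
     * probe result register wants 1 on success and 0 on failure. */
    ret = page_check_range(addr, 1, prot);
    return ret == 0 ? 1 : 0;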
Signed-off-by: Helge Deller <deller@gmx.de>
1. Add some information about how to boot the LoongArch virt
machine with a UEFI BIOS and Linux kernel, and how to access the
source code or binary files.
2. Move the explanation of LoongArch system emulation in the
target/loongarch/README to docs/system/loongarch/loongson3.rst
Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn>
Reviewed-by: Song Gao <gaosong@loongson.cn>
Message-Id: <20220812091957.3338126-1-yangxiaojuan@loongson.cn>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The newly added neoverse-n1 CPU has ID register values which indicate
the presence of the Statistical Profiling Extension, because the real
hardware has this feature. QEMU's TCG emulation does not yet
implement SPE, though (not even as a minimal stub implementation), so
guests will crash if they try to use it because the SPE system
registers don't exist.
Force ID_AA64DFR0_EL1.PMSVer to 0 in CPU realize for TCG, so that
we don't advertise to the guest a feature that doesn't exist.
(We could alternatively do this by editing the value that
aarch64_neoverse_n1_initfn() sets for this ID register, but
suppressing the field in realize means we won't re-introduce this bug
when we add other CPUs that have SPE in hardware, such as the
Neoverse-V1.)
An example of a non-booting guest is current mainline Linux (5.19),
when booting in EL2 on the virt board (ie with -machine
virtualization=on).
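The squashing in realize would plausibly look like this (illustrative sketch
using the usual target/arm/cpu.h field macros, not quoted from the patch):
    /* Sketch: hide SPE from TCG guests by zeroing ID_AA64DFR0_EL1.PMSVer */
    cpu->isar.id_aa64dfr0 =
        FIELD_DP64(cpu->isar.id_aa64dfr0, ID_AA64DFR0, PMSVER, 0);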
Reported-by: Zenghui Yu <yuzenghui@huawei.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Zenghui Yu <yuzenghui@huawei.com>
Message-id: 20220811131127.947334-1-peter.maydell@linaro.org
All of the fpu operations are defined with TCG_CALL_NO_WG, but they
all modify FCSR0. The most efficient way to fix this is to remove
cpu_fcsr0, and instead use explicit load and store operations for the
two instructions that manipulate that value.
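Illustratively, the two instructions then access FCSR0 with explicit env
loads/stores instead of a global TCG temporary (a sketch, not the exact
patch):
    /* write side: store the new value into env->fcsr0 */
    tcg_gen_st32_tl(src, cpu_env, offsetof(CPULoongArchState, fcsr0));
    /* read side: load env->fcsr0 into the destination register */
    tcg_gen_ld32u_tl(dest, cpu_env, offsetof(CPULoongArchState, fcsr0));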
Acked-by: Qi Hu <huqi@loongson.cn>
Reviewed-by: Song Gao <gaosong@loongson.cn>
Reported-by: Feiyang Chen <chenfeiyang@loongson.cn>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Coverity notes that we forgot to check the error return from
lock_user() in one place in the handling of the UHI_plog semihosting
call. Add the missing error handling.
report_fault() is rather brutal in that it will call abort(), but
this is the same error-handling used in the rest of this file.
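The missing check amounts to something like the following sketch, using the
error path the message describes:
    p = lock_user(VERIFY_READ, addr, len, 1);
    if (!p) {
        /* same brutal handling as the rest of the file: abort via report_fault() */
        report_fault(env);
    }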
Resolves: Coverity CID 1490684
Fixes: ea4210600d ("target/mips: Avoid qemu_semihosting_log_out for UHI_plog")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220719191737.384744-1-peter.maydell@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
GDB's LoongArch FPU support uses the fcc register; update gdb_set_fpu()
and gdb_get_fpu() to match it.
Signed-off-by: Song Gao <gaosong@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20220805033523.1416837-6-gaosong@loongson.cn>
Rename loongarch-fpu64.xml to loongarch-fpu.xml and update
loongarch-fpu.xml to match upstream GDB [1]
[1]:https://github.com/bminor/binutils-gdb/blob/master/gdb/features/loongarch/fpu.xml
Signed-off-by: Song Gao <gaosong@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20220805033523.1416837-5-gaosong@loongson.cn>
GDB's LoongArch support adds a register orig_a0; see base64.xml [1].
We should add orig_a0 to match upstream GDB.
[1]: https://github.com/bminor/binutils-gdb/blob/master/gdb/features/loongarch/base64.xml
Signed-off-by: Song Gao <gaosong@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20220805033523.1416837-2-gaosong@loongson.cn>
The macros SET_FPU_* are used to set the corresponding bits of fcsr.
Unfortunately they forget to set the result, which causes fcsr's
"CAUSE" field never to be updated. This patch fixes the bug.
Signed-off-by: Qi Hu <huqi@loongson.cn>
Reviewed-by: Song Gao <gaosong@loongson.cn>
Message-Id: <20220804132450.314329-1-huqi@loongson.cn>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
When the user queries CPU models via QMP there is a 'deprecated' flag
present; however, this is not done for the CLI '-cpu help' command.
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
When the user queries CPU models via QMP there is a 'deprecated' flag
present; however, this is not done for the CLI '-cpu help' command.
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
When the user queries CPU models via QMP there is a 'deprecated' flag
present; however, this is not done for the CLI '-cpu help' command.
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Delay generating the exception until after we know the
insn length, and record that length in env->error_code.
Fixes: 8ec7e3c53d ("target/mips: Use an exception for semihosting")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1126
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
- Improve wordings in some files
-----BEGIN PGP SIGNATURE-----
iQJFBAABCAAvFiEEJ7iIR+7gJQEY8+q5LtnXdP5wLbUFAmLn6aYRHHRodXRoQHJl
ZGhhdC5jb20ACgkQLtnXdP5wLbWekg/+NVIT1jp3tcbfPIE6pB0vI/AhqN3i2hUd
zfJ4V3rSe5tg54JpmuuSt542mp4BDM9bPfYcY/DYESWEtW0c9wv80iP/5LFdJF0G
GYtk7Q4pRXvB32kF0v9OxjCEGPUeEqSRrDrsI/Ify5evEIhr55oHPnDjN/US1Bx+
TIuVfmyz8jhSPHsUvZzfVyFxkHre1+BWDxgM3zxoHFIaWEscIPE1KhwRILbKIxWx
MHpL8JLAneGFwljQoUAMCl7GzHkVna59RhqkbBJ+8iTaNGipQj9FhHZBo2CulO0J
SR7scWowYN8Jt2FNMe3tcKM2xQn/2Fg2TEK4sp6q+hCXhJuvFfWFHBiFYTNpagFA
LGgZmPfDr4uZtMEqY4AdEZdL14YZcoM9E/RpW7GhSvMHy73wOj16O8luH1bU0jtG
6X1VvAZlw8/Son1Tbq2CC6WejlMfJFXSzF6Fy6M7SflMPW44vJOs5uKdW405MYjE
Pksbfz1rwoNfK+1qBNQop7SccgDRvPtlLf3lDAU9V/JHWVEITs1KTfyS+46U8jKA
9SVBzKuTpVd+aXvMgvMAmmqnyvUBPHJ9KcFq4vHNbIETsGaQsXu0Q6waBmpcK8YB
KUL/g0EsdfhkpVVgKYZ4Bzj7shG6SKTdwc/lUcOt+wQuDrZZzaC+A2cu/6ReQN6T
BIHtoaxTz8E=
=K6RW
-----END PGP SIGNATURE-----
Merge tag 'pull-request-2022-08-01' of https://gitlab.com/thuth/qemu into staging
- Some fixes for various tests
- Improve wordings in some files
# -----BEGIN PGP SIGNATURE-----
#
# iQJFBAABCAAvFiEEJ7iIR+7gJQEY8+q5LtnXdP5wLbUFAmLn6aYRHHRodXRoQHJl
# ZGhhdC5jb20ACgkQLtnXdP5wLbWekg/+NVIT1jp3tcbfPIE6pB0vI/AhqN3i2hUd
# zfJ4V3rSe5tg54JpmuuSt542mp4BDM9bPfYcY/DYESWEtW0c9wv80iP/5LFdJF0G
# GYtk7Q4pRXvB32kF0v9OxjCEGPUeEqSRrDrsI/Ify5evEIhr55oHPnDjN/US1Bx+
# TIuVfmyz8jhSPHsUvZzfVyFxkHre1+BWDxgM3zxoHFIaWEscIPE1KhwRILbKIxWx
# MHpL8JLAneGFwljQoUAMCl7GzHkVna59RhqkbBJ+8iTaNGipQj9FhHZBo2CulO0J
# SR7scWowYN8Jt2FNMe3tcKM2xQn/2Fg2TEK4sp6q+hCXhJuvFfWFHBiFYTNpagFA
# LGgZmPfDr4uZtMEqY4AdEZdL14YZcoM9E/RpW7GhSvMHy73wOj16O8luH1bU0jtG
# 6X1VvAZlw8/Son1Tbq2CC6WejlMfJFXSzF6Fy6M7SflMPW44vJOs5uKdW405MYjE
# Pksbfz1rwoNfK+1qBNQop7SccgDRvPtlLf3lDAU9V/JHWVEITs1KTfyS+46U8jKA
# 9SVBzKuTpVd+aXvMgvMAmmqnyvUBPHJ9KcFq4vHNbIETsGaQsXu0Q6waBmpcK8YB
# KUL/g0EsdfhkpVVgKYZ4Bzj7shG6SKTdwc/lUcOt+wQuDrZZzaC+A2cu/6ReQN6T
# BIHtoaxTz8E=
# =K6RW
# -----END PGP SIGNATURE-----
# gpg: Signature made Mon 01 Aug 2022 07:56:38 AM PDT
# gpg: using RSA key 27B88847EEE0250118F3EAB92ED9D774FE702DB5
# gpg: issuer "thuth@redhat.com"
# gpg: Good signature from "Thomas Huth <th.huth@gmx.de>" [undefined]
# gpg: aka "Thomas Huth <thuth@redhat.com>" [undefined]
# gpg: aka "Thomas Huth <th.huth@posteo.de>" [unknown]
# gpg: aka "Thomas Huth <huth@tuxfamily.org>" [undefined]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 27B8 8847 EEE0 2501 18F3 EAB9 2ED9 D774 FE70 2DB5
* tag 'pull-request-2022-08-01' of https://gitlab.com/thuth/qemu:
tests/qtest/migration-test: Run the dirty ring tests only with the x86 target
trivial: Fix duplicated words
misc: fix commonly doubled up words
tests/unit/test-qga: Replace the word 'blacklist' in the guest agent unit test
migration-test: Allow test to run without uffd
migration-test: Use migrate_ensure_converge() for auto-converge
tests/tcg/linux-test: Fix random hangs in test_socket
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The test for the IF block indicates no ID registers are exposed, much
less host support for SVE. Move the SVE probe into the ELSE block.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220726045828.53697-4-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Because we weren't setting this flag, our probe of ID_AA64ZFR0
was always returning zero. This also obviates the adjustment
of ID_AA64PFR0, which had sanitized the SVE field.
The effects of the bug are not visible, because the only thing that
ID_AA64ZFR0 is used for within qemu at present is tcg translation.
The other tests for SVE within KVM are via ID_AA64PFR0.SVE.
Reported-by: Zenghui Yu <yuzenghui@huawei.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220726045828.53697-3-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Indication for support for SVE will not depend on whether we
perform the query on the main kvm_state or the temp vcpu.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220726045828.53697-2-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Some files wrongly contain the same word twice in a row.
One of them should be removed or replaced.
Message-Id: <20220722145859.1952732-1-thuth@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20220707163720.1421716-5-berrange@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Thomas Huth <thuth@redhat.com>
VyV operand is only used in the vshuff and vdeal instructions. These
instructions write to both VyV and VxV operands. In the case where
both operands are the same register, we need a separate location for
VyV. We use the existing vtmp field in CPUHexagonState.
Test case added in tests/tcg/hexagon/hvx_misc.c
Signed-off-by: Taylor Simpson <tsimpson@quicinc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220718230320.24444-2-tsimpson@quicinc.com>
1. Rename 'loongson3.c' to 'virt.c' and change the meson.build file.
2. Rename 'loongson3.rst' to 'virt.rst'.
Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn>
Message-Id: <20220729073018.27037-2-yangxiaojuan@loongson.cn>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
perror() is designed to append the decoded errno value to a
string. This, however, only makes sense if we called something that
actually sets errno prior to that.
For the callers that check for split irqchip support that is not the
case, and we end up with confusing error messages that end in
"success". Use error_report() instead.
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Message-Id: <20220728142446.438177-1-cohuck@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Short queue with 2 Coverity fixes and one fix of the
'wait' insns that is causing hangs if the guest kernel uses
the most up to date wait opcode.
- target/ppc:
- implement new wait variants to fix guest hang when using the new opcode
- ppc440_uc: initialize length passed to cpu_physical_memory_map()
- spapr_nvdimm: check if spapr_drc_index() returns NULL
-----BEGIN PGP SIGNATURE-----
iHUEABYKAB0WIQQX6/+ZI9AYAK8oOBk82cqW3gMxZAUCYuK8VgAKCRA82cqW3gMx
ZOc7AQDPMsFY9NHNqJ3O0MiX4Qoy8IGUreZ9dzZSS3zT1nxtEAD+Lwl0/aGO+dk+
+NiIO80A5Agy/0g8PHie4qR3EqHEnwA=
=Q4eR
-----END PGP SIGNATURE-----
Merge tag 'pull-ppc-20220728' of https://gitlab.com/danielhb/qemu into staging
ppc patch queue for 2022-07-28:
Short queue with 2 Coverity fixes and one fix of the
'wait' insns that is causing hangs if the guest kernel uses
the most up to date wait opcode.
- target/ppc:
- implement new wait variants to fix guest hang when using the new opcode
- ppc440_uc: initialize length passed to cpu_physical_memory_map()
- spapr_nvdimm: check if spapr_drc_index() returns NULL
# -----BEGIN PGP SIGNATURE-----
#
# iHUEABYKAB0WIQQX6/+ZI9AYAK8oOBk82cqW3gMxZAUCYuK8VgAKCRA82cqW3gMx
# ZOc7AQDPMsFY9NHNqJ3O0MiX4Qoy8IGUreZ9dzZSS3zT1nxtEAD+Lwl0/aGO+dk+
# +NiIO80A5Agy/0g8PHie4qR3EqHEnwA=
# =Q4eR
# -----END PGP SIGNATURE-----
# gpg: Signature made Thu 28 Jul 2022 09:41:58 AM PDT
# gpg: using EDDSA key 17EBFF9923D01800AF2838193CD9CA96DE033164
# gpg: Good signature from "Daniel Henrique Barboza <danielhb413@gmail.com>" [unknown]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 17EB FF99 23D0 1800 AF28 3819 3CD9 CA96 DE03 3164
* tag 'pull-ppc-20220728' of https://gitlab.com/danielhb/qemu:
target/ppc: Implement new wait variants
hw/ppc/ppc440_uc: Initialize length passed to cpu_physical_memory_map()
hw/ppc: check if spapr_drc_index() returns NULL in spapr_nvdimm.c
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
ISA v2.06 adds new variations of wait, specified by the WC field. These
are not all compatible with the prior wait implementation, because they
add additional conditions that cause the processor to resume, which can
cause software to hang or run very slowly.
At this moment, with the current wait implementation and a pseries guest
using a mainline kernel with the new wait opcodes [1], QEMU hangs during boot if
more than one CPU is present:
qemu-system-ppc64 -M pseries,x-vof=on -cpu POWER10 -smp 2 -nographic
-kernel zImage.pseries -no-reboot
QEMU will exit (as there's no filesystem) if the test "passes", or hang
during boot if it hits the bug.
ISA v3.0 changed the wait opcode and removed the new variants (retaining
the WC field but making non-zero values reserved).
ISA v3.1 added new WC values to the new wait opcode, and added a PL
field.
This patch implements the new wait encoding and supports WC variants
with no-op implementations, which provides basic correctness as
explained in comments.
[1] https://lore.kernel.org/all/20220720132132.903462-1-npiggin@gmail.com/
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Tested-by: Joel Stanley <joel@jms.id.au>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20220720133352.904263-1-npiggin@gmail.com>
[danielhb: added information about the bug being fixed]
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
We got to talking about how Zmmul and M interact with each other
https://github.com/riscv/riscv-isa-manual/issues/869 , and it turns out
that QEMU's behavior is slightly wrong: having Zmmul and M is a legal
combination, it just means that the multiplication instructions are
supported even when M is disabled at runtime via misa.
This just stops Zmmul from overriding M; with that, the other checks for
the multiplication instructions work as per the ISA.
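In decode terms the intended behaviour is roughly the following sketch (names
are assumptions, not the actual trans_rvm code):
    /* Sketch: multiplication instructions are legal if either M is enabled
     * at runtime or Zmmul is present; Zmmul no longer disables M. */
    static bool require_mul(DisasContext *ctx)
    {
        return ctx->cfg_ptr->ext_zmmul || has_ext(ctx, RVM);
    }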
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20220714180033.22385-1-palmer@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
In commit 7390e0e9ab, we added support for SME loads and stores.
Unlike SVE loads and stores, these include handling of 128-bit
elements. The SME load/store functions call down into the existing
sve_cont_ldst_elements() function, which uses the element size MO_*
value as an index into the pred_esz_masks[] array. Because this code
path now has to handle MO_128, we need to add an extra element to the
array.
This bug was spotted by Coverity because it meant we were reading off
the end of the array.
Resolves: Coverity CID 1490539, 1490541, 1490543, 1490544, 1490545,
1490546, 1490548, 1490549, 1490550, 1490551, 1490555, 1490557,
1490558, 1490560, 1490561, 1490563
Fixes: 7390e0e9ab ("target/arm: Implement SME LD1, ST1")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220718100144.3248052-1-peter.maydell@linaro.org
Recall that the semantics of a Hexagon mem_noshuf packet are that the
store effectively happens before the load. There are two bug fixes
in this series.
-----BEGIN PGP SIGNATURE-----
iQEzBAABCgAdFiEENjXHiM5iuR/UxZq0ewJE+xLeRCIFAmLXIT8ACgkQewJE+xLe
RCKL7gf9E1NE9EXNVpFUcv9E6SvEnZdqxYotXqNi2MduNKe+d9EYvDMEeNgZLWme
ltw9kYuAP41WLTYFQkrx2zCbonYljxEWqouQe8jZUwR78Ig/8GXk0MJ7C4l5AkEy
ZMuTMA59nbU98Ss6E9vbafFegWzZVP0SQcoV5YyHleihJU67we5kG8gNmQliUj93
rahG66gKxVbcUGsknSN6OeGjgPG1vD8EQyrTzIw55alD510Ih2C1I2Xhi1eibbfJ
B1sQulYJ3Q5qBq6GHlZovsz+c0fPQu4bY8j3G3fDcNCjYYEk37DqPNGKiZhIMjRE
DJ5uJhcot7ImCSB3tO5lKKTp9VSY5g==
=1Kkg
-----END PGP SIGNATURE-----
Merge tag 'pull-hex-20220719-1' of https://github.com/quic/qemu into staging
Recall that the semantics of a Hexagon mem_noshuf packet are that the
store effectively happens before the load. There are two bug fixes
in this series.
# gpg: Signature made Tue 19 Jul 2022 22:25:19 BST
# gpg: using RSA key 3635C788CE62B91FD4C59AB47B0244FB12DE4422
# gpg: Good signature from "Taylor Simpson (Rock on) <tsimpson@quicinc.com>" [undefined]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 3635 C788 CE62 B91F D4C5 9AB4 7B02 44FB 12DE 4422
* tag 'pull-hex-20220719-1' of https://github.com/quic/qemu:
Hexagon (target/hexagon) fix bug in mem_noshuf load exception
Hexagon (target/hexagon) fix store w/mem_noshuf & predicated load
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The semantics of a mem_noshuf packet are that the store effectively
happens before the load. However, in cases where the load raises an
exception, we cannot simply execute the store first.
This change adds a probe to check that the load will not raise an
exception before executing the store.
If the load is predicated, this requires special handling. We check
the condition before performing the probe. Since we need the EA to
perform the check, we move the GET_EA portion inside CHECK_NOSHUF_PRED.
Test case added in tests/tcg/hexagon/mem_noshuf_exception.c
Suggested-by: Alessandro Di Federico <ale@rev.ng>
Suggested-by: Anton Johansson <anjo@rev.ng>
Signed-off-by: Taylor Simpson <tsimpson@quicinc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220707210546.15985-3-tsimpson@quicinc.com>
Call the CHECK_NOSHUF macro multiple times: once in the
fGEN_TCG_PRED_LOAD() and again in fLOAD().
Before this commit, a packet with a store and a predicated
load with mem_noshuf that gets encoded like this:
{ P0 = cmp.eq(R17,#0x0)
memw(R18+#0x0) = R2
if (!P0.new) R3 = memw(R17+#0x4) }
... would end up generating a branch over both the load
and the store like so:
...
brcond_i32 loc17,$0x0,eq,$L1
mov_i32 loc18,store_addr_1
qemu_st_i32 store_val32_1,store_addr_1,leul,0
qemu_ld_i32 loc16,loc7,leul,0
set_label $L1
...
Test cases added to tests/tcg/hexagon/mem_noshuf.c
Co-authored-by: Taylor Simpson <tsimpson@quicinc.com>
Signed-off-by: Brian Cain <bcain@quicinc.com>
Signed-off-by: Taylor Simpson <tsimpson@quicinc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220707210546.15985-2-tsimpson@quicinc.com>
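A hedged sketch of the intended ordering at the TCG level (variable and field names are illustrative): the store half of the packet is emitted unconditionally, and only the load is guarded by the predicate branch.

    /* store half of the mem_noshuf pair: always emitted before the branch */
    tcg_gen_qemu_st_tl(store_val, store_addr, ctx->mem_idx, MO_TEUL);

    /* predicated load: only the load itself is skipped */
    TCGLabel *skip = gen_new_label();
    tcg_gen_brcondi_tl(TCG_COND_EQ, pred, 0, skip);
    tcg_gen_qemu_ld_tl(dest, load_addr, ctx->mem_idx, MO_TEUL);
    gen_set_label(skip);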
Add a LoongArch flattened device tree, adding a cpu device node, firmware
cfg node and pcie node to it, and create an fdt rom memory region. The fdt
info is not complete yet, since only the UEFI BIOS uses the fdt; the Linux
kernel does not use it. The LoongArch Linux kernel uses the ACPI tables,
which are complete in the QEMU virt machine.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn>
Message-Id: <20220712083206.4187715-7-yangxiaojuan@loongson.cn>
[rth: Set TARGET_NEED_FDT, add fdt to meson.build]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
We should return zero when the invalid exception is raised and the operand is a NaN.
Signed-off-by: Song Gao <gaosong@loongson.cn>
Message-Id: <20220716085426.3098060-4-gaosong@loongson.cn>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
We should configure cpucfg[20] to set the values for the scache's ways,
sets, and size arguments when the LoongArch CPU is initialized. However,
the old code wrote the 'sets argument' twice, so change one of them to
the 'size argument'.
Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220715064829.1521482-1-yangxiaojuan@loongson.cn>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The valid index range of the cpucfg array is 0 to ARRAY_SIZE(cpucfg)-1,
so accessing cpucfg[] with an index beyond that boundary must be
forbidden.
Fix coverity CID: 1489760
Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220715060740.1500628-6-yangxiaojuan@loongson.cn>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
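The fix boils down to a bounds check of this shape (a sketch; the helper's exact signature may differ):

    /* Sketch: out-of-range CPUCFG indexes read as zero instead of walking
     * off the end of the array. */
    target_ulong helper_cpucfg(CPULoongArchState *env, target_ulong rj)
    {
        return rj >= ARRAY_SIZE(env->cpucfg) ? 0 : env->cpucfg[rj];
    }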
Fix out-of-bounds errors when accessing the excp_names[] array. The valid
index range of excp_names is 0 to ARRAY_SIZE(excp_names)-1.
However, the generic code did not take the upper boundary into account.
Fix coverity CID: 1489758
Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220715060740.1500628-4-yangxiaojuan@loongson.cn>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The cpu_model argument may already have the '-loongarch-cpu' suffix,
e.g. when using the default for the LS7A1000 machine. Look the name up
as given first; if that fails, try again with the suffix appended.
Validate that the object created by the function is derived from the
proper base class.
Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220715060740.1500628-2-yangxiaojuan@loongson.cn>
[rth: Try without and then with the suffix, to avoid testsuite breakage.]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
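A minimal sketch of that lookup order, written from memory (the QOM calls are real, but the exact LoongArch macro spelling is assumed):

    ObjectClass *oc = object_class_by_name(cpu_model);
    if (oc == NULL) {
        /* Retry with the '-loongarch-cpu' suffix appended. */
        char *typename = g_strdup_printf(LOONGARCH_CPU_TYPE_NAME("%s"), cpu_model);
        oc = object_class_by_name(typename);
        g_free(typename);
    }
    if (oc != NULL && object_class_dynamic_cast(oc, TYPE_LOONGARCH_CPU) == NULL) {
        oc = NULL;    /* not derived from the LoongArch CPU base class */
    }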
vfmin_res() / vfmax_res() are trying to check whether a and b are both
zeroes, but in reality they check that they are the same kind of zero.
This causes incorrect results when comparing positive and negative
zeroes.
Fixes: da4807527f ("s390x/tcg: Implement VECTOR FP (MAXIMUM|MINIMUM)")
Co-developed-by: Ulrich Weigand <ulrich.weigand@de.ibm.com>
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Message-Id: <20220713182612.3780050-2-iii@linux.ibm.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
According to PowerISA 3.1B, Book III 6.7.6 programming note, the
page directory base addresses are expected to be aligned to their
size. Real hardware seems to rely on that and will access the
wrong address if they are misaligned. This results in a
translation failure even if the page tables seem to be properly
populated.
Signed-off-by: Leandro Lupori <leandro.lupori@eldorado.org.br>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20220628133959.15131-4-leandro.lupori@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
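A hedged sketch of the alignment rule (names invented for illustration): a directory with 2^nls entries of 8 bytes each occupies 1 << (nls + 3) bytes, and its base must be aligned to that size.

    /* Sketch: true when a Radix page-directory base violates the alignment
     * required by the ISA 3.1B programming note. */
    static bool pde_base_misaligned(uint64_t base, unsigned nls)
    {
        uint64_t size = (uint64_t)1 << (nls + 3);
        return (base & (size - 1)) != 0;
    }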
Check if the number and size of Radix levels are valid on
POWER9/POWER10 CPUs, according to the supported Radix Tree
Configurations described in their User Manuals.
Signed-off-by: Leandro Lupori <leandro.lupori@eldorado.org.br>
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220628133959.15131-3-leandro.lupori@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Check if partition and process tables are properly aligned, in
their size, according to PowerISA 3.1B, Book III 6.7.6 programming
note. Hardware and KVM also raise an exception in these cases.
Signed-off-by: Leandro Lupori <leandro.lupori@eldorado.org.br>
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220628133959.15131-2-leandro.lupori@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
When using "-machine none", env->tb_env is not allocated, causing the
segmentation fault reported in issue #85 (launchpad bug #811683). To
avoid this problem, check if the pointer != NULL before calling the
methods to print TBU/TBL/DECR.
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/85
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20220714172343.80539-1-matheus.ferst@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
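The guard itself is straightforward; a sketch (monitor plumbing elided):

    CPUPPCState *env = &cpu->env;

    if (env->tb_env == NULL) {
        /* "-machine none": no timebase was created, nothing to print. */
        return;
    }
    /* safe to call cpu_ppc_load_tbu()/cpu_ppc_load_tbl()/cpu_ppc_load_decr() */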
Equivalent to CHK_SV and CHK_HV, but can be used in decodetree methods.
Reviewed-by: Leandro Lupori <leandro.lupori@eldorado.org.br>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Signed-off-by: Lucas Coutinho <lucas.coutinho@eldorado.org.br>
Message-Id: <20220701133507.740619-3-lucas.coutinho@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
GEN_PRIV and related CHK_* macros just assumed that variable named
"ctx" would be in scope when they are used, and that it would be a
pointer to DisasContext. Change these macros to receive the pointer
explicitly.
Reviewed-by: Leandro Lupori <leandro.lupori@eldorado.org.br>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Signed-off-by: Lucas Coutinho <lucas.coutinho@eldorado.org.br>
Message-Id: <20220701133507.740619-2-lucas.coutinho@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
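Schematically (the body is simplified; gen_priv_exception and POWERPC_EXCP_PRIV_OPC are the names used elsewhere in this series), the change is from an implicit 'ctx' to an explicit parameter:

    #define CHK_SV(CTX)                                               \
        do {                                                          \
            if (unlikely((CTX)->pr)) {                                \
                gen_priv_exception((CTX), POWERPC_EXCP_PRIV_OPC);     \
                return;                                               \
            }                                                         \
        } while (0)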
This initial version supports the invalidation of one or all
TLB entries. Flushing by PID/LPID, or based on process/partition
scope, is not supported, because it would make using the
generic QEMU TLB implementation hard. In these cases, all
entries are flushed.
Signed-off-by: Leandro Lupori <leandro.lupori@eldorado.org.br>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20220712193741.59134-3-leandro.lupori@eldorado.org.br>
[danielhb: moved 'set' declaration to TLBIE_RIC_PWC block]
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Also decode RIC, PRS and R operands.
Signed-off-by: Leandro Lupori <leandro.lupori@eldorado.org.br>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20220712193741.59134-2-leandro.lupori@eldorado.org.br>
[danielhb: mark bit 31 in @X_tlbie pattern as ignored]
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
The 'error' argument of gen_inval_exception will be or-ed with
POWERPC_EXCP_INVAL, so it should always be a constant prefixed with
POWERPC_EXCP_INVAL_. No functional change is intended,
spr_write_excp_vector is only used by register_BookE_sprs, and
powerpc_excp_booke ignores the lower 4 bits of the error code on
POWERPC_EXCP_INVAL exceptions.
Also, take the opportunity to replace printf with qemu_log_mask.
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20220627141104.669152-7-matheus.ferst@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
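For context, a simplified sketch of why the prefix matters: the helper itself ORs in POWERPC_EXCP_INVAL, so the 'error' argument must only carry the POWERPC_EXCP_INVAL_* sub-code.

    /* Sketch, simplified from the translator. */
    static void gen_inval_exception(DisasContext *ctx, uint32_t error)
    {
        gen_exception_err(ctx, POWERPC_EXCP_PROGRAM, POWERPC_EXCP_INVAL | error);
    }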
A call to "gen_(hv)priv_exception" should use POWERPC_EXCP_PRIV_* as the
'error' argument instead of POWERPC_EXCP_INVAL_*, and POWERPC_EXCP_FU is
an exception type, not an exception error code. To correctly set
FSCR[IC], we should raise Facility Unavailable with this exception type
and IC value as the error code.
Fixes: 565cb10967 ("target/ppc: add user read/write functions for MMCR0")
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20220627141104.669152-6-matheus.ferst@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
POWERPC_EXCP_INVAL should only be or-ed with other constants prefixed
with POWERPC_EXCP_INVAL_. Also, take the opportunity to move both
helpers under #if !defined(CONFIG_USER_ONLY) as the instructions that
use them are privileged.
No functional change is intended, the lower 4 bits of the error code are
ignored by all powerpc_excp_* methods on POWERPC_EXCP_INVAL exceptions.
Reported-by: Laurent Vivier <laurent@vivier.eu>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20220627141104.669152-5-matheus.ferst@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
The only PowerPC implementations with these insns were the 460 and 460F,
which had their definitions removed in [1].
[1] 7ff26aa6c6 ("target/ppc: Remove unused PPC 460 and 460F definitions")
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220627141104.669152-4-matheus.ferst@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Testing on a POWER9 DD2.3, we observed that the Linux kernel delivers a
signal with si_code ILL_PRVOPC (5) when a userspace application tries to
use slbfee. To obtain this behavior on linux-user, we should use
POWERPC_EXCP_PRIV with POWERPC_EXCP_PRIV_OPC.
No functional change is intended for softmmu targets as
gen_hvpriv_exception uses the same 'exception' argument
(POWERPC_EXCP_HV_EMU) for raise_exception_*, and the powerpc_excp_*
methods do not use lower bits of the exception error code when handling
POWERPC_EXCP_{INVAL,PRIV}.
Reported-by: Laurent Vivier <laurent@vivier.eu>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20220627141104.669152-3-matheus.ferst@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
The code in linux-user/ppc/cpu_loop.c expects POWERPC_EXCP_PRIV
exception with error POWERPC_EXCP_PRIV_OPC or POWERPC_EXCP_PRIV_REG,
while POWERPC_EXCP_INVAL_SPR is expected in POWERPC_EXCP_INVAL
exceptions. This mismatch caused an EXCP_DUMP with the message "Unknown
privilege violation (03)", as seen in [1].
[1] https://gitlab.com/qemu-project/qemu/-/issues/588
Fixes: 9b2fadda3e ("ppc: Rework generation of priv and inval interrupts")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/588
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20220627141104.669152-2-matheus.ferst@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Some systems have /proc/device-tree/cpus/../clock-frequency. However,
this is not the expected path for a CPU device tree directory.
Signed-off-by: Murilo Opsfelder Araujo <muriloo@linux.ibm.com>
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20220712210810.35514-1-muriloo@linux.ibm.com>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
The architecture requires that for faults on loads and stores which
do writeback, the syndrome information does not have the ISS
instruction syndrome information (i.e. ISV is 0). We got this wrong
for the load and store instructions covered by disas_ldst_reg_imm9().
Calculate iss_valid correctly so that if the insn is a writeback one
it is false.
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1057
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220715123323.1550983-1-peter.maydell@linaro.org
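In terms of the decode logic the fix is essentially one condition; a sketch with variable names written from memory:

    /* ISV must be 0 for writeback forms, so only report a valid ISS when
     * the access is a plain (non-vector, non-writeback) load/store. */
    bool iss_valid = !is_vector && !writeback;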
In regime_tcr() we return the appropriate TCR register for the
translation regime. For Secure EL2, we return the VSTCR_EL2 value,
but in this translation regime some fields that control behaviour are
in VTCR_EL2. When this code was originally written (as the comment
notes), QEMU didn't care about any of those fields, but we have since
added support for features such as LPA2 which do need the values from
those fields.
Synthesize a TCR value by merging in the relevant VTCR_EL2 fields to
the VSTCR_EL2 value.
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1103
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220714132303.1287193-8-peter.maydell@linaro.org
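Schematically (the mask name below is invented for the sketch), the Secure EL2 stage-2 case now builds its effective TCR from both registers:

    if (mmu_idx == ARMMMUIdx_Stage2_S) {
        /* Sketch: take the fields QEMU now consumes (e.g. for LPA2) from
         * VTCR_EL2, everything else from VSTCR_EL2. */
        uint64_t v = env->cp15.vstcr_el2 & ~VTCR_SHARED_FIELD_MASK;
        v |= env->cp15.vtcr_el2 & VTCR_SHARED_FIELD_MASK;
        return v;
    }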
Change the representation of the TCR_EL* registers in the CPU state
struct from struct TCR to uint64_t. This allows us to drop the
custom vmsa_ttbcr_raw_write() function, moving the "enforce RES0"
checks to their more usual location in the writefn
vmsa_ttbcr_write(). We also don't need the resetfn any more.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220714132303.1287193-7-peter.maydell@linaro.org
Change the representation of the VSTCR_EL2 and VTCR_EL2 registers in
the CPU state struct from struct TCR to uint64_t.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220714132303.1287193-6-peter.maydell@linaro.org
We have a bug in our handling of accesses to the AArch32 VTCR
register on big-endian hosts: we were not adjusting the part of the
uint64_t field within TCR that the generated code would access. That
can be done with offsetoflow32(), by using an ARM_CP_STATE_BOTH cpreg
struct, or by defining a full set of read/write/reset functions --
the various other TCR cpreg structs used one or another of those
strategies, but for VTCR we did not, so on a big-endian host VTCR
accesses would touch the wrong half of the register.
Use offsetoflow32() in the VTCR register struct. This works even
though the field in the CPU struct is currently a struct TCR, because
the first field in that struct is the uint64_t raw_tcr.
None of the other TCR registers have this bug -- either they are
AArch64 only, or else they define resetfn, writefn, etc, and
expect to be passed the full struct pointer.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220714132303.1287193-5-peter.maydell@linaro.org
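For reference, the helper macro in question looks roughly like this (reproduced from memory), and the change is simply to use it for the VTCR fieldoffset:

    #if HOST_BIG_ENDIAN
    #define offsetoflow32(S, M)  (offsetof(S, M) + sizeof(uint32_t))
    #else
    #define offsetoflow32(S, M)  offsetof(S, M)
    #endif

    /* Sketch of the cpreg entry: point fieldoffset at the low 32 bits of
     * the 64-bit storage so AArch32 accesses hit the right word on
     * big-endian hosts. */
    .fieldoffset = offsetoflow32(CPUARMState, cp15.vtcr_el2),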
The only caller of regime_tcr() is now regime_tcr_value(); fold the
two together, and use the shorter and more natural 'regime_tcr'
name for the new function.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220714132303.1287193-4-peter.maydell@linaro.org
In get_level1_table_address(), instead of using precalculated values
of mask and base_mask from the TCR struct, calculate them directly
(in the same way we currently do in vmsa_ttbcr_raw_write() to
populate the TCR struct fields).
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220714132303.1287193-3-peter.maydell@linaro.org
The regime_tcr() function returns a pointer to a struct TCR
corresponding to the TCR controlling a translation regime. The
struct TCR has the raw value of the register, plus two fields mask
and base_mask which are used as a small optimization in the case of
32-bit short-descriptor lookups. Almost all callers of regime_tcr()
only want the raw register value. Define and use a new
regime_tcr_value() function which returns only the raw 64-bit
register value.
This is a preliminary to removing the 32-bit short descriptor
optimization -- it only saves a handful of bit operations, which is
tiny compared to the overhead of doing a page table walk at all, and
the TCR struct is awkward and makes fixing
https://gitlab.com/qemu-project/qemu/-/issues/1103 unnecessarily
difficult.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220714132303.1287193-2-peter.maydell@linaro.org
The documentation for PROT_MTE says that it cannot be cleared
by mprotect. Further, the implementation of the VM_ARCH_CLEAR bit
contains PROT_BTI, confirming that PROT_BTI should be cleared.
Introduce PAGE_TARGET_STICKY to allow target/arch/cpu.h to control
which bits may be reset during page_set_flags. This is sort of the
opposite of VM_ARCH_CLEAR, but works better with qemu's PAGE_* bits
that are separate from PROT_* bits.
Reported-by: Vitaly Buka <vitalybuka@google.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220711031420.17820-1-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
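A minimal sketch of the mechanism (the generic half is simplified): the target defines which bits are sticky, and page_set_flags() preserves them when an existing mapping is merely re-protected.

    /* target/arm: MTE protection survives a plain mprotect(). */
    #define PAGE_TARGET_STICKY  PAGE_MTE

    /* generic page_set_flags(), simplified */
    if (!reset) {
        flags |= p->flags & PAGE_TARGET_STICKY;
    }
    p->flags = flags;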
We were only checking for SVE disabled and not taking into
account PSTATE.SM to check SME disabled, which resulted in
vectors being incorrectly truncated.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220713045848.217364-3-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
When PSTATE.SM is set, VL = SVL even if SVE is disabled.
This is visible in kselftest ssve-test.
Reported-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220713045848.217364-2-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Pass through RDPID and RDTSCP support in CPUID if host supports it.
Correctly detect if CPU_BASED_TSC_OFFSET and CPU_BASED2_RDTSCP would
be supported in primary and secondary processor-based VM-execution
controls. Enable RDTSCP in secondary processor controls if RDTSCP
support is indicated in CPUID.
Signed-off-by: Cameron Esfahani <dirty@apple.com>
Message-Id: <20220214185605.28087-7-f4bug@amsat.org>
Tested-by: Silvio Moioli <moio@suse.com>
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1011
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Inline these macros into the only two callers.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220628111701.677216-9-richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
With semihosting_get_arg, we already have a check vs argc, so
there's no point replicating it -- just check the result vs NULL.
Merge copy_argn_to_target into its caller.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220628111701.677216-8-richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Always log the assert locally. Do not report_fault, but
instead include the fact of the fault in the assertion.
Don't bother freeing allocated strings before the abort().
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220628111701.677216-6-richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Use semihost_sys_write and/or qemu_semihosting_console_write
for implementing plog. When using gdbstub, copy the temp
string below the stack so that gdb has a guest address from
which to perform the log.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220628111701.677216-5-richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
This separates guest file descriptors from host file descriptors,
and utilizes shared infrastructure for integration with gdbstub.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220628111701.677216-4-richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
We don't implement it with _WIN32 hosts, and the syscall
is missing from the gdb remote file i/o interface.
Since we can't implement it universally, drop it.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220628111701.677216-3-richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
The UHI specification does not have an EFAULT value,
and further specifies that "undefined UHI operations
should not return control to the target".
So, log the error and abort.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220628111701.677216-2-richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
This patch introduces Octeon-specific decoder and implements
check-bit-and-jump instructions.
Signed-off-by: Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <165572672705.167724.16667636081912075906.stgit@pasha-ThinkPad-X280>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
This patch adds decodetree for Cavium Octeon extension and
an instruction set extension flag for using it in CPU models.
Signed-off-by: Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <165572672162.167724.13656301229517693806.stgit@pasha-ThinkPad-X280>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Enable SME, TPIDR2_EL0, and FA64 if supported by the cpu.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-45-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
There's no reason to set CPACR_EL1.ZEN if SVE disabled.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-44-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Note that SME remains effectively disabled for user-only,
because we do not yet set CPACR_EL1.SMEN. This needs to
wait until the kernel ABI is implemented.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-33-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We can handle both exception entry and exception return by
hooking into aarch64_sve_change_el.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-32-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This is an SVE instruction that operates using the SVE vector
length but is present only if SME is implemented.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-31-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This is an SVE instruction that operates using the SVE vector
length but is present only if SME is implemented.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-30-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This is an SVE instruction that operates using the SVE vector
length but is present only if SME is implemented.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-29-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This is SMOPA, SUMOPA, USMOPA_s, UMOPA, for both Int8 and Int16.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-28-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-27-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-26-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-25-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-24-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We can reuse the SVE functions for LDR and STR, passing in the
base of the ZA vector and a zero offset.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-23-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Add a TCGv_ptr base argument, which will be cpu_env for SVE.
We will reuse this for SME save and restore array insns.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-22-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We cannot reuse the SVE functions for LD[1-4] and ST[1-4],
because those functions accept only a Zreg register number.
For SME, we want to pass a pointer into ZA storage.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-21-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We can reuse the SVE functions for implementing moves to/from
horizontal tile slices, but we need new ones for moves to/from
vertical tile slices.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-20-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-19-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
These SME instructions are nominally within the SVE decode space,
so we add them to sve.decode and translate-sve.c.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-18-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The pseudocode for CheckSVEEnabled gains a check for Streaming
SVE mode, and for SME present but SVE absent.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-17-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
These functions will be used to verify that the cpu
is in the correct state for a given instruction.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-16-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Mark these as non-streaming instructions, which should trap
if full a64 support is not enabled in streaming mode.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-15-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Mark these as non-streaming instructions, which should trap
if full a64 support is not enabled in streaming mode.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-14-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Mark these as non-streaming instructions, which should trap if full
a64 support is not enabled in streaming mode. In this case, introduce
PRF_ns (prefetch non-streaming) to handle the checks.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-13-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Mark these as non-streaming instructions, which should trap
if full a64 support is not enabled in streaming mode.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-12-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Mark these as non-streaming instructions, which should trap
if full a64 support is not enabled in streaming mode.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-11-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Mark these as non-streaming instructions, which should trap
if full a64 support is not enabled in streaming mode.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-10-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Mark these as non-streaming instructions, which should trap
if full a64 support is not enabled in streaming mode.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-9-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Mark these as non-streaming instructions, which should trap
if full a64 support is not enabled in streaming mode.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-8-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Mark these as non-streaming instructions, which should trap
if full a64 support is not enabled in streaming mode.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Mark these as non-streaming instructions, which should trap
if full a64 support is not enabled in streaming mode.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Mark ADR as a non-streaming instruction, which should trap
if full a64 support is not enabled in streaming mode.
Removing entries from sme-fa64.decode is an easy way to see
what remains to be done.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This new behaviour is in the ARM pseudocode function
AArch64.CheckFPAdvSIMDEnabled, which applies to AArch32
via AArch32.CheckAdvSIMDOrFPEnabled when the EL to which
the trap would be delivered is in AArch64 mode.
Given that ARMv9 drops support for AArch32 outside EL0, the trap EL
detection ought to be trivially true, but the pseudocode still contains
a number of conditions, and QEMU has not yet committed to dropping A32
support for EL[12] when v9 features are present.
Since the computation of SME_TRAP_NONSTREAMING is necessarily different
for the two modes, we might as well preserve bits within TBFLAG_ANY and
allocate separate bits within TBFLAG_A32 and TBFLAG_A64 instead.
Note that DDI0616A.a has typos for bits [22:21] of LD1RO in the table
of instructions illegal in streaming mode.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This includes the build rules for the decoder, and the
new file for translation, but excludes any instructions.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Dump SVCR, plus use the correct access check for Streaming Mode.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* Fix booting from devices that use 4k sectors, but are not like DASDs
* Re-evaluate pending interrupts after EXECUTE of certain instructions
-----BEGIN PGP SIGNATURE-----
iQJFBAABCAAvFiEEJ7iIR+7gJQEY8+q5LtnXdP5wLbUFAmLGhkURHHRodXRoQHJl
ZGhhdC5jb20ACgkQLtnXdP5wLbU76Q//Y4mEXxWZDpJTg7tL5SZP+UzBVttnCifv
6q+2I0keOUs6wFhPq8TzBqkazT9wlv51cNrY2Q3fU9I3dpDeRbAXZR34UD4kw5an
Q+ZQcebuGSKLjzMrIb1DLAieq8OmZR5FvDUu16BbeJr6GIQIE80lMRfWh9j30UfW
tlxkXr15BnyPx6m0rSGwzkZD2vgfj5zSUrDtYJcUsfypIA9OOBMA7yNGNlO+d94V
UZiKgNQtAoBNm4hZh2M86nsUtem+WpMTZQnDnCpMLYvFV/u9jRQBFSR+Ay41hcEN
WYuLK61rkjc9gPWSjeNNT28x8RMvFJU4YNn1UDiMRSzrigxeui6MOW3SI/h3y6tI
94yXmXV2IuDMibvOjK07nkDaEItqPxfj6zuM2xW1Nc+l8Sk12korFBpk/AZiD0Jo
R3u36efci3zNqDRDJvhGUv8sGcv0mwO7Agq1Bm3h5941gYwzQKILHCShL7DPzvQa
h+K1MsT7vWfh5++unkGUrN/Zd9CazEylbDuWtywK8lgQcTGDO/9rab8GeXfH/5es
Tp0RGJwxmalgrAHZPK9lqgpQaGw92ct2G5odvc82EXQhgccnN9mh54BHPfdKs95E
JZVrXtZH3Gtgl5MGZ+yJevWSc9h1iRnRF4a7QC3UlVBjA/9yAWzQUAnNGZOamE/s
F+pi89oWLn8=
=UsTi
-----END PGP SIGNATURE-----
Merge tag 'pull-request-2022-07-07' of https://gitlab.com/thuth/qemu into staging
* Check validity of the address in the SET PREFIX instruction
* Fix booting from devices that use 4k sectors, but are not like DASDs
* Re-evaluate pending interrupts after EXECUTE of certain instructions
# -----BEGIN PGP SIGNATURE-----
#
# iQJFBAABCAAvFiEEJ7iIR+7gJQEY8+q5LtnXdP5wLbUFAmLGhkURHHRodXRoQHJl
# ZGhhdC5jb20ACgkQLtnXdP5wLbU76Q//Y4mEXxWZDpJTg7tL5SZP+UzBVttnCifv
# 6q+2I0keOUs6wFhPq8TzBqkazT9wlv51cNrY2Q3fU9I3dpDeRbAXZR34UD4kw5an
# Q+ZQcebuGSKLjzMrIb1DLAieq8OmZR5FvDUu16BbeJr6GIQIE80lMRfWh9j30UfW
# tlxkXr15BnyPx6m0rSGwzkZD2vgfj5zSUrDtYJcUsfypIA9OOBMA7yNGNlO+d94V
# UZiKgNQtAoBNm4hZh2M86nsUtem+WpMTZQnDnCpMLYvFV/u9jRQBFSR+Ay41hcEN
# WYuLK61rkjc9gPWSjeNNT28x8RMvFJU4YNn1UDiMRSzrigxeui6MOW3SI/h3y6tI
# 94yXmXV2IuDMibvOjK07nkDaEItqPxfj6zuM2xW1Nc+l8Sk12korFBpk/AZiD0Jo
# R3u36efci3zNqDRDJvhGUv8sGcv0mwO7Agq1Bm3h5941gYwzQKILHCShL7DPzvQa
# h+K1MsT7vWfh5++unkGUrN/Zd9CazEylbDuWtywK8lgQcTGDO/9rab8GeXfH/5es
# Tp0RGJwxmalgrAHZPK9lqgpQaGw92ct2G5odvc82EXQhgccnN9mh54BHPfdKs95E
# JZVrXtZH3Gtgl5MGZ+yJevWSc9h1iRnRF4a7QC3UlVBjA/9yAWzQUAnNGZOamE/s
# F+pi89oWLn8=
# =UsTi
# -----END PGP SIGNATURE-----
# gpg: Signature made Thu 07 Jul 2022 12:37:49 PM +0530
# gpg: using RSA key 27B88847EEE0250118F3EAB92ED9D774FE702DB5
# gpg: issuer "thuth@redhat.com"
# gpg: Good signature from "Thomas Huth <th.huth@gmx.de>" [undefined]
# gpg: aka "Thomas Huth <thuth@redhat.com>" [undefined]
# gpg: aka "Thomas Huth <th.huth@posteo.de>" [unknown]
# gpg: aka "Thomas Huth <huth@tuxfamily.org>" [undefined]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 27B8 8847 EEE0 2501 18F3 EAB9 2ED9 D774 FE70 2DB5
* tag 'pull-request-2022-07-07' of https://gitlab.com/thuth/qemu:
target/s390x: Exit tb after executing ex_value
target/s390x: Remove DISAS_PC_STALE_NOCHAIN
target/s390x: Remove DISAS_PC_STALE
target/s390x: Remove DISAS_GOTO_TB
pc-bios/s390-ccw: Update the s390-ccw bios binaries with the virtio-blk fixes
pc-bios/s390-ccw/netboot.mak: Ignore Clang's warnings about GNU extensions
pc-bios/s390-ccw/virtio: Remove "extern" keyword from prototypes
pc-bios/s390-ccw/virtio-blkdev: Request the right feature bits
pc-bios/s390-ccw: Split virtio-scsi code from virtio_blk_setup_device()
pc-bios/s390-ccw/virtio: Beautify the code for reading virtqueue configuration
pc-bios/s390-ccw/virtio: Read device config after feature negotiation
pc-bios/s390-ccw/virtio: Set missing status bits while initializing
pc-bios/s390-ccw/virtio-blkdev: Remove virtio_assume_scsi()
pc-bios/s390-ccw/virtio-blkdev: Simplify/fix virtio_ipl_disk_is_valid()
pc-bios/s390-ccw/bootmap: Improve the guessing logic in zipl_load_vblk()
pc-bios/s390-ccw/virtio: Introduce a macro for the DASD block size
pc-bios/s390-ccw: Add a proper prototype for main()
target/s390x/tcg: SPX: check validity of new prefix
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
In commit 39a1fd2528 we fixed a bug in the handling of LPAE block
descriptors where we weren't correctly zeroing out some RES0 bits.
However this fix has a bug because the calculation of the mask is
done at the wrong width: in
descaddr &= ~(page_size - 1);
page_size is a target_ulong, so in the 'qemu-system-arm' binary it is
only 32 bits, and the effect is that we always zero out the top 32
bits of the calculated address. Fix the calculation by forcing the
mask to be calculated with the same type as descaddr.
This only affects 32-bit CPUs which support LPAE (e.g. cortex-a15)
when used on board models which put RAM or devices above the 4GB
mark and when the 'qemu-system-arm' executable is being used.
It was also masked in 7.0 by the main bug reported in
https://gitlab.com/qemu-project/qemu/-/issues/1078 where the
virt board incorrectly does not enable 'highmem' for 32-bit CPUs.
The workaround is to use 'qemu-system-aarch64' with the same
command line.
Reported-by: He Zhe <zhe.he@windriver.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220627134620.3190252-1-peter.maydell@linaro.org
Fixes: 39a1fd2528 ("target/arm: Fix handling of LPAE block descriptors")
Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
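The fix itself is a one-line change; a sketch of the corrected masking, performed at the width of descaddr rather than of target_ulong:

    /* descaddr is 64-bit; widen page_size before negating so the upper
     * 32 bits of the output address survive on qemu-system-arm. */
    descaddr &= ~(hwaddr)(page_size - 1);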
The architecture defines the OS DoubleLock as a register which
(similarly to the OS Lock) suppresses debug events for use in CPU
powerdown sequences. This functionality is required in Arm v7 and
v8.0; from v8.2 it becomes optional and in v9 it must not be
implemented.
Currently in QEMU we implement the OSDLR_EL1 register as a NOP. This
is wrong both for the "feature implemented" and the "feature not
implemented" cases: if the feature is implemented then the DLK bit
should read as written and cause suppression of debug exceptions, and
if it is not implemented then the bit must be RAZ/WI.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Starting with v7 of the debug architecture, there are three extra
ID registers that add information on top of that provided in
DBGDIDR. These are DBGDEVID, DBGDEVID1 and DBGDEVID2. In the
v7 debug architecture, DBGDEVID is optional, present only if
DBGDIDR.DEVID_imp is set. In v7.1 all three must be present.
Implement the missing registers. Note that we only need to set the
values in the ARMISARegisters struct for the CPUs Cortex-A7, A15,
A53, A57 and A72 (plus the 32-bit 'max' which uses the Cortex-A53
values): earlier CPUs didn't implement v7 of the architecture, and
our other 64-bit CPUs (Cortex-A76, Neoverse-N1 and A64fx) don't have
AArch32 support at EL1.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220630194116.3438513-5-peter.maydell@linaro.org
The "OS Lock" in the Arm debug architecture is a way for software
to suppress debug exceptions while it is trying to power down
a CPU and save the state of the breakpoint and watchpoint
registers. In QEMU we implemented the support for writing
the OS Lock bit via OSLAR_EL1 and reading it via OSLSR_EL1,
but didn't implement the actual behaviour.
The required behaviour with the OS Lock set is:
* debug exceptions (apart from BKPT insns) are suppressed
* some MDSCR_EL1 bits allow write access to the corresponding
EDSCR external debug status register that they shadow
(we can ignore this because we don't implement external debug)
* similarly with the OSECCR_EL1 which shadows the EDECCR
(but we don't implement OSECCR_EL1 anyway)
Implement the missing behaviour of suppressing debug
exceptions.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220630194116.3438513-4-peter.maydell@linaro.org
The target/arm/helper.c file is very long and is a grabbag of all
kinds of functionality. We have already a debug_helper.c which has
code for implementing architectural debug. Move the code which
defines the debug-related system registers out to this file also.
This affects the define_debug_regs() function and the various
functions and arrays which are used only by it.
The functions raw_write() and arm_mdcr_el2_eff() and
define_debug_regs() now need to be global rather than local to
helper.c; everything else is pure code movement.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220630194116.3438513-3-peter.maydell@linaro.org
Before moving debug system register helper functions to a
different file, fix the code style issues (mostly block
comment syntax) so checkpatch doesn't complain about the
code-motion patch.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220630194116.3438513-2-peter.maydell@linaro.org
Fixes a bug where we were not honoring MTE for user-only
SVE. Copy the user-only MTE logic from allocation_tag_mem
into sve_probe_page.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The comment was correct, but the test was not:
disable mte if tagged is *not* set.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
When EXECUTE sets ex_value to interrupt the constructed instruction,
we implicitly disable interrupts so that the value is not corrupted.
Exit to the main loop after execution, so that we re-evaluate any
pending interrupts.
Reported-by: Sven Schnelle <svens@linux.ibm.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220702060228.420454-5-richard.henderson@linaro.org>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Replace this with a flag: exit_to_mainloop.
We can now control the exit for each of DISAS_TOO_MANY,
DISAS_PC_UPDATED, and DISAS_PC_CC_UPDATED, and fold in
the check for PER.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220702060228.420454-4-richard.henderson@linaro.org>
Signed-off-by: Thomas Huth <thuth@redhat.com>
There is nothing to distinguish this from DISAS_TOO_MANY.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220702060228.420454-3-richard.henderson@linaro.org>
Signed-off-by: Thomas Huth <thuth@redhat.com>
There is nothing to distinguish this from DISAS_NORETURN.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220702060228.420454-2-richard.henderson@linaro.org>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Commit 80d11f4467 ("Add definitions for Freescale PowerPC implementations")
changed core type of MPC8555 and MPC8560 from e500v1 to e500v2.
But both MPC8555 and MPC8560 have just e500v1 cores; they have none of
the features of e500v2 cores. This can be verified by reading the NXP
documentation:
https://www.nxp.com/docs/en/data-sheet/MPC8555EEC.pdf
https://www.nxp.com/docs/en/data-sheet/MPC8560EC.pdf
https://www.nxp.com/docs/en/reference-manual/MPC8555ERM.pdf
https://www.nxp.com/docs/en/reference-manual/MPC8560RM.pdf
Therefore fix core type of MPC8555 and MPC8560 back to e500v1.
Just for completeness, here is a list of all Motorola/Freescale/NXP
processors which were released and have e500v1 or e500v2 cores:
e500v1: MPC8540 MPC8541 MPC8555 MPC8560
e500v2: BSC9131 BSC9132
C291 C292 C293
MPC8533 MPC8535 MPC8536 MPC8543 MPC8544 MPC8545 MPC8547
MPC8548 MPC8567 MPC8568 MPC8569 MPC8572
P1010 P1011 P1012 P1013 P1014 P1015 P1016 P1020 P1021
P1022 P1024 P1025 P2010 P2020
Sorted alphabetically; not by release date / generation / feature set.
All this is from public information available on NXP website.
It seems that QEMU supports only a subset of the MPC85xx processors.
Historically, processors with e500 cores have the mpc85xx family codename,
and a lot of software keeps them in an mpc85xx architecture subdirectory.
Note that GCC uses the -mcpu=8540 option to specify an e500v1 core and
-mcpu=8548 to specify an e500v2 core. So sometimes (mpc)8540 is an alias
for e500v1 and (mpc)8548 is an alias for e500v2.
Fixes: 80d11f4467 ("Add definitions for Freescale PowerPC implementations")
Signed-off-by: Pali Rohár <pali@kernel.org>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20220703195029.23793-1-pali@kernel.org>
[danielhb: added more context in the commit msg]
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
QEMU emulates a *lot* of PowerPC-based machines - having a CPU
that is named "default" and cannot be used with most of those
machines sounds just wrong. Thus let's remove this old and confusing
alias now.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20220705151030.662140-1-thuth@redhat.com>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
All ppc CPUs represent hardware that exists in the real world, i.e.: we
do not have a "max" CPU with all possible emulated features enabled.
Return the default CPU type for the machine because that has greater
chance of being useful as the "max" CPU.
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1038
Cc: Cédric Le Goater <clg@kaod.org>
Cc: Daniel Henrique Barboza <danielhb413@gmail.com>
Cc: Daniel P. Berrangé <berrange@redhat.com>
Cc: Greg Kurz <groug@kaod.org>
Cc: Matheus K. Ferst <matheus.ferst@eldorado.org.br>
Cc: Thomas Huth <thuth@redhat.com>
Signed-off-by: Murilo Opsfelder Araujo <muriloo@linux.ibm.com>
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Message-Id: <20220628205513.81917-1-muriloo@linux.ibm.com>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Implements the Convert Declets To Binary Coded Decimal instruction.
Since libdecnumber doesn't expose the methods for direct conversion
(decDigitsFromDPD, DPD2BCD, etc), a positive decimal32 with zero
exponent is used as an intermediate value to convert the declets.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Signed-off-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Message-Id: <20220629162904.105060-12-victor.colombo@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Implements the Convert Binary Coded Decimal To Declets instruction.
Since libdecnumber doesn't expose the methods for direct conversion
(decDigitsToDPD, BCD2DPD, etc.), the BCD values are converted to
decimal32 format, from which the declets are extracted.
Where the behavior is undefined, we try to match the result observed in
a POWER9 DD2.3.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Signed-off-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Message-Id: <20220629162904.105060-11-victor.colombo@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Adds an insns_flags2 for the BCD assist instructions introduced in
Power ISA 2.06. These instructions are not listed in the manuals for
e5500[1] and e6500[2], so the flag is only added for POWER7/8/9/10
models.
[1] https://www.nxp.com/files-static/32bit/doc/ref_manual/EREF_RM.pdf
[2] https://www.nxp.com/docs/en/reference-manual/E6500RM.pdf
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Signed-off-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220629162904.105060-9-victor.colombo@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Some lines in insn32.decode have inconsistent alignment when compared
to others.
Fix this by changing the alignment of some lines, making it more
consistent throughout the file.
Signed-off-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220629162904.105060-2-victor.colombo@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
This code keeps getting repeated, so move it to the header. It uses
__builtin_ffsll() to allow using the macros in #define.
This does not use QEMU's FIELD macros, as that would require changing
all such macros found in skiboot (the PPC PowerNV firmware).
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20220628080544.1509428-1-aik@ozlabs.ru>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
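For orientation, the shared macros look roughly like this (reproduced from memory; skiboot uses the same IBM big-endian bit numbering):

    #define PPC_BIT(bit)          (0x8000000000000000ULL >> (bit))
    #define PPC_BITMASK(bs, be)   ((PPC_BIT(bs) - PPC_BIT(be)) | PPC_BIT(bs))

    /* __builtin_ffsll() keeps GETFIELD/SETFIELD usable inside #define,
     * unlike QEMU's FIELD()/extract64() machinery. */
    #define GETFIELD(m, v)        (((v) & (m)) >> (__builtin_ffsll(m) - 1))
    #define SETFIELD(m, v, val)                                           \
            (((v) & ~(m)) |                                               \
             (((uint64_t)(val) << (__builtin_ffsll(m) - 1)) & (m)))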
And also move the insn to decodetree and remove the now unused
avr_qw_not, avr_qw_cmpu, and avr_qw_add methods.
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Reviewed-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Message-Id: <20220606150037.338931-8-matheus.ferst@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
And also move the insns to decodetree.
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Reviewed-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Message-Id: <20220606150037.338931-7-matheus.ferst@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
And also move the insn to decodetree
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Reviewed-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Message-Id: <20220606150037.338931-6-matheus.ferst@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
And also move the insn to decodetree.
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Reviewed-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Message-Id: <20220606150037.338931-5-matheus.ferst@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
And also move the insns to decodetree and remove the now unused
avr_qw_addc method.
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Reviewed-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Message-Id: <20220606150037.338931-4-matheus.ferst@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
And also move the insn to decodetree.
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Reviewed-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Message-Id: <20220606150037.338931-3-matheus.ferst@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Also drop VECTOR_FOR_INORDER_I usage since there is no need to access
the elements in any particular order, and move the instruction to
decodetree.
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Reviewed-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Message-Id: <20220606150037.338931-2-matheus.ferst@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
FPSCR_* bit values in QEMU are in the 'inverted' order from what Power
ISA defines (e.g. FPSCR.FI is bit 46 but is defined as 17 in cpu.h).
Now that PPC_BIT_NR macro was introduced to fix this situation for the
MSR bits, we can use it for the FPSCR bits too.
Also, adjust the comments to make them fit in 80 columns.
Signed-off-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220622193203.127698-1-victor.colombo@eldorado.org.br>
[danielhb: fixed 'exceptio' typo in target/ppc/cpu.h]
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
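As a concrete example of the renumbering: FPSCR.FI is architecturally bit 46, which corresponds to QEMU's little-endian bit 17.

    #define PPC_BIT_NR(bit)   (63 - (bit))

    #define FPSCR_FR          PPC_BIT_NR(45)   /* == 18: fraction rounded */
    #define FPSCR_FI          PPC_BIT_NR(46)   /* == 17: fraction inexact */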
According to the architecture, SET PREFIX must try to access the new
prefix area and recognize an addressing exception if the area is not
accessible.
For qemu this check prevents a crash in cpu_map_lowcore after an
inaccessible prefix area has been set.
Signed-off-by: Janis Schoetterl-Glausch <scgl@linux.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Message-Id: <20220630094340.3646279-1-scgl@linux.ibm.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
* Fix edk2/opensbi jobs to not run automatically by accident
* Improve timings in the migration qtest
* Remove libvixl disassembler
* Add ukrainian translation
* Require a recent version of libpng
-----BEGIN PGP SIGNATURE-----
iQJFBAABCAAvFiEEJ7iIR+7gJQEY8+q5LtnXdP5wLbUFAmLECEkRHHRodXRoQHJl
ZGhhdC5jb20ACgkQLtnXdP5wLbV7lxAAmEItM6PIoW58eWPzReKVH8LE2w3UlvOZ
JQhNgJjuN23fqjUVkcT0yCfdNCz/nKvafHnxfHQnrAXyB5V5vU8ovBgSuWK2mcmD
NTFK+/2x5lcsyBrOe3QoeD2g1r7+Os3AYVkdnN/t2HAMLwQyaoshKaMV/UHC9O/i
Kle1svYRNyCgyXJgxaOdbVMBSLi/L9h2R5AaG31GIi9wnf0n8HDH/ONtmeIpN09g
BlMeZqPhGJT+tpMvviif65/Za57Y9h/r+TOgEIIs00cWmxqaBmcXXN9qog2s0n7A
nOm3ck2lpGJCQ6+sl6/Mphyr3X6nWHsxGrLDElS0Ba5bg6T/Xqfg2pBcb81Klkjc
QcTdFPiMxKUczgpFq326sqiaVzMgys4vwnW5iPSd5swNzrkYKADAIreki5jyM3cH
lohBG/ruOmg5xMkX2K6pra0iOAeCz44Ku/HTREfY1CTUgEQZJY4SZrMJSnmUTnM+
EQCkDcmOsnFDaQazneCbo18l37cXOgEhH8VoGAOqg1aRjr7TNlsJzx87PoD+9zNR
GEh7kp18ABRGik5ZACdLQ/HhhOJa8+UWsGCwCdeBGv/TVug1Byz0OUG0PxX3X5SV
WwubeKyZcqzoH92SQI3jZGSmuGBySy9q51T2k8FjZvaDsPiUN/MLPspNezH1qj2B
W7qEaqIyGmo=
=Q2vV
-----END PGP SIGNATURE-----
Merge tag 'pull-request-2022-07-05' of https://gitlab.com/thuth/qemu into staging
* Fix memory leak in test-cutils
* Fix edk2/opensbi jobs to not run automatically by accident
* Improve timings in the migration qtest
* Remove libvixl disassembler
* Add ukrainian translation
* Require a recent version of libpng
* tag 'pull-request-2022-07-05' of https://gitlab.com/thuth/qemu:
include/qemu/host-utils: Remove unused code in the *_overflow wrappers
meson.build: Require a recent version of libpng
po: add ukrainian translation
disas: Remove libvixl disassembler
tests: use consistent bandwidth/downtime limits in migration tests
tests: increase migration test converge downtime to 30 seconds
tests: wait for migration completion before looking for STOP event
tests: wait max 120 seconds for migration test status changes
gitlab-ci: Extend timeout for ubuntu-20.04-s390x-all to 75m
gitlab: honour QEMU_CI variable in edk2/opensbi jobs
gitlab: tweak comments in edk2/opensbi jobs
gitlab: normalize indentation in edk2/opensbi rules
tests/fp: Do not build softfloat3 tests if TCG is disabled
tests: fix test-cutils leaks
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
We should make sure that the TLB is clean when the CPU is reset.
Signed-off-by: Song Gao <gaosong@loongson.cn>
Message-Id: <20220705070950.2364243-1-gaosong@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The disassembly via capstone should be superior to our old vixl
sources nowadays, so let's finally cut this old disassembler out
of the QEMU source tree.
Message-Id: <20220603164249.112459-1-thuth@redhat.com>
Tested-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Thomas Huth <thuth@redhat.com>
The following error shows up when running a Linux kernel:
tcg_handle_interrupt: assertion failed: (qemu_mutex_iothread_locked()).
calling stack:
#0 in raise () at /lib64/libc.so.6
#1 in abort () at /lib64/libc.so.6
#2 in g_assertion_message_expr.cold () at /lib64/libglib-2.0.so.0
#3 in g_assertion_message_expr () at /lib64/libglib-2.0.so.0
#4 in tcg_handle_interrupt (cpu=0x632000030800, mask=2) at ../accel/tcg/tcg-accel-ops.c:79
#5 in cpu_interrupt (cpu=0x632000030800, mask=2) at ../softmmu/cpus.c:248
#6 in loongarch_cpu_set_irq (opaque=0x632000030800, irq=11, level=0)
at ../target/loongarch/cpu.c:100
#7 in helper_csrwr_ticlr (env=0x632000039440, val=1) at ../target/loongarch/csr_helper.c:85
#8 in code_gen_buffer ()
#9 in cpu_tb_exec (cpu=0x632000030800, itb=0x7fff946ac280, tb_exit=0x7ffe4fcb6c30)
at ../accel/tcg/cpu-exec.c:358
Take the iothread mutex around loongarch_cpu_set_irq() in csrwr_ticlr() to
fix the bug.
Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220701093407.2150607-10-yangxiaojuan@loongson.cn>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
According to the LoongArch CSR manual, the VS field (bits 18:16) of
the ECFG register means that the number of instructions between each
exception entry is 2^VS.
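A rough standalone illustration of that spacing rule (not the QEMU helper;
the entry-address formula, the eentry base and the 4-byte instruction size
are assumptions made only for this sketch):

    #include <stdint.h>
    #include <inttypes.h>
    #include <stdio.h>

    /* Spacing between exception entries is 2^VS instructions; with 4-byte
     * instructions that is (4 << vs) bytes per entry. */
    static uint64_t exc_entry(uint64_t eentry, unsigned vs, unsigned ecode)
    {
        uint64_t spacing = (uint64_t)4 << vs;
        return eentry + (uint64_t)ecode * spacing;
    }

    int main(void)
    {
        /* With VS = 7, entries are 512 bytes apart. */
        printf("entry: 0x%" PRIx64 "\n", exc_entry(0x1c000000, 7, 3));
        return 0;
    }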
Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220701093407.2150607-9-yangxiaojuan@loongson.cn>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Some functions and structure members are different from softmmu mode,
so we need to adjust them to support user mode.
Signed-off-by: Song Gao <gaosong@loongson.cn>
Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220624031049.1716097-12-gaosong@loongson.cn>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Song Gao <gaosong@loongson.cn>
Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220624031049.1716097-11-gaosong@loongson.cn>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Raise EXCCODE_BCE instead of EXCCODE_ADEM for helper_asrtle_d/asrtgt_d.
Signed-off-by: Song Gao <gaosong@loongson.cn>
Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220624031049.1716097-10-gaosong@loongson.cn>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
loongarch_cpu_do_interrupt() should update CSR_BADV for some EXCCODE.
Signed-off-by: Song Gao <gaosong@loongson.cn>
Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220624031049.1716097-9-gaosong@loongson.cn>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
We can use CSR_BADV to replace badaddr.
Signed-off-by: Song Gao <gaosong@loongson.cn>
Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220624031049.1716097-8-gaosong@loongson.cn>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The latest AIA draft v0.3.0 defines a relatively simpler scheme for
default priority assignments where:
1) local interrupts 24 to 31 and 48 to 63 are reserved for custom use
and have implementation specific default priority.
2) remaining local interrupts 0 to 23 and 32 to 47 have recommended
(not mandatory) priority assignments.
We update the default priority table and hviprio mapping as per the above.
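A tiny standalone sketch of that split (the function name is invented; it
only classifies a local interrupt number according to the ranges above):

    #include <stdbool.h>

    /* Local interrupts 24-31 and 48-63: reserved for custom use, with
     * implementation-specific default priority.  The rest (0-23, 32-47)
     * carry the recommended, non-mandatory default priorities. */
    static bool aia_local_irq_is_custom(unsigned irq)
    {
        return (irq >= 24 && irq <= 31) || (irq >= 48 && irq <= 63);
    }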
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20220616031543.953776-3-apatel@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Based on architecture review committee feedback, the [m|s|vs]seteienum,
[m|s|vs]clreienum, [m|s|vs]seteipnum, and [m|s|vs]clreipnum CSRs are
removed in the latest AIA draft v0.3.0 specification.
(Refer, https://github.com/riscv/riscv-aia/releases/tag/0.3.0-draft.31)
These CSRs were mostly for software convenience and software can always
use [m|s|vs]iselect and [m|s|vs]ireg CSRs to update the IMSIC interrupt
file bits.
We update the IMSIC CSR emulation as per the above to match the latest AIA
draft specification.
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20220616031543.953776-2-apatel@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Set the minimum priv spec version for mcountinhibit to v1.11 so that it
is not available for v1.10 (or lower).
Fixes: eab4776b2bad ("target/riscv: Add support for hpmcounters/hpmevents")
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20220628101737.786681-3-apatel@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
riscv_cpu_realize() sets the priv spec version to v1.12 when
"env->priv_ver == 0" (i.e. the default v1.10) because the enum
value of priv spec v1.10 is zero.
Due to above issue, the sifive_u machine will see priv spec v1.12
instead of priv spec v1.10.
To fix this issue, we set latest priv spec version (i.e. v1.12)
for base rv64/rv32 cpu and riscv_cpu_realize() will override priv
spec version only when "cpu->cfg.priv_spec != NULL".
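A minimal standalone sketch of the pitfall and the fix (the enum names, the
values other than 0 for v1.10, and the function shape are all illustrative,
not the real cpu.c code):

    #include <stddef.h>
    #include <assert.h>

    /* 0 for v1.10 is the crux: a default-initialized priv_ver looks the
     * same as an explicitly requested v1.10. */
    enum { PRIV_1_10 = 0, PRIV_1_11, PRIV_1_12 };

    static int realize_priv_ver(int model_default, const char *user_priv_spec)
    {
        int ver = model_default;        /* set by the CPU model's init */
        if (user_priv_spec != NULL) {   /* override only when requested */
            /* parsing of the property string elided in this sketch */
        }
        return ver;
    }

    int main(void)
    {
        /* sifive_u-style CPU: inits v1.10 and passes no priv_spec
         * property, so it must stay at v1.10. */
        assert(realize_priv_ver(PRIV_1_10, NULL) == PRIV_1_10);
        return 0;
    }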
Fixes: 7100fe6c24 ("target/riscv: Enable privileged spec version 1.12")
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Atish Patra <atishp@rivosinc.com>
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Message-Id: <20220611080107.391981-2-apatel@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The Ibex CPU supports version 1.11 of the priv spec [1], so let's
correct that in QEMU as well.
1: https://ibex-core.readthedocs.io/en/latest/01_overview/compliance.html
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Message-Id: <20220629233102.275181-3-alistair.francis@opensource.wdc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
There is nothing in the RISC-V spec that mandates version 1.12 is
required for ePMP and there is currently hardware [1] that implements
ePMP (a draft version though) with the 1.11 priv spec.
1: https://ibex-core.readthedocs.io/en/latest/01_overview/compliance.html
Fixes: a4b2fa4331 ("target/riscv: Introduce privilege version field in the CSR ops.")
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Message-Id: <20220629233102.275181-2-alistair.francis@opensource.wdc.com>
mcycle/minstret are actually WARL registers and can be written with any
given value. With the SBI PMU extension, they will be used to store an
initial value provided by the supervisor OS. QEMU also needs to prohibit
the counter increment if mcountinhibit is set.
Support mcycle/minstret through generic counter infrastructure.
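A simplified standalone model of that behavior (not the QEMU counter
infrastructure; the struct and field names are invented for the sketch):

    #include <stdint.h>
    #include <stdbool.h>

    /* The architectural value is a written base plus the ticks
     * accumulated while the counter was not inhibited. */
    struct counter {
        uint64_t base;      /* last value written by software (WARL) */
        uint64_t ticks;     /* ticks counted since that write */
    };

    static uint64_t counter_read(const struct counter *c)
    {
        return c->base + c->ticks;
    }

    static void counter_write(struct counter *c, uint64_t val)
    {
        c->base = val;      /* any value is accepted */
        c->ticks = 0;
    }

    static void counter_tick(struct counter *c, bool inhibited)
    {
        if (!inhibited) {   /* mcountinhibit bit set: no increment */
            c->ticks++;
        }
    }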
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Atish Patra <atish.patra@wdc.com>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
Message-Id: <20220620231603.2547260-8-atishp@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
With SBI PMU extension, user can use any of the available hpmcounters to
track any perf events based on the value written to mhpmevent csr.
Add read/write functionality for these csrs.
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Signed-off-by: Atish Patra <atish.patra@wdc.com>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
Message-Id: <20220620231603.2547260-7-atishp@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
As per the privilege specification v1.11, mcountinhibit allows
starting/stopping a PMU counter selectively.
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Atish Patra <atish.patra@wdc.com>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
Message-Id: <20220620231603.2547260-6-atishp@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The RISC-V privilege specification provides the flexibility to implement
any number of the 29 programmable counters. However, QEMU
implements all of them.
Make this configurable through a pmu config parameter, which now indicates
how many programmable counters should be implemented by the cpu.
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Atish Patra <atish.patra@wdc.com>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
Message-Id: <20220620231603.2547260-5-atishp@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The PMU counters are supported via the cpu config "Counters", which doesn't
indicate the correct purpose of those counters.
Rename the config property to pmu to indicate that these counters
are performance monitoring counters. This aligns with cpu options for
ARM architecture as well.
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Atish Patra <atish.patra@wdc.com>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
Message-Id: <20220620231603.2547260-4-atishp@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Currently, the predicate function for PMU related CSRs only works if
virtualization is enabled. It also does not check mcounteren bits
before cycle/minstret/hpmcounterx access.
Support supervisor mode access in the predicate function as well.
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Signed-off-by: Atish Patra <atish.patra@wdc.com>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
Message-Id: <20220620231603.2547260-3-atishp@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The predicate function calculates the counter index incorrectly for
hpmcounterx. Fix the counter index to reflect correct CSR number.
Fixes: e39a8320b0 ("target/riscv: Support the Virtual Instruction fault")
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Signed-off-by: Atish Patra <atish.patra@wdc.com>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
Message-Id: <20220620231603.2547260-2-atishp@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
For a TOR entry to match, the start address must be lower than the end
address. Normally this is always the case, but correct code might still
run into the following scenario:
Initial state:
pmpaddr3 = 0x2000 pmp3cfg = OFF
pmpaddr4 = 0x3000 pmp4cfg = TOR
Execution:
1. write 0x40ff to pmpaddr3
2. write 0x32ff to pmpaddr4
3. set pmp3cfg to NAPOT with a read-modify-write on pmpcfg0
4. set pmp4cfg to NAPOT with a read-modify-write on pmpcfg1
When (2) is emulated, a call to pmp_update_rule() creates a negative
range for pmp4 as pmp4cfg is still set to TOR. And when (3) is emulated,
a call to tlb_flush() is performed, causing pmp_get_tlb_size() to return
a very creatively large TLB size for pmp4. This, in turn, may result in
accesses to non-existent/uninitialized memory regions and a fault, so that
(4) ends up never being executed.
This is in m-mode with MPRV unset, meaning that unlocked PMP entries
should have no effect. Therefore such a behavior based on PMP content
is very unexpected.
Make sure no negative PMP range can be created, whether explicitly by
the emulated code or implicitly like the above.
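A minimal sketch of such a guard (the helper name is invented; it only
shows the empty-range clamp, under the assumption that a TOR entry covers
[prev_addr, this_addr)):

    #include <stdint.h>

    /* A TOR entry covers [prev_addr, this_addr).  If an intermediate CSR
     * write leaves this_addr < prev_addr, treat the range as empty rather
     * than letting a "negative" size leak into TLB sizing. */
    static uint64_t pmp_tor_range_size(uint64_t prev_addr, uint64_t this_addr)
    {
        return this_addr > prev_addr ? this_addr - prev_addr : 0;
    }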
Signed-off-by: Nicolas Pitre <nico@fluxnic.net>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <3oq0sqs1-67o0-145-5n1s-453o118804q@syhkavp.arg>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The set of instructions that require decode_save_opc for
unwinding is really fairly small -- only insns that can
raise ILLEGAL_INSN at runtime. This includes CSR, anything
that uses a *new* fp rounding mode, and many privileged insns.
Since unwind info is stored as the difference from the
previous insn, storing a 0 for most insns minimizes the
size of the unwind info.
Booting a Debian kernel image until the missing-rootfs panic yields
- gen code size 22226819/1026886656
+ gen code size 21601907/1026886656
on 41k TranslationBlocks, a savings of 610kB or a bit less than 3%.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20220604231004.49990-4-richard.henderson@linaro.org>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The function doesn't set mtval, it sets badaddr. Move the set
of badaddr directly into gen_exception_inst_addr_mis and use
generate_exception.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20220604231004.49990-3-richard.henderson@linaro.org>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
While we set env->bins when unwinding for ILLEGAL_INST,
from e.g. csrrw, we weren't setting it for immediately
illegal instructions.
Add a testcase for mtval via both exception paths.
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1060
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20220604231004.49990-2-richard.henderson@linaro.org>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Commit 57c108b864 introduced gen_set_gpri(), which already contains
a check for whether the destination register is 'zero'. The checks in auipc
and lui are therefore redundant. This patch removes those checks.
Signed-off-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20220610165517.47517-1-victor.colombo@eldorado.org.br>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Semihosting is not enabled for nios2-linux-user.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reorg nios2_semi_return_* to gdb_syscall_complete_cb.
Use the 32-bit version normally, and the 64-bit version
for HOSTED_LSEEK.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
We don't implement it with _WIN32 hosts, and the syscalls
are missing from the gdb remote file i/o interface.
Since we can't implement them universally, drop them.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
From the Unified Hosting Interface, MD01069 Reference Manual,
version 1.1.6, 06 July 2015.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Within do_interrupt, we hold the iothread lock, which
is required for Chardev access for the console, and for
the round trip for use_gdb_syscalls().
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
While we had a call to do_m68k_semihosting in linux-user, it
wasn't actually reachable. We don't include DISAS_INSN(halt)
as an instruction except in system mode.
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reorg m68k_semi_return_* to gdb_syscall_complete_cb.
Use the 32-bit version normally, and the 64-bit version
for HOSTED_LSEEK.
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Change 'ret' to uint64_t. This resolves a FIXME in the
m68k and nios2 semihosting about losing data.
Change 'err' to int. There is nothing target-specific
about the width of the errno value.
Reviewed-by: Luc Michel <lmichel@kalray.eu>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Move the ARM and RISCV specific helpers into
their own header file.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Luc Michel <lmichel@kalray.eu>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
We have two copies of these structures, and require them
in semihosting/ going forward.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
There were 3 copies of these flags. Place them in the
file with gdb_do_syscall, with which they belong.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Perform the cleanup in the FIXME comment in common_semi_gdb_syscall.
Do not modify guest registers until the syscall is complete,
which in the gdbstub case is asynchronous.
In the synchronous non-gdbstub case, use common_semi_set_ret
to set the result. Merge set_swi_errno into common_semi_cb.
Rely on the latter for combined return value / errno setting.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
We have a subdirectory for semihosting; move this file out of exec.
Rename to emphasize the contents are a replacement for the functions
in linux-user/bsd-user uaccess.c.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
In machvirt_init we create a cpu but do not fully initialize it.
Thus the propagation of V7VE to LPAE has not been done, and we
compute the wrong value for some v7 cpus, e.g. cortex-a15.
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1078
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reported-by: He Zhe <zhe.he@windriver.com>
Message-id: 20220619001541.131672-3-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Move the code from hw/arm/virt.c that is supposed
to handle v7 into the one function.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reported-by: He Zhe <zhe.he@windriver.com>
Message-id: 20220619001541.131672-2-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We will need these functions in translate-sme.c.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220620175235.60881-21-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We need SVL separate from VL for RDSVL et al, as well as
ZA storage loads and stores, which do not require PSTATE.SM.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220620175235.60881-20-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
When Streaming SVE mode is enabled, the size is taken from
SMCR_ELx instead of ZCR_ELx. The format is shared, but the
set of vector lengths is not. Further, Streaming SVE does
not require any particular length to be supported.
Adjust sve_vqm1_for_el to pass the current value of PSTATE.SM
to the new function.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220620175235.60881-19-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Mirror the properties for SVE. The main difference is
that any arbitrary set of powers of 2 may be supported,
and not the stricter constraints that apply to SVE.
Include a property to control FEAT_SME_FA64, as failing
to restrict the runtime to the proper subset of insns
could be a major point for bugs.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220620175235.60881-18-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
These functions are not used outside cpu64.c,
so make them static.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220620175235.60881-17-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Drop the aa32-only inline fallbacks,
and just use a couple of ifdefs.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220620175235.60881-16-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Rename from cpu_arm_{get,set}_sve_default_vec_len,
and take the pointer to default_vq from opaque.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220620175235.60881-15-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Rename from cpu_arm_{get,set}_sve_vq, and take the
ARMVQMap as the opaque parameter.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220620175235.60881-14-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Pull the three sve_vq_* values into a structure.
This will be reused for SME.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220620175235.60881-13-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Keep all of the error messages together. This does mean that
when setting many sve length properties we'll only generate
one error, but we only really need one.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220620175235.60881-12-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
These two instructions are aliases of MSR (immediate).
Use the two helpers to properly implement svcr_write.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220620175235.60881-11-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Place this late in the resettable section of the structure,
to keep the most common element offsets from being > 64k.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220620175235.60881-10-richard.henderson@linaro.org
[PMM: expanded comment on zarray[] format]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
These are required to determine if various insns
are allowed to issue.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220620175235.60881-9-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Implement the streaming mode identification register, and the
two streaming priority registers. For QEMU, they are all RES0.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220620175235.60881-8-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
These cpregs control the streaming vector length and whether the
full a64 instruction set is allowed while in streaming mode.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220620175235.60881-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This cpreg is used to access two new bits of PSTATE
that are not visible via any other mechanism.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220620175235.60881-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This will be used for controlling access to SME cpregs.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220620175235.60881-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This will be used for raising various traps for SME.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220620175235.60881-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This is CheckSMEAccess, which is the basis for a set of
related tests for various SME cpregs and instructions.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220620175235.60881-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This register is part of SME, but isn't closely related to the
rest of the extension.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220620175235.60881-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Some features such as running in EL3 or running M profile code are
incompatible with virtualization as QEMU implements it today. To prevent
users from picking invalid configurations on other virt solutions like
Hvf, let's run the same checks there too.
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1073
Signed-off-by: Alexander Graf <agraf@csgraf.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220620192242.70573-2-agraf@csgraf.de
[PMM: Allow qtest accelerator too; tweak comment]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
CPUClass::memory_rw_debug() holds a callback for GDB memory access.
If not provided, cpu_memory_rw_debug() is used by the GDB stub.
Drop avr_cpu_memory_rw_debug() which does nothing special.
Signed-off-by: Bin Meng <bmeng.cn@gmail.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220322095004.70682-1-bmeng.cn@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The 'resume_as_sreset' attribute of a cpu is set when a thread is
entering a stop state on ppc books. It causes the thread to be
re-routed to vector 0x100 when woken up by an exception. So it must be
cleared on reset or a thread might be re-routed unexpectedly after a
reset, when it was not in a stop state and/or when the appropriate
exception handler isn't set up yet.
Using skiboot, it can be tested by resetting the system when it is
quiet and most threads are idle and in stop state.
After the reset occurs, skiboot elects a primary thread and all the
others wait in secondary_wait. The primary thread does all the system
initialization from main_cpu_entry() and at some point, the
decrementer interrupt starts ticking. The exception vector for the
decrementer interrupt is in place, so that shouldn't be a
problem. However, if that primary thread was in stop state prior to
the reset, and because the resume_as_sreset parameters is still set,
it is re-routed to exception vector 0x100. Which, at that time, is
still defined as the entry point for BML. So that primary thread
restarts as new and ends up being treated like any other secondary
thread. All threads are now waiting in secondary_wait.
It results in a full system hang with no message on the console, as
the uart hasn't been init'ed yet. It's actually not obvious to realise
what's happening without tracing reset (-d cpu_reset). The fix is
simply to clear the 'resume_as_sreset' attribute on reset.
Signed-off-by: Frederic Barrat <fbarrat@linux.ibm.com>
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20220617095222.612212-1-fbarrat@linux.ibm.com>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Commit c29018cc73 added an env->fpscr OR operation using a ternary
that checks if 'error' is not zero:
env->fpscr |= error ? FP_FEX : 0;
However, in the current body of do_fpscr_check_status(), 'error' is
guaranteed to always be non-zero at that point. The result is that Coverity
is less than pleased:
Control flow issues (DEADCODE)
Execution cannot reach the expression "0ULL" inside this statement:
"env->fpscr |= (error ? 1073...".
Remove the ternary and always make env->fpscr |= FP_FEX.
Cc: Lucas Mateus Castro (alqotel) <lucas.araujo@eldorado.org.br>
Cc: Richard Henderson <richard.henderson@linaro.org>
Fixes: Coverity CID 1489442
Fixes: c29018cc73 ("target/ppc: Implemented xvf*ger*")
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Reviewed-by: Lucas Mateus Castro (alqotel) <lucas.araujo@eldorado.org.br>
Message-Id: <20220602191048.137511-1-danielhb413@gmail.com>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Coverity is not thrilled about the multiply operations being done in
ger_rank8() and ger_rank2(), giving an error like the following:
Integer handling issues (OVERFLOW_BEFORE_WIDEN)
Potentially overflowing expression "sextract32(a, 4 * i, 4) *
sextract32(b, 4 * i, 4)" with type "int" (32 bits, signed) is evaluated
using 32-bit arithmetic, and then used in a context that expects an
expression of type "int64_t" (64 bits, signed).
Fix both instances where this occurs by adding an int64_t cast to the
first operand, forcing the multiplication to be done in 64 bits.
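A standalone sketch of the pattern (sextract32_demo is a local stand-in for
the bitops helper, and the 4-bit rank-8 shape is only illustrative):

    #include <stdint.h>

    /* Local stand-in for a sign-extracting helper: take 'length' bits of
     * 'value' starting at bit 'start' and sign-extend them to 32 bits. */
    static int32_t sextract32_demo(uint32_t value, int start, int length)
    {
        return (int32_t)(value << (32 - length - start)) >> (32 - length);
    }

    static int64_t signed_nibble_product(uint32_t a, uint32_t b, int i)
    {
        /* Casting the first operand widens the multiplication itself to
         * 64 bits, avoiding the overflow-before-widen pattern. */
        return (int64_t)sextract32_demo(a, 4 * i, 4) *
                        sextract32_demo(b, 4 * i, 4);
    }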
Fixes: Coverity CID 1489444, 1489443
Fixes: 345531533f ("target/ppc: Implemented xvi*ger* instructions")
Cc: Lucas Mateus Castro (alqotel) <lucas.araujo@eldorado.org.br>
Cc: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Lucas Mateus Castro (alqotel) <lucas.araujo@eldorado.org.br>
Message-Id: <20220602141449.118173-1-danielhb413@gmail.com>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
The extract64 arguments are not endian dependent as they are only used
for bitwise operations. The current behavior in little-endian hosts is
correct; since the indexes in VRB are in PowerISA-ordering, we should
always invert the value before calling extract64. Also, using the VsrD
macro, we can have a single EXTRACT_BIT definition for big and
little-endian with the correct behavior.
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220601125355.1266165-1-matheus.ferst@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Implement the following PowerISA v3.1 instructions:
vmodsw: Vector Modulo Signed Word
vmoduw: Vector Modulo Unsigned Word
vmodsd: Vector Modulo Signed Doubleword
vmodud: Vector Modulo Unsigned Doubleword
Signed-off-by: Lucas Mateus Castro (alqotel) <lucas.araujo@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220525134954.85056-8-lucas.araujo@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Implement the following PowerISA v3.1 instructions:
vdivesw: Vector Divide Extended Signed Word
vdiveuw: Vector Divide Extended Unsigned Word
Signed-off-by: Lucas Mateus Castro (alqotel) <lucas.araujo@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220525134954.85056-4-lucas.araujo@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Implement the following PowerISA v3.1 instructions:
vdivsw: Vector Divide Signed Word
vdivuw: Vector Divide Unsigned Word
vdivsd: Vector Divide Signed Doubleword
vdivud: Vector Divide Unsigned Doubleword
Signed-off-by: Lucas Mateus Castro (alqotel) <lucas.araujo@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220525134954.85056-2-lucas.araujo@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Switch statements for the code segments that handle nanoMIPS
instruction pools P.LL, P.SC, P.SHIFT, P.LS.S1, P.LS.E0, PP.LSXS
do not have a proper default case, resulting in not generating a
reserved instruction exception for certain illegal opcodes.
Fix this by adding default cases for these switch statements that
trigger reserved instruction exception.
Signed-off-by: Stefan Pejic <stefan.pejic@syrmia.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220504110403.613168-7-stefan.pejic@syrmia.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
nanoMIPS ISA does not support unaligned memory access. Adjust
DisasContext's default_tcg_memop_mask to reflect this.
Signed-off-by: Dragan Mladjenovic <dragan.mladjenovic@syrmia.com>
Signed-off-by: Stefan Pejic <stefan.pejic@syrmia.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220504110403.613168-6-stefan.pejic@syrmia.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
If both rs and rt are the same register, the nanoMIPS instruction
BNEC[32] rs, rt, address is equivalent to NOP (branch is not taken and
there is no delay slot). This commit provides that behavior. Without
it, this scenario results in incorrect behavior.
Signed-off-by: Dragan Mladjenovic <dragan.mladjenovic@syrmia.com>
Signed-off-by: Stefan Pejic <stefan.pejic@syrmia.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220504110403.613168-5-stefan.pejic@syrmia.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
There are currently two problems related to the emulation of the
instruction BPOSGE32C.
The nanoMIPS instruction BPOSGE32C belongs to DSP R3 instructions
(actually, as of now, it is the only instruction of DSP R3). The
presence of DSP R3 instructions in QEMU is indicated by the flag
MIPS_HFLAG_DSP_R3 (0x20000000). This flag is currently being properly
set in CPUMIPSState's hflags (for example, for I7200 nanoMIPS CPU).
However, it is not propagated to DisasContext's hflags, since the flag
MIPS_HFLAG_DSP_R3 is not set in MIPS_HFLAG_TMASK (while similar flags
MIPS_HFLAG_DSP_R2 and MIPS_HFLAG_DSP are set in this mask, and there
is no problem in functioning check_dsp_r2(), check_dsp()). This means
the function check_dsp_r3() currently does not work properly, and the
emulation of BPOSGE32C can not work properly as well.
Change MIPS_HFLAG_TMASK from 0x1F5807FF to 0x3F5807FF (logical OR
with 0x20000000) to fix this.
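The mask arithmetic can be checked in isolation (the flag value is taken
from the description above; the macro name merely mirrors it):

    #include <assert.h>

    #define MIPS_HFLAG_DSP_R3 0x20000000

    int main(void)
    {
        /* Old translation-flag mask OR'ed with the DSP R3 flag. */
        assert((0x1F5807FF | MIPS_HFLAG_DSP_R3) == 0x3F5807FF);
        return 0;
    }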
Additionally, check_cp1_enabled() is currently incorrectly called
while emulating BPOSGE32C. BPOSGE32C is in the same pool (P.BR1) as
FPU branch instructions BC1EQZC and BC1NEZC, but it is not a part of FPU
(CP1) instructions, and check_cp1_enabled() should not be involved
while emulating BPOSGE32C.
Rearrange invocations of check_cp1_enabled() within P.BR1 pool
handling to affect only BC1EQZC and BC1NEZC emulation, and not
BPOSGE32C emulation.
Signed-off-by: Dragan Mladjenovic <dragan.mladjenovic@syrmia.com>
Signed-off-by: Stefan Pejic <stefan.pejic@syrmia.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220504110403.613168-4-stefan.pejic@syrmia.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
The field rs in the instruction EXTRV_S.H rt, ac, rs is specified in
nanoMIPS documentation as opcode[20..16]. It is, however, erroneously
considered as opcode[25..21] in the current QEMU implementation. In
function gen_pool32axf_2_nanomips_insn(), the variable v0_t corresponds
to rt/opcode[25..21], and v1_t corresponds to rs/opcode[20..16], and
v0_t is by mistake passed to the helper gen_helper_extr_s_h().
Use v1_t rather than v0_t in the invocation of gen_helper_extr_s_h()
to fix this.
Signed-off-by: Dragan Mladjenovic <dragan.mladjenovic@syrmia.com>
Signed-off-by: Stefan Pejic <stefan.pejic@syrmia.com>
Fixes: 8b3698b294 ("target/mips: Add emulation of DSP ASE for nanoMIPS")
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220504110403.613168-3-stefan.pejic@syrmia.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
The field ac in nanoMIPS instruction MTHLIP rs, ac is specified in
nanoMIPS documentation as opcode[15..14] (2 bits). However, in the
current QEMU code, the corresponding argument passed to the helper
gen_helper_mthlip() has the value of opcode[15..11] (5 bits). Right
shift the value of this argument by three bits to fix this.
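The bit-field relationship can be checked with a small standalone snippet
(extract_demo is a local stand-in for extract32):

    #include <assert.h>
    #include <stdint.h>

    /* Local stand-in for extract32(): bits [start, start + length). */
    static uint32_t extract_demo(uint32_t value, int start, int length)
    {
        return (value >> start) & ((1u << length) - 1);
    }

    int main(void)
    {
        uint32_t opcode = 0xabcd1234;   /* arbitrary pattern */
        /* opcode[15..11] >> 3 is the same as opcode[15..14]. */
        assert((extract_demo(opcode, 11, 5) >> 3) ==
               extract_demo(opcode, 14, 2));
        return 0;
    }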
Signed-off-by: Stefan Pejic <stefan.pejic@syrmia.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220504110403.613168-2-stefan.pejic@syrmia.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Fix the FTRUNC_S and FTRUNC_U trans helper confusion.
Fixes: 5c5b64000c ("target/mips: Convert MSA 2RF instruction format to decodetree")
Signed-off-by: nihui <shuizhuyuanluo@126.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220503144241.289239-1-shuizhuyuanluo@126.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
This patch fixes an issue where helper_msa_st_b() writes the high 64-bit
data to where the low 64 bits reside, leaving the high 64 bits undefined.
Fixes: 68ad9260e0 ("target/mips: Use 8-byte memory ops for msa load/store")
Signed-off-by: Ni Hui <shuizhuyuanluo@126.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220504023319.12923-1-shuizhuyuanluo@126.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Only MSA COPY_U/COPY_S with wd equal to zero is treated as a NOP.
Move this special rule into the COPY_U and COPY_S trans functions.
Fixes: 97fe675519 ("target/mips: Convert MSA COPY_S and INSERT opcodes to decodetree")
Signed-off-by: Ni Hui <shuizhuyuanluo@126.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220503130708.272850-4-shuizhuyuanluo@126.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Fix an issue where the condition of check_msa_enabled(ctx) is reversed,
causing a segfault when an MSA elm_fn op is encountered.
Fixes: 2f2745c81a ("target/mips: Convert MSA COPY_U opcode to decodetree")
Fixes: 97fe675519 ("target/mips: Convert MSA COPY_S and INSERT opcodes to decodetree")
Signed-off-by: Ni Hui <shuizhuyuanluo@126.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220503130708.272850-3-shuizhuyuanluo@126.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Actually look into the dfe structure data so that df_extract_val() and
df_extract_df() can return an immediate and a data field other than BYTE.
Fixes: 4701d23aef ("target/mips: Convert MSA BIT instruction format to decodetree")
Signed-off-by: Ni Hui <shuizhuyuanluo@126.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220503130708.272850-2-shuizhuyuanluo@126.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Fix the SAT_S and SAT_U trans helper confusion.
Fixes: 4701d23aef ("target/mips: Convert MSA BIT instruction format to decodetree")
Signed-off-by: Ni Hui <shuizhuyuanluo@126.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220503130708.272850-1-shuizhuyuanluo@126.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Bit 31 (M) of the WatchHiN register is a read-only bit indicating
whether the next WatchHi register is present. It must not be reset
during user writes to the register.
Signed-off-by: Marcin Nowakowski <marcin.nowakowski@fungible.com>
Reviewed-by: David Daney <david.daney@fungible.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@fungible.com>
Message-Id: <20220511212953.74738-1-philmd@fungible.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Since DDI0487F.a, the RW bit is RAO/WI. When specifically
targeting such a cpu, e.g. cortex-a76, it is legitimate to
ignore the bit within the secure monitor.
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1062
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220609214657.1217913-3-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Because reset always initializes the AA64 version, SCR_EL3,
test the mode of EL3 instead of the type of the cpreg.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220609214657.1217913-2-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We were using arm_is_secure and is_a64, which are
tests against the current EL, as opposed to
arm_el_is_aa64 and arm_is_secure_below_el3, which
can be applied to a different EL than current.
Consolidate the two tests.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220609202901.1177572-24-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This function is no longer used outside debug_helper.c.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220609202901.1177572-23-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Handle the debug vs current el exception test in one place.
Leave EXCP_BKPT alone, since that treats debug < current differently.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220609202901.1177572-22-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This function is no longer used. At the same time, remove
DisasContext.secure_routed_to_el3, as it in turn becomes unused.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220609202901.1177572-21-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
With the helper we can use exception_target_el at runtime,
instead of default_exception_el at translate time.
While we're at it, remove the DisasContext parameter from
gen_exception, as it is no longer used.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220609202901.1177572-20-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Split out a common helper function for gen_exception_el
and gen_exception_insn_el_v.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220609202901.1177572-19-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Create a new wrapper function that passes the default
exception target to gen_exception_el.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220609202901.1177572-18-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220609202901.1177572-17-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This function is not required by any other translation file.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220609202901.1177572-16-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We no longer need this value during translation,
as it is now handled within the helpers.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220609202901.1177572-15-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Move the computation from gen_swstep_exception into a helper.
This fixes a bug when:
- MDSCR_EL1.KDE == 1 to enable debug exceptions within EL_D itself
- we singlestep an ERET from EL_D to some lower EL
Previously we were computing 'same el' based on the EL which
executed the ERET instruction, whereas it ought to be computed
based on the EL to which ERET returned. This happens naturally
with the new helper, which runs after EL has been changed.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220609202901.1177572-14-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Create a new wrapper function that passes the default
exception target to gen_exception_insn_el.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220609202901.1177572-13-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220609202901.1177572-12-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Create a function below gen_exception_insn that takes
the target_el as a TCGv_i32, replacing gen_exception_el.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220609202901.1177572-11-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Rename to helper_exception_with_syndrome_el, to emphasize
that the target el is a parameter.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220609202901.1177572-10-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This function is now only used in debug_helper.c, so there is
no reason to have a declaration in a header.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220609202901.1177572-9-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220609202901.1177572-8-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Use the accessor rather than the raw structure member.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220609202901.1177572-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Move arm_generate_debug_exceptions and its two subroutines,
{aa32,aa64}_generate_debug_exceptions into debug_helper.c,
and the one interface declaration to internals.h.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220609202901.1177572-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Move the function to debug_helper.c, and the
declaration to internals.h.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220609202901.1177572-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Move the function to op_helper.c, near raise_exception.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220609202901.1177572-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
With ARMv8, this field is always RES0.
With ARMv7, targeting EL2 and TA=0, it is always 0xA.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220609202901.1177572-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220609202901.1177572-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
When running a 32-bit guest, with an e64 vmv.v.x and vl_eq_vlmax set to
true, the `tcg_debug_assert(vece <= MO_32)` will be triggered inside
tcg_gen_gvec_dup_i32().
This patch checks that condition and instead uses tcg_gen_gvec_dup_i64()
when required.
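As a host-side analogy of the dispatch (this is not the TCG code; it only
splats a value by element size to show why an 8-byte element cannot be
funnelled through a 32-bit path):

    #include <stdint.h>
    #include <string.h>

    /* Splat one element of (1 << vece) bytes across a buffer.  Elements of
     * up to 4 bytes can be fed from a 32-bit value; an 8-byte element must
     * take the 64-bit path.  (Host-endian, sketch only.) */
    static void dup_elements(uint8_t *buf, size_t len, uint64_t val,
                             unsigned vece)
    {
        size_t esz = (size_t)1 << vece;         /* 1, 2, 4 or 8 bytes */
        for (size_t off = 0; off + esz <= len; off += esz) {
            if (esz <= 4) {
                uint32_t v32 = (uint32_t)val;
                memcpy(buf + off, &v32, esz);
            } else {
                memcpy(buf + off, &val, esz);
            }
        }
    }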
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1028
Suggested-by: Robert Bu <robert.bu@gmail.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220608234701.369536-1-alistair.francis@opensource.wdc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
There are currently two types of RISC-V CPUs:
- Generic CPUs (base or any) that allow complete customisation
- "Named" CPUs that match existing hardware
Users can use the base CPUs to customise the extensions that they want, for
example -cpu rv64,v=true.
We originally exposed these as part of the named CPUs as well, but that was
by accident.
Exposing the CPU properties to named CPUs means that we accidentally
enable extensions that don't exist on the CPUs by default. For example
the SiFive E CPU currently supports the zba extension, which is a bug.
This patch instead only exposes the CPU extensions to the generic CPUs.
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Message-Id: <20220608061437.314434-1-alistair.francis@opensource.wdc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
According to the v-spec, tail agnostic behavior can either be kept as
undisturbed or set the elements' bits to all 1s. To distinguish between the
tail policies, QEMU should be able to simulate the tail agnostic
behavior as "set tail elements' bits to all 1s".
There are multiple possibilities for agnostic elements according to the
v-spec. The main intent of this patch set is to add an option that
can distinguish between tail policies. Setting agnostic elements to
all 1s allows QEMU to express this.
This commit adds the option 'rvv_ta_all_1s' to enable the behavior;
it is disabled by default.
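A standalone sketch of the observable effect (names and layout are invented
for the sketch; only the option-gated overwrite of the tail bytes is the
point):

    #include <stdint.h>
    #include <string.h>

    /* With the option enabled and a tail-agnostic setting in vtype, the
     * tail bytes of the destination register are overwritten with all 1s;
     * otherwise they are left undisturbed. */
    static void set_tail_bytes(uint8_t *vreg, size_t vlenb, size_t tail_start,
                               int vta, int rvv_ta_all_1s)
    {
        if (vta && rvv_ta_all_1s && tail_start < vlenb) {
            memset(vreg + tail_start, 0xff, vlenb - tail_start);
        }
    }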
Signed-off-by: eop Chen <eop.chen@sifive.com>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <165449614532.19704.7000832880482980398-16@git.sr.ht>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The tail elements in the destination mask register are updated under
a tail-agnostic policy.
Signed-off-by: eop Chen <eop.chen@sifive.com>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <165449614532.19704.7000832880482980398-14@git.sr.ht>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Compares write mask registers, and so always operate under a tail-
agnostic policy.
Signed-off-by: eop Chen <eop.chen@sifive.com>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <165449614532.19704.7000832880482980398-12@git.sr.ht>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Compares write mask registers, and so always operate under a tail-
agnostic policy.
Signed-off-by: eop Chen <eop.chen@sifive.com>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <165449614532.19704.7000832880482980398-9@git.sr.ht>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
`vmadc` and `vmsbc` produce a mask value, so they always operate with
a tail-agnostic policy.
Signed-off-by: eop Chen <eop.chen@sifive.com>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <165449614532.19704.7000832880482980398-7@git.sr.ht>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The destination register of unit-stride mask load and store instructions is
always written with a tail-agnostic policy.
A vector segment load / store instruction may contain fractional lmul
with nf * lmul > 1. The rest of the elements in the last register should
be treated as tail elements.
Signed-off-by: eop Chen <eop.chen@sifive.com>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <165449614532.19704.7000832880482980398-6@git.sr.ht>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
According to the v-spec, tail agnostic behavior can either be kept as
undisturbed or set the elements' bits to all 1s. To distinguish between the
tail policies, QEMU should be able to simulate the tail agnostic
behavior as "set tail elements' bits to all 1s".
There are multiple possibilities for agnostic elements according to the
v-spec. The main intent of this patch set is to add an option that
can distinguish between tail policies. Setting agnostic elements to
all 1s allows QEMU to express this.
This is the first commit regarding the optional tail agnostic
behavior. Follow-up commits will add this optional behavior
for all rvv instructions.
Signed-off-by: eop Chen <eop.chen@sifive.com>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <165449614532.19704.7000832880482980398-5@git.sr.ht>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
According to the v-spec (section 5.4):
When vstart ≥ vl, there are no body elements, and no elements are
updated in any destination vector register group, including that
no tail elements are updated with agnostic values.
The vmsbf.m, vmsif.m, vmsof.m, viota.m and vcompress instructions
themselves require vstart to be zero, so they don't need the early exit.
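A minimal sketch of the early exit itself (variable and field names are
placeholders for illustration, not the actual helper code):

    /* Sketch only: skip the whole helper when there are no body elements. */
    if (env->vstart >= vl) {
        /* Nothing is updated, not even the tail elements; vstart is
         * still reset to zero at the end of the instruction. */
        env->vstart = 0;
        return;
    }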
Signed-off-by: eop Chen <eop.chen@sifive.com>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <165449614532.19704.7000832880482980398-4@git.sr.ht>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
No functional change intended in this commit.
Signed-off-by: eop Chen <eop.chen@sifive.com>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <165449614532.19704.7000832880482980398-3@git.sr.ht>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
No functional change intended in this commit.
Signed-off-by: eop Chen <eop.chen@sifive.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <165449614532.19704.7000832880482980398-2@git.sr.ht>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
No functional change intended in this commit.
Signed-off-by: eop Chen <eop.chen@sifive.com>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <165449614532.19704.7000832880482980398-1@git.sr.ht>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Add an MXL_RV128 case in two switches so that no error is triggered when
using the -cpu x-rv128 option.
Signed-off-by: Frédéric Pétrot <frederic.petrot@univ-grenoble-alpes.fr>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Message-Id: <20220602155246.38837-1-frederic.petrot@univ-grenoble-alpes.fr>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Whether or not VSEIP is pending isn't reflected in env->mip and must
instead be determined from hstatus.vgein and hgeip. As a result a
CPU in WFI won't wake on a VSEIP, which violates the WFI behavior as
specified in the privileged ISA. Just use riscv_cpu_all_pending()
instead, which already accounts for VSEIP.
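As a rough sketch of the resulting wake-up condition (the exact signature
of riscv_cpu_all_pending() is an assumption here):

    /* Sketch only. Unlike a bare read of env->mip, riscv_cpu_all_pending()
     * also folds in VSEIP derived from hstatus.vgein and hgeip. */
    bool wfi_has_wakeup = riscv_cpu_all_pending(env) != 0;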
Signed-off-by: Andrew Bresticker <abrestic@rivosinc.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20220531210544.181322-1-abrestic@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Add support for the zmmul extension v0.1. This extension includes all
multiplication operations from the M extension but not the divide ops.
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20220531030732.3850-1-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
This register is allocated from the existing block of id registers,
so it is already RES0 for cpus that do not implement SME.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220607203306.657998-21-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This will be used for implementing FEAT_SME.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220607203306.657998-20-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We will need this over in sme_helper.c.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220607203306.657998-19-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Move the data to vec_helper.c and the inline to vec_internal.h.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220607203306.657998-18-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Use the function instead of the array directly.
Because the function performs its own masking, via the uint8_t
parameter, we need to do nothing extra within the users: the bits
above the first 2 (_uh) or 4 (_uw) will be discarded by assignment
to the local bmask variables, and of course _uq uses the entire
uint64_t result.
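For illustration only (generic C with made-up names rather than the actual
helpers), the implicit truncation works like this:

    /* The expansion helper masks its own uint8_t argument; callers then
     * truncate implicitly by assigning to a narrower local. */
    uint64_t expanded = expand_pred(pg_byte); /* hypothetical helper   */
    uint16_t bmask_uh = expanded;             /* keeps the low 16 bits */
    uint32_t bmask_uw = expanded;             /* keeps the low 32 bits */
    uint64_t bmask_uq = expanded;             /* keeps the full result */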
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220607203306.657998-17-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Put the inline function near the array declaration.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220607203306.657998-16-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Export all of the support functions for performing bulk
fault analysis on a set of elements at contiguous addresses
controlled by a predicate.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220607203306.657998-15-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Begin creation of sve_ldst_internal.h by moving the primitives
that access host and tlb memory.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220607203306.657998-14-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This will be used for both Normal and Streaming SVE, and the value
does not necessarily come from ZCR_ELx. While we're at it, emphasize
the units in which the value is returned.
Patch produced by
git grep -l sve_zcr_len_for_el | \
xargs -n1 sed -i 's/sve_zcr_len_for_el/sve_vqm1_for_el/g'
and then adding a function comment.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220607203306.657998-13-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The bitmap need only hold 15 bits; bitmap is over-complicated.
We can simplify operations quite a bit with plain logical ops.
The introduction of SVE_VQ_POW2_MAP eliminates the need for
looping in order to search for powers of two. Simply perform
the logical ops and use count leading or trailing zeros as
required to find the result.
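A rough sketch of the idea (the map constant and helper name are
assumptions for illustration, not the patch itself):

    /* Sketch: pick the largest supported power-of-two VQ <= bound_vq.
     * Bit (vq - 1) of vq_map is set when that vector length is supported. */
    #include <stdint.h>

    #define POW2_VQ_MAP 0x808b   /* assumed: bits for vq = 1, 2, 4, 8, 16 */

    static uint32_t largest_pow2_vq(uint32_t vq_map, uint32_t bound_vq)
    {
        uint32_t c = vq_map & POW2_VQ_MAP & ((2u << (bound_vq - 1)) - 1);
        return c ? 32 - __builtin_clz(c) : 0;  /* highest set bit + 1 */
    }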
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220607203306.657998-12-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This function is used only once, and will need modification
for Streaming SVE mode.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220607203306.657998-11-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We don't need to constrain the value set in zcr_el[1],
because it will be done by sve_zcr_len_for_el.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220607203306.657998-10-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This check is buried within arm_hcr_el2_eff(), but since we
have to have the explicit check for CPTR_EL2.TZ, we might as
well just check it once at the beginning of the block.
Once this is done, we can test HCR_EL2.{E2H,TGE} directly,
rather than going through arm_hcr_el2_eff().
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220607203306.657998-9-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The ARM pseudocode function CheckNormalSVEEnabled uses this
predicate now, and I think it's a bit clearer.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220607203306.657998-8-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The ARM pseudocode function NVL uses this predicate now,
and I think it's a bit clearer. Simplify the pseudocode
condition by noting that IsInHost is always false for EL1.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220607203306.657998-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This (newish) ARM pseudocode function is easier to work with
than open-coded tests for HCR_E2H etc. Use of the function
will be staged into the code base in parts.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220607203306.657998-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Instead of checking these bits in fp_exception_el and
also in sve_exception_el, document that we must compare
the results. The only place where we have not already
checked that FP EL is zero is in rebuild_hflags_a64.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220607203306.657998-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We handle this routing in raise_exception. Promoting the value early
means that we can't directly compare FPEXC_EL and SVEEXC_EL.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220607203306.657998-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Add an interface function to extract the digested vector length
rather than the raw zcr_el[1] value. This fixes an incorrect
return from do_prctl_set_vl where we didn't take into account
the set of vector lengths supported by the cpu.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220607203306.657998-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
With SME, the vector length does not only come from ZCR_ELx.
Comment that this is either NVL or SVL, like the pseudocode.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220607203306.657998-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The use of ARM_CPU to recover env from cs calls
object_class_dynamic_cast, which shows up on the profile.
This is pointless, because all callers already have env, and
the reverse operation, env_cpu, is only pointer arithmetic.
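In other words (a sketch, not the actual diff): where a caller already
holds env, prefer

    CPUState *cs = env_cpu(env);   /* pointer arithmetic, no dynamic cast */

instead of passing cs around and recovering env again via ARM_CPU(cs),
which goes through object_class_dynamic_cast.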
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220604040607.269301-29-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220604040607.269301-28-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220604040607.269301-27-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220604040607.269301-26-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220604040607.269301-25-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220604040607.269301-24-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220604040607.269301-23-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220604040607.269301-22-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220604040607.269301-21-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220604040607.269301-20-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220604040607.269301-19-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
These functions are used both for page table walking and for
deciding in what format to deliver exception results.
Since ptw.c is only present for system mode, put the functions
into tlb_helper.c.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220604040607.269301-18-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Move the ptw load functions, plus 3 common subroutines:
S1_ptw_translate, ptw_attrs_are_device, and regime_translation_big_endian.
This also allows get_phys_addr_lpae to become static again.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220604040607.269301-17-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220604040607.269301-16-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
There are a handful of helpers for combine_cacheattrs
that we can move at the same time as the main entry point.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220604040607.269301-15-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220604040607.269301-14-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220604040607.269301-13-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This function has one private helper, v8m_is_sau_exempt,
so move that at the same time.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220604040607.269301-12-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220604040607.269301-11-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This is the final user of get_phys_addr_pmsav7_default
within helper.c, so make it static within ptw.c.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220604040607.269301-10-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220604040607.269301-9-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220604040607.269301-8-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220604040607.269301-7-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220604040607.269301-6-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220604040607.269301-5-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220604040607.269301-4-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Begin moving all of the page table walking functions
out of helper.c, starting with get_phys_addr().
Create a temporary header file, "ptw.h", in which to
share declarations between the two C files while we
are moving functions.
Move a few declarations to "internals.h", which will
remain used by multiple C files.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220604040607.269301-3-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Move the decl from ptw.h to internals.h. Provide an inline
version for user-only, just as we do for arm_stage1_mmu_idx.
Move an endif down to make the definition in helper.c be
system only.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220604040607.269301-2-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We have about 30 instances of the typo/variant spelling 'writeable',
and over 500 of the more common 'writable'. Standardize on the
latter.
Change produced with:
sed -i -e 's/\([Ww][Rr][Ii][Tt]\)[Ee]\([Aa][Bb][Ll][Ee]\)/\1\2/g' $(git grep -il writeable)
and then hand-undoing the instance in linux-headers/linux/kvm.h.
Most of these changes are in comments or documentation; the
exceptions are:
* a local variable in accel/hvf/hvf-accel-ops.c
* a local variable in accel/kvm/kvm-all.c
* the PMCR_WRITABLE_MASK macro in target/arm/internals.h
* the EPT_VIOLATION_GPA_WRITABLE macro in target/i386/hvf/vmcs.h
(which is never used anywhere)
* the AR_TYPE_WRITABLE_MASK macro in target/i386/hvf/vmx.h
(which is never used anywhere)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Stefan Weil <sw@weilnetz.de>
Message-id: 20220505095015.2714666-1-peter.maydell@linaro.org
The FEAT_DoubleFault extension adds the following:
* All external aborts on instruction fetches and translation table
walks for instruction fetches must be synchronous. For QEMU this
is already true.
* SCR_EL3 has a new bit NMEA which disables the masking of SError
interrupts by PSTATE.A when the SError interrupt is taken to EL3.
For QEMU we only need to make the bit writable, because we have no
sources of SError interrupts.
* SCR_EL3 has a new bit EASE which causes synchronous external
aborts taken to EL3 to be taken at the same entry point as SError.
(Note that this does not mean that they are SErrors for purposes
of PSTATE.A masking or that the syndrome register reports them as
SErrors: it just means that the vector offset is different.)
* The existing SCTLR_EL3.IESB has an effective value of 1 when
SCR_EL3.NMEA is 1. For QEMU this is a no-op because we don't need
different behaviour based on IESB (we don't need to do anything to
ensure that error exceptions are synchronized).
So for QEMU the things we need to change are:
* Make SCR_EL3.{NMEA,EASE} writable
* When taking a synchronous external abort at EL3, adjust the
vector entry point if SCR_EL3.EASE is set
* Advertise the feature in the ID registers
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220531151431.949322-1-peter.maydell@linaro.org
The architectural feature RASv1p1 introduces the following new
features:
* new registers ERXPFGCDN_EL1, ERXPFGCTL_EL1 and ERXPFGF_EL1
* new bits in the fine-grained trap registers that control traps
for these new registers
* new trap bits HCR_EL2.FIEN and SCR_EL3.FIEN that control traps
for ERXPFGCDN_EL1, ERXPFGCTL_EL1, ERXPFGP_EL1
* a larger number of the ERXMISC<n>_EL1 registers
* the format of ERR<n>STATUS registers changes
The architecture permits that if ERRIDR_EL1.NUM is 0 (as it is for
QEMU) then all these new registers may UNDEF, and the HCR_EL2.FIEN
and SCR_EL3.FIEN bits may be RES0. We don't have any ERR<n>STATUS
registers (again, because ERRIDR_EL1.NUM is 0). QEMU does not yet
implement the fine-grained-trap extension. So there is nothing we
need to implement to be compliant with the feature spec. Make the
'max' CPU report the feature in its ID registers, and document it.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220531114258.855804-1-peter.maydell@linaro.org
Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn>
Signed-off-by: Song Gao <gaosong@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220606124333.2060567-42-yangxiaojuan@loongson.cn>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn>
Signed-off-by: Song Gao <gaosong@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220606124333.2060567-40-yangxiaojuan@loongson.cn>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Emulate a 3A5000 board using the new LoongArch instructions.
The 3A5000 belongs to the Loongson3 series of processors.
The board consists of a 3A5000 cpu model and the virt
bridge. The real 3A5000 board is quite complex and contains
many functions; for now, in TCG softmmu mode, only part of
them is emulated.
For more detailed information, see
https://github.com/loongson/LoongArch-Documentation
Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn>
Signed-off-by: Song Gao <gaosong@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220606124333.2060567-31-yangxiaojuan@loongson.cn>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
This includes:
- RDTIME{L/H}.W
- RDTIME.D
Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn>
Signed-off-by: Song Gao <gaosong@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220606124333.2060567-30-yangxiaojuan@loongson.cn>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
This includes:
- IOCSR{RD/WR}.{B/H/W/D}
Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn>
Signed-off-by: Song Gao <gaosong@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220606124333.2060567-27-yangxiaojuan@loongson.cn>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn>
Signed-off-by: Song Gao <gaosong@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220606124333.2060567-25-yangxiaojuan@loongson.cn>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn>
Signed-off-by: Song Gao <gaosong@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220606124333.2060567-24-yangxiaojuan@loongson.cn>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn>
Signed-off-by: Song Gao <gaosong@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220606124333.2060567-23-yangxiaojuan@loongson.cn>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn>
Signed-off-by: Song Gao <gaosong@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220606124333.2060567-22-yangxiaojuan@loongson.cn>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn>
Signed-off-by: Song Gao <gaosong@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220606124333.2060567-21-yangxiaojuan@loongson.cn>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn>
Signed-off-by: Song Gao <gaosong@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220606124333.2060567-20-yangxiaojuan@loongson.cn>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn>
Signed-off-by: Song Gao <gaosong@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220606124333.2060567-19-yangxiaojuan@loongson.cn>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Song Gao <gaosong@loongson.cn>
Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220606124333.2060567-18-yangxiaojuan@loongson.cn>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
This patch adds support for disassembling via option '-d in_asm'.
Signed-off-by: Song Gao <gaosong@loongson.cn>
Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220606124333.2060567-17-yangxiaojuan@loongson.cn>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
This includes:
- FCMP.cond.{S/D}
Signed-off-by: Song Gao <gaosong@loongson.cn>
Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220606124333.2060567-12-yangxiaojuan@loongson.cn>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
This patch adds main translation routines and
basic functions for translation.
Signed-off-by: Song Gao <gaosong@loongson.cn>
Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220606124333.2060567-4-yangxiaojuan@loongson.cn>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
This patch adds target state header, target definitions
and initialization routines.
Signed-off-by: Song Gao <gaosong@loongson.cn>
Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220606124333.2060567-3-yangxiaojuan@loongson.cn>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
This patch gives an introduction to the LoongArch target.
Signed-off-by: Song Gao <gaosong@loongson.cn>
Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220606124333.2060567-2-yangxiaojuan@loongson.cn>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
When QEMU is started with '-cpu host,host-cache-info=on', it will
pass through the host's number of logical processors sharing a cache and
number of processor cores in the physical package. QEMU already
fixes up the latter to correctly reflect the number of cores configured
for the VM; however, the number of logical processors sharing a cache
still comes from the host CPU, which confuses a guest started with:
-machine q35,accel=kvm \
-cpu host,host-cache-info=on,l3-cache=off \
-smp 20,sockets=2,dies=1,cores=10,threads=1 \
-numa node,nodeid=0,memdev=ram-node0 \
-numa node,nodeid=1,memdev=ram-node1 \
-numa cpu,socket-id=0,node-id=0 \
-numa cpu,socket-id=1,node-id=1
on 2 socket Xeon 4210R host with 10 cores per socket
with CPUID[04H]:
...
--- cache 3 ---
cache type = unified cache (3)
cache level = 0x3 (3)
self-initializing cache level = true
fully associative cache = false
maximum IDs for CPUs sharing cache = 0x1f (31)
maximum IDs for cores in pkg = 0xf (15)
...
which doesn't match the number of logical processors the VM was
configured with, and as a result a RHEL 9.0 guest complains:
sched: CPU #10's llc-sibling CPU #0 is not on the same node! [node: 1 != 0]. Ignoring dependency.
WARNING: CPU: 10 PID: 0 at arch/x86/kernel/smpboot.c:421 topology_sane.isra.0+0x67/0x80
...
Call Trace:
set_cpu_sibling_map+0x176/0x590
start_secondary+0x5b/0x150
secondary_startup_64_no_verify+0xc2/0xcb
Fix it by capping the maximum number of logical processors to the
configured vcpus per socket.
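A minimal sketch of the capping (CPUID.04H:EAX[25:14] holds the
addressable-ID count minus one; the variable names below are made up):

    /* Sketch only: cap the "logical processors sharing this cache" count. */
    #include <stdint.h>

    static uint32_t cap_cache_sharing(uint32_t eax, uint32_t host_sharing,
                                      uint32_t vcpus_per_socket)
    {
        uint32_t sharing = host_sharing;
        if (sharing > vcpus_per_socket) {
            sharing = vcpus_per_socket;   /* cap at what the VM was given */
        }
        /* EAX[25:14] = addressable IDs sharing this cache, minus 1 */
        return (eax & ~(0xfffu << 14)) | (((sharing - 1) & 0xfff) << 14);
    }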
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=2088311
Message-Id: <20220524151020.2541698-3-imammedo@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
According to Intel, for CPUID[EAX=04H] the resulting bits 31 - 26 in EAX
should be:
"
**** The nearest power-of-2 integer that is not smaller than (1 + EAX[31:26]) is the number of unique
Core_IDs reserved for addressing different processor cores in a physical package. Core ID is a subset of
bits of the initial APIC ID.
"
Ensure that the values stored in EAX[31:26] always meet this condition.
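An illustrative sketch of one way to satisfy that wording (not the actual
patch; the helper name is made up):

    /* EAX[31:26] must encode a value v such that the nearest power of two
     * >= (1 + v) covers all core IDs in the package. */
    #include <stdint.h>

    static uint32_t cores_minus_one_field(uint32_t nr_cores)
    {
        uint32_t p = 1;
        while (p < nr_cores) {
            p <<= 1;            /* round up to a power of two */
        }
        return p - 1;           /* value to store in EAX[31:26] */
    }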
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Message-Id: <20220524151020.2541698-2-imammedo@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The previous patch used the wrong count setting for the index value, which
read the wrong value from CPUID(EAX=12,ECX=0):EAX. As a result the SGX1
instructions can't be exposed to the VM and the SGX device can't work in
the VM.
Fixes: d19d6ffa07 ("target/i386: introduce helper to access supported CPUID")
Signed-off-by: Yang Zhong <yang.zhong@intel.com>
Message-Id: <20220530131834.1222801-1-yang.zhong@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The correct A20 masking is done if paging is enabled (protected mode) but it
seems to have been forgotten in real mode. For example from the AMD64 APM Vol. 2
section 1.2.4:
> If the sum of the segment base and effective address carries over into bit 20,
> that bit can be optionally truncated to mimic the 20-bit address wrapping of the
> 8086 processor by using the A20M# input signal to mask the A20 address bit.
Most BIOSes will enable the A20 line on boot, but I found that after
disabling the A20 line, the correct wrapping wasn't taking place.
`handle_mmu_fault' in target/i386/tcg/sysemu/excp_helper.c seems to be the culprit.
In real mode, it fills the TLB with the raw unmasked address. However, for the
protected mode, the `mmu_translate' function does the correct A20 masking.
The fix then should be to just apply the A20 mask in the first branch of the if
statement.
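Conceptually (a simplified sketch, not the exact diff; the a20_mask field
is the one already used on the paging path), the non-paging branch just
needs to mask the address before installing the translation:

    /* Sketch: with paging disabled, linear == physical, but the A20 mask
     * must still be applied so bit 20 wraps when A20 is off. */
    paddr = addr & env->a20_mask;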
Signed-off-by: Stephen Michael Jothen <sjothen@gmail.com>
Message-Id: <Yo5MUMSz80jXtvt9@air-old.local>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Storage key controlled protection is currently not honored when
emulating instructions.
If available, enable key protection for the MEM_OP ioctl, thereby
enabling it for the s390_cpu_virt_mem_* functions, when using kvm.
As a result, the emulation of the following instructions honors storage
keys:
* CLP
The Synch I/O CLP command would need special handling in order
to support storage keys, but is currently not supported.
* CHSC
Performing commands asynchronously would require special
handling, but commands are currently always synchronous.
* STSI
* TSCH
Must (and does) not change channel if terminated due to
protection.
* MSCH
Suppressed on protection; works because it is a fetching instruction.
* SSCH
Suppressed on protection; works because it is a fetching instruction.
* STSCH
* STCRW
Suppressed on protection, this works because no partial store is
possible, because the operand cannot span multiple pages.
* PCISTB
* MPCIFC
* STPCIFC
Signed-off-by: Janis Schoetterl-Glausch <scgl@linux.ibm.com>
Message-Id: <20220506153956.2217601-3-scgl@linux.ibm.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
One less P needed.
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <20220523115123.150340-1-dgilbert@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Also mark raise_exception_ra and raise_exception, lest we
generate a warning about helper_raise_exception returning.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220602013401.303699-18-richard.henderson@linaro.org>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
TPF stands for "trap false", and is a long-form nop for ColdFire.
Re-use the immediate consumption code from trapcc; the insn will
already expand to a nop because of the TCG_COND_NEVER test
within do_trapcc.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220602013401.303699-12-richard.henderson@linaro.org>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
According to the M68040 Users Manual, section 8.4.1, Four word
stack frame (format 0), includes Illegal Instruction. Use the
correct frame format, which does not use the ADDR argument.
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220602013401.303699-10-richard.henderson@linaro.org>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
According to the M68040 Users Manual, section 8.4.3,
Six word stack frame (format 2), Trace (and others) is
supposed to record the next insn in PC and the address
of the trapping instruction in ADDRESS.
Create gen_raise_exception_format2 to record the trapping
pc in env->mmu.ar. Update m68k_interrupt_all to pass the
value to do_stack_frame. Update cpu_loop to handle EXCP_TRACE.
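In sketch form (placeholder names, based only on the description above),
the format-2 recording amounts to:

    /* Sketch: a format 2 frame records two program counters. */
    env->mmu.ar = trapping_insn_pc;   /* ADDRESS field of the frame */
    frame_pc    = next_insn_pc;       /* PC field pushed in the frame,
                                         passed on to do_stack_frame */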
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220602013401.303699-9-richard.henderson@linaro.org>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
According to the M68040 Users Manual, section 8.4.3,
Six word stack frame (format 2), Zero Div (and others)
is supposed to record the next insn in PC and the
address of the trapping instruction in ADDRESS.
While the N, Z and V flags are documented to be undefined on DIV0,
the C flag is documented as always cleared.
Update helper_div* to take the instruction length as an argument
and use raise_exception_format2. Hoist the reset of the C flag
above the division by zero check.
Update m68k_interrupt_all to pass mmu.ar to do_stack_frame.
Update cpu_loop to pass mmu.ar to siginfo.si_addr, as the
kernel does in trap_c().
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220602013401.303699-8-richard.henderson@linaro.org>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
According to the M68040 Users Manual, section 8.4.3,
Six word stack frame (format 2), CHK, CHK2 (and others)
are supposed to record the next insn in PC and the
address of the trapping instruction in ADDRESS.
Create a raise_exception_format2 function to centralize recording
of the trapping pc in mmu.ar, plus advancing to the next insn.
Update m68k_interrupt_all to pass mmu.ar to do_stack_frame.
Update cpu_loop to pass mmu.ar to siginfo.si_addr, as the
kernel does in trap_c().
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220602013401.303699-7-richard.henderson@linaro.org>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
The only value this variable holds is now env->pc.
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220602013401.303699-6-richard.henderson@linaro.org>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Add parentheses around & vs &&.
Remove assignment to sr in function call argument -- note that
sr is unused after the call, so the assignment was never needed,
only the result of the & expression.
Suggested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220602013401.303699-4-richard.henderson@linaro.org>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Replace an if ladder with a switch for clarity.
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220602013401.303699-3-richard.henderson@linaro.org>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Rather than adjust the PC in all of the consumers, raise
the exception with the correct PC in the first place.
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220602013401.303699-2-richard.henderson@linaro.org>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
We now have individual checks on all insns within disas_sve.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-115-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
For all remaining trans_* functions that do not already
have a check, add one now.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-114-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-113-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-112-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Rename from do_sve2_shr_narrow and hoist the sve2
check into the TRANS_FEAT macro.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-111-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Rename from do_sve2_shll_tb and hoist the sve2
check into the TRANS_FEAT macro.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-110-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Rename from do_sve2_narrow_extract and hoist the sve2
check into the TRANS_FEAT macro.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-109-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-108-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Since 636ddeb15c, we do not require rd == ra.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-107-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-106-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-105-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-104-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-103-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-102-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-101-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-100-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Rename the function to match other expansion functions and
move to be adjacent. Split out gen_gvec_fpst_zzzp as a
helper while we're at it.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-99-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-98-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-97-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-96-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-95-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Rename the function to match other expansion functions and
move to be adjacent. Split out gen_gvec_fpst_zzp as a
helper while we're at it.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-94-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Simplify indexing of this array. This will allow folding
the illegal esz == 0 case into the normal fn == NULL check.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-93-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Rename do_zz_fp to gen_gvec_fpst_arg_zz, and move up.
Split out gen_gvec_fpst_zz as a helper while we're at it.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-92-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-91-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-90-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-89-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-88-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-87-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Rename the function to match gen_gvec_ool_arg_zzz,
and move to be adjacent. Split out gen_gvec_fpst_zzz
as a helper while we're at it.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-86-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-85-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-84-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-83-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-82-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This alias is defined on EOR (predicates). While the
same operation could be performed with NAND or NOR,
only bother with the official alias.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-81-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Combined with the check already present in gen_mov_p,
we can simplify some special cases in trans_AND_pppp
and trans_BIC_pppp.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-80-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Being able to specify the feature predicate in TRANS_FEAT
makes it easier to split trans_FMMLA by element size,
which also happens to simplify the decode.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-79-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Use these for the several varieties of floating-point
multiply-add instructions.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-78-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-77-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-76-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-75-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Remove the unparsed extractions in trans_CPY_{m,z}_i which are intended
to reject an 8-bit shift of an 8-bit constant for 8-bit element.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-74-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Remove the unparsed extractions in trans_ADD_zzi, trans_SUBR_zzi,
and do_zzi_sat which are intended to reject an 8-bit shift of an
8-bit constant for 8-bit element.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-73-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Remove the unparsed extraction in trans_DUP_i,
which is intended to reject an 8-bit shift of
an 8-bit constant for 8-bit element.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-72-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-71-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-70-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-69-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-68-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-67-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-66-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-65-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-64-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-63-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-62-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-61-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Convert SVE translation functions using do_zip*
to use TRANS_FEAT and gen_gvec_ool_arg_zzz.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-60-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-59-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This is in line with how we treat uzp, and will
eliminate the special case code during translation.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-58-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-57-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-56-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-55-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-54-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-53-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-52-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-51-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-50-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-49-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Remove the DO_ZPZZZ macro, as it had just the two uses.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-48-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-47-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Share code between the various shifts using arg_rpri_esz.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-46-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-45-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-44-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Convert SVE translation functions using do_sve2_fn2i
to use TRANS_FEAT and gen_gvec_fn_arg_zzi.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-43-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We have two places that perform this particular operation.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-42-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The check is already done in gen_gvec_ool_zzzp,
which is called by do_sel_z; remove from callers.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-41-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-40-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We have two places that perform this particular operation.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-39-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Convert SVE translation functions using do_sve2_zzzz_fn
to use TRANS_FEAT and gen_gvec_fn_arg_zzzz.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-38-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Merge gen_gvec_fn_zzzz with the sve access check and the
dereference of arg_rrrr_esz.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-37-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The decode for RAX1 sets esz to MO_8, because that's what
we use by default for "no esz present". We changed that
to MO_64 during translation because it is more logical for
the operation. However, the esz argument to gen_gvec_rax1
is unused and forces MO_64 within that function, so there
is no need to do it here as well.
Simplify to use gen_gvec_fn_arg_zzz and TRANS_FEAT.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-36-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Convert SVE translation functions using do_sve2_fn_zzz
to use TRANS_FEAT and gen_gvec_fn_arg_zzz.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-35-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Convert SVE translation functions directly using
gen_gvec_fn_arg_zzz to TRANS_FEAT.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-34-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Two uses of gen_gvec_fn_zzz can pass on arg_rrr_esz instead.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-33-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Rename the function to match gen_gvec_fn_zzz,
and move to be adjacent.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-32-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-31-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
There is only one caller for gen_gvec_fn_zz; inline it.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-30-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Convert SVE translation functions using do_sve2_zpzz_ool
to use TRANS_FEAT and gen_gvec_ool_arg_zpzz.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-29-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Convert SVE translation functions directly using
gen_gvec_ool_arg_zpzz to TRANS_FEAT.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-28-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Use gen_gvec_ool_arg_zpzz instead of gen_gvec_ool_zzzp
when the arguments come from arg_rprr_esz.
Replaces do_zpzz_ool.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-27-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-26-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Convert some SVE translation functions using
gen_gvec_ool_arg_zpzi to TRANS_FEAT.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-25-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Rename the function to match gen_gvec_ool_arg_zpz,
and move to be adjacent.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-24-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Convert SVE translation functions using do_sve2_zpz_data
to use TRANS_FEAT and gen_gvec_ool_arg_zpz.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-23-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Convert SVE translation functions directly using
gen_gvec_ool_arg_zpz to TRANS_FEAT.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-22-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Use gen_gvec_ool_arg_zpz instead of gen_gvec_ool_zzp
when the arguments come from arg_rpr_esz.
Replaces do_zpz_ool.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-21-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-20-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This is the last direct user of tcg_gen_gvec_4_ool.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-19-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Convert SVE translation functions using do_sve2_zzw_data
to use TRANS_FEAT and gen_gvec_ool_arg_zzz.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-18-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Convert SVE translation functions using do_sve2_zzzz_data
to use TRANS_FEAT and gen_gvec_ool_{zzzz,zzxz}.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-17-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Convert SVE translation functions using do_sve2_zzz_data
to use TRANS_FEAT and gen_gvec_ool_zzz.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-16-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Convert SVE translation functions directly using
gen_gvec_ool_arg_zzxz to TRANS_FEAT. Also include
BFDOT_zzxz, which was using gen_gvec_ool_zzzz.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-15-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Rename the function to match gen_gvec_ool_arg_zzzz,
and move to be adjacent.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-14-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Convert SVE translation functions directly using
gen_gvec_ool_arg_zzzz to TRANS_FEAT.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-13-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Convert SVE translation functions using do_sve2_zzzz_ool
to use TRANS_FEAT and gen_gvec_ool_arg_zzzz.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-12-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Use gen_gvec_ool_arg_zzzz instead of gen_gvec_ool_zzzz
when the arguments come from arg_rrrr_esz.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-11-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Convert SVE translation functions directly using
gen_gvec_ool_zzzz to TRANS_FEAT.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-10-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-9-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Convert SVE translation functions using do_sve2_zzz_ool
to use TRANS_FEAT and gen_gvec_ool_arg_zzz.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-8-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Convert SVE translation functions using
gen_gvec_ool_arg_zzz to TRANS_FEAT.
Remove trivial wrappers do_aese, do_sm4.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-7-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Use gen_gvec_ool_arg_zzz instead of gen_gvec_ool_zzz
when the arguments come from arg_rrr_esz.
Replaces do_zzw_ool and do_zzz_data_ool.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-6-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
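A minimal sketch of the "arg" wrapper pattern this relies on (an assumed shape, not the exact QEMU code): the wrapper unpacks the decoded fields once, so translation functions whose operands come straight from decodetree need no per-instruction boilerplate.

static bool gen_gvec_ool_arg_zzz(DisasContext *s, gen_helper_gvec_3 *fn,
                                 arg_rrr_esz *a, int data)
{
    /* Forward the decoded register numbers to the offset-based expander. */
    return gen_gvec_ool_zzz(s, fn, a->rd, a->rn, a->rm, data);
}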
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-5-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Convert SVE translation functions using gen_gvec_ool_zz to TRANS_FEAT.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-4-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-3-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Steal the idea for these leaf function expanders from PowerPC.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-2-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
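A rough sketch of what such a leaf expander can look like (an assumed definition for illustration, not quoted from the tree): the macro emits trans_<NAME> as a feature check plus a tail call to the shared expander.

#define TRANS_FEAT(NAME, FEAT, FUNC, ...)                          \
    static bool trans_##NAME(DisasContext *s, arg_##NAME *a)       \
    { return dc_isar_feature(FEAT, s) && FUNC(s, __VA_ARGS__); }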
Fix the following errors when building HVF on macOS AArch64:
target/arm/hvf/hvf.c:586:15: error: unknown type name 'ARMCPRegInfo'; did you mean 'ARMCPUInfo'?
const ARMCPRegInfo *ri;
^~~~~~~~~~~~
ARMCPUInfo
target/arm/cpu-qom.h:38:3: note: 'ARMCPUInfo' declared here
} ARMCPUInfo;
^
target/arm/hvf/hvf.c:589:14: error: implicit declaration of function 'get_arm_cp_reginfo' is invalid in C99 [-Werror,-Wimplicit-function-declaration]
ri = get_arm_cp_reginfo(arm_cpu->cp_regs, key);
^
target/arm/hvf/hvf.c:589:12: warning: incompatible integer to pointer conversion assigning to 'const ARMCPUInfo *' (aka 'const struct ARMCPUInfo *') from 'int' [-Wint-conversion]
ri = get_arm_cp_reginfo(arm_cpu->cp_regs, key);
^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
target/arm/hvf/hvf.c:591:26: error: no member named 'type' in 'struct ARMCPUInfo'
assert(!(ri->type & ARM_CP_NO_RAW));
~~ ^
/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/assert.h:99:25: note: expanded from macro 'assert'
(__builtin_expect(!(e), 0) ? __assert_rtn(__func__, __ASSERT_FILE_NAME, __LINE__, #e) : (void)0)
^
target/arm/hvf/hvf.c:591:33: error: use of undeclared identifier 'ARM_CP_NO_RAW'
assert(!(ri->type & ARM_CP_NO_RAW));
^
1 warning and 4 errors generated.
Fixes: cf7c6d1004 ("target/arm: Split out cpregs.h")
Reported-by: Duncan Bayne <duncan@bayne.id.au>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220525161926.34233-1-philmd@fungible.com
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1029
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Implement the following PowerISA v3.1 instructions:
xxmfacc: VSX Move From Accumulator
xxmtacc: VSX Move To Accumulator
xxsetaccz: VSX Set Accumulator to Zero
The PowerISA 3.1 mentions that for the current version of the
architecture, "the hardware implementation provides the effect of ACC[i]
and VSRs 4*i to 4*i + 3 logically containing the same data" and "The
Accumulators introduce no new logical state at this time" (page 501).
For now it seems unnecessary to create new structures, so this patch
just uses ACC[i] as VSRs 4*i to 4*i+3, and therefore moves to and from
the accumulators are no-ops.
Signed-off-by: Lucas Mateus Castro (alqotel) <lucas.araujo@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220524140537.27451-2-lucas.araujo@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
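A minimal sketch of the effect described above (illustrative only; the VSR layout and names are assumptions, not QEMU's internal representation): with ACC[i] aliased to VSRs 4*i to 4*i+3, the move instructions need no data movement and xxsetaccz reduces to zeroing the four backing registers.

#include <stdint.h>
#include <string.h>

/* 128-bit VSR modelled as two 64-bit halves. */
typedef struct { uint64_t u64[2]; } VSR;

/* xxmfacc/xxmtacc: nothing to copy, ACC[i] *is* VSRs 4*i..4*i+3. */
static inline void acc_move(VSR *vsrs, int i) { (void)vsrs; (void)i; }

/* xxsetaccz: clear the four VSRs backing accumulator i. */
static void acc_set_zero(VSR *vsrs, int i)
{
    memset(&vsrs[4 * i], 0, 4 * sizeof(VSR));
}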
This allows an x86 host to no-op lwsyncs, and a ppc host to use lwsync
rather than sync.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220519135908.21282-5-npiggin@gmail.com>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
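A sketch of the idea (the exact flags emitted are in the patch, not reproduced here; the snippet assumes QEMU's tcg/tcg-op.h): lwsync orders load-load, load-store and store-store but not store-load, so it can be expressed with the matching TCG_MO_* bits instead of TCG_MO_ALL, and a host whose memory model already provides those orderings, such as x86, then needs no fence at all.

/* Illustrative only: an lwsync-style barrier without store->load ordering... */
static void gen_lwsync_sketch(void)
{
    tcg_gen_mb(TCG_BAR_SC | TCG_MO_LD_LD | TCG_MO_LD_ST | TCG_MO_ST_ST);
}

/* ...versus a full sync barrier ordering all pairs. */
static void gen_sync_sketch(void)
{
    tcg_gen_mb(TCG_BAR_SC | TCG_MO_ALL);
}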
The generated eieio memory ordering semantics do not match the
instruction definition in the architecture. Add a big comment to
explain this strange instruction and correct the memory ordering
behaviour.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220519135908.21282-2-npiggin@gmail.com>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Move vmsumshm and vmsumshs to decodetree, declare vmsumshm helper with
TCG_CALL_NO_RWG, and drop the unused env argument.
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220517123929.284511-13-matheus.ferst@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
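As a rough sketch of the helper-declaration side of such a move (the names and avr argument types here are illustrative, not copied from target/ppc/helper.h), dropping the unused env argument is what allows the TCG_CALL_NO_RWG flag, telling TCG that the helper neither reads nor writes globals:

/* Before: five arguments, including the unused CPU env (illustrative). */
DEF_HELPER_5(vmsumshm, void, env, avr, avr, avr, avr)

/* After: no env, and flagged as not touching globals. */
DEF_HELPER_FLAGS_4(VMSUMSHM, TCG_CALL_NO_RWG, void, avr, avr, avr, avr)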
Move vmsumuhm and vmsumuhs to decodetree, declare vmsumuhm helper with
TCG_CALL_NO_RWG, and drop the unused env argument.
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220517123929.284511-12-matheus.ferst@eldorado.org.br>
[danielhb: added #undef VMSUMUHM to fix ppc64 build]
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Move vmsumubm and vmsummbm to decodetree, declare both helpers with
TCG_CALL_NO_RWG, and drop the unused env argument.
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220517123929.284511-11-matheus.ferst@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Move xxextractuw and xxinsertw to decodetree, declare both helpers with
TCG_CALL_NO_RWG, and drop the unused env argument.
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220517123929.284511-9-matheus.ferst@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>