Fix vCPU hot-unplug related leak reported by Valgrind:
==132362== 4,096 bytes in 1 blocks are definitely lost in loss record 8,440 of 8,549
==132362== at 0x4C3B15F: memalign (vg_replace_malloc.c:1265)
==132362== by 0x4C3B288: posix_memalign (vg_replace_malloc.c:1429)
==132362== by 0xB41195: qemu_try_memalign (memalign.c:53)
==132362== by 0xB41204: qemu_memalign (memalign.c:73)
==132362== by 0x7131CB: kvm_init_xsave (kvm.c:1601)
==132362== by 0x7148ED: kvm_arch_init_vcpu (kvm.c:2031)
==132362== by 0x91D224: kvm_init_vcpu (kvm-all.c:516)
==132362== by 0x9242C9: kvm_vcpu_thread_fn (kvm-accel-ops.c:40)
==132362== by 0xB2EB26: qemu_thread_start (qemu-thread-posix.c:556)
==132362== by 0x7EB2159: start_thread (in /usr/lib64/libpthread-2.28.so)
==132362== by 0x9D45DD2: clone (in /usr/lib64/libc-2.28.so)
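The leak is fixed by releasing the buffer when the vCPU goes away. A minimal
sketch of that cleanup, assuming the pointer is the env->xsave_buf set up by
kvm_init_xsave() (the helper name is illustrative, not the exact patch):
    static void kvm_release_xsave(CPUX86State *env)
    {
        /* kvm_init_xsave() allocated this with qemu_memalign() */
        qemu_vfree(env->xsave_buf);
        env->xsave_buf = NULL;
    }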
Reported-by: Mark Kanda <mark.kanda@oracle.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Mark Kanda <mark.kanda@oracle.com>
Message-Id: <20220322120522.26200-1-philippe.mathieu.daude@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The instruction description says "It is loaded without rounding
errors." which implies we should have the widest rounding mode
possible.
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/888
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220315121251.2280317-4-alex.bennee@linaro.org>
g_new(T, n) is neater than g_malloc(sizeof(T) * n). It's also safer,
for two reasons. One, it catches multiplication overflowing size_t.
Two, it returns T * rather than void *, which lets the compiler catch
more type errors.
This commit only touches allocations with size arguments of the form
sizeof(T).
Patch created mechanically with:
$ spatch --in-place --sp-file scripts/coccinelle/use-g_new-etc.cocci \
--macro-file scripts/cocci-macro-file.h FILES...
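As an illustration of the transformation (a standalone sketch, not code from
this patch):
    #include <glib.h>

    typedef struct { int x, y; } Point;

    int main(void)
    {
        /* Before: the multiplication can silently overflow size_t, and the
         * void * result converts to any pointer type without complaint. */
        Point *a = g_malloc(sizeof(Point) * 16);

        /* After: overflow-checked, and typed as Point * directly. */
        Point *b = g_new(Point, 16);

        g_free(a);
        g_free(b);
        return 0;
    }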
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Acked-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <20220315144156.1595462-4-armbru@redhat.com>
Reviewed-by: Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
KVM support for AMX includes a new system attribute, KVM_X86_XCOMP_GUEST_SUPP.
Commit 19db68ca68 ("x86: Grant AMX permission for guest", 2022-03-15), however,
did not fully consider the behavior on older kernels. First, it warns
too aggressively. Second, it invokes the KVM_GET_DEVICE_ATTR ioctl
unconditionally and then uses the "bitmask" variable, which remains
uninitialized if the ioctl fails. Third, kvm_ioctl returns -errno rather
than -1 on errors.
While at it, explain why the ioctl is needed and KVM_GET_SUPPORTED_CPUID
is not enough.
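A hedged sketch of the kind of handling this implies (the helper name and
exact flow are illustrative; only the KVM constants and QEMU wrappers used
here are real):
    static uint64_t kvm_get_xcomp_guest_supp(KVMState *s)
    {
        uint64_t bitmask = 0;   /* stays 0 if the ioctl is unavailable or fails */
        struct kvm_device_attr attr = {
            .group = 0,
            .attr = KVM_X86_XCOMP_GUEST_SUPP,
            .addr = (unsigned long) &bitmask,
        };

        if (!kvm_check_extension(s, KVM_CAP_SYS_ATTRIBUTES)) {
            return 0;           /* older kernel: nothing to warn about */
        }
        if (kvm_ioctl(s, KVM_GET_DEVICE_ATTR, &attr) < 0) {
            return 0;           /* kvm_ioctl() returns -errno on failure */
        }
        return bitmask;
    }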
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Merge tag 'darwin-20220315' of https://github.com/philmd/qemu into staging
Darwin-based host patches
- Remove various build warnings
- Fix building with modules on macOS
- Fix mouse/keyboard GUI interactions
# gpg: Signature made Tue 15 Mar 2022 12:52:19 GMT
# gpg: using RSA key FAABE75E12917221DCFD6BB2E3E32C2CDEADC0DE
# gpg: Good signature from "Philippe Mathieu-Daudé (F4BUG) <f4bug@amsat.org>" [unknown]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: FAAB E75E 1291 7221 DCFD 6BB2 E3E3 2C2C DEAD C0DE
* tag 'darwin-20220315' of https://github.com/philmd/qemu: (21 commits)
MAINTAINERS: Volunteer to maintain Darwin-based hosts support
ui/cocoa: add option to swap Option and Command
ui/cocoa: capture all keys and combos when mouse is grabbed
ui/cocoa: release mouse when user switches away from QEMU window
ui/cocoa: add option to disable left-command forwarding to guest
ui/cocoa: Constify qkeycode translation arrays
configure: Pass filtered QEMU_OBJCFLAGS to meson
meson: Log QEMU_CXXFLAGS content in summary
meson: Resolve the entitlement.sh script once for good
osdep: Avoid using Clang-specific __builtin_available()
audio: Rename coreaudio extension to use Objective-C compiler
coreaudio: Always return 0 in handle_voice_change
audio: Log context for audio bug
audio/dbus: Fix building with modules on macOS
audio/coreaudio: Remove a deprecation warning on macOS 12
block/file-posix: Remove a deprecation warning on macOS 12
hvf: Remove deprecated hv_vcpu_flush() calls
hvf: Make hvf_get_segments() / hvf_put_segments() local
hvf: Use standard CR0 and CR4 register definitions
tests/fp/berkeley-testfloat-3: Ignore ignored #pragma directives
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
When building on macOS 11 [*], we get:
In file included from ../target/i386/hvf/hvf.c:59:
../target/i386/hvf/vmx.h:174:5: error: 'hv_vcpu_flush' is deprecated: first deprecated in macOS 11.0 - This API has no effect and always returns HV_UNSUPPORTED [-Werror,-Wdeprecated-declarations]
hv_vcpu_flush(vcpu);
^
/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/System/Library/Frameworks/Hypervisor.framework/Headers/hv.h:364:20: note: 'hv_vcpu_flush' has been explicitly marked deprecated here
extern hv_return_t hv_vcpu_flush(hv_vcpuid_t vcpu)
^
Since this call "has no effect", simply remove it ¯\_(ツ)_/¯
Not very useful deprecation doc:
https://developer.apple.com/documentation/hypervisor/1441386-hv_vcpu_flush
[*] Also 10.15 (Catalina):
https://lore.kernel.org/qemu-devel/Yd3DmSqZ1SiJwd7P@roolebo.dev/
Reviewed-by: Roman Bolshakov <r.bolshakov@yadro.com>
Tested-by: Roman Bolshakov <r.bolshakov@yadro.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Both hvf_get_segments() and hvf_put_segments() are only
used within x86hvf.c: do not declare them as public API.
Reviewed-by: Roman Bolshakov <r.bolshakov@yadro.com>
Tested-by: Roman Bolshakov <r.bolshakov@yadro.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
No need to have our own definitions of these registers.
Signed-off-by: Cameron Esfahani <dirty@apple.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Even when the feature is not supported in guest CPUID,
still set the MSR to the default value, which is
the only value KVM will accept in this case.
Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
Message-Id: <20220223115824.319821-1-mlevitsk@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Windows 11 with WSL2 enabled (Hyper-V) fails to boot with Icelake-Server
{-v5} CPU model but boots well with '-cpu host'. Apparently, it expects
5-level paging and 5-level EPT support to come as a pair, but QEMU's
Icelake-Server CPU model lacks the latter. Introduce 'Icelake-Server-v6'
CPU model with 'vmx-page-walk-5' enabled by default.
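For example, either of the following should expose 5-level EPT to such a
guest (assuming the host supports it):
    -cpu Icelake-Server-v6
    -cpu Icelake-Server,vmx-page-walk-5=on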
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20220221145316.576138-1-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
XFD (eXtended Feature Disable) allows enabling a
feature on xsave state while preventing specific
user threads from using the feature.
Support saving and restoring the XFD MSRs if CPUID.D.1.EAX[4]
enumerates as valid. Likewise, migrate the MSRs and the
related xsave state as needed.
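A sketch of the save side, assuming the usual kvm_msr_entry_add() pattern
and that env->msr_xfd/env->msr_xfd_err hold the guest values:
    if (env->features[FEAT_XSAVE] & CPUID_D_1_EAX_XFD) {
        kvm_msr_entry_add(cpu, MSR_IA32_XFD, env->msr_xfd);
        kvm_msr_entry_add(cpu, MSR_IA32_XFD_ERR, env->msr_xfd_err);
    }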
Signed-off-by: Zeng Guang <guang.zeng@intel.com>
Signed-off-by: Wei Wang <wei.w.wang@intel.com>
Signed-off-by: Yang Zhong <yang.zhong@intel.com>
Message-Id: <20220217060434.52460-8-yang.zhong@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
When dynamic xfeatures (e.g. AMX) are used by the guest, the xsave
area would be larger than 4KB. KVM_GET_XSAVE2 and KVM_SET_XSAVE
under KVM_CAP_XSAVE2 work with an xsave buffer larger than 4KB.
Always use the new ioctls under KVM_CAP_XSAVE2 when KVM supports it.
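A sketch of the idea (illustrative fragment; the real code caches the size
at vCPU init):
    int size = kvm_check_extension(kvm_state, KVM_CAP_XSAVE2);
    bool has_xsave2 = size > (int) sizeof(struct kvm_xsave);

    if (!has_xsave2) {
        size = sizeof(struct kvm_xsave);   /* legacy fixed 4KiB area */
    }
    void *xsave_buf = qemu_memalign(4096, size);
    int ret = kvm_vcpu_ioctl(CPU(cpu), has_xsave2 ? KVM_GET_XSAVE2
                                                  : KVM_GET_XSAVE, xsave_buf);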
Signed-off-by: Jing Liu <jing2.liu@intel.com>
Signed-off-by: Zeng Guang <guang.zeng@intel.com>
Signed-off-by: Wei Wang <wei.w.wang@intel.com>
Signed-off-by: Yang Zhong <yang.zhong@intel.com>
Message-Id: <20220217060434.52460-7-yang.zhong@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Add the AMX primary feature bits XFD and AMX_TILE to
enumerate the CPU's AMX capability. Meanwhile, add the
AMX TILE and TMUL CPUID leaf and subleaves, which exist
when AMX TILE is present, to report the maximum
capability of TILE and TMUL.
Signed-off-by: Jing Liu <jing2.liu@intel.com>
Signed-off-by: Yang Zhong <yang.zhong@intel.com>
Message-Id: <20220217060434.52460-6-yang.zhong@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Intel introduces an XFD faulting mechanism for extended
XSAVE features to dynamically enable the features at
runtime. If CPUID (EAX=0Dh, ECX=n, n>1).ECX[2] is set
to 1, it indicates support for XFD faulting of this
state component.
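A standalone sketch of checking that bit from userspace (XTILEDATA is
state component 18):
    #include <cpuid.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* CPUID(EAX=0xD, ECX=n).ECX bit 2 reports XFD faulting for component n. */
    static bool xfd_faulting_supported(unsigned n)
    {
        unsigned eax, ebx, ecx, edx;

        if (!__get_cpuid_count(0xD, n, &eax, &ebx, &ecx, &edx)) {
            return false;
        }
        return ecx & (1u << 2);
    }

    int main(void)
    {
        printf("XTILEDATA XFD faulting: %d\n", xfd_faulting_supported(18));
        return 0;
    }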
Signed-off-by: Jing Liu <jing2.liu@intel.com>
Signed-off-by: Yang Zhong <yang.zhong@intel.com>
Message-Id: <20220217060434.52460-5-yang.zhong@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The kernel allocates a 4K xstate buffer by default. For XSAVE features
which require a large state component (e.g. AMX), the Linux kernel
dynamically expands the xstate buffer only after the process has
acquired the necessary permissions. Those are called dynamically-
enabled XSAVE features (or dynamic xfeatures).
There are separate permissions for native tasks and guests.
QEMU should request the guest permissions for the dynamic xfeatures
which will be exposed to the guest. This only needs to be done
once, before the first vCPU is created.
KVM implemented a new ARCH_GET_XCOMP_SUPP system attribute API to
get the host-side supported_xcr0, so QEMU can decide whether it can
request permission for dynamically enabled XSAVE features.
https://lore.kernel.org/all/20220126152210.3044876-1-pbonzini@redhat.com/
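A standalone sketch of the permission request (the ARCH_* values are
assumed to match <asm/prctl.h> on new-enough kernels; XTILEDATA is xstate
bit 18):
    #include <stdio.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    #ifndef ARCH_GET_XCOMP_SUPP
    #define ARCH_GET_XCOMP_SUPP       0x1021
    #define ARCH_GET_XCOMP_GUEST_PERM 0x1024
    #define ARCH_REQ_XCOMP_GUEST_PERM 0x1025
    #endif
    #define XFEATURE_XTILEDATA 18

    int main(void)
    {
        unsigned long supported = 0, permitted = 0;

        syscall(SYS_arch_prctl, ARCH_GET_XCOMP_SUPP, &supported);
        /* Must be done once, before the first vCPU is created. */
        if (syscall(SYS_arch_prctl, ARCH_REQ_XCOMP_GUEST_PERM,
                    XFEATURE_XTILEDATA)) {
            perror("ARCH_REQ_XCOMP_GUEST_PERM");
        }
        syscall(SYS_arch_prctl, ARCH_GET_XCOMP_GUEST_PERM, &permitted);
        printf("supported %#lx permitted %#lx\n", supported, permitted);
        return 0;
    }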
Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Yang Zhong <yang.zhong@intel.com>
Signed-off-by: Jing Liu <jing2.liu@intel.com>
Message-Id: <20220217060434.52460-4-yang.zhong@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The AMX TILECFG register and the TMMx tile data registers are
saved/restored via XSAVE, respectively in state component 17
(64 bytes) and state component 18 (8192 bytes).
Add the AMX feature bits to the x86_ext_save_areas array to set
up the AMX components. Add structs that define the layout of the
AMX XSAVE areas and use QEMU_BUILD_BUG_ON to validate the
struct sizes.
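A sketch of what such structs and checks can look like (sizes taken from
the text above; names are illustrative):
    typedef struct XSaveXTILECFG {
        uint8_t xtilecfg[64];
    } XSaveXTILECFG;

    typedef struct XSaveXTILEDATA {
        uint8_t xtiledata[8][1024];
    } XSaveXTILEDATA;

    QEMU_BUILD_BUG_ON(sizeof(XSaveXTILECFG) != 0x40);
    QEMU_BUILD_BUG_ON(sizeof(XSaveXTILEDATA) != 0x2000);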
Signed-off-by: Jing Liu <jing2.liu@intel.com>
Signed-off-by: Yang Zhong <yang.zhong@intel.com>
Message-Id: <20220217060434.52460-3-yang.zhong@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The extended state subleaves (EAX=0Dh, ECX=n, n>1).ECX[1]
indicate whether the extended state component is located
on the next 64-byte boundary following the preceding state
component when the compacted format of an XSAVE area is
used.
Right now, they are all zero because no supported component
needed the bit to be set, but the upcoming AMX feature will
use it. Fix the subleaf values according to KVM's supported
CPUID.
Signed-off-by: Jing Liu <jing2.liu@intel.com>
Signed-off-by: Yang Zhong <yang.zhong@intel.com>
Message-Id: <20220217060434.52460-2-yang.zhong@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Loading a non-canonical address into rsp when handling an interrupt or
performing a far call should raise #SS, not #GP.
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/870
Signed-off-by: Gareth Webb <gareth.webb@umbralsoftware.co.uk>
Message-Id: <164529651121.25406.15337137068584246397-0@git.sr.ht>
[Move get_pg_mode to seg_helper.c for user-mode emulators. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
LA57/PKE/PKS is only relevant in 64-bit mode, and NXE is only relevant if
PAE is in use. Since there is code that checks PG_MODE_LA57 to determine
the canonicality of addresses, make sure that the bit is not set by
mistake in 32-bit mode. While it would not be a problem because 32-bit
addresses by definition fit in both 48-bit and 57-bit address spaces,
it is nicer if get_pg_mode() actually returns whether a feature is enabled,
and it allows a few simplifications in the page table walker.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
We invoke kvm_irqchip_commit_routes() for each addition to the MSI route
table, which is not efficient if we are adding lots of routes in some cases.
This patch lets the callers invoke kvm_irqchip_commit_routes(), so the
callers can decide how to optimize.
[1] https://lists.gnu.org/archive/html/qemu-devel/2021-11/msg00967.html
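A hedged sketch of the resulting pattern (add_msi_route() is a hypothetical
per-device helper; only kvm_irqchip_commit_routes() is the real QEMU
function):
    for (int i = 0; i < nr_vectors; i++) {
        add_msi_route(dev, i);                /* no implicit commit */
    }
    kvm_irqchip_commit_routes(kvm_state);     /* single commit by the caller */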
Signed-off-by: Longpeng <longpeng2@huawei.com>
Message-Id: <20220222141116.2091-3-longpeng2@huawei.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This fixes the following error triggered when stopping and resuming a 64-bit
Linux kernel via gdb:
qemu-system-x86_64.exe: WHPX: Failed to set virtual processor context, hr=c0350005
The previous logic for synchronizing the values did not take into account
that the lower 4 bits of the CR8 register, containing the priority level,
map to bits 7:4 of the APIC.TPR register (see section 10.8.6.1 of
Volume 3 of the Intel 64 and IA-32 Architectures Software Developer's Manual).
This caused WHvSetVirtualProcessorRegisters() to fail with an error,
effectively preventing GDB from changing the guest context.
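In other words (a minimal sketch of the bit mapping):
    #include <stdint.h>

    /* The priority level in CR8[3:0] lives in APIC.TPR[7:4]. */
    static inline uint64_t cr8_to_apic_tpr(uint64_t cr8) { return cr8 << 4; }
    static inline uint64_t apic_tpr_to_cr8(uint64_t tpr) { return tpr >> 4; }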
Signed-off-by: Ivan Shcherbakov <ivan@sysprogs.com>
Message-Id: <010b01d82874$bb4ef160$31ecd420$@sysprogs.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Make sure that pausing the VM while in 64-bit mode will set the
HF_CS64_MASK flag in env->hflags (see x86_update_hflags() in
target/i386/cpu.c).
Without it, the code in gdbstub.c would only use the 32-bit register values
when debugging 64-bit targets, making debugging effectively impossible.
Signed-off-by: Ivan Shcherbakov <ivan@sysprogs.com>
Message-Id: <00f701d82874$68b02000$3a106000$@sysprogs.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Merge remote-tracking branch 'remotes/pmaydell/tags/pull-target-arm-20220307' into staging
target-arm queue:
* cleanups of qemu_oom_check() and qemu_memalign()
* target/arm/translate-neon: UNDEF if VLD1/VST1 stride bits are non-zero
* target/arm/translate-neon: Simplify align field check for VLD3
* GICv3 ITS: add more trace events
* GICv3 ITS: implement 8-byte accesses properly
* GICv3: fix minor issues with some trace/log messages
* ui/cocoa: Use the standard about panel
* target/arm: Provide cpu property for controlling FEAT_LPA2
* hw/arm/virt: Disable LPA2 for -machine virt-6.2
# gpg: Signature made Mon 07 Mar 2022 16:46:06 GMT
# gpg: using RSA key E1A5C593CD419DE28E8315CF3C2525ED14360CDE
# gpg: issuer "peter.maydell@linaro.org"
# gpg: Good signature from "Peter Maydell <peter.maydell@linaro.org>" [ultimate]
# gpg: aka "Peter Maydell <pmaydell@gmail.com>" [ultimate]
# gpg: aka "Peter Maydell <pmaydell@chiark.greenend.org.uk>" [ultimate]
# Primary key fingerprint: E1A5 C593 CD41 9DE2 8E83 15CF 3C25 25ED 1436 0CDE
* remotes/pmaydell/tags/pull-target-arm-20220307:
hw/arm/virt: Disable LPA2 for -machine virt-6.2
target/arm: Provide cpu property for controlling FEAT_LPA2
ui/cocoa: Use the standard about panel
hw/intc/arm_gicv3_cpuif: Fix register names in ICV_HPPIR read trace event
hw/intc/arm_gicv3: Fix missing spaces in error log messages
hw/intc/arm_gicv3: Specify valid and impl in MemoryRegionOps
hw/intc/arm_gicv3_its: Add trace events for table reads and writes
hw/intc/arm_gicv3_its: Add trace events for commands
target/arm/translate-neon: Simplify align field check for VLD3
target/arm/translate-neon: UNDEF if VLD1/VST1 stride bits are non-zero
osdep: Move memalign-related functions to their own header
util: Put qemu_vfree() in memalign.c
util: Use meson checks for valloc() and memalign() presence
util: Share qemu_try_memalign() implementation between POSIX and Windows
meson.build: Don't misdetect posix_memalign() on Windows
util: Return valid allocation for qemu_try_memalign() with zero size
util: Unify implementations of qemu_memalign()
util: Make qemu_oom_check() a static function
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Move the various memalign-related functions out of osdep.h and into
their own header, which we include only where they are used.
While we're doing this, add some brief documentation comments.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20220226180723.1706285-10-peter.maydell@linaro.org
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220305233415.64627-3-philippe.mathieu.daude@gmail.com>
ArchCPU is our interface with target-specific code. Use it as
a forward-declared opaque pointer (abstract type), having its
structure defined by each target.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220214183144.27402-15-f4bug@amsat.org>
Replace the boilerplate code to declare CPU QOM types
and macros, and forward-declare the CPU instance type.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220214183144.27402-14-f4bug@amsat.org>
While CPUState is our interface with generic code, CPUArchState is
our interface with target-specific code. Use CPUArchState as an
abstract type, defined by each target.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220214183144.27402-13-f4bug@amsat.org>
excp_helper.c requires "exec/exec-all.h" for tlb_set_page_with_attrs()
and misc_helper.c for tlb_flush().
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220214183144.27402-8-f4bug@amsat.org>
Add cpu_thread_is_idle() to AccelOps, and implement it for the
KVM / WHPX accelerators.
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220207075426.81934-11-f4bug@amsat.org>
Mirror "sysemu/kvm.h" #ifdef'ry to define CONFIG_HAX_IS_POSSIBLE,
expose hax_allowed to hax_enabled() macro.
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220207075426.81934-9-f4bug@amsat.org>
The dh_alias redirect is intended to handle TCG types as distinguished
from C types. TCG does not distinguish signed int from unsigned int,
because they are the same size. However, we need to retain this
distinction for dh_typecode, lest we fail to extend abi types properly
for the host call parameters.
This bug was detected when running the 'arm' emulator on an s390
system. The s390 uses TCG_TARGET_EXTEND_ARGS which triggers code
in tcg_gen_callN to extend 32 bit values to 64 bits; the incorrect
sign data in the typemask for each argument caused the values to be
extended as unsigned values.
This simple program exhibits the problem:
    #include <stdio.h>
    #include <stdlib.h>

    static volatile int num = -9;
    static volatile int den = -5;

    int main(void)
    {
        int quo = num / den;    /* signed division: -9 / -5 == 1 */
        printf("num %d den %d quo %d\n", num, den, quo);
        exit(0);
    }
When run on the broken qemu, this results in:
num -9 den -5 quo 0
The correct result is:
num -9 den -5 quo 1
Fixes: 7319d83a73 ("tcg: Combine dh_is_64bit and dh_is_signed to dh_typecode")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/876
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reported-by: Christian Ehrhardt <christian.ehrhardt@canonical.com>
Tested-by: Christian Ehrhardt <christian.ehrhardt@canonical.com>
Tested-by: Keith Packard <keithp@keithp.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Merge remote-tracking branch 'remotes/thuth-gitlab/tags/pull-request-2022-02-21' into staging
* Improve virtio-net failover test
* Some small fixes for the qtests
* Misc header cleanups by Philippe
# gpg: Signature made Mon 21 Feb 2022 11:40:37 GMT
# gpg: using RSA key 27B88847EEE0250118F3EAB92ED9D774FE702DB5
# gpg: issuer "thuth@redhat.com"
# gpg: Good signature from "Thomas Huth <th.huth@gmx.de>" [full]
# gpg: aka "Thomas Huth <thuth@redhat.com>" [full]
# gpg: aka "Thomas Huth <huth@tuxfamily.org>" [full]
# gpg: aka "Thomas Huth <th.huth@posteo.de>" [unknown]
# Primary key fingerprint: 27B8 8847 EEE0 2501 18F3 EAB9 2ED9 D774 FE70 2DB5
* remotes/thuth-gitlab/tags/pull-request-2022-02-21: (25 commits)
hw/tricore: Remove unused and incorrect header
hw/m68k/mcf: Add missing 'exec/hwaddr.h' header
exec/exec-all: Move 'qemu/log.h' include in units requiring it
softmmu/runstate: Clean headers
linux-user: Add missing "qemu/timer.h" include
target: Add missing "qemu/timer.h" include
core/ptimers: Remove unnecessary 'sysemu/cpus.h' include
exec/ramblock: Add missing includes
qtest: Add missing 'hw/qdev-core.h' include
hw/acpi/memory_hotplug: Remove unused 'hw/acpi/pc-hotplug.h' header
hw/remote: Add missing include
hw/tpm: Clean includes
scripts: Remove the old switch-timer-api script
tests/qtest: failover: migration abort test with failover off
tests/qtest: failover: test migration if the guest doesn't support failover
tests/qtest: failover: check migration with failover off
tests/qtest: failover: check missing guest feature
tests/qtest: failover: check the feature is correctly provided
tests/qtest: failover: use a macro for check_one_card()
tests/qtest: failover: clean up pathname of tests
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The "hardware version" machinery (qemu_set_hw_version(),
qemu_hw_version(), and the QEMU_HW_VERSION define) is used by fewer
than 10 files. Move it out from osdep.h into a new
qemu/hw-version.h.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220208200856.3558249-6-peter.maydell@linaro.org
Many files use "qemu/log.h" declarations but neglect to include
it (they inherit it via "exec/exec-all.h"). "exec/exec-all.h" is
a core component and shouldn't be used that way. Move the
"qemu/log.h" inclusion locally to each unit requiring it.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20220207082756.82600-10-f4bug@amsat.org>
Signed-off-by: Thomas Huth <thuth@redhat.com>
I think these have been wrong since f193c7979c (do not depend on
thunk.h - more log items). Fix them so as not to confuse other
debugging.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220204204335.1689602-26-alex.bennee@linaro.org>
This is unnecessary, because the interrupt would be retrieved and queued
anyway by KVM_GET_VCPU_EVENTS and KVM_SET_VCPU_EVENTS respectively,
and it makes the flow more similar to the one for KVM_GET/SET_SREGS2.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This allows making the PDPTRs part of the migration
stream, and thus avoids reloading them after migration, which
is against the x86 spec.
Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
Message-Id: <20211101132300.192584-2-mlevitsk@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Renaming defines for quad in their various forms so that their signedness is
now explicit.
Done using git grep as suggested by Philippe, with a bit of hand editing to
keep assignments aligned.
Signed-off-by: Frédéric Pétrot <frederic.petrot@univ-grenoble-alpes.fr>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20220106210108.138226-2-frederic.petrot@univ-grenoble-alpes.fr
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
QEMU coding style mandates not using Linux kernel internal
types for scalar types. Replace __u32 with uint32_t.
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211116193955.2793171-1-philmd@redhat.com>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
In the SSE decode function gen_sse(), we combine a byte
'b' and a value 'b1' which can be [0..3], and switch on them:
    b |= (b1 << 8);
    switch (b) {
    ...
    default:
    unknown_op:
        gen_unknown_opcode(env, s);
        return;
    }
In three cases inside this switch, we were then also checking for
"if (b1 >= 2) { goto unknown_op; }".
However, this can never happen, because the 'case' values in each place
are 0x0nn or 0x1nn and the switch will have directed the b1 == (2, 3)
cases to the default already.
This check was added in commit c045af25a5 in 2010; the added code
was unnecessary then as well, and was apparently intended only to
ensure that we never accidentally ended up indexing off the end
of an sse_op_table with only 2 entries as a result of future bugs
in the decode logic.
Change the checks to assert() instead, and make sure they're always
immediately before the array access they are protecting.
Fixes: Coverity CID 1460207
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Merge tag 'for-upstream' of https://gitlab.com/bonzini/qemu into staging
Bugfixes for 6.2.
# gpg: Signature made Fri 19 Nov 2021 10:33:29 AM CET
# gpg: using RSA key F13338574B662389866C7682BFFBD25F78C7AE83
# gpg: issuer "pbonzini@redhat.com"
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" [full]
# gpg: aka "Paolo Bonzini <pbonzini@redhat.com>" [full]
* tag 'for-upstream' of https://gitlab.com/bonzini/qemu:
chardev/wctable: don't free the instance in wctablet_chr_finalize
meson.build: Support ncurses on MacOS and OpenBSD
docs: Spell QEMU all caps
qtest/am53c974-test: add test for reset before transfer
esp: ensure that async_len is reset to 0 during esp_hard_reset()
nvmm: Fix support for stable version
meson: fix botched compile check conversions
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
NVMM user version 1 is the version being shipped with netbsd-9,
which is the most recent stable branch of NetBSD. This makes it
possible to use the NVMM accelerator on the most recent NetBSD
release, 9.2, which lacks nvmm_cpu_stop.
(CC'ing maintainers)
Signed-off-by: Nia Alarie <nia@NetBSD.org>
Reviewed-by: Kamil Rytarowski <kamil@netbsd.org>
Message-Id: <YWblCe2J8GwCaV9U@homeworld.netbsd.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Use address_space_map/unmap and check for errors.
Signed-off-by: Dov Murik <dovmurik@linux.ibm.com>
Acked-by: Brijesh Singh <brijesh.singh@amd.com>
[Two lines wrapped for length - Daniel]
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
In sev_add_kernel_loader_hashes, the sizes of structs are known at
compile-time, so calculate needed padding at compile-time.
No functional change intended.
Signed-off-by: Dov Murik <dovmurik@linux.ibm.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Acked-by: Brijesh Singh <brijesh.singh@amd.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Commit cff03145ed ("sev/i386: Introduce sev_add_kernel_loader_hashes
for measured linux boot", 2021-09-30) introduced measured direct boot
with -kernel, using an OVMF-designated hashes table which QEMU fills.
However, no checks are performed on the validity of the hashes area
designated by OVMF. Specifically, if OVMF publishes the
SEV_HASH_TABLE_RV_GUID entry but it is filled with zeroes, this will
cause QEMU to write the hashes entries over the first page of the
guest's memory (GPA 0).
Add validity checks to the published area. If the hashes table area's
base address is zero, or its size is too small to fit the aligned hashes
table, display an error and stop the guest launch. In such a case, the
following error will be displayed:
qemu-system-x86_64: SEV: guest firmware hashes table area is invalid (base=0x0 size=0x0)
Signed-off-by: Dov Murik <dovmurik@linux.ibm.com>
Reported-by: Brijesh Singh <brijesh.singh@amd.com>
Acked-by: Brijesh Singh <brijesh.singh@amd.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Commit cff03145ed ("sev/i386: Introduce sev_add_kernel_loader_hashes
for measured linux boot", 2021-09-30) introduced measured direct boot
with -kernel, using an OVMF-designated hashes table which QEMU fills.
However, if OVMF doesn't designate such an area, QEMU would completely
abort the VM launch. This breaks launching with -kernel using older
OVMF images which don't publish the SEV_HASH_TABLE_RV_GUID.
Fix that so QEMU will only look for the hashes table if the sev-guest
kernel-hashes option is set to on. Otherwise, QEMU won't look for the
designated area in OVMF and won't fill that area.
To enable addition of kernel hashes, launch the guest with:
-object sev-guest,...,kernel-hashes=on
Signed-off-by: Dov Murik <dovmurik@linux.ibm.com>
Reported-by: Tom Lendacky <thomas.lendacky@amd.com>
Acked-by: Brijesh Singh <brijesh.singh@amd.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Introduce new boolean 'kernel-hashes' option on the sev-guest object.
It will be used to decide whether to add the hashes of
kernel/initrd/cmdline to SEV guest memory when booting with -kernel.
The default value is 'off'.
Signed-off-by: Dov Murik <dovmurik@linux.ibm.com>
Acked-by: Brijesh Singh <brijesh.singh@amd.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
mmu_translate is supposed to return an error code for page faults; it is
not able to handle other exceptions. The #GP case for noncanonical
virtual addresses is not handled correctly, and is incorrectly raised as
a page fault with error code 1. Since it cannot happen for nested
page tables, move it directly to handle_mmu_fault, even before the
invocation of mmu_translate.
Fixes: #676
Fixes: 661ff4879e ("target/i386: extract mmu_translate", 2021-05-11)
Cc: qemu-stable@nongnu.org
Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Correctly look up the paging mode of the hypervisor when it is using 64-bit
mode but the guest is not.
Fixes: 68746930ae ("target/i386: use mmu_translate for NPT walk", 2021-05-11)
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Merge remote-tracking branch 'remotes/bonzini/tags/for-upstream' into staging
* Build system fixes and cleanups
* DMA support in the multiboot option ROM
* Rename default-bus-bypass-iommu
* Deprecate -watchdog and cleanup -watchdog-action
* HVF fix for <PAGE_SIZE regions
* Support TSC scaling for AMD nested virtualization
* Fix for ESP fuzzing bug
# gpg: Signature made Tue 02 Nov 2021 10:57:37 AM EDT
# gpg: using RSA key F13338574B662389866C7682BFFBD25F78C7AE83
# gpg: issuer "pbonzini@redhat.com"
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" [full]
# gpg: aka "Paolo Bonzini <pbonzini@redhat.com>" [full]
* remotes/bonzini/tags/for-upstream: (27 commits)
configure: fix --audio-drv-list help message
configure: Remove the check for the __thread keyword
Move the l2tpv3 test from configure to meson.build
meson: remove unnecessary coreaudio test program
meson: remove pointless warnings
meson.build: Allow to disable OSS again
meson: bump submodule to 0.59.3
qtest/am53c974-test: add test for cancelling in-flight requests
esp: ensure in-flight SCSI requests are always cancelled
KVM: SVM: add migration support for nested TSC scaling
hw/i386: fix vmmouse registration
watchdog: remove select_watchdog_action
vl: deprecate -watchdog
watchdog: add information from -watchdog help to -device help
hw/i386: Rename default_bus_bypass_iommu
hvf: Avoid mapping regions < PAGE_SIZE as ram
configure: do not duplicate CPU_CFLAGS into QEMU_LDFLAGS
configure: remove useless NPTL probe
target/i386: use DMA-enabled multiboot ROM for new-enough QEMU machine types
optionrom: add a DMA-enabled multiboot ROM
...
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
This command was turned into a no-op four years ago in
commit 0c8465440d
Author: Peter Xu <peterx@redhat.com>
Date: Fri Dec 29 15:31:04 2017 +0800
hmp: obsolete "info ioapic"
Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Record cr2, error_code, and exception_index. The last of these means
that we must exit to cpu_loop ourselves, instead of letting
exception_index be overwritten.
Use the maperr parameter to properly set PG_ERROR_P_MASK.
Reviewed-by: Warner Losh <imp@bsdimp.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
I noticed -cpu help printing enough trailing spaces to make the output
at least 84 characters wide. Looks ugly unless the terminal is wider.
Ugly or not, trailing spaces are stupid.
The culprit is this line in x86_cpu_list_entry():
    qemu_printf("x86 %-20s %-58s\n", name, desc);
This prints a string left-justified in a minimum field width right before a
newline. Change it to
    qemu_printf("x86 %-20s %s\n", name, desc);
which avoids the trailing spaces and is simpler to boot.
A search for the pattern with "git-grep -E '%-[0-9]+s\\n'" found a few
more instances. Change them similarly.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Acked-by: Max Filippov <jcmvbkbc@gmail.com>
Message-Id: <20211009152401.2982862-1-armbru@redhat.com>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Because core-capability related features are model-specific and KVM
won't support them, remove core-capability from the CPU model to avoid the
warning message.
Signed-off-by: Chenyi Qiang <chenyi.qiang@intel.com>
Message-Id: <20210827064818.4698-3-chenyi.qiang@intel.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
We were using singlestep_enabled as a proxy for whether
translator_use_goto_tb would always return false.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Merge remote-tracking branch 'remotes/rth/tags/pull-tcg-20211013' into staging
Use MO_128 for 16-byte atomic memory operations.
Add cpu_ld/st_mmu memory primitives.
Move helper_ld/st memory helpers out of tcg.h.
Canonicalize alignment flags in MemOp.
# gpg: Signature made Wed 13 Oct 2021 10:48:45 AM PDT
# gpg: using RSA key 7A481E78868B4DB6A85A05C064DF38E8AF7E215F
# gpg: issuer "richard.henderson@linaro.org"
# gpg: Good signature from "Richard Henderson <richard.henderson@linaro.org>" [ultimate]
* remotes/rth/tags/pull-tcg-20211013:
tcg: Canonicalize alignment flags in MemOp
tcg: Move helper_*_mmu decls to tcg/tcg-ldst.h
target/arm: Use cpu_*_mmu instead of helper_*_mmu
target/sparc: Use cpu_*_mmu instead of helper_*_mmu
target/s390x: Use cpu_*_mmu instead of helper_*_mmu
target/mips: Use 8-byte memory ops for msa load/store
target/mips: Use cpu_*_data_ra for msa load/store
accel/tcg: Move cpu_atomic decls to exec/cpu_ldst.h
accel/tcg: Add cpu_{ld,st}*_mmu interfaces
target/hexagon: Implement cpu_mmu_index
target/s390x: Use MO_128 for 16 byte atomics
target/ppc: Use MO_128 for 16 byte atomics
target/i386: Use MO_128 for 16 byte atomics
target/arm: Use MO_128 for 16 byte atomics
memory: Log access direction for invalid accesses
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The struct kvm_sev_launch_measure has a constant and small size, and
therefore we can use a regular local variable for it instead of
allocating and freeing heap memory for it.
Signed-off-by: Dov Murik <dovmurik@linux.ibm.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <20211011173026.2454294-3-dovmurik@linux.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The struct kvm_sev_launch_start has a constant and small size, and
therefore we can use a regular local variable for it instead of
allocating and freeing heap memory for it.
Signed-off-by: Dov Murik <dovmurik@linux.ibm.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <20211011173026.2454294-2-dovmurik@linux.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Move qmp_query_sgx() and hmp_info_sgx() from target/i386/monitor.c
to hw/i386/sgx.c, removing the sgx_get_info() indirection and the
"hw/i386/sgx.h" header.
Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20211007175612.496366-5-philmd@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Move qmp_query_sgx_capabilities() from target/i386/monitor.c to
hw/i386/sgx.c, removing the sgx_get_capabilities() indirection.
Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20211007175612.496366-4-philmd@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
While being conditionally used for TARGET_I386 in hmp-commands-info.hx,
hmp_info_sev() is declared for all targets. Restrict its declaration
to targets that include "monitor/hmp-target.h". This is a minor cleanup.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20211007161716.453984-23-philmd@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Move qmp_query_sev() & hmp_info_sev()() from monitor.c to sev.c
and make sev_get_info() static. We don't need the stub anymore,
remove it. Add a stub for hmp_info_sev().
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20211007161716.453984-22-philmd@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Move qmp_query_sev_launch_measure() from monitor.c to sev.c
and make sev_get_launch_measurement() static. We don't need the
stub anymore, remove it.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20211007161716.453984-21-philmd@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Move qmp_query_sev_capabilities() from monitor.c to sev.c
and make sev_get_capabilities() static. We don't need the
stub anymore, remove it.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20211007161716.453984-20-philmd@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Move qmp_sev_inject_launch_secret() from monitor.c to sev.c
and make sev_inject_launch_secret() static. We don't need the
stub anymore, remove it.
Previously with binaries built without SEV, management layer
was getting an empty response:
{ "execute": "sev-inject-launch-secret",
"arguments": { "packet-header": "mypkt", "secret": "mypass", "gpa": 4294959104 }
}
{
"return": {
}
}
Now the response is explicit, mentioning the feature is disabled:
{ "execute": "sev-inject-launch-secret",
"arguments": { "packet-header": "mypkt", "secret": "mypass", "gpa": 4294959104 }
}
{
"error": {
"class": "GenericError",
"desc": "this feature or command is not currently supported"
}
}
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20211007161716.453984-19-philmd@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Move qmp_query_sev_attestation_report() from monitor.c to sev.c
and make sev_get_attestation_report() static. We don't need the
stub anymore, remove it.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20211007161716.453984-18-philmd@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Only declare sev_enabled() and sev_es_enabled() when CONFIG_SEV is
set, to allow the compiler to elide unused code. Remove unnecessary
stubs.
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20211007161716.453984-17-philmd@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
"sysemu/sev.h" is only used from x86-specific files. Let's move it
to include/hw/i386, and merge it with target/i386/sev.h.
Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20211007161716.453984-16-philmd@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
SEV is a x86 specific feature, and the "sev_i386.h" header
is already in target/i386/. Rename it as "sev.h" to simplify.
Patch created mechanically using:
$ git mv target/i386/sev_i386.h target/i386/sev.h
$ sed -i s/sev_i386.h/sev.h/ $(git grep -l sev_i386.h)
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <20211007161716.453984-15-philmd@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
SEV is irrelevant on user emulation, so restrict it to sysemu.
Some stubs are still required because they are used in cpu.c by
x86_register_cpudef_types(), so move the sysemu-specific stubs
to sev-sysemu-stub.c instead. This will allow us to simplify
monitor.c (which is not available in user emulation) in the
next commit.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20211007161716.453984-14-philmd@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Use g_autofree to remove a pair of g_free/goto.
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20211007161716.453984-13-philmd@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Removes a whole bunch of g_free's and a goto.
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Connor Kuehl <ckuehl@redhat.com>
Reviewed-by: Brijesh Singh <brijesh.singh@amd.com>
Message-Id: <20210603113017.34922-1-dgilbert@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20211007161716.453984-12-philmd@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The unique sev_encrypt_flash() invocation (in pc_system_flash_map)
is protected by the "if (sev_enabled())" check, so it is not
reachable otherwise.
Replace the abort() call in sev_es_save_reset_vector() by
g_assert_not_reached(), whose meaning is clearer.
Reviewed-by: Connor Kuehl <ckuehl@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20211007161716.453984-11-philmd@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Unused dead code makes review harder, so remove it.
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Connor Kuehl <ckuehl@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20211007161716.453984-10-philmd@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Commit 00b8105324 ("target-i386: Remove assert_no_error usage")
forgot to add the "qapi/error.h" include for &error_abort; add it now.
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Connor Kuehl <ckuehl@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20211007161716.453984-8-philmd@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
If the management layer tries to inject a secret, it gets an empty
response in case the guest doesn't have SEV enabled, or the binary
is built without SEV:
{ "execute": "sev-inject-launch-secret",
"arguments": { "packet-header": "mypkt", "secret": "mypass", "gpa": 4294959104 }
}
{
"return": {
}
}
Make it clearer by returning an error:
{ "execute": "sev-inject-launch-secret",
"arguments": { "packet-header": "mypkt", "secret": "mypass", "gpa": 4294959104 }
}
{
"error": {
"class": "GenericError",
"desc": "SEV not enabled for guest"
}
}
Note: we will remove the sev_inject_launch_secret() stub in a few commits,
so we don't bother to add error_setg() there.
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Connor Kuehl <ckuehl@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20211007161716.453984-7-philmd@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Multiple errors might be reported to the monitor; better to prefix the
SEV ones so we can distinguish them.
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <20211007161716.453984-6-philmd@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
SEV is x86-specific, no need to add its stub to other
architectures. Move the stub file to target/i386/kvm/.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20211007161716.453984-5-philmd@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Introduce the i386_softmmu_kvm Meson source set to be able to
add features dependent on CONFIG_KVM.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20211007161716.453984-4-philmd@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
We're about to move this out of tcg.h, so rename it
as we did when moving MemOp.
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Add the sev_add_kernel_loader_hashes function to calculate the hashes of
the kernel/initrd/cmdline and fill a designated OVMF encrypted hash
table area. For this to work, OVMF must support an encrypted area to
place the data which is advertised via a special GUID in the OVMF reset
table.
The hash of each of the files is calculated (or of the string in the case
of the cmdline, with the trailing '\0' included). Each entry in the hashes
table is GUID-identified and, since they're passed through the
sev_encrypt_flash interface, the hashes will be accumulated by the AMD
PSP measurement (SEV_LAUNCH_MEASURE).
Co-developed-by: James Bottomley <jejb@linux.ibm.com>
Signed-off-by: James Bottomley <jejb@linux.ibm.com>
Signed-off-by: Dov Murik <dovmurik@linux.ibm.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20210930054915.13252-2-dovmurik@linux.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
KVM implements some Hyper-V 2016 functions, so providing a WS2008R2 version
is somewhat incorrect. While guests generally shouldn't care about it
and should always check feature bits, it is known that some tools in Windows
actually check version info.
For compatibility reasons, make the change for 6.2 machine types only.
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20210902093530.345756-9-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Currently, we hardcode the Hyper-V version id (CPUID 0x40000002) to
WS2008R2, and it is known that certain tools in Windows check this. It
seems useful to provide some flexibility by making it possible to change
this info at will. The CPUID information is defined in the TLFS as:
EAX: Build Number
EBX: Bits 31-16: Major Version
     Bits 15-0:  Minor Version
ECX: Service Pack
EDX: Bits 31-24: Service Branch
     Bits 23-0:  Service Number
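A sketch of how the fields pack into the registers (variable names are
hypothetical):
    uint32_t eax = build_number;
    uint32_t ebx = (major_version << 16) | (minor_version & 0xffff);
    uint32_t ecx = service_pack;
    uint32_t edx = (service_branch << 24) | (service_number & 0xffffff);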
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20210902093530.345756-8-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The enlightenment allows using Hyper-V SynIC with hardware APICv/AVIC
enabled. Normally, Hyper-V SynIC disables these hardware features and
suggests that the guest use the paravirtualized AutoEOI feature. Linux 4.15
gains support for conditional APICv/AVIC disablement: the feature
stays on until the guest tries to use the AutoEOI feature with SynIC. With
the 'HV_DEPRECATING_AEOI_RECOMMENDED' bit exposed, modern enough Windows/
Hyper-V versions should follow the recommendation and not use the
(unwanted) feature.
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20210902093530.345756-7-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
In preparation for enabling Hyper-V + APICv/AVIC, move
HV_APIC_ACCESS_RECOMMENDED setting out of kvm_hyperv_properties[]: the
'real' feature bit for the vAPIC features is HV_APIC_ACCESS_AVAILABLE,
HV_APIC_ACCESS_RECOMMENDED is a recommendation to use the feature which
we may not always want to give.
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20210902093530.345756-6-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
By default, KVM allows the guest to use all currently supported Hyper-V
enlightenments when the Hyper-V CPUID interface is exposed, regardless of
whether some features were announced in guest-visible CPUIDs. The
hv-enforce-cpuid feature alters this behavior and only allows the guest to
use exposed Hyper-V enlightenments. The feature is supported by Linux >= 5.14
and is not enabled by default in QEMU.
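Example usage (assuming a new-enough KVM host):
    -cpu host,hv-relaxed,hv-vapic,hv-enforce-cpuid=on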
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20210902093530.345756-5-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
By default, KVM allows the guest to use all currently supported PV features
even when they were not announced in guest-visible CPUIDs. Introduce a new
"kvm-pv-enforce-cpuid" flag to limit the supported feature set to the
exposed features. The feature is supported by Linux >= 5.10 and is not
enabled by default in QEMU.
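Example usage:
    -cpu host,kvm-pv-enforce-cpuid=on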
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20210902093530.345756-4-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Provide a name field for all the memory listeners. It can be used to identify
which memory listener is which.
Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Message-Id: <20210817013553.30584-2-peterx@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
In sev_read_file_base64() we call g_file_get_contents(), which
allocates memory for the file contents. We then base64-decode the
contents (which allocates another buffer for the decoded data), but
forgot to free the memory for the original file data.
Use g_autofree to ensure that the file data is freed.
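A standalone sketch of the pattern (not the exact QEMU function):
    #include <glib.h>

    static guchar *read_base64_file(const char *filename, gsize *len,
                                    GError **error)
    {
        /* g_autofree releases 'data' on every return path, so the decoded
         * buffer can be handed back without leaking the raw file contents. */
        g_autofree gchar *data = NULL;

        if (!g_file_get_contents(filename, &data, NULL, error)) {
            return NULL;
        }
        return g_base64_decode(data, len);
    }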
Fixes: Coverity CID 1459997
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20210820165650.2839-1-peter.maydell@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Libvirt can use query-sgx-capabilities to get the host
SGX capabilities and decide how much SGX EPC to allocate to the VM.
Signed-off-by: Yang Zhong <yang.zhong@intel.com>
Message-Id: <20210910102258.46648-3-yang.zhong@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The QMP and HMP interfaces can be used by the monitor or QMP tools to retrieve
SGX information from the VM side when SGX is enabled on an Intel platform.
Signed-off-by: Yang Zhong <yang.zhong@intel.com>
Message-Id: <20210910102258.46648-2-yang.zhong@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
SGX capabilities are enumerated through CPUID_0x12.
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Yang Zhong <yang.zhong@intel.com>
Message-Id: <20210719112136.57018-16-yang.zhong@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The SGX sub-leafs are enumerated at CPUID 0x12. Indices 0 and 1 are
always present when SGX is supported, and enumerate SGX features and
capabilities. Indices >=2 are directly correlated with the platform's
EPC sections. Because the number of EPC sections is dynamic and user
defined, the list of SGX sub-leaves is effectively "NULL" terminated.
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Yang Zhong <yang.zhong@intel.com>
Message-Id: <20210719112136.57018-15-yang.zhong@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
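A standalone sketch of the enumeration described above, assuming the SDM
encoding that for indices >= 2 EAX[3:0] holds the sub-leaf type and a value
of 0 ("invalid") ends the list of EPC sections:

    #include <cpuid.h>
    #include <stdio.h>

    static void enumerate_sgx_subleaves(void)
    {
        unsigned int eax, ebx, ecx, edx;
        unsigned int index;

        for (index = 2; ; index++) {
            if (!__get_cpuid_count(0x12, index, &eax, &ebx, &ecx, &edx) ||
                (eax & 0xf) == 0) {
                break;                   /* invalid sub-leaf: end of EPC list */
            }
            printf("EPC section %u: type %u\n", index - 2, eax & 0xf);
        }
    }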
If the guest wants to fully use SGX, it needs to be able to access the
provisioning key. Add a new KVM_CAP_SGX_ATTRIBUTE capability to KVM to
support exposing the provisioning key to KVM guests.
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Yang Zhong <yang.zhong@intel.com>
Message-Id: <20210719112136.57018-14-yang.zhong@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Expose SGX to the guest if and only if KVM is enabled and supports
virtualization of SGX. While the majority of ENCLS can be emulated to
some degree, because SGX uses a hardware-based root of trust, the
attestation aspects of SGX cannot be emulated in software, i.e.
ultimately emulation will fail as software cannot generate a valid
quote/report. The complexity of partially emulating SGX in Qemu far
outweighs the value added, e.g. an SGX specific simulator for userspace
applications can emulate SGX for development and testing purposes.
Note, access to the PROVISIONKEY is not yet advertised to the guest as
KVM blocks access to the PROVISIONKEY by default and requires userspace
to provide additional credentials (via ioctl()) to expose PROVISIONKEY.
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Yang Zhong <yang.zhong@intel.com>
Message-Id: <20210719112136.57018-13-yang.zhong@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
SGX adds multiple flags to FEATURE_CONTROL to enable SGX and Flexible
Launch Control.
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Yang Zhong <yang.zhong@intel.com>
Message-Id: <20210719112136.57018-12-yang.zhong@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
On real hardware, on systems that supports SGX Launch Control, those
MSRs are initialized to digest of Intel's signing key; on systems that
don't support SGX Launch Control, those MSRs are not available but
hardware always uses digest of Intel's signing key in EINIT.
KVM advertises SGX LC via CPUID if and only if the MSRs are writable.
Unconditionally initialize those MSRs to the digest of Intel's signing key
when the CPU is realized and reset, to reflect that fact. This avoids a
potential bug in case kvm_arch_put_registers() is called before
kvm_arch_get_registers(): the guest's virtual SGX_LEPUBKEYHASH MSRs would
then be set to 0, even though KVM initializes them to the digest of Intel's
signing key by default, since KVM allows those MSRs to be updated by QEMU
to support live migration.
Save/restore the SGX Launch Enclave Public Key Hash MSRs if SGX Launch
Control (LC) is exposed to the guest. Likewise, migrate the MSRs if they
are writable by the guest.
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Kai Huang <kai.huang@intel.com>
Signed-off-by: Yang Zhong <yang.zhong@intel.com>
Message-Id: <20210719112136.57018-11-yang.zhong@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
CPUID leaf 12_1_EAX is an Intel-defined feature bits leaf enumerating
the platform's SGX capabilities that may be utilized by an enclave, e.g.
whether or not an enclave can gain access to the provision key.
Currently there are six capabilities:
- INIT: set when the enclave has been initialized by EINIT. Cannot
be set by software, i.e. forced to zero in CPUID.
- DEBUG: permits a debugger to read/write into the enclave.
- MODE64BIT: the enclave runs in 64-bit mode.
- PROVISIONKEY: grants access to the provision key.
- EINITTOKENKEY: grants access to the EINIT token key, i.e. the
enclave can generate EINIT tokens.
- KSS: Key Separation and Sharing enabled for the enclave.
Note that the entirety of CPUID.0x12.0x1, i.e. all registers, enumerates
the allowed ATTRIBUTES (128 bits), but only bits 31:0 are directly
exposed to the user (via FEAT_12_1_EAX). Bits 63:32 are currently all
reserved and bits 127:64 correspond to the allowed XSAVE Feature Request
Mask, which is calculated based on other CPU features, e.g. XSAVE, MPX,
AVX, etc... and is not exposed to the user.
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Yang Zhong <yang.zhong@intel.com>
Message-Id: <20210719112136.57018-10-yang.zhong@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
CPUID leaf 12_0_EBX is an Intel-defined feature bits leaf enumerating
the platform's SGX extended capabilities. Currently there is a single
capability:
- EXINFO: record information about #PFs and #GPs in the enclave's SSA
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Yang Zhong <yang.zhong@intel.com>
Message-Id: <20210719112136.57018-9-yang.zhong@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
CPUID leaf 12_0_EAX is an Intel-defined feature bits leaf enumerating
the CPU's SGX capabilities, e.g. supported SGX instruction sets.
Currently there are four enumerated capabilities:
- SGX1 instruction set, i.e. "base" SGX
- SGX2 instruction set for dynamic EPC management
- ENCLV instruction set for VMM oversubscription of EPC
- ENCLS-C instruction set for thread safe variants of ENCLS
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Yang Zhong <yang.zhong@intel.com>
Message-Id: <20210719112136.57018-8-yang.zhong@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Add CPUID defines for SGX and SGX Launch Control (LC), as well as
defines for their associated FEATURE_CONTROL MSR bits. Define the
Launch Enclave Public Key Hash MSRs (LE Hash MSRs), which exist
when SGX LC is present (in CPUID), and are writable when SGX LC is
enabled (in FEATURE_CONTROL).
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Yang Zhong <yang.zhong@intel.com>
Message-Id: <20210719112136.57018-7-yang.zhong@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
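A hedged sketch of the kind of constants such a patch introduces; the MSR
numbers and bit positions below are recalled from the Intel SDM rather than
copied from the QEMU patch, so verify them against the SDM before reuse:

    #define MSR_IA32_FEATURE_CONTROL   0x0000003a
    #define FEATURE_CONTROL_LOCKED     (1ULL << 0)
    #define FEATURE_CONTROL_SGX_LC     (1ULL << 17)  /* SGX Launch Control enable */
    #define FEATURE_CONTROL_SGX        (1ULL << 18)  /* SGX global enable */

    /* Launch Enclave Public Key Hash MSRs, writable when SGX LC is enabled */
    #define MSR_IA32_SGXLEPUBKEYHASH0  0x0000008c
    #define MSR_IA32_SGXLEPUBKEYHASH1  0x0000008d
    #define MSR_IA32_SGXLEPUBKEYHASH2  0x0000008e
    #define MSR_IA32_SGXLEPUBKEYHASH3  0x0000008f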
There is nothing target specific about this. The implementation
is host specific, but the declaration is 100% common.
Reviewed-By: Warner Losh <imp@bsdimp.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
With Apple Silicon available to the masses, it's a good time to add support
for driving its virtualization extensions from QEMU.
This patch adds all necessary architecture specific code to get basic VMs
working, including save/restore.
Known limitations:
- WFI handling is missing (follows in later patch)
- No watchpoint/breakpoint support
Signed-off-by: Alexander Graf <agraf@csgraf.de>
Reviewed-by: Roman Bolshakov <r.bolshakov@yadro.com>
Reviewed-by: Sergio Lopez <slp@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20210916155404.86958-5-agraf@csgraf.de
[PMM: added missing #include]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We will need to install a migration helper for the ARM hvf backend.
Let's introduce an arch callback for the overall hvf init chain to
do so.
Signed-off-by: Alexander Graf <agraf@csgraf.de>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20210916155404.86958-4-agraf@csgraf.de
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
cpu_get_pic_interrupt() is now unreachable from user-mode,
delete the unnecessary stubs.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Warner Losh <imp@bsdimp.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210911165434.531552-25-f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Following the logic of commit 30493a030f ("i386: split seg_helper
into user-only and sysemu parts"), move x86_cpu_exec_interrupt()
under sysemu/seg_helper.c.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-By: Warner Losh <imp@bsdimp.com>
Message-Id: <20210911165434.531552-12-f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Restrict cpu_exec_interrupt() and its callees to sysemu.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Warner Losh <imp@bsdimp.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210911165434.531552-11-f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
do_interrupt() is sysemu specific. However, due to an x86-specific
hack, it is also used in user-mode emulation, which is why it couldn't
be restricted to CONFIG_SOFTMMU (see the comment added in commit
78271684719: "cpu: tcg_ops: move to tcg-cpu-ops.h, keep a pointer
in CPUClass").
Keep the hack but rename the handler as fake_user_interrupt()
and restrict do_interrupt() to sysemu.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Warner Losh <imp@bsdimp.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210911165434.531552-6-f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Merge two TARGET_X86_64 consecutive blocks.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Warner Losh <imp@bsdimp.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210911165434.531552-4-f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Restrict some sysemu-only fpu_helper helpers (see commit
83a3d9c7402: "i386: separate fpu_helper sysemu-only parts").
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Warner Losh <imp@bsdimp.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210911165434.531552-3-f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
[rth: Split out of a larger patch.]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
A parameter max_size was added to the RAMBlockNotifier
ram_block_added function. Use max_size for pre-allocation
of HVA space.
Signed-off-by: Reinoud Zandijk <Reinoud@NetBSD.org>
Message-Id: <20210718134650.1191-3-reinoud@NetBSD.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The feature allows the VMSAVE and VMLOAD instructions to execute in guest mode without
causing a VMEXIT. (APM2 15.33.1)
Signed-off-by: Lara Lazier <laramglazier@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Writes to cr8 affect v_tpr. This could set or unset an interrupt
request as the priority might have changed.
Signed-off-by: Lara Lazier <laramglazier@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The APM2 states that if V_IGN_TPR is nonzero, the current
virtual interrupt ignores the (virtual) TPR.
Signed-off-by: Lara Lazier <laramglazier@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
VGIF provides masking capability for when virtual interrupts
are taken. (APM2)
Signed-off-by: Lara Lazier <laramglazier@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Moved int_ctl into the CPUX86State structure. It removes some
unnecessary stores and loads, and prepares for tracking the vIRQ
state even when it is masked due to vGIF.
Signed-off-by: Lara Lazier <laramglazier@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
VGIF allows STGI and CLGI to execute in guest mode and control virtual
interrupts in guest mode.
When the VGIF feature is enabled then:
* executing STGI in the guest sets bit 9 of the VMCB offset 60h.
* executing CLGI in the guest clears bit 9 of the VMCB offset 60h.
Signed-off-by: Lara Lazier <laramglazier@gmail.com>
Message-Id: <20210730070742.9674-1-laramglazier@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
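A small sketch of the bookkeeping the commit describes, with bit 9 of the
VMCB int_ctl field (offset 60h) tracking the virtual GIF; the helper names
are illustrative, not the actual QEMU ones:

    #include <stdint.h>
    #include <stdbool.h>

    #define V_GIF_SHIFT 9
    #define V_GIF_MASK  (1u << V_GIF_SHIFT)

    static inline void svm_set_vgif(uint32_t *int_ctl, bool enabled)
    {
        if (enabled) {
            *int_ctl |= V_GIF_MASK;      /* guest executed STGI */
        } else {
            *int_ctl &= ~V_GIF_MASK;     /* guest executed CLGI */
        }
    }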
APM2 requires that VMRUN and VMLOAD canonicalize (sign extend to 63
from 48/57) all base addresses in the segment registers that have been
respectively loaded.
Signed-off-by: Lara Lazier <laramglazier@gmail.com>
Message-Id: <20210804113058.45186-1-laramglazier@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
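A minimal sketch of the canonicalization described above: sign-extend a
segment base from the implemented virtual-address width (48 or 57 bits) up
to bit 63. Standalone illustration, not the QEMU helper:

    #include <stdint.h>

    static inline uint64_t canonicalize(uint64_t base, int va_bits)
    {
        int shift = 64 - va_bits;

        return (uint64_t)(((int64_t)(base << shift)) >> shift);
    }

    /* e.g. canonicalize(0x0000800000000000, 48) == 0xffff800000000000 */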
Booting Fedora kernels with -cpu max hangs very early in boot. Disabling
the la57 CPUID bit fixes the problem. git bisect traced the regression to
commit 213ff024a2 (HEAD, refs/bisect/bad)
Author: Lara Lazier <laramglazier@gmail.com>
Date: Wed Jul 21 17:26:50 2021 +0200
target/i386: Added consistency checks for CR4
All MBZ bits in CR4 must be zero. (APM2 15.5)
Added reserved bitmask and added checks in both
helper_vmrun and helper_write_crN.
Signed-off-by: Lara Lazier <laramglazier@gmail.com>
Message-Id: <20210721152651.14683-2-laramglazier@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
In this commit CR4_RESERVED_MASK is missing CR4_LA57_MASK and
two others. Adding this lets Fedora kernels boot once again.
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Tested-by: Richard W.M. Jones <rjones@redhat.com>
Message-Id: <20210831175033.175584-1-berrange@redhat.com>
[Removed VMXE/SMXE, matching the commit message. - Paolo]
Fixes: 213ff024a2 ("target/i386: Added consistency checks for CR4", 2021-07-22)
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Most callers check the return value. Some check whether it set an
error. Functionally equivalent, but the former tends to be easier on
the eyes, so do that everywhere.
Prior art: commit c6ecec43b2 "qemu-option: Check return value instead
of @err where convenient".
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20210720125408.387910-10-armbru@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
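A hedged sketch of the convention referred to above, using a hypothetical
foo() that follows QEMU's Error API (returns false and sets *errp on
failure); the first caller shows the preferred form:

    #include "qemu/osdep.h"
    #include "qapi/error.h"

    bool foo(int arg, Error **errp);     /* hypothetical callee */

    void caller_checks_return_value(Error **errp)
    {
        if (!foo(42, errp)) {            /* preferred: test the return value */
            return;
        }
    }

    void caller_checks_error(Error **errp)
    {
        Error *err = NULL;

        foo(42, &err);
        if (err) {                       /* equivalent, but noisier */
            error_propagate(errp, err);
            return;
        }
    }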
There is nothing to delete after migrate_add_blocker() failed. Trying
anyway is safe, but useless. Don't.
Cc: Sunil Muthuswamy <sunilmut@microsoft.com>
Cc: Kamil Rytarowski <kamil@netbsd.org>
Cc: Reinoud Zandijk <reinoud@netbsd.org>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20210720125408.387910-9-armbru@redhat.com>
Reviewed-by: Reinoud Zandijk <reinoud@NetBSD.org>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
invtsc_mig_blocker has static storage duration. When a CPU with
certain features is initialized, and invtsc_mig_blocker is still null,
we add a migration blocker and store it in invtsc_mig_blocker.
The object is freed when migrate_add_blocker() fails, leaving
invtsc_mig_blocker dangling. It is not freed on later failures.
Same for hv_passthrough_mig_blocker and hv_no_nonarch_cs_mig_blocker.
All failures are actually fatal, so whether we free or not doesn't
really matter, except as bad examples to be copied / imitated.
Clean this up in a minimal way: never free these blocker objects.
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Marcelo Tosatti <mtosatti@redhat.com>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20210720125408.387910-7-armbru@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
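A hedged sketch of the resulting pattern: the blocker has static storage
duration and is intentionally never freed, since every failure path here is
fatal anyway. Built on migrate_add_blocker() as it existed at the time; the
function name and message are illustrative:

    #include "qemu/osdep.h"
    #include "qapi/error.h"
    #include "migration/blocker.h"

    static Error *invtsc_mig_blocker;

    static int block_migration_for_invtsc(Error **errp)
    {
        if (invtsc_mig_blocker) {
            return 0;                    /* blocker already installed */
        }
        error_setg(&invtsc_mig_blocker,
                   "State blocked by non-migratable CPU device (invtsc flag)");
        /* do not free the blocker on failure: all failures are fatal */
        return migrate_add_blocker(invtsc_mig_blocker, errp);
    }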
We did this with scripts/coccinelle/use-error_fatal.cocci before, in
commit 50beeb6809 and 007b06578a. This commit cleans up rarer
variations that don't seem worth matching with Coccinelle.
Cc: Thomas Huth <thuth@redhat.com>
Cc: Cornelia Huck <cornelia.huck@de.ibm.com>
Cc: Peter Xu <peterx@redhat.com>
Cc: Juan Quintela <quintela@redhat.com>
Cc: Stefan Hajnoczi <stefanha@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Marc-André Lureau <marcandre.lureau@redhat.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20210720125408.387910-2-armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
The AVX_VNNI feature is not in the Cooperlake platform; remove it
from the cpu model.
Signed-off-by: Yang Zhong <yang.zhong@intel.com>
Message-Id: <20210820054611.84303-1-yang.zhong@intel.com>
Fixes: c1826ea6a0 ("i386/cpu: Expose AVX_VNNI instruction to guest")
Cc: qemu-stable@nongnu.org
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
At present, there's no mechanism intelligent enough to virtualize split
lock detection correctly. Remove it from the Snowridge CPU model to avoid
exposing the feature.
Signed-off-by: Chenyi Qiang <chenyi.qiang@intel.com>
Message-Id: <20210630012053.10098-1-chenyi.qiang@intel.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
~0UL has 64 bits on Linux and 32 bits on Windows.
Fixes: https://gitlab.com/qemu-project/qemu/-/issues/512
Reported-by: Volker Rümelin <vr_qemu@t-online.de>
Signed-off-by: Lara Lazier <laramglazier@gmail.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20210812111056.26926-1-laramglazier@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The shift constant was incorrect, causing int_prio to always be zero.
Signed-off-by: Lara Lazier <laramglazier@gmail.com>
[Rewritten commit message since v1 had already been included. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
VMRUN exits with SVM_EXIT_ERR if either:
* the injected event has a reserved type, or
* the injected event is of type 3 (exception) and the specified vector
does not correspond to an exception.
This does not fix the entire exc_inj test in kvm-unit-tests.
Signed-off-by: Lara Lazier <laramglazier@gmail.com>
Message-Id: <20210725090855.19713-1-laramglazier@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Coverity reports a potential NULL pointer dereference in
get_supported_hv_cpuid_legacy() when 'cs->kvm_state' is NULL. While
'cs->kvm_state' can indeed be NULL in hv_cpuid_get_host(),
kvm_hyperv_expand_features() makes sure that this only happens when
KVM_CAP_SYS_HYPERV_CPUID is supported, and KVM_CAP_SYS_HYPERV_CPUID
implies KVM_CAP_HYPERV_CPUID, so get_supported_hv_cpuid_legacy() is
never really called. Add asserts to strengthen the protection against
broken KVM behavior.
Coverity: CID 1458243
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20210716115852.418293-1-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Missed in commit f3478392 "docs: Move deprecation, build
and license info out of system/"
Signed-off-by: Mao Zhongyi <maozhongyi@cmss.chinamobile.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20210723065828.1336760-1-maozhongyi@cmss.chinamobile.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Some cpu properties have to be set only for cpu models in builtin_x86_defs,
registered with x86_register_cpu_model_type, and not for
cpu models "base", "max", and the subclass "host".
These properties are the ones set by the function x86_cpu_apply_props
(also including kvm_default_props and tcg_default_props), and the
"vendor" property for the KVM and HVF accelerators.
After a recent refactoring of the cpu code, which also affected these
properties, they were instead set unconditionally for all x86 cpus.
This was detected as a bug with nested virtualization on AMD with cpu
"host": svm was not turned on by default, due to the incorrect application
of kvm_default_props via x86_cpu_apply_props, which set svm to "off".
Rectify the bug introduced in commit "i386: split cpu accelerators"
and document the functions that are builtin_x86_defs-only.
Signed-off-by: Claudio Fontana <cfontana@suse.de>
Tested-by: Alexander Bulekov <alxndr@bu.edu>
Fixes: f5cc5a5c ("i386: split cpu accelerators from cpu.c,"...)
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/477
Message-Id: <20210723112921.12637-1-cfontana@suse.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
All MBZ bits in CR3 must be zero. (APM2 15.5)
Added checks in both helper_vmrun and helper_write_crN.
When EFER.LMA is zero, the upper 32 bits need to be zeroed.
Signed-off-by: Lara Lazier <laramglazier@gmail.com>
Message-Id: <20210723112740.45962-1-laramglazier@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
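A hedged sketch of the CR3 handling just described; the reserved-bit mask
depends on the guest's physical address width, so it is passed in rather
than hard-coded. Illustrative standalone code, not the QEMU helpers:

    #include <stdbool.h>
    #include <stdint.h>

    static bool svm_check_cr3(uint64_t *cr3, bool efer_lma, int phys_bits)
    {
        if (!efer_lma) {
            *cr3 &= 0xffffffffULL;       /* outside long mode, drop bits 63:32 */
            return true;
        }
        /* in long mode, bits above the physical address width must be zero */
        if (*cr3 & ~((1ULL << phys_bits) - 1)) {
            return false;                /* consistency check fails */
        }
        return true;
    }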
EFER.SVME has to be set, and EFER reserved bits must
be zero.
In addition the combinations
* EFER.LMA or EFER.LME is non-zero and the processor does not support LM
* non-zero EFER.LME and CR0.PG and zero CR4.PAE
* non-zero EFER.LME and CR0.PG and zero CR0.PE
* non-zero EFER.LME, CR0.PG, CR4.PAE, CS.L and CS.D
are all invalid.
(AMD64 Architecture Programmer's Manual, V2, 15.5)
Signed-off-by: Lara Lazier <laramglazier@gmail.com>
Message-Id: <20210721152651.14683-3-laramglazier@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
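A hedged sketch of the consistency checks listed in the commit above,
returning true when the state is valid; the EFER bit constants are
illustrative stand-ins for QEMU's own definitions:

    #include <stdbool.h>
    #include <stdint.h>

    #define EFER_LME   (1ULL << 8)
    #define EFER_LMA   (1ULL << 10)
    #define EFER_SVME  (1ULL << 12)

    static bool svm_efer_valid(uint64_t efer, uint64_t efer_reserved_mask,
                               bool cpu_has_lm, bool cr0_pg, bool cr0_pe,
                               bool cr4_pae, bool cs_l, bool cs_d)
    {
        if (!(efer & EFER_SVME) || (efer & efer_reserved_mask)) {
            return false;                /* SVME clear or reserved bits set */
        }
        if ((efer & (EFER_LME | EFER_LMA)) && !cpu_has_lm) {
            return false;                /* long mode not supported */
        }
        if ((efer & EFER_LME) && cr0_pg && !cr4_pae) {
            return false;
        }
        if ((efer & EFER_LME) && cr0_pg && !cr0_pe) {
            return false;
        }
        if ((efer & EFER_LME) && cr0_pg && cr4_pae && cs_l && cs_d) {
            return false;
        }
        return true;
    }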
All MBZ bits in CR4 must be zero. (APM2 15.5)
Added reserved bitmask and added checks in both
helper_vmrun and helper_write_crN.
Signed-off-by: Lara Lazier <laramglazier@gmail.com>
Message-Id: <20210721152651.14683-2-laramglazier@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The APM2 states that the processor takes a virtual INTR interrupt
if V_IRQ and V_INTR_PRIO indicate that there is a virtual interrupt
pending whose priority is greater than the value in V_TPR.
Signed-off-by: Lara Lazier <laramglazier@gmail.com>
Message-Id: <20210721152651.14683-1-laramglazier@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
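A hedged sketch of that test. The int_ctl field layout used here (V_TPR in
the low bits, V_IRQ bit 8, V_INTR_PRIO bits 19:16, V_IGN_TPR bit 20) is
recalled from the APM and should be double-checked against it:

    #include <stdbool.h>
    #include <stdint.h>

    #define V_TPR_MASK        0x0fu
    #define V_IRQ_MASK        (1u << 8)
    #define V_INTR_PRIO_SHIFT 16
    #define V_INTR_PRIO_MASK  (0x0fu << V_INTR_PRIO_SHIFT)
    #define V_IGN_TPR_MASK    (1u << 20)

    static bool virtual_intr_pending(uint32_t int_ctl)
    {
        unsigned int prio = (int_ctl & V_INTR_PRIO_MASK) >> V_INTR_PRIO_SHIFT;
        unsigned int tpr = int_ctl & V_TPR_MASK;

        if (!(int_ctl & V_IRQ_MASK)) {
            return false;                /* no virtual interrupt pending */
        }
        if (int_ctl & V_IGN_TPR_MASK) {
            return true;                 /* V_IGN_TPR: the TPR is ignored */
        }
        return prio > tpr;               /* take the INTR only if prio > V_TPR */
    }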
The hook is now unused, with breakpoints checked outside translation.
Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Return false for RF set, as we do in i386_tr_breakpoint_check.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Always provide the atomic interface using TCGMemOpIdx oi
and uintptr_t retaddr. Rename from helper_* to cpu_* so
as to (mostly) match the exec/cpu_ldst.h functions, and
to emphasize that they are not callable from TCG directly.
Tested-by: Cole Robinson <crobinso@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Implement x86 fcs:fip, fds:fdp.
Trivial x86 watchpoint cleanup.
-----BEGIN PGP SIGNATURE-----
iQFRBAABCgA7FiEEekgeeIaLTbaoWgXAZN846K9+IV8FAmDtwQ0dHHJpY2hhcmQu
aGVuZGVyc29uQGxpbmFyby5vcmcACgkQZN846K9+IV/GnAf/SYNhdmIuKCWk/uk8
IC0v2sm5KHVFfkfkobQ+04pFB26tX557i2zTtEfj/A5QVlJSvliZowCVIO6JV63N
9oedLSzdqrxRqDb+Mpmkwnam/k5XfrC20V7os17FuZE98u3Jgky8QNs7Uxq0bCBZ
01AKB9HNRFKeY2o55IxPwC7CLtyz3SStJJP28aa5ROYK7MIP303qsI5pezgkHgGo
/qo5GXwHs/Pu4pnFuAJyOfG38wT6uTt7NrAGjTH0VhbAKNMSP/QND+VvxbuCugZR
6MEVeb+rLy+MN4b3dH6kI89JQvQGBCaWZD/eTF5+8UDPj3I8vpRqufRh8l5WukT1
Q2g1zA==
=eqkT
-----END PGP SIGNATURE-----
Merge remote-tracking branch 'remotes/rth-gitlab/tags/pull-misc-20210713' into staging
Cleanup alpha, hppa, or1k wrt tcg_constant_tl.
Implement x86 fcs:fip, fds:fdp.
Trivial x86 watchpoint cleanup.
# gpg: Signature made Tue 13 Jul 2021 17:36:29 BST
# gpg: using RSA key 7A481E78868B4DB6A85A05C064DF38E8AF7E215F
# gpg: issuer "richard.henderson@linaro.org"
# gpg: Good signature from "Richard Henderson <richard.henderson@linaro.org>" [full]
# Primary key fingerprint: 7A48 1E78 868B 4DB6 A85A 05C0 64DF 38E8 AF7E 215F
* remotes/rth-gitlab/tags/pull-misc-20210713:
target/hppa: Clean up DisasCond
target/hppa: Use tcg_constant_*
target/openrisc: Use dc->zero in gen_add, gen_addc
target/openrisc: Cache constant 0 in DisasContext
target/openrisc: Use tcg_constant_tl for dc->R0
target/openrisc: Use tcg_constant_*
target/alpha: Use tcg_constant_* elsewhere
target/alpha: Use tcg_constant_i64 for zero and lit
target/alpha: Use dest_sink for HW_RET temporary
target/alpha: Store set into rx flag
target/i386: Correct implementation for FCS, FIP, FDS and FDP
target/i386: Split out do_fninit
target/i386: Trivial code motion and code style fix
target/i386: Tidy hw_breakpoint_remove
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Update FCS:FIP and FDS:FDP according to the Intel Manual Vol.1 8.1.8.
Note that CPUID.(EAX=07H,ECX=0H):EBX[bit 13] is not implemented by
design in this patch and will be added along with TCG features flag
in a separate patch later.
Signed-off-by: Ziqiao Kong <ziqiaokong@gmail.com>
Message-Id: <20210530150112.74411-2-ziqiaokong@gmail.com>
[rth: Push FDS/FDP handling down into mod != 3 case; free last_addr.]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Do not call helper_fninit directly from helper_xrstor.
Do call the new helper from do_fsave.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
A new pair of braces has to be added to declare variables in the case block.
The code style is also fixed to match translate.c itself during the
code motion.
Signed-off-by: Ziqiao Kong <ziqiaokong@gmail.com>
Message-Id: <20210530150112.74411-1-ziqiaokong@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Since cpu_breakpoint and cpu_watchpoint are in a union,
the code should access only one of them.
Signed-off-by: Dmitry Voronetskiy <davoronetskiy@gmail.com>
Message-Id: <20210613180838.21349-1-davoronetskiy@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
An AMD server typically has cpuid level 0x10 (tested on Rome/Milan); it
should not be changed to 0x1f in the multi-die case.
* to maintain compatibility with older machine types, only implement
this change when the CPU's "x-vendor-cpuid-only" property is false
Cc: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Cc: Richard Henderson <richard.henderson@linaro.org>
Cc: Igor Mammedov <imammedo@redhat.com>
Cc: zhenwei pi <pizhenwei@bytedance.com>
Fixes: a94e142899 (target/i386: Add CPUID.1F generation support for multi-dies PCMachine)
Signed-off-by: zhenwei pi <pizhenwei@bytedance.com>
Signed-off-by: Michael Roth <michael.roth@amd.com>
Message-Id: <20210708170641.49410-1-michael.roth@amd.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Currently all built-in CPUs report cache information via CPUID leaves 2
and 4, but these have never been defined for AMD. In the case of
SEV-SNP this can cause issues with CPUID enforcement. Address this by
allowing CPU types to suppress these via a new "x-vendor-cpuid-only"
CPU property, which is true by default, but switched off for older
machine types to maintain compatibility.
Cc: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Cc: Richard Henderson <richard.henderson@linaro.org>
Cc: Igor Mammedov <imammedo@redhat.com>
Cc: zhenwei pi <pizhenwei@bytedance.com>
Suggested-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Michael Roth <michael.roth@amd.com>
Message-Id: <20210708003623.18665-1-michael.roth@amd.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
When Hyper-V SynIC is enabled, we may need to allow Windows guests to make
hypercalls (POST_MESSAGES/SIGNAL_EVENTS). No issue is currently observed
because KVM is very permissive, allowing these hypercalls regardless of
guest-visible CPUID bits.
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20210608120817.1325125-9-vkuznets@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
According to the TLFS, a Hyper-V guest is supposed to check the
HV_HYPERCALL_AVAILABLE privilege bit before accessing the
HV_X64_MSR_GUEST_OS_ID/HV_X64_MSR_HYPERCALL MSRs, but at least some
Windows versions ignore that. As KVM is very permissive and allows
accessing these MSRs unconditionally, no issue is observed. We may,
however, want to tighten the checks eventually. Conforming to the
spec is probably also a good idea.
Enable HV_HYPERCALL_AVAILABLE bit unconditionally.
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20210608120817.1325125-8-vkuznets@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
hv_cpuid_check_and_set() does too much:
- Checks if the feature is supported by KVM;
- Checks if all dependencies are enabled;
- Sets the feature bit in cpu->hyperv_features for 'passthrough' mode.
To reduce the complexity, move all the logic except for dependencies
check out of it. Also, in 'passthrough' mode we don't really need to
check dependencies because KVM is supposed to provide a consistent
set anyway.
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20210608120817.1325125-7-vkuznets@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
To make Hyper-V features appear in e.g. QMP query-cpu-model-expansion we
need to expand and set the corresponding CPUID leaves early. Modify
x86_cpu_get_supported_feature_word() to call the newly introduced Hyper-V
specific kvm_hv_get_supported_cpuid() instead of
kvm_arch_get_supported_cpuid(). We can't use kvm_arch_get_supported_cpuid()
as Hyper-V specific CPUID leaves intersect with KVM's.
Note, early expansion will only happen when KVM supports system wide
KVM_GET_SUPPORTED_HV_CPUID ioctl (KVM_CAP_SYS_HYPERV_CPUID).
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20210608120817.1325125-6-vkuznets@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Currently, the only eVMCS version, supported by KVM (and described in TLFS)
is '1'. When Enlightened VMCS feature is enabled, QEMU takes the supported
eVMCS version range (from KVM_CAP_HYPERV_ENLIGHTENED_VMCS enablement) and
puts it into guest-visible CPUIDs. When (and if) eVMCS ver.2 appears, a
problem with migration is expected: it doesn't seem to be possible to
migrate from a host supporting eVMCS ver.2 to a host which only supports
eVMCS ver.1.
Hardcode eVMCS ver.1 as the result of 'hv-evmcs' enablement for now. Newer
eVMCS versions will have to have their own enablement options (e.g.
'hv-evmcs=2').
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <20210608120817.1325125-4-vkuznets@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
The loop is performing a simple boolean test for the existence
of a BP_CPU breakpoint at EIP. Plus it gets the iteration wrong,
if we happen to have a BP_GDB breakpoint at the same address.
We have a function for this: cpu_breakpoint_test.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <20210620062317.1399034-1-richard.henderson@linaro.org>
Just use translator_use_goto_tb directly at the one call site,
rather than maintaining a local wrapper.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The root trace-events only declares a single TCG event:
$ git grep -w tcg trace-events
trace-events:115:# tcg/tcg-op.c
trace-events:137:vcpu tcg guest_mem_before(TCGv vaddr, uint16_t info) "info=%d", "vaddr=0x%016"PRIx64" info=%d"
and only tcg/tcg-op.c uses it:
$ git grep -l trace_guest_mem_before_tcg
tcg/tcg-op.c
therefore it is pointless to include "trace-tcg.h" in each target
(because it is not used). Remove it.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20210629050935.2570721-1-f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Add a target-specific Kconfig. We need the definitions in Kconfig so
the minikconf tool can verify they exist. However CONFIG_FOO is only
enabled for target foo via the meson.build rules.
Two architectures are a special case, ARM and MIPS: as their
translators have been split, you can potentially build a plain 32-bit
version along with a 64-bit version including the 32-bit subset.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20210131111316.232778-6-f4bug@amsat.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20210707131744.26027-2-alex.bennee@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Use raise_exception_ra (without error code) when raising the illegal
opcode operation; raise #GP when setting bits 63:32 of DR6 or DR7.
Move helper_get_dr to sysemu/ since it is a privileged instruction
that is not needed on user-mode emulators.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
DR6[63:32] and DR7[63:32] are reserved and need to be zero.
(AMD64 Architecture Programmer's Manual, V2, 15.5)
Signed-off-by: Lara Lazier <laramglazier@gmail.com>
Message-Id: <20210705081802.18960-3-laramglazier@gmail.com>
[Ignore for 32-bit builds. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The address of the last entry in the MSRPM and
in the IOPM must be smaller than the largest physical address.
(APM2 15.10-15.11)
Signed-off-by: Lara Lazier <laramglazier@gmail.com>
Message-Id: <20210705081802.18960-2-laramglazier@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Given that TCG is now the only consumer of X86XSaveArea, move the
structure definition and associated offset declarations and checks to a
TCG specific header.
Signed-off-by: David Edmondson <david.edmondson@oracle.com>
Message-Id: <20210705104632.2902400-9-david.edmondson@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Rather than relying on the X86XSaveArea structure definition,
determine the offset of XSAVE state areas using CPUID leaf 0xd where
possible (KVM and HVF).
Signed-off-by: David Edmondson <david.edmondson@oracle.com>
Message-Id: <20210705104632.2902400-8-david.edmondson@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
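A hedged sketch of the CPUID-based discovery this patch switches to: for
each XSAVE state component i >= 2, CPUID.(EAX=0Dh, ECX=i) returns the
component's size in EAX and its offset within the standard-format XSAVE
area in EBX. Illustrative standalone code, not the QEMU routine:

    #include <cpuid.h>
    #include <stdio.h>

    static void print_xsave_layout(void)
    {
        unsigned int eax, ebx, ecx, edx;
        unsigned int i;

        for (i = 2; i < 63; i++) {
            if (!__get_cpuid_count(0xd, i, &eax, &ebx, &ecx, &edx)) {
                break;                   /* leaf 0xd not supported */
            }
            if (eax == 0) {
                continue;                /* component not supported */
            }
            printf("component %u: size %u, offset %u\n", i, eax, ebx);
        }
    }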
Rather than relying on the X86XSaveArea structure definition directly,
the routines that manipulate the XSAVE state area should observe the
offsets declared in the x86_ext_save_areas array.
Currently the offsets declared in the array are derived from the
structure definition, resulting in no functional change.
Signed-off-by: David Edmondson <david.edmondson@oracle.com>
Message-Id: <20210705104632.2902400-7-david.edmondson@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Provide visibility of the x86_ext_save_areas array and associated type
outside of cpu.c.
Signed-off-by: David Edmondson <david.edmondson@oracle.com>
Message-Id: <20210705104632.2902400-6-david.edmondson@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
In preparation for removing assumptions about XSAVE area offsets, pass
a buffer pointer and buffer length to the XSAVE helper functions.
Signed-off-by: David Edmondson <david.edmondson@oracle.com>
Message-Id: <20210705104632.2902400-5-david.edmondson@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Replace the hard-coded size of offsets or structure elements with
defined constants or sizeof().
Signed-off-by: David Edmondson <david.edmondson@oracle.com>
Message-Id: <20210705104632.2902400-4-david.edmondson@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Rather than having similar but different checks in cpu.h and kvm.c,
move them all to cpu.h.
Message-Id: <20210705104632.2902400-3-david.edmondson@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Declare and use manifest constants for the XSAVE state component
offsets.
Signed-off-by: David Edmondson <david.edmondson@oracle.com>
Message-Id: <20210705104632.2902400-2-david.edmondson@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Use a break instead of an ifdefed else.
There's no need to move the values through s->T0.
Remove TCG_BSWAP_IZ and the preceding zero-extension.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Implement the new semantics in the fallback expansion.
Change all callers to supply the flags that keep the
semantics unchanged locally.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Linux 5.14 will add support for nested TSC scaling. Add the
corresponding feature in QEMU; to keep support for existing kernels,
do not add it to any processor yet.
The handling of the VMCS enumeration MSR is ugly; once we have more than
one case, we may want to add a table to check VMX features against.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Features:
* Add ratelimit for bus locks acquired in guest (Chenyi Qiang)
Documentation:
* SEV documentation updates (Tom Lendacky)
* Add a table showing x86-64 ABI compatibility levels (Daniel P. Berrangé)
Automated changes:
* Update Linux headers to 5.13-rc4 (Eduardo Habkost)
-----BEGIN PGP SIGNATURE-----
iQJIBAABCAAyFiEEWjIv1avE09usz9GqKAeTb5hNxaYFAmDM+T4UHGVoYWJrb3N0
QHJlZGhhdC5jb20ACgkQKAeTb5hNxabUrQ/+PtiJjd1cW9nhA0kWu8dVGq3xXJb4
Nbma86tRPKBauTeQCLccXEvUjLqgFejeQlArhq4QKErLisXu4TDuQ+GeAfdR7h5P
MTMSo0C665cT2/NbrwQizSPQdrNEgZAYRaDRafZLQTJ1TStzWDB1Vg79rzpWPcn0
76XjIfSdGZUa4B1OvjNvUFq/SXf+0hW75soCwRhDNh5tfzfyct0XCSRF/wTXqyR/
7yxDtfTzUAvT+6l3qb8ky+wqUTIY58BgjbdIGhyAUr5/N8y5YystF41TUVoy772k
pmCXHniMmgmhH7HVwGujtc6mPe5y1VFJVaA08Pzb7KwSfdO9F/3Gk3DHpKW8/whi
tCGluBqz0qlyhsnP9wDRJb6BzCBl2hVqu50DL+uSNsJOSIW60LLMJV4ANlDYdDM3
s33S5NrM0DsRAjrtczPdvKPWwaVE4NB2bYX1I3yYGgflwzQYOjBmswM/UgymhlZk
5dxtF9CX2p+Vre6UoLDKum1DJDCcWjHouJAAqZzxxEko56yWgTUSzTcK4GVOlsAc
qX4gJbFpOzDlSdpDTG/fcnQlCnwc1jxCzsB8Wy2KJiBif3Sa3Wh1s00Cp7oGNQt+
P/z2Fp1agl8u83bbvlIjZnsv0O2g5Ks4r5tBhXmqI36aiU26F/x39SUfp7/7OAUd
CQBGBGXqpnOmUcE=
=YWGX
-----END PGP SIGNATURE-----
Merge remote-tracking branch 'remotes/ehabkost-gl/tags/x86-next-pull-request' into staging
x86 queue, 2021-06-18
Features:
* Add ratelimit for bus locks acquired in guest (Chenyi Qiang)
Documentation:
* SEV documentation updates (Tom Lendacky)
* Add a table showing x86-64 ABI compatibility levels (Daniel P. Berrangé)
Automated changes:
* Update Linux headers to 5.13-rc4 (Eduardo Habkost)
# gpg: Signature made Fri 18 Jun 2021 20:51:26 BST
# gpg: using RSA key 5A322FD5ABC4D3DBACCFD1AA2807936F984DC5A6
# gpg: issuer "ehabkost@redhat.com"
# gpg: Good signature from "Eduardo Habkost <ehabkost@redhat.com>" [full]
# Primary key fingerprint: 5A32 2FD5 ABC4 D3DB ACCF D1AA 2807 936F 984D C5A6
* remotes/ehabkost-gl/tags/x86-next-pull-request:
scripts: helper to generate x86_64 CPU ABI compat info
docs: add a table showing x86-64 ABI compatibility levels
docs/interop/firmware.json: Add SEV-ES support
docs: Add SEV-ES documentation to amd-memory-encryption.txt
doc: Fix some mistakes in the SEV documentation
i386: Add ratelimit for bus locks acquired in guest
Update Linux headers to 5.13-rc4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We will shortly be interested in distinguishing pointers
from integers in the helper's declaration, as well as a
true void return. We currently have two parallel 1 bit
fields; merge them and expand to a 3 bit field.
Our current maximum is 7 helper arguments, plus the return
makes 8 * 3 = 24 bits used within the uint32_t typemask.
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
A bus lock is acquired through either split locked access to writeback
(WB) memory or any locked access to non-WB memory. It is typically >1000
cycles slower than an atomic operation within a cache and can also
disrupt performance on other cores.
Virtual machines can exploit bus locks to degrade the performance of the
system. To address this kind of performance DoS attack coming from the
VMs, a bus lock VM exit is introduced in KVM and it can report the bus
locks detected in the guest. If enabled in KVM, it will exit to
userspace to let the user enforce throttling policies once bus locks are
acquired in VMs.
The availability of the bus lock VM exit can be detected through
KVM_CAP_X86_BUS_LOCK_EXIT. The returned bitmap contains the potential
policies supported by KVM. The KVM_BUS_LOCK_DETECTION_EXIT field in the
bitmap is the only supported strategy at present. It indicates that KVM
will exit to userspace to handle the bus locks.
This patch adds a ratelimit on the bus locks acquired in guest as a
mitigation policy.
Introduce a new field "bus_lock_ratelimit" to record the limited rate
of bus locks in the target VM. The user can specify it through the
"bus-lock-ratelimit" machine property. In the current implementation,
the default value is 0 per second, which means no restriction on
bus locks.
As for the ratelimit on detected bus locks, simply set the ratelimit
interval to 1s and restrict the quota of bus lock occurrences to the value
of "bus_lock_ratelimit". A potential alternative is to introduce the
time slice as a property which can help the user achieve more precise
control.
The detail of bus lock VM exit can be found in spec:
https://software.intel.com/content/www/us/en/develop/download/intel-architecture-instruction-set-extensions-programming-reference.html
Signed-off-by: Chenyi Qiang <chenyi.qiang@intel.com>
Message-Id: <20210521043820.29678-1-chenyi.qiang@intel.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
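A hedged sketch of the throttling idea described above: allow at most
bus_lock_ratelimit bus locks per one-second slice and report how long the
vCPU thread should sleep once the quota is exceeded. Standalone
illustration, not the QEMU implementation:

    #include <stdint.h>

    typedef struct BusLockLimiter {
        uint64_t slice_start_ns;         /* start of the current 1s slice */
        uint64_t dispatched;             /* bus locks seen in this slice */
        uint64_t quota;                  /* bus_lock_ratelimit (0 = unlimited) */
    } BusLockLimiter;

    #define SLICE_NS 1000000000ULL

    /* Returns how many ns to throttle after one more bus lock, 0 if none. */
    static uint64_t bus_lock_delay(BusLockLimiter *l, uint64_t now_ns)
    {
        if (l->quota == 0) {
            return 0;                    /* ratelimit disabled by default */
        }
        if (now_ns - l->slice_start_ns >= SLICE_NS) {
            l->slice_start_ns = now_ns;  /* start a new slice */
            l->dispatched = 0;
        }
        if (++l->dispatched <= l->quota) {
            return 0;
        }
        /* over quota: sleep until the current slice ends */
        return SLICE_NS - (now_ns - l->slice_start_ns);
    }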
When the selective CR0 write intercept is set, all writes to bits in
CR0 other than CR0.TS or CR0.MP cause a VMEXIT.
Signed-off-by: Lara Lazier <laramglazier@gmail.com>
Message-Id: <20210616123907.17765-5-laramglazier@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The combination of unset CD and set NW bit in CR0 is illegal.
CR0[63:32] are also reserved and need to be zero.
(AMD64 Architecture Programmer's Manual, V2, 15.5)
Signed-off-by: Lara Lazier <laramglazier@gmail.com>
Message-Id: <20210616123907.17765-4-laramglazier@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Zero VMRUN intercept and ASID should cause an immediate VMEXIT
during the consistency checks performed by VMRUN.
(AMD64 Architecture Programmer's Manual, V2, 15.5)
Signed-off-by: Lara Lazier <laramglazier@gmail.com>
Message-Id: <20210616123907.17765-3-laramglazier@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Added cpu_svm_has_intercept to reduce duplication when checking the
corresponding intercept bit outside of cpu_svm_check_intercept_param
Signed-off-by: Lara Lazier <laramglazier@gmail.com>
Message-Id: <20210616123907.17765-2-laramglazier@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This fixes host and max cpu initialization, by running the accel cpu
initialization only after all instance init functions are called for all
X86 cpu subclasses.
The bug this is fixing is related to the "max" and "host" i386 cpu
subclasses, which set cpu->max_features, which is then used at cpu
realization time.
In order to properly split the accel-specific max features code that
needs to be executed at cpu instance initialization time,
we cannot call the accel cpu initialization at the end of the x86 base
class initialization, or we will have no way to specialize
"max features" cpu behavior, overriding the "max" cpu class defaults,
and checking for the "max features" flag itself.
This patch moves the accel-specific cpu instance initialization to after
all x86 cpu instance code has been executed, including subclasses,
so that proper initialization of cpu "host" and "max" can be restored.
Fixes: f5cc5a5c ("i386: split cpu accelerators from cpu.c,"...)
Cc: Eduardo Habkost <ehabkost@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Claudio Fontana <cfontana@suse.de>
Message-Id: <20210603123001.17843-3-cfontana@suse.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
i386 realizefn code is sensitive to ordering, and recent commits
aimed at refactoring it, splitting accelerator-specific code,
broke assumptions which need to be fixed.
We need to:
* process Hyper-V enlightenments first, as they assume features
not to be expanded
* only then, expand features
* after expanding features, attempt to check them and modify them in the
accel-specific realizefn code called by cpu_exec_realizefn().
* after the framework has been called via cpu_exec_realizefn,
the code can check for what has or hasn't been set by accel-specific
code, or extend its results, ie:
- check and eventually set the ucode_rev default
- modify cpu->mwait after potentially being set from host CPUID.
- finally check for phys_bits assuming all user and accel-specific
adjustments have already been taken into account.
Fixes: f5cc5a5c ("i386: split cpu accelerators from cpu.c"...)
Fixes: 30565f10 ("cpu: call AccelCPUClass::cpu_realizefn in"...)
Cc: Eduardo Habkost <ehabkost@redhat.com>
Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Claudio Fontana <cfontana@suse.de>
Message-Id: <20210603123001.17843-2-cfontana@suse.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
A recent cleanup did not recognize that there are two ways
to encode cr8: one via the LOCK and the other via REX.
Fixes: 7eff2e7c
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/380
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210602035511.96834-1-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The format of the task state segment is governed by bit 3 in the
descriptor type field. On a task switch, the format for saving
is given by the current value of TR's type field, while the
format for loading is given by the new descriptor.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
According to the manual, the high 16 bits of the registers are preserved
when switching to a 16-bit task. Implement this in switch_tss_ra.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
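A minimal sketch of the rule being implemented: when loading a general
register from a 16-bit TSS, only the low 16 bits come from the TSS and the
high 16 bits of the current register value are preserved. The helper name
is illustrative:

    #include <stdint.h>

    static inline uint32_t load_reg_from_tss16(uint32_t current,
                                               uint16_t tss_value)
    {
        return (current & 0xffff0000u) | tss_value;
    }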
The TSS offsets in the manuals have only 2-byte slots for the
segment registers. QEMU incorrectly uses 4-byte slots, so
that SS overlaps the LDT selector.
Resolves: #382
Reported-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Use an enumeration instead of raw 32/64/80 values.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The hooks we have that call us after reset, init and loadvm really all
just want to say "The reference of all register state is in the QEMU
vcpu struct, please push it".
We already have a working push mechanism for that, cpu->vcpu_dirty,
so we can just reuse it for all of the above, syncing state properly the
next time we actually execute a vCPU.
This fixes PSCI resets on ARM, as they modify CPU state even after the
post init call has completed, but before we execute the vCPU again.
To also make the scheme work for x86, we have to make sure we don't
move stale eflags into our env when the vcpu state is dirty.
Signed-off-by: Alexander Graf <agraf@csgraf.de>
Reviewed-by: Roman Bolshakov <r.bolshakov@yadro.com>
Tested-by: Roman Bolshakov <r.bolshakov@yadro.com>
Reviewed-by: Sergio Lopez <slp@redhat.com>
Message-id: 20210519202253.76782-13-agraf@csgraf.de
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We will need more than a single field for hvf going forward. To keep
the global vcpu struct uncluttered, let's allocate a special hvf vcpu
struct, similar to how hax does it.
Signed-off-by: Alexander Graf <agraf@csgraf.de>
Reviewed-by: Roman Bolshakov <r.bolshakov@yadro.com>
Tested-by: Roman Bolshakov <r.bolshakov@yadro.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Sergio Lopez <slp@redhat.com>
Message-id: 20210519202253.76782-12-agraf@csgraf.de
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We can move the definition of hvf_vcpu_exec() into our internal
hvf header, obsoleting the need for hvf-accel-ops.h.
Signed-off-by: Alexander Graf <agraf@csgraf.de>
Reviewed-by: Sergio Lopez <slp@redhat.com>
Message-id: 20210519202253.76782-11-agraf@csgraf.de
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
There is no reason to call the hvf specific hvf_cpu_synchronize_state()
when we can just use the generic cpu_synchronize_state() instead. This
allows us to have less dependency on internal function definitions and
allows us to make hvf_cpu_synchronize_state() static.
Signed-off-by: Alexander Graf <agraf@csgraf.de>
Reviewed-by: Sergio Lopez <slp@redhat.com>
Message-id: 20210519202253.76782-9-agraf@csgraf.de
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Until now, Hypervisor.framework has only been available on x86_64 systems.
With Apple Silicon shipping now, it extends its reach to aarch64. To
prepare for support for multiple architectures, let's start moving common
code out into its own accel directory.
This patch splits the vcpu init and destroy functions into a generic and
an architecture specific portion. This also allows us to move the generic
functions into the generic hvf code, removing exported functions.
Signed-off-by: Alexander Graf <agraf@csgraf.de>
Reviewed-by: Sergio Lopez <slp@redhat.com>
Message-id: 20210519202253.76782-8-agraf@csgraf.de
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Until now, Hypervisor.framework has only been available on x86_64 systems.
With Apple Silicon shipping now, it extends its reach to aarch64. To
prepare for support for multiple architectures, let's start moving common
code out into its own accel directory.
This patch moves a few internal struct and constant defines over.
Signed-off-by: Alexander Graf <agraf@csgraf.de>
Reviewed-by: Sergio Lopez <slp@redhat.com>
Message-id: 20210519202253.76782-5-agraf@csgraf.de
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Until now, Hypervisor.framework has only been available on x86_64 systems.
With Apple Silicon shipping now, it extends its reach to aarch64. To
prepare for support for multiple architectures, let's start moving common
code out into its own accel directory.
This patch moves CPU and memory operations over. While at it, make sure
the code is consumable on non-i386 systems.
Signed-off-by: Alexander Graf <agraf@csgraf.de>
Reviewed-by: Sergio Lopez <slp@redhat.com>
Message-id: 20210519202253.76782-4-agraf@csgraf.de
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Until now, Hypervisor.framework has only been available on x86_64 systems.
With Apple Silicon shipping now, it extends its reach to aarch64. To
prepare for support for multiple architectures, let's start moving common
code out into its own accel directory.
This patch moves the vCPU thread loop over.
Signed-off-by: Alexander Graf <agraf@csgraf.de>
Reviewed-by: Sergio Lopez <slp@redhat.com>
Message-id: 20210519202253.76782-3-agraf@csgraf.de
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Until now, Hypervisor.framework has only been available on x86_64 systems.
With Apple Silicon shipping now, it extends its reach to aarch64. To
prepare for support for multiple architectures, let's start moving common
code out into its own accel directory.
This patch moves assert_hvf_ok() and introduces generic build infrastructure.
Signed-off-by: Alexander Graf <agraf@csgraf.de>
Reviewed-by: Sergio Lopez <slp@redhat.com>
Message-id: 20210519202253.76782-2-agraf@csgraf.de
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* Bump minimum versions of some requirements after removing CentOS 7 support
-----BEGIN PGP SIGNATURE-----
iQJFBAABCAAvFiEEJ7iIR+7gJQEY8+q5LtnXdP5wLbUFAmC3L1IRHHRodXRoQHJl
ZGhhdC5jb20ACgkQLtnXdP5wLbU8wA//aRAIn0Qa3xoZdsT434P99GHauCZ5ePq3
meY4co69c+TfkQZ/b0xlyvT3+7bd9Ni92CQL7n/LtX5bs3pIhRjHNIsm50e4x6Da
0jb422cKWfRIltXlCfdL/dnFRtJluH83M0seRGMvmhveuWPZ19oSIasEiQjeg//e
KMNK5z4HEnJ1czb3Bf8p38bmmY/O/QEAA5wAqk7iJkJHz6T/GlqImLYYwpFPlHj6
JQttm0aWsHrsqEXxnuV0/DT1yHyXDB6S4iuAvABZWhv/M/nCaXo0ib0gW5NPtRPo
Yf3HO163F9/fewJCc1AUsBe1C/2UwmSWRhEtxpr9uuW2Mv9qEl3hkJwd4k6sEOvh
U4i+GONC4eElPcmECKUfHA9EP+7faDs6xnM6Ev/PIEp+cPJ2QRfklZv4qpMUWYtb
3KkADchOyVZAsdB8cGmnznDEVmno1Dt0adVUq8CF6uW6MwD3pb4838arrfwIfwOp
g4yTI1AQQykkxxOaR7GSNoxWRti7TH4fzLhx/xXDd9TKIOWuOepiyuhB7+Q48WAJ
6EW/JUIzOe7k3GsI0iBsk+y67ED2tpATiRKWWw4QS1BwIhNEMUqgZkucFfKE6Ze1
B1Xw+Di+1CzrLHV6lCjtdLVmqWqcGcO43HqhSK4sodE87BDGGwy9UxE6oBVt2C1x
HPQ6/omDlFk=
=RAkI
-----END PGP SIGNATURE-----
Merge remote-tracking branch 'remotes/thuth-gitlab/tags/pull-request-2021-06-02' into staging
* Update the references to some doc files (use *.rst instead of *.txt)
* Bump minimum versions of some requirements after removing CentOS 7 support
# gpg: Signature made Wed 02 Jun 2021 08:12:18 BST
# gpg: using RSA key 27B88847EEE0250118F3EAB92ED9D774FE702DB5
# gpg: issuer "thuth@redhat.com"
# gpg: Good signature from "Thomas Huth <th.huth@gmx.de>" [full]
# gpg: aka "Thomas Huth <thuth@redhat.com>" [full]
# gpg: aka "Thomas Huth <huth@tuxfamily.org>" [full]
# gpg: aka "Thomas Huth <th.huth@posteo.de>" [unknown]
# Primary key fingerprint: 27B8 8847 EEE0 2501 18F3 EAB9 2ED9 D774 FE70 2DB5
* remotes/thuth-gitlab/tags/pull-request-2021-06-02:
configure: bump min required CLang to 6.0 / XCode 10.0
configure: bump min required GCC to 7.5.0
configure: bump min required glib version to 2.56
tests/docker: drop CentOS 7 container
tests/vm: convert centos VM recipe to CentOS 8
crypto: drop used conditional check
crypto: bump min gnutls to 3.5.18, dropping RHEL-7 support
crypto: bump min gcrypt to 1.8.0, dropping RHEL-7 support
crypto: drop back compatibility typedefs for nettle
crypto: bump min nettle to 3.4, dropping RHEL-7 support
patchew: move quick build job from CentOS 7 to CentOS 8 container
block/ssh: Bump minimum libssh version to 0.8.7
docs: fix references to docs/devel/s390-dasd-ipl.rst
docs: fix references to docs/specs/tpm.rst
docs: fix references to docs/devel/build-system.rst
docs: fix references to docs/devel/atomics.rst
docs: fix references to docs/devel/tracing.rst
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Commit e50caf4a5c ("tracing: convert documentation to rST")
converted docs/devel/tracing.txt to docs/devel/tracing.rst.
We still have several references to the old file, so let's fix them
with the following command:
sed -i s/tracing.txt/tracing.rst/ $(git grep -l docs/devel/tracing.txt)
Signed-off-by: Stefano Garzarella <sgarzare@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20210517151702.109066-2-sgarzare@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
The SEV userspace header[1] exports a couple of other error conditions that
aren't listed in QEMU's SEV implementation, so let's just round out the
list.
[1] linux-headers/linux/psp-sev.h
Signed-off-by: Connor Kuehl <ckuehl@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20210430134830.254741-3-ckuehl@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
This can help lower any margin for error when making future additions to
the list, especially if they're made out of order.
While doing so, make capitalization of ASID consistent with its usage in
the SEV firmware spec (Asid -> ASID).
Signed-off-by: Connor Kuehl <ckuehl@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20210430134830.254741-2-ckuehl@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
The SEV FW >= 0.23 added a new command that can be used to query the
attestation report containing the SHA-256 digest of the guest memory
and VMSA encrypted with the LAUNCH_UPDATE and sign it with the PEK.
Note, we already have a command (LAUNCH_MEASURE) that can be used to
query the SHA-256 digest of the guest memory encrypted through the
LAUNCH_UPDATE. The main difference between the previous command and this
one is that the report is signed with the PEK and, unlike the
LAUNCH_MEASURE command, the ATTESTATION_REPORT command can be called while
the guest is running.
Add a QMP interface "query-sev-attestation-report" that can be used
to get the report encoded in base64.
Cc: James Bottomley <jejb@linux.ibm.com>
Cc: Tom Lendacky <Thomas.Lendacky@amd.com>
Cc: Eric Blake <eblake@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: kvm@vger.kernel.org
Reviewed-by: James Bottomley <jejb@linux.ibm.com>
Tested-by: James Bottomley <jejb@linux.ibm.com>
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Reviewed-by: Connor Kuehl <ckuehl@redhat.com>
Message-Id: <20210429170728.24322-1-brijesh.singh@amd.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
There is no need to use vCPU-specific KVM state in the hyperv_enabled()
check, and we need to avoid it when feature expansion happens early,
before vCPU-specific KVM state is created.
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <20210422161130.652779-15-vkuznets@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
KVM_GET_SUPPORTED_HV_CPUID was made a system wide ioctl which can be called
prior to creating vCPUs and we are going to use that to expand Hyper-V cpu
features early. Use it when it is supported by KVM.
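For illustration only (not part of the patch), the system-wide variant differs from the per-vCPU one mainly in which file descriptor the ioctl is issued on. A minimal stand-alone sketch, assuming the host kernel advertises the system-wide variant (KVM_CAP_SYS_HYPERV_CPUID):
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

int main(void)
{
    int kvm_fd = open("/dev/kvm", O_RDWR);
    if (kvm_fd < 0) {
        perror("open /dev/kvm");
        return 1;
    }

    /* Room for a generous number of CPUID entries. */
    int nent = 64;
    struct kvm_cpuid2 *cpuid = calloc(1, sizeof(*cpuid) +
                                      nent * sizeof(struct kvm_cpuid_entry2));
    cpuid->nent = nent;

    /* System-wide variant: issued on the /dev/kvm fd, no vCPU needed. */
    if (ioctl(kvm_fd, KVM_GET_SUPPORTED_HV_CPUID, cpuid) == 0) {
        printf("%u Hyper-V CPUID entries reported\n", cpuid->nent);
    } else {
        perror("KVM_GET_SUPPORTED_HV_CPUID");
    }

    free(cpuid);
    return 0;
}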
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <20210422161130.652779-14-vkuznets@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
SYNDBG leaves were recently (Linux 5.8) added to KVM, but we haven't
updated the expected size of the KVM_GET_SUPPORTED_HV_CPUID output,
so we now make several tries before succeeding. Update the
default.
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <20210422161130.652779-13-vkuznets@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
hyperv_expand_features() will be called before vCPUs are created, so
eVMCS enablement has to move out of it. hyperv_init_vcpu() looks like
the right place.
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <20210422161130.652779-11-vkuznets@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
The intention is to call hyperv_expand_features() early, before vCPUs
are created and use the acquired data later when we set guest visible
CPUID data.
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <20210422161130.652779-10-vkuznets@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Just like with cpuid_cache, it makes no sense to call
KVM_GET_SUPPORTED_HV_CPUID more than once and instead of (ab)using
env->features[] and/or trying to keep all the code in one place, it is
better to introduce persistent hv_cpuid_cache and hv_cpuid_get_host()
accessor to it.
Note, hv_cpuid_get_fw() is converted to using hv_cpuid_get_host()
just to be removed later with Hyper-V specific feature words.
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <20210422161130.652779-9-vkuznets@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Hyper-V feature leaves are weird. We have some of them in
feature_word_info[] array but we don't use feature_word_info
magic to enable them. Neither do we use feature_dependencies[]
mechanism to validate the configuration as it doesn't align
well with Hyper-V's many-to-many dependency chains. Some of
the feature leaves hold not only feature bits, but also values.
E.g. FEAT_HV_NESTED_EAX contains both features and the supported
Enlightened VMCS range.
Hyper-V features are already represented in 'struct X86CPU' with
uint64_t hyperv_features so duplicating them in env->features adds
little (or zero) benefit. The other half of Hyper-V emulation features
is also stored with values in hyperv_vendor_id[], hyperv_limits[],...
so env->features[] is already incomplete.
Remove Hyper-V feature leaves from env->features[] completely.
kvm_hyperv_properties[] is converted to using raw CPUID func/reg
pairs for features, this allows us to get rid of hv_cpuid_get_fw()
conversion.
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <20210422161130.652779-8-vkuznets@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
As a preparatory patch to dropping Hyper-V CPUID leaves from
feature_word_info[] stop using env->features[] as a temporary
storage of Hyper-V CPUIDs, just build Hyper-V CPUID leaves directly
from kvm_hyperv_properties[] data.
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <20210422161130.652779-5-vkuznets@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
We have all the required data in X86CPU already and as we are about to
split hyperv_handle_properties() into hyperv_expand_features()/
hyperv_fill_cpuids() we can remove the blind copy. The functional change
is that QEMU won't pass CPUID leaves it doesn't currently know about
to the guest but arguably this is a good change.
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <20210422161130.652779-4-vkuznets@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
There is no need to have this special case: like all other Hyper-V
enlightenments we can just use kernel's supplied value in hv_passthrough
mode.
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <20210422161130.652779-3-vkuznets@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
When cpu->hyperv_vendor is not set manually we default to "Microsoft Hv"
and in 'hv_passthrough' mode we get the information from the host. This
information is stored in cpu->hyperv_vendor_id[] array but we don't update
cpu->hyperv_vendor string so e.g. QMP's query-cpu-model-expansion output
is incorrect.
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <20210422161130.652779-2-vkuznets@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
The 'max' CPU under TCG currently reports a family/model/stepping that
approximately corresponds to an AMD K7 vintage architecture.
The K7 series predates the introduction of 64-bit support by AMD
in the K8 series. This has been reported to lead to LLVM complaints
about generating 64-bit code for a 32-bit CPU target
LLVM ERROR: 64-bit code requested on a subtarget that doesn't support it!
It appears LLVM looks at the family/model/stepping, despite qemu64
reporting it is 64-bit capable.
This patch changes 'max' to report a CPUID with the family, model
and stepping taken from an
AMD Athlon(tm) 64 X2 Dual Core Processor 4000+
which is one of the first 64-bit AMD CPUs.
Closes https://gitlab.com/qemu-project/qemu/-/issues/191
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20210507133650.645526-3-berrange@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
The 'qemu64' CPUID currently reports a family/model/stepping that
approximately corresponds to an AMD K7 vintage architecture.
The K7 series predates the introduction of 64-bit support by AMD
in the K8 series. This has been reported to lead to LLVM complaints
about generating 64-bit code for a 32-bit CPU target
LLVM ERROR: 64-bit code requested on a subtarget that doesn't support it!
It appears LLVM looks at the family/model/stepping, despite qemu64
reporting it is 64-bit capable.
This patch changes 'qemu64' to report a CPUID with the family, model
and stepping taken from an
AMD Athlon(tm) 64 X2 Dual Core Processor 4000+
which is one of the first 64-bit AMD CPUs.
Closes https://gitlab.com/qemu-project/qemu/-/issues/191
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <20210507133650.645526-2-berrange@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Since commit fa4518741e (target-i386: Rename struct XMMReg to ZMMReg),
CPUX86State.xmm_regs[] has already been extended to 512 bits to support
AVX512.
Other QEMU-level support for the AVX512 registers has also been in place
for years, but x86_cpu_dump_state() still dumps only the XMM registers,
no matter whether YMM/ZMM is enabled.
Complement this by letting it dump XMM/YMM/ZMM registers accordingly.
Signed-off-by: Robert Hoo <robert.hu@linux.intel.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <1618986232-73826-1-git-send-email-robert.hu@linux.intel.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
It's very easy to mistakenly extend kvm_default_props to include
features that require a kernel version that's too recent. Add a
comment warning about that, pointing to the documentation file
where the minimum kernel version for KVM is documented.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20200925211021.4158567-1-ehabkost@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Hyper-V 2016 refuses to boot on Skylake+ CPU models because they lack
'xsaves'/'vmx-xsaves' features and this diverges from real hardware. The
same issue emerges with AMD "EPYC" CPU model prior to version 3 which got
'xsaves' added. EPYC-Rome/EPYC-Milan CPU models have 'xsaves' enabled from
the very beginning so the comment blaming KVM to explain why other CPUs
lack 'xsaves' is likely outdated.
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20210412073952.860944-1-vkuznets@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
We no longer have any runtime modifications to this struct,
so declare them all const.
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-ID: <20210227232519.222663-3-richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210517105140.1062037-23-f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210517105140.1062037-22-f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210517105140.1062037-21-f4bug@amsat.org>
[rth: Drop declaration movement from target/*/cpu.h]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210517105140.1062037-20-f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The write_elf*() handlers are used to dump vmcore images.
This feature is only meaningful for system emulation.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210517105140.1062037-19-f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
cpu_get_crash_info() is called on GUEST_PANICKED events,
which only occur in system emulation.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210517105140.1062037-18-f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Migration is specific to system emulation.
- Move the CPUClass::vmsd field to SysemuCPUOps,
- restrict VMSTATE_CPU() macro to sysemu,
- vmstate_dummy is now unused, remove it.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210517105140.1062037-16-f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Introduce a structure to hold handlers specific to sysemu.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210517105140.1062037-15-f4bug@amsat.org>
[rth: Squash "restrict hw/core/sysemu-cpu-ops.h" patch]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Quoting Peter Maydell [*]:
There are two ways to handle migration for
a CPU object:
(1) like any other device, so it has a dc->vmsd that covers
migration for the whole object. As usual for objects that are a
subclass of a parent that has state, the first entry in the
VMStateDescription field list is VMSTATE_CPU(), which migrates
the cpu_common fields, followed by whatever the CPU's own migration
fields are.
(2) a backwards-compatible mechanism for CPUs that were
originally migrated using manual "write fields to the migration
stream structures". The on-the-wire migration format
for those is based on the 'env' pointer (which isn't a QOM object),
and the cpu_common part of the migration data is elsewhere.
cpu_exec_realizefn() handles both possibilities:
* for type 1, dc->vmsd is set and cc->vmsd is not,
so cpu_exec_realizefn() does nothing, and the standard
"register dc->vmsd for a device" code does everything needed
* for type 2, dc->vmsd is NULL and so we register the
vmstate_cpu_common directly to handle the cpu-common fields,
and the cc->vmsd to handle the per-CPU stuff
You can't change a CPU from one type to the other without breaking
migration compatibility, which is why some guest architectures
are stuck on the cc->vmsd form. New targets should use dc->vmsd.
To keep new targets from starting to use type (2), rename cc->vmsd to
cc->legacy_vmsd. The correct field to implement is dc->vmsd (the
DeviceClass one).
See also commit b170fce3dd ("cpu: Register VMStateDescription
through CPUState") for historic background.
[*] https://www.mail-archive.com/qemu-devel@nongnu.org/msg800849.html
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Cc: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20210517105140.1062037-13-f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Expose AVX (VEX-encoded) versions of the Vector Neural Network
Instructions to guest.
The bit definition:
CPUID.(EAX=7,ECX=1):EAX[bit 4] AVX_VNNI
The following instructions are available when this feature is
present in the guest.
1. VPDPBUSD: Multiply and Add Unsigned and Signed Bytes
2. VPDPBUSDS: Multiply and Add Unsigned and Signed Bytes with Saturation
3. VPDPWSSD: Multiply and Add Signed Word Integers
4. VPDPWSSDS: Multiply and Add Signed Integers with Saturation
As for the kvm related code, please reference Linux commit id 1085a6b585d7.
The release document is referenced at the link below:
https://software.intel.com/content/www/us/en/develop/download/\
intel-architecture-instruction-set-extensions-programming-reference.html
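For reference, a minimal user-space sketch (not part of the patch) of how the enumeration above can be probed on the host, using the compiler's cpuid.h helper:
#include <stdio.h>
#include <cpuid.h>          /* GCC/Clang __get_cpuid_count() */

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    /* CPUID.(EAX=7,ECX=1):EAX[bit 4] enumerates AVX-VNNI */
    if (__get_cpuid_count(7, 1, &eax, &ebx, &ecx, &edx)) {
        printf("AVX-VNNI: %s\n", (eax & (1u << 4)) ? "yes" : "no");
    } else {
        printf("CPUID leaf 7 not supported\n");
    }
    return 0;
}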
Signed-off-by: Yang Zhong <yang.zhong@intel.com>
Message-Id: <20210407015609.22936-1-yang.zhong@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
tb_gen_code() assumes that tb->size must never be zero, otherwise it
may produce spurious exceptions. For x86_64 this may happen when
creating a translation block for the vsyscall page.
Fix by pretending that vsyscall translation blocks have at least one
instruction.
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210519045738.1335210-2-iii@linux.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
With the previous patch for check_io, we now have enough for
the compiler to dead-code eliminate all of the i/o helpers.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-51-richard.henderson@linaro.org>
We never allow i/o from user-only, and the TSS check
that helper_check_io does will always fail. Use an ifdef
within gen_check_io and return false, indicating that an
exception is known to be raised.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-50-richard.henderson@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Drop helper_check_io[bwl] and expose their common
subroutine to tcg directly.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20210514151342.384376-49-richard.henderson@linaro.org>
Pass in a pre-truncated TCGv_i32 value. We were doing the
truncation of EDX in multiple places, now only once per insn.
While all callers use s->tmp2_i32, for cleanliness of the
subroutine, use a parameter anyway.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-48-richard.henderson@linaro.org>
Get cur_eip from DisasContext. Do not require the caller
to use svm_is_rep; get prefix from DisasContext. Use the
proper symbolic constants for SVM_IOIO_*.
While we're touching all call sites, return bool in
preparation for gen_check_io raising #GP.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-47-richard.henderson@linaro.org>
At minimum, wrmsr can change efer, which affects HF_LMA.
Cc: qemu-stable@nongnu.org
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-46-richard.henderson@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-45-richard.henderson@linaro.org>
The user-version is a no-op. This lets us completely
remove tcg/user/svm_stubs.c.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-44-richard.henderson@linaro.org>
Use a single helper, flush_page, to do the work.
Use gen_svm_check_intercept.
Perform the zero-extension for invlpga inline.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-43-richard.henderson@linaro.org>
These instructions are all privileged.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-42-richard.henderson@linaro.org>
Having the callers upcast to X86CPU is a waste, since we
don't need it. We even have to recover env in do_hlt.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-41-richard.henderson@linaro.org>
Pull the svm intercept check into the translator.
Pull the entire implementation of lmsw into the translator.
Push the check for CR8LEG into the regno validation switch.
Unify the gen_io_start check between read/write.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210514151342.384376-40-richard.henderson@linaro.org>
This function is only called from tcg/sysemu/.
There is no need for a stub in tcg/user/.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-39-richard.henderson@linaro.org>
When exiting helper_svm_check_intercept via exception, cpu_vmexit
calls cpu_restore_state, which will recover eip and cc_op via unwind.
Therefore we do not need to store eip or cc_op before the call.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-38-richard.henderson@linaro.org>
The param argument to helper_svm_check_intercept_param is always 0;
eliminate it and rename to helper_svm_check_intercept. Fold
gen_svm_check_intercept_param into gen_svm_check_intercept.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-37-richard.henderson@linaro.org>
Both invocations pass the start of the current instruction,
which is available as s->base.pc_next. The function sets
is_jmp, so we can eliminate a second setting.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-36-richard.henderson@linaro.org>
Any helper that always raises an exception or interrupt,
or simply exits to the main loop, can be so marked.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-35-richard.henderson@linaro.org>
Use STUB_HELPER to ensure that such calls are always eliminated.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-34-richard.henderson@linaro.org>
Our sysemu implementation is a stub. We can already intercept
instructions for vmexit, and raising #UD is trivial.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-33-richard.henderson@linaro.org>
For user-only, we do not need to check for VMM intercept.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-32-richard.henderson@linaro.org>
Most of the VMM instructions are already disabled for user-only,
by being usable only from ring 0.
The spec is intentionally loose for VMMCALL, allowing the VMM to
define syscalls for user-only. However, we're not emulating any
VMM, so VMMCALL can just raise #UD unconditionally.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210514151342.384376-31-richard.henderson@linaro.org>
This removes an ifdef from the middle of disas_insn,
and ensures that the branch is not reachable.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-30-richard.henderson@linaro.org>
Sort all of the single-byte members to the same area
of the structure, eliminating 8 bytes of padding.
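A generic stand-alone illustration of the effect on a 64-bit host (not the actual DisasContext layout):
#include <stdint.h>
#include <stdio.h>

/* Single-byte members interleaved with pointers: each byte gets
 * padded out so the following pointer is 8-byte aligned. */
struct scattered {
    void   *a;
    uint8_t f1;
    void   *b;
    uint8_t f2;
    void   *c;
    uint8_t f3;
};

/* Same members with the single-byte fields grouped together:
 * they share one padded slot instead of three. */
struct grouped {
    void   *a;
    void   *b;
    void   *c;
    uint8_t f1;
    uint8_t f2;
    uint8_t f3;
};

int main(void)
{
    printf("scattered: %zu bytes\n", sizeof(struct scattered)); /* typically 48 */
    printf("grouped:   %zu bytes\n", sizeof(struct grouped));   /* typically 32 */
    return 0;
}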
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-29-richard.henderson@linaro.org>
After fixing a typo in the comment, fixup for CODING_STYLE.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210514151342.384376-28-richard.henderson@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-27-richard.henderson@linaro.org>
It's just as easy to clear the flag with AND as with assignment.
In two cases the test for the bit can be folded together with
the test for HF_INHIBIT_IRQ_MASK.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-26-richard.henderson@linaro.org>
Both of these fields store the size of a single memory access,
so the range of values is 0-8.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-25-richard.henderson@linaro.org>
Currently, vex_l is either {0,1}; if in the future we implement
AVX-512, the max value will be 2. In vex_v we store a register
number. This is 0-15 for SSE, and 0-31 for AVX-512.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-24-richard.henderson@linaro.org>
The highest bit in this set is 0x40 (PREFIX_REX).
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-23-richard.henderson@linaro.org>
The range of values is -1 (none) to 5 (R_GS).
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-22-richard.henderson@linaro.org>
The value comes from tb->flags, which is uint32_t.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-21-richard.henderson@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-20-richard.henderson@linaro.org>
Treat this flag exactly like we treat the other rex bits.
The -1 initialization is unused; the two tests are > 0 and == 1,
so the value can be reduced to a bool.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-19-richard.henderson@linaro.org>
Treat this flag exactly like we treat rex_b and rex_x.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-18-richard.henderson@linaro.org>
Change the storage from int to uint8_t since the value is in {0,8}.
For x86_64 add 0 in the macros to (1) promote the type back to int,
and (2) make the macro an rvalue.
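A stand-alone illustration of the '+ 0' idiom; the structure and macro here only mirror what the commit describes:
#include <stdint.h>
#include <stdio.h>

struct DisasContextExample {
    uint8_t rex_b;              /* value is always 0 or 8 */
};

/*
 * Adding 0 (1) promotes the uint8_t back to int through the usual
 * arithmetic conversions and (2) turns the expression into an rvalue,
 * so it cannot be assigned to by accident.
 */
#define REX_B(s) ((s)->rex_b + 0)

int main(void)
{
    struct DisasContextExample s = { .rex_b = 8 };

    printf("REX_B = %d, width = %zu bytes\n", REX_B(&s), sizeof(REX_B(&s)));
    /* REX_B(&s) = 0;   would not compile: the macro is not an lvalue */
    return 0;
}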
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-17-richard.henderson@linaro.org>
The existing flag, x86_64_hregs, does not accurately describe
its setting. It is true if and only if a REX prefix has been
seen. Yes, that affects the "h" regs, but that's secondary.
Add PREFIX_REX and include this bit in s->prefix. Add REX_PREFIX
so that the check folds away when x86_64 is compiled out.
Fold away the reg >= 8 check, because bit 3 of the register
number comes from the REX prefix in the first place.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-16-richard.henderson@linaro.org>
LMA disables traditional segmentation, exposing a flat address space.
This means that ADDSEG is off.
Since we're adding an accessor macro, pull the value directly out
of flags otherwise.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-15-richard.henderson@linaro.org>
LMA is a pre-requisite for CODE64, so there is no way to disable it
for x86_64-linux-user, and there is no way to enable it for i386.
Since we're adding an accessor macro, pull the value directly out
of flags when we're not assuming a constant.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-14-richard.henderson@linaro.org>
For x86_64 user-only, there is no way to leave 64-bit mode.
Without x86_64, there is no way to enter 64-bit mode. There is
an existing macro to aid with that; simply place it in the right
place in the ifdef chain.
Since we're adding an accessor macro, pull the value directly out
of flags when we're not assuming a constant.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-13-richard.henderson@linaro.org>
For user-only, SS32 == !VM86, because we are never in
real-mode. Since we cannot enter vm86 mode for x86_64
user-only, SS32 is always set.
Since we're adding an accessor macro, pull the value
directly out of flags otherwise.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-12-richard.henderson@linaro.org>
For user-only, CODE32 == !VM86, because we are never in real-mode.
Since we cannot enter vm86 mode for x86_64 user-only, CODE32 is
always set.
Since we're adding an accessor macro, pull the value directly out
of flags otherwise.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-11-richard.henderson@linaro.org>
For i386-linux-user, we can enter vm86 mode via the vm86(2) syscall.
That syscall explicitly returns to 32-bit mode, and the syscall does
not exist for a 64-bit x86_64 executable.
Since we're adding an accessor macro, pull the value directly out of
flags otherwise.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-10-richard.henderson@linaro.org>
On real hardware, the linux kernel has the iopl(2) syscall which
can set IOPL to 3, to allow e.g. the xserver to briefly disable
interrupts while programming the graphics card.
However, QEMU cannot and does not implement this syscall, so the
IOPL is never changed from 0. Which means that all of the checks
vs CPL <= IOPL are false for user-only.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-9-richard.henderson@linaro.org>
A user-mode executable always runs in ring 3.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-8-richard.henderson@linaro.org>
A user-mode executable is never in real-mode. Since we're adding
an accessor macro, pull the value directly out of flags for sysemu.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-7-richard.henderson@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-6-richard.henderson@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-5-richard.henderson@linaro.org>
In vm86 mode, we use the same helper as real-mode, but with
an extra check for IOPL. All non-exceptional paths set EFLAGS.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-4-richard.henderson@linaro.org>
Split out the check for CPL != 0 and the raising of #GP.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-3-richard.henderson@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210514151342.384376-2-richard.henderson@linaro.org>
Ram block notifiers are currently not aware of resizes. To properly
handle resizes during migration, we want to teach ram block notifiers about
resizeable ram.
Introduce the basic infrastructure but keep using max_size in the
existing notifiers. Supply the max_size when adding and removing ram
blocks. Also, notify on resizes.
Acked-by: Paul Durrant <paul@xen.org>
Reviewed-by: Peter Xu <peterx@redhat.com>
Cc: xen-devel@lists.xenproject.org
Cc: haxm-team@intel.com
Cc: Paul Durrant <paul@xen.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Anthony Perard <anthony.perard@citrix.com>
Cc: Wenchao Wang <wenchao.wang@intel.com>
Cc: Colin Xu <colin.xu@intel.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20210429112708.12291-3-david@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Unify the duplicate code between get_hphys and mmu_translate, by simply
making get_hphys call mmu_translate. This also fixes the support for
5-level nested page tables.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
In order to unify the two stages of page table lookup, we need
mmu_translate to use either the host CR0/EFER/CR4 or the guest's.
To do so, make mmu_translate use the same pg_mode constants that
were used for the NPT lookup.
This also prepares for adding 5-level NPT support, which however does
not work yet.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Extract the page table lookup out of handle_mmu_fault, which only has
to invoke mmu_translate and either fill the TLB or deliver the page
fault.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
We will reuse the page walker for both SVM and regular accesses. To do
so we will build a function that receives the currently active paging
mode; start by including in cpu.h the constants and the function to go
from cr4/hflags/efer to the paging mode.
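A rough sketch of the idea; the PG_MODE_* names and bit assignments below are illustrative, not necessarily the ones the patch adds to cpu.h:
#include <stdint.h>
#include <stdio.h>

/* Illustrative paging-mode bits derived from control registers. */
#define PG_MODE_WP   (1 << 0)   /* CR0.WP   */
#define PG_MODE_PSE  (1 << 1)   /* CR4.PSE  */
#define PG_MODE_PAE  (1 << 2)   /* CR4.PAE  */
#define PG_MODE_LMA  (1 << 3)   /* EFER.LMA */
#define PG_MODE_NXE  (1 << 4)   /* EFER.NXE */
#define PG_MODE_SMEP (1 << 5)   /* CR4.SMEP */

/* Fold the scattered CR0/CR4/EFER state into one bitmask so the
 * page walker only has to look at a single "pg_mode" argument,
 * whether it walks the guest's tables or the host's (NPT). */
int get_pg_mode_example(uint64_t cr0, uint64_t cr4, uint64_t efer)
{
    int pg_mode = 0;

    if (cr0 & (1 << 16))  pg_mode |= PG_MODE_WP;
    if (cr4 & (1 << 4))   pg_mode |= PG_MODE_PSE;
    if (cr4 & (1 << 5))   pg_mode |= PG_MODE_PAE;
    if (cr4 & (1 << 20))  pg_mode |= PG_MODE_SMEP;
    if (efer & (1 << 10)) pg_mode |= PG_MODE_LMA;
    if (efer & (1 << 11)) pg_mode |= PG_MODE_NXE;

    return pg_mode;
}

int main(void)
{
    /* 64-bit long mode with NX: CR0.WP | CR4.PAE | EFER.LMA | EFER.NXE */
    printf("pg_mode = %#x\n",
           get_pg_mode_example(1ull << 16, 1ull << 5,
                               (1ull << 10) | (1ull << 11)));
    return 0;
}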
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
While on x86 all CPU classes can use the same set of TCGCPUOps,
on ARM the right accel behavior depends on the type of the CPU.
So we need a way to specialize the accel behavior according to
the CPU. Therefore, add a second initialization, after
accel_cpu->cpu_class_init, that allows us to do this.
Signed-off-by: Claudio Fontana <cfontana@suse.de>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210322132800.7470-24-cfontana@suse.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
cpu_load_efer is now used only by sysemu code.
Therefore, move this function's implementation to the
sysemu-only section of helper.c.
Signed-off-by: Claudio Fontana <cfontana@suse.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210322132800.7470-22-cfontana@suse.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
A number of registers are read as 64-bit under the condition
((hflags & HF_CS64_MASK) || TARGET_X86_64)
and a number of registers are written as 64-bit under the condition
(hflags & HF_CS64_MASK).
Provide some auxiliary functions that do that.
Signed-off-by: Claudio Fontana <cfontana@suse.de>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210322132800.7470-20-cfontana@suse.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
For now we just copy over the previous user stubs, but really,
everything that requires s->cpl == 0 should be impossible
to trigger from user-mode emulation.
Later on we should add a check that asserts this easily, for example:
static bool check_cpl0(DisasContext *s)
{
int cpl = s->cpl;
#ifdef CONFIG_USER_ONLY
assert(cpl == 3);
#endif
if (cpl != 0) {
gen_exception(s, EXCP0D_GPF, s->pc_start - s->cs_base);
return false;
}
return true;
}
Signed-off-by: Claudio Fontana <cfontana@suse.de>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210322132800.7470-17-cfontana@suse.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
create a separate tcg/sysemu/fpu_helper.c for the sysemu-only parts.
For user mode, some small #ifdefs remain in tcg/fpu_helper.c
which do not seem worth splitting into their own user-mode module.
Signed-off-by: Claudio Fontana <cfontana@suse.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210322132800.7470-16-cfontana@suse.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
for user-mode, assert that the hidden IOBPT flags are not set
while attempting to generate io_bpt helpers.
Signed-off-by: Claudio Fontana <cfontana@suse.de>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210322132800.7470-14-cfontana@suse.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
SMM is only really useful for sysemu; split it into two modules
around CONFIG_USER_ONLY, in order to remove the ifdef
and use the build system instead.
Add cpu_abort() when detecting attempts to enter SMM mode via
SMI interrupt in user-mode, and assert that the cpu is not
in SMM mode while translating RSM instructions.
Signed-off-by: Claudio Fontana <cfontana@suse.de>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210322132800.7470-12-cfontana@suse.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
overall, all devices' realize functions take an Error **errp, but return void.
hw/core/qdev.c code, which realizes devices, therefore does:
local_err = NULL;
dc->realize(dev, &local_err);
if (local_err != NULL) {
goto fail;
}
However, we can improve at least accel_cpu to return a meaningful bool value.
Signed-off-by: Claudio Fontana <cfontana@suse.de>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210322132800.7470-9-cfontana@suse.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
move the check for phys_bits outside of host_cpu_adjust_phys_bits,
because otherwise it is impossible to return an error condition
explicitly.
Signed-off-by: Claudio Fontana <cfontana@suse.de>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210322132800.7470-8-cfontana@suse.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
avoid open coding the accesses to cpu->accel_cpu interfaces,
and instead introduce:
accel_cpu_instance_init,
accel_cpu_realizefn
to be used by the targets/ initfn code,
and by cpu_exec_realizefn respectively.
Signed-off-by: Claudio Fontana <cfontana@suse.de>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210322132800.7470-7-cfontana@suse.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
move the call to accel_cpu->cpu_realizefn to the general
cpu_exec_realizefn from target/i386, so it does not need to be
called for every target explicitly as we enable more targets.
Signed-off-by: Claudio Fontana <cfontana@suse.de>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210322132800.7470-6-cfontana@suse.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
i386 is the first user of AccelCPUClass, allowing us to split
cpu.c into:
cpu.c cpuid and common x86 cpu functionality
host-cpu.c host x86 cpu functions and "host" cpu type
kvm/kvm-cpu.c KVM x86 AccelCPUClass
hvf/hvf-cpu.c HVF x86 AccelCPUClass
tcg/tcg-cpu.c TCG x86 AccelCPUClass
Signed-off-by: Claudio Fontana <cfontana@suse.de>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
[claudio]:
Rebased on commit b8184135 ("target/i386: allow modifying TCG phys-addr-bits")
Signed-off-by: Claudio Fontana <cfontana@suse.de>
Message-Id: <20210322132800.7470-5-cfontana@suse.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The helper_* functions must use GETPC() to unwind from TCG.
The cpu_x86_* functions cannot, and directly calling the
helper_* functions is a bug. Split out new functions that
perform the work and can be used by both.
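The resulting pattern looks roughly like the stand-alone sketch below; the function names are illustrative, and GETPC() is stubbed with the underlying compiler builtin that QEMU's macro is based on:
#include <stdint.h>
#include <stdio.h>

/* Stand-ins for QEMU's CPUX86State and GETPC(); in QEMU, GETPC()
 * captures the host return address so TCG can unwind the guest state. */
typedef struct { int dummy; } CPUX86State;
#define GETPC() ((uintptr_t)__builtin_return_address(0))

/* The worker takes an explicit retaddr: 0 means "no unwinding needed". */
static void do_fxsave(CPUX86State *env, uintptr_t retaddr)
{
    (void)env;
    printf("fxsave work, retaddr=%#lx\n", (unsigned long)retaddr);
}

/* TCG-called helper: must pass GETPC() so faults unwind correctly. */
void helper_fxsave(CPUX86State *env)
{
    do_fxsave(env, GETPC());
}

/* Non-TCG entry point (e.g. called from cpu_x86_*): no unwind context. */
void cpu_x86_fxsave_example(CPUX86State *env)
{
    do_fxsave(env, 0);
}

int main(void)
{
    CPUX86State env = {0};
    cpu_x86_fxsave_example(&env);
    return 0;
}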
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Claudio Fontana <cfontana@suse.de>
Tested-by: Claudio Fontana <cfontana@suse.de>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20210322132800.7470-4-cfontana@suse.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Change the prefix from "helper" to "do". The former should be
reserved for those functions that are called from TCG; the latter
is in use within the file already for those functions that are
called from the helper functions, adding a "retaddr" argument.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Claudio Fontana <cfontana@suse.de>
Tested-by: Claudio Fontana <cfontana@suse.de>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20210322132800.7470-3-cfontana@suse.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Stop including exec/address-spaces.h in files that don't need it.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20210416171314.2074665-5-thuth@redhat.com>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Found the following CPU feature bits missing from the EPYC-Rome model:
ibrs : Indirect Branch Restricted Speculation
ssbd : Speculative Store Bypass Disable
These new features will be added in EPYC-Rome-v2. The -cpu help output
after the change:
x86 EPYC-Rome (alias configured by machine type)
x86 EPYC-Rome-v1 AMD EPYC-Rome Processor
x86 EPYC-Rome-v2 AMD EPYC-Rome Processor
Reported-by: Pankaj Gupta <pankaj.gupta@cloud.ionos.com>
Signed-off-by: Babu Moger <babu.moger@amd.com>
Signed-off-by: Pankaj Gupta <pankaj.gupta@cloud.ionos.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Reviewed-by: David Edmondson <david.edmondson@oracle.com>
Message-Id: <161478622280.16275.6399866734509127420.stgit@bmoger-ubuntu>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
These two opcodes only allow a memory operand.
Lacking the check for a register operand, we used the A0 temp
without initialization, which led to a tcg abort.
Buglink: https://bugs.launchpad.net/qemu/+bug/1921138
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210324164650.128608-1-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
KVM doesn't fully support Hyper-V reenlightenment notifications on
migration. In particular, it doesn't support emulating TSC frequency
of the source host by trapping all TSC accesses so unless TSC scaling
is supported on the destination host and KVM_SET_TSC_KHZ succeeds, it
is unsafe to proceed with migration.
KVM_SET_TSC_KHZ is called from two sites: kvm_arch_init_vcpu() and
kvm_arch_put_registers(). The latter (intentionally) doesn't propagate
errors, allowing migrations to succeed even when TSC scaling is not
supported on the destination. This doesn't suit 're-enlightenment'
use-case as we have to guarantee that TSC frequency stays constant.
Require 'tsc-frequency=' command line option to be specified for successful
migration when re-enlightenment was enabled by the guest.
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20210319123801.1111090-1-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The name of this section is 'cpu/msr_hyperv_hypercall', so
'hypercall_hypercall' is clearly a typo.
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20210318160249.1084178-3-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
env->error_code is only 32 bits wide, so the high 32 bits of EXITINFO1
are being lost. However, even though saving guest state and restoring
host state must be delayed to do_vmexit, because they might take tb_lock,
it is always possible to write to the VMCB. So do this for the exit
code and EXITINFO1, just like it is already being done for EXITINFO2.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The 'running' argument from VMChangeStateHandler does not take any
value other than 0 / 1. Make it a plain boolean.
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20210111152020.1422021-3-philmd@redhat.com>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
An assorted set of spelling fixes in various places.
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
Reviewed-by: Stefan Weil <sw@weilnetz.de>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20210309111510.79495-1-mjt@msgid.tls.msk.ru>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
If kvm_arch_remove_sw_breakpoint finds that a software breakpoint does not
have an INT3 instruction, it fails. This can happen if one sets a
software breakpoint in a kernel module and then reloads it. gdb then
thinks the breakpoint cannot be deleted and there is no way to add it
back.
Suggested-by: Maxim Levitsky <mlevitsk@redhat.com>
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Bus lock debug exception is a feature that can notify the kernel by
generating a #DB trap after an instruction acquires a bus lock while
CPL>0. This allows the kernel to enforce user application throttling or
mitigations.
This feature is enumerated via CPUID.(EAX=7,ECX=0).ECX[bit 24].
Signed-off-by: Chenyi Qiang <chenyi.qiang@intel.com>
Message-Id: <20210202090224.13274-1-chenyi.qiang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The preferred syntax is to use "foo=on|off", rather than a bare
"+foo" or "-foo"
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20210216191027.595031-11-berrange@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Add support for AMD 3rd generation processors. The model
display for the new processor will be EPYC-Milan.
Add the following new feature bits on top of the feature bits from
the first and second generation EPYC models:
pcid : Process context identifiers support
ibrs : Indirect Branch Restricted Speculation
ssbd : Speculative Store Bypass Disable
erms : Enhanced REP MOVSB/STOSB support
fsrm : Fast Short REP MOVSB support
invpcid : Invalidate processor context ID
pku : Protection keys support
svme-addr-chk : SVM instructions address check for #GP handling
Depends on the following kernel commits:
14c2bf81fcd2 ("KVM: SVM: Fix #GP handling for doubly-nested virtualization")
3b9c723ed7cf ("KVM: SVM: Add support for SVM instruction address check change")
4aa2691dcbd3 ("KVM: x86: Factor out x86 instruction emulation with decoding")
4407a797e941 ("KVM: SVM: Enable INVPCID feature on AMD")
9715092f8d7e ("KVM: X86: Move handling of INVPCID types to x86")
3f3393b3ce38 ("KVM: X86: Rename and move the function vmx_handle_memory_failure to x86.c")
830bd71f2c06 ("KVM: SVM: Remove set_cr_intercept, clr_cr_intercept and is_cr_intercept")
4c44e8d6c193 ("KVM: SVM: Add new intercept word in vmcb_control_area")
c62e2e94b9d4 ("KVM: SVM: Modify 64 bit intercept field to two 32 bit vectors")
9780d51dc2af ("KVM: SVM: Modify intercept_exceptions to generic intercepts")
30abaa88382c ("KVM: SVM: Change intercept_dr to generic intercepts")
03bfeeb988a9 ("KVM: SVM: Change intercept_cr to generic intercepts")
c45ad7229d13 ("KVM: SVM: Introduce vmcb_(set_intercept/clr_intercept/_is_intercept)")
a90c1ed9f11d ("(pcid) KVM: nSVM: Remove unused field")
fa44b82eb831 ("KVM: x86: Move MPK feature detection to common code")
38f3e775e9c2 ("x86/Kconfig: Update config and kernel doc for MPK feature on AMD")
37486135d3a7 ("KVM: x86: Fix pkru save/restore when guest CR4.PKE=0, move it to x86.c")
Signed-off-by: Babu Moger <babu.moger@amd.com>
Message-Id: <161290460478.11352.8933244555799318236.stgit@bmoger-ubuntu>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
The CPUID function 1 has a bit called OSXSAVE which tells user space the
status of the CR4.OSXSAVE bit. Our generic CPUID function injects that bit
based on the status of CR4.
With Hypervisor.framework, we do not synchronize full CPU state often enough
for this function to see the CR4 update before guest user space asks for it.
To be on the safe side, let's just always synchronize it when we receive a
CPUID(1) request. That way we can set the bit with real confidence.
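Conceptually the adjustment amounts to mirroring CR4 into the CPUID output; a stand-alone sketch (bit positions per the SDM, function shape illustrative):
#include <stdint.h>
#include <stdio.h>

#define CR4_OSXSAVE_MASK    (1u << 18)   /* CR4.OSXSAVE */
#define CPUID_1_ECX_OSXSAVE (1u << 27)   /* CPUID.01H:ECX.OSXSAVE */

/* Mirror the current CR4.OSXSAVE value into the CPUID(1) ECX output
 * right before returning it to the guest, instead of relying on a
 * possibly stale copy of CR4 in the synchronized CPU state. */
uint32_t adjust_cpuid1_ecx(uint32_t ecx, uint64_t cr4)
{
    if (cr4 & CR4_OSXSAVE_MASK) {
        ecx |= CPUID_1_ECX_OSXSAVE;
    } else {
        ecx &= ~CPUID_1_ECX_OSXSAVE;
    }
    return ecx;
}

int main(void)
{
    /* CR4 with OSXSAVE set: the guest-visible ECX bit follows it. */
    printf("%#x\n", adjust_cpuid1_ecx(0, 1ull << 18));
    return 0;
}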
Reported-by: Asad Ali <asad@osaro.com>
Signed-off-by: Alexander Graf <agraf@csgraf.de>
Message-Id: <20210123004129.6364-1-agraf@csgraf.de>
[RB: resolved conflict with another CPUID change]
Signed-off-by: Roman Bolshakov <r.bolshakov@yadro.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Some guests (e.g. Darwin-XNU) can attempt to read this MSR to retrieve and
validate CPU topology, comparing it to ACPI MADT content.
MSR description from Intel Manual:
35H: MSR_CORE_THREAD_COUNT: Configured State of Enabled Processor Core
Count and Logical Processor Count
Bits 15:0 THREAD_COUNT The number of logical processors that are
currently enabled in the physical package
Bits 31:16 Core_COUNT The number of processor cores that are currently
enabled in the physical package
Bits 63:32 Reserved
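Decoding the value is straightforward; a stand-alone sketch following the layout above:
#include <stdint.h>
#include <stdio.h>

#define MSR_CORE_THREAD_COUNT 0x35

/* Split the MSR value into the two counts described above. */
static void decode_core_thread_count(uint64_t val)
{
    unsigned threads = val & 0xffff;           /* bits 15:0  */
    unsigned cores   = (val >> 16) & 0xffff;   /* bits 31:16 */

    printf("threads=%u cores=%u\n", threads, cores);
}

int main(void)
{
    /* Example value for a 4-core / 8-thread package. */
    decode_core_thread_count((4ull << 16) | 8);
    return 0;
}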
Signed-off-by: Vladislav Yaroshchuk <yaroshchuk2000@gmail.com>
Message-Id: <20210113205323.33310-1-yaroshchuk2000@gmail.com>
[RB: reordered MSR definition and dropped u suffix from shift offset]
Signed-off-by: Roman Bolshakov <r.bolshakov@yadro.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The i386 hvf code has a few struct and cpp definitions that are never
used. Remove them.
Suggested-by: Roman Bolshakov <r.bolshakov@yadro.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Alexander Graf <agraf@csgraf.de>
Message-Id: <20210120224444.71840-3-agraf@csgraf.de>
Signed-off-by: Roman Bolshakov <r.bolshakov@yadro.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
For `-accel hvf` cpu_x86_cpuid() is wrapped with hvf_cpu_x86_cpuid() to
add paravirtualization cpuid leaf 0x40000010
https://lkml.org/lkml/2008/10/1/246
Leaf 0x40000010, Timing Information:
EAX: (Virtual) TSC frequency in kHz.
EBX: (Virtual) Bus (local apic timer) frequency in kHz.
ECX, EDX: RESERVED (Per above, reserved fields are set to zero).
On macOS the TSC and APIC bus frequencies can be read via sysctl calls
with the names `machdep.tsc.frequency` and `hw.busfrequency`.
This option is required for a Darwin-XNU guest to be synchronized with
the host.
Leaf 0x40000000 does not expose HVF, leaving the hypervisor signature empty.
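A stand-alone sketch of how the host values can be read on macOS and scaled to the kHz units the leaf reports (error handling kept minimal):
/* Build on macOS: cc tscfreq.c -o tscfreq */
#include <stdint.h>
#include <stdio.h>
#include <sys/sysctl.h>

int main(void)
{
    uint64_t tsc_hz = 0, bus_hz = 0;
    size_t len = sizeof(tsc_hz);

    /* Host TSC frequency in Hz. */
    if (sysctlbyname("machdep.tsc.frequency", &tsc_hz, &len, NULL, 0) != 0) {
        perror("machdep.tsc.frequency");
    }
    len = sizeof(bus_hz);
    /* Host (local APIC timer) bus frequency in Hz. */
    if (sysctlbyname("hw.busfrequency", &bus_hz, &len, NULL, 0) != 0) {
        perror("hw.busfrequency");
    }

    /* Leaf 0x40000010 reports both values in kHz. */
    printf("EAX (TSC kHz): %llu\n", (unsigned long long)(tsc_hz / 1000));
    printf("EBX (bus kHz): %llu\n", (unsigned long long)(bus_hz / 1000));
    return 0;
}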
Signed-off-by: Vladislav Yaroshchuk <yaroshchuk2000@gmail.com>
Message-Id: <20210122150518.3551-1-yaroshchuk2000@gmail.com>
Signed-off-by: Roman Bolshakov <r.bolshakov@yadro.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This prevents an illegal instruction fault on CPUs that do not support xgetbv.
Buglink: https://bugs.launchpad.net/qemu/+bug/1758819
Reviewed-by: Cameron Esfahani <dirty@apple.com>
Signed-off-by: Hill Ma <maahiuzeon@gmail.com>
Message-Id: <X/6OJ7qk0W6bHkHQ@Hills-Mac-Pro.local>
Signed-off-by: Roman Bolshakov <r.bolshakov@yadro.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Update the sev_es_enabled() function return value to be based on the SEV
policy that has been specified. SEV-ES is enabled if SEV is enabled and
the SEV-ES policy bit is set in the policy object.
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Richard Henderson <richard.henderson@linaro.org>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Reviewed-by: Venu Busireddy <venu.busireddy@oracle.com>
Message-Id: <c69f81c6029f31fc4c52a9f35f1bd704362476a5.1611682609.git.thomas.lendacky@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
SMM is not currently supported for an SEV-ES guest by KVM. Change the SMM
capability check from a KVM-wide check to a per-VM check in order to have
a finer-grained SMM capability check.
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Richard Henderson <richard.henderson@linaro.org>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Reviewed-by: Venu Busireddy <venu.busireddy@oracle.com>
Message-Id: <f851903809e9d4e6a22d5dfd738dac8da991e28d.1611682609.git.thomas.lendacky@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
An SEV-ES guest does not allow register state to be altered once it has
been measured. When an SEV-ES guest issues a reboot command, Qemu will
reset the vCPU state and resume the guest. This will cause failures under
SEV-ES. Prevent that from occurring by introducing an arch-specific
callback that returns a boolean indicating whether vCPUs are resettable.
Cc: Peter Maydell <peter.maydell@linaro.org>
Cc: Aurelien Jarno <aurelien@aurel32.net>
Cc: Jiaxun Yang <jiaxun.yang@flygoat.com>
Cc: Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>
Cc: David Gibson <david@gibson.dropbear.id.au>
Cc: David Hildenbrand <david@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Reviewed-by: Venu Busireddy <venu.busireddy@oracle.com>
Message-Id: <1ac39c441b9a3e970e9556e1cc29d0a0814de6fd.1611682609.git.thomas.lendacky@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
When SEV-ES is enabled, it is not possible to modify the guest's register
state after it has been initially created, encrypted and measured.
Normally, an INIT-SIPI-SIPI request is used to boot the AP. However, the
hypervisor cannot emulate this because it cannot update the AP register
state. For the very first boot by an AP, the reset vector CS segment
value and the EIP value must be programmed before the register has been
encrypted and measured. Search the guest firmware for a specific GUID
that tells Qemu the value of the reset vector to use.
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Marcel Apfelbaum <marcel.apfelbaum@gmail.com>
Cc: Richard Henderson <richard.henderson@linaro.org>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Cc: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Message-Id: <22db2bfb4d6551aed661a9ae95b4fdbef613ca21.1611682609.git.thomas.lendacky@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
In prep for AP booting, require the use of in-kernel irqchip support. This
lessens the Qemu support burden required to boot APs.
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Richard Henderson <richard.henderson@linaro.org>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Reviewed-by: Venu Busireddy <venu.busireddy@oracle.com>
Message-Id: <e9aec5941e613456f0757f5a73869cdc5deea105.1611682609.git.thomas.lendacky@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Provide initial support for SEV-ES. This includes creating a function to
indicate the guest is an SEV-ES guest (which will return false until all
support is in place), performing the proper SEV initialization and
ensuring that the guest CPU state is measured as part of the launch.
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Richard Henderson <richard.henderson@linaro.org>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Co-developed-by: Jiri Slaby <jslaby@suse.cz>
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Reviewed-by: Venu Busireddy <venu.busireddy@oracle.com>
Message-Id: <2e6386cbc1ddeaf701547dd5677adf5ddab2b6bd.1611682609.git.thomas.lendacky@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
If the gpa isn't specified, its value is extracted from the OVMF
properties table located below the reset vector (and if this doesn't
exist, an error is returned). OVMF has defined the GUID for the SEV
secret area as 4c2eb361-7d9b-4cc3-8081-127c90d3d294 and the format of
the <data> is: <base>|<size> where both are uint32_t. We extract
<base> and use it as the gpa for the injection.
Note: it is expected that the injected secret will also be GUID
described but since qemu can't interpret it, the format is left
undefined here.
Signed-off-by: James Bottomley <jejb@linux.ibm.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <20210204193939.16617-3-jejb@linux.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
OVMF is developing a mechanism for depositing a GUIDed table just
below the known location of the reset vector. The table goes
backwards in memory so all entries are of the form
<data>|len|<GUID>
Where <data> is of arbitrary size and type, <len> is a uint16_t and
describes the entire length of the entry from the beginning of the
data to the end of the guid.
The foot of the table is of this form and <len> for this case
describes the entire size of the table. The table foot GUID is
defined by OVMF as 96b582de-1fb2-45f7-baea-a366c55a082d and if the
table is present this GUID is just below the reset vector, 48 bytes
before the end of the firmware file.
Add a parser for the ovmf reset block which takes a copy of the block,
if the table foot guid is found, minus the footer and a function for
later traversal to return the data area of any specified GUIDs.
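A rough sketch of how such a backwards-growing table can be walked once the footer has established its extent; the function name, signature and bounds checks below are illustrative, not QEMU's actual parser:
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>
#define GUID_LEN 16
/* Walk a backwards-growing table of <data>|len|<GUID> entries and return the
 * data area of the requested GUID. 'table'/'table_len' describe the table
 * with the footer entry already stripped off. */
static bool find_guid_data(const uint8_t *table, size_t table_len,
                           const uint8_t guid[GUID_LEN],
                           const uint8_t **data, size_t *data_len)
{
    const uint8_t *end = table + table_len;
    while ((size_t)(end - table) >= GUID_LEN + sizeof(uint16_t)) {
        const uint8_t *entry_guid = end - GUID_LEN;
        uint16_t len;
        memcpy(&len, entry_guid - sizeof(uint16_t), sizeof(len));
        if (len < GUID_LEN + sizeof(uint16_t) || len > (size_t)(end - table)) {
            return false;                       /* malformed entry */
        }
        if (memcmp(entry_guid, guid, GUID_LEN) == 0) {
            *data = end - len;                  /* <data> starts the entry */
            *data_len = len - GUID_LEN - sizeof(uint16_t);
            return true;
        }
        end -= len;                             /* previous entry ends here */
    }
    return false;
}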
Signed-off-by: James Bottomley <jejb@linux.ibm.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <20210204193939.16617-2-jejb@linux.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Use g2h_untagged in contexts that have no cpu, e.g. the binary
loaders that operate before the primary cpu is created. As a
corollary, target_mmap and friends must use untagged addresses,
since they are used by the loaders.
Use g2h_untagged on values returned from target_mmap, as the
kernel never applies a tag itself.
Use g2h_untagged on all pc values. The only current user of
tags, aarch64, removes tags from code addresses upon branch,
so "pc" is always untagged.
Use g2h with the cpu context on hand wherever possible.
Use g2h_untagged in lock_user, which will be updated soon.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210212184902.1251044-13-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Expose the VMX exit/entry load pkrs control bits in
VMX_TRUE_EXIT_CTLS/VMX_TRUE_ENTRY_CTLS MSRs to guest, which supports the
PKS in nested VM.
Signed-off-by: Chenyi Qiang <chenyi.qiang@intel.com>
Message-Id: <20210205083325.13880-3-chenyi.qiang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
PKS introduces MSR IA32_PKRS(0x6e1) to manage the supervisor protection
key rights. Page access and writes can be managed via the MSR update
without TLB flushes when permissions change.
Add the support to save/load IA32_PKRS MSR in guest.
Signed-off-by: Chenyi Qiang <chenyi.qiang@intel.com>
Message-Id: <20210205083325.13880-2-chenyi.qiang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Protection Keys for Supervisor-mode pages is a simple extension of
the PKU feature that QEMU already implements. For supervisor-mode
pages, protection key restrictions come from a new MSR. The MSR
has no XSAVE state associated to it.
PKS is only respected in long mode. However, in principle it is
possible to set the MSR even outside long mode, and in fact
even the XSAVE state for PKRU could be set outside long mode
using XRSTOR. So do not limit the migration subsections for
PKRU and PKRS to long mode.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This patch fixes a translation bug for a subset of x86 BMI instructions
such as the following:
c4 e2 f9 f7 c0 shlxq %rax, %rax, %rax
Currently, these incorrectly generate an undefined instruction exception
when SSE is disabled via CR4, while instructions like "shrxq" work fine.
The problem appears to be related to BMI instructions encoded using VEX
and with a mandatory prefix of "0x66" (data). Instructions with this
data prefix (such as shlxq) are currently rejected. Instructions with
other mandatory prefixes (such as shrxq) translate as expected.
This patch removes the incorrect check in "gen_sse" that causes the
exception to be generated. For the non-BMI cases, the check is
redundant: prefixes are already checked at line 3696.
Buglink: https://bugs.launchpad.net/qemu/+bug/1748296
Signed-off-by: David Greenaway <dgreenaway@google.com>
Message-Id: <20210114063958.1508050-1-dgreenaway@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Newer AMD CPUs will add the CPUID_0x8000000A_EDX[28] bit, which indicates
that SVM instructions (VMRUN/VMSAVE/VMLOAD) will trigger #VMEXIT before
the CPU checks their EAX against reserved memory regions. This change will
allow the hypervisor to avoid intercepting #GP and emulating SVM
instructions. KVM turns on this CPUID bit for nested VMs. In order to
support it, let us populate this bit, along with other SVM feature bits,
in FEAT_SVM.
Signed-off-by: Wei Huang <wei.huang2@amd.com>
Message-Id: <20210126202456.589932-1-wei.huang2@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
32-bit targets by definition do not support long mode; therefore, the
bit must be masked in the features supported by the accelerator.
As a side effect, this avoids setting up the 0x80000008 CPUID leaf
for
qemu-system-i386 -cpu host
which since commit 5a140b255d ("x86/cpu: Use max host physical address
if -cpu max option is applied") would have printed this error:
qemu-system-i386: phys-bits should be between 32 and 36 (but is 48)
Reported-by: Nathan Chancellor <natechancellor@gmail.com>
Tested-by: Nathan Chancellor <natechancellor@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
While we've abstracted some (potential) differences between mechanisms for
securing guest memory, the initialization is still specific to SEV. Given
that, move it into x86's kvm_arch_init() code, rather than the generic
kvm_init() code.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
The platform specific details of mechanisms for implementing
confidential guest support may require setup at various points during
initialization. Thus, it's not really feasible to have a single cgs
initialization hook, but instead each mechanism needs its own
initialization calls in arch or machine specific code.
However, to make it harder to have a bug where a mechanism isn't
properly initialized under some circumstances, we want to have a
common place, late in boot, where we verify that cgs has been
initialized if it was requested.
This patch introduces a ready flag to the ConfidentialGuestSupport
base type to accomplish this, which we verify in
qemu_machine_creation_done().
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
This allows failures to be reported richly and idiomatically.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Currently the "memory-encryption" property is only looked at once we
get to kvm_init(). Although protection of guest memory from the
hypervisor isn't something that could really ever work with TCG, it's
not conceptually tied to the KVM accelerator.
In addition, the way the string property is resolved to an object is
almost identical to how a QOM link property is handled.
So, create a new "confidential-guest-support" link property which sets
this QOM interface link directly in the machine. For compatibility we
keep the "memory-encryption" property, but now implemented in terms of
the new property.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
When AMD's SEV memory encryption is in use, flash memory banks (which are
initialed by pc_system_flash_map()) need to be encrypted with the guest's
key, so that the guest can read them.
That's abstracted via the kvm_memcrypt_encrypt_data() callback in the KVM
state... except that it doesn't really abstract much at all.
For starters, the only call site is in code specific to the 'pc'
family of machine types, so it's obviously specific to those and to
x86 to begin with. But it makes a bunch of further assumptions that
need not be true about an arbitrary confidential guest system based on
memory encryption, let alone one based on other mechanisms:
* it assumes that the flash memory is defined to be encrypted with the
guest key, rather than being shared with the hypervisor
* it assumes that the hypervisor has some mechanism to encrypt data into
the guest, even though it can't decrypt it out, since that's the whole
point
* the interface assumes that this encryption can be done in place, which
implies that the hypervisor can write into a confidential guest's
memory, even if what it writes isn't meaningful
So really, this "abstraction" is pretty specific to the way SEV
works. This patch therefore removes it and instead has the PC flash
initialization code call into a SEV-specific callback.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Several architectures have mechanisms which are designed to protect
guest memory from interference or eavesdropping by a compromised
hypervisor. AMD SEV does this with in-chip memory encryption and
Intel's TDX can do similar things. POWER's Protected Execution
Framework (PEF) accomplishes a similar goal using an ultravisor and
new memory protection features, instead of encryption.
To (partially) unify handling for these, this introduces a new
ConfidentialGuestSupport QOM base class. "Confidential" is kind of vague,
but "confidential computing" seems to be the buzzword about these schemes,
and "secure" or "protected" are often used in connection to unrelated
things (such as hypervisor-from-guest or guest-from-guest security).
The "support" in the name is significant because in at least some of the
cases it requires the guest to take specific actions in order to protect
itself from hypervisor eavesdropping.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This will allow us to centralize the registration of
the cpus.c module accelerator operations (in accel/accel-softmmu.c),
and trigger it automatically using object hierarchy lookup from the
new accel_init_interfaces() initialization step, depending just on
which accelerators are available in the code.
Rename all tcg-cpus.c, kvm-cpus.c, etc to tcg-accel-ops.c,
kvm-accel-ops.c, etc, matching the object type names.
Signed-off-by: Claudio Fontana <cfontana@suse.de>
Message-Id: <20210204163931.7358-18-cfontana@suse.de>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
we cannot in principle make the TCG Operations field definitions
conditional on CONFIG_TCG in code that is included by both common_ss
and specific_ss modules.
Therefore, what we can do safely to restrict the TCG fields to TCG-only
builds, is to move all tcg cpu operations into a separate header file,
which is only included by TCG, target-specific code.
This leaves just a NULL pointer in the cpu.h for the non-TCG builds.
This also tidies up the code in all targets a bit, having all TCG cpu
operations neatly contained by a dedicated data struct.
Signed-off-by: Claudio Fontana <cfontana@suse.de>
Message-Id: <20210204163931.7358-16-cfontana@suse.de>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The TCG-specific CPU methods will be moved to a separate struct,
to make it easier to move accel-specific code outside generic CPU
code in the future. Start by moving tcg_initialize().
The new CPUClass.tcg_ops field may eventually become a pointer,
but keep it an embedded struct for now, to make code conversion
easier.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
[claudio: move TCGCpuOperations inside include/hw/core/cpu.h]
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20210204163931.7358-2-cfontana@suse.de>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The easiest spots to use QAPI_LIST_APPEND are where we already have an
obvious pointer to the tail of a list. While at it, consistently use
the variable name 'tail' for that purpose.
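The 'tail' pattern referred to here is the usual pointer-to-the-link-field append; a generic sketch (IntList and build_list are made-up names, not QAPI-generated types) looks like this:
#include <glib.h>
typedef struct IntList {
    struct IntList *next;
    int value;
} IntList;
/* Append by keeping a pointer to the location where the next node goes. */
static IntList *build_list(const int *values, size_t n)
{
    IntList *head = NULL;
    IntList **tail = &head;
    for (size_t i = 0; i < n; i++) {
        IntList *node = g_new0(IntList, 1);
        node->value = values[i];
        *tail = node;
        tail = &node->next;
    }
    return head;
}
Keeping 'tail' pointing at the next link to fill preserves insertion order without walking the list on every append.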
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20210113221013.390592-5-eblake@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
The QEMU option -cpu max (max_features) means "enable all features supported by
the accelerator in the current host". This holds true for all the features
except the guest's maximum physical address width, so add this patch to enable it.
Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
Message-Id: <20210113090430.26394-1-weijiang.yang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Use the dedicated X86Seg enum type for segment registers.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20210109233427.749748-1-f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This struct is only used in whpx-apic.c, so there is no need
to expose it in whpx.h.
Signed-off-by: Yonggang Luo <luoyonggang@gmail.com>
Message-Id: <20210107101919.80-6-luoyonggang@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Only leave the external interface in sysemu/whpx.h. whpx_apic_in_platform
is moved to a .c file because it needs whpx_state.
Reported-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20201219090637.1700900-3-pbonzini@redhat.com>
Rename the file in preparation for moving more implementation-internal
definitions to it. The build is still broken though.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20201219090637.1700900-2-pbonzini@redhat.com>
There is nothing within the translators that ought to be
changing the TranslationBlock data, so make it const.
This does not actually use the read-only copy of the
data structure that exists within the rx region.
Reviewed-by: Joelle van Dyne <j@getutm.app>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
When the 'int N' instruction is executed in protected mode, the
pseudocode in the architecture manual specifies that we need to check:
* the vector number is within the IDT limits
* the selected IDT descriptor is a valid type (interrupt, trap or task gate)
* if this was a software interrupt, the gate DPL is not less than the CPL
The way we had structured the code meant that the privilege check for
software interrupts ended up not in the code path taken for task gate
handling, because all of the task gate handling code was in the 'case 5'
of the switch which was checking "is this descriptor a valid type".
Move the task gate handling code out of that switch (so that it is now
purely doing the "valid type?" check) and below the software interrupt
privilege check.
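In terms of ordering, the intent is roughly the following self-contained sketch of the SDM rules; the function and enum names are made up, while the gate type values follow the SDM:
#include <stdbool.h>
enum int_check { CHECK_OK, CHECK_GP };
static enum int_check check_soft_int_gate(unsigned type, unsigned dpl,
                                          unsigned cpl, bool is_soft)
{
    switch (type) {
    case 5:            /* task gate */
    case 6: case 14:   /* 16/32-bit interrupt gate */
    case 7: case 15:   /* 16/32-bit trap gate */
        break;         /* valid descriptor type */
    default:
        return CHECK_GP;
    }
    /* software interrupts additionally require gate DPL >= CPL */
    if (is_soft && dpl < cpl) {
        return CHECK_GP;
    }
    /* task-gate dispatch happens only after both checks have passed */
    return CHECK_OK;
}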
The effect of this missing check was that a guest userspace binary
executing 'int 8' would cause a guest kernel panic rather than the
userspace binary being handed a SEGV.
This is essentially the same bug fixed in VirtualBox in 2012:
https://www.halfdog.net/Security/2012/VirtualBoxSoftwareInterrupt0x8GuestCrash/
Note that for QEMU this is not a security issue because it is only
present when using TCG.
Fixes: https://bugs.launchpad.net/qemu/+bug/1813201
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20201121224445.16236-1-peter.maydell@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
-----BEGIN PGP SIGNATURE-----
iQJGBAABCAAwFiEENUvIs9frKmtoZ05fOHC0AOuRhlMFAl/dynUSHGFybWJydUBy
ZWRoYXQuY29tAAoJEDhwtADrkYZT3igP/3bWwsKR5vKVsDUTmMfrhcgaFvQiaYoG
F29Bond8Xy0Zd0gl7OWh/5jKL0vGlrEVPrKfYLUjMnfkeRec/pOkIB2oOmIxpnPs
9zi4kh2hQ3dEoRBuvSnnZzedetYPTuCpWMIjlztkgfxgcimqm8TPNVSxRaSApjC3
Y8108wGwBWVf2C0rhKO9E2xA51uo6khy05i1psUtqUlC+PuDQ/OwzQHM2dnWdDB6
kUwBDK17nhL6WwsYqCyKLSiDModReYfDiY8GS5MDLo74dzwXiatEefCR7+sbM4xq
eX/SBoqoeS1jLPNuCryNeGNKvNA2KAbEJTnbQA2NxBXHgZ9/1SxVZFxuPp4nDMSQ
N7BDuDI8YtJE479RjT/ZzRG65xadGBSe/HXkXM9mZwh1zitop8SVZ9fArFBHvNzw
Y5zAv3fQd54+87psffg4dYFK0wGmqTabLEEuVzM8KIVqcAdYA2yC2b2EHy+vsxuq
GMkr0WaA6Sq2gthXmzdTjmUPuHdan/NIhuV6d66SbPNH2oH31piptFxuznyFWSKV
isciFFdUrkg5QrF8DSt2nmdwMFf8QGbszqP8QIGMzhJCCS9GXIiGG8f149++q8X8
HO1lFAdLQJdrDwCYmfx36tOvi2rS/rcoTGgvg66UX3xKko1ruoxR1ZWcS54obJN6
vEQDZ+PxubDg
=vGLy
-----END PGP SIGNATURE-----
Merge remote-tracking branch 'remotes/armbru/tags/pull-qapi-2020-12-19' into staging
QAPI patches patches for 2020-12-19
# gpg: Signature made Sat 19 Dec 2020 09:40:05 GMT
# gpg: using RSA key 354BC8B3D7EB2A6B68674E5F3870B400EB918653
# gpg: issuer "armbru@redhat.com"
# gpg: Good signature from "Markus Armbruster <armbru@redhat.com>" [full]
# gpg: aka "Markus Armbruster <armbru@pond.sub.org>" [full]
# Primary key fingerprint: 354B C8B3 D7EB 2A6B 6867 4E5F 3870 B400 EB91 8653
* remotes/armbru/tags/pull-qapi-2020-12-19: (33 commits)
qobject: Make QString immutable
block: Use GString instead of QString to build filenames
keyval: Use GString to accumulate value strings
json: Use GString instead of QString to accumulate strings
migration: Replace migration's JSON writer by the general one
qobject: Factor JSON writer out of qobject_to_json()
qobject: Factor quoted_str() out of to_json()
qobject: Drop qstring_get_try_str()
qobject: Drop qobject_get_try_str()
Revert "qobject: let object_property_get_str() use new API"
block: Avoid qobject_get_try_str()
qmp: Fix tracing of non-string command IDs
qobject: Move internals to qobject-internal.h
hw/rdma: Replace QList by GQueue
Revert "qstring: add qstring_free()"
qobject: Change qobject_to_json()'s value to GString
qobject: Use GString instead of QString to accumulate JSON
qobject: Make qobject_to_json_pretty() take a pretty argument
monitor: Use GString instead of QString for output buffer
hmp: Simplify how qmp_human_monitor_command() gets output
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Anywhere we create a list of just one item or by prepending items
(typically because order doesn't matter), we can use
QAPI_LIST_PREPEND(). But places where we must keep the list in order
by appending remain open-coded until later patches.
Note that as a side effect, this also performs a cleanup of two minor
issues in qga/commands-posix.c: the old code was performing
new = g_malloc0(sizeof(*ret));
which 1) is confusing because you have to verify whether 'new' and
'ret' are variables with the same type, and 2) would conflict with C++
compilation (not an actual problem for this file, but makes
copy-and-paste harder).
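For reference, the open-coded shape that a QAPI_LIST_PREPEND() call replaces looks roughly like this generic sketch (StrList and str_list_prepend are illustrative names, not the QAPI-generated types):
#include <glib.h>
typedef struct StrList {
    struct StrList *next;
    char *value;
} StrList;
/* Open-coded prepend: allocate a node and push it at the head of the list. */
static void str_list_prepend(StrList **list, char *value)
{
    StrList *entry = g_new0(StrList, 1);
    entry->value = value;
    entry->next = *list;
    *list = entry;
}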
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <20201113011340.463563-5-eblake@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
[Straightforward conflicts due to commit a8aa94b5f8 "qga: update
schema for guest-get-disks 'dependents' field" and commit a10b453a52
"target/mips: Move mips_cpu_add_definition() from helper.c to cpu.c"
resolved. Commit message tweaked.]
Signed-off-by: Markus Armbruster <armbru@redhat.com>
The current "#ifdef TARGET_X86_64" statement affects
the compiler's determination of fall through.
When using -Wimplicit-fallthrough in our CFLAGS, the compiler shows the following warning:
target/i386/translate.c: In function ‘gen_shiftd_rm_T1’:
target/i386/translate.c:1773:12: warning: this statement may fall through [-Wimplicit-fallthrough=]
if (is_right) {
^
target/i386/translate.c:1782:5: note: here
case MO_32:
^~~~
Reported-by: Euler Robot <euler.robot@huawei.com>
Signed-off-by: Chen Qun <kuhn.chenqun@huawei.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20201211152426.350966-6-thuth@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
make it a regular function.
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Claudio Fontana <cfontana@suse.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20201212155530.23098-9-cfontana@suse.de>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
to do this, we need to take code out of cpu.c and helper.c,
and also move some prototypes from cpu.h, for code that is
needed in tcg/xxx_helper.c, and which in turn is part of the
callbacks registered by the class initialization.
Therefore, do some shuffling of the parts of cpu.h that
are only relevant for tcg/, and put them in tcg/helper-tcg.h
For FT0 and similar macros, put them in tcg/fpu_helper.c
since they are used only there.
Signed-off-by: Claudio Fontana <cfontana@suse.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20201212155530.23098-8-cfontana@suse.de>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
AVX512 Half-precision floating point (FP16) has better performance
compared to FP32 if the precision or magnitude requirements are met.
It's defined as CPUID.(EAX=7,ECX=0):EDX[bit 23].
Refer to
https://software.intel.com/content/www/us/en/develop/download/\
intel-architecture-instruction-set-extensions-programming-reference.html
Signed-off-by: Cathy Zhang <cathy.zhang@intel.com>
Message-Id: <20201216224002.32677-1-cathy.zhang@intel.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
As a preparation to expanding Hyper-V CPU features early, move
hyperv_limits initialization to x86_cpu_realizefn().
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20201119103221.1665171-5-vkuznets@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
As a preparation to expanding Hyper-V CPU features early, move
hyperv_version_id initialization to x86_cpu_realizefn().
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20201119103221.1665171-4-vkuznets@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
As a preparation to expanding Hyper-V CPU features early, move
hyperv_interface_id initialization to x86_cpu_realizefn().
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20201119103221.1665171-3-vkuznets@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
As a preparation to expanding Hyper-V CPU features early, move
hyperv_vendor_id initialization to x86_cpu_realizefn(). Introduce
x86_cpu_hyperv_realize() to not pollute x86_cpu_realizefn()
itself.
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20201119103221.1665171-2-vkuznets@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Class properties make QOM introspection simpler and easier, as
they don't require an object to be instantiated.
Also, the hundreds of instance properties were having an impact
on QMP commands that create temporary CPU objects. On my
machine, run time of qmp_query_cpu_definitions() changed
from ~200ms to ~16ms after applying this patch.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <20201111183823.283752-5-ehabkost@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Three trace events had a literal "0x" prefix but the 'len' values were
formatted in decimal. Keep the prefix and format the numbers as
hex, as is usually the case for lengths of memory regions.
Signed-off-by: Dov Murik <dovmurik@linux.vnet.ibm.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20201103100745.160409-1-dovmurik@linux.vnet.ibm.com>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
The current implementation will disable the guest Intel PT feature
if the Intel PT LIP feature is supported on the host, but the LIP
feature is coming soon (e.g. SnowRidge and later).
This patch makes the guest LIP feature configurable, so that the Intel
PT feature can be enabled in the guest when the guest LIP status matches
the host's.
Signed-off-by: Luwei Kang <luwei.kang@intel.com>
Message-Id: <20201202101042.11967-1-luwei.kang@intel.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
AMD SEV allows a guest owner to inject a secret blob
into the memory of a virtual machine. The secret is
encrypted with the SEV Transport Encryption Key and
integrity is guaranteed with the Transport Integrity
Key. Although QEMU facilitates the injection of the
launch secret, it cannot access the secret.
Signed-off-by: Tobin Feldman-Fitzthum <tobin@linux.ibm.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Brijesh Singh <brijesh.singh@amd.com>
Message-Id: <20201027170303.47550-1-tobin@linux.ibm.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
This patch adds support for the kernel-irqchip option for
WHPX with the 'on' or 'off' value. The 'split' value is not supported
for the option. The option only works on the latest versions
of Windows (the ones coming out on Insiders). The
change maintains backward compatibility on older versions of
Windows where this option is not supported.
Signed-off-by: Sunil Muthuswamy <sunilmut@microsoft.com>
Message-Id: <SN4PR2101MB0880B13258DA9251F8459F4DC0170@SN4PR2101MB0880.namprd21.prod.outlook.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Per the SDM, when returning to an outer privilege level, for segment
registers (ES, FS, GS, and DS), if the check fails the segment
selector becomes null, but QEMU clears the base/limit/flags as well
as nullifying the segment selector, which should be a spec violation.
Real hardware seems to be compliant with the spec, at least on one
Coffee Lake board I tested.
Signed-off-by: Bin Meng <bin.meng@windriver.com>
Message-Id: <1605261378-77971-1-git-send-email-bmeng.cn@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The IOAPIC has an 'Extended Destination ID' field in its RTE, which maps
to bits 11-4 of the MSI address. Since those address bits fall within a
given 4KiB page they were historically non-trivial to use on real hardware.
The Intel IOMMU uses the lowest bit to indicate a remappable format MSI,
and then the remaining 7 bits are part of the index.
Where the remappable format bit isn't set, we can actually use the other
seven to allow external (IOAPIC and MSI) interrupts to reach up to 32768
CPUs instead of just the 255 permitted on bare metal.
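A sketch of the resulting decode (illustrative helper, not the actual QEMU or KVM code): with the remappable-format bit clear, address bits 11-5 supply seven extra destination ID bits on top of the classic eight in bits 19-12, giving 15 bits in total:
#include <stdint.h>
/* Bit 4 of the MSI address is the Intel IOMMU "remappable format" indicator;
 * when it is clear, bits 11-5 extend the classic destination ID. */
static uint32_t msi_ext_dest_id(uint64_t msi_addr)
{
    uint32_t dest = (msi_addr >> 12) & 0xff;    /* classic destination ID */
    if (!(msi_addr & (1u << 4))) {              /* not remappable format */
        dest |= ((msi_addr >> 5) & 0x7f) << 8;  /* 7 extra bits -> 15 total */
    }
    return dest;
}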
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Message-Id: <78097f9218300e63e751e077a0a5ca029b56ba46.camel@infradead.org>
[Fix UBSAN warning. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
For PDEP and PEXT, the mask is provided in the memory (mod+r/m)
operand, and therefore is loaded in s->T0 by gen_ldst_modrm.
The source is provided in the second source operand (VEX.vvvv)
and therefore is loaded in s->T1. Fix the order in which
they are passed to the helpers.
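To see why the operand order matters, here is a plain C model of PDEP (the mask selects the destination bit positions, the source supplies the bit values); it is an illustration, not the TCG helper itself:
#include <stdint.h>
/* Scatter the low bits of 'src' into the positions selected by 'mask',
 * lowest mask bit first. */
static uint64_t pdep64(uint64_t src, uint64_t mask)
{
    uint64_t result = 0;
    for (uint64_t bit = 1; mask; bit <<= 1) {
        uint64_t lowest = mask & -mask;    /* lowest set bit of the mask */
        if (src & bit) {
            result |= lowest;
        }
        mask &= mask - 1;                  /* clear that mask bit */
    }
    return result;
}
For example, pdep64(0x3, 0xf0) yields 0x30 while pdep64(0xf0, 0x3) yields 0, so swapping the two operands produces visibly wrong results.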
Reported-by: Lenard Szolnoki <blog@lenardszolnoki.com>
Analyzed-by: Lenard Szolnoki <blog@lenardszolnoki.com>
Fixes: https://bugs.launchpad.net/qemu/+bug/1605123
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The Requested Privilege Level field is 2 bits, the Table Indicator field
is 1 bit and the Index field is the remaining 13 bits, with TI=0 meaning
GDT and TI=1 meaning LDT.
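A minimal illustration of that layout (field extraction only, hypothetical function name):
#include <stdint.h>
/* sel[1:0] = RPL, sel[2] = TI (0 = GDT, 1 = LDT), sel[15:3] = 13-bit index. */
static void decode_selector(uint16_t sel, unsigned *rpl, unsigned *ti,
                            unsigned *index)
{
    *rpl   = sel & 0x3;
    *ti    = (sel >> 2) & 0x1;
    *index = sel >> 3;
}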
Signed-off-by: Jessica Clarke <jrtc27@jrtc27.com>
Message-Id: <20201116200414.28286-1-jrtc27@jrtc27.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
g_strdup_printf is used twice to write to the same variable, which
can theoretically cause a leak. In practice, it is extremely
unlikely that a guest is seeing a recursive MCE and has disabled
CR4.MCE between the first and the second error, but we can fix it
and we can also make a slight improvement on the logic: CR4.MCE=0
causes a triple fault even for a non-recursive machine check, so
let's place its test first.
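The leak pattern being fixed is the usual 'reassign a g_strdup_printf() result without freeing the previous one'; a generic sketch of the problematic shape (not the actual MCE-handling code):
#include <glib.h>
static void report_mce(gboolean recursive, int bank)
{
    char *msg = g_strdup_printf("CPU received a machine check on bank %d", bank);
    if (recursive) {
        /* BUG: the string allocated above leaks when msg is reassigned */
        msg = g_strdup_printf("recursive machine check on bank %d", bank);
    }
    g_free(msg);    /* only the last allocation is ever freed */
}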
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Currently, the nested state format is hardcoded to VMX. This will result
in kvm_put_nested_state() returning an error because the KVM SVM support
checks for the nested state to be KVM_STATE_NESTED_FORMAT_SVM. As a
result, kvm_arch_put_registers() errors out early.
Update the setting of the format based on the virtualization feature:
VMX - KVM_STATE_NESTED_FORMAT_VMX
SVM - KVM_STATE_NESTED_FORMAT_SVM
Also, fix the code formatting while at it.
Fixes: b16c0e20c7 ("KVM: add support for AMD nested live migration")
Cc: Eduardo Habkost <ehabkost@redhat.com>
Cc: Richard Henderson <richard.henderson@linaro.org>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Message-Id: <fe53d00fe0d884e812960781284cd48ae9206acc.1605546140.git.thomas.lendacky@amd.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
There is no "version 2" of the "Lesser" General Public License.
It is either "GPL version 2.0" or "Lesser GPL version 2.1".
This patch replaces all occurrences of "Lesser GPL version 2" with
"Lesser GPL version 2.1" in the comment sections.
Signed-off-by: Chetan Pant <chetan4windows@gmail.com>
Message-Id: <20201023124012.20035-1-chetan4windows@gmail.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
There is no "version 2" of the "Lesser" General Public License.
It is either "GPL version 2.0" or "Lesser GPL version 2.1".
This patch replaces all occurrences of "Lesser GPL version 2" with
"Lesser GPL version 2.1" in the comment sections.
Signed-off-by: Chetan Pant <chetan4windows@gmail.com>
Message-Id: <20201023122801.19514-1-chetan4windows@gmail.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
mon_get_cpu_env() is indirectly called from monitor_parse_arguments(), where
the current monitor isn't set yet. Instead of using monitor_cur_env(),
explicitly pass the Monitor pointer to the function.
Without this fix, an HMP command like "x $pc" crashes like this:
#0 0x0000555555caa01f in mon_get_cpu_sync (mon=0x0, synchronize=true) at ../monitor/misc.c:270
#1 0x0000555555caa141 in mon_get_cpu (mon=0x0) at ../monitor/misc.c:294
#2 0x0000555555caa158 in mon_get_cpu_env () at ../monitor/misc.c:299
#3 0x0000555555b19739 in monitor_get_pc (mon=0x555556ad2de0, md=0x5555565d2d40 <monitor_defs+1152>, val=0) at ../target/i386/monitor.c:607
#4 0x0000555555cadbec in get_monitor_def (mon=0x555556ad2de0, pval=0x7fffffffc208, name=0x7fffffffc220 "pc") at ../monitor/misc.c:1681
#5 0x000055555582ec4f in expr_unary (mon=0x555556ad2de0) at ../monitor/hmp.c:387
#6 0x000055555582edbb in expr_prod (mon=0x555556ad2de0) at ../monitor/hmp.c:421
#7 0x000055555582ee79 in expr_logic (mon=0x555556ad2de0) at ../monitor/hmp.c:455
#8 0x000055555582eefe in expr_sum (mon=0x555556ad2de0) at ../monitor/hmp.c:484
#9 0x000055555582efe8 in get_expr (mon=0x555556ad2de0, pval=0x7fffffffc418, pp=0x7fffffffc408) at ../monitor/hmp.c:511
#10 0x000055555582fcd4 in monitor_parse_arguments (mon=0x555556ad2de0, endp=0x7fffffffc890, cmd=0x555556675b50 <hmp_cmds+7920>) at ../monitor/hmp.c:876
#11 0x00005555558306a8 in handle_hmp_command (mon=0x555556ad2de0, cmdline=0x555556ada452 "$pc") at ../monitor/hmp.c:1087
#12 0x000055555582df14 in monitor_command_cb (opaque=0x555556ad2de0, cmdline=0x555556ada450 "x $pc", readline_opaque=0x0) at ../monitor/hmp.c:47
After this fix, nothing is left in monitor_parse_arguments() that can
indirectly call monitor_cur(), so the fix is complete.
Fixes: ff04108a0e
Reported-by: lichun <lichun@ruijie.com.cn>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-Id: <20201113114326.97663-4-kwolf@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
All of these callbacks use mon_get_cpu_env(). Pass the Monitor
pointer to them in preparation for adding a monitor argument to
mon_get_cpu_env().
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-Id: <20201113114326.97663-3-kwolf@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
mon_get_cpu() is indirectly called from monitor_parse_arguments(), where
the current monitor isn't set yet. Instead of using monitor_cur(),
explicitly pass the Monitor pointer to the function.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-Id: <20201113114326.97663-2-kwolf@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
during my split of cpus.c, code line
"current_cpu = cpu"
was removed by mistake, causing hax to break.
This commit fixes the situation by restoring it.
Reported-by: Volker Rümelin <vr_qemu@t-online.de>
Fixes: e92558e4bf
Signed-off-by: Claudio Fontana <cfontana@suse.de>
Message-Id: <20201016080032.13914-1-cfontana@suse.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Icelake-Client CPU models will be removed in the future.
Signed-off-by: Robert Hoo <robert.hu@linux.intel.com>
Message-Id: <1600758855-80046-2-git-send-email-robert.hu@linux.intel.com>
[ehabkost: reword deprecation note, fix version in doc]
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Implement the ability to mark some versions as deprecated. When
that CPU model is chosen, print a warning. The warning message
can be customized, e.g. suggesting an alternative CPU model to be
used instead.
The deprecation message will be printed by x86_cpu_list_entry(),
e.g. '-cpu help'.
QMP command 'query-cpu-definitions' will return a bool value
indicating the deprecation status.
Signed-off-by: Robert Hoo <robert.hu@linux.intel.com>
Message-Id: <1600758855-80046-1-git-send-email-robert.hu@linux.intel.com>
[ehabkost: reword commit message]
[ehabkost: Handle NULL cpu_type]
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
As IRQ routing is always available on x86,
kvm_allows_irq0_override() will always return true, so we don't
need the function anymore.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20200922201922.2153598-4-ehabkost@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
KVM_CAP_IRQ_ROUTING is always available on x86, so replace checks
for kvm_has_gsi_routing() and KVM_CAP_IRQ_ROUTING with asserts.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20200922201922.2153598-3-ehabkost@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
KVM_CAP_IRQ_ROUTING is available since 2009 (Linux v2.6.30), so
it's safe to just make it a requirement on x86.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20200922201922.2153598-2-ehabkost@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
With x2apic enabled, configurations can have more than 255 cores.
The device-add test was noticed to hit an assert during CPU
hotplug with core_id > 255. This is due to an assert check in the
CPUID 0x8000001E handling.
Remove the assert check and fix the problem.
Fixes the bug:
Link: https://bugzilla.redhat.com/show_bug.cgi?id=1834200
Signed-off-by: Babu Moger <babu.moger@amd.com>
Message-Id: <160072824160.9666.8890355282135970684.stgit@naples-babu.amd.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Fix typo to use correct edx value for FEATURE_HYPERV_EDX when
hyperv_passthrough is enabled.
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Message-Id: <20190820103030.12515-1-zhenyuw@linux.intel.com>
Fixes: e48ddcc6ce ("i386/kvm: implement 'hv-passthrough' mode")
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
We only use x86_cpu_get_supported_feature_word() after its implementation,
so no forward declaration is needed.
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20200904145431.196885-3-vkuznets@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Class properties make QOM introspection simpler and easier, as
they don't require an object to be instantiated.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Message-Id: <20200921221045.699690-14-ehabkost@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Missed in commit 41fba1618b "docs/system: convert the documentation of
deprecated features to rST."
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20200929075824.1517969-3-armbru@redhat.com>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
kvm: uses the generic handler
qtest: uses the generic handler
whpx: changed to use the generic handler (identical implementation)
hax: changed to use the generic handler (identical implementation)
hvf: changed to use the generic handler (identical implementation)
tcg: adapt tcg-cpus to point to the tcg-specific handler
Signed-off-by: Claudio Fontana <cfontana@suse.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
register a "CpusAccel" interface for HVF as well.
Signed-off-by: Claudio Fontana <cfontana@suse.de>
Reviewed-by: Roman Bolshakov <r.bolshakov@yadro.com>
[added const]
Signed-off-by: Claudio Fontana <cfontana@suse.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
register a "CpusAccel" interface for WHPX as well.
Signed-off-by: Claudio Fontana <cfontana@suse.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
register a "CpusAccel" interface for HAX as well.
Signed-off-by: Claudio Fontana <cfontana@suse.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Post a memory failure event through QMP to handle hardware memory corruption
events. Rather than simply printing to the log, QEMU can report a more
useful message to the client. For example, if a guest receives an MCE,
evacuating the host could be a good idea.
Signed-off-by: zhenwei pi <pizhenwei@bytedance.com>
Message-Id: <20200930100440.1060708-4-pizhenwei@bytedance.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Previously we would only get a plain "Triple fault" string in the QEMU
log. Add a detailed message for the two reasons, describing why QEMU
has to reset the guest.
Signed-off-by: zhenwei pi <pizhenwei@bytedance.com>
Message-Id: <20200930100440.1060708-2-pizhenwei@bytedance.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paravirtualized features have been listed in KVM_GET_SUPPORTED_CPUID since
Linux 2.6.35 (commit 84478c829d0f, "KVM: x86: export paravirtual cpuid flags
in KVM_GET_SUPPORTED_CPUID", 2010-05-19). It has been more than 10 years,
so remove the fallback code.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
QEMU's kvmclock device is only created when KVM PV feature bits for
kvmclock (KVM_FEATURE_CLOCKSOURCE/KVM_FEATURE_CLOCKSOURCE2) are
exposed to the guest. With 'kvm=off' cpu flag the device is not
created and we don't call KVM_GET_CLOCK/KVM_SET_CLOCK upon migration.
It was reported that without these calls at least the Hyper-V TSC page
clocksource (which can be enabled independently) gets broken after
migration.
Switch to creating the kvmclock QEMU device unconditionally; it seems
to always make sense to call KVM_GET_CLOCK/KVM_SET_CLOCK on migration.
Use a KVM_CAP_ADJUST_CLOCK check instead of CPUID feature bits.
Reported-by: Antoine Damhet <antoine.damhet@blade-group.com>
Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20200922151934.899555-1-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
VM with interrupt based APF enabled fails to migrate:
qemu-system-x86_64: error: failed to set MSR 0x4b564d02 to 0xf3
We have two issues:
1) There is a typo in kvm_put_msrs() and we write async_pf_int_msr
to MSR_KVM_ASYNC_PF_EN (instead of MSR_KVM_ASYNC_PF_INT)
2) We restore MSR_KVM_ASYNC_PF_EN before MSR_KVM_ASYNC_PF_INT is set
and this violates the check in KVM.
Re-order MSR_KVM_ASYNC_PF_EN/MSR_KVM_ASYNC_PF_INT setting (and
kvm_get_msrs() for consistency) and fix the typo.
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20200917102316.814804-1-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The exception_is_int flag may be set on entry to helper_syscall,
e.g. after a prior interrupt that has returned, and processing
EXCP_SYSCALL as an interrupt causes it to fail, so clear this flag.
Signed-off-by: Douglas Crosher <dtc-ubuntu@scieneer.com>
Message-Id: <a7dab33e-eda6-f988-52e9-f3d32db7538d@scieneer.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Per Intel SDM vol 1, 13.2, if CPUID.1:ECX.XSAVE[bit 26] is 0, the
processor provides no further enumeration through CPUID function 0DH.
QEMU does not do this for "-cpu host,-xsave".
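The expected behaviour can be summarised with a small sketch (hypothetical helper, not cpu_x86_cpuid() itself): when XSAVE is absent from the guest CPUID, leaf 0xD must report all zeroes:
#include <stdbool.h>
#include <stdint.h>
/* Force CPUID leaf 0xD to zero when the XSAVE feature bit is clear. */
static bool cpuid_0d_masked(bool xsave_enabled,
                            uint32_t *eax, uint32_t *ebx,
                            uint32_t *ecx, uint32_t *edx)
{
    if (xsave_enabled) {
        return false;   /* normal sub-leaf enumeration proceeds elsewhere */
    }
    *eax = *ebx = *ecx = *edx = 0;
    return true;
}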
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Message-Id: <20200716082019.215316-2-xiaoyao.li@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Linux 5.8 introduced an interrupt-based mechanism for 'page ready' event
delivery and disabled the old, #PF-based one (see commit 2635b5c4a0e4
"KVM: x86: interrupt based APF 'page ready' event delivery"). The Linux
guest switches to using it in 5.9 (see commit b1d405751cd5 "KVM: x86:
Switch KVM guest to using interrupts for page ready APF delivery").
The feature has a new KVM_FEATURE_ASYNC_PF_INT bit assigned and
the interrupt vector is set in MSR_KVM_ASYNC_PF_INT MSR. Support this
in QEMU.
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20200908141206.357450-1-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Newer versions of WHPX provide the capability to query the TSC
and APIC frequency. Expose these through the vmware cpuid leaf.
This patch doesn't support setting the TSC frequency; that will
come as a separate fix.
Signed-off-by: Sunil Muthuswamy <sunilmut@microsoft.com>
Message-Id: <SN4PR2101MB08808DFDDC3F442BBEAADFF4C0710@SN4PR2101MB0880.namprd21.prod.outlook.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
clang's C11 atomic_fetch_*() functions only take a C11 atomic type
pointer argument. QEMU uses direct types (int, etc) and this causes a
compiler error when QEMU code calls these functions in a source file
that also included <stdatomic.h> via a system header file:
$ CC=clang CXX=clang++ ./configure ... && make
../util/async.c:79:17: error: address argument to atomic operation must be a pointer to _Atomic type ('unsigned int *' invalid)
Avoid using atomic_*() names in QEMU's atomic.h since that namespace is
used by <stdatomic.h>. Prefix QEMU's APIs with 'q' so that atomic.h
and <stdatomic.h> can co-exist. I checked /usr/include on my machine and
searched GitHub for existing "qatomic_" users but there seem to be none.
This patch was generated using:
$ git grep -h -o '\<atomic\(64\)\?_[a-z0-9_]\+' include/qemu/atomic.h | \
sort -u >/tmp/changed_identifiers
$ for identifier in $(</tmp/changed_identifiers); do
sed -i "s%\<$identifier\>%q$identifier%g" \
$(git grep -I -l "\<$identifier\>")
done
I manually fixed line-wrap issues and misaligned rST tables.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20200923105646.47864-1-stefanha@redhat.com>
When debugging QEMU it is often useful to put a breakpoint on the
error_setg_internal method impl.
Unfortunately the object_property_add / object_class_property_add
methods call object_property_find / object_class_property_find methods
to check if a property exists already before adding the new property.
As a result there are a huge number of calls to error_setg_internal
on startup of most QEMU commands, making it very painful to set a
breakpoint on this method.
Most callers of object_property_find and object_class_property_find,
however, pass in a NULL for the Error parameter. This simplifies the
methods to remove the Error parameter entirely, and then adds some
new wrapper methods that are able to raise an Error when needed.
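The resulting shape is the familiar 'plain lookup plus error-raising wrapper' split; a generic sketch using a string out-parameter in place of QEMU's Error type (all names and signatures here are illustrative):
#include <glib.h>
typedef struct Property Property;
/* Plain lookup: returns NULL when not found and raises no error. */
static Property *prop_find(GHashTable *props, const char *name)
{
    return g_hash_table_lookup(props, name);
}
/* Error-raising wrapper for the few callers that want a diagnostic. */
static Property *prop_find_err(GHashTable *props, const char *name,
                               char **errp)
{
    Property *p = prop_find(props, name);
    if (!p && errp) {
        *errp = g_strdup_printf("Property '%s' not found", name);
    }
    return p;
}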
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20200914135617.1493072-1-berrange@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
One of the goals of having less boilerplate on QOM declarations
is to avoid human error. Requiring an extra argument that is
never used is an opportunity for mistakes.
Remove the unused argument from OBJECT_DECLARE_TYPE and
OBJECT_DECLARE_SIMPLE_TYPE.
Coccinelle patch used to convert all users of the macros:
@@
declarer name OBJECT_DECLARE_TYPE;
identifier InstanceType, ClassType, lowercase, UPPERCASE;
@@
OBJECT_DECLARE_TYPE(InstanceType, ClassType,
- lowercase,
UPPERCASE);
@@
declarer name OBJECT_DECLARE_SIMPLE_TYPE;
identifier InstanceType, lowercase, UPPERCASE;
@@
OBJECT_DECLARE_SIMPLE_TYPE(InstanceType,
- lowercase,
UPPERCASE);
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Acked-by: Cornelia Huck <cohuck@redhat.com>
Acked-by: Igor Mammedov <imammedo@redhat.com>
Acked-by: Paul Durrant <paul@xen.org>
Acked-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20200916182519.415636-4-ehabkost@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
apic_id contains all the information required to build
CPUID_8000_001E. core_id and node_id are already part of the
apic_id generated by x86_topo_ids_from_apicid.
Also remove the restriction on the number of bits for core_id and
node_id.
Remove all the hardcoded values and replace with generalized
fields.
Refer to the Processor Programming Reference (PPR) documentation
available from the bugzilla link below.
Signed-off-by: Babu Moger <babu.moger@amd.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Pankaj Gupta <pankaj.gupta.linux@gmail.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Link: https://bugzilla.kernel.org/show_bug.cgi?id=206537
Message-Id: <159897585257.30750.5815593918927986935.stgit@naples-babu.amd.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Remove all the hardcoded values and replace with generalized
fields.
Signed-off-by: Babu Moger <babu.moger@amd.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Pankaj Gupta <pankaj.gupta.linux@gmail.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Message-Id: <159897584649.30750.3939159632943292252.stgit@naples-babu.amd.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Hyper-V TLFS prior to version 6.0 had a mistake in it: the special value
'0xffffffff' for CPUID 0x40000004.EBX was called 'never to retry'. This
looked weird (why is it not '0', which supposedly has the same effect?)
but nobody raised the question. In TLFS version 6.0 the mistake was
corrected to 'never notify', which sounds logical. Fix QEMU accordingly.
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20200515114847.74523-1-vkuznets@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Let's make this file compilable with -Werror=implicit-fallthrough:
looking at the code, the fallthrough seems to be intended here,
so we should add the corresponding "/* fallthrough */" comment.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20200911125301.413081-1-thuth@redhat.com>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
In order to use inclusive terminology, rename host_tsx_blacklisted()
as host_tsx_broken().
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20200910070131.435543-7-philmd@redhat.com>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Some typedefs and macros are defined after the type check macros.
This makes it difficult to automatically replace their
definitions with OBJECT_DECLARE_TYPE.
Patch generated using:
$ ./scripts/codeconverter/converter.py -i \
--pattern=QOMStructTypedefSplit $(git grep -l '' -- '*.[ch]')
which will split "typedef struct { ... } TypedefName"
declarations.
Followed by:
$ ./scripts/codeconverter/converter.py -i --pattern=MoveSymbols \
$(git grep -l '' -- '*.[ch]')
which will:
- move the typedefs and #defines above the type check macros
- add missing #include "qom/object.h" lines if necessary
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Message-Id: <20200831210740.126168-9-ehabkost@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Message-Id: <20200831210740.126168-10-ehabkost@redhat.com>
Message-Id: <20200831210740.126168-11-ehabkost@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
This reverts commit c24a41bb53.
Remove the EPYC specific apicid decoding and use the generic
default decoding.
Signed-off-by: Babu Moger <babu.moger@amd.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Message-Id: <159889937478.21294.4192291354416942986.stgit@naples-babu.amd.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
This reverts commit dd08ef0318.
Remove the EPYC specific apicid decoding and use the generic
default decoding.
Signed-off-by: Babu Moger <babu.moger@amd.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Message-Id: <159889936257.21294.1786224705357428082.stgit@naples-babu.amd.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
This reverts commit 0c1538cb1a.
Remove the EPYC specific apicid decoding and use the generic
default decoding.
Signed-off-by: Babu Moger <babu.moger@amd.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Message-Id: <159889935015.21294.1425332462852607813.stgit@naples-babu.amd.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
This reverts commit 247b18c593.
Remove the EPYC specific apicid decoding and use the generic
default decoding.
Signed-off-by: Babu Moger <babu.moger@amd.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Message-Id: <159889933756.21294.13999336052652073520.stgit@naples-babu.amd.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
This reverts commit 7b225762c8.
Remove the EPYC specific apicid decoding and use the generic
default decoding.
Also fix all the references of pkg_offset.
Signed-off-by: Babu Moger <babu.moger@amd.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Message-Id: <159889933119.21294.8112825730577505757.stgit@naples-babu.amd.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Some QOM macros were using a X86_IOMMU_DEVICE prefix, and others
were using a X86_IOMMU prefix. Rename all of them to use the
same X86_IOMMU_DEVICE prefix.
This will make future conversion to OBJECT_DECLARE* easier.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <20200825192110.3528606-47-ehabkost@redhat.com>
Acked-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Move typedef closer to the type check macros, to make it easier
to convert the code to OBJECT_DEFINE_TYPE() in the future.
Reviewed-by: Roman Bolshakov <r.bolshakov@yadro.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Tested-By: Roman Bolshakov <r.bolshakov@yadro.com>
Message-Id: <20200825192110.3528606-20-ehabkost@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Similar to hw_arch, each architecture defines two sourceset which are placed in
dictionaries target_arch and target_softmmu_arch. These are then picked up
from there when building the per-emulator static_library.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Meson doesn't enjoy the same flexibility we have with Make in choosing
the include path. In particular the tracing headers are using
$(build_root)/$(<D).
In order to keep the include directives unchanged,
the simplest solution is to generate headers with patterns like
"trace/trace-audio.h" and place forwarding headers in the source tree
such that for example "audio/trace.h" includes "trace/trace-audio.h".
This patch is too ugly to be applied to the Makefiles now. It's only
a way to separate the changes to the tracing header files from the
Meson rewrite of the tracing logic.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Quoting ISO C99 6.7.8p4, "All the expressions in an initializer for an
object that has static storage duration shall be constant expressions or
string literals".
The compound literal produced by the make_floatx80() macro is not such a
constant expression, per 6.6p7-9. (An implementation may accept it,
according to 6.6p10, but is not required to.)
Therefore using "floatx80_zero" and make_floatx80() for initializing
"f2xm1_table" and "fpatan_table" is not portable. And gcc-4.8 in RHEL-7.6
actually chokes on them:
> target/i386/fpu_helper.c:871:5: error: initializer element is not constant
> { make_floatx80(0xbfff, 0x8000000000000000ULL),
> ^
We've had the make_floatx80_init() macro for this purpose since commit
3bf7e40ab9 ("softfloat: fix for C99", 2012-03-17), so let's use that
macro again.
Fixes: eca30647fc ("target/i386: reimplement f2xm1 using floatx80 operations")
Fixes: ff57bb7b63 ("target/i386: reimplement fpatan using floatx80 operations")
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Cc: Aurelien Jarno <aurelien@aurel32.net>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Cc: Joseph Myers <joseph@codesourcery.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Peter Maydell <peter.maydell@linaro.org>
Link: https://lists.gnu.org/archive/html/qemu-devel/2017-08/msg06566.html
Link: https://lists.gnu.org/archive/html/qemu-devel/2020-07/msg04714.html
Message-Id: <20200716144251.23004-1-lersek@redhat.com>
Message-Id: <20200724064509.331-8-alex.bennee@linaro.org>
We forgot to update cc_op before these branch insns,
which led to losing track of the current eflags.
Buglink: https://bugs.launchpad.net/qemu/+bug/1888165
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20200720154028.477457-1-richard.henderson@linaro.org>
HF2_GIF_MASK is set in env->hflags2 unconditionally on CPU reset
(see x86_cpu_reset()) but when calling KVM_SET_NESTED_STATE,
KVM_STATE_NESTED_GIF_SET is only valid for nSVM as e.g. nVMX code
looks like
if (kvm_state->hdr.vmx.vmxon_pa == -1ull) {
        if (kvm_state->flags & ~KVM_STATE_NESTED_EVMCS)
                return -EINVAL;
}
Also, when adjusting the environment after KVM_GET_NESTED_STATE we
need not reset HF2_GIF_MASK on VMX as e.g. x86_cpu_pending_interrupt()
expects it to be set.
Alternatively, we could've made env->hflags2 SVM-only.
Reported-by: Jan Kiszka <jan.kiszka@siemens.com>
Fixes: b16c0e20c7 ("KVM: add support for AMD nested live migration")
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20200723142701.2521161-1-vkuznets@redhat.com>
Tested-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Add the missing vmx features in Skylake-Server and Cascadelake-Server
CPU models based on the output of Paolo's script.
Signed-off-by: Chenyi Qiang <chenyi.qiang@intel.com>
Message-Id: <20200714084148.26690-4-chenyi.qiang@intel.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Add the missing features(sha_ni, avx512ifma, rdpid, fsrm,
vmx-rdseed-exit, vmx-pml, vmx-eptp-switching) and change the model
number to 106 in the Icelake-Server-v4 CPU model.
Signed-off-by: Chenyi Qiang <chenyi.qiang@intel.com>
Message-Id: <20200714084148.26690-3-chenyi.qiang@intel.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
For CPUs that support fast short REP MOV [CPUID.(EAX=7,ECX=0):EDX (bit 4)],
e.g. Icelake and Tigerlake, expose it to the guest VM.
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Chenyi Qiang <chenyi.qiang@intel.com>
Message-Id: <20200714084148.26690-2-chenyi.qiang@intel.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Features that are unavailable due to the absence of their dependent features
should not be added to env->user_features. env->user_features only contains
the features explicitly specified with -feature/+feature by the user.
Fixes: 99e24dbdaa ("target/i386: introduce generic feature dependency mechanism")
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Message-Id: <20200713174436.41070-3-xiaoyao.li@intel.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Features defined in a versioned CPU model are recorded in env->user_features
since they are updated as properties. This is unwanted because they are not
user specified.
Simply clear env->user_features as a fix. It won't clear user-specified
features because user-specified features are filled into
env->user_features later in x86_cpu_expand_features().
Cc: Chenyi Qiang <chenyi.qiang@intel.com>
Suggested-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Message-Id: <20200713174436.41070-2-xiaoyao.li@intel.com>
[ehabkost: fix coding style]
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Blindly setting FD_CLOEXEC without a read-modify-write will
inadvertently clear any other intentionally-set bits, such as a
proposed new bit for designating a fd that must behave in 32-bit mode.
Use our wrapper function instead of an incorrect hand-rolled version.
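For illustration, the read-modify-write pattern with plain POSIX fcntl() looks
roughly like this (a sketch, not QEMU's qemu_set_cloexec() wrapper; the
function name is invented for the example):
    #include <fcntl.h>

    /* Preserve any other descriptor flags while setting FD_CLOEXEC,
     * instead of clobbering them with a bare F_SETFD. */
    static int set_cloexec_preserving_flags(int fd)
    {
        int flags = fcntl(fd, F_GETFD);
        if (flags < 0) {
            return -1;
        }
        return fcntl(fd, F_SETFD, flags | FD_CLOEXEC);
    }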
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <20200420175309.75894-2-eblake@redhat.com>
Reviewed-by: Colin Xu <colin.xu@intel.com>
This is helpful when debugging stuck guest timers.
As we need apic_get_current_count for that, and it is really not
emulation specific, move it to apic_common.c and export it. Fix up its
style while at it.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <e00e2896-ca5b-a929-de7a-8e5762f0c1c2@siemens.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This instruction aims to give a way to choose which memory accesses
do not need to be tracked in the TSX read set; the feature is enumerated
by CPUID.(EAX=7,ECX=0):EDX[bit 16].
The release spec link is as follows:
https://software.intel.com/content/dam/develop/public/us/en/documents/\
architecture-instruction-set-extensions-programming-reference.pdf
The associated kvm patch link is as follows:
https://lore.kernel.org/patchwork/patch/1268026/
Signed-off-by: Cathy Zhang <cathy.zhang@intel.com>
Message-Id: <1593991036-12183-3-git-send-email-cathy.zhang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The availability of the SERIALIZE instruction is indicated
by the presence of the CPUID feature flag SERIALIZE, which is
defined as CPUID.(EAX=7,ECX=0):EDX[bit 14].
The release spec link is as follows:
https://software.intel.com/content/dam/develop/public/us/en/documents/\
architecture-instruction-set-extensions-programming-reference.pdf
The associated kvm patch link is as follows:
https://lore.kernel.org/patchwork/patch/1268025/
Signed-off-by: Cathy Zhang <cathy.zhang@intel.com>
Message-Id: <1593991036-12183-2-git-send-email-cathy.zhang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The CPUID level needs to be set to 0x14 manually on old
machine types if Intel PT is enabled in the guest. E.g.
CPUID[0].EAX(level)=7 and CPUID[7].EBX[25](intel-pt)=1 when
QEMU is run with the "-machine pc-i440fx-3.1 -cpu qemu64,+intel-pt" parameters.
This patch corrects the warning message of the previous
submission (ddc2fc9).
Signed-off-by: Luwei Kang <luwei.kang@intel.com>
Message-Id: <1593499113-4768-1-git-send-email-luwei.kang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
In some cases, such as if the kvm-amd "sev" module parameter is set
to 0, SEV will be unavailable but query-sev-capabilities will still
return all the information. This tricks libvirt into erroneously
reporting that SEV is available. Check the actual usability of the
feature and return the appropriate error if QEMU cannot use KVM
or KVM cannot use SEV.
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The query-sev-capabilities command was reporting errors through error_report;
change it to use Error** so that the cause of the failure is clearer.
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Currently, QEMU is overriding KVM_GET_SUPPORTED_CPUID's answer for
the WAITPKG bit depending on the "-overcommit cpu-pm" setting. This is a
bad idea because it does not even check if the host supports it, but it
can be done in x86_cpu_realizefn just like we do for the MONITOR bit.
This patch moves it there, while making it conditional on host
support for the related UMWAIT MSR.
Cc: qemu-stable@nongnu.org
Reported-by: Maxim Levitsky <mlevitsk@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
In 32-bit mode, the higher 16 bits of the destination
register are undefined. In practice CR0[31:0] is stored,
just like in 64-bit mode, so just remove the "if" that
currently differentiates the behavior.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reported-by: Roman Bolshakov <r.bolshakov@yadro.com>
Reviewed-by: Roman Bolshakov <r.bolshakov@yadro.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Force the end of a translation block after an I/O instruction in
icount mode. For consistency, all CF_USE_ICOUNT code is kept in
disas_insn instead of having it in gen_ins and gen_outs.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Make them more concise and consistent with the rest of the code in the
file and drop a non-relevant TODO.
Cc: Cameron Esfahani <dirty@apple.com>
Signed-off-by: Roman Bolshakov <r.bolshakov@yadro.com>
Message-Id: <20200630102824.77604-9-r.bolshakov@yadro.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
hvf_reset_vcpu() duplicates actions performed by x86_cpu_reset(). The
difference is that hvf_reset_vcpu() stores initial values directly to
VMCS while x86_cpu_reset() stores them in CPUX86State and then
cpu_synchronize_all_post_init() or cpu_synchronize_all_post_reset()
flushes CPUX86State into VMCS. That makes hvf_reset_vcpu() a kind of
no-op.
Here's the trace of CPU state modifications during VM start:
  hvf_reset_vcpu (resets VMCS)
  cpu_synchronize_all_post_init (overwrites VMCS fields written by
                                 hvf_reset_vcpu())
  cpu_synchronize_all_states
  hvf_reset_vcpu (resets VMCS)
  cpu_synchronize_all_post_reset (overwrites VMCS fields written by
                                  hvf_reset_vcpu())
General purpose registers, system registers, segment descriptors, flags
and IP are set by hvf_put_segments() in post-init and post-reset,
therefore it's safe to remove them from hvf_reset_vcpu().
PDPTE initialization can be dropped because Intel SDM (26.3.1.6 Checks
on Guest Page-Directory-Pointer-Table Entries) doesn't require PDPTE to
be clear unless PAE is used: "A VM entry to a guest that does not use
PAE paging does not check the validity of any PDPTEs."
And if PAE is used, PDPTE's are initialized from CR3 in macvm_set_cr0().
Cc: Cameron Esfahani <dirty@apple.com>
Signed-off-by: Roman Bolshakov <r.bolshakov@yadro.com>
Message-Id: <20200630102824.77604-8-r.bolshakov@yadro.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The only useful purpose of hvf_reset_vcpu() is to clear "IA-32e mode
guest" (LMA) VM-Entry control. But it can be moved to macvm_set_cr0()
which is indirectly used by post-init and post-reset to flush emulator
state. That enables clean removal of hvf_reset_vcpu().
LMA is set only if IA32_EFER.LME = 1, according to Intel SDM "9.8.5
Initializing IA-32e Mode" and "9.8.5.4 Switching Out of IA-32e Mode
Operation", otherwise the entry control can be safely cleared.
Cc: Cameron Esfahani <dirty@apple.com>
Signed-off-by: Roman Bolshakov <r.bolshakov@yadro.com>
Message-Id: <20200630102824.77604-7-r.bolshakov@yadro.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Intel SDM "9.8.5 Initializing IA-32e Mode" and "9.8.5.4 Switching Out of
IA-32e Mode Operation" define activation and deactivation of long mode
only upon a change of CR0.PG but current code invokes exit_long_mode()
unconditionally until LME is cleared.
Signed-off-by: Cameron Esfahani <dirty@apple.com>
Signed-off-by: Roman Bolshakov <r.bolshakov@yadro.com>
Message-Id: <20200630102824.77604-6-r.bolshakov@yadro.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
hvf lacks an implementation of cpu_synchronize_pre_loadvm().
Cc: Cameron Esfahani <dirty@apple.com>
Signed-off-by: Roman Bolshakov <r.bolshakov@yadro.com>
Message-Id: <20200630102824.77604-4-r.bolshakov@yadro.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
cpu_synchronize_state() is currently a no-op for hvf, but the BIOS will hang
in the vAPIC option ROM when cpu_synchronize_state() is wired to
hvf_cpu_synchronize_state().
cpu_synchronize_state() is called from vapic_write() during option
ROM initialization. It sets the dirty flag on the cpu. macvm_set_rip() is
then invoked to advance IP after the I/O write to the vAPIC port.
macvm_set_rip() only modifies the VMCS; it doesn't change env->eip.
Therefore, on the next iteration of the vCPU loop, the vcpu_dirty flag is
checked and hvf_put_registers() overwrites the correct RIP in the VMCS with
the value of env->eip, which points to the I/O write instruction. Execution
of the CPU gets stuck on that instruction.
The issue can be avoided if eip doesn't contain a stale value when the dirty
flag is set on the cpu.
Cc: Cameron Esfahani <dirty@apple.com>
Signed-off-by: Roman Bolshakov <r.bolshakov@yadro.com>
Message-Id: <20200630102824.77604-2-r.bolshakov@yadro.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Support for nested guest live migration is part of Linux 5.8; add the
corresponding code to QEMU. The migration format consists of a few
flags plus an opaque 4k blob.
The blob is in VMCB format (the control area represents the L1 VMCB
control fields, the save area represents the pre-vmentry state; KVM does
not use the host save area since the AMD manual allows that) but QEMU
does not really care about that. However, the flags need to be
copied to hflags/hflags2 and back.
In addition, support for retrieving and setting the AMD nested virtualization
states allows the L1 guest to be reset while running a nested guest, but
a small bug in CPU reset needs to be fixed for that to work.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The SSE instruction implementations all fail to raise the expected
IEEE floating-point exceptions because they do nothing to convert the
exception state from the softfloat machinery into the exception flags
in MXCSR.
Fix this by adding such conversions. Unlike for x87, emulated SSE
floating-point operations might be optimized using hardware floating
point on the host, and so a different approach is taken that is
compatible with such optimizations. The required invariant is that
all exceptions set in env->sse_status (other than "denormal operand",
for which the SSE semantics are different from those in the softfloat
code) are ones that are set in the MXCSR; the emulated MXCSR is
updated lazily when code reads MXCSR, while when code sets MXCSR, the
exceptions in env->sse_status are set accordingly.
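A schematic sketch of that invariant follows; the flag names, helpers and
types here are stand-ins for this description only, not the softfloat API or
the actual QEMU code (the architectural DE bit is deliberately left out, as
explained above):
    #include <stdint.h>

    /* Stand-in softfloat exception flags (illustrative names). */
    enum {
        SF_INVALID   = 1 << 0,
        SF_DIVBYZERO = 1 << 1,
        SF_OVERFLOW  = 1 << 2,
        SF_UNDERFLOW = 1 << 3,
        SF_INEXACT   = 1 << 4,
    };

    /* Architectural MXCSR exception bits. */
    #define MXCSR_IE (1u << 0)
    #define MXCSR_ZE (1u << 2)
    #define MXCSR_OE (1u << 3)
    #define MXCSR_UE (1u << 4)
    #define MXCSR_PE (1u << 5)

    /* On MXCSR reads: lazily fold the accumulated softfloat flags in. */
    static uint32_t mxcsr_on_read(uint32_t mxcsr, int sf_flags)
    {
        if (sf_flags & SF_INVALID)   mxcsr |= MXCSR_IE;
        if (sf_flags & SF_DIVBYZERO) mxcsr |= MXCSR_ZE;
        if (sf_flags & SF_OVERFLOW)  mxcsr |= MXCSR_OE;
        if (sf_flags & SF_UNDERFLOW) mxcsr |= MXCSR_UE;
        if (sf_flags & SF_INEXACT)   mxcsr |= MXCSR_PE;
        return mxcsr;
    }

    /* On MXCSR writes: reseed the softfloat flags so the invariant
     * "softfloat flags are a subset of MXCSR flags" keeps holding. */
    static int sf_flags_on_write(uint32_t mxcsr)
    {
        int sf_flags = 0;
        if (mxcsr & MXCSR_IE) sf_flags |= SF_INVALID;
        if (mxcsr & MXCSR_ZE) sf_flags |= SF_DIVBYZERO;
        if (mxcsr & MXCSR_OE) sf_flags |= SF_OVERFLOW;
        if (mxcsr & MXCSR_UE) sf_flags |= SF_UNDERFLOW;
        if (mxcsr & MXCSR_PE) sf_flags |= SF_INEXACT;
        return sf_flags;
    }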
A few instructions do not raise all the exceptions that would be
raised by the softfloat code, and those instructions are made to save
and restore the softfloat exception state accordingly.
Nothing is done about "denormal operand"; setting that (only for the
case when input denormals are *not* flushed to zero, the opposite of
the logic in the softfloat code for such an exception) will require
custom code for relevant instructions, or else architecture-specific
conditionals in the softfloat code for when to set such an exception
together with custom code for various SSE conversion and rounding
instructions that do not set that exception.
Nothing is done about trapping exceptions (for which there is minimal
and largely broken support in QEMU's emulation in the x87 case and no
support at all in the SSE case).
Signed-off-by: Joseph Myers <joseph@codesourcery.com>
Message-Id: <alpine.DEB.2.21.2006252358000.3832@digraph.polyomino.org.uk>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The code to set floating-point state when MXCSR changes calls
set_flush_to_zero on &env->fp_status, so affecting the x87
floating-point state rather than the SSE state. Fix to call it for
&env->sse_status instead.
Signed-off-by: Joseph Myers <joseph@codesourcery.com>
Message-Id: <alpine.DEB.2.21.2006252357170.3832@digraph.polyomino.org.uk>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
During a page table walk with TCG+SVM, get_hphys() in
target/i386/excp_helper.c uses the guest's cr4 register instead of the
hypervisor's to check for the PSE bit. In our test case, the guest has not
(yet) enabled the PSE bit, so the page table walk results in a wrong host
physical address resolution and wrong content being read by the guest.
The attached patch is against 4.2.1, but also works on 3.1.0. It fixes the
issue for our automated test case, which is a 32-bit hypervisor without PAE
support running a guest VM with TCG+SVM.
The test worked up to qemu 2.12 and started to fail with qemu 3.0 and later.
The commit that added TCG/SVM NPT support seems to have introduced the
regression.
In case someone wants to try to reproduce it, the iso is at [0], the good
case is [1] and the failing case is [2]. The used command line is:
qemu-system-i386 -no-kvm -nographic -cpu phenom -m 512 -machine q35 -cdrom seoul-vmm-test.iso
[0] https://depot.genode.org/alex-ab/images/seoul-vmm-test.iso
[1] https://depot.genode.org/alex-ab/images/seoul-vmm-good.txt
[2] https://depot.genode.org/alex-ab/images/seoul-vmm-bad.txt
Signed-off-by: Alexander Boettcher <alexander.boettcher@genode-labs.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
When all we do with an Error we receive into a local variable is
propagating to somewhere else, we can just as well receive it there
right away. The previous two commits did that for sufficiently simple
cases with Coccinelle. Do it for several more manually.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20200707160613.848843-37-armbru@redhat.com>
When all we do with an Error we receive into a local variable is
propagating to somewhere else, we can just as well receive it there
right away. Convert
    if (!foo(..., &err)) {
        ...
        error_propagate(errp, err);
        ...
        return ...
    }
to
    if (!foo(..., errp)) {
        ...
        ...
        return ...
    }
where nothing else needs @err. Coccinelle script:
@rule1 forall@
identifier fun, err, errp, lbl;
expression list args, args2;
binary operator op;
constant c1, c2;
symbol false;
@@
if (
(
- fun(args, &err, args2)
+ fun(args, errp, args2)
|
- !fun(args, &err, args2)
+ !fun(args, errp, args2)
|
- fun(args, &err, args2) op c1
+ fun(args, errp, args2) op c1
)
)
{
... when != err
when != lbl:
when strict
- error_propagate(errp, err);
... when != err
(
return;
|
return c2;
|
return false;
)
}
@rule2 forall@
identifier fun, err, errp, lbl;
expression list args, args2;
expression var;
binary operator op;
constant c1, c2;
symbol false;
@@
- var = fun(args, &err, args2);
+ var = fun(args, errp, args2);
... when != err
if (
(
var
|
!var
|
var op c1
)
)
{
... when != err
when != lbl:
when strict
- error_propagate(errp, err);
... when != err
(
return;
|
return c2;
|
return false;
|
return var;
)
}
@depends on rule1 || rule2@
identifier err;
@@
- Error *err = NULL;
... when != err
Not exactly elegant, I'm afraid.
The "when != lbl:" is necessary to avoid transforming
    if (fun(args, &err)) {
        goto out
    }
    ...
out:
    error_propagate(errp, err);
even though other paths to label out still need the error_propagate().
For an actual example, see sclp_realize().
Without the "when strict", Coccinelle transforms vfio_msix_setup(),
incorrectly. I don't know what exactly "when strict" does, only that
it helps here.
The match of return is narrower than what I want, but I can't figure
out how to express "return where the operand doesn't use @err". For
an example where it's too narrow, see vfio_intx_enable().
Silently fails to convert hw/arm/armsse.c, because Coccinelle gets
confused by ARMSSE being used both as typedef and function-like macro
there. Converted manually.
Line breaks tidied up manually. One nested declaration of @local_err
deleted manually. Preexisting unwanted blank line dropped in
hw/riscv/sifive_e.c.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20200707160613.848843-35-armbru@redhat.com>
The previous commit enables conversion of
    foo(..., &err);
    if (err) {
        ...
    }
to
    if (!foo(..., errp)) {
        ...
    }
for QOM functions that now return true / false on success / error.
Coccinelle script:
@@
identifier fun = {
object_apply_global_props, object_initialize_child_with_props,
object_initialize_child_with_propsv, object_property_get,
object_property_get_bool, object_property_parse, object_property_set,
object_property_set_bool, object_property_set_int,
object_property_set_link, object_property_set_qobject,
object_property_set_str, object_property_set_uint, object_set_props,
object_set_propv, user_creatable_add_dict,
user_creatable_complete, user_creatable_del
};
expression list args, args2;
typedef Error;
Error *err;
@@
- fun(args, &err, args2);
- if (err)
+ if (!fun(args, &err, args2))
{
...
}
Fails to convert hw/arm/armsse.c, because Coccinelle gets confused by
ARMSSE being used both as typedef and function-like macro there.
Convert manually.
Line breaks tidied up manually.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20200707160613.848843-29-armbru@redhat.com>
The object_property_set_FOO() setters take property name and value in
an unusual order:
    void object_property_set_FOO(Object *obj, FOO_TYPE value,
                                 const char *name, Error **errp)
Having to pass value before name feels grating. Swap them.
Same for object_property_set(), object_property_get(), and
object_property_parse().
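For illustration, a made-up call site changes from
    object_property_set_bool(OBJECT(dev), true, "realized", &error_fatal);
to
    object_property_set_bool(OBJECT(dev), "realized", true, &error_fatal);
(only the argument order is the point here; the surrounding code is hypothetical).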
Convert callers with this Coccinelle script:
@@
identifier fun = {
object_property_get, object_property_parse, object_property_set_str,
object_property_set_link, object_property_set_bool,
object_property_set_int, object_property_set_uint, object_property_set,
object_property_set_qobject
};
expression obj, v, name, errp;
@@
- fun(obj, v, name, errp)
+ fun(obj, name, v, errp)
Chokes on hw/arm/musicpal.c's lcd_refresh() with the unhelpful error
message "no position information". Convert that one manually.
Fails to convert hw/arm/armsse.c, because Coccinelle gets confused by
ARMSSE being used both as typedef and function-like macro there.
Convert manually.
Fails to convert hw/rx/rx-gdbsim.c, because Coccinelle gets confused
by RXCPU being used both as typedef and function-like macro there.
Convert manually. The other files using RXCPU that way don't need
conversion.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20200707160613.848843-27-armbru@redhat.com>
[Straightforward conflict with commit 2336172d9b "audio: set default
value for pcspk.iobase property" resolved]
The previous commit enables conversion of
    visit_foo(..., &err);
    if (err) {
        ...
    }
to
    if (!visit_foo(..., errp)) {
        ...
    }
for visitor functions that now return true / false on success / error.
Coccinelle script:
@@
identifier fun =~ "check_list|input_type_enum|lv_start_struct|lv_type_bool|lv_type_int64|lv_type_str|lv_type_uint64|output_type_enum|parse_type_bool|parse_type_int64|parse_type_null|parse_type_number|parse_type_size|parse_type_str|parse_type_uint64|print_type_bool|print_type_int64|print_type_null|print_type_number|print_type_size|print_type_str|print_type_uint64|qapi_clone_start_alternate|qapi_clone_start_list|qapi_clone_start_struct|qapi_clone_type_bool|qapi_clone_type_int64|qapi_clone_type_null|qapi_clone_type_number|qapi_clone_type_str|qapi_clone_type_uint64|qapi_dealloc_start_list|qapi_dealloc_start_struct|qapi_dealloc_type_anything|qapi_dealloc_type_bool|qapi_dealloc_type_int64|qapi_dealloc_type_null|qapi_dealloc_type_number|qapi_dealloc_type_str|qapi_dealloc_type_uint64|qobject_input_check_list|qobject_input_check_struct|qobject_input_start_alternate|qobject_input_start_list|qobject_input_start_struct|qobject_input_type_any|qobject_input_type_bool|qobject_input_type_bool_keyval|qobject_input_type_int64|qobject_input_type_int64_keyval|qobject_input_type_null|qobject_input_type_number|qobject_input_type_number_keyval|qobject_input_type_size_keyval|qobject_input_type_str|qobject_input_type_str_keyval|qobject_input_type_uint64|qobject_input_type_uint64_keyval|qobject_output_start_list|qobject_output_start_struct|qobject_output_type_any|qobject_output_type_bool|qobject_output_type_int64|qobject_output_type_null|qobject_output_type_number|qobject_output_type_str|qobject_output_type_uint64|start_list|visit_check_list|visit_check_struct|visit_start_alternate|visit_start_list|visit_start_struct|visit_type_.*";
expression list args;
typedef Error;
Error *err;
@@
- fun(args, &err);
- if (err)
+ if (!fun(args, &err))
{
...
}
A few line breaks tidied up manually.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20200707160613.848843-19-armbru@redhat.com>
AMD SEV will pin all guest memory, so mark discarding of RAM as broken. At
the time this is called, we cannot have anyone active that relies on discards
working properly; let's still implement error handling.
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20200626072248.78761-8-david@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
QEMU incorrectly validates FEAT_SVM feature flags against
GET_SUPPORTED_CPUID even if SVM features are being masked out by
cpu_x86_cpuid(). This can make QEMU print warnings on most AMD
CPU models, even when SVM nesting is disabled (which is the
default).
This bug was never detected before because of a Linux KVM bug:
until Linux v5.6, KVM was not filtering out SVM features in
GET_SUPPORTED_CPUID when nested was disabled. This KVM bug was
fixed in Linux v5.7-rc1, on Linux commit a50718cc3f43 ("KVM:
nSVM: Expose SVM features to L1 iff nested is enabled").
Fix the problem by adding a CPUID_EXT3_SVM dependency to all
FEAT_SVM feature flags in the feature_dependencies table.
Reported-by: Yanan Fu <yfu@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <20200623230116.277409-1-ehabkost@redhat.com>
[Fix testcase. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The Linux TSC calibration procedure is subject to small variations
(it's common to see a +-1 kHz difference between reboots on a given CPU, for example).
So migrating a guest between two hosts with identical processors can fail in case
of a small variation in the calibrated TSC between them.
Allow a conservative 250 ppm error between the host TSC and VM TSC frequencies,
rather than requiring an exact match. The NTP daemon in the guest can
correct this difference.
Also change migration to accept this bound.
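As a rough sketch of the bound (names invented, not the actual QEMU/KVM code),
250 ppm of the VM frequency works out as follows; at a nominal 3 GHz
(3,000,000 kHz) TSC that is 750 kHz:
    #include <stdbool.h>
    #include <stdlib.h>

    /* Illustrative only: accept the host TSC if it is within 250 ppm
     * of the frequency recorded for the VM. */
    static bool tsc_khz_within_tolerance(long host_khz, long vm_khz)
    {
        long tolerance_khz = vm_khz * 250 / 1000000;
        return labs(host_khz - vm_khz) <= tolerance_khz;
    }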
KVM_SET_TSC_KHZ depends on a kernel interface change. Without this change,
the behaviour remains the same: in case of a different frequency
between host and VM, KVM_SET_TSC_KHZ will fail and QEMU will exit.
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Message-Id: <20200616165805.GA324612@fuller.cnet>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Add which features are added or removed in this version.
Signed-off-by: Tao Xu <tao3.xu@intel.com>
Message-Id: <20200324051034.30541-1-tao3.xu@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The x87 fpatan emulation is currently based around conversion to
double. This is inherently unsuitable for a good emulation of any
floatx80 operation. Reimplement using the soft-float operations, as
for other such instructions.
Signed-off-by: Joseph Myers <joseph@codesourcery.com>
Message-Id: <alpine.DEB.2.21.2006230000340.24721@digraph.polyomino.org.uk>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The x87 fyl2x emulation is currently based around conversion to
double. This is inherently unsuitable for a good emulation of any
floatx80 operation. Reimplement using the soft-float operations,
building on top of the reimplementation of fyl2xp1 and factoring out
code to be shared between the two instructions.
The included test assumes that the result in round-to-nearest mode
should always be one of the two closest floating-point numbers to the
mathematically exact result (including that it should be exact, in the
exact cases which cover more cases than for fyl2xp1).
Signed-off-by: Joseph Myers <joseph@codesourcery.com>
Message-Id: <alpine.DEB.2.21.2006172321530.20587@digraph.polyomino.org.uk>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The x87 fyl2xp1 emulation is currently based around conversion to
double. This is inherently unsuitable for a good emulation of any
floatx80 operation, even before considering that it is a particularly
naive implementation using double (adding 1 then using log rather than
attempting a better emulation using log1p).
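As a stand-alone illustration of that pitfall (plain libm, not the new
soft-float code), compare adding 1 and taking log() against log1p() for a
tiny input:
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double x = 1e-18, y = 1.0;

        /* Naive: 1.0 + x rounds to exactly 1.0, so the result collapses to 0. */
        double naive  = y * (log(1.0 + x) / M_LN2);
        /* log1p() keeps the low-order bits of x. */
        double better = y * (log1p(x) / M_LN2);

        printf("naive=%.17g better=%.17g\n", naive, better);
        return 0;
    }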
Reimplement using the soft-float operations, as was done for f2xm1; as
in that case, m68k has related operations but not exactly this one and
it seemed safest to implement directly rather than reusing the m68k
code to avoid accumulation of errors.
A test is included with many randomly generated inputs. The
assumption of the test is that the result in round-to-nearest mode
should always be one of the two closest floating-point numbers to the
mathematical value of y * log2(x + 1); the implementation aims to do
somewhat better than that (about 70 correct bits before rounding). I
haven't investigated how accurate hardware is.
Intel manuals describe a narrower range of valid arguments to this
instruction than AMD manuals. The implementation accepts the wider
range (it's needed anyway for the core code to be reusable in a
subsequent patch reimplementing fyl2x), but the test only has inputs
in the narrower range so that it's valid on hardware that may reject
or produce poor results for inputs outside that range.
Code in the previous implementation that sets C2 for some out-of-range
arguments is not carried forward to the new implementation; C2 is
undefined for this instruction and I suspect that code was just
cut-and-pasted from the trigonometric instructions (fcos, fptan, fsin,
fsincos) where C2 *is* defined to be set for out-of-range arguments.
Signed-off-by: Joseph Myers <joseph@codesourcery.com>
Message-Id: <alpine.DEB.2.21.2006172320190.20587@digraph.polyomino.org.uk>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The x87 fprem and fprem1 emulation is currently based around
conversion to double, which is inherently unsuitable for a good
emulation of any floatx80 operation. Reimplement using the soft-float
floatx80 remainder operations.
Signed-off-by: Joseph Myers <joseph@codesourcery.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <alpine.DEB.2.21.2006081657200.23637@digraph.polyomino.org.uk>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The x87 f2xm1 emulation is currently based around conversion to
double. This is inherently unsuitable for a good emulation of any
floatx80 operation, even before considering that it is a particularly
naive implementation using double (computing with pow and then
subtracting 1 rather than attempting a better emulation using expm1).
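For a stand-alone illustration of that precision problem (plain libm, not the
new soft-float implementation), compare pow()-then-subtract with expm1() for
a tiny input, using 2^x - 1 = expm1(x * ln 2):
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double x = 1e-18;

        /* Naive: pow(2.0, x) rounds to exactly 1.0, so subtracting 1 gives 0. */
        double naive  = pow(2.0, x) - 1.0;
        /* expm1() preserves the small result, roughly x * ln 2 here. */
        double better = expm1(x * M_LN2);

        printf("naive=%.17g better=%.17g\n", naive, better);
        return 0;
    }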
Reimplement using the soft-float operations, including additions and
multiplications with higher precision where appropriate to limit
accumulation of errors. I considered reusing some of the m68k code
for transcendental operations, but the instructions don't generally
correspond exactly to x87 operations (for example, m68k has 2^x and
e^x - 1, but not 2^x - 1); to avoid possible accumulation of errors
from applying multiple such operations each rounding to floatx80
precision, I wrote a direct implementation of 2^x - 1 instead. It
would be possible in principle to make the implementation more
efficient by doing the intermediate operations directly with
significands, signs and exponents and not packing / unpacking floatx80
format for each operation, but that would make it significantly more
complicated and it's not clear that's worthwhile; the m68k emulation
doesn't try to do that.
A test is included with many randomly generated inputs. The
assumption of the test is that the result in round-to-nearest mode
should always be one of the two closest floating-point numbers to the
mathematical value of 2^x - 1; the implementation aims to do somewhat
better than that (about 70 correct bits before rounding). I haven't
investigated how accurate hardware is.
Signed-off-by: Joseph Myers <joseph@codesourcery.com>
Message-Id: <alpine.DEB.2.21.2006112341010.18393@digraph.polyomino.org.uk>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
All remaining conversions to qdev_realize() are for bus-less devices.
Coccinelle script:
// only correct for bus-less @dev!
@@
expression errp;
expression dev;
@@
- qdev_init_nofail(dev);
+ qdev_realize(dev, NULL, &error_fatal);
@ depends on !(file in "hw/core/qdev.c") && !(file in "hw/core/bus.c")@
expression errp;
expression dev;
symbol true;
@@
- object_property_set_bool(OBJECT(dev), true, "realized", errp);
+ qdev_realize(DEVICE(dev), NULL, errp);
@ depends on !(file in "hw/core/qdev.c") && !(file in "hw/core/bus.c")@
expression errp;
expression dev;
symbol true;
@@
- object_property_set_bool(dev, true, "realized", errp);
+ qdev_realize(DEVICE(dev), NULL, errp);
Note that Coccinelle chokes on ARMSSE typedef vs. macro in
hw/arm/armsse.c. Worked around by temporarily renaming the macro for
the spatch run.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20200610053247.1583243-57-armbru@redhat.com>
The last real change to this file is from 2012, so it is very likely
that this file is completely out-of-date and ignored today. Let's
simply remove it to avoid confusion if someone finds it by accident.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20200611172445.5177-1-thuth@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
SEVState is contained with SevGuestState. We've now fixed redundancies
and name conflicts, so there's no real point to the nested structure. Just
move all the fields of SEVState into SevGuestState.
This eliminates the SEVState structure, which as a bonus removes the
confusion with the SevState enum.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20200604064219.436242-10-david@gibson.dropbear.id.au>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The user can explicitly specify a handle via the "handle" property wired
to SevGuestState::handle. That gets passed to the KVM_SEV_LAUNCH_START
ioctl() which may update it, the final value being copied back to both
SevGuestState::handle and SEVState::handle.
AFAICT, nothing will be looking at SEVState::handle before it and
SevGuestState::handle have been updated from the ioctl(). So, remove the
field and just use SevGuestState::handle directly.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20200604064219.436242-9-david@gibson.dropbear.id.au>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
SEVState::policy is set from the final value of the policy field in the
parameter structure for the KVM_SEV_LAUNCH_START ioctl(). But, AFAICT
that ioctl() won't ever change it from the original supplied value which
comes from SevGuestState::policy.
So, remove this field and just use SevGuestState::policy directly.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20200604064219.436242-8-david@gibson.dropbear.id.au>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The SEVState structure has cbitpos and reduced_phys_bits fields which are
simply copied from the SevGuestState structure and never changed. Now that
SEVState is embedded in SevGuestState we can just access the original copy
directly.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20200604064219.436242-7-david@gibson.dropbear.id.au>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The SEV code uses a pretty ugly global to access its internal state. Now
that SEVState is embedded in SevGuestState, we can avoid accessing it via
the global in some cases. In the remaining cases use a new global
referencing the containing SevGuestState which will simplify some future
transformations.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20200604064219.436242-6-david@gibson.dropbear.id.au>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Currently SevGuestState contains only configuration information. For
runtime state another non-QOM struct SEVState is allocated separately.
Simplify things by instead embedding the SEVState structure in
SevGuestState.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20200604064219.436242-5-david@gibson.dropbear.id.au>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
At the moment this is a purely passive object which is just a container for
information used elsewhere, hence the name. I'm going to change that,
though, so as a preliminary step rename it to SevGuestState.
That name risks confusion with both SEVState and SevState, but I'll be
working on that in following patches.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20200604064219.436242-4-david@gibson.dropbear.id.au>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Neither QSevGuestInfo nor SEVState (not to be confused with SevState) is
used anywhere outside target/i386/sev.c, so they might as well live in
there rather than in a (somewhat) exposed header.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20200604064219.436242-3-david@gibson.dropbear.id.au>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This structure is nothing but an empty wrapper around the parent class,
which by QOM conventions means we don't need it at all.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20200604064219.436242-2-david@gibson.dropbear.id.au>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Roman Bolshakov <r.bolshakov@yadro.com>
Message-Id: <20200528193758.51454-14-r.bolshakov@yadro.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
There's no similar field in CPUX86State, but it's needed for MMIO traps.
Signed-off-by: Roman Bolshakov <r.bolshakov@yadro.com>
Message-Id: <20200528193758.51454-13-r.bolshakov@yadro.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The lazy flags are still needed for the instruction decoder.
Signed-off-by: Roman Bolshakov <r.bolshakov@yadro.com>
Message-Id: <20200528193758.51454-12-r.bolshakov@yadro.com>
[Move struct to target/i386/cpu.h - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
HVFX86EmulatorState carries its own copy of the x86 registers. It can be
dropped in favor of regs in the generic CPUX86State.
Signed-off-by: Roman Bolshakov <r.bolshakov@yadro.com>
Message-Id: <20200528193758.51454-11-r.bolshakov@yadro.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Use the ones provided in target/i386/cpu.h instead.
Signed-off-by: Roman Bolshakov <r.bolshakov@yadro.com>
Message-Id: <20200528193758.51454-10-r.bolshakov@yadro.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
HVFX86EmulatorState carries its own copy of the x86 flags. It can be
dropped in favor of eflags in the generic CPUX86State.
Signed-off-by: Roman Bolshakov <r.bolshakov@yadro.com>
Message-Id: <20200528193758.51454-9-r.bolshakov@yadro.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The field is used to print the address of instructions that have no parser
in decode_invalid(). RIP from the VMCS is saved into fetch_rip before
decoding starts, but it's also saved into env->eip in load_regs().
Therefore env->eip can be used instead of fetch_rip.
While at it, correct the address printed in decode_invalid(); previously it
printed an address before the unknown instruction.
Signed-off-by: Roman Bolshakov <r.bolshakov@yadro.com>
Message-Id: <20200528193758.51454-8-r.bolshakov@yadro.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Drop and replace rip field from HVFX86EmulatorState in favor of eip from
common CPUX86State.
Signed-off-by: Roman Bolshakov <r.bolshakov@yadro.com>
Message-Id: <20200528193758.51454-7-r.bolshakov@yadro.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
There's no need to read the VMCS twice; the instruction length is already
available in ins_len.
Signed-off-by: Roman Bolshakov <r.bolshakov@yadro.com>
Message-Id: <20200528193758.51454-6-r.bolshakov@yadro.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Roman Bolshakov <r.bolshakov@yadro.com>
Message-Id: <20200528193758.51454-5-r.bolshakov@yadro.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
They're either declared elsewhere or have no use.
While at it, rename _hvf_cpu_synchronize_post_init() to
do_hvf_cpu_synchronize_post_init().
Signed-off-by: Roman Bolshakov <r.bolshakov@yadro.com>
Message-Id: <20200528193758.51454-3-r.bolshakov@yadro.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
"sysemu/hvf.h" is intended for inclusion in generic code. However it
also contains several hvf definitions and declarations, including
HVFState that are used only inside "hvf.c". "hvf-i386.h" would be more
appropriate place to define HVFState as it's only included by "hvf.c"
and "x86_task.c".
Signed-off-by: Roman Bolshakov <r.bolshakov@yadro.com>
Message-Id: <20200528193758.51454-2-r.bolshakov@yadro.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This corrects a bug introduced in my previous fix for SSE4.2 pcmpestri
/ pcmpestrm / pcmpistri / pcmpistrm substring search, commit
ae35eea7e4.
That commit fixed a bug that showed up in four GCC tests with one libc
implementation. The tests in question generate random inputs to the
intrinsics and compare results to a C implementation, but they only
test 1024 possible random inputs, and when the tests use the cases of
those instructions that work with word rather than byte inputs, it's
easy to have problematic cases that show up much less frequently than
that. Thus, testing with a different libc implementation, and so a
different random number generator, showed up a problem with the
previous patch.
When investigating the previous test failures, I found the description
of these instructions in the Intel manuals (starting from computing a
16x16 or 8x8 set of comparison results) confusing and hard to match up
with the more optimized implementation in QEMU, and referred to AMD
manuals which described the instructions in a different way. Those
AMD descriptions are very explicit that the whole of the string being
searched for must be found in the other operand, not running off the
end of that operand; they say "If the prototype and the SUT are equal
in length, the two strings must be identical for the comparison to be
TRUE.". However, that statement is incorrect.
In my previous commit message, I noted:
The operation in this case is a search for a string (argument d to
the helper) in another string (argument s to the helper); if a copy
of d at a particular position would run off the end of s, the
resulting output bit should be 0 whether or not the strings match in
the region where they overlap, but the QEMU implementation was
wrongly comparing only up to the point where s ends and counting it
as a match if an initial segment of d matched a terminal segment of
s. Here, "run off the end of s" means that some byte of d would
overlap some byte outside of s; thus, if d has zero length, it is
considered to match everywhere, including after the end of s.
The description "some byte of d would overlap some byte outside of s"
is accurate only when understood to refer to overlapping some byte
*within the 16-byte operand* but at or after the zero terminator; it
is valid to run over the end of s if the end of s is the end of the
16-byte operand. So the fix in the previous patch for the case of d
being empty was correct, but the other part of that patch was not
correct (as it never allowed partial matches even at the end of the
16-byte operand). Nor was the code before the previous patch correct
for the case of d nonempty, as it would always have allowed partial
matches at the end of s.
Fix with a partial revert of my previous change, combined with
inserting a check for the special case of s having maximum length to
determine where it is necessary to check for matches.
In the added test, test 1 is for the case of empty strings, which
failed before my 2017 patch, test 2 is for the bug introduced by my
2017 patch and test 3 deals with the case where a match of an initial
segment at the end of the string is not valid when the string ends
before the end of the 16-byte operand (that is, the case that would be
broken by a simple revert of the non-empty-string part of my 2017
patch).
Signed-off-by: Joseph Myers <joseph@codesourcery.com>
Message-Id: <alpine.DEB.2.21.2006121344290.9881@digraph.polyomino.org.uk>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Most x87 instruction implementations fail to raise the expected IEEE
floating-point exceptions because they do nothing to convert the
exception state from the softfloat machinery into the exception flags
in the x87 status word. There is special-case handling of division to
raise the divide-by-zero exception, but that handling is itself buggy:
it raises the exception in inappropriate cases (inf / 0 and nan / 0,
which should not raise any exceptions, and 0 / 0, which should raise
"invalid" instead).
Fix this by converting the floating-point exceptions raised during an
operation by the softfloat machinery into exceptions in the x87 status
word (passing through the existing fpu_set_exception function for
handling related to trapping exceptions). There are special cases
where some functions convert to integer internally but exceptions from
that conversion are not always correct exceptions for the instruction
to raise.
There might be scope for some simplification if the softfloat
exception state either could always be assumed to be in sync with the
state in the status word, or could always be ignored at the start of
each instruction and just set to 0 then; I haven't looked into that in
detail, and it might run into interactions with the various ways the
emulation does not yet handle trapping exceptions properly. I think
the approach taken here, of saving the softfloat state, setting
exceptions there to 0 and then merging the old exceptions back in
after carrying out the operation, is conservatively safe.
Signed-off-by: Joseph Myers <joseph@codesourcery.com>
Message-Id: <alpine.DEB.2.21.2005152120280.3469@digraph.polyomino.org.uk>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The Perfmon and Debug Capability MSR, IA32_PERF_CAPABILITIES, is a
feature-enumerating MSR. For now it only enumerates the full-width write
feature (via bit 13), which indicates that the processor supports the
IA32_A_PMCx interface for updating bits 32 and above of IA32_PMCx.
The existence of the IA32_PERF_CAPABILITIES MSR is enumerated by CPUID.1:ECX[15].
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Cc: Marcelo Tosatti <mtosatti@redhat.com>
Cc: qemu-devel@nongnu.org
Signed-off-by: Like Xu <like.xu@linux.intel.com>
Message-Id: <20200529074347.124619-5-like.xu@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>