The missing nibble made it more difficult to read.
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181008212205.17752-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181008212205.17752-3-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
At present we assert:
arm_el_is_aa64: Assertion `el >= 1 && el <= 3' failed.
The comment in arm_el_is_aa64 explains why asking about EL0 without
extra information is impossible. Add an extra argument to provide
it from the surrounding context.
Fixes: 0ab5953b00
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181008212205.17752-2-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Let's use the KVM_SET_DEVICE_ATTR ioctl to enable hardware
interpretation of AP instructions executed on the guest.
If the S390_FEAT_AP feature is switched on for the guest,
AP instructions must be interpreted by default; otherwise,
they will be intercepted.
This attribute setting may be overridden by a device. For example,
a device may want to provide AP instructions to the guest (i.e.,
S390_FEAT_AP turned on), but it may want to emulate them. In this
case, the AP instructions executed on the guest must be
intercepted; so when the device is realized, it must disable
interpretation.
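For illustration, a minimal sketch of how the attribute can be set through
the VM ioctl; the wrapper function name is hypothetical, while
KVM_S390_VM_CRYPTO, KVM_S390_VM_CRYPTO_ENABLE_APIE, kvm_vm_check_attr()
and kvm_vm_ioctl() are the existing KVM/QEMU interfaces:

    /* Hypothetical sketch, not the patch itself: enable interpretive
     * execution of AP instructions if the kernel exposes the attribute. */
    static void kvm_s390_enable_ap_interpretation(void)
    {
        struct kvm_device_attr attr = {
            .group = KVM_S390_VM_CRYPTO,
            .attr  = KVM_S390_VM_CRYPTO_ENABLE_APIE,
        };

        if (kvm_vm_check_attr(kvm_state, attr.group, attr.attr)) {
            kvm_vm_ioctl(kvm_state, KVM_SET_DEVICE_ATTR, &attr);
        }
    }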
Signed-off-by: Tony Krowiak <akrowiak@linux.ibm.com>
Tested-by: Pierre Morel <pmorel@linux.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Acked-by: Halil Pasic <pasic@linux.ibm.com>
Tested-by: Christian Borntraeger <borntraeger@de.ibm.com>
Message-Id: <20181010170309.12045-4-akrowiak@linux.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
A new CPU model feature and two new CPU model facilities are
introduced to support AP devices for a KVM guest.
CPU model features:
1. The S390_FEAT_AP CPU model feature indicates whether AP
instructions are available to the guest. This feature will
be enabled only if the AP instructions are available on the
linux host as determined by the availability of the
KVM_S390_VM_CRYPTO_ENABLE_APIE VM attribute which is exposed
by KVM only if the AP instructions are available on the
host.
This feature must be turned on from userspace to execute AP
instructions on the KVM guest. The QEMU command line to turn
this feature on looks something like this:
qemu-system-s390x ... -cpu xxx,ap=on ...
This feature will be supported for zEC12 and newer CPU models.
The feature will not be supported for older models because
there are few older systems on which to test and the older
crypto cards will be going out of service in the relatively
near future.
CPU model facilities:
1. The S390_FEAT_AP_QUERY_CONFIG_INFO feature indicates whether the
AP Query Configuration Information (QCI) facility is available
to the guest as determined by whether the facility is available
on the host. This feature will be exposed by KVM only if the
QCI facility is installed on the host.
2. The S390_FEAT_AP_FACILITY_TEST feature indicates whether the AP
Facility Test (APFT) facility is available to the guest as
determined by whether the facility is available on the host.
This feature will be exposed by KVM only if APFT is installed
on the host.
Signed-off-by: Tony Krowiak <akrowiak@linux.ibm.com>
Tested-by: Pierre Morel <pmorel@linux.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Halil Pasic <pasic@linux.ibm.com>
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Tested-by: Christian Borntraeger <borntraeger@de.ibm.com>
Message-Id: <20181010170309.12045-3-akrowiak@linux.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Debug macros that are disabled by default should be avoided (since the
code bit-rots quite easily). Thus turn these debug prints into proper
qemu_log_mask(CPU_LOG_xxx, ...) statements instead. The DPRINTF statements
in do_[ext|io|mchk]_interrupt can even be removed completely since we can
log the information in a central place, s390_cpu_do_interrupt, instead.
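As a rough illustration of the pattern (the real call sites and messages
differ), a conversion looks like this:

    /* before: compiled out unless a local DEBUG define is set */
    DPRINTF("%s: interrupt code 0x%x\n", __func__, code);
    /* after: always compiled in, printed when "-d int" enables CPU_LOG_INT */
    qemu_log_mask(CPU_LOG_INT, "%s: interrupt code 0x%x\n", __func__, code);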
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <1538751601-7433-1-git-send-email-thuth@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
linux-user should always enable AFP, otherwise our emulated binary
might crash once it tries to make use of additional floating-point
registers or instructions.
Cc: Peter Maydell <peter.maydell@linaro.org>
Cc: Alex Bennée <alex.bennee@linaro.org>
Fixes: db0504154e ("s390x/tcg: check for AFP-register, BFP and DFP data exceptions")
Reported-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Tested-by: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Updating the NS stack pointer via MSR to SP_NS should include
a check whether the new SP value is below the stack limit.
No other kinds of update to the various stack pointer and
limit registers via MSR should perform a check.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181002163556.10279-14-peter.maydell@linaro.org
Add the v8M stack checks for the VLDM/VSTM
(aka VPUSH/VPOP) instructions. This code is currently
unreachable because we haven't yet implemented M profile
floating point support, but since the change is simple,
we add it now because otherwise we're likely to forget to
do it later.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181002163556.10279-13-peter.maydell@linaro.org
Add v8M stack checks for the 16-bit Thumb push/pop
encodings: STMDB, STMFD, LDM, LDMIA, LDMFD.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181002163556.10279-12-peter.maydell@linaro.org
Add v8M stack checks for the instructions in the T32
"load/store single" encoding class: these are the
"immediate pre-indexed" and "immediate, post-indexed"
LDR and STR instructions.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181002163556.10279-11-peter.maydell@linaro.org
Add the v8M stack checks for:
* LDM (T2 encoding)
* STM (T2 encoding)
This includes the 32-bit encodings of the instructions listed
in v8M ARM ARM rule R_YVWT as
* LDM, LDMIA, LDMFD
* LDMDB, LDMEA
* POP (multiple registers)
* PUSH (multiple registers)
* STM, STMIA, STMEA
* STMDB, STMFD
We perform the stack limit check before doing any other part
of the load or store.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181002163556.10279-10-peter.maydell@linaro.org
Add the v8M stack checks for:
* LDRD (immediate)
* STRD (immediate)
Loads and stores are more complicated than ADD/SUB/MOV, because we
must ensure that memory accesses below the stack limit are not
performed, so we can't simply do the check when we actually update
SP.
For these instructions, if the stack limit check triggers
we must not:
* perform any memory access below the SP limit
* update PC, SP or the load/store base register
but it is IMPDEF whether we:
* perform any accesses above or equal to the SP limit
* update destination registers for loads
For QEMU we choose to always check the limit before doing any other
part of the load or store, so we won't update any registers or
perform any memory accesses.
It is UNKNOWN whether the limit check triggers for a load or store
where the initial SP value is below the limit and one of the stores
would be below the limit, but the writeback moves SP to above the
limit. For QEMU we choose to trigger the check in this situation.
Note that limit checks happen only for loads and stores which update
SP via writeback; they do not happen for loads and stores which
simply use SP as a base register.
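A hedged sketch of the translation-time shape of this check;
gen_helper_v8m_stackcheck corresponds to the helper added earlier in this
series, while the condition and variable names are only illustrative:

    /* Sketch: when SP is written back and stack checking is enabled for
     * this TB, validate the final SP value before emitting any load or
     * store, so nothing below the limit is accessed and no register is
     * updated if the check raises a STKOF UsageFault. */
    if (stackcheck && rn == 13 && writeback) {
        gen_helper_v8m_stackcheck(cpu_env, addr);
    }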
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181002163556.10279-9-peter.maydell@linaro.org
Check the v8M stack limits when pushing the frame for a
non-secure function call via BLXNS.
In order to be able to generate the exception we need to
promote raise_exception() from being local to op_helper.c
so we can call it from helper.c.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181002163556.10279-8-peter.maydell@linaro.org
Add checks for breaches of the v8M stack limit when the
stack pointer is decremented to push the exception frame
for exception entry.
Note that the exception-entry case is unique in that the
stack pointer is updated to be the limit value if the limit
is hit (per rule R_ZLZG).
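In pseudo-C the exception-entry behaviour amounts to roughly the following;
v7m_sp_limit() and the surrounding variables are illustrative rather than
the exact helpers in the patch:

    /* Sketch: a limit breach on exception entry clamps SP to the limit
     * itself (rule R_ZLZG) and marks the frame push as failed, so a STKOF
     * UsageFault is raised afterwards. */
    uint32_t limit = v7m_sp_limit(env);   /* illustrative accessor */
    frameptr = sp - framesize;
    if (frameptr < limit) {
        frameptr = limit;
        stacked_ok = false;               /* -> STKOF UsageFault */
    }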
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181002163556.10279-7-peter.maydell@linaro.org
Add some comments to the Thumb decoder indicating what bits
of the instruction have been decoded at various points in
the code.
This is not an exhaustive set of comments; we're gradually
adding comments as we work with particular bits of the code.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181002163556.10279-6-peter.maydell@linaro.org
Add code to insert calls to a helper function to do the stack
limit checking when we handle these forms of instruction
that write to SP:
* ADD (SP plus immediate)
* ADD (SP plus register)
* SUB (SP minus immediate)
* SUB (SP minus register)
* MOV (register)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181002163556.10279-5-peter.maydell@linaro.org
We're going to want v7m_using_psp() in op_helper.c in the
next patch, so move it from helper.c to internals.h.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181002163556.10279-4-peter.maydell@linaro.org
Define EXCP_STKOF, and arrange for it to cause us to take
a UsageFault with CFSR.STKOF set.
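A hedged sketch of what taking the new exception looks like; the macro and
field names follow target/arm conventions but should be treated as
assumptions here:

    case EXCP_STKOF:
        /* Sketch: record the stack-overflow cause and pend a UsageFault. */
        env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_STKOF_MASK;
        armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE,
                                env->v7m.secure);
        break;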
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181002163556.10279-3-peter.maydell@linaro.org
The Arm v8M architecture includes hardware stack limit checking.
When certain instructions update the stack pointer, if the new
value of SP is below the limit set in the associated limit register
then an exception is taken. Add a TB flag that tracks whether
the limit-checking code needs to be emitted.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20181002163556.10279-2-peter.maydell@linaro.org
There is quite a lot of code required to compute cpu_mmu_index,
or even put together the full TCGMemOpIdx. This can easily be
done at translation time.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181005175350.30752-16-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This implements the feature for softmmu, and moves the
main loop out of a macro and into a function.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181005175350.30752-15-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This fixes the endianness problem for softmmu, and moves
the main loop out of a macro and into an inlined function.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181005175350.30752-14-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This fixes the endianness problem for softmmu, and moves
the main loop out of a macro and into an inlined function.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181005175350.30752-13-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We can choose the endianness at translation time, rather than
re-computing it at execution time.
Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181005175350.30752-12-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We can choose the endianness at translation time, rather than
re-computing it at execution time.
Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181005175350.30752-11-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This fixes the endianness problem for softmmu, and moves the
main loop out of a macro and into an inlined function.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181005175350.30752-10-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Use the same *_tlb primitives as we use for ld1.
For linux-user, this hoists the set of helper_retaddr. For softmmu,
hoists the computation of the current mmu_idx outside the loop,
fixes the endianness problem, and moves the main loop out of a
macro and into an inlined function.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181005175350.30752-9-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Uses tlb_vaddr_to_host for correct operation with softmmu.
Optimize for accesses within a single page or pair of pages.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181005175350.30752-8-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The 16-byte load only uses 16 predicate bits. But while
reusing the other load infrastructure, we find other bits
that are set and trigger an assert. To avoid this and
retain the assert, zero-extend the predicate that we pass
to the LD1 helper.
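A minimal illustration of the zero-extension; the real code operates on the
TCG-level predicate value and the variable names here are made up:

    /* Sketch: keep only the 16 meaningful predicate bits before handing
     * the predicate to the LD1 helper, so its internal assert still holds. */
    TCGv_i32 t_pg = tcg_temp_new_i32();
    tcg_gen_andi_i32(t_pg, pg, 0xffff);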
Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181005175350.30752-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Use the existing helpers to determine (1) whether the fpu is enabled,
(2) whether sve state is enabled, and (3) the current sve vector length.
Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181005175350.30752-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
SVE vector length can change when changing EL, or when writing
to one of the ZCR_ELn registers.
For correctness, our implementation requires that predicate bits
that are inaccessible are never set. This means noticing length
changes and zeroing the appropriate register bits.
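A hedged sketch of the zeroing step; the zregs layout follows CPUARMState,
but the helper shape and the details are illustrative:

    /* Sketch: when the effective length shrinks to 'vq' quadwords, clear
     * the tail of every Z register; the P registers and the FFR get the
     * same treatment so inaccessible predicate bits can never be set. */
    for (i = 0; i < 32; i++) {
        memset(&env->vfp.zregs[i].d[2 * vq], 0, 16 * (ARM_MAX_VQ - vq));
    }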
Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181005175350.30752-5-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We are going to want to determine whether sve is enabled
for an EL other than the current one.
Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181005175350.30752-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Check for EL3 before testing CPTR_EL3.EZ. Return 0 when the exception
should be routed via AdvSIMDFPAccessTrap. Mirror the structure of
CheckSVEEnabled more closely.
Fixes: 5be5e8eda7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181005175350.30752-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Given that the only field defined for this new register may only
be 0, we don't actually need to change anything except the name.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181005175350.30752-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
A cut-and-paste error meant we were reading r4 from the v8M
callee-saves exception stack frame twice. This is harmless
since it just meant we did two memory accesses to the same
location, but it's unnecessary. Delete it.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181002150304.2287-1-peter.maydell@linaro.org
In v7m_exception_taken() we were incorrectly using a
"LR bit EXCRET.ES is 1" check when it should be 0
(compare the pseudocode ExceptionTaken() function).
This meant we didn't stack the callee-saved registers
when tailchaining from a NonSecure to a Secure exception.
Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181002145940.30931-1-peter.maydell@linaro.org
The parameter of kvm_arm_init_cpreg_list() is ARMCPU instead of
CPUState, so correct the note to make it match the code.
Signed-off-by: Dongjiu Geng <gengdongjiu@huawei.com>
Message-id: 1538069046-5757-1-git-send-email-gengdongjiu@huawei.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We can fit this nicely into fewer LOC, without harming readability.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20180927130303.12236-10-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Valid register pairs are 0/2, 1/3, 4/6, 5/7, 8/10, 9/11, 12/14, 13/15.
R1/R2 always selects the lower number, so the current checks are not
correct as e.g. 2/4 could be selected as a pair.
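Since the low register of a valid pair always has bit 1 clear, the corrected
check can be expressed as in this sketch; gen_program_exception() and
PGM_SPECIFICATION are the usual s390x translator interfaces, while the
surrounding form is illustrative:

    /* Sketch: registers 0, 1, 4, 5, 8, 9, 12 and 13 may start a pair,
     * i.e. bit 1 of the register number must be clear. */
    if (r1 & 2) {
        gen_program_exception(s, PGM_SPECIFICATION);
        return DISAS_NORETURN;
    }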
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20180927130303.12236-9-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Let's check this also at a central place.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20180927130303.12236-8-david@redhat.com>
Acked-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
With the annotated functions, we can now easily check this at a central
place.
DXC 1 is to be injected if an AFP register is used (for an HFP AND FPS
instruction) when AFP is disabled.
DXC 2 is to be injected if a BFP instruction is used when AFP is
disabled.
DXC 3 is to be injected if a DFP instruction is used when AFP is
disabled.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20180927130303.12236-7-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
These flags allow us to later detect whether a DATA program interrupt
is to be injected, and which DXC (1, 2 or 3) is to be used.
Interestingly, some FP support instructions are considered HFP
instructions (I assume simply because they were available very early).
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20180927130303.12236-6-david@redhat.com>
Acked-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Storing flags for instructions allows us to efficiently verify certain
properties at a central point. Examples might later include checking
whether AFP is disabled in CR0, whether we are in problem state, or
whether vector instructions are disabled in CR0.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20180927130303.12236-5-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
We exit the TB when changing the control registers, so just like PSW
bits, this should always be consistent for a TB.
Using the PSW bit semantic makes things a lot easier compared to
manually defining the spare, shifted bits.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20180927130303.12236-4-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
The DXC is to be stored in the low core, and only in the FPC in case AFP
is enabled in CR0. The stub is not required by the current code, but this way
we never run into problems.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20180927130303.12236-3-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Move it into TCG-only code and provide a stub. Turn it into noreturn.
As Richard noted, we currently don't log the psw.addr before restoring
the state; fix that by moving (duplicating) the qemu_log_mask call in the
tcg/kvm handlers.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20180927130303.12236-2-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Both LPSW and LPSWE should raise a specification exception when their
operand is not doubleword aligned.
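A hedged sketch of the check; whether it lives in the translator or in a
helper, and the exact call used to raise the exception, differ from this
simplified form:

    /* Sketch: a PSW operand must be doubleword aligned, otherwise a
     * specification exception is raised. */
    if (addr & 7) {
        s390_program_interrupt(env, PGM_SPECIFICATION, ILEN_AUTO, GETPC());
    }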
Signed-off-by: Pavel Zbitskiy <pavel.zbitskiy@gmail.com>
Message-Id: <20180902003322.3428-3-pavel.zbitskiy@gmail.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
As the kernel has no way of disallowing the start of a huge page
backed VM, we can migrate a running huge page backed VM to a host that
has no huge page KVM support.
Let's glue huge page support to the 3.1 machine, so we do not
migrate to a destination host that doesn't have QEMU huge page support
and can stop migration if KVM doesn't indicate support.
Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
Message-Id: <20180928093435.198573-1-frankja@linux.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
This patch fixes the checking of boundary crossing instructions.
In icount mode, only the first instruction of the block may cross
the page boundary, to keep the translation deterministic.
These conditions already existed, but compared the wrong variable.
Signed-off-by: Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
Message-Id: <20180920071702.22477.43980.stgit@pasha-VirtualBox>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
While at it, also rename the variable to indicate it is not used only in KVM.
Reviewed-by: Nikita Leshchenko <nikita.leshchenko@oracle.com>
Reviewed-by: Patrick Colp <patrick.colp@oracle.com>
Signed-off-by: Liran Alon <liran.alon@oracle.com>
Message-Id: <20180914003827.124570-2-liran.alon@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This flag will be used for KVM's nested VMX migration; the HF_GUEST_MASK name
is already used in KVM, so adopt it in QEMU as well.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Interrupt handling depends on various flags in env->hflags or env->hflags2,
and the exact details were not replicated exactly between x86_cpu_has_work
and x86_cpu_exec_interrupt. Create a new function that extracts the
highest-priority non-masked interrupt, and use it in both functions.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
And convert it to a bool to use an existing hole
in the struct.
Signed-off-by: Emilio G. Cota <cota@braap.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The AMD IOMMU does not (yet) support interrupt remapping. But
kvm_arch_fixup_msi_route assumes that all implementations do and crashes
when the AMD IOMMU is used in KVM mode.
Fixes: 8b5ed7dffa ("intel_iommu: add support for split irqchip")
Reported-by: Christopher Goldsworthy <christopher.goldsworthy@outlook.com>
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Message-Id: <48ae78d8-58ec-8813-8680-6f407ea46041@siemens.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
- mark instructions that affect active IRQ level;
- put call for gen_check_interrupts right after the instruction
translation; when FLIX is enabled it will need to appear before
other exits from the TB as well;
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Now that all logic for TB termination is extracted from rsr/wsr, their
return value is not used and may be dropped.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
- mark instructions that require TB termination via slot 0;
- put TB termination right after the instruction translation loop, if
termination w/o TB linking wasn't requested;
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Currently we only end the TB in icount mode, because accesses to CCOUNT or
writes to CCOMPARE are IO operations. Simplify the behaviour a bit and
end the TB unconditionally.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Opcode decoding with libisa takes care of the range of valid group SRs,
like CCOMPARE, IBREAKA, DBREAKA or DBREAKC. Turn range checks in wsr
implementations into assertions.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
- mark all instructions that exit TB and require dynamic search for the
next TB;
- put TB termination right after the instruction translation loop;
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
- mark quos/quou/rems/remu instructions;
- drop parameter 0 from the translate_quou and split translate_remu from
it;
- put test for division by zero exception right after the coprocessor
exception test;
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
- add XtensaOpcodeOps::coprocessor with bitmask of coprocessors used by
the instruction;
- replace coprocessor id parameter of gen_check_cpenable with the
bitmask of used coprocessors;
- collect coprocessor IDs used by an instruction in the disassembly
loop;
- put test for coprocessor disabled exception after the alloca test;
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
- mark retw and retw.n instructions;
- extract window underflow test from retw helper;
- put underflow exception check generation right after the overflow
check;
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
- add ps.callinc to the TB flags, that allows testing all instructions
for window overflow statically;
- drop gen_window_check* functions; replace them with get_window_check
that accepts bitmask of used registers;
- add XtensaOpcodeOps::test_overflow that returns bitmask of implicitly
used registers; use it for entry and call{,x}{4,8,12};
- drop window overflow test from the entry helper;
- drop parameter 0 from translate_[di]cache and use translate_nop for
d/i cache opcodes that don't need memory accessibility check;
- add bitmask XtensaOpcodeOps::windowed_register_op that marks opcode
arguments that refer to windowed registers;
- translate windowed_register_op mask to a mask of actually used
registers in the disassembly loop;
- add check for window overflow right after the check for debug
exception;
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
- mark break and break.n instructions;
- collect debug cause bits from parameter 0 of instructions marked for
debug exception;
- put debug exception check right after syscall check;
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
- mark privileged instructions;
- put single privileged instruction check after disassembly loop;
- translate_[di]cache: drop parameter 0, shift parameters one down;
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
- TB flags: add XTENSA_TBFLAG_CWOE that corresponds to the architectural
CWOE state;
- entry: move CWOE check from the helper to the test_ill_entry;
- retw: move CWOE check from the helper to the test_ill_retw;
- separate instruction disassembly loop and translation loop; save
disassembly results in local array;
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Marking all pages as always dirty has some really bad consequences for
the current NV2A emulation. The lesser of two evils for now is to leave
the pages as-is. Eventually this will be replaced by proper dirty page
tracking once HAXM supports it.
QEMU core will call this handler when a RAM block is removed (no
surprises there), but it does _not_ check to see if this handler is
non-NULL. Add a dummy handler for now.
If gpa is exactly the start address of one slot, the
old code fails to find the slot. Change to use a one-byte
range to find the slot.
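A hedged sketch of the corrected lookup; the structure and field names are
illustrative, not the HAXM code:

    /* Sketch: treat the lookup as a one-byte range [gpa, gpa], so a gpa
     * equal to a slot's start address is still considered inside it. */
    if (gpa >= slot->start_addr && gpa < slot->start_addr + slot->size) {
        return slot;
    }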
Change-Id: I169ec8f759bb211a5ea7c693c5d99f27576c2e93
Signed-off-by: Tao Wu <lepton@google.com>
The ARMv8 architecture defines that an AArch32 CPU starts
in SVC mode, unless EL2 is the highest available EL, in
which case it starts in Hyp mode. (In ARMv7 a CPU with EL2
but not EL3 was not a valid configuration, but we don't
specifically reject this if the user asks for one.)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20180823135047.16525-1-peter.maydell@linaro.org
Not only are the sve-related tb_flags fields unused when SVE is
disabled, but not all of the cpu registers are initialized properly
for computing them. This can corrupt other fields by ORing in -1,
which might result in QEMU crashing.
This bug was not present in 3.0, but this patch is cc'd to
stable because commit adf92eab90, where the bug was
introduced, was marked for stable.
Fixes: adf92eab90
Cc: qemu-stable@nongnu.org (3.0.1)
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Merge remote-tracking branch 'remotes/dgibson/tags/ppc-for-3.1-20180925' into staging
ppc patch queue 2018-09-25
Here are the accumulated ppc target patches for the last several
weeks. Highlights are:
* A number of 40p / PReP cleanups
* Preliminary irq rework on the pseries machine towards the new
XIVE interrupt controller
There are a few patches which make small changes to generic device and
arm code as prerequisites to the 40p interrupt routing cleanup. They
have acks from the relevant maintainers.
# gpg: Signature made Tue 25 Sep 2018 08:00:06 BST
# gpg: using RSA key 6C38CACA20D9B392
# gpg: Good signature from "David Gibson <david@gibson.dropbear.id.au>"
# gpg: aka "David Gibson (Red Hat) <dgibson@redhat.com>"
# gpg: aka "David Gibson (ozlabs.org) <dgibson@ozlabs.org>"
# gpg: aka "David Gibson (kernel.org) <dwg@kernel.org>"
# Primary key fingerprint: 75F4 6586 AE61 A66C C44E 87DC 6C38 CACA 20D9 B392
* remotes/dgibson/tags/ppc-for-3.1-20180925:
40p: add fixed IRQ routing for LSI SCSI device
lsi53c895a: add optional external IRQ via qdev
scsi: remove unused lsi53c895a_create() and lsi53c810_create() functions
scsi: move lsi53c8xx_create() callers to lsi53c8xx_handle_legacy_cmdline()
scsi: add lsi53c8xx_handle_legacy_cmdline() function
sm501: Adjust endianness of pixel value in rectangle fill
spapr_pci: add an extra 'nr_msis' argument to spapr_populate_pci_dt
spapr: increase the size of the IRQ number space
spapr: introduce a spapr_irq class 'nr_msis' attribute
40p: use OR gate to wire up raven PCI interrupts
raven: some minor IRQ-related tidy-ups
hw/ppc: on 40p machine, change default firmware to OpenBIOS
target/ppc/cpu-models: Re-group the 970 CPUs together again
Record history of ppcemb target in common.json
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The addition of the POWER9 CPUs divided the entries for the 970 CPUs,
which is a little bit confusing when you look at the code. So let's
re-group the 970 CPUs together again, and since these chips have been
based on the POWER4 processor, move them also in front of the POWER5
chips now.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Merge remote-tracking branch 'remotes/dgibson/tags/ppc-for-3.1-20180907' into staging
ppc patch queue 2018-09-07
Here's another pull request for qemu-3.1. No real theme here, just an
assortment of various fixes. Probably the most notable thing is the
removal of the ppcemb target which has been deprecated for some time
now.
# gpg: Signature made Fri 07 Sep 2018 08:30:02 BST
# gpg: using RSA key 6C38CACA20D9B392
# gpg: Good signature from "David Gibson <david@gibson.dropbear.id.au>"
# gpg: aka "David Gibson (Red Hat) <dgibson@redhat.com>"
# gpg: aka "David Gibson (ozlabs.org) <dgibson@ozlabs.org>"
# gpg: aka "David Gibson (kernel.org) <dwg@kernel.org>"
# Primary key fingerprint: 75F4 6586 AE61 A66C C44E 87DC 6C38 CACA 20D9 B392
* remotes/dgibson/tags/ppc-for-3.1-20180907:
target-ppc: Extend HWCAP2 bits for ISA 3.0
target/ppc/kvm: set vcpu as online/offline
Fix a deadlock case in the CPU hotplug flow
spapr: Correct reference count on spapr-cpu-core
mac_newworld: implement custom FWPathProvider
uninorth: add ofw-addr property to allow correct fw path generation
mac_oldworld: implement custom FWPathProvider
grackle: set device fw_name and address for correct fw path generation
macio: add addr property to macio IDE object
macio: add macio bus to help with fw path generation
macio: move MACIOIDEState type declarations to macio.h
spapr_pci: fix potential NULL pointer dereference
spapr: fix leak of rev array
ppc: Remove deprecated ppcemb target
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Merge remote-tracking branch 'remotes/alistair/tags/pull-riscv-pullreq-20180905' into staging
A misc collection of RISC-V related patches for 3.1.
# gpg: Signature made Wed 05 Sep 2018 23:06:55 BST
# gpg: using RSA key 21E10D29DF977054
# gpg: Good signature from "Alistair Francis <alistair@alistair23.me>"
# gpg: WARNING: This key is not certified with sufficiently trusted signatures!
# gpg: It is not certain that the signature belongs to the owner.
# Primary key fingerprint: F6C4 AC46 D493 4868 D3B8 CE8F 21E1 0D29 DF97 7054
* remotes/alistair/tags/pull-riscv-pullreq-20180905:
riscv: remove define cpu_init()
hw/riscv/spike: Set the soc device tree node as a simple-bus
hw/riscv/virtio: Set the soc device tree node as a simple-bus
target/riscv: call gen_goto_tb on DISAS_TOO_MANY
target/riscv: optimize indirect branches
target/riscv: optimize cross-page direct jumps in softmmu
RISC-V: Simplify riscv_cpu_local_irqs_pending
RISC-V: Use atomic_cmpxchg to update PLIC bitmaps
RISC-V: Improve page table walker spec compliance
RISC-V: Update address bits to support sv39 and sv48
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Complete xtensa-semi chardev console implementation: allow reading input
characters from file descriptor 0 and call the sys_select_one simcall on it.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
s32c1i must load and store value with target endianness, not host.
This results in an infinite loop in atomic cmpxchg sequences when target
endianness doesn't match host endianness.
Fixes: 9fb40342d4 ("target/xtensa: support MTTCG")
Cc: qemu-stable@nongnu.org
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
- FPU2000 defines rfr and wfr opcodes, not rfr.s and wfr.s;
- movcond.s uses an incorrect operand in tcg_gen_movcond: in case the
condition is not satisfied it must not change its argument 0.
Fixes: c04e1692e3 ("target/xtensa: extract FPU2000 opcode
translators")
Cc: qemu-stable@nongnu.org
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
cpu_init() was removed in 2.12, so drop the define that is now unused.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Michael Clark <mjc@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Performance impact of this and the previous commits, measured with
the very-easy-to-cross-compile rv8-bench:
https://github.com/rv8-io/rv8-bench
Host: Intel(R) Core(TM) i7-4790K CPU @ 4.00GHz
- Key:
before: master
after1,2,3: the 3 commits in this series (i.e. 3 is this commit)
- User-mode:
bench before after1 after2 after3 final speedup
---------------------------------------------------------
aes 1.12s 1.12s 1.10s 1.00s 1.12
bigint 0.78s 0.78s 0.78s 0.78s 1
dhrystone 0.96s 0.97s 0.49s 0.49s 1.9591837
miniz 1.94s 1.94s 1.88s 1.86s 1.0430108
norx 0.51s 0.51s 0.49s 0.48s 1.0625
primes 0.85s 0.85s 0.84s 0.84s 1.0119048
qsort 4.87s 4.88s 1.86s 1.86s 2.6182796
sha512 0.76s 0.77s 0.64s 0.64s 1.1875
(after1 only applies to softmmu, so no surprises here)
- Full-system (fedora):
bench before after1 after2 after3 final speedup
---------------------------------------------------------
aes 2.68s 2.54s 2.60s 2.34s 1.1452991
bigint 1.61s 1.56s 1.55s 1.64s 0.98170732
dhrystone 1.78s 1.67s 1.25s 1.24s 1.4354839
miniz 3.53s 3.35s 3.28s 3.35s 1.0537313
norx 1.13s 1.09s 1.07s 1.06s 1.0660377
primes 15.37s 15.41s 15.20s 15.37s 1
qsort 7.20s 6.71s 3.85s 3.96s 1.8181818
sha512 1.07s 1.04s 0.90s 0.90s 1.1888889
SoftMMU slows things down, so the numbers are less sensitive.
Cross-page jumps improve things a little bit, though.
Note that these are results from a single run rather than averages,
so there isn't much to worry about with primes.
Signed-off-by: Emilio G. Cota <cota@braap.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Set the newly added register (KVM_REG_PPC_ONLINE) to indicate if the vcpu is
online (1) or offline (0).
KVM will use this information to set the RWMR register, which controls the PURR
and SPURR accumulation.
CC: paulus@samba.org
Signed-off-by: Nikunj A Dadhania <nikunj@linux.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This commit is intended to improve readability.
There is no change to the logic.
Cc: Sagar Karandikar <sagark@eecs.berkeley.edu>
Cc: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Cc: Palmer Dabbelt <palmer@sifive.com>
Cc: Alistair Francis <Alistair.Francis@wdc.com>
Signed-off-by: Michael Clark <mjc@sifive.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
- Inline PTE_TABLE check for better readability
- Change access checks from ternary operator to if
- Improve readability of User page U mode and SUM test
- Disallow non U mode from fetching from User pages
- Add reserved PTE flag check: W or W|X
- Add misaligned PPN check
- Set READ protection for PTE X flag and mstatus.mxr
- Use memory_region_is_ram in pte update
Cc: Sagar Karandikar <sagark@eecs.berkeley.edu>
Cc: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Cc: Palmer Dabbelt <palmer@sifive.com>
Cc: Alistair Francis <Alistair.Francis@wdc.com>
Signed-off-by: Michael Clark <mjc@sifive.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
In a few places translate.c contains non-breaking spaces (0xc2 0xa0)
instead of regular ones (0x20):
7c 7c c2 a0 63 63
7c 7c 20 63 63
| | c c
This confuses some text editors.
Signed-off-by: Pavel Zbitskiy <pavel.zbitskiy@gmail.com>
Message-Id: <20180822144039.5796-2-pavel.zbitskiy@gmail.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
PACK fails on the test from the Principles of Operation: F1F2F3F4
becomes 0000234C instead of 0001234C due to an off-by-one error.
Furthermore, it overwrites one extra byte to the left of F1.
If len_dest is 0, then we only want to flip the 1st byte and never loop
over the rest. Therefore, the loop condition should be > and not >=.
If len_src is 1, then we should flip the 1st byte and pack the 2nd.
Since len_src is already decremented before the loop, the first
condition should be >=, and not >.
Likewise for len_src == 2 and the second condition.
Signed-off-by: Pavel Zbitskiy <pavel.zbitskiy@gmail.com>
Message-Id: <20180821025104.19604-7-pavel.zbitskiy@gmail.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Improves "b213c9f5: target/s390x: Implement TRTR" by introducing the
intermediate functions, which are compatible with the dx_helper type.
Signed-off-by: Pavel Zbitskiy <pavel.zbitskiy@gmail.com>
Message-Id: <20180821025104.19604-6-pavel.zbitskiy@gmail.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Suppose psw.mask=0x0000000080000000, cc=2, r1=0 and we do "ipm 1".
This instruction must touch only bits 32-39, so the expected output
is r1=0x20000000. However, currently qemu yields r1=0x20008000,
because irrelevant parts of PSW leak into r1 during program mask
transfer.
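In LSB-based numbering those are bits 24-31, so the fix amounts to depositing
only that byte, as in this sketch; deposit64() is QEMU's bitops helper and
the rest is illustrative:

    /* Sketch: insert cc (bits 34-35) and the program mask (bits 36-39)
     * into the byte at big-endian bits 32-39, i.e. LSB bits 24-31,
     * leaving every other bit of r1 untouched. */
    env->regs[r1] = deposit64(env->regs[r1], 24, 8, (cc << 4) | program_mask);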
Signed-off-by: Pavel Zbitskiy <pavel.zbitskiy@gmail.com>
Message-Id: <20180821025104.19604-5-pavel.zbitskiy@gmail.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
CSST is defined as:
C(0xc802, CSST, SSF, CASS, la1, a2, 0, 0, csst, 0)
It means that the first parameter is handled by in1_la1().
in1_la1() fills addr1 field, and not in1.
Furthermore, when extract32() is used for the alignment check, the
third parameter should specify the number of trailing bits that must
be 0. For FC these numbers are:
FC=0 (word, 4 bytes): 2
FC=1 (double word, 8 bytes): 3
FC=2 (quad word, 16 bytes): 4
For SC these numbers correspond to the size:
SC=0: 0
SC=1: 1
SC=2: 2
SC=3: 3
SC=4: 4
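Putting the two lists together, a hedged sketch of the corrected alignment
check; extract32() is QEMU's bit-field helper, the variable names are
illustrative, and SC=0 needs no check at all:

    /* Sketch: a non-zero bit field below the required alignment means the
     * operand is misaligned and a specification exception is due. */
    if (extract32(a1, 0, fc + 2) || (sc && extract32(a2, 0, sc))) {
        gen_program_exception(s, PGM_SPECIFICATION);
        return DISAS_NORETURN;
    }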
Signed-off-by: Pavel Zbitskiy <pavel.zbitskiy@gmail.com>
Message-Id: <20180821025104.19604-4-pavel.zbitskiy@gmail.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
These instructions are provided for compatibility purposes and are
used only by old software; in new code, BAS and BASR are preferred.
The difference between the old and new instruction exists only in the
24-bit mode.
In addition, fix BAS polluting high 32 bits of the first operand in
24- and 31-bit addressing modes.
Signed-off-by: Pavel Zbitskiy <pavel.zbitskiy@gmail.com>
Message-Id: <20180821025104.19604-3-pavel.zbitskiy@gmail.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
There is no known OS for ppc still around that uses page sizes below
4k, so it does not make much sense to keep wasting our time on
building and testing the ppcemb-softmmu target. It has been deprecated
for two releases, and nobody complained, so let's remove it now.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Add definition of the first nanoMIPS processor in QEMU.
Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Yongbok Kim <yongbok.kim@mips.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
Fix ERET/ERETNC so that ADEL exception can be raised.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Yongbok Kim <yongbok.kim@mips.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
Update BadInstr and BadInstrX registers for nanoMIPS. The same
support for pre-nanoMIPS remains unimplemented.
Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Yongbok Kim <yongbok.kim@mips.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
A set of nanoMIPS instructions is not available if Config5 bit NMS
is set.
Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Add emulation of DSP ASE instructions for nanoMIPS - part 6.
Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
Add emulation of DSP ASE instructions for nanoMIPS - part 5.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
Add emulation of DSP ASE instructions for nanoMIPS - part 4.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
Add emulation of DSP ASE instructions for nanoMIPS - part 3.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
Add emulation of DSP ASE instructions for nanoMIPS - part 2.
Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
Add emulation of DSP ASE instructions for nanoMIPS - part 1.
Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
Add emulation of MT ASE instructions for nanoMIPS.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
Use bits from configuration registers for availability control
of MT ASE instructions, rather than only the ISA_MT bit in insn_flags.
This is done by adding a field in hflags for the MT bit, and adding
functions check_mt() and check_cp0_mt().
Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
Add emulation of various flavors of nanoMIPS 32-bit branch
instructions.
Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Yongbok Kim <yongbok.kim@mips.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
Implement support for nanoMIPS LLWP/SCWP instructions. Besides
adding the core functionality of these instructions, this patch adds
support for availability control via configuration bit XNP.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Dimitrije Nikolic <dnikolic@wavecomp.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
Add CP0_Config3 and CP0_Config5 to DisasContext structure. This is
needed for implementing availability control of various instructions.
Reviewed-by: "Aleksandar Markovic <amarkovic@wavecomp.com>"
Signed-off-by: "Aleksandar Markovic <amarkovic@wavecomp.com>"
Add emulation of various nanoMIPS load and store instructions.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Yongbok Kim <yongbok.kim@mips.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
Implement emulation of nanoMIPS EXTW instruction. EXTW instruction
is similar to the MIPS r6 ALIGN instruction, except that it counts
the other way and in bits instead of bytes. We therefore generalise
gen_align() function into a new gen_align_bits() function (which
counts in bits instead of bytes and optimises when bits = size of
the word), and implement gen_align() and a new gen_ext() based on
that. Since we need to know the word size to check for when the
number of bits == the word size, the opc argument is replaced with
a wordsz argument (either 32 or 64).
Signed-off-by: James Hogan <james.hogan@mips.com>
Signed-off-by: Yongbok Kim <yongbok.kim@mips.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Added a helper for ROTX based on the pseudocode from the
architecture spec. This instruction was not present in previous
MIPS instruction sets.
Signed-off-by: Yongbok Kim <yongbok.kim@mips.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Add emulation of nanoMIPS instructions situated in pool p_lsx, and
emulation of LSA instruction as well.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
Add emulation of misc nanoMIPS instructions situated in pool32axf.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Yongbok Kim <yongbok.kim@mips.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
Add emulation of nanoMIPS instructions that are situated in pool32a0.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
Add emulation of basic floating point arithmetic for nanoMIPS.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Yongbok Kim <yongbok.kim@mips.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
Add emulation of LI48, ADDIU48, ADDIUGP48, ADDIUPC48, LWPC48, and
SWPC48 instructions.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Yongbok Kim <yongbok.kim@mips.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
Add emulation of nanoMIPS instructions MOVE.P and MOVE.PREV.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Yongbok Kim <yongbok.kim@mips.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
Add emulation of SIGRIE, SYSCALL, BREAK, SDBBP, ADDIU, ADDIUPC,
ADDIUGP.W, LWGP, SWGP, ORI, XORI, ANDI, and other instructions.
Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Yongbok Kim <yongbok.kim@mips.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
Add emulation of SAVE16 and RESTORE.JRC16 instructions. Routines
gen_save(), gen_restore(), and gen_adjust_sp() are provided to support
this feature.
This patch at the same time provides function gen_op_addr_addi(). This
function will be used in emulation of some other nanoMIPS instructions.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Yongbok Kim <yongbok.kim@mips.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
Add emulation of NOT16, AND16, XOR16, OR16 instructions.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Yongbok Kim <yongbok.kim@mips.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
Add emulation of misc nanoMIPS 16-bit instructions.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Yongbok Kim <yongbok.kim@mips.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
Add emulation of nanoMIPS 16-bit shift instructions.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Yongbok Kim <yongbok.kim@mips.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
Add emulation of nanoMIPS 16-bit branch instructions.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Yongbok Kim <yongbok.kim@mips.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
Add emulation of nanoMIPS 16-bit arithmetic instructions.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Yongbok Kim <yongbok.kim@mips.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
Add some basic utility functions and macros for nanoMIPS decoding
engine.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Yongbok Kim <yongbok.kim@mips.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
Add empty body and invocation of decode_nanomips_opc() if the bit
ISA_NANOMIPS32 is set in ctx->insn_flags.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Yongbok Kim <yongbok.kim@mips.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
The mode should be switched in cpu_state_reset() only if Config3.ISA
is 3 (microMIPS). Config3.ISA is 1 for nanoMIPS processors, so no mode
change should happen for them.
Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Add nanoMIPS opcodes for DSP ASE instruction pools and instructions.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
Add nanoMIPS opcodes. nanoMIPS instructions are organized into so-called
instruction pools. Each pool contains a set of opcodes that, in turn,
can be instruction opcodes or instruction pool opcodes.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Yongbok Kim <yongbok.kim@mips.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
Add ISA_NANOMIPS32 and CPU_NANOMIPS32 preprocessor constants.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Yongbok Kim <yongbok.kim@mips.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
Following the bulk conversion of the iwMMXt code, there are
just a handful of hard coded tabs in target/arm; fix them.
This is a whitespace-only patch.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20180821165215.29069-4-peter.maydell@linaro.org
Untabify the arm iwmmxt_helper.c. This affects only the iwMMXt code.
We've never touched that code in years, so it's not going to get
fixed up by our "change when touched" process, and a bulk change is
not going to be too disruptive.
This commit was produced using Emacs "untabify" (plus one
by-hand removal of a space to fix a checkpatch nit); it is
a whitespace-only change.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20180821165215.29069-3-peter.maydell@linaro.org
Untabify the arm translate.c. This affects only some lines,
mostly comments, in the iwMMXt code. We've never touched
that code in years, so it's not going to get fixed up
by our "change when touched" process, and a bulk change
is not going to be too disruptive.
This commit was produced using Emacs "untabify"; it is
a whitespace-only change.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20180821165215.29069-2-peter.maydell@linaro.org
On 32-bit exception entry, CPSR.J must always be set to 0
(see v7A Arm ARM DDI0406C.c B1.8.5). CPSR.IL must also
be cleared on 32-bit exception entry (see v8A Arm ARM
DDI0487C.a G1.10).
Clear these bits. (This fixes a bug which will never be noticed
by non-buggy guests.)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Luc Michel <luc.michel@greensocs.com>
Message-id: 20180820153020.21478-6-peter.maydell@linaro.org
Implement the necessary support code for taking exceptions
to Hyp mode in AArch32.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Luc Michel <luc.michel@greensocs.com>
Message-id: 20180820153020.21478-5-peter.maydell@linaro.org
Factor out the code which changes the CPU state so as to
actually take an exception to AArch32. We're going to want
to use this for handling exception entry to Hyp mode.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Luc Michel <luc.michel@greensocs.com>
Message-id: 20180820153020.21478-4-peter.maydell@linaro.org
The AArch32 HCR and HCR2 registers alias HCR_EL2
bits [31:0] and [63:32]; implement them.
Since HCR2 exists in ARMv8 but not ARMv7, we need new
regdef arrays for "we have EL3, not EL2, we're ARMv8"
and "we have EL2, we're ARMv8" to hold the definitions.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Luc Michel <luc.michel@greensocs.com>
Message-id: 20180820153020.21478-3-peter.maydell@linaro.org
The v8 AArch32 HACTLR2 register maps to bits [63:32] of ACTLR_EL2.
We implement ACTLR_EL2 as RAZ/WI, so make HACTLR2 also RAZ/WI.
(We put the regdef next to ACTLR_EL2 as a reminder in case we
ever make ACTLR_EL2 something other than RAZ/WI).
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Luc Michel <luc.michel@greensocs.com>
Message-id: 20180820153020.21478-2-peter.maydell@linaro.org
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180814002653.12828-5-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180814002653.12828-4-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Merge remote-tracking branch 'remotes/bonzini/tags/for-upstream' into staging
* x86 TCG fixes for 64-bit call gates (Andrew)
* qemu-guest-agent freeze-hook tweak (Christian)
* pm_smbus improvements (Corey)
* Move validation to pre_plug for pc-dimm (David)
* Fix memory leaks (Eduardo, Marc-André)
* synchronization profiler (Emilio)
* Convert the CPU list to RCU (Emilio)
* LSI support for PPR Extended Message (George)
* vhost-scsi support for protection information (Greg)
* Mark mptsas as a storage device in the help (Guenter)
* checkpatch tweak cherry-picked from Linux (me)
* Typos, cleanups and dead-code removal (Julia, Marc-André)
* qemu-pr-helper support for old libmultipath (Murilo)
* Annotate fallthroughs (me)
* MemoryRegionOps cleanup (me, Peter)
* Make s390 qtests independent from libqos, which doesn't actually support it (me)
* Make cpu_get_ticks independent from BQL (me)
* Introspection fixes (Thomas)
* Support QEMU_MODULE_DIR environment variable (ryang)
# gpg: Signature made Thu 23 Aug 2018 17:46:30 BST
# gpg: using RSA key BFFBD25F78C7AE83
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>"
# gpg: aka "Paolo Bonzini <pbonzini@redhat.com>"
# Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4 E2F7 7E15 100C CD36 69B1
# Subkey fingerprint: F133 3857 4B66 2389 866C 7682 BFFB D25F 78C7 AE83
* remotes/bonzini/tags/for-upstream: (69 commits)
KVM: cleanup unnecessary #ifdef KVM_CAP_...
target/i386: update MPX flags when CPL changes
i2c: pm_smbus: Add the ability to force block transfer enable
i2c: pm_smbus: Don't delay host status register busy bit when interrupts are enabled
i2c: pm_smbus: Add interrupt handling
i2c: pm_smbus: Add block transfer capability
i2c: pm_smbus: Make the I2C block read command read-only
i2c: pm_smbus: Fix the semantics of block I2C transfers
i2c: pm_smbus: Clean up some style issues
pc-dimm: assign and verify the "addr" property during pre_plug
pc: drop memory region alignment check for 0
util/oslib-win32: indicate alignment for qemu_anon_ram_alloc()
pc-dimm: assign and verify the "slot" property during pre_plug
ipmi: Use proper struct reference for BT vmstate
vhost-scsi: expose 't10_pi' property for VIRTIO_SCSI_F_T10_PI
vhost-scsi: unify vhost-scsi get_features implementations
vhost-user-scsi: move host_features into VHostSCSICommon
cpus: allow cpu_get_ticks out of BQL
cpus: protect TimerState writes with a spinlock
seqlock: add QemuLockable support
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The capability macros are always defined, since they come from kernel
headers that are copied into the QEMU tree. Remove the unnecessary #ifdefs.
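For illustration only, the kind of pattern being removed looks roughly like
this; KVM_CAP_SOME_FEATURE is a hypothetical capability name, not a specific
hunk from the patch.

    /* Before: guarding on a capability macro that is always defined,
     * because the kernel headers are copied into the QEMU tree. */
    #ifdef KVM_CAP_SOME_FEATURE
        ret = kvm_check_extension(s, KVM_CAP_SOME_FEATURE);
    #endif

    /* After: the runtime extension check alone is sufficient. */
    ret = kvm_check_extension(s, KVM_CAP_SOME_FEATURE);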
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Iterating over the list without using atomics is undefined behaviour,
since the list can be modified concurrently by other threads (e.g.
every time a new thread is created in user-mode).
Fix it by implementing the CPU list as an RCU QTAILQ. This requires
a little bit of extra work to traverse list in reverse order (see
previous patch), but other than that the conversion is trivial.
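A minimal sketch of what an RCU-protected traversal looks like in general;
the traversal macro and do_something_with() are illustrative names, not
necessarily the exact ones used by the patch.

    /* Readers hold the RCU read lock for the duration of the walk; writers
     * publish list updates so concurrent readers always see a consistent
     * snapshot of the CPU list. */
    CPUState *cpu;

    rcu_read_lock();
    QTAILQ_FOREACH_RCU(cpu, &cpus, node) {
        do_something_with(cpu);     /* must not block inside the read side */
    }
    rcu_read_unlock();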
Signed-off-by: Emilio G. Cota <cota@braap.org>
Message-Id: <20180819091335.22863-12-cota@braap.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The current implementation has three bugs:
* segment limits are not enforced in protected mode if the L bit is set
in the target segment descriptor
* segment limits are not enforced in compatibility mode (ljmp to 32-bit
code segment in long mode)
* #GP(new_cs) is generated rather than #GP(0)
Now the segment limits are enforced if we're not in long mode OR the
target code segment doesn't have the L bit set.
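In pseudo-C, the intended rule is roughly the following; long_mode_active,
target_desc_flags, new_eip and segment_limit are illustrative names, not the
exact QEMU variables.

    /* Enforce the segment limit unless we are in 64-bit mode proper,
     * i.e. long mode active AND the target code segment has L=1.
     * In compatibility mode (long mode, L=0) the limit still applies. */
    bool in_64bit_submode = long_mode_active && (target_desc_flags & DESC_L_MASK);

    if (!in_64bit_submode && new_eip > segment_limit) {
        raise_exception_err(env, EXCP0D_GPF, 0);   /* #GP(0), not #GP(new_cs) */
    }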
Signed-off-by: Andrew Oates <aoates@google.com>
Message-Id: <20180816011903.39816-1-andrew@andrewoates.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Currently call gates are always treated as 32-bit gates. In IA-32e mode
(either compatibility or 64-bit submode), system segment descriptors are
always 64-bit. Treating them as 32-bit has the expected unfortunate
effect: only the lower 32 bits of the offset are loaded, the stack
pointer is truncated, a bad new stack pointer is loaded from the TSS (if
switching privilege levels), etc.
This change adds support for 64-bit call gate to the lcall and ljmp
instructions. Additionally, there should be a check for non-canonical
stack pointers, but I've omitted that since there doesn't seem to be
checks for non-canonical addresses in this code elsewhere.
I've left the raise_exception_err_ra lines unwrapped at 80 columns to
match the style in the rest of the file.
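For reference, a 64-bit (IA-32e) call gate occupies 16 bytes, with the target
offset split across three fields; a hedged sketch of assembling the full offset
from the descriptor words (illustrative names e1..e3 for the loaded dwords)
might look like this:

    /* 64-bit call gate layout (two consecutive 8-byte descriptor slots):
     *   e1 bits 15:0  -> offset[15:0]    e1 bits 31:16 -> selector
     *   e2 bits 31:16 -> offset[31:16]   e2 bits 15:0  -> type/attributes
     *   e3 bits 31:0  -> offset[63:32]   (the fourth dword is reserved)  */
    uint64_t offset = (e1 & 0xffff)
                    | (e2 & 0xffff0000)
                    | ((uint64_t)e3 << 32);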
Signed-off-by: Andrew Oates <aoates@google.com>
Message-Id: <20180819181725.34098-1-andrew@andrewoates.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reported by Coverity:
Error: RESOURCE_LEAK (CWE-772): [#def439]
qemu-2.12.0/target/i386/cpu.c:3179: alloc_fn: Storage is returned from allocation function "qdict_new".
qemu-2.12.0/qobject/qdict.c:34:5: alloc_fn: Storage is returned from allocation function "g_malloc0".
qemu-2.12.0/qobject/qdict.c:34:5: var_assign: Assigning: "qdict" = "g_malloc0(4120UL)".
qemu-2.12.0/qobject/qdict.c:37:5: return_alloc: Returning allocated memory "qdict".
qemu-2.12.0/target/i386/cpu.c:3179: var_assign: Assigning: "props" = storage returned from "qdict_new()".
qemu-2.12.0/target/i386/cpu.c:3217: leaked_storage: Variable "props" going out of scope leaks the storage it points to.
This was introduced by commit b8097deb35 ("i386: Improve
query-cpu-model-expansion full mode").
The leak is only theoretical: if ret->model->props is set to
props, the qapi_free_CpuModelExpansionInfo() call will free props
too in case of errors. The only way for this to not happen is if
we enter the default branch of the switch statement, which would
never happen because all CpuModelExpansionType values are being
handled.
It's still worth changing this to make the allocation logic
easier to follow and make the Coverity error go away. To make
everything simpler, initialize ret->model and ret->model->props
earlier in the function.
While at it, remove redundant check for !prop because prop is
always initialized at the beginning of the function.
Fixes: b8097deb35
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <20180816183509.8231-1-ehabkost@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Many of these are marked as "intentional/fix required" because they
just need a fall through comment added. This is exactly what this
patch does, except for target/mips/translate.c where it is easier to
duplicate the code, and hw/audio/sb16.c where I consulted the DOSBox
sources and decided to just remove the LOG_UNIMP before the fallthrough.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Merge remote-tracking branch 'remotes/dgibson/tags/ppc-for-3.1-20180821' into staging
ppc patch queue 2018-08-21
Here's my first ppc & spapr pull request for qemu-3.1. This contains
a bunch of things that have accumulated while 3.0 was in freeze.
Highlights are:
* SLOF firmware update
* A number of floating point cleanups from Richard Henderson and
Yasmin Beatriz
* A new model for assigning irq numbers on spapr, this is an
important preliminary step towards implementing the POWER9
"XIVE" interrupt controller
# gpg: Signature made Tue 21 Aug 2018 05:32:44 BST
# gpg: using RSA key 6C38CACA20D9B392
# gpg: Good signature from "David Gibson <david@gibson.dropbear.id.au>"
# gpg: aka "David Gibson (Red Hat) <dgibson@redhat.com>"
# gpg: aka "David Gibson (ozlabs.org) <dgibson@ozlabs.org>"
# gpg: aka "David Gibson (kernel.org) <dwg@kernel.org>"
# Primary key fingerprint: 75F4 6586 AE61 A66C C44E 87DC 6C38 CACA 20D9 B392
* remotes/dgibson/tags/ppc-for-3.1-20180821: (26 commits)
ppc: add DBCR based debugging
spapr_pci: factorize the use of SPAPR_MACHINE_GET_CLASS()
mac_newworld: don't use legacy fw_cfg_init_mem() function
mac_oldworld: don't use legacy fw_cfg_init_mem() function
40p: don't use legacy fw_cfg_init_mem() function
qemu-doc: mark ppc/prep machine as deprecated
hw/ppc: deprecate the machine type 'prep', replaced by '40p'
spapr: introduce a IRQ controller backend to the machine
hw/ppc/ppc405_uc: Convert away from old_mmio
hw/ppc/ppc_boards: Don't use old_mmio for ref405ep_fpga
hw/ppc/prep: Remove ifdeffed-out stub of XCSR code
spapr: introduce a fixed IRQ number space
spapr: Add a pseries-3.1 machine type
target/ppc: simplify bcdadd/sub functions
xics: don't include "target/ppc/cpu-qom.h" in "hw/ppc/xics.h"
vfio/spapr: Allow backing bigger guest IOMMU pages with smaller physical pages
target/ppc: bcdsub fix sign when result is zero
target/ppc: Use non-arithmetic conversions for fp load/store
target/ppc: Honor fpscr_ze semantics and tidy fre, fresqrt
target/ppc: Tidy helper_fsqrt
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Add support for DBCR (debug control register) based debugging as used on
BookE ppc. So far supports only branch and single-step events, but these are
the important ones. GDB in Linux guest can now do single-stepping.
Signed-off-by: Roman Kapl <rka@sysgo.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
After solving a corner case in bcdsub, this patch simplifies the logic
of both bcdadd/sub instructions by removing some unnecessary local flags.
This commit also rearranges some if-else conditions in bcdadd to make it
easier to read.
Signed-off-by: Yasmin Beatriz <yasmins@linux.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
When the result of bcdsub is equal to zero, the result sign may be
set to negative in some cases, and this does not follow the Power ISA
specifications as to decimal integer arithmetic instructions.
Signed-off-by: Yasmin Beatriz <yasmins@linux.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Memory operations have no side effects on fp state.
The use of a "real" conversions between float64 and float32
would raise exceptions for SNaN and out-of-range inputs.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Divide by zero, exception taken, leaves the destination register
unmodified. Therefore we must raise the exception before returning
from the respective helpers.
From helper_fre, when the divide by zero exception is not taken, return
the documented +/- 0.5.
At the same time, tidy the invalid exception checking so that we
rely on softfloat for initial argument validation, and select the
kind of invalid operand exception only when we know we must.
At the same time, pass and return float64 values directly rather
than bounce through the CPU_DoubleU union.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Tidy the invalid exception checking so that we rely on softfloat for
initial argument validation, and select the kind of invalid operand
exception only when we know we must. Pass and return float64 values
directly rather than bounce through the CPU_DoubleU union.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Tidy the invalid exception checking so that we rely on softfloat for
initial argument validation, and select the kind of invalid operand
exception only when we know we must. Pass and return float64 values
directly rather than bounce through the CPU_DoubleU union.
Note that because we know float_flag_invalid was set, we do not have
to re-check the signs of the infinities.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Tidy the invalid exception checking so that we rely on softfloat for
initial argument validation, and select the kind of invalid operand
exception only when we know we must. Pass and return float64 values
directly rather than bounce through the CPU_DoubleU union.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Divide by zero, exception taken, leaves the destination register
unmodified. Therefore we must raise the exception before returning
from helper_fdiv. Move the check from do_float_check_status into
helper_fdiv.
At the same time, tidy the invalid exception checking so that we
rely on softfloat for initial argument validation, and select the
kind of invalid operand exception only when we know we must.
At the same time, pass and return float64 values directly rather
than bounce through the CPU_DoubleU union.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
While just setting the MSR bits is sufficient, we can tidy
the helper code by extracting the MSR test to a helper and
then forcing it true for user-only.
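A hedged sketch of the kind of helper described; the function name is
illustrative, not necessarily the one introduced by the patch.

    /* Illustrative only: test the FP-enable MSR bit, but treat it as
     * always set for user-only emulation, where the MSR is not fully
     * modelled. */
    static inline bool msr_fp_enabled(CPUPPCState *env)
    {
    #ifdef CONFIG_USER_ONLY
        return true;
    #else
        return !!(env->msr & (1ULL << MSR_FP));
    #endif
    }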
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
QEMU has had huge page support for a long time already, but KVM
memory management under s390x needed some changes to work with huge
backings.
Now that we have support, let's enable it if requested and
available. Otherwise we now properly tell the user if there is no
support and back out instead of failing to run the VM later on.
Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Message-Id: <20180802070201.257406-1-frankja@linux.ibm.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Provide the etoken facility. We need to handle cpu model, migration and
clear reset.
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Acked-by: Janosch Frank <frankja@linux.ibm.com>
Message-Id: <20180731090448.36662-3-borntraeger@de.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
The "max" CPU model behaves like "-cpu host" when KVM is enabled, and like
a CPU with the maximum possible feature set when TCG is enabled.
While the "host" model can not be used under TCG ("kvm_required"), the
"max" model can and "Enables all features supported by the accelerator in
the current host".
So we can treat "host" just as a special case of "max" (like x86 does).
It differs from the "qemu" CPU model under TCG in that compatibility
handling will not be performed and that some experimental CPU features
not yet part of the "qemu" model might be indicated.
These are right now under TCG (see "qemu_MAX"):
- stfle53
- msa5-base
- zpci
This will result right now in the following warning when starting QEMU TCG
with the "max" model:
"qemu-system-s390x: warning: 'msa5-base' requires 'kimd-sha-512'."
The "qemu" model (used as default in QEMU under TCG) will continue to
work without such warnings. The "max" model in the current form
might be interesting for kvm-unit-tests (where we would e.g. now also
test "msa5-base").
The "max" model is neither static nor migration safe (like the "host"
model). It is independent of the machine but depends on the accelerator.
It can be used to detect the maximum CPU model also under TCG from upper
layers without having to care about CPU model names for CPU model
expansion.
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20180725091233.3300-1-david@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
[CH: minor wording changes]
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
This option has been deprecated for two releases; remove it.
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Acked-by: Halil Pasic <pasic@linux.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
The enumeration type S390FeatGroup is now generated as well.
This simplifies the definition of new feature groups without
requiring modifications to existing code.
Signed-off-by: Michael Mueller <mimu@linux.ibm.com>
Message-Id: <20180725143617.8731-1-mimu@linux.ibm.com>
Acked-by: David Hildenbrand <david@redhat.com>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
ARMv7VE introduced the ERET instruction, which is necessary to
return from an exception taken to Hyp mode. Implement this.
In A32 encoding it is a completely new encoding; in T32 it
is an adjustment of the behaviour of the existing
"SUBS PC, LR, #<imm8>" instruction.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Luc Michel <luc.michel@greensocs.com>
Message-id: 20180814124254.5229-10-peter.maydell@linaro.org
The MSR (banked) and MRS (banked) instructions allow accesses to ELR_Hyp
from either Monitor or Hyp mode. Our translate time check
was overly strict and only permitted access from Monitor mode.
The runtime check we do in msr_mrs_banked_exc_checks() had the
correct code in it, but never got there because of the earlier
"currmode == tgtmode" check. Special case ELR_Hyp.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Luc Michel <luc.michel@greensocs.com>
Message-id: 20180814124254.5229-9-peter.maydell@linaro.org
The AArch32 HSR is the equivalent of AArch64 ESR_EL2;
we can implement it by marking our existing ESR_EL2 regdef
as STATE_BOTH. It also needs to be "RES0 from EL3 if
EL2 not implemented", so add the missing stanza to
el3_no_el2_cp_reginfo.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Luc Michel <luc.michel@greensocs.com>
Message-id: 20180814124254.5229-8-peter.maydell@linaro.org
The AArch32 virtualization extensions support these fault address
registers:
* HDFAR: aliased with AArch64 FAR_EL2[31:0] and AArch32 DFAR(S)
* HIFAR: aliased with AArch64 FAR_EL2[63:32] and AArch32 IFAR(S)
Implement the accessors for these. This fixes in passing a bug
where we weren't implementing the "RES0 from EL3 if EL2 not
implemented" behaviour for AArch64 FAR_EL2.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Luc Michel <luc.michel@greensocs.com>
Message-id: 20180814124254.5229-7-peter.maydell@linaro.org
Implement the AArch32 HVBAR register; we can do this just by
making the existing VBAR_EL2 regdefs be STATE_BOTH.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Luc Michel <luc.michel@greensocs.com>
Message-id: 20180814124254.5229-5-peter.maydell@linaro.org
ARMCPRegInfo structs will default to .cp = 15 if they
are ARM_CP_STATE_BOTH, but not if they are ARM_CP_STATE_AA32
(because a coprocessor number of 0 is valid for AArch32).
We forgot to explicitly set .cp = 15 for the HMAIR1 and
HAMAIR1 regdefs, which meant they would UNDEF when the guest
tried to access them under cp15.
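An illustrative regdef showing an explicit .cp assignment for an AA32-only
register; the name and encoding values below are examples, not the exact
HMAIR1/HAMAIR1 entries.

    /* For ARM_CP_STATE_AA32 entries the coprocessor number must be given
     * explicitly, since 0 is a legal value and thus not a usable default. */
    { .name = "EXAMPLE_HYP_REG", .state = ARM_CP_STATE_AA32,
      .cp = 15, .opc1 = 4, .crn = 10, .crm = 2, .opc2 = 1,
      .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },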
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Luc Michel <luc.michel@greensocs.com>
Message-id: 20180814124254.5229-3-peter.maydell@linaro.org
We implement the HAMAIR1 register as RAZ/WI; we had a typo in the
regdef, though, and were incorrectly naming it HMAIR1 (which is
a different register which we also implement as RAZ/WI).
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Luc Michel <luc.michel@greensocs.com>
Message-id: 20180814124254.5229-2-peter.maydell@linaro.org
If an instruction is conditional (like CBZ) and it is executed
conditionally (using the ITx instruction), a jump to an undefined
label is generated, and QEMU crashes.
CBZ in an IT block is UNPREDICTABLE behaviour, but we should not
crash. Honouring the condition code is allowed by the spec in this
case (constrained unpredictable, ARMv8, section K1.1.7), and matches
what we do for other "UNPREDICTABLE inside an IT block" instructions.
Fix the 'skip on condition' code to create a new label only if it
does not already exist. Previously multiple labels were created, but
only the last one of them was set.
Signed-off-by: Roman Kapl <rka@sysgo.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180816120533.6587-1-rka@sysgo.com
[PMM: fixed ^ 1 being applied to wrong argument, fixed typo]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
- move register counting to xtensa/gdbstub.c
- add symbolic names for register types and flags from GDB and use them
in register counting and access functions.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
This fixes communication with gdb in the presence of type-5 (TIE state
mapped on user registers) and type-7 (special case of masked registers)
registers in the xtensa core config.
Cc: qemu-stable@nongnu.org
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
This fixes java in a linux-user chroot:
$ java --version
qemu-sh4: .../accel/tcg/cpu-exec.c:634: cpu_loop_exec_tb: Assertion `use_icount' failed.
qemu: uncaught target signal 6 (Aborted) - core dumped
Aborted (core dumped)
In gen_conditional_jump() in the GUSA_EXCLUSIVE part, we must reset
base.is_jmp to DISAS_NEXT after the gen_goto_tb() as it is done in
gen_delayed_conditional_jump() after the gen_jump().
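Schematically, the fix amounts to something like the fragment below; this is a
simplified sketch, not the exact hunk, and ctx, l1 and dest come from the
surrounding translator code.

    /* GUSA_EXCLUSIVE case: after emitting the taken-branch goto_tb and the
     * label for the not-taken path, translation must continue, so the
     * disassembly state is reset explicitly. */
    gen_goto_tb(ctx, 0, dest);
    gen_set_label(l1);
    ctx->base.is_jmp = DISAS_NEXT;   /* continue translating after the label */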
Bug: https://bugs.launchpad.net/qemu/+bug/1768246
Fixes: 4834871bc9
("target/sh4: Convert to DisasJumpType")
Reported-by: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Message-Id: <20180811082328.11268-1-laurent@vivier.eu>
Merge remote-tracking branch 'remotes/ehabkost/tags/x86-next-pull-request' into staging
x86 queue, 2018-08-16
Bug fix:
* Some guests may crash when using "-cpu host" due to TOPOEXT,
disable it by default
Features:
* PV_SEND_IPI feature bit
* Icelake-{Server,Client} CPU models
* New CPUID feature bits: PV_SEND_IPI, WBNOINVD, PCONFIG, ARCH_CAPABILITIES
Documentation:
* docs/qemu-cpu-models.texi
# gpg: Signature made Fri 17 Aug 2018 02:33:09 BST
# gpg: using RSA key 2807936F984DC5A6
# gpg: Good signature from "Eduardo Habkost <ehabkost@redhat.com>"
# Primary key fingerprint: 5A32 2FD5 ABC4 D3DB ACCF D1AA 2807 936F 984D C5A6
* remotes/ehabkost/tags/x86-next-pull-request:
i386: Disable TOPOEXT by default on "-cpu host"
target-i386: adds PV_SEND_IPI CPUID feature bit
i386: Add new CPU model Icelake-{Server,Client}
i386: Add CPUID bit for WBNOINVD
i386: Add CPUID bit for PCONFIG
i386: Add CPUID bit and feature words for IA32_ARCH_CAPABILITIES MSR
i386: Add new MSR indices for IA32_PRED_CMD and IA32_ARCH_CAPABILITIES
docs: add guidance on configuring CPU models for x86
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Merge remote-tracking branch 'remotes/amarkovic/tags/mips-queue-aug-2018' into staging
MIPS queue Aug 16, 2018
# gpg: Signature made Thu 16 Aug 2018 18:19:36 BST
# gpg: using RSA key D4972A8967F75A65
# gpg: Good signature from "Aleksandar Markovic <amarkovic@wavecomp.com>"
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 8526 FBF1 5DA3 811F 4A01 DD75 D497 2A89 67F7 5A65
* remotes/amarkovic/tags/mips-queue-aug-2018:
qemu-doc: Amend MIPS-related items
linux-user: Add preprocessor availability control to some syscalls
linux-user: Update MIPS syscall numbers up to kernel 4.18 headers
elf: Add ELF flags for MIPS machine variants
elf: Remove duplicate preprocessor constant definition
target/mips: Check ELPA flag only in some cases of MFHC0 and MTHC0
target/mips: Don't update BadVAddr register in Debug Mode
target/mips: Implement CP0 Config1.WR bit functionality
target/mips: Add CP0 BadInstrX register
target/mips: Update some CP0 registers bit definitions
target/mips: Fix two instances of shadow variables
target/mips: Mark switch fallthroughs with interpretable comments
target/mips: Avoid case statements formulated by ranges - part 2
target/mips: Avoid case statements formulated by ranges - part 1
MAINTAINERS: Update target/mips maintainer's email addresses
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
MFHC0 and MTHC0 used to handle the EntryLo0 and EntryLo1 registers only,
so placing the ELPA flag checks before the switch statement was technically
correct. However, after adding handling of more registers, these checks
should be moved so that they act only in the cases handling EntryLo0 and
EntryLo1.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Yongbok Kim <yongbok.kim@mips.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
BadVAddr should not be updated if (env->hflags & MIPS_HFLAG_DM) is
set.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Yongbok Kim <yongbok.kim@mips.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Add a test of the Config1.WR bit to the watch exception handling logic.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
Add CP0 BadInstrX register. This register will be used in nanoMIPS.
Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
Signed-off-by: Yongbok Kim <yongbok.kim@mips.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Update CP0 registers Config0, Config1, Config2, Config3,
Config4, and Config5 bit definitions.
Some of these bits will be utilized by upcoming nanoMIPS changes.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Fix two instances of shadowed variables. This cleans up the entire
translate.c file of shadowed variables.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
Mark switch fallthroughs with comments, in cases where fallthroughs
are intentional.
The comments "/* fall through */" are interpreted by compilers and
other tools, and they will not issue warnings in such cases. For gcc,
the warning is turned on by -Wimplicit-fallthrough. With this patch,
there will be no such warnings in the target/mips directory. If such a
warning appears in the future, it should be checked whether it is
intentional and, if yes, marked with a comment similar to those from
this patch.
The comment must be placed just before the next "case", otherwise gcc
won't understand it.
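For example, the interpretable form looks like this (illustrative switch;
OP_FOO/OP_BAR and the called functions are hypothetical):

    switch (op) {
    case OP_FOO:
        do_foo();
        /* fall through */
    case OP_BAR:        /* the comment sits immediately before this case */
        do_bar();
        break;
    }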
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
Remove "range style" case statements to make code analysis easier.
This patch handles cases when the values in the range in question
were not properly defined.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <amarkovic@wavecomp.com>
Remove "range style" case statements to make code analysis easier.
This is needed also for some upcoming nanoMIPS-related refactorings.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Enabling TOPOEXT is always allowed, but it can't be enabled
blindly by "-cpu host" because it may make guests crash if the
rest of the cache topology information isn't provided or isn't
consistent.
This addresses the bug reported at:
https://bugzilla.redhat.com/show_bug.cgi?id=1613277
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <20180809221852.15285-1-ehabkost@redhat.com>
Tested-by: Richard W.M. Jones <rjones@redhat.com>
Reviewed-by: Babu Moger <babu.moger@amd.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
The new CPU models mostly inherit features from the ancestor Skylake, while
adding new features: UMIP, new instructions (PCONFIG (server only), WBNOINVD,
AVX512_VBMI2, GFNI, AVX512_VNNI, VPCLMULQDQ, VAES, AVX512_BITALG),
Intel PT and 5-level paging (server only), as well as
IA32_PRED_CMD and SSBD support for speculative execution
side channel mitigations.
Note:
For 5-level paging, Guest physical address width can be configured, with
parameter "phys-bits". Unless explicitly specified, we still use its default
value, even for Icelake-Server cpu model.
At present, we hold off on exposing IA32_ARCH_CAPABILITIES to the guest,
because 1) this MSR actually presents more than one 'feature', and maintainers
are considering expanding the current feature presentation from CPUIDs only to
MSR bits; 2) a reasonable default value for MSR_IA32_ARCH_CAPABILITIES needs
to be settled first. These two points are actually beyond the Icelake CPU
model itself but fundamental, so split that work out and do it later.
https://lists.gnu.org/archive/html/qemu-devel/2018-07/msg00774.html
https://lists.gnu.org/archive/html/qemu-devel/2018-07/msg00796.html
Signed-off-by: Robert Hoo <robert.hu@linux.intel.com>
Message-Id: <1530781798-183214-6-git-send-email-robert.hu@linux.intel.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
WBNOINVD: Write back and do not invalidate cache, enumerated by
CPUID.(EAX=80000008H, ECX=0):EBX[bit 9].
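Illustratively, such a feature bit is added as a mask following the existing
CPUID_8000_0008_EBX_* style; the macro name below is shown as an assumption.

    /* Write back and do not invalidate cache: CPUID.80000008H:EBX[9]. */
    #define CPUID_8000_0008_EBX_WBNOINVD  (1U << 9)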
Signed-off-by: Robert Hoo <robert.hu@linux.intel.com>
Message-Id: <1530781798-183214-5-git-send-email-robert.hu@linux.intel.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Support for the IA32_PRED_CMD MSR is already enumerated by the same CPUID
bit as SPEC_CTRL.
At present, mark CPUID_7_0_EDX_ARCH_CAPABILITIES unmigratable, per Paolo's
comment.
Signed-off-by: Robert Hoo <robert.hu@linux.intel.com>
Message-Id: <1530781798-183214-3-git-send-email-robert.hu@linux.intel.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
IA32_PRED_CMD MSR gives software a way to issue commands that affect the state
of indirect branch predictors. Enumerated by CPUID.(EAX=7H,ECX=0):EDX[26].
IA32_ARCH_CAPABILITIES MSR enumerates architectural features of RDCL_NO and
IBRS_ALL. Enumerated by CPUID.(EAX=07H, ECX=0):EDX[29].
https://software.intel.com/sites/default/files/managed/c5/63/336996-Speculative-Execution-Side-Channel-Mitigations.pdf
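For reference, the architectural MSR indices involved are the following
(definitions shown for illustration, per the SDM):

    #define MSR_IA32_PRED_CMD           0x49    /* IBPB command register */
    #define MSR_IA32_ARCH_CAPABILITIES  0x10a   /* RDCL_NO, IBRS_ALL, ... */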
Signed-off-by: Robert Hoo <robert.hu@linux.intel.com>
Message-Id: <1530781798-183214-2-git-send-email-robert.hu@linux.intel.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
These insns require u=1; we failed to include that in the switch
cases. This probably happened during one of the rebases just
before the final commit.
Fixes: d17b7cdcf4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Message-id: 20180810193129.1556-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We were using the wrong flush-to-zero bit for the non-half input.
Fixes: 46d33d1e3c
Cc: qemu-stable@nongnu.org (3.0.1)
Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Message-id: 20180810193129.1556-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
When FZ is set, input_denormal exceptions are recognized, but this does
not happen with FZ16. The softfloat code has no way to distinguish
these bits and will raise such exceptions into fp_status_f16.flags,
so ignore them when computing the accumulated flags.
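A hedged sketch of the described masking when accumulating the exception
flags; the softfloat calls are the usual QEMU API, but the local variable
names and surrounding structure are illustrative.

    int flags = get_float_exception_flags(&env->vfp.fp_status);
    int f16_flags = get_float_exception_flags(&env->vfp.fp_status_f16);

    /* FZ16 does not recognize input-denormal exceptions, but softfloat
     * cannot tell FZ and FZ16 apart, so drop any such flags that were
     * raised into fp_status_f16 before merging. */
    flags |= (f16_flags & ~float_flag_input_denormal);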
Cc: qemu-stable@nongnu.org (3.0.1)
Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Message-id: 20180810193129.1556-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
When support for FZ16 was added, we failed to include the bit
within FPCR_MASK, which means that it could never be set.
Continue to zero FZ16 when ARMv8.2-FP16 is not enabled.
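For reference, FZ16 is bit 19 of FPCR in ARMv8.2-FP16; in sketch form the
change amounts to the following (macro name shown as an assumption):

    #define FPCR_FZ16   (1 << 19)   /* ARMv8.2-FP16: flush-to-zero for FP16 */
    /* FPCR_MASK must include FPCR_FZ16 so the bit is writable at all;
     * it is still forced to zero when ARMv8.2-FP16 is not enabled. */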
Fixes: d81ce0ef2c
Cc: qemu-stable@nongnu.org (3.0.1)
Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Message-id: 20180810193129.1556-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Define a "cortex-m0" ARMv6-M CPU model.
Most of the register reset values set by other CPU models are not
relevant for the cut-down ARMv6-M architecture.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180814162739.11814-3-stefanha@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This allows the default (and maximum) vector length to be set
from the command-line. Which is extraordinarily helpful in
debugging problems depending on vector length without having to
bake knowledge of PR_SET_SVE_VL into every guest binary.
Cc: qemu-stable@nongnu.org (3.0.1)
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Also fold the FPCR/FPSR state onto the same line as PSTATE,
and mention but do not dump disabled FPU state.
Cc: qemu-stable@nongnu.org (3.0.1)
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
With PC, there are 33 registers. Three per line lines up nicely
without overflowing 80 columns.
Cc: qemu-stable@nongnu.org (3.0.1)
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The scaling should be solely on the memory operation size; the number
of registers being loaded does not come into the initial computation.
Cc: qemu-stable@nongnu.org (3.0.1)
Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The immediate should be scaled by the size of the memory reference,
not the size of the elements into which it is loaded.
Cc: qemu-stable@nongnu.org (3.0.1)
Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The expression (int) imm + (uint32_t) len_align is promoted to uint32_t
and thus with negative imm produces a memory operation at the wrong
offset. None of the numbers involved are particularly large, so
change everything to use int.
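The underlying C promotion pitfall can be seen in isolation with a standalone
example (unrelated variable values, same types as in the description):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        int imm = -16;
        uint32_t len_align = 8;

        /* int + uint32_t is performed as uint32_t: -16 converts to
         * 0xfffffff0 and the sum is 0xfffffff8 instead of -8. */
        printf("%llu\n", (unsigned long long)(imm + len_align));

        /* Keeping everything as int preserves the intended small offset. */
        printf("%d\n", imm + (int)len_align);
        return 0;
    }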
Cc: qemu-stable@nongnu.org (3.0.1)
Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Cc: qemu-stable@nongnu.org (3.0.1)
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The pseudocode for this operation is an increment + compare loop,
so comparing <= the maximum integer produces an all-true predicate.
Rather than bounding the count in both the inline code and the helper,
pass the helper the number of predicate bits to set instead of the
number of predicate elements to set.
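Conceptually, the increment-and-compare loop can be sketched in scalar form as
below; 'begin', 'bound' and 'num_elements' are hypothetical names, and the
real vectorised helper operates on predicate bits rather than a scalar count.

    /* Counting with <= means bound == INT64_MAX marks every element true,
     * so the element count must be clamped exactly once, not in both the
     * inline code and the helper. */
    int count = 0;
    int64_t i = begin;
    while (count < num_elements && i <= bound) {
        count++;
        if (i == bound) {
            break;              /* avoid incrementing past the maximum value */
        }
        i++;
    }
    /* The helper then receives count * (bits per element) predicate bits. */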
Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Message-id: 20180801123111.3595-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Used the wrong temporary in the computation of subtractive overflow.
Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Message-id: 20180801123111.3595-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The normal vector element is sign-extended before
comparing with the wide vector element.
Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Message-id: 20180801123111.3595-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Tailchaining is an optimization in handling of exception return
for M-profile cores: if we are about to pop the exception stack
for an exception return, but there is a pending exception which
is higher priority than the priority we are returning to, then
instead of unstacking and then immediately taking the exception
and stacking registers again, we can chain to the pending
exception without unstacking and stacking.
For v6M and v7M it is IMPDEF whether tailchaining happens for pending
exceptions; for v8M this is architecturally required. Implement it
in QEMU for all M-profile cores, since in practice v6M and v7M
hardware implementations generally do have it.
(We were already doing tailchaining for derived exceptions which
happened during exception return, like the validity checks and
stack access failures; these have always been required to be
tailchained for all versions of the architecture.)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180720145647.8810-5-peter.maydell@linaro.org
On exception return for M-profile, we must restore the CONTROL.SPSEL
bit from the EXCRET value before we do any kind of tailchaining,
including for the derived exceptions on integrity check failures.
Otherwise we will give the guest an incorrect EXCRET.SPSEL value on
exception entry for the tailchained exception.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180720145647.8810-4-peter.maydell@linaro.org
In do_v7m_exception_exit(), we use the exc_secure variable to track
whether the exception we're returning from is secure or non-secure.
Unfortunately the statement initializing this was accidentally
inside an "if (env->v7m.exception != ARMV7M_EXCP_NMI)" conditional,
which meant that we were using the wrong value for NMI handlers.
Move the initialization out to the right place.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180720145647.8810-3-peter.maydell@linaro.org
Improve the exception-taken logging by logging in
v7m_exception_taken() the exception we're going to take
and whether it is secure/nonsecure.
This requires us to move logging at many callsites from after the
call to before it, so that the logging appears in a sensible order.
(This will make tail-chaining produce more useful logs; for the
current callers of v7m_exception_taken() we know which exception
we're going to take, so custom log messages at the callsite sufficed;
for tail-chaining only v7m_exception_taken() knows the exception
number that we're going to tail-chain to.)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180720145647.8810-2-peter.maydell@linaro.org
One of the required effects of setting HCR_EL2.TGE is that when
SCR_EL3.NS is 1 then SCTLR_EL1.M must behave as if it is zero for
all purposes except direct reads. That is, it effectively disables
the MMU for the NS EL0/EL1 translation regime.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180724115950.17316-6-peter.maydell@linaro.org
The IMO, FMO and AMO bits in HCR_EL2 are defined to "behave as
1 for all purposes other than direct reads" if HCR_EL2.TGE
is set and HCR_EL2.E2H is 0, and to "behave as 0 for all
purposes other than direct reads" if HCR_EL2.TGE is set
and HCR_EL2.E2H is 1.
To avoid having to check E2H and TGE everywhere where we test IMO and
FMO, provide accessors arm_hcr_el2_imo(), arm_hcr_el2_fmo() and
arm_hcr_el2_amo(). We don't implement ARMv8.1-VHE yet, so the E2H
case will never be true, but we include the logic to save effort when
we eventually do get to that.
(Note that in several of these callsites the change doesn't
actually make a difference as either the callsite is handling
TGE specially anyway, or the CPU can't get into that situation
with TGE set; we change everywhere for consistency.)
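A hedged sketch of what such an accessor can look like, directly encoding the
two "behave as" rules; the HCR_* masks and the cp15 field name are the usual
target/arm conventions but are shown here as an assumption.

    static inline bool arm_hcr_el2_imo(CPUARMState *env)
    {
        if (env->cp15.hcr_el2 & HCR_TGE) {
            /* TGE set: IMO behaves as !E2H for everything but direct reads. */
            return !(env->cp15.hcr_el2 & HCR_E2H);
        }
        return env->cp15.hcr_el2 & HCR_IMO;
    }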
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180724115950.17316-5-peter.maydell@linaro.org
When we raise a synchronous exception, if HCR_EL2.TGE is set then
exceptions targeting NS EL1 must be redirected to EL2. Implement
this in raise_exception() -- all synchronous exceptions go through
this function.
(Asynchronous exceptions go via arm_cpu_exec_interrupt(), which
already honours HCR_EL2.TGE when it determines the target EL
in arm_phys_excp_target_el().)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180724115950.17316-4-peter.maydell@linaro.org
Some debug registers can be trapped via MDCR_EL2 bits TDRA, TDOSA,
and TDA, which we implement in the functions access_tdra(),
access_tdosa() and access_tda(). If MDCR_EL2.TDE or HCR_EL2.TGE
are 1, the TDRA, TDOSA and TDA bits should behave as if they were 1.
Implement this by having the access functions check MDCR_EL2.TDE
and HCR_EL2.TGE.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180724115950.17316-3-peter.maydell@linaro.org
If the "trap general exceptions" bit HCR_EL2.TGE is set, we
must mask all virtual interrupts (as per DDI0487C.a D1.14.3).
Implement this in arm_excp_unmasked().
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180724115950.17316-2-peter.maydell@linaro.org
Now that we have full support for small regions, including execution,
we can remove the workarounds where we marked all small regions as
non-executable for the M-profile MPU and SAU.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Cédric Le Goater <clg@kaod.org>
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180710160013.26559-7-peter.maydell@linaro.org
Forbid stack alignment change. (CCR)
Reserve FAULTMASK, BASEPRI registers.
Report any fault as a HardFault. Disable MemManage, BusFault and
UsageFault, so they are always escalated to HardFault. (SHCSR)
Signed-off-by: Julia Suvorova <jusual@mail.ru>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20180718095628.26442-1-jusual@mail.ru
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
MSR handling is the only place where CONTROL.nPRIV is modified.
Signed-off-by: Julia Suvorova <jusual@mail.ru>
Message-id: 20180705222622.17139-1-jusual@mail.ru
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The instance_init function of the xtensa CPUs creates a memory region,
but does not set an owner, so the memory region is not destroyed
correctly when the CPU object is removed. This can happen when
introspecting the CPU devices, so introspecting the CPU device will
leave a dangling memory region object in the QOM tree. Make sure to
set the right owner here to fix this issue.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Acked-by: Max Filippov <jcmvbkbc@gmail.com>
Message-id: 1532005320-17794-1-git-send-email-thuth@redhat.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Currently the migration code incorrectly treats a subsection with
no .needed function pointer as if it was the subsection list
terminator -- it is ignored and so is everything after it.
Work around this by giving various M profile vmstate structs
a 'needed' function that always returns true.
We reuse m_needed() for this, since it's always true here.
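For illustration, a subsection wired up this way could look like the
following sketch (the subsection name and fields are hypothetical):

    static const VMStateDescription vmstate_m_example = {
        .name = "cpu/m/example",        /* hypothetical subsection */
        .version_id = 1,
        .minimum_version_id = 1,
        .needed = m_needed,             /* always true here; stops the broken
                                           migration code from treating this
                                           as the list terminator */
        .fields = (VMStateField[]) {
            /* ... the actual state fields ... */
            VMSTATE_END_OF_LIST()
        }
    };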
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180806123445.1459-4-peter.maydell@linaro.org
Since 86f0a186d6 the TYPE_ARM_HOST_CPU is only compiled when CONFIG_KVM
is enabled.
Remove the now redundant special-case introduced in a96c0514ab, to avoid:
$ qemu-system-aarch64 -machine virt -cpu \? | fgrep host
host
host (only available in KVM mode)
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180727132311.2777-1-f4bug@amsat.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
MSR_SMI_COUNT started being migrated in QEMU 2.12. Do not migrate it
on older machine types, or the subsection causes a load failure for
guests that use SMM.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Rename DCACHE to DATA_CACHE and ICACHE to INSTRUCTION_CACHE.
This avoids conflict with Linux asm/cachectl.h macros and fixes
build failure on mips hosts.
Reported-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <20180717194010.30096-1-ehabkost@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Acked-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Reviewed-by: Babu Moger <babu.moger@amd.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
To correctly handle small (less than TARGET_PAGE_SIZE) MPU regions,
we must correctly handle the case where the address being looked
up hits in an MPU region that is not small but the address is
in the same page as a small region. For instance if MPU region
1 covers an entire page from 0x2000 to 0x2400 and MPU region
2 is small and covers only 0x2200 to 0x2280, then for an access
to 0x2000 we must not return a result covering the full page
even though we hit the page-sized region 1. Otherwise we will
then cache that result in the TLB and accesses that should
hit region 2 will incorrectly find the region 1 information.
Check for the case where we miss an MPU region but it is still
within the same page, and in that case narrow the size we will
pass to tlb_set_page_with_attrs() for whatever the final
outcome is of the MPU lookup.
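Illustrative-only sketch of the narrowing (variable names are hypothetical,
not the actual patch): if some other MPU region starts inside the page we
are about to map, don't let the TLB cache the whole page.

    hwaddr page_base = address & TARGET_PAGE_MASK;
    /* 'other_base' is the base of an MPU region other than the one we hit */
    if (other_base > page_base &&
        other_base <= page_base + TARGET_PAGE_SIZE - 1) {
        *page_size = 1;   /* later handed to tlb_set_page_with_attrs() */
    }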
Reported-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180716133302.25989-1-peter.maydell@linaro.org
Usually, when baselining two CPU models, whereby one of them has base
CPU features disabled (e.g. z14-base,msa=off), we fall back to an older
model that did not have these features in the base model. We always try to
create a "sane" CPU model (as far as possible), and one part of it is that
removing base features is no good and to be avoided.
Now, if we disable base features that were part of a z900, we're out of
luck. We won't find a CPU model and QEMU will segfault. This is a
scenario that should never happen in real life, but it can be used to
crash QEMU.
So let's properly report an error if we baseline e.g.:
{ "execute": "query-cpu-model-baseline",
"arguments" : { "modela": { "name": "z14-base", "props": {"esan3" : false}},
"modelb": { "name": "z14"}} }
instead of segfaulting.
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20180718092330.19465-1-david@redhat.com>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
'I' was being double-incremented; correctly within the inner loop
and incorrectly within the outer loop.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20180711103957.3040-1-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Hyper-V identifies vCPUs by Virtual Processor (VP) index which can be
queried by the guest via HV_X64_MSR_VP_INDEX msr. It is defined by the
spec as a sequential number which can't exceed the maximum number of
vCPUs per VM.
It has to be owned by QEMU in order to preserve it across migration.
However, the initial implementation in KVM didn't allow this MSR to be
set, and KVM used its own notion of VP index. Fortunately, the way
vCPUs are created in QEMU/KVM makes it likely that the KVM value is
equal to QEMU cpu_index.
So choose cpu_index as the value for vp_index, and push that to KVM on
kernels that support setting the msr. On older ones that don't, query
the kernel value and assert that it's in sync with QEMU.
Besides, since handling errors from vCPU init at hotplug time is
impossible, disable vCPU hotplug.
This patch also introduces accessor functions to encapsulate the mapping
between a vCPU and its vp_index.
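A hedged sketch of such an accessor (the exact signature in the patch may
differ):

    /* Map a vCPU to its Hyper-V VP index; with the scheme above this is
     * simply the QEMU cpu_index. */
    uint32_t hyperv_vp_index(CPUState *cs)
    {
        return cs->cpu_index;
    }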
Signed-off-by: Roman Kagan <rkagan@virtuozzo.com>
Message-Id: <20180702134156.13404-3-rkagan@virtuozzo.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
In Hyper-V-related code, vCPUs are identified by their VP (virtual
processor) index. Since it's customary for "vcpu_id" in QEMU to mean
APIC id, rename the respective variables to "vp_index" to make the
distinction clear.
Signed-off-by: Roman Kagan <rkagan@virtuozzo.com>
Message-Id: <20180702134156.13404-2-rkagan@virtuozzo.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This patch adds field with content of KERNEL_GS_BASE MSR to QEMU note in
ELF dump.
On Windows, if all vCPUs are running usermode tasks at the time the dump is
created, this can be helpful in the discovery of guest system structures
during conversion of the ELF dump to a MEMORY.DMP dump.
Signed-off-by: Viktor Prutyanov <viktor.prutyanov@virtuozzo.com>
Message-Id: <20180714123000.11326-1-viktor.prutyanov@virtuozzo.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
For M-profile exception returns, the mmu index to use for exception
return unstacking is supposed to be that of wherever we are returning to:
* if returning to handler mode, privileged
* if returning to thread mode, privileged or unprivileged depending on
CONTROL.nPRIV for the destination security state
We were passing the wrong thing as the 'priv' argument to
arm_v7m_mmu_idx_for_secstate_and_priv(). The effect was that guests
which programmed the MPU to behave differently for privileged and
unprivileged code could get spurious MemManage Unstack exceptions.
Reported-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180709124535.1116-1-peter.maydell@linaro.org
The translator loop does not allow the tb_start hook to set
dc->base.is_jmp; the only hook allowed to do that is translate_insn.
Split the work between init_disas_context where we validate
the gUSA parameters, and translate_insn where we emit code.
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Use MAKE_64BIT_MASK instead of open-coding. Remove an odd
vector size check that is unlikely to be more profitable
than 3 64-bit integer stores. Correct the iteration for WORD
to avoid writing too much data.
Fixes RISU tests of PTRUE for VL 256.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20180705191929.30773-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
These instructions must perform the sve_access_check, but
since they are implemented as NOPs there is no generated
code to elide when the access check fails.
Fixes: Coverity issues 1393780 & 1393779.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
When I try to build a ppc64 target on a ppc64 host (gcc 8.1.1), I have:
.../target/ppc/int_helper.c: In function 'helper_vinsertb':
.../target/ppc/int_helper.c:1954:32: error: array subscript 18446744073709551608 is above array bounds of 'uint8_t[16]' {aka 'unsigned char[16]'} [-Werror=array-bounds]
memmove(&r->u8[index], &b->u8[8 - sizeof(r->element)], \
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.../target/ppc/int_helper.c:1965:1: note: in expansion of macro 'VINSERT'
If we compare with the macro for ppc64le, we can see
sizeof(r->element[0]) should be used instead of sizeof(r->element).
And VINSERT uses only u8, u16, u32 and u64, so the maximum value
of sizeof(r->element[0]) is 8.
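The fix therefore boils down to changing the element size used in that
line, roughly:

    -    memmove(&r->u8[index], &b->u8[8 - sizeof(r->element)], \
    +    memmove(&r->u8[index], &b->u8[8 - sizeof(r->element[0])], \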
Suggested-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Since commit d6dcc5583e, '-cpu ?' shows the description of the
X86_CPU_TYPE_NAME("max") for the host CPU model:
Enables all features supported by the accelerator in the current host
instead of the expected:
KVM processor with all supported host features
or
HVF processor with all supported host features
This is caused by the early use of kvm_enabled() and hvf_enabled() in
a class_init function. Since the accelerator isn't configured yet, both
helpers return false unconditionally.
A QEMU binary will only be compiled with one of these accelerators, not
both. The appropriate description can thus be decided at build time.
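A hedged sketch of the build-time selection, assuming the max CPU class
keeps its '-cpu help' text in a model_description field (the macro name is
made up):

    #if defined(CONFIG_KVM)
    # define MAX_CPU_DESC "KVM processor with all supported host features"
    #elif defined(CONFIG_HVF)
    # define MAX_CPU_DESC "HVF processor with all supported host features"
    #endif

    /* ... in the class_init function ... */
    xcc->model_description = MAX_CPU_DESC;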
Signed-off-by: Greg Kurz <groug@kaod.org>
Message-Id: <153055056654.212317.4697363278304826913.stgit@bahia.lan>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Mostly patches from Richard Henderson fixing multiple things:
* Fix singlestepping in GDB.
* Use more TB linking.
* Fixes to exit TB after updating SPRs to enable registering of state
changes.
* Significant optimizations and refactors to the TLB
* Split out disassembly from translation.
* Add qemu-or1k to qemu-binfmt-conf.sh.
* Implement signal handling for linux-user.
Then there are a few fixups from me:
* Fix delay slot detection to match hardware; this was masking a bug
in the Linux kernel.
* Fix stores to the PIC mask register
Merge remote-tracking branch 'remotes/shorne/tags/pull-or-20180703' into staging
OpenRISC cleanups and Fixes for QEMU 3.0
Mostly patches from Richard Henderson fixing multiple things:
* Fix singlestepping in GDB.
* Use more TB linking.
* Fixes to exit TB after updating SPRs to enable registering of state
changes.
* Significant optimizations and refactors to the TLB
* Split out disassembly from translation.
* Add qemu-or1k to qemu-binfmt-conf.sh.
* Implement signal handling for linux-user.
Then there are a few fixups from me:
* Fix delay slot detection to match hardware; this was masking a bug
in the Linux kernel.
* Fix stores to the PIC mask register
# gpg: Signature made Tue 03 Jul 2018 14:44:10 BST
# gpg: using RSA key C3B31C2D5E6627E4
# gpg: Good signature from "Stafford Horne <shorne@gmail.com>"
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: D9C4 7354 AEF8 6C10 3A25 EFF1 C3B3 1C2D 5E66 27E4
* remotes/shorne/tags/pull-or-20180703: (25 commits)
target/openrisc: Fix writes to interrupt mask register
target/openrisc: Fix delay slot exception flag to match spec
linux-user: Fix struct sigaltstack for openrisc
linux-user: Implement signals for openrisc
target/openrisc: Add support in scripts/qemu-binfmt-conf.sh
target/openrisc: Reorg tlb lookup
target/openrisc: Increase the TLB size
target/openrisc: Stub out handle_mmu_fault for softmmu
target/openrisc: Use identical sizes for ITLB and DTLB
target/openrisc: Fix cpu_mmu_index
target/openrisc: Fix tlb flushing in mtspr
target/openrisc: Reduce tlb to a single dimension
target/openrisc: Merge mmu_helper.c into mmu.c
target/openrisc: Remove indirect function calls for mmu
target/openrisc: Merge tlb allocation into CPUOpenRISCState
target/openrisc: Form the spr index from tcg
target/openrisc: Exit the TB after l.mtspr
target/openrisc: Split out is_user
target/openrisc: Link more translation blocks
target/openrisc: Fix singlestep_enabled
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Here's a last minute pull request before today's soft freeze. Ideally
I would have sent this earlier, but I was waiting for a couple of
extra fixes I knew were close. And the freeze crept up on me, like
always.
Most of the changes here are bugfixes in any case. There are some
cleanups as well, which have been in my staging tree for a little
while. There are a couple of truly new features (some extensions to
the sam460ex platform), but these are low risk, since they only affect
a new and not really stabilized machine type anyway.
Highlights are:
* Mac platform improvements from Mark Cave-Ayland
* Sam460ex improvements from BALATON Zoltan et al.
* XICS interrupt handler cleanups from Cédric Le Goater
* TCG improvements for atomic loads and stores from Richard
Henderson
* Assorted other bugfixes
Merge remote-tracking branch 'remotes/dgibson/tags/ppc-for-3.0-20180703' into staging
ppc patch queue 2018-07-03
Here's a last minute pull request before today's soft freeze. Ideally
I would have sent this earlier, but I was waiting for a couple of
extra fixes I knew were close. And the freeze crept up on me, like
always.
Most of the changes here are bugfixes in any case. There are some
cleanups as well, which have been in my staging tree for a little
while. There are a couple of truly new features (some extensions to
the sam460ex platform), but these are low risk, since they only affect
a new and not really stabilized machine type anyway.
Highlights are:
* Mac platform improvements from Mark Cave-Ayland
* Sam460ex improvements from BALATON Zoltan et al.
* XICS interrupt handler cleanups from Cédric Le Goater
* TCG improvements for atomic loads and stores from Richard
Henderson
* Assorted other bugfixes
# gpg: Signature made Tue 03 Jul 2018 06:55:22 BST
# gpg: using RSA key 6C38CACA20D9B392
# gpg: Good signature from "David Gibson <david@gibson.dropbear.id.au>"
# gpg: aka "David Gibson (Red Hat) <dgibson@redhat.com>"
# gpg: aka "David Gibson (ozlabs.org) <dgibson@ozlabs.org>"
# gpg: aka "David Gibson (kernel.org) <dwg@kernel.org>"
# Primary key fingerprint: 75F4 6586 AE61 A66C C44E 87DC 6C38 CACA 20D9 B392
* remotes/dgibson/tags/ppc-for-3.0-20180703: (35 commits)
ppc: Include vga cirrus card into the compiling process
target/ppc: Relax reserved bitmask of indexed store instructions
target/ppc: set is_jmp on ppc_tr_breakpoint_check
spapr: compute default value of "hpt-max-page-size" later
target/ppc/kvm: don't pass cpu to kvm_get_smmu_info()
target/ppc/kvm: get rid of kvm_get_fallback_smmu_info()
ppc440_uc: Basic emulation of PPC440 DMA controller
sam460ex: Add RTC device
hw/timer: Add basic M41T80 emulation
ppc4xx_i2c: Rewrite to model hardware more closely
hw/ppc: Give sam46ex its own config option
fpu_helper.c: fix setting FPSCR[FI] bit
target/ppc: Implement the rest of gen_st_atomic
target/ppc: Implement the rest of gen_ld_atomic
target/ppc: Use atomic min/max helpers
target/ppc: Use MO_ALIGN for EXIWX and ECOWX
target/ppc: Split out gen_st_atomic
target/ppc: Split out gen_ld_atomic
target/ppc: Split out gen_load_locked
target/ppc: Tidy gen_conditional_store
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
# Conflicts:
# hw/ppc/spapr.c
The interrupt controller mask register (PICMR) allows writing any value
to any of the 32 interrupt mask bits. Writing a 0 masks the interrupt;
writing a 1 unmasks (enables) the interrupt.
For some reason the old code was or'ing the written values into the PICMR,
meaning it was not possible to ever mask an interrupt once it was
enabled.
I have tested this by running Linux 4.18 and my regular checks; I don't
see any issues.
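The fix is essentially to store the written value instead of OR-ing it in,
roughly:

    case TO_SPR(9, 0): /* PICMR */
        env->picmr = rb;        /* was: env->picmr |= rb; */
        break;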
Reported-by: Davidson Francis <davidsondfgl@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Stafford Horne <shorne@gmail.com>
The delay slot exception flag is only set on the SR register during
exception. Previously it was being set on both the ESR and SR; this
caused QEMU to differ from the spec. This was apparent as the Linux
kernel had a bug where it could boot on QEMU but not on real hardware.
The fixed logic now matches hardware.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Stafford Horne <shorne@gmail.com>
All of the existing code was boilerplate from elsewhere,
and would crash the guest upon the first signal.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Stafford Horne <shorne@gmail.com>
---
v2:
Add a comment to the new definition of target_pt_regs.
Install the signal mask into the ucontext.
v3:
Incorporate feedback from Laurent.
While openrisc has a split i/d tlb, qemu does not. Perform a
lookup on both i & d tlbs in parallel and put the composite
rights into qemu's tlb. This avoids ping-ponging the qemu tlb
between EXEC and READ.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Stafford Horne <shorne@gmail.com>
The PPC440 User Manual says that if bit 31 is set, the contents of
CR[CR0] are undefined for indexed store instructions but this form is
not invalid. Other PPC variants conforming to a recent ISA, where this
bit may be reserved, should ignore reserved bits and not raise an invalid
instruction exception. In particular, MorphOS has an stwx instruction
with bit 31 set and fails to boot currently because of this. With this
patch it gets further.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The use of GDB breakpoints was broken by b0c2d52 ("target/ppc: convert
to TranslatorOps", 2018-02-16).
Fix it by setting is_jmp, so that we break from the translation loop
as originally intended.
Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reported-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
In a future patch the machine code will need to retrieve the MMU
information from KVM during machine initialization before the CPUs
are created.
Actually, KVM_PPC_GET_SMMU_INFO is a VM class ioctl, and thus, we
don't need to have a CPU object around. We just need KVM to be
initialized and to use the kvm_state global. This patch just does
that.
Signed-off-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Now that we're checking our MMU configuration is supported by KVM,
rather than adjusting it to KVM, it doesn't really make sense to
have a fallback for kvm_get_smmu_info(). If KVM is too old or buggy
to provide the details, we should rather treat this as an error.
This patch thus adds error reporting to kvm_get_smmu_info() and gets
rid of the fallback code. QEMU will now terminate if KVM fails to
provide MMU details. This may break some very old setups, but the
simplification is worth the sacrifice.
Signed-off-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The FPSCR[FI] bit indicates if the last floating point instruction had a
result that was rounded. Each consecutive floating point instruction is
supposed to set this bit to the correct value. What currently happens is
that this bit is not set as often as it should be. I have verified that
this is the behavior of a real PowerPC 950. This patch fixes the problem
by setting this bit after each floating point instruction.
https://www.pdfdrive.net/powerpc-microprocessor-family-the-programming-environments-for-32-e3087633.html
Page 63 in table 2-4 is where the description of this bit can be found.
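A hedged sketch of the idea (not the exact patch): after each floating
point operation, mirror softfloat's inexact flag into FPSCR[FI].

    int flags = get_float_exception_flags(&env->fp_status);
    if (flags & float_flag_inexact) {
        env->fpscr |= 1ULL << FPSCR_FI;
    } else {
        env->fpscr &= ~(1ULL << FPSCR_FI);
    }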
Signed-off-by: John Arbuckle <programmingkidx@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The store twin case was stubbed out. For now, implement it only within
a serial context, forcing parallel execution to synchronize. It would
be possible to implement with a cmpxchg loop, if we care, but the loose
alignment requirements (simply not crossing a 32-byte boundary) might send
us back to the serial context anyway.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
These cases were stubbed out. For now, implement them only within
a serial context, forcing parallel execution to synchronize. It
would be possible to implement these with cmpxchg loops, if we care.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
These operations were previously unimplemented for ppc.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This avoids the need for gen_check_align entirely.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Move the guts of ST_ATOMIC to a function. Use foo_tl for the operations
instead of foo_i32 or foo_i64 specifically. Use MO_ALIGN instead of an
explicit call to gen_check_align.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Move the guts of LD_ATOMIC to a function. Use foo_tl for the operations
instead of foo_i32 or foo_i64 specifically. Use MO_ALIGN instead of an
explicit call to gen_check_align.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Leave only the minimal amount of code within the LDAR macro,
moving the rest of the code into gen_load_locked. Use MO_ALIGN
and remove the explicit call to gen_check_align.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Leave only the minimal amount of code within the STCX macro,
moving the rest of the code into gen_conditional_store.
Remove the explicit call to gen_check_align; the matching LDAX will
have already checked alignment, and we verify the same address.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Always use the gen_conditional_store implementation that uses
atomic_cmpxchg. Make sure to clear reserve_addr across most
interrupts crossing the cpu_loop.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
When running in a parallel context, we must use a helper in order
to perform the 128-bit atomic operation. When running in a serial
context, do the compare before the store.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Section 1.4 of the Power ISA v3.0B states that this insn is
single-copy atomic. As we cannot (yet) issue 128-bit stores
within TCG, use the generic helpers provided.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Section 1.4 of the Power ISA v3.0B states that both of these
instructions are single-copy atomic. As we cannot (yet) issue
128-bit loads within TCG, use the generic helpers provided.
Since TCG cannot (yet) return a 128-bit value, add a slot within
CPUPPCState for returning the high half of a 128-bit return value.
This solution is preferred to the helper assigning to architectural
registers directly, as it avoids clobbering all TCG live values.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This allows faults from MO_ALIGN to have the same effect
as from gen_check_align.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The architecture supports 128 TLB entries. There is no reason
not to provide all of them. In the process we need to fix a
bug that failed to parameterize the configuration register that
tells the operating system the number of entries.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Stafford Horne <shorne@gmail.com>
---
v2:
- Change VMState version.
This hook is only used by CONFIG_USER_ONLY.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Stafford Horne <shorne@gmail.com>
The sizes are already the same, however, we can improve things
if they are identical by design.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Stafford Horne <shorne@gmail.com>
The code in cpu_mmu_index does not properly honor SR_DME.
This bug has workarounds elsewhere in that we flush the
tlb more often than necessary, on the state changes that
should be reflected in a change of mmu_index.
Fixing this means that we can respect the mmu_index that
is given to tlb_flush.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Stafford Horne <shorne@gmail.com>
The previous code was confused, avoiding the flush of the old entry
if the new entry is invalid. We need to flush the old page if the
old entry is valid and the new page if the new entry is valid.
This bug was masked by over-flushing elsewhere.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Stafford Horne <shorne@gmail.com>
While we had defines for *_WAYS, we didn't define more than 1.
Reduce the complexity by eliminating this unused dimension.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Stafford Horne <shorne@gmail.com>
With tlb_fill in mmu.c, we can simplify things further.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Stafford Horne <shorne@gmail.com>
There is no reason to use an indirect branch instead
of simply testing the SR bits that control mmu state.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Stafford Horne <shorne@gmail.com>
There is no reason to allocate this separately. This was probably
copied from target/mips which makes the same mistake.
While doing so, move tlb into the clear-on-reset range. While not
all of the TLB bits are guaranteed zero on reset, all of the valid
bits are cleared, and the rest of the bits are unspecified.
Therefore clearing the whole of the TLB is correct.
Reviewed-by: Stafford Horne <shorne@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Stafford Horne <shorne@gmail.com>
Rather than pass base+offset to the helper, pass the full index.
In most cases the base is r0 and optimization yields a constant.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Stafford Horne <shorne@gmail.com>
A store to SR changes interrupt state, which should return
to the main loop to recognize that state.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Stafford Horne <shorne@gmail.com>
This allows us to limit the amount of ifdefs and isolate
the test for usermode.
Reviewed-by: Stafford Horne <shorne@gmail.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Stafford Horne <shorne@gmail.com>
Track direct jumps via dc->jmp_pc_imm. Use that in
preference to jmp_pc when possible. Emit goto_tb in
that case, and lookup_and_goto_tb otherwise.
Reviewed-by: Stafford Horne <shorne@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Stafford Horne <shorne@gmail.com>
We failed to store to cpu_pc before raising the exception,
which caused us to re-execute the same insn that we stepped.
Reviewed-by: Stafford Horne <shorne@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Stafford Horne <shorne@gmail.com>
No need to use the interrupt mechanisms when we can
simply exit the tb directly.
Reviewed-by: Stafford Horne <shorne@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Stafford Horne <shorne@gmail.com>
These values are unused.
Reviewed-by: Stafford Horne <shorne@gmail.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Stafford Horne <shorne@gmail.com>
Rather than emit disassembly while translating, reuse the
generated decoder to build a separate disassembler.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Stafford Horne <shorne@gmail.com>
Correct the output of the "info mem" and "info tlb" monitor commands to
correctly show canonical addresses.
In 48-bit addressing mode, the upper 16 bits of linear addresses are
equal to bit 47. In 57-bit addressing mode (LA57), the upper 7 bits of
linear addresses are equal to bit 56.
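A self-contained sketch of the sign extension involved (illustrative, not
the monitor code itself):

    /* Sign-extend a linear address to its canonical 64-bit form;
     * va_bits is 48 normally, or 57 when LA57 is enabled. */
    static uint64_t make_canonical(uint64_t addr, int va_bits)
    {
        int shift = 64 - va_bits;
        return (uint64_t)((int64_t)(addr << shift) >> shift);
    }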
Signed-off-by: Doug Gale <doug16k@gmail.com>
Message-Id: <20180617084025.29198-1-doug16k@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This implements NPT support for SVM by hooking into
x86_cpu_handle_mmu_fault where it reads the stage-1 page table. Whether
we need to perform this 2nd stage translation, and how, is decided
during vmrun and stored in hflags2, along with nested_cr3 and
nested_pg_mode.
As get_hphys performs a direct cpu_vmexit in case of NPT faults, we need
retaddr in that function. To avoid changing the signature of
cpu_handle_mmu_fault, this passes the value from tlb_fill to get_hphys
via the CPU state.
This was tested successfully via the Jailhouse hypervisor.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Message-Id: <567473a0-6005-5843-4c73-951f476085ca@web.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
It eases code review; the unit is explicit.
Patch generated using:
$ git grep -E '(1024|2048|4096|8192|(<<|>>).?(10|20|30))' hw/ include/hw/
and modified manually.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20180625124238.25339-33-f4bug@amsat.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Missing break when this feature was added in 89e71e873d
("target/openrisc: implement shadow registers"). This was causing
strange issues as we get writes into the translation block jump cache
and other bits of state.
Fixes: 89e71e873d ("target/openrisc: implement shadow registers")
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Stafford Horne <shorne@gmail.com>
Add support for Hyper-V TLB flush which recently got added to KVM.
Just like regular Hyper-V we announce HV_EX_PROCESSOR_MASKS_RECOMMENDED
regardless of how many vCPUs we have. Windows is 'smart' and uses the less
expensive non-EX hypercall whenever possible (when it wants to flush the TLB
for all vCPUs, or when the maximum vCPU index in the vCPU set requiring
flushing is less than 64).
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20180610184927.19309-1-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
tcg_s390_tod_updated() is always called with the iothread being locked
(e.g. from S390TODClass->set() e.g. via HELPER(sck) or on incoming
migration). The helper we call takes the lock itself - bad.
Let's change that by factoring out updating the ckc timer. This now looks
much nicer than having to call a helper from another function.
While touching it we also make sure that env->ckc is updated even if the
new value is -1ULL; previously it would not have been modified in that case.
Reported-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20180629170520.13671-1-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Let's do this for completeness, although we don't support e.g.
PCDIMM/NVDIMM, which would use the alignment for placing the memory
region in guest physical memory. But maybe someday we will want to
support something like this; then we won't have forgotten about it when
allowing multiple allocations in legacy_s390_alloc().
Use the same alignment as we would set in qemu_anon_ram_alloc(). Our
fixed address satisfies this alignment (1MB). This implicitly sets the
alignment of the underlying memory region.
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20180628113817.30814-3-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
We always allocate at a fixed address, so a second allocation can of
course never work; we would simply overwrite mappings.
This can e.g. happen in s390_memory_init() when trying to allocate more
than 8TB. Let's just bail out, as there is no need for supporting it
(legacy handling for z/VM).
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20180628113817.30814-2-david@redhat.com>
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
run_on_cpu() doesn't seem to work reliably until the CPU has been fully
created if the single-threaded TCG main loop is already running.
Therefore, hotplugging a CPU under single-threaded TCG does not
currently work. We should use the direct call instead of going via
run_on_cpu().
So let's use run_on_cpu() for KVM only - KVM requires it due to the initial
CPU reset ioctl. As a nice side effect, we get rid of the ifdef.
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20180627134410.4901-10-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
If the CPU data is migrated after the TOD clock, the CKC timer of a CPU
is not rearmed. Let's rearm it when loading the CPU state.
Introduce tcg-stub.c just like kvm-stub.c for tcg specific stubs.
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20180627134410.4901-9-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
This allows a guest to change its TOD. We already take care of updating
all CKC timers from within S390TODClass.
Use MO_ALIGN to load the operand manually - this will properly trigger a
SPECIFICATION exception.
Acked-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20180627134410.4901-8-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Let's stop the timer and delete any pending CKC IRQ before doing
anything else.
While at it, add a comment why the check for ckc == -1ULL is needed.
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20180627134410.4901-7-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Right now, each CPU has its own TOD. Especially, the TOD will differ
based on creation time of a CPU - e.g. when hotplugging a CPU the times
will differ quite a lot, resulting in stall warnings in the guest.
Let's use a single TOD by implementing our new TOD device. Prepare it
for TOD-clock epoch extension.
Most importantly, whenever we set the TOD, we have to update the CKC
timer.
Introduce "tcg_s390x.h" just like "kvm_s390x.h" for tcg specific
function declarations that should not go into cpu.h.
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20180627134410.4901-6-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Never set to anything but 0.
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20180627134410.4901-5-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Let's treat this like a separate device. TCG will have to store the
actual state/time later on.
Include cpu-qom.h in kvm_s390x.h (due to S390CPU) to compile tod-kvm.c.
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20180627134410.4901-4-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
We are going to factor out the TOD into a separate device and use const
pointers for device class functions where possible. Right now we are
passing ordinary pointers that should never be touched when setting the TOD.
Let's just pass the values directly.
Note that s390_set_clock() will be removed in a follow-on patch and
therefore its calling convention is not changed.
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20180627134410.4901-3-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Big values for the TOD/ns clock can result in some overflows that can be
avoided. Not all overflows can be handled, however, as the conversion
either multiplies or divides by 4.096.
Apply the trick used in the Linux kernel in arch/s390/include/asm/timex.h
for tod_to_ns() and use the same trick also for the conversion in the
other direction.
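The TOD-to-ns direction, as a hedged sketch of the kernel's trick (one TOD
clock unit is 125/512 ns, i.e. 1/4.096 ns): split the value so the
multiplication by 125 cannot overflow 64 bits.

    static inline uint64_t tod_to_ns(uint64_t todval)
    {
        return (todval >> 9) * 125 + (((todval & 0x1ff) * 125) >> 9);
    }

The ns-to-TOD direction applies the same split with the factors swapped.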
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20180627134410.4901-2-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Most systems and host kernels provide the necessary building blocks for
bpb and ppa15. We can reverse the logic and enable those features by
default, while still allowing them to be disabled via the CPU model.
So let us add bpb and ppa15 to the z196 and later default CPU models for
the QEMU 3.0 machine (like -cpu z13). Older machine types (e.g.
s390-ccw-virtio-2.12) will retain the old value and not provide those
bits in the default model.
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Message-Id: <20180626123830.18282-1-borntraeger@de.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
The rom_ptr() function allows direct access to the ROM blobs that we
load during startup. However, there are currently no checks for the
size of the accesses, so it's currently possible to crash QEMU for
example with:
$ echo "Insane in the mainframe" > /tmp/test.txt
$ s390x-softmmu/qemu-system-s390x -kernel /tmp/test.txt -append xyz
Segmentation fault (core dumped)
$ s390x-softmmu/qemu-system-s390x -kernel /tmp/test.txt -initrd /tmp/test.txt
Segmentation fault (core dumped)
$ echo -n HdrS > /tmp/hdr.txt
$ sparc64-softmmu/qemu-system-sparc64 -kernel /tmp/hdr.txt -initrd /tmp/hdr.txt
Segmentation fault (core dumped)
We need a possibility to check the size of the ROM area that we want
to access, thus let's add a size parameter to the rom_ptr() function
to avoid these problems.
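With the size parameter, callers can be written along these lines (the
address and length here are purely illustrative):

    /* Ask for the number of bytes we are about to read; NULL now means the
     * loaded blob is too small for this access, so bail out instead of
     * reading past it. */
    uint8_t *p = rom_ptr(kernel_entry_addr, sizeof(uint64_t));
    if (p == NULL) {
        /* no ROM blob, or one that is too small, at this address */
        return;
    }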
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <1530005740-25254-1-git-send-email-thuth@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
The xtensa frontend calls get_page_addr_code() from its
itlb_hit_test helper function. This function is really part
of the TCG core's internals, and calling it from a target
helper makes it awkward to make changes to that core code.
It also means that we don't pass the correct retaddr to
tlb_fill(), so we won't correctly handle the case where
an exception is generated.
The helper is used for the instructions IHI, IHU and IPFL.
Change it to call cpu_ldb_code_ra() instead.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This will reduce the size of the patch in the next patch,
where the context will have to be a pointer.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
The usage of DISAS_UPDATE is after noreturn helpers.
It is thus indistinguishable from DISAS_NORETURN.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
The ISA book documents that the first instruction of a zero overhead loop
must fit completely into a naturally aligned region of instruction
fetch unit size. Check that condition and log a message if it's
violated.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
This register was added to aa32 state by ARMv8.2.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20180629001538.11415-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
There is no need to re-set these 3 features already
implied by the call to aarch64_a15_initfn.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20180629001538.11415-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
There is no need to re-set these 9 features already
implied by the call to aarch64_a57_initfn.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20180629001538.11415-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Leave ARM_CP_SVE, removing ARM_CP_FPU; the sve_access_check
produced by the flag already includes fp_access_check. If
we also check ARM_CP_FPU the double fp_access_check asserts.
Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Message-id: 20180629001538.11415-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We already check for the same condition within the normal integer
sdiv and sdiv64 helpers. Use a slightly different formation that
does not require deducing the expression type.
Fixes: f97cfd596e
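A hedged sketch of such a guard (QEMU is built with -fwrapv, so negating
the type's minimum value wraps to the architecturally expected result):

    /* Divide, guarding division by zero and the MIN / -1 case without
     * needing to know the element type's minimum value. */
    #define DO_SDIV(N, M) \
        (unlikely(M == 0) ? 0 : unlikely(M == -1) ? -N : N / M)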
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20180629001538.11415-2-richard.henderson@linaro.org
[PMM: reworded a comment]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This makes it match its AArch64 equivalent, PMINTENSET_EL1
Signed-off-by: Aaron Lindsay <alindsay@codeaurora.org>
Message-id: 1529699547-17044-13-git-send-email-alindsay@codeaurora.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
KVM implies V7VE, which implies ARM_DIV and THUMB_DIV. The conditional
detection here is therefore unnecessary. Because V7VE is already
unconditionally specified for all KVM hosts, ARM_DIV and THUMB_DIV are
already indirectly specified and do not need to be included here at all.
Signed-off-by: Aaron Lindsay <alindsay@codeaurora.org>
Message-id: 1529699547-17044-6-git-send-email-alindsay@codeaurora.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Since kernel commit a86bd139f2 (arm64: arch_timer: Enable CNTVCT_EL0
trap..), released in kernel version v4.12, user-space has been able
to read these system registers. As we can't use QEMUTimer's in
linux-user mode we just directly call cpu_get_clock().
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180625160009.17437-2-alex.bennee@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We've already added the helpers with an SVE patch, all that remains
is to wire up the aa64 and aa32 translators. Enable the feature
within -cpu max for CONFIG_USER_ONLY.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180627043328.11531-36-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Enable ARM_FEATURE_SVE for the generic "max" cpu.
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180627043328.11531-35-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20180627043328.11531-34-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180627043328.11531-33-richard.henderson@linaro.org
[PMM: moved 'ra=%reg_movprfx' here from following patch]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Enhance the existing helpers to support SVE, which takes the
index from each 128-bit segment. The change has no effect
for AdvSIMD, since there is only one such segment.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20180627043328.11531-32-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
For aa64 advsimd, we had been passing the pre-indexed vector.
However, sve applies the index to each 128-bit segment, so we
need to pass in the index separately.
For aa32 advsimd, the fp32 operation always has index 0, but
we failed to interpret the fp16 index correctly.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20180627043328.11531-31-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180627043328.11531-30-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180627043328.11531-29-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180627043328.11531-28-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180627043328.11531-27-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180627043328.11531-26-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180627043328.11531-25-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180627043328.11531-24-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180627043328.11531-23-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180627043328.11531-22-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180627043328.11531-21-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180627043328.11531-20-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180627043328.11531-19-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180627043328.11531-18-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180627043328.11531-17-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180627043328.11531-16-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180627043328.11531-15-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20180627043328.11531-14-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180627043328.11531-13-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180627043328.11531-12-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180627043328.11531-11-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180627043328.11531-10-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180627043328.11531-9-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20180627043328.11531-8-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180627043328.11531-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180627043328.11531-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180627043328.11531-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20180627043328.11531-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180627043328.11531-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
When guest CPU PM is enabled, and with -cpu host, expose the host CPU
MWAIT leaf in the CPUID so the guest can make good PM decisions.
Note: the result is 100% CPU utilization reported by the host, as the host
no longer knows that the CPU is halted.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <20180622192148.178309-3-mst@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
With this flag, KVM allows the guest to control the host CPU power state.
This increases latency for other processes using the same host CPU in an
unpredictable way, but it decreases idle entry/exit times for the
running VCPU. To use it, QEMU needs a hint about whether the host CPU is
overcommitted, hence the flag name.
Follow-up patches will expose this capability to guest
(using mwait leaf).
Based on a patch by Wanpeng Li <kernellwp@gmail.com> .
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Message-Id: <20180622192148.178309-2-mst@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Let's start to use "info pic" just like other platforms. For now we
keep the old command around for a while so that existing users can find
out which new command to use.
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20171229073104.3810-6-peterx@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
It calls cpu_loop_exit in system emulation mode (and should never be
called in user emulation mode).
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Message-Id: <6f4d44ffde55d074cbceb48309c1678600abad2f.1522769774.git.jan.kiszka@web.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
We need to terminate the translation block after STGI so that pending
interrupts can be injected.
This fixes pending NMI injection for Jailhouse which uses "stgi; clgi"
to open a brief injection window.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Message-Id: <37939b244dda0e9cccf96ce50f2b15df1e48315d.1522769774.git.jan.kiszka@web.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Check for SVM interception prior to injecting an NMI. Tested via the
Jailhouse hypervisor.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Message-Id: <c65877e9a011ee4962931287e59f502c482b8d0b.1522769774.git.jan.kiszka@web.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Some variations of Linux kernels end up accessing MSRs that the Windows
Hypervisor doesn't implement, which causes a GP to be returned to the guest.
This fix registers QEMU for unimplemented MSR accesses and globally returns 0
on reads and ignores writes. This behavior allows the Linux kernel to probe
the MSR with the write/read/check sequence it often does, without failing the
access.
Signed-off-by: Justin Terry (VM) <juterry@microsoft.com>
Message-Id: <20180605221500.21674-2-juterry@microsoft.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Adds a workaround for an incorrect value that set
CPUID Fn8000_0001_ECX[bit 9 OSVW] = 1. This can cause a guest Linux kernel
to panic when a rdmsr of C001_0140h returns 0. Disabling this feature
correctly allows the guest to boot without accessing the OSVW workarounds.
Signed-off-by: Justin Terry (VM) <juterry@microsoft.com>
Message-Id: <20180605221500.21674-1-juterry@microsoft.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The implementation of these two instructions was swapped.
At the same time, unify the setup of eflags for the insn group.
Reported-by: Ricardo Ribalda Delgado <ricardo.ribalda@gmail.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Message-Id: <20170712192902.15493-1-rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Place them in exec.c, exec-all.h and ram_addr.h. This removes
knowledge of translate-all.h (which is an internal header) from
several files outside accel/tcg and removes knowledge of
AddressSpace from translate-all.c (as it only operates on ram_addr_t).
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Fix gdbstub to read/write 64 bit FP registers
Signed-off-by: Yongbok Kim <yongbok.kim@mips.com>
Reviewed-by: Aleksandar Markovic <aleksandar.markovic@mips.com>
Signed-off-by: Aleksandar Markovic <aleksandar.markovic@mips.com>
The offset can be larger than 16 bits in nanoMIPS,
and the immediate field can be larger than 16 bits as well.
Signed-off-by: Yongbok Kim <yongbok.kim@mips.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Aleksandar Markovic <aleksandar.markovic@mips.com>
Signed-off-by: Aleksandar Markovic <aleksandar.markovic@mips.com>
Update gen_flt_ldst() in order to reuse the functions for nanoMIPS
Signed-off-by: Yongbok Kim <yongbok.kim@mips.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Aleksandar Markovic <aleksandar.markovic@mips.com>
Signed-off-by: Aleksandar Markovic <aleksandar.markovic@mips.com>
Fix to activate microMIPS on reset when Config3.ISA == {1, 3}
Signed-off-by: Yongbok Kim <yongbok.kim@mips.com>
Reviewed-by: Aleksandar Markovic <aleksandar.markovic@mips.com>
Signed-off-by: Aleksandar Markovic <aleksandar.markovic@mips.com>
Fix to raise a Reserved Instruction exception when the given fs is not
available from CTC1.
Signed-off-by: Yongbok Kim <yongbok.kim@mips.com>
Reviewed-by: Aleksandar Markovic <aleksandar.markovic@mips.com>
Signed-off-by: Aleksandar Markovic <aleksandar.markovic@mips.com>
Determining the size of a field is useful when you don't have a struct
variable handy. Open-coding this is ugly.
This patch adds the sizeof_field() macro, which is similar to
typeof_field(). Existing instances are updated to use the macro.
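As a rough, self-contained sketch of the idea (the struct and the exact macro
body below are illustrative, not necessarily QEMU's actual definitions):

    #include <stddef.h>
    #include <stdio.h>

    struct Packet {
        unsigned char header[16];
        unsigned int  payload_len;
    };

    /* Size of a struct member without needing an instance of the struct. */
    #define sizeof_field(type, field) sizeof(((type *)0)->field)

    int main(void)
    {
        printf("%zu\n", sizeof_field(struct Packet, header));      /* 16 */
        printf("%zu\n", sizeof_field(struct Packet, payload_len)); /* typically 4 */
        return 0;
    }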
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Message-id: 20180614164431.29305-1-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Allow ARMv8M to handle small MPU and SAU region sizes, by making
get_phys_addr_pmsav8() set the page size to 1 if the MPU or
SAU region covers less than a TARGET_PAGE_SIZE.
We choose to use a size of 1 because it makes no difference to
the core code, and avoids having to track both the base and
limit for SAU and MPU and then convert into an artificially
restricted "page size" that the core code will then ignore.
Since the core TCG code can't handle execution from small
MPU regions, we strip the exec permission from them so that
any execution attempts will cause an MPU exception, rather
than allowing it to end up with a cpu_abort() in
get_page_addr_code().
(The previous code's intention was to make any small page be
treated as having no permissions, but unfortunately errors
in the implementation meant that it didn't behave that way.
It's possible that some binaries using small regions were
accidentally working with our old behaviour and won't now.)
We also retain an existing bug, where we ignored the possibility
that the SAU region might not cover the entire page, in the
case of executable regions. This is necessary because some
currently-working guest code images rely on being able to
execute from addresses which are covered by a page-sized
MPU region but a smaller SAU region. We can remove this
workaround if we ever support execution from small regions.
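A minimal sketch of the approach described above, with invented names and
hard-coded placeholder values, just to show the shape of the fix:

    /* If the matched MPU/SAU region is smaller than a translation page,
     * report a tiny "page" and drop execute permission so any fetch
     * faults instead of reaching get_page_addr_code(). */
    #define TARGET_PAGE_SIZE 4096   /* placeholder value */
    #define PAGE_EXEC        0x4    /* placeholder permission bit */

    static void clamp_small_region(unsigned long region_size,
                                   unsigned long *page_size,
                                   int *prot)
    {
        if (region_size < TARGET_PAGE_SIZE) {
            *page_size = 1;        /* exact value is ignored by core code */
            *prot &= ~PAGE_EXEC;   /* execution must trap as an MPU fault */
        }
    }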
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180620130619.11362-4-peter.maydell@linaro.org
We want to handle small MPU region sizes for ARMv7M. To do this,
make get_phys_addr_pmsav7() set the page size to the region
size if it is less than TARGET_PAGE_SIZE, rather than working
only in TARGET_PAGE_SIZE chunks.
Since the core TCG code can't handle execution from small
MPU regions, we strip the exec permission from them so that
any execution attempts will cause an MPU exception, rather
than allowing it to end up with a cpu_abort() in
get_page_addr_code().
(The previous code's intention was to make any small page be
treated as having no permissions, but unfortunately errors
in the implementation meant that it didn't behave that way.
It's possible that some binaries using small regions were
accidentally working with our old behaviour and won't now.)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180620130619.11362-3-peter.maydell@linaro.org
Remove the generic non-Intel check while validating hyperthreading support.
Certain AMD CPUs can support hyperthreading now: CPU families with the
TOPOEXT feature can support it.
Signed-off-by: Babu Moger <babu.moger@amd.com>
Tested-by: Geoffrey McRae <geoff@hostfission.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <1529443919-67509-4-git-send-email-babu.moger@amd.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Enable TOPOEXT feature on EPYC CPU. This is required to support
hyperthreading on VM guests. Also extend xlevel to 0x8000001E.
Disable topoext on PC_COMPAT_2_12 and keep xlevel 0x8000000a.
Signed-off-by: Babu Moger <babu.moger@amd.com>
Message-Id: <1529443919-67509-3-git-send-email-babu.moger@amd.com>
[ehabkost: Added EPYC-IBPB.xlevel to PC_COMPAT_2_12]
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
This is part of topoext support. To keep compatibility, it is better to
support all the combinations of nr_cores and nr_threads currently
supported. By allowing more nr_cores and nr_threads, we might end up with
more nodes than the real hardware can actually support. We need to
fix up the node id to make this work. We can achieve this by shifting the
socket_id bits left to address more nodes.
Signed-off-by: Babu Moger <babu.moger@amd.com>
Message-Id: <1529443919-67509-2-git-send-email-babu.moger@amd.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Enabling the TOPOEXT feature might cause compatibility issues if
older kernels do not set this feature. Let's set this feature
unconditionally.
Signed-off-by: Babu Moger <babu.moger@amd.com>
Message-Id: <1528939107-17193-2-git-send-email-babu.moger@amd.com>
[ehabkost: rewrite comment and commit message]
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Future AMD CPUs expose a mechanism to tell the guest that the
Speculative Store Bypass Disable is not needed and that the
CPU is all good.
This is exposed via the CPUID 8000_0008.EBX[26] bit.
See 124441_AMD64_SpeculativeStoreBypassDisable_Whitepaper_final.pdf
A copy of this document is available at
https://bugzilla.kernel.org/show_bug.cgi?id=199889
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Message-Id: <20180601153809.15259-3-konrad.wilk@oracle.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Future AMD CPUs expose _two_ ways to utilize the Intel equivalent
of the Speculative Store Bypass Disable. The first is via
the virtualized VIRT_SPEC CTRL MSR (0xC001_011f) and the second
is via the SPEC_CTRL MSR (0x48). The document titled:
124441_AMD64_SpeculativeStoreBypassDisable_Whitepaper_final.pdf
gives priority to the SPEC CTRL MSR over the VIRT SPEC CTRL MSR.
A copy of this document is available at
https://bugzilla.kernel.org/show_bug.cgi?id=199889
Anyhow, this means that on future AMD CPUs there will be _two_ ways to
deal with SSBD.
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Message-Id: <20180601153809.15259-2-konrad.wilk@oracle.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
OSPKE is not a static feature flag: it changes dynamically at
runtime depending on CR4, and it was never configurable: KVM
never returned OSPKE on GET_SUPPORTED_CPUID, and TCG enables
it automatically if CR4_PKE_MASK is set.
Remove OSPKE from the feature name array so users don't try to
configure it manually.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <20180611203712.12086-1-ehabkost@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
OSXSAVE is not a static feature flag: it changes dynamically at
runtime depending on CR4, and it was never configurable: KVM
never returned OSXSAVE on GET_SUPPORTED_CPUID, and it is not
included in TCG_EXT_FEATURES.
Remove OSXSAVE from the feature name array so users don't try to
configure it manually.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <20180611203855.13269-1-ehabkost@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
When using '-cpu help' the list of CPUID features is grouped according
to the internal low level CPUID grouping. The data printed results in
very long lines too.
This combines to make it hard for users to read the output and identify
if QEMU knows about the feature they wish to use.
This change gets rid of the grouping of features and treats all flags as a
single list. The list is sorted into alphabetical order and printed with
line wrapping at the 77th column.
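Sketch of the sort-then-wrap printing described above (plain C, illustrative
only; QEMU's implementation uses its own helpers):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static int cmpstr(const void *a, const void *b)
    {
        return strcmp(*(const char * const *)a, *(const char * const *)b);
    }

    static void print_feature_names(const char **names, size_t n)
    {
        size_t col = 0;

        qsort(names, n, sizeof(*names), cmpstr);
        for (size_t i = 0; i < n; i++) {
            size_t len = strlen(names[i]) + 1;   /* name plus a space */
            if (col + len > 77) {                /* wrap at the 77th column */
                putchar('\n');
                col = 0;
            }
            printf("%s ", names[i]);
            col += len;
        }
        putchar('\n');
    }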
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20180606165527.17365-4-berrange@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
The current list of CPU model names output by "-cpu help" is sorted
alphabetically based on the internal QOM class name. The text that is
displayed, however, uses the CPU model name, which is equivalent to the
QOM class name, minus a suffix. Unfortunately that suffix has an effect
on the sort ordering, for example, causing the various Broadwell
variants to appear reversed:
x86 486
x86 Broadwell-IBRS Intel Core Processor (Broadwell, IBRS)
x86 Broadwell-noTSX-IBRS Intel Core Processor (Broadwell, no TSX, IBRS)
x86 Broadwell-noTSX Intel Core Processor (Broadwell, no TSX)
x86 Broadwell Intel Core Processor (Broadwell)
x86 Conroe Intel Celeron_4x0 (Conroe/Merom Class Core 2)
By sorting on the actual CPU model name text that is displayed, the
result is
x86 486
x86 Broadwell Intel Core Processor (Broadwell)
x86 Broadwell-IBRS Intel Core Processor (Broadwell, IBRS)
x86 Broadwell-noTSX Intel Core Processor (Broadwell, no TSX)
x86 Broadwell-noTSX-IBRS Intel Core Processor (Broadwell, no TSX, IBRS)
x86 Conroe Intel Celeron_4x0 (Conroe/Merom Class Core 2)
This requires extra string allocations during sorting, but this is not a
concern given the usage scenario and the number of CPU models that exist.
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20180606165527.17365-3-berrange@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Since the addition of the -IBRS CPU model variants, the descriptions
shown by '-cpu help' are not well aligned, as several model names
overflow the space allowed. Right aligning the CPU model names is also
not attractive, because it obscures the common name prefixes of many
models. The CPU model name field needs to be 4 characters larger, and
be left aligned instead.
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20180606165527.17365-2-berrange@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Add support for cpuid leaf CPUID_8000_001E. Build the config so that it
closely matches the underlying hardware. Please refer to the Processor Programming
Reference (PPR) for AMD Family 17h Model for more details.
Signed-off-by: Babu Moger <babu.moger@amd.com>
Message-Id: <1528498581-131037-2-git-send-email-babu.moger@amd.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
* hw/intc/arm_gicv3: fix wrong values when reading IPRIORITYR
* target/arm: fix read of freed memory in kvm_arm_machine_init_done()
* virt: support up to 512 CPUs
* virt: support 256MB ECAM PCI region (for more PCI devices)
* xlnx-zynqmp: Use Cortex-R5F, not Cortex-R5
* mps2-tz: Implement and use the TrustZone Memory Protection Controller
* target/arm: enforce alignment checking for v6M cores
* xen: Don't use memory_region_init_ram_nomigrate() in pci_assign_dev_load_option_rom()
* vl.c: Don't zero-initialize statics for serial_hds
Merge remote-tracking branch 'remotes/pmaydell/tags/pull-target-arm-20180622' into staging
target-arm queue:
* hw/intc/arm_gicv3: fix wrong values when reading IPRIORITYR
* target/arm: fix read of freed memory in kvm_arm_machine_init_done()
* virt: support up to 512 CPUs
* virt: support 256MB ECAM PCI region (for more PCI devices)
* xlnx-zynqmp: Use Cortex-R5F, not Cortex-R5
* mps2-tz: Implement and use the TrustZone Memory Protection Controller
* target/arm: enforce alignment checking for v6M cores
* xen: Don't use memory_region_init_ram_nomigrate() in pci_assign_dev_load_option_rom()
* vl.c: Don't zero-initialize statics for serial_hds
# gpg: Signature made Fri 22 Jun 2018 13:56:00 BST
# gpg: using RSA key 3C2525ED14360CDE
# gpg: Good signature from "Peter Maydell <peter.maydell@linaro.org>"
# gpg: aka "Peter Maydell <pmaydell@gmail.com>"
# gpg: aka "Peter Maydell <pmaydell@chiark.greenend.org.uk>"
# Primary key fingerprint: E1A5 C593 CD41 9DE2 8E83 15CF 3C25 25ED 1436 0CDE
* remotes/pmaydell/tags/pull-target-arm-20180622: (28 commits)
xen: Don't use memory_region_init_ram_nomigrate() in pci_assign_dev_load_option_rom()
vl.c: Don't zero-initialize statics for serial_hds
target/arm: Strict alignment for ARMv6-M and ARMv8-M Baseline
target/arm: Introduce ARM_FEATURE_M_MAIN
hw/arm/mps2-tz.c: Instantiate MPCs
hw/arm/iotkit: Wire up MPC interrupt lines
hw/arm/iotkit: Instantiate MPC
hw/misc/iotkit-secctl.c: Implement SECMPCINTSTATUS
hw/misc/tz_mpc.c: Honour the BLK_LUT settings in translate
hw/misc/tz-mpc.c: Implement correct blocked-access behaviour
hw/misc/tz-mpc.c: Implement registers
hw/misc/tz-mpc.c: Implement the Arm TrustZone Memory Protection Controller
xlnx-zynqmp: Swap Cortex-R5 for Cortex-R5F
target-arm: Add the Cortex-R5F
hw/arm/virt: Increase max_cpus to 512
hw/arm/virt: Use 256MB ECAM region by default
hw/arm/virt: Add virt-3.0 machine type
hw/arm/virt: Add a new 256MB ECAM region
hw/arm/virt: Register two redistributor regions when necessary
hw/arm/virt-acpi-build: Advertise one or two GICR structures
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Unlike ARMv7-M, ARMv6-M and ARMv8-M Baseline only support naturally
aligned memory accesses for load/store instructions.
Signed-off-by: Julia Suvorova <jusual@mail.ru>
Message-id: 20180622080138.17702-3-jusual@mail.ru
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This feature is intended to distinguish ARMv8-M variants: Baseline and
Mainline. ARMv7-M compatibility requires the Main Extension. ARMv6-M
compatibility is provided by all ARMv8-M implementations.
Signed-off-by: Julia Suvorova <jusual@mail.ru>
Message-id: 20180622080138.17702-2-jusual@mail.ru
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Add the Cortex-R5F with the optional FPU enabled.
Reviewed-by: KONRAD Frederic <frederic.konrad@adacore.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 20180529124707.3025-2-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
For the KVM_VGIC_V3_ADDR_TYPE_REDIST_REGION attribute, the attribute
data pointed to by kvm_device_attr.addr is an OR of the
redistributor region address and other fields such as the index
of the redistributor region and the number of redistributors the
region can contain.
The existing machine init done notifier framework sets the address
field to the actual address of the device and does not allow ORing
this value with other fields.
This patch extends the KVMDevice struct with a new kda_addr_ormask
member. Its value is passed at registration time and OR'ed with the
resolved address on kvm_arm_set_device_addr().
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1529072910-16156-3-git-send-email-eric.auger@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The elements of the kvm_devices_head list are freed in kvm_arm_machine_init_done(),
but we still access this freed memory in kvm_arm_devlistener_del().
This will cause a segmentation fault when booting a guest with MALLOC_PERTURB_=1.
Signed-off-by: Zheng Xiang <xiang.zheng@linaro.org>
Message-id: 20180619075821.9884-1-zhengxiang9@huawei.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The arrays were made static, and the "if" was simplified because V7M and V8M
define the V6 feature.
Signed-off-by: Julia Suvorova <jusual@mail.ru>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20180618214604.6777-1-jusual@mail.ru
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Currently during KVM initialization on POWER, kvm_fixup_page_sizes()
rewrites a bunch of information in the cpu state to reflect the
capabilities of the host MMU and KVM. This overwrites the information
that's already there reflecting how the TCG implementation of the MMU will
operate.
This means that we can get guest-visibly different behaviour between KVM
and TCG (and between different KVM implementations). That's bad. It also
prevents migration between KVM and TCG.
The pseries machine type now has filtering of the pagesizes it allows the
guest to use which means it can present a consistent model of the MMU
across all accelerators.
So, we can now replace kvm_fixup_page_sizes() with kvm_check_mmu() which
merely verifies that the expected cpu model can be faithfully handled by
KVM, rather than updating the cpu model to match KVM.
We call kvm_check_mmu() from the spapr cpu reset code. This is a hack:
conceptually it makes more sense where fixup_page_sizes() was - in the KVM
cpu init path. However, doing that would require moving the platform's
pagesize filtering much earlier, which would require a lot of work making
further adjustments. There wouldn't be a lot of concrete point to doing
that, since the only KVM implementation which has the awkward MMU
restrictions is KVM HV, which can only work with an spapr guest anyway.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
The paravirtualized PAPR platform sometimes needs to restrict the guest to
using only some of the page sizes actually supported by the host's MMU.
At the moment this is handled in KVM specific code, but for consistency we
want to apply the same limitations to all accelerators.
This makes a start on this by providing a helper function in the cpu code
to allow platform code to remove some of the cpu's page size definitions
via a caller supplied callback.
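A sketch of what such a callback-driven filter can look like; the type and
function names below are invented for illustration:

    #include <stdbool.h>
    #include <stddef.h>

    typedef bool (*PageSizeFilter)(unsigned int shift, void *opaque);

    typedef struct PageSizes {
        unsigned int shifts[8];   /* supported page-size shifts (e.g. 12, 16, 24) */
        size_t count;
    } PageSizes;

    /* Keep only the page sizes that the platform's callback accepts. */
    static void filter_page_sizes(PageSizes *ps, PageSizeFilter accept,
                                  void *opaque)
    {
        size_t out = 0;

        for (size_t i = 0; i < ps->count; i++) {
            if (accept(ps->shifts[i], opaque)) {
                ps->shifts[out++] = ps->shifts[i];
            }
        }
        ps->count = out;
    }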
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Greg Kurz <groug@kaod.org>
The way we used to handle KVM allowable guest pagesizes for PAPR guests
required some convoluted checking of memory attached to the guest.
The allowable pagesizes advertised to the guest cpus depended on the memory
which was attached at boot, but then we needed to ensure that any memory
later hotplugged didn't change which pagesizes were allowed.
Now that we have an explicit machine option to control the allowable
maximum pagesize we can simplify this. We just check all memory backends
against that declared pagesize. We check base and cold-plugged memory at
reset time, and hotplugged memory at pre_plug() time.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Greg Kurz <groug@kaod.org>
According to the PPC440 User Manual, the PPC440 has multiple opcodes for the
icbt instruction: one for compatibility with older cores and two 440-specific
opcodes, one of which is defined in BookE. QEMU only implements two of these;
add the missing one.
Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Fix the helper_fpscr_clrbit() function so it correctly sets the FEX
and VX bits.
Determining the value for the Floating Point Status and Control
Register's (FPSCR) FEX bit is supposed to be done like this:
FEX = (VX & VE) | (OX & OE) | (UX & UE) | (ZX & ZE) | (XX & XE)
It is described as "the logical OR of all the floating-point exception
bits masked by their respective enable bits". It was not implemented
correctly. The value of FEX would stay on even when all other bits
were set to off.
The VX bit is described as "the logical OR of all of the invalid
operation exceptions". This bit was also not implemented correctly. It
too would stay on when all the other bits were set to off.
My main source of information is an IBM document called:
PowerPC Microprocessor Family:
The Programming Environments for 32-Bit Microprocessors
Page 62 is where the FPSCR information is located.
This is an older copy than the one I use but it is still very useful:
https://www.pdfdrive.net/powerpc-microprocessor-family-the-programming-environments-for-32-e3087633.html
I use a G3 and G5 iMac to compare bit values with QEMU. This patch
fixed all the problems I was having with these bits.
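A small illustrative model of the FEX derivation quoted above (the bits are
spelled out as booleans here; the real code works on the packed FPSCR word):

    #include <stdbool.h>

    typedef struct {
        bool vx, ox, ux, zx, xx;   /* exception summary/status bits */
        bool ve, oe, ue, ze, xe;   /* corresponding enable bits */
    } FpscrBits;

    /* FEX = (VX & VE) | (OX & OE) | (UX & UE) | (ZX & ZE) | (XX & XE) */
    static bool fpscr_fex(const FpscrBits *f)
    {
        return (f->vx && f->ve) || (f->ox && f->oe) ||
               (f->ux && f->ue) || (f->zx && f->ze) ||
               (f->xx && f->xe);
    }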
Signed-off-by: John Arbuckle <programmingkidx@gmail.com>
[dwg: Re-wrapped commit message]
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
KVM HV has a restriction that for HPT mode guests, guest pages must be hpa
contiguous as well as gpa contiguous. We have to account for that in
various places. We determine whether we're subject to this restriction
from the SMMU information exposed by KVM.
Planned cleanups to the way we handle this will require knowing whether
this restriction is in play in wider parts of the code. So, expose a
helper function which returns it.
This does mean some redundant calls to kvm_get_smmu_info(), but they'll go
away again with future cleanups.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
ppc_check_compat() is used in a number of places to check if a cpu object
supports a certain compatibility mode, subject to various constraints.
It takes a PowerPCCPU *, however it really only depends on the cpu's class.
We have upcoming cases where it would be useful to make compatibility
checks before we fully instantiate the cpu objects.
ppc_type_check_compat() will now make an equivalent check, but based on a
CPU's QOM typename instead of an instantiated CPU object.
We make use of the new interface in several places in spapr, where we're
essentially making a global check, rather than one specific to a particular
cpu. This avoids some ugly uses of first_cpu to grab a "representative"
instance.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
- accommodate guests using vfio-ccw without specifying unlimited
prefetch, but actually working fine
- add cpu model for the z14 Model ZR1
- add support for pxelinux.cfg-style network booting to the s390x
firmware
Merge remote-tracking branch 'remotes/cohuck/tags/s390x-20180619' into staging
- cleanup in virtio-ccw
- accommodate guests using vfio-ccw without specifying unlimited
prefetch, but actually working fine
- add cpu model for the z14 Model ZR1
- add support for pxelinux.cfg-style network booting to the s390x
firmware
# gpg: Signature made Tue 19 Jun 2018 10:33:06 BST
# gpg: using RSA key DECF6B93C6F02FAF
# gpg: Good signature from "Cornelia Huck <conny@cornelia-huck.de>"
# gpg: aka "Cornelia Huck <huckc@linux.vnet.ibm.com>"
# gpg: aka "Cornelia Huck <cornelia.huck@de.ibm.com>"
# gpg: aka "Cornelia Huck <cohuck@kernel.org>"
# gpg: aka "Cornelia Huck <cohuck@redhat.com>"
# Primary key fingerprint: C3D0 D66D C362 4FF6 A8C0 18CE DECF 6B93 C6F0 2FAF
* remotes/cohuck/tags/s390x-20180619:
pc-bios/s390-ccw: Update the s390-netboot.img binary
pc-bios/s390-ccw: Optimize the s390-netboot.img for size
pc-bios/s390-ccw/net: Try to load pxelinux.cfg file accoring to the UUID
pc-bios/s390-ccw/net: Add support for pxelinux-style config files
pc-bios/s390-ccw/net: Update code for the latest changes in SLOF
roms: Update SLOF submodule to current status
pc-bios/s390-ccw: define loadparm length
s390x/cpumodels: add z14 Model ZR1
s390x/ipl: Try to detect Linux vs non Linux for initial IPL PSW
vfio-ccw: remove orb.c64 (64 bit data addresses) check
vfio-ccw: add force unlimited prefetch property
virtio-ccw: clean up notify
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Introduce the new z14 Model ZR1 cpu model. Mostly identical to z14, only
the cpu type differs (3906 vs. 3907)
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Message-Id: <20180613081819.147178-1-borntraeger@de.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
This patch adds gen_io_start()/gen_io_end() to various instructions as required
in order to boot my OpenBIOS test images on qemu-system-sparc64 with icount
enabled.
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
CPUPPCState currently contains a number of fields containing the state of
the VPA. The VPA is a PAPR specific concept covering several guest/host
shared memory areas used to communicate some information with the
hypervisor.
As a PAPR concept this is really machine specific information, although it
is per-cpu, so it doesn't really belong in the core CPU state structure.
There's also other information that's per-cpu, but platform/machine
specific. So create a (void *)machine_data in PowerPCCPU which can be
used by the machine to locate per-cpu data. Initialization, lifetime and
cleanup of machine_data is entirely up to the machine type.
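Roughly, the shape of the change is the following (the struct names below are
stand-ins, not the real QEMU types):

    /* Core CPU state: carries an opaque per-CPU pointer it never interprets. */
    typedef struct CpuCoreState {
        /* ... architectural registers, etc. ... */
        void *machine_data;   /* owned entirely by the machine type */
    } CpuCoreState;

    /* Machine-specific per-CPU data, e.g. the PAPR VPA bookkeeping. */
    typedef struct MachinePerCpuData {
        unsigned long vpa_addr;
        unsigned long slb_shadow_addr;
    } MachinePerCpuData;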
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Greg Kurz <groug@kaod.org>
Tested-by: Greg Kurz <groug@kaod.org>
Commit 9d6f106552 moved the last line in this block to somewhere else,
but it forgot to remove the now useless #if/#endif.
Signed-off-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
For cap_ppc_safe_cache to be set to workaround, we require both an l1d
cache flush instruction and a private l1d cache.
On POWER8, don't require a private l1d cache. This means a guest on a
POWER8 machine can make use of the cache flush workarounds.
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
ARMv6-M supports 6 Thumb2 instructions. This patch checks for these
instructions and allows their execution.
Like Thumb2 cores, ARMv6-M always interprets the BL instruction as 32-bit.
This patch is required for future Cortex-M0 support.
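The check boils down to a pattern/mask table lookup; a generic sketch (the
actual ARMv6-M encodings that would populate the tables are not reproduced
here):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Return true if insn matches any (pattern, mask) pair in the tables. */
    static bool insn_in_table(uint32_t insn, const uint32_t *patterns,
                              const uint32_t *masks, size_t n)
    {
        for (size_t i = 0; i < n; i++) {
            if ((insn & masks[i]) == patterns[i]) {
                return true;
            }
        }
        return false;
    }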
Signed-off-by: Julia Suvorova <jusual@mail.ru>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20180612204632.28780-1-jusual@mail.ru
[PMM: move armv6m_insn[] and armv6m_mask[] closer to
point of use, and mark 'const'. Check for M-and-not-v7
rather than M-and-6.]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180613015641.5667-19-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180613015641.5667-18-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180613015641.5667-17-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180613015641.5667-16-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180613015641.5667-15-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180613015641.5667-14-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180613015641.5667-13-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180613015641.5667-12-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180613015641.5667-11-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180613015641.5667-10-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180613015641.5667-9-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180613015641.5667-8-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180613015641.5667-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180613015641.5667-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180613015641.5667-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180613015641.5667-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180613015641.5667-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Rearrange the arithmetic so that we are agnostic about the total size
of the vector and the size of the element. This will allow us to index
up to the 32nd byte and to work with 16-byte elements.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180613015641.5667-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The Cortex-M CPU and its NVIC are two intimately intertwined parts of
the same hardware; it is not possible to use one without the other.
Unfortunately a lot of our board models don't do any sanity checking
on the CPU type the user asks for, so a command line like
qemu-system-arm -M versatilepb -cpu cortex-m3
will create an M3 without an NVIC, and coredump immediately.
In the other direction, trying a non-M-profile CPU in an M-profile
board won't blow up, but doesn't do anything useful either:
qemu-system-arm -M lm3s6965evb -cpu arm926
Add some checking in the NVIC and CPU realize functions that the
user isn't trying to use an NVIC without an M-profile CPU or
an M-profile CPU without an NVIC, so we can produce a helpful
error message rather than a core dump.
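A simplified sketch of such a realize-time check (error reporting reduced to
a plain message; the real code goes through QEMU's Error API):

    #include <stdbool.h>
    #include <stdio.h>

    static bool nvic_realize_check(bool cpu_is_m_profile)
    {
        if (!cpu_is_m_profile) {
            fprintf(stderr, "The NVIC can only be used with an M-profile (Cortex-M) CPU\n");
            return false;   /* fail device realize instead of crashing later */
        }
        return true;
    }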
Fixes: https://bugs.launchpad.net/qemu/+bug/1766896
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180601160355.15393-1-peter.maydell@linaro.org
Remove the abort on a sequence of NOP/zero instructions.
Always return early and avoid decoding NOP/zero instructions.
This fixes Coverity CID 1391443.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Correct the masking of output addresses.
This fixes Coverity CID 1391441.
Fixes: commit 3924a9aa02
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reported-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Here's another batch of ppc patches towards the 3.0 release. There's
a fair bit here, because I've been working through my mail backlog
after a holiday. There's not much of a central theme, amongst other
things we have:
* ppc440 / sam460ex improvements
* logging and error cleanups
* 40p (PReP) bugfixes
* Macintosh fixes and cleanups
* Add emulation of the new POWER9 store-forwarding barrier
instruction variant
* Hotplug cleanups
Merge remote-tracking branch 'remotes/dgibson/tags/ppc-for-3.0-20180612' into staging
ppc patch queue 2018-06-12
Here's another batch of ppc patches towards the 3.0 release. There's
a fair bit here, because I've been working through my mail backlog
after a holiday. There's not much of a central theme, amongst other
things we have:
* ppc440 / sam460ex improvements
* logging and error cleanups
* 40p (PReP) bugfixes
* Macintosh fixes and cleanups
* Add emulation of the new POWER9 store-forwarding barrier
instruction variant
* Hotplug cleanups
# gpg: Signature made Tue 12 Jun 2018 07:43:21 BST
# gpg: using RSA key 6C38CACA20D9B392
# gpg: Good signature from "David Gibson <david@gibson.dropbear.id.au>"
# gpg: aka "David Gibson (Red Hat) <dgibson@redhat.com>"
# gpg: aka "David Gibson (ozlabs.org) <dgibson@ozlabs.org>"
# gpg: aka "David Gibson (kernel.org) <dwg@kernel.org>"
# Primary key fingerprint: 75F4 6586 AE61 A66C C44E 87DC 6C38 CACA 20D9 B392
* remotes/dgibson/tags/ppc-for-3.0-20180612: (33 commits)
spapr_pci: Remove unhelpful pagesize warning
xics_kvm: use KVM helpers
ppc/pnv: fix LPC HC firmware address space
spapr: handle cpu core unplug via hotplug handler chain
spapr: handle pc-dimm unplug via hotplug handler chain
spapr: introduce machine unplug handler
spapr: move memory hotplug support check into spapr_memory_pre_plug()
spapr: move lookup of the node into spapr_memory_plug()
spapr: no need to verify the node
target/ppc: Allow PIR read in privileged mode
ppc4xx_i2c: Clean up and improve error logging
target/ppc: extend eieio for POWER9
mos6522: convert VMSTATE_TIMER_PTR_TEST to VMSTATE_TIMER_PTR
mos6522: move timer frequency initialisation to mos6522_reset
cuda: embed mos6522_cuda device directly rather than using QOM object link
mos6522: fix vmstate_mos6522_timer version in vmstate_mos6522
ppc: add missing FW_CFG_PPC_NVRAM_FLAT definition
ppc: remove obsolete macio_init() definition from mac.h
ppc: remove obsolete pci_pmac_init() definitions from mac.h
hw/misc/mos6522: Add trailing '\n' to qemu_log() calls
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
A link property can be set during creation, with
object_property_add_link() and later with object_property_set_link().
add_link() doesn't add a reference to the target object, while
set_link() does.
Furthermore, the OBJ_PROP_LINK_UNREF_ON_RELEASE flag, set during add_link,
says whether a reference must be released when the property is destroyed.
This can lead to leaks if the property is later set_link()ed, as the
added reference is never released.
Instead, rename OBJ_PROP_LINK_UNREF_ON_RELEASE to OBJ_PROP_LINK_STRONG
and use that as an indication of how the link handles reference
management in set_link().
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-id: 20180531195119.22021-3-marcandre.lureau@redhat.com
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
According to PowerISA, the PIR register should be readable in privileged
mode also, not only in hypervisor privileged mode.
PowerISA 3.0 - 4.3.3 Processor Identification Register
"Read access to the PIR is privileged; write access is not provided."
Figure 18 in section 4.4.4 explicitly confirms that mfspr PIR is privileged
and doesn't require hypervisor state.
Cc: David Gibson <david@gibson.dropbear.id.au>
Cc: Alexander Graf <agraf@suse.de>
Cc: qemu-ppc@nongnu.org
Signed-off-by: Leandro Lupori <leandro.lupori@gmail.com>
Reviewed-by: Jose Ricardo Ziviani <joserz@linux.ibm.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
Signed-off-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
POWER9 introduced a new variant of the eieio instruction using bit 6
as a hint to tell the CPU it is a store-forwarding barrier.
The usage of this eieio extension was recently added in Linux 4.17
which activated the "support for a store forwarding barrier at kernel
entry/exit".
Unfortunately, it is not possible to insert this new eieio instruction
without considerable change in ppc_tr_translate_insn(). So instead we
loosen the QEMU eieio instruction mask and modify the gen_eieio()
helper to test for bit 6. On non-POWER9 CPUs, bit 6 is just ignored
but a warning is emitted as this is not an instruction software should
be using.
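A rough sketch of the decode idea; the bit extraction below assumes IBM bit
numbering (bit 0 is the most significant bit of the 32-bit word), so "bit 6"
lands at shift 25 — treat this as an assumption, not verified decoder code:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    static void gen_eieio_sketch(uint32_t insn, bool is_power9)
    {
        bool hint = (insn >> 25) & 1;   /* assumed position of bit 6 */

        if (hint && !is_power9) {
            fprintf(stderr,
                    "warning: eieio store-forwarding hint set on a pre-POWER9 CPU\n");
            /* the hint bit is otherwise ignored */
        }
        /* ... emit the ordinary eieio memory barrier ... */
    }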
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The powerpc Linux kernel[1] and skiboot firmware[2] recently gained changes
that cause the Processor Compatibility Register (PCR) SPR to be cleared.
These changes cause Linux to fail to boot on the Qemu powernv machine
with an error:
Trying to write privileged spr 338 (0x152) at 0000000030017f0c
With this patch Qemu makes this register available as a hypervisor
privileged register.
Note that bits set in this register disable features of the processor.
Currently the only register state that is supported is when the register
is zeroed (enable all features). This is sufficient for guests to
once again boot.
[1] https://lkml.kernel.org/r/20180518013742.24095-1-mikey@neuling.org
[2] https://patchwork.ozlabs.org/patch/915932/
Signed-off-by: Joel Stanley <joel@jms.id.au>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Factor out the parsing of struct kvm_ppc_cpu_char in
kvmppc_get_cpu_characteristics() into a separate function for each cap
for simplicity.
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
fprintf() and qemu_log_separate() are frowned upon these days for printing
logging information in QEMU. Accessing the wrong SPRs indicates wrong guest
behaviour in most cases, and we've got a proper way to log such situations,
which is the qemu_log_mask(LOG_GUEST_ERROR, ...) function. So use this
function now for logging the bad SPR accesses instead.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Rather than limit total TB size to PAGE-32 bytes, end the TB when
near the end of a page. This should provide proper semantics of
SIGSEGV when executing near the end of a page.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20180512050250.12774-9-richard.henderson@linaro.org>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Removed ctx->insn_pc in favour of ctx->base.pc_next.
Yes, it is annoying, but didn't want to waste its 4 bytes.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20180512050250.12774-7-richard.henderson@linaro.org>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
The name gen_lookup_tb is at odds with tcg_gen_lookup_and_goto_tb.
For these cases, we do indeed want to exit back to the main loop.
Similarly, DISAS_UPDATE performs no actual update, whereas DISAS_EXIT
does what it says.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20180512050250.12774-6-richard.henderson@linaro.org>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
These are all indirect or out-of-page direct jumps.
We can indirectly chain to the next TB without going
back to the main loop.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20180512050250.12774-5-richard.henderson@linaro.org>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
We have exited the TB after using goto_tb; there is no
distinction from DISAS_NORETURN.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20180512050250.12774-3-richard.henderson@linaro.org>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
The raise_exception helper does not return. Do not generate
any code following that.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20180512050250.12774-2-richard.henderson@linaro.org>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Add information for cpuid 0x8000001D leaf. Populate cache topology information
for different cache types (Data Cache, Instruction Cache, L2 and L3) supported
by 0x8000001D leaf. Please refer to the Processor Programming Reference (PPR)
for AMD Family 17h Model for more details.
Signed-off-by: Babu Moger <babu.moger@amd.com>
Message-Id: <1527176614-26271-3-git-send-email-babu.moger@amd.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Always initialize CPUCaches structs with cache information, even
if legacy_cache=true. Use different CPUCaches struct for
CPUID[2], CPUID[4], and the AMD CPUID leaves.
This will greatly simplify the logic inside cpu_x86_cpuid().
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Babu Moger <babu.moger@amd.com>
Message-Id: <1527176614-26271-2-git-send-email-babu.moger@amd.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Acked-by: Max Filippov <jcmvbkbc@gmail.com>
Message-id: 20180606152128.449-12-f4bug@amsat.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180606152128.449-11-f4bug@amsat.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180606152128.449-9-f4bug@amsat.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We have the same problem described in 7d6b1daedd
("linux-user, ppc: mftbl can be used by user application")
for ppc in the case of sparc.
When we use an application trying to resolve a name, it hangs in
0x00000000ff5dd40c: rd %tick, %o5
0x00000000ff5dd410: srlx %o5, 0x20, %o4
0x00000000ff5dd414: btst %o5, %g4
0x00000000ff5dd418: be %icc, 0xff5dd40c
because %tick is staying at 0.
As QEMU_CLOCK_VIRTUAL is not available in linux-user mode,
simply use cpu_get_host_ticks() instead.
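Illustrative stand-in for the fallback: in user mode just return a
monotonically increasing host-derived value so the guest's loop on %tick
makes progress. cpu_get_host_ticks() is QEMU's helper; the clock_gettime()
version below is only a sketch:

    #include <stdint.h>
    #include <time.h>

    static uint64_t host_ticks_sketch(void)
    {
        struct timespec ts;

        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (uint64_t)ts.tv_sec * 1000000000ull + (uint64_t)ts.tv_nsec;
    }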
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20180528194812.31216-1-laurent@vivier.eu>
Do the cast to uintptr_t within the helper, so that the compiler
can type check the pointer argument. We can also do some more
sanity checking of the index argument.
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Now we've updated our copy of the kernel headers we can remove the
compatibility shim that handled KVM_HINTS_REALTIME not being defined.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20180525132755.21839-7-peter.maydell@linaro.org>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
In kernel header commit 633711e8287, the define KVM_HINTS_DEDICATED
was renamed to KVM_HINTS_REALTIME. Work around this compatibility
break by (a) using the new constant name, and (b) defining it
if the headers don't.
Part (b) can be removed once we've updated our copy of the kernel
headers to a version that defines KVM_HINTS_REALTIME.
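Such a shim can be as small as a guarded define; the exact form used may
differ, so treat this as a sketch:

    /* Old kernel headers only know the old name; map the new one onto it. */
    #ifndef KVM_HINTS_REALTIME
    #define KVM_HINTS_REALTIME KVM_HINTS_DEDICATED
    #endif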
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20180525132755.21839-5-peter.maydell@linaro.org>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This patch fixes a few compiler warnings, especially in the case of
x86 targets, where the number of registers was not properly handled
and could cause an overflow.
Signed-off-by: Alessandro Pilotti <apilotti@cloudbasesolutions.com>
Signed-off-by: Justin Terry (VM) <juterry@microsoft.com>
Signed-off-by: Lucian Petrut <lpetrut@cloudbasesolutions.com>
Message-Id: <1526405722-10887-3-git-send-email-lpetrut@cloudbasesolutions.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
We're currently linking against import libraries of the WHP DLLs.
By dynamically loading the libraries, we ensure that QEMU will work
on previous Windows versions, where the WHP DLLs will be missing
(assuming that WHP is not requested).
Also, we're simplifying the build process, as we no longer require
the import libraries.
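The usual Win32 pattern for this is LoadLibrary()/GetProcAddress(); a sketch
with an invented entry-point name (the DLL and symbol names are illustrative,
not the actual WHP bindings used):

    #include <windows.h>
    #include <stdio.h>

    typedef int (WINAPI *WhpEntryFn)(void);

    int main(void)
    {
        HMODULE mod = LoadLibraryA("WinHvPlatform.dll");

        if (!mod) {
            fprintf(stderr, "WHP not available; continuing without it\n");
            return 0;   /* degrade gracefully on older Windows versions */
        }
        WhpEntryFn fn = (WhpEntryFn)GetProcAddress(mod, "SomeWhpEntryPoint");
        if (fn) {
            fn();
        }
        FreeLibrary(mod);
        return 0;
    }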
Signed-off-by: Alessandro Pilotti <apilotti@cloudbasesolutions.com>
Signed-off-by: Justin Terry (VM) <juterry@microsoft.com>
Signed-off-by: Lucian Petrut <lpetrut@cloudbasesolutions.com>
Message-Id: <1526405722-10887-2-git-send-email-lpetrut@cloudbasesolutions.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Since its inception in 61766fe9e2, this file uses the qemu_log()
API from "qemu/log.h". Include it to allow further includes
cleanup.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20180528232719.4721-9-f4bug@amsat.org>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Since its inception this include uses tlb_flush() declared in "exec/exec-all.h".
Include the other header to allow further includes cleanup.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20180528232719.4721-8-f4bug@amsat.org>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Since d0ce7e9cfc the dc232b structure uses the NANOSECONDS_PER_SECOND
definition from "qemu/timer.h". Include it to allow further includes
cleanup.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20180528232719.4721-7-f4bug@amsat.org>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
As part of plumbing MemTxAttrs down to the IOMMU translate method,
add MemTxAttrs as an argument to address_space_access_valid().
Its callers either have an attrs value to hand, or don't care
and can use MEMTXATTRS_UNSPECIFIED.
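The pattern is just "add one more parameter and give callers a don't-care
value"; a generic sketch with invented names (not QEMU's real signatures):

    typedef struct MemAttrs {
        unsigned int secure : 1;
        unsigned int user   : 1;
        unsigned int unspecified : 1;
    } MemAttrs;

    #define MEMATTRS_UNSPECIFIED ((MemAttrs){ .unspecified = 1 })

    static int access_valid(unsigned long addr, unsigned long len,
                            int is_write, MemAttrs attrs)
    {
        (void)is_write;
        (void)attrs;               /* most callers can ignore it for now */
        return addr + len >= addr; /* placeholder overflow check */
    }

    /* A caller without real attributes to hand:
     *     access_valid(addr, 4, 0, MEMATTRS_UNSPECIFIED);
     */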
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180521140402.23318-6-peter.maydell@linaro.org
As part of plumbing MemTxAttrs down to the IOMMU translate method,
add MemTxAttrs as an argument to address_space_map().
Its callers either have an attrs value to hand, or don't care
and can use MEMTXATTRS_UNSPECIFIED.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180521140402.23318-5-peter.maydell@linaro.org
As part of plumbing MemTxAttrs down to the IOMMU translate method,
add MemTxAttrs as an argument to address_space_translate()
and address_space_translate_cached(). Callers either have an
attrs value to hand, or don't care and can use MEMTXATTRS_UNSPECIFIED.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180521140402.23318-4-peter.maydell@linaro.org
As part of plumbing MemTxAttrs down to the IOMMU translate method,
add MemTxAttrs as an argument to tb_invalidate_phys_addr().
Its callers either have an attrs value to hand, or don't care
and can use MEMTXATTRS_UNSPECIFIED.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20180521140402.23318-3-peter.maydell@linaro.org
In commit f0aff25570 we made cpacr_write() enforce that some CPACR
bits are RAZ/WI and some are RAO/WI for ARMv7 cores. Unfortunately
we forgot to also update the register's reset value. The effect
was that (a) a guest that read CPACR on reset would not see ones in
the RAO bits, and (b) if you did a migration before the guest did
a write to the CPACR then the migration would fail because the
destination would enforce the RAO bits and then complain that they
didn't match the zero value from the source.
Implement reset for the CPACR using a custom reset function
that just calls cpacr_write(), to avoid having to duplicate
the logic for which bits are RAO.
This bug would affect migration for TCG CPUs which are ARMv7
with VFP but without one of Neon or VFPv3.
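Conceptually the fix is "reset through the write handler so the RAO/RAZ
masking lives in one place"; a stripped-down sketch with invented types:

    typedef struct RegInfo RegInfo;

    struct RegInfo {
        unsigned int value;
        void (*write)(RegInfo *ri, unsigned int val);
    };

    /* Custom reset: reuse the write handler so RAO/WI and RAZ/WI bits
     * end up with the same values they would after a guest write of 0. */
    static void cpacr_reset_sketch(RegInfo *ri)
    {
        ri->write(ri, 0);
    }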
Reported-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Cédric Le Goater <clg@kaod.org>
Message-id: 20180522173713.26282-1-peter.maydell@linaro.org
cpregs_keys is a uint32_t* so the allocation should use uint32_t.
g_new is even better because it is type-safe.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Depending on the host abi, float16, aka uint16_t, values are
passed and returned either zero-extended in the host register
or with garbage at the top of the host register.
The tcg code generator has so far been assuming garbage, as that
matches the x86 abi, but this is incorrect for other host abis.
Further, target/arm has so far been assuming zero-extended results,
so that it may store the 16-bit value into a 32-bit slot with the
high 16-bits already clear.
Rectify both problems by mapping "f16" in the helper definition
to uint32_t instead of (a typedef for) uint16_t. This forces
the host compiler to assume garbage in the upper 16 bits on input
and to zero-extend the result on output.
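Conceptually (this is not the actual helper-macro plumbing, just an illustration with a hypothetical helper name), the change amounts to the helper prototype using a 32-bit type:

    /* uint32_t forces the callee to zero-extend its 16-bit result and stops
     * the caller from assuming the upper bits of the argument are clean. */
    uint32_t helper_example_addh(uint32_t a, uint32_t b, void *fpstp);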
Cc: qemu-stable@nongnu.org
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Message-id: 20180522175629.24932-1-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The FRECPX instructions should (like most other floating point operations)
honour the FPCR.FZ bit which specifies whether input denormals should
be flushed to zero (or FZ16 for the half-precision version).
We forgot to implement this, which doesn't affect the results (since
the calculation doesn't actually care about the mantissa bits) but did
mean we were failing to set the FPSR.IDC bit.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180521172712.19930-1-peter.maydell@linaro.org
Rename the 2.13 machines to match the number we're going to
use for the next release.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
Message-id: 20180522104000.9044-5-peter.maydell@linaro.org
Consolidate MMU enabled checks to cpu_mmu_index().
No functional changes.
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Fixup the indentation of cpu_mmu_index in preparation for
future edits.
No functional changes.
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Clean up eval_cond_jmp to use tcg_gen_movcond_i64().
No functional change.
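For reference, tcg_gen_movcond_i64() computes a select in a single op; a sketch of the pattern with illustrative operand names:

    /* dest = (flag != 0) ? taken_pc : next_pc, with zero a constant-0 temp;
     * this replaces a branch-over sequence in the generated code. */
    tcg_gen_movcond_i64(TCG_COND_NE, dest, flag, zero, taken_pc, next_pc);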
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Convert env_btarget to i64.
No functional change.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Remove argument b in eval_cc() as it is always set to zero.
No functional change.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Use a table based conversion to map condition-codes between
MicroBlaze ISA encoding and TCG.
No functional change.
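A sketch of the idea (the ordering of the entries is a placeholder, not the real MicroBlaze encoding):

    static const TCGCond mb_to_tcg_cc[] = {
        TCG_COND_EQ, TCG_COND_NE, TCG_COND_LT,
        TCG_COND_LE, TCG_COND_GT, TCG_COND_GE,
    };
    TCGCond cc = mb_to_tcg_cc[cond_field];  /* cond_field: decoded cc bits */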
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Clean up debug log messages:
* Avoid long 80+ character lines.
* Remove D() macro and use qemu_log_mask.
* Remove logs that are not very useful.
Suggested-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Simplify address computation using tcg_gen_addi_i32().
tcg_gen_addi_i32() already optimizes the case when the
immediate is zero.
No functional change.
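In other words, the open-coded special case can simply go away; a sketch with illustrative operand names:

    /* Previously: if (imm) tcg_gen_addi_i32(addr, base, imm);
     *             else     tcg_gen_mov_i32(addr, base);
     * tcg_gen_addi_i32() already emits a plain move when imm == 0. */
    tcg_gen_addi_i32(addr, base, imm);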
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Allow address sizes between 32 and 64 bits.
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Add support for extended access to TLBLO's upper 32 bits.
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Plug a temp leak.
Reported-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Add a configurable output address mask, used to mimic the
configurable physical address bit width.
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Prepare for 64-bit addresses.
This makes no functional difference as the upper parts of
the 64-bit addresses are not yet reachable.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Add explicit handling for MMU_R_TLBX and log accesses to
invalid MMU registers. We can now remove the state for
all regs but PID, ZPR and TLBX (0 - 2).
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Add R_TBLX_MISS MASK and SHIFT macros.
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Implement MFSE EAR to enable access to the upper part of EAR.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Add support for Extended Addressing. Load/stores with EA
enabled concatenate two 32bit registers to form an extended
address.
We don't allow users to enable address sizes larger than
32 bits quite yet though. Once the MMU support is in, we'll
turn it on.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Set up MicroBlaze builds for 64bit addressing.
No functional change since the translator does not yet
emit 64bit addresses.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Extend special registers to 64-bits. This is in preparation for
MFSE/MTSE, moves to and from extended special registers.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Fix moves to FSR: bits other than bit 31 are accessible too.
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reuse more code when decoding register numbers.
No functional changes.
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Use bool and extract32 to represent the to, clr and
clrset flags.
No functional change.
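As a sketch (the bit positions below are placeholders, not the real MicroBlaze encoding):

    bool to     = extract32(insn, 14, 1);  /* hypothetical bit positions */
    bool clr    = extract32(insn, 13, 1);
    bool clrset = extract32(insn, 12, 1);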
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Break out trap_illegal() to handle illegal operation traps.
We now generally stop translation of the current insn if
it's not valid.
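A rough sketch of the helper shape (the exception constant and the raise helper are illustrative; the real code also checks the relevant MSR/PVR bits):

    static bool trap_illegal(DisasContext *dc, bool cond)
    {
        if (cond) {
            gen_raise_exception(dc, EXCP_HW_EXCP);  /* hypothetical raise helper */
        }
        return cond;  /* caller stops translating the insn when true */
    }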
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Break out trap_userspace() to avoid open coding it everywhere.
For privileged insns, we now always stop translation of the
current insn for cores without exceptions.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Name special registers we support.
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Use TCGv for load/store addresses, allowing for future
computation of 64-bit load/store address.
No functional change.
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Make compute_ldst_addr always use a temp. This simplifies
the code a bit in preparation for adding support for
64bit addresses.
No functional change.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Bypass MMU translation when the MMU_NOMMU_IDX mmu-index is used.
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Conditionalize setting of PVR11_USE_MMU on the use_mmu
CPU property; otherwise we may incorrectly advertise an
MMU via PVR when the core in fact has none.
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
We already have a CPU property to control if a core has
an MMU or not. Remove USE_MMU PVR checks in favor of
looking at the property.
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Tighten up TCGv_i32 vs TCGv type usage. Avoid using TCGv when
TCGv_i32 should be used.
This is in preparation for adding 64bit addressing support.
No functional change.
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Correct the PVR array size, there are 13 PVR registers.
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Correct special register array sizes.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Today, when running QEMU in linux-user or with boards that don't
select a specific CPU version, we treat it as an invalid version
and log a message.
Instead, if no specific version was selected, fall back to our
latest CPU version.
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Use bool instead of int to represent flags.
No functional change.
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Use bool instead of unsigned int to represent flags.
Also, use extract32 instead of open coding the bit extract.
No functional change.
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Use bool instead of unsigned int to represent flags.
No functional change.
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Merge remote-tracking branch 'remotes/mst/tags/for_upstream' into staging
pc, pci, virtio, vhost: fixes, features
Beginning of merging vDPA, new PCI ID, a new virtio balloon stat, intel
iommu rework fixing a couple of security problems (no CVEs yet), fixes
all over the place.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
# gpg: Signature made Wed 23 May 2018 15:41:32 BST
# gpg: using RSA key 281F0DB8D28D5469
# gpg: Good signature from "Michael S. Tsirkin <mst@kernel.org>"
# gpg: aka "Michael S. Tsirkin <mst@redhat.com>"
# Primary key fingerprint: 0270 606B 6F3C DF3D 0B17 0970 C350 3912 AFBE 8E67
# Subkey fingerprint: 5D09 FD08 71C8 F85B 94CA 8A0D 281F 0DB8 D28D 5469
* remotes/mst/tags/for_upstream: (28 commits)
intel-iommu: rework the page walk logic
util: implement simple iova tree
intel-iommu: trace domain id during page walk
intel-iommu: pass in address space when page walk
intel-iommu: introduce vtd_page_walk_info
intel-iommu: only do page walk for MAP notifiers
intel-iommu: add iommu lock
intel-iommu: remove IntelIOMMUNotifierNode
intel-iommu: send PSI always even if across PDEs
nvdimm: fix typo in label-size definition
contrib/vhost-user-blk: enable protocol feature for vhost-user-blk
hw/virtio: Fix brace Werror with clang 6.0.0
libvhost-user: Send messages with no data
vhost-user+postcopy: Use qemu_set_nonblock
virtio: support setting memory region based host notifier
vhost-user: support receiving file descriptors in slave_read
vhost-user: add Net prefix to internal state structure
linux-headers: add kvm header for mips
linux-headers: add unistd.h on all arches
update-linux-headers.sh: unistd.h, kvm consistency
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Merge remote-tracking branch 'remotes/mwalle/tags/lm32-queue/20180521' into staging
target/lm32: BQL patch
# gpg: Signature made Tue 22 May 2018 19:25:30 BST
# gpg: using RSA key B458ABB0D8D378E3
# gpg: Good signature from "Michael Walle <michael@walle.cc>"
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 2190 3E48 4537 A7C2 90CE 3EB2 B458 ABB0 D8D3 78E3
* remotes/mwalle/tags/lm32-queue/20180521:
lm32: take BQL before writing IP/IM register
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Switch to the header we imported from Linux,
this allows us to drop a hack in kvm_i386.h.
More code will be dropped in the next patch.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
AMD Zen exposes the Intel equivalent to Speculative Store Bypass Disable
via the 0x80000008_EBX[25] CPUID feature bit.
This needs to be exposed to guest OSes to allow them to protect
against CVE-2018-3639.
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20180521215424.13520-3-berrange@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
"Some AMD processors only support a non-architectural means of enabling
speculative store bypass disable (SSBD). To allow a simplified view of
this to a guest, an architectural definition has been created through a new
CPUID bit, 0x80000008_EBX[25], and a new MSR, 0xc001011f. With this, a
hypervisor can virtualize the existence of this definition and provide an
architectural method for using SSBD to a guest.
Add the new CPUID feature, the new MSR and update the existing SSBD
support to use this MSR when present." (from x86/speculation: Add virtualized
speculative store bypass disable support in Linux).
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20180521215424.13520-4-berrange@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
New microcode introduces the "Speculative Store Bypass Disable"
CPUID feature bit. This needs to be exposed to guest OSes to allow
them to protect against CVE-2018-3639.
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Message-Id: <20180521215424.13520-2-berrange@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Writing to these registers may raise an interrupt request. Actually,
this prevents the milkymist board from starting.
Cc: qemu-stable@nongnu.org
Signed-off-by: Michael Walle <michael@walle.cc>
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Merge remote-tracking branch 'remotes/mjt/tags/trivial-patches-fetch' into staging
trivial patches for 2018-05-20
# gpg: Signature made Sun 20 May 2018 07:13:20 BST
# gpg: using RSA key 701B4F6B1A693E59
# gpg: Good signature from "Michael Tokarev <mjt@tls.msk.ru>"
# gpg: aka "Michael Tokarev <mjt@corpit.ru>"
# gpg: aka "Michael Tokarev <mjt@debian.org>"
# Primary key fingerprint: 6EE1 95D1 886E 8FFB 810D 4324 457C E0A0 8044 65C5
# Subkey fingerprint: 7B73 BAD6 8BE7 A2C2 8931 4B22 701B 4F6B 1A69 3E59
* remotes/mjt/tags/trivial-patches-fetch: (22 commits)
acpi: fix a comment about aml_call0()
qapi/net.json: Fix the version number of the "vlan" removal
gdbstub: Handle errors in gdb_accept()
gdbstub: Use qemu_set_cloexec()
replace functions which are only available in glib-2.24
typedefs: Remove PcGuestInfo from qemu/typedefs.h
qemu-options: Allow -no-user-config again
hw/timer/mt48t59: Fix bit-rotten NVRAM_PRINTF format strings
Remove unnecessary variables for function return value
trivial: Do not include pci.h if it is not necessary
tests: fix tpm-crb tpm-tis tests race
hw/ide/ahci: Keep ALLWINNER_AHCI() macro internal
qemu-img-cmds.hx: add passive-aggressive note
qemu-img: Make documentation between .texi and .hx consistent
qemu-img: remove references to GEN_DOCS
qemu-img.texi: fix command ordering
qemu-img-commands.hx: argument ordering fixups
HACKING: document preference for g_new instead of g_malloc
qemu-option-trace: -trace enable= is a pattern, not a file
slirp/debug: Print IP addresses in human readable form
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Max Filippov <jcmvbkbc@gmail.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Also do not dump both "fpu" and "vector" registers
as the former overlaps the latter.
Cc: Alexander Graf <agraf@suse.de>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Cc: Alexander Graf <agraf@suse.de>
Cc: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180516223007.10256-26-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180516223007.10256-25-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180516223007.10256-24-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180516223007.10256-23-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180516223007.10256-22-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180516223007.10256-21-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180516223007.10256-20-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180516223007.10256-19-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180516223007.10256-18-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180516223007.10256-17-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180516223007.10256-16-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180516223007.10256-15-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180516223007.10256-14-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180516223007.10256-13-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180516223007.10256-12-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180516223007.10256-11-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Excepting MOVPRFX, which isn't a reduction. Presumably it is
placed within the group because of its encoding.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180516223007.10256-10-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180516223007.10256-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180516223007.10256-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
These were the instructions that were stubbed out when
introducing the decode skeleton.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180516223007.10256-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Including only 4 as-yet-unimplemented instruction patterns
so that the whole thing compiles.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180516223007.10256-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Move some stuff that will be common to both translate-a64.c
and translate-sve.c.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180516223007.10256-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Generate an XML description for the cp-regs.
Register these regs with the gdb_register_coprocessor().
Add arm_gdb_get_sysreg() to use it as a callback to read those regs.
Add a dummy arm_gdb_set_sysreg().
Signed-off-by: Abdallah Bouassida <abdallah.bouassida@lauterbach.com>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1524153386-3550-4-git-send-email-abdallah.bouassida@lauterbach.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This is a preparation for the coming feature of dynamically creating an XML
description for the ARM sysregs.
Add an "_S" suffix to the secure version of sysregs that have both S and NS views.
Replace (S) and (NS) with _S and _NS for the registers that are manually defined,
so all the registers follow the same convention.
Signed-off-by: Abdallah Bouassida <abdallah.bouassida@lauterbach.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1524153386-3550-3-git-send-email-abdallah.bouassida@lauterbach.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This is a preparation for the coming feature of dynamically creating an XML
description for the ARM sysregs.
A register that has ARM_CP_NO_GDB enabled will not be shown in the dynamic XML.
This bit is enabled automatically when creating CP_ANY wildcard aliases.
This bit can also be enabled manually for any register we want to remove from the
dynamic XML description.
Signed-off-by: Abdallah Bouassida <abdallah.bouassida@lauterbach.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1524153386-3550-2-git-send-email-abdallah.bouassida@lauterbach.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Only MIPS requires snan_bit_is_one to be variable. While we are
specializing softfloat behaviour, allow other targets to eliminate
this runtime check.
Cc: Aurelien Jarno <aurelien@aurel32.net>
Cc: Yongbok Kim <yongbok.kim@mips.com>
Cc: David Gibson <david@gibson.dropbear.id.au>
Cc: Alexander Graf <agraf@suse.de>
Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
This is now handled properly by the generic softfloat code.
Cc: Alexander Graf <agraf@suse.de>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
This is now handled properly by the generic softfloat code.
Cc: Palmer Dabbelt <palmer@sifive.com>
Cc: Sagar Karandikar <sagark@eecs.berkeley.edu>
Cc: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Reviewed-by: Michael Clark <mjc@sifive.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
This is now handled properly by the generic softfloat code.
Cc: Aurelien Jarno <aurelien@aurel32.net>
Cc: Yongbok Kim <yongbok.kim@mips.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
This is now handled properly by the generic softfloat code.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
This is now handled properly by the generic softfloat code.
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The ARM ARM specifies FZ16 is suppressed for conversions. Rather than
pushing this logic into the softfloat code we can simply save the FZ
state and temporarily disable it for the softfloat call.
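A hedged sketch of the save/restore pattern (whether flush_to_zero, flush_inputs_to_zero, or both need saving depends on the particular helper; a and fpstp stand in for the helper's arguments):

    float_status *fpst = fpstp;
    bool save = get_flush_to_zero(fpst);
    set_flush_to_zero(false, fpst);
    float16 r = float32_to_float16(a, true, fpst);  /* illustrative conversion */
    set_flush_to_zero(save, fpst);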
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Instead of passing env and leaving it up to the helper to get the
right fpstatus, we pass it explicitly. There was already a get_fpstatus
helper for neon for the 32-bit code. We also add a get_ahp_flag() for
passing the state of the alternative FP16 format flag. This leaves
scope for later tracking the AHP state in translation flags.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Merge remote-tracking branch 'remotes/ehabkost/tags/x86-next-pull-request' into staging
x86 queue, 2018-05-15
* KnightsMill CPU model
* CLDEMOTE(Demote Cache Line) cpu feature
* pc-i440fx-2.13 and pc-q35-2.13 machine-types
* Add model-specific cache information to EPYC CPU model
# gpg: Signature made Tue 15 May 2018 22:53:12 BST
# gpg: using RSA key 2807936F984DC5A6
# gpg: Good signature from "Eduardo Habkost <ehabkost@redhat.com>"
# Primary key fingerprint: 5A32 2FD5 ABC4 D3DB ACCF D1AA 2807 936F 984D C5A6
* remotes/ehabkost/tags/x86-next-pull-request:
i386: Add new property to control cache info
pc: add 2.13 machine types
i386: Initialize cache information for EPYC family processors
i386: Add cache information in X86CPUDefinition
i386: Helpers to encode cache information consistently
x86/cpu: Enable CLDEMOTE(Demote Cache Line) cpu feature
i386: add KnightsMill cpu model
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The property legacy-cache will be used to control the cache information.
If user passes "-cpu legacy-cache" then older information will
be displayed even if the hardware supports new information. Otherwise
use the statically loaded cache definitions if available.
Renamed the previous cache structures to legacy_*. If there is any change in
the cache information, then it needs to be initialized in builtin_x86_defs.
Signed-off-by: Babu Moger <babu.moger@amd.com>
Tested-by: Geoffrey McRae <geoff@hostfission.com>
Message-Id: <20180514164156.27034-3-babu.moger@amd.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Instead of having a collection of macros that need to be used in
complex expressions to build CPUID data, define a CPUCacheInfo
struct that can hold information about a given cache. Helper
functions will take a CPUCacheInfo struct as input to encode
CPUID leaves for a cache.
This will help us ensure consistency between cache information
CPUID leaves, and make the existing inconsistencies in CPUID info
more visible.
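As a sketch of what such a descriptor might look like (the field names here are illustrative, not the actual QEMU definition):

    typedef struct CPUCacheInfo {
        uint8_t  level;          /* 1 = L1, 2 = L2, ... */
        uint32_t size;           /* total size in bytes */
        uint16_t line_size;      /* bytes per cache line */
        uint8_t  associativity;  /* number of ways */
    } CPUCacheInfo;
    /* Helpers then take a CPUCacheInfo * and fill in the CPUID leaf registers. */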
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Babu Moger <babu.moger@amd.com>
Tested-by: Geoffrey McRae <geoff@hostfission.com>
Message-Id: <20180510204148.11687-2-babu.moger@amd.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
The CLDEMOTE instruction hints to hardware that the cache line that
contains the linear address should be moved("demoted") from
the cache(s) closest to the processor core to a level more distant
from the processor core. This may accelerate subsequent accesses
to the line by other cores in the same coherence domain,
especially if the line was written by the core that demotes the line.
Intel Snow Ridge has added new cpu feature, CLDEMOTE.
The new cpu feature needs to be exposed to guest VM.
The bit definition:
CPUID.(EAX=7,ECX=0):ECX[bit 25] CLDEMOTE
The release document ref below link:
https://software.intel.com/sites/default/files/managed/c5/15/\
architecture-instruction-set-extensions-programming-reference.pdf
Signed-off-by: Jingqi Liu <jingqi.liu@intel.com>
Message-Id: <1525406253-54846-1-git-send-email-jingqi.liu@intel.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
A new cpu model called "KnightsMill" is added to model Knights Mill
processors. Compared to "Skylake-Server" cpu model, the following
features are added:
avx512_4vnniw avx512_4fmaps avx512pf avx512er avx512_vpopcntdq
and the following features are removed:
pcid invpcid clflushopt avx512dq avx512bw clwb smap rtm mpx
xsavec xgetbv1 hle
Signed-off-by: Boqun Feng <boqun.feng@intel.com>
Message-Id: <20180320000821.8337-1-boqun.feng@intel.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
We are meant to explicitly pass fpst, not cpu_env.
Cc: qemu-stable@nongnu.org
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20180512003217.9105-12-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
All the hard work is already done by vfp_expand_imm; we just need to
make sure we pick up the correct size.
Cc: qemu-stable@nongnu.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180512003217.9105-11-richard.henderson@linaro.org
[rth: Merge unallocated_encoding check with TCGMemOp conversion.]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
These were missed out from the rest of the half-precision work.
Cc: qemu-stable@nongnu.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180512003217.9105-10-richard.henderson@linaro.org
[rth: Fix erroneous check vs type]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
These were missed out from the rest of the half-precision work.
Cc: qemu-stable@nongnu.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180512003217.9105-9-richard.henderson@linaro.org
[rth: Diagnose lack of FP16 before fp_access_check]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We missed all of the scalar fp16 fma operations.
Cc: qemu-stable@nongnu.org
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20180512003217.9105-8-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We missed all of the scalar fp16 binary operations.
Cc: qemu-stable@nongnu.org
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20180512003217.9105-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Cc: qemu-stable@nongnu.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20180512003217.9105-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Cc: qemu-stable@nongnu.org
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20180512003217.9105-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Cc: qemu-stable@nongnu.org
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20180512003217.9105-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
No sense in emitting code after the exception.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20180512003217.9105-3-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Adding the fp16 moves to/from general registers.
Cc: qemu-stable@nongnu.org
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20180512003217.9105-2-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
In commit d81ce0ef2c we added an extra float_status field
fp_status_fp16 for Arm, but forgot to initialize it correctly
by setting it to float_tininess_before_rounding. This currently
will only cause problems for the new V8_FP16 feature, since the
float-to-float conversion code doesn't use it yet. The effect
would be that we failed to set the Underflow IEEE exception flag
in all the cases where we should.
Add the missing initialization.
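A sketch of the missing initialization (the field name follows the commit text; the exact location in the reset path is an assumption):

    set_float_detect_tininess(float_tininess_before_rounding,
                              &env->vfp.fp_status_fp16);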
Fixes: d81ce0ef2c
Cc: qemu-stable@nongnu.org
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20180512004311.9299-16-richard.henderson@linaro.org
Begin with the 0x08 major opcode, the system instructions.
Acked-by: Stafford Horne <shorne@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The architecture manual is unclear about this, but the or1ksim
does writeback before the exception. This requires splitting
the helpers in half, with the exception raised by the second.
Acked-by: Stafford Horne <shorne@gmail.com>
Reviewed-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Update the variable checked by the loop condition (expDiff).
Backport the update from Previous.
Fixes: 591596b77a ("target/m68k: add fmod/frem")
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Thomas Huth <huth@tuxfamily.org>
Message-Id: <20180508203937.16796-1-laurent@vivier.eu>
The warning is
target/s390x/misc_helper.c:209:21: error: suggest
braces around initialization of subobject [-Werror,-Wmissing-braces]
SysIB sysib = { 0 };
^
{}
While the original code is correct, and technically exactly correct
as per ISO C89, both GCC and Clang support a plain empty set of braces
as an extension.
Cc: Alexander Graf <agraf@suse.de>
Cc: David Hildenbrand <david@redhat.com>
Cc: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20180512045950.12386-5-richard.henderson@linaro.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Calling pause_all_vcpus()/resume_all_vcpus() from a VCPU thread might
not be the best idea. As pause_all_vcpus() temporarily drops the qemu
mutex, two parallel calls to pause_all_vcpus() can be active at a time,
resulting in a deadlock (either by two VCPUs, or by the main thread and a
VCPU).
Let's handle it via the main loop instead, as suggested by Paolo. If we
had two parallel reset requests from two different VCPUs at the
same time, the last one would win.
We use the existing ipl device to handle it. The nice side effect is
that we can get rid of reipl_requested.
This change implies that all reset handling now goes via the common
path, so "no-reboot" handling is now active for all kinds of reboots.
Let's execute any CPU initialization code on the target CPU using
run_on_cpu.
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20180424101859.10239-1-david@redhat.com>
Acked-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Merge remote-tracking branch 'remotes/bonzini/tags/for-upstream' into staging
* Don't silently truncate extremely long words in the command line
* dtc configure fixes
* MemoryRegionCache second try
* Deprecated option removal
* add support for Hyper-V reenlightenment MSRs
# gpg: Signature made Fri 11 May 2018 13:33:46 BST
# gpg: using RSA key BFFBD25F78C7AE83
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>"
# gpg: aka "Paolo Bonzini <pbonzini@redhat.com>"
# Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4 E2F7 7E15 100C CD36 69B1
# Subkey fingerprint: F133 3857 4B66 2389 866C 7682 BFFB D25F 78C7 AE83
* remotes/bonzini/tags/for-upstream: (29 commits)
rename included C files to foo.inc.c, remove osdep.h
pc-dimm: fix error messages if no slots were defined
build: Silence dtc directory creation
shippable: Remove Debian 8 libfdt kludge
configure: Display if libfdt is from system or git
configure: Really use local libfdt if the system one is too old
i386/kvm: add support for Hyper-V reenlightenment MSRs
qemu-doc: provide details of supported build platforms
qemu-options: Remove deprecated -no-kvm-irqchip
qemu-options: Remove deprecated -no-kvm-pit-reinjection
qemu-options: Bail out on unsupported options instead of silently ignoring them
qemu-options: Remove remainders of the -tdf option
qemu-options: Mark -virtioconsole as deprecated
target/i386: sev: fix memory leaks
opts: don't silently truncate long option values
opts: don't silently truncate long parameter keys
accel: use g_strsplit for parsing accelerator names
update-linux-headers: drop hyperv.h
qemu-thread: always keep the posix wrapper layer
exec: reintroduce MemoryRegion caching
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Merge remote-tracking branch 'remotes/pmaydell/tags/pull-target-arm-20180510' into staging
target-arm queue:
* hw/arm/iotkit.c: fix minor memory leak
* softfloat: fix wrong-exception-flags bug for multiply-add corner case
* arm: isolate and clean up DTB generation
* implement Arm v8.1-Atomics extension
* Fix some bugs and missing instructions in the v8.2-FP16 extension
# gpg: Signature made Thu 10 May 2018 18:44:34 BST
# gpg: using RSA key 3C2525ED14360CDE
# gpg: Good signature from "Peter Maydell <peter.maydell@linaro.org>"
# gpg: aka "Peter Maydell <pmaydell@gmail.com>"
# gpg: aka "Peter Maydell <pmaydell@chiark.greenend.org.uk>"
# Primary key fingerprint: E1A5 C593 CD41 9DE2 8E83 15CF 3C25 25ED 1436 0CDE
* remotes/pmaydell/tags/pull-target-arm-20180510: (21 commits)
target/arm: Clear SVE high bits for FMOV
target/arm: Fix float16 to/from int16
target/arm: Implement vector shifted FCVT for fp16
target/arm: Implement vector shifted SCVF/UCVF for fp16
target/arm: Enable ARM_FEATURE_V8_ATOMICS for user-only
target/arm: Implement CAS and CASP
target/arm: Fill in disas_ldst_atomic
target/arm: Introduce ARM_FEATURE_V8_ATOMICS and initial decode
target/riscv: Use new atomic min/max expanders
tcg: Use GEN_ATOMIC_HELPER_FN for opposite endian atomic add
tcg: Introduce atomic helpers for integer min/max
target/xtensa: Use new min/max expanders
target/arm: Use new min/max expanders
tcg: Introduce helpers for integer min/max
atomic.h: Work around gcc spurious "unused value" warning
make sure that we aren't overwriting mc->get_hotplug_handler by accident
arm/boot: split load_dtb() from arm_load_kernel()
platform-bus-device: use device plug callback instead of machine_done notifier
pc: simplify MachineClass::get_hotplug_handler handling
softfloat: Handle default NaN mode after pickNaNMulAdd, not before
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
# Conflicts:
# target/riscv/translate.c
Merge remote-tracking branch 'remotes/rth/tags/cota-target-pull-request' into staging
* Fix all next_page checks for overflow.
* Convert six targets to the translator loop.
# gpg: Signature made Wed 09 May 2018 18:20:43 BST
# gpg: using RSA key 64DF38E8AF7E215F
# gpg: Good signature from "Richard Henderson <richard.henderson@linaro.org>"
# Primary key fingerprint: 7A48 1E78 868B 4DB6 A85A 05C0 64DF 38E8 AF7E 215F
* remotes/rth/tags/cota-target-pull-request: (28 commits)
target/riscv: convert to TranslatorOps
target/riscv: convert to DisasContextBase
target/riscv: convert to DisasJumpType
target/openrisc: convert to TranslatorOps
target/openrisc: convert to DisasContextBase
target/s390x: convert to TranslatorOps
target/s390x: convert to DisasContextBase
target/s390x: convert to DisasJumpType
target/mips: convert to TranslatorOps
target/mips: use *ctx for DisasContext
target/mips: convert to DisasContextBase
target/mips: convert to DisasJumpType
target/mips: use lookup_and_goto_ptr on BS_STOP
target/sparc: convert to TranslatorOps
target/sparc: convert to DisasContextBase
target/sparc: convert to DisasJumpType
target/sh4: convert to TranslatorOps
translator: merge max_insns into DisasContextBase
target/mips: avoid integer overflow in next_page PC check
target/s390x: avoid integer overflow in next_page PC check
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
osdep.h is only needed for files that are compiled directly.
Remove it from included C source files, and rename them to
*.inc.c so that scripts/clean-includes knows to skip them.
Cc: Eric Blake <eblake@redhat.com>
Cc: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
KVM recently gained support for Hyper-V Reenlightenment MSRs which are
required to make KVM-on-Hyper-V enable TSC page clocksource to its guests
when INVTSC is not passed to it (and it is not passed by default in QEMU
as it effectively blocks migration).
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20180411115036.31832-2-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Fedora 28 ships with the released gcc 8.
The Werror stems from the compiler finding a path through the second
switch via a missing default case in which src1 is uninitialized, and
not being able to prove that the missing default case is unreachable
due to the first switch.
Simplify the second switch to merge default with OS_LONG,
which returns directly. This removes the unreachable path.
Cc: Laurent Vivier <laurent@vivier.eu>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Message-id: 20180508185520.23757-1-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Use write_fp_dreg and clear_vec_high to zero the bits
that need zeroing for these cases.
Cc: qemu-stable@nongnu.org
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180502221552.3873-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The instruction "ucvtf v0.4h, v0.4h, #2", with input 0x8000u,
overflows the intermediate float16 to infinity before we have a
chance to scale the output. Use float64 as the intermediate type
so that no input argument (uint32_t in this case) can overflow
or round before scaling. Given the declared argument, the signed
int32_t function has the same problem.
When converting from float16 to integer, using u/int32_t instead
of u/int16_t means that the bounding is incorrect.
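A hedged sketch of the widened conversion path (the function name is made up; the softfloat calls follow the API of this era):

    static float16 uint32_to_fp16_scaled(uint32_t x, int shift, float_status *fpst)
    {
        /* Widen to float64 first so scaling by 2^-shift cannot overflow or
         * round at float16 precision before the final conversion. */
        float64 tmp = uint32_to_float64(x, fpst);
        return float64_to_float16(float64_scalbn(tmp, -shift, fpst), true, fpst);
    }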
Cc: qemu-stable@nongnu.org
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180502221552.3873-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
While we have some of the scalar paths for FCVT for fp16,
we failed to decode the fp16 version of these instructions.
Cc: qemu-stable@nongnu.org
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180502221552.3873-3-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
While we have some of the scalar paths for *CVF for fp16,
we failed to decode the fp16 version of these instructions.
Cc: qemu-stable@nongnu.org
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180502221552.3873-2-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180508151437.4232-11-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180508151437.4232-10-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This implements all of the v8.1-Atomics instructions except
for compare-and-swap, which is decoded elsewhere.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180508151437.4232-9-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The insns in the ARMv8.1-Atomics are added to the existing
load/store exclusive and load/store reg opcode spaces.
Rearrange the top-level decoders for these to accommodate.
The Atomics insns themselves still generate Unallocated.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180508151437.4232-8-richard.henderson@linaro.org
[PMM: Drop the ARM_FEATURE_V8_1 feature flag]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Michael Clark <mjc@sifive.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180508151437.4232-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The generic expanders replace nearly identical code in the translator.
Acked-by: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180508151437.4232-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The generic expanders replace nearly identical code in the translator.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180508151437.4232-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Notes:
- Did not convert {num,max}_insns, since the corresponding code
will go away in the next patch.
- ctx->pc becomes ctx->base.pc_next, and ctx->next_pc becomes
ctx->pc_succ_insn.
While at it, convert the remaining tb->cflags readers to tb_cflags().
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Cc: Michael Clark <mjc@sifive.com>
Cc: Palmer Dabbelt <palmer@sifive.com>
Cc: Sagar Karandikar <sagark@eecs.berkeley.edu>
Cc: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Notes:
- Changed the num_insns test in insn_start to check for
dc->base.num_insns > 1, since when tb_start is first
called in a TB, base.num_insns is already set to 1.
- Removed DISAS_NEXT from the switch in tb_stop; use
DISAS_TOO_MANY instead.
- Added an assert_not_reached on tb_stop for DISAS_NEXT
and the default case.
- Merged the two separate log_target_disas calls into the
disas_log op.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Cc: Stafford Horne <shorne@gmail.com>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
While at it, set is_jmp to DISAS_NORETURN when generating
an exception.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Cc: Stafford Horne <shorne@gmail.com>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Note: I looked into dropping dc->do_debug. However, I don't see
an easy way to do it given that TOO_MANY is also valid
when we just translate more than max_insns. Thus, the check
for do_debug in "case DISAS_PC_CC_UPDATED" would still need
additional state to know whether or not we came from
breakpoint_check.
Acked-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: David Hildenbrand <david@redhat.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Cornelia Huck <cohuck@redhat.com>
Cc: Alexander Graf <agraf@suse.de>
Cc: qemu-s390x@nongnu.org
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Notes:
- Did not convert {num,max}_insns and is_jmp, since the corresponding
code will go away in the next patch.
- Avoided a checkpatch error in use_exit_tb.
- As suggested by David, (1) Drop ctx.pc and use
ctx.base.pc_next instead, and (2) Rename ctx.next_pc to
ctx.pc_tmp and add a comment about it.
Acked-by: Cornelia Huck <cohuck@redhat.com>
Suggested-by: David Hildenbrand <david@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Cc: David Hildenbrand <david@redhat.com>
Cc: Cornelia Huck <cohuck@redhat.com>
Cc: Alexander Graf <agraf@suse.de>
Cc: qemu-s390x@nongnu.org
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The only non-trivial modification is the use of DISAS_TOO_MANY
in the same way it is used by the generic translation loop.
Acked-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Cc: David Hildenbrand <david@redhat.com>
Cc: Cornelia Huck <cohuck@redhat.com>
Cc: Alexander Graf <agraf@suse.de>
Cc: qemu-s390x@nongnu.org
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Notes:
- DISAS_TOO_MANY replaces the former "break" in the translation loop.
However, care must be taken not to overwrite a previous condition
in is_jmp; that's why in translate_insn we first check is_jmp and
return if it's != DISAS_NEXT.
- Added an assert in translate_insn, before exiting due to an exception,
to make sure that is_jmp is set to DISAS_NORETURN (the exception
generation function always sets it.)
- Added an assert for the default case in is_jmp's switch.
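A generic sketch of the pattern these notes describe (the type and helper names are illustrative, not the target's actual ones):

    static void xxx_tr_translate_insn(DisasContextBase *dcbase, CPUState *cs)
    {
        DisasContext *dc = container_of(dcbase, DisasContext, base);

        /* don't overwrite a condition recorded by a previous insn */
        if (dc->base.is_jmp != DISAS_NEXT) {
            return;
        }
        if (!decode_one_insn(dc, cs)) {          /* illustrative helper */
            /* the exception generation path must have set this */
            assert(dc->base.is_jmp == DISAS_NORETURN);
        }
    }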
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Cc: Aurelien Jarno <aurelien@aurel32.net>
Cc: Yongbok Kim <yongbok.kim@mips.com>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
No changes to the logic here; this is just to make the diff
that follows easier to read.
While at it, remove the unnecessary 'struct' in
'struct TranslationBlock'.
Note that checkpatch complains with a false positive:
ERROR: space prohibited after that '&' (ctx:WxW)
#75: FILE: target/mips/translate.c:20220:
+ ctx->kscrexist = (env->CP0_Config4 >> CP0C4_KScrExist) & 0xff;
^
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Cc: Aurelien Jarno <aurelien@aurel32.net>
Cc: Yongbok Kim <yongbok.kim@mips.com>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Cc: Aurelien Jarno <aurelien@aurel32.net>
Cc: Yongbok Kim <yongbok.kim@mips.com>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Notes:
- BS_EXCP in generate_exception_err and after gen_helper_wait
becomes DISAS_NORETURN, because we do not return after
raising an exception.
- Some uses of BS_EXCP are misleading in that they're used
only as a "not BS_STOP" exit condition, i.e. they have nothing
to do with an actual exception. For those cases, define
and use DISAS_EXIT, which is clearer. With this and the
above change, BS_EXCP goes away completely.
- fix a comment typo (s/intetrupt/interrupt/).
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Cc: Aurelien Jarno <aurelien@aurel32.net>
Cc: Yongbok Kim <yongbok.kim@mips.com>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The TB after BS_STOP is not fixed (e.g. helper_mtc0_hwrena
changes hflags, which ends up changing the TB flags via
cpu_get_tb_cpu_state). This requires a full lookup (i.e.
with flags) via lookup_and_goto_ptr instead of gen_goto_tb,
since the latter only looks at the PC for in-page goto's. Fix it.
Reported-by: Richard Henderson <richard.henderson@linaro.org>
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Notes:
- Moved the cross-page check from the end of translate_insn to
init_disas_context.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Cc: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Cc: Artyom Tarasenko <atar4qemu@gmail.com>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Notes:
- pc and npc are left unmodified, since they can point to out-of-TB
jump targets.
- Got rid of last_pc in gen_intermediate_code(), using base.pc_next
instead. Only update pc_next (1) on a breakpoint (so that tb->size
includes the insn), and (2) after reading the current instruction
from memory. This allows us to use base.pc_next in the BP check,
which is what the translator loop does.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Cc: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Cc: Artyom Tarasenko <atar4qemu@gmail.com>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
This was fairly straightforward since it had already been converted
to DisasContextBase; we just had to add DISAS_TOO_MANY to the switch
in tb_stop.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Cc: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
While at it, use int for both num_insns and max_insns to make
sure we have same-type comparisons.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Michael Clark <mjc@sifive.com>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
If the PC is in the last page of the address space, next_page_start
overflows to 0. Fix it.
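A self-contained illustration of the wrap-around and one way to avoid it (the page size value is just an example, and the helper names are not QEMU's):

    #include <stdbool.h>
    #include <stdint.h>

    #define TARGET_PAGE_SIZE 0x1000u

    /* buggy: on the last page, next_page_start wraps to 0 and the
     * comparison is never true */
    static bool in_same_page_buggy(uint32_t tb_start, uint32_t pc)
    {
        uint32_t next_page_start =
            (tb_start & ~(TARGET_PAGE_SIZE - 1)) + TARGET_PAGE_SIZE;
        return pc < next_page_start;
    }

    /* fixed: compare the offset within the page instead */
    static bool in_same_page_fixed(uint32_t tb_start, uint32_t pc)
    {
        uint32_t page_start = tb_start & ~(TARGET_PAGE_SIZE - 1);
        return pc - page_start < TARGET_PAGE_SIZE;
    }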
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Cc: Aurelien Jarno <aurelien@aurel32.net>
Cc: Yongbok Kim <yongbok.kim@mips.com>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
If the PC is in the last page of the address space, next_page_start
overflows to 0. Fix it.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Acked-by: Cornelia Huck <cohuck@redhat.com>
Cc: Cornelia Huck <cohuck@redhat.com>
Cc: Alexander Graf <agraf@suse.de>
Cc: David Hildenbrand <david@redhat.com>
Cc: qemu-s390x@nongnu.org
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
If the PC is in the last page of the address space, next_page_start
overflows to 0. Fix it.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Cc: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
If the PC is in the last page of the address space, next_page_start
overflows to 0. Fix it.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Cc: "Edgar E. Iglesias" <edgar.iglesias@gmail.com>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
If the PC is in the last page of the address space, next_page_start
overflows to 0. Fix it.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
If the PC is in the last page of the address space, next_page_start
overflows to 0. Fix it.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
If the PC is in the last page of the address space, next_page_start
overflows to 0. Fix it.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: Max Filippov <jcmvbkbc@gmail.com>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
If the PC is in the last page of the address space, next_page_start
overflows to 0. Fix it.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: Michael Walle <michael@walle.cc>
Cc: Michael Walle <michael@walle.cc>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
If the PC is in the last page of the address space, next_page_start
overflows to 0. Fix it.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Cc: "Edgar E. Iglesias" <edgar.iglesias@gmail.com>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
If the PC is in the last page of the address space, next_page_start
overflows to 0. Fix it.
Reported-by: Richard Henderson <richard.henderson@linaro.org>
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Michael Clark <mjc@sifive.com>
Acked-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Cc: Michael Clark <mjc@sifive.com>
Cc: Palmer Dabbelt <palmer@sifive.com>
Cc: Sagar Karandikar <sagark@eecs.berkeley.edu>
Cc: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Merge remote-tracking branch 'remotes/riscv/tags/riscv-qemu-2.13-pull-20180506' into staging
RISC-V: QEMU 2.13 Privileged ISA emulation updates
Several code cleanups, minor specification conformance changes,
fixes to make ROM read-only and add device-tree size checks.
* Honour privileged ISA v1.10 counter enable CSRs.
* Implements WARL behavior for CSRs that don't support writes
* Past behavior of raising traps was non-conformant
with the RISC-V Privileged ISA Specification v1.10.
* Allow S-mode access to sstatus.MXR when priv ISA >= v1.10
* Sets mtval/stval to zero on exceptions without addresses
* Past behavior of leaving the last value was non-conformant
with the RISC-V Privileged ISA Specification v1.10. mtval/stval
must be set on all exceptions; to zero if not supported.
* Make ROMs read-only and implement device-tree size checks
* Uses memory_region_init_rom and rom_add_blob_fixed_as
* Adds hexadecimal instruction bytes to disassembly output.
* Fixes missing break statement for rv128 disassembly.
* Several code cleanups
* Replacing hard-coded constants with enums
* Dead-code elimination
This is an incremental pull that contains 20 reviewed changes out
of 38 changes currently queued in the qemu-2.13-for-upstream branch.
# gpg: Signature made Sun 06 May 2018 00:27:37 BST
# gpg: using DSA key 6BF1D7B357EF3E4F
# gpg: Good signature from "Michael Clark <michaeljclark@mac.com>"
# gpg: aka "Michael Clark <mjc@sifive.com>"
# gpg: aka "Michael Clark <michael@metaparadigm.com>"
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 7C99 930E B17C D8BA 073D 5EFA 6BF1 D7B3 57EF 3E4F
* remotes/riscv/tags/riscv-qemu-2.13-pull-20180506:
RISC-V: Mark ROM read-only after copying in code
RISC-V: No traps on writes to misa,minstret,mcycle
RISC-V: Make mtvec/stvec ignore vectored traps
RISC-V: Add mcycle/minstret support for -icount auto
RISC-V: Use [ms]counteren CSRs when priv ISA >= v1.10
RISC-V: Allow S-mode mxr access when priv ISA >= v1.10
RISC-V: Clear mtval/stval on exceptions without info
RISC-V: Hardwire satp to 0 for no-mmu case
RISC-V: Update E and I extension order
RISC-V: Remove erroneous comment from translate.c
RISC-V: Remove EM_RISCV ELF_MACHINE indirection
RISC-V: Make virt header comment title consistent
RISC-V: Make some header guards more specific
RISC-V: Fix missing break statement in disassembler
RISC-V: Include instruction hex in disassembly
RISC-V: Remove unused class definitions
RISC-V: Remove identity_translate from load_elf
RISC-V: Use ROM base address and size from memmap
RISC-V: Make virt board description match spike
RISC-V: Replace hardcoded constants with enum values
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
These fields are marked WARL (Write Any Values, Reads
Legal Values) in the RISC-V Privileged Architecture
Specification so instead of raising exceptions,
illegal writes are silently dropped.
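A minimal standalone sketch of the WARL behaviour described above (the mask is an arbitrary example, not a real CSR layout):

    #include <stdint.h>

    #define CSR_LEGAL_MASK 0x00000007u   /* pretend only these bits exist */

    static uint32_t csr_value;

    static void csr_write_warl(uint32_t val)
    {
        /* keep the legal bits, silently drop the rest -- no trap */
        csr_value = (csr_value & ~CSR_LEGAL_MASK) | (val & CSR_LEGAL_MASK);
    }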
Cc: Sagar Karandikar <sagark@eecs.berkeley.edu>
Cc: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Cc: Palmer Dabbelt <palmer@sifive.com>
Cc: Alistair Francis <Alistair.Francis@wdc.com>
Signed-off-by: Michael Clark <mjc@sifive.com>
Vectored traps for asynchronous interrupts are optional.
The mtvec/stvec mode field is WARL and hence does not trap
if an illegal value is written. Illegal values are ignored.
Later we can add RISCV_FEATURE_VECTORED_TRAPS; however,
until then the correct behavior for WARL (Write Any, Read
Legal) fields is to drop writes to unsupported bits.
Cc: Sagar Karandikar <sagark@eecs.berkeley.edu>
Cc: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Cc: Palmer Dabbelt <palmer@sifive.com>
Cc: Alistair Francis <Alistair.Francis@wdc.com>
Signed-off-by: Michael Clark <mjc@sifive.com>
Previously the mcycle/minstret CSRs and rdcycle/rdinstret
pseudo instructions would return the time as a proxy for an
increasing instruction counter in the absence of having a
precise instruction count. If QEMU is invoked with -icount,
the mcycle/minstret CSRs and rdcycle/rdinstret pseudo
instructions will return the instruction count.
Cc: Sagar Karandikar <sagark@eecs.berkeley.edu>
Cc: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Cc: Palmer Dabbelt <palmer@sifive.com>
Cc: Alistair Francis <Alistair.Francis@wdc.com>
Signed-off-by: Michael Clark <mjc@sifive.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Privileged ISA v1.9.1 defines mscounteren and mucounteren:
* mscounteren contains a mask of counters available to S-mode
* mucounteren contains a mask of counters available to U-mode
Privileged ISA v1.10 defines mcounteren and scounteren:
* mcounteren contains a mask of counters available to S-mode
* scounteren contains a mask of counters available to U-mode
mcounteren and scounteren CSR registers were implemented
however they were not honoured for counter accesses when
the privilege ISA was >= v1.10. This fix solves the issue
by coalescing the counter enable registers. In addition
the code now generates illegal instruction exceptions
for accesses to the counter enable registers depending
on the privileged ISA version.
- Coalesce mscounteren and mcounteren into one variable
- Coalesce mucounteren and scounteren into one variable
- Makes mcounteren and scounteren CSR accesses generate
illegal instructions when the privileged ISA <= v1.9.1
- Makes mscounteren and mucounteren CSR accesses generate
illegal instructions when the privileged ISA >= v1.10
Cc: Sagar Karandikar <sagark@eecs.berkeley.edu>
Cc: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Cc: Palmer Dabbelt <palmer@sifive.com>
Cc: Alistair Francis <Alistair.Francis@wdc.com>
Signed-off-by: Michael Clark <mjc@sifive.com>
The mstatus.MXR alias in sstatus should only be writable
by S-mode if the privileged ISA version >= v1.10. Also, MXR
was masked in sstatus CSR reads but not in sstatus CSR writes.
Now we correctly mask sstatus.mxr in both read and write.
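A standalone sketch of the idea: apply the same S-mode view mask on both the read and the write path, so the bit is neither leaked nor silently lost (the mask value is illustrative only):

    #include <stdint.h>

    #define SSTATUS_VIEW_MASK 0x000c0133u   /* illustrative; includes MXR */

    static uint64_t mstatus;                /* machine-mode backing state */

    static uint64_t sstatus_read(void)
    {
        return mstatus & SSTATUS_VIEW_MASK;
    }

    static void sstatus_write(uint64_t val)
    {
        mstatus = (mstatus & ~(uint64_t)SSTATUS_VIEW_MASK)
                  | (val & SSTATUS_VIEW_MASK);
    }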
Cc: Sagar Karandikar <sagark@eecs.berkeley.edu>
Cc: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Cc: Palmer Dabbelt <palmer@sifive.com>
Cc: Alistair Francis <Alistair.Francis@wdc.com>
Signed-off-by: Michael Clark <mjc@sifive.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
mtval/stval must be set on all exceptions but zero is
a legal value if there is no exception specific info.
Placing the instruction bytes for illegal instruction
exceptions in mtval/stval is an optional feature and
is currently not supported by QEMU RISC-V.
Cc: Sagar Karandikar <sagark@eecs.berkeley.edu>
Cc: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Cc: Palmer Dabbelt <palmer@sifive.com>
Cc: Alistair Francis <Alistair.Francis@wdc.com>
Signed-off-by: Michael Clark <mjc@sifive.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
satp is WARL, so it should not trap on illegal writes; rather,
it can be hardwired to zero and silently ignore illegal writes.
The RISC-V WARL behaviour seems preferable to trapping: software
simply reads back the value and checks whether the write took
effect, which avoids the trap overhead and more complex trap
handling code (saving hundreds of cycles).
Cc: Sagar Karandikar <sagark@eecs.berkeley.edu>
Cc: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Cc: Palmer Dabbelt <palmer@sifive.com>
Cc: Alistair Francis <Alistair.Francis@wdc.com>
Signed-off-by: Michael Clark <mjc@sifive.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Section 22.8 Subset Naming Convention of the RISC-V ISA Specification
defines the canonical order for extensions in the ISA string. It is
silent on the position of the E extension; however, E is a substitute
for I, so it must come early in the extension list order. A comment
is added to state that E and I are mutually exclusive, as the E extension
will be added to the RISC-V port in the future.
Cc: Sagar Karandikar <sagark@eecs.berkeley.edu>
Cc: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Cc: Palmer Dabbelt <palmer@sifive.com>
Cc: Alistair Francis <Alistair.Francis@wdc.com>
Signed-off-by: Michael Clark <mjc@sifive.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
In case the MSI is translated by an IOMMU we need to fixup the
MSI route with the translated address.
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Bharat Bhushan <Bharat.Bhushan@nxp.com>
Message-id: 1524665762-31355-12-git-send-email-eric.auger@redhat.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
For v8M the instructions VLLDM and VLSTM support lazy saving
and restoring of the secure floating-point registers. Even
if the floating point extension is not implemented, these
instructions must act as NOPs in Secure state, so they can
be used as part of the secure-to-nonsecure call sequence.
Fixes: https://bugs.launchpad.net/qemu/+bug/1768295
Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180503105730.5958-1-peter.maydell@linaro.org
Path analysis shows that size == 3 && !is_q has been eliminated.
Fixes: Coverity CID1385853
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20180501180455.11214-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The (size > 3 && !is_q) condition is identical to the preceding test
of bit 3 in immh; eliminate it. For the benefit of Coverity, assert
that size is within the bounds we expect.
Fixes: Coverity CID1385846
Fixes: Coverity CID1385849
Fixes: Coverity CID1385852
Fixes: Coverity CID1385857
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20180501180455.11214-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The duplication of id_tlbtr_reginfo was unintentionally added within
3281af8114; the second copy should have been
id_mpuir_reginfo.
The effect was that for OMAP and StrongARM CPUs we would
incorrectly UNDEF writes to MPUIR rather than NOPing them.
Signed-off-by: Mathew Maidment <mathew1800@gmail.com>
Message-id: 20180501184933.37609-2-mathew1800@gmail.com
[PMM: tweak commit message]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Merge remote-tracking branch 'remotes/armbru/tags/pull-qapi-2018-05-04' into staging
QAPI patches for 2018-05-04
# gpg: Signature made Fri 04 May 2018 08:59:16 BST
# gpg: using RSA key 3870B400EB918653
# gpg: Good signature from "Markus Armbruster <armbru@redhat.com>"
# gpg: aka "Markus Armbruster <armbru@pond.sub.org>"
# Primary key fingerprint: 354B C8B3 D7EB 2A6B 6867 4E5F 3870 B400 EB91 8653
* remotes/armbru/tags/pull-qapi-2018-05-04:
qapi: deprecate CpuInfoFast.arch
qapi: discriminate CpuInfoFast on SysEmuTarget, not CpuInfoArch
qapi: change the type of TargetInfo.arch from string to enum SysEmuTarget
qapi: add SysEmuTarget to "common.json"
qapi: fill in CpuInfoFast.arch in query-cpus-fast
qobject: Modify qobject_ref() to return obj
qobject: Replace qobject_incref/QINCREF qobject_decref/QDECREF
qobject: use a QObjectBase_ struct
qobject: Ensure base is at offset 0
qobject: Use qobject_to() instead of type cast
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Merge remote-tracking branch 'remotes/cohuck/tags/s390x-20180504' into staging
First s390x pull request for 2.13.
- new machine type
- extend SCLP event masks
- support configuration of consoles via -serial
- firmware improvements: non-sequential entries in boot menu, support
for indirect loading via .INS files in s390-netboot
- bugfixes and cleanups
# gpg: Signature made Fri 04 May 2018 08:19:57 BST
# gpg: using RSA key DECF6B93C6F02FAF
# gpg: Good signature from "Cornelia Huck <conny@cornelia-huck.de>"
# gpg: aka "Cornelia Huck <huckc@linux.vnet.ibm.com>"
# gpg: aka "Cornelia Huck <cornelia.huck@de.ibm.com>"
# gpg: aka "Cornelia Huck <cohuck@kernel.org>"
# gpg: aka "Cornelia Huck <cohuck@redhat.com>"
# Primary key fingerprint: C3D0 D66D C362 4FF6 A8C0 18CE DECF 6B93 C6F0 2FAF
* remotes/cohuck/tags/s390x-20180504:
pc-bios/s390: Update firmware images
s390-ccw: force diag 308 subcode to unsigned long
pc-bios/s390-ccw/net: Add support for .INS config files
pc-bios/s390-ccw/net: Use diag308 to reset machine before jumping to the OS
pc-bios/s390-ccw/net: Split up net_load() into init, load and release parts
pc-bios/s390-ccw: fix non-sequential boot entries (enum)
pc-bios/s390-ccw: fix non-sequential boot entries (eckd)
pc-bios/s390-ccw: fix loadparm initialization and int conversion
pc-bios/s390-ccw: rename MAX_TABLE_ENTRIES to MAX_BOOT_ENTRIES
pc-bios/s390-ccw: size_t should be unsigned
hw/s390x: Allow to configure the consoles with the "-serial" parameter
s390x/kvm: cleanup calls to cpu_synchronize_state()
vfio-ccw: introduce vfio_ccw_get_device()
s390x/sclp: extend SCLP event masks to 64 bits
s390x: introduce 2.13 compat machine
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Now that we can safely call QOBJECT() on QObject * as well as its
subtypes, we can have macros qobject_ref() / qobject_unref() that work
everywhere instead of having to use QINCREF() / QDECREF() for QObject
and qobject_incref() / qobject_decref() for its subtypes.
The replacement is mechanical, except I broke a long line, and added a
cast in monitor_qmp_cleanup_req_queue_locked(). Unlike
qobject_decref(), qobject_unref() doesn't accept void *.
Note that the new macros evaluate their argument exactly once, thus no
need to shout them.
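The rough shape is something like the sketch below; the *_impl helper names are assumptions for illustration, and QEMU's real definitions differ in detail:

    /* works for QObject * and any subtype, via QOBJECT() */
    #define qobject_ref(obj)   qobject_ref_impl(QOBJECT(obj))
    #define qobject_unref(obj) qobject_unref_impl(QOBJECT(obj))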
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180419150145.24795-4-marcandre.lureau@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
[Rebased, semantic conflict resolved, commit message improved]
Signed-off-by: Markus Armbruster <armbru@redhat.com>
The pseries-2.7 and older machine types require CPUPPCState::insns_flags
to be strictly equal between source and destination. This checking is
abusive and breaks migration of KVM guests when the host CPU models
are different, even if they are compatible enough to allow the guest
to run transparently. This buggy behaviour was fixed for pseries-2.8
and we added some hacks to allow backward migration of older machine
types. These hacks assume that the CPU belongs to the POWER8 family,
which was true for most KVM based setups we cared about at the time.
But now POWER9 systems are coming, and backward migration of pre 2.8
guests running in POWER8 architected mode from a POWER9 host to a
POWER8 host is broken:
qemu-system-ppc64: error while loading state for instance 0x0 of device
'cpu'
qemu-system-ppc64: load of migration failed: Invalid argument
This happens because POWER9 doesn't set PPC_MEM_TLBIE in insns_flags,
while POWER8 does. Let's force PPC_MEM_TLBIE in the migration hack to
fix the issue. This is an acceptable hack because these old machine
types only support CPU models that do set PPC_MEM_TLBIE.
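A hedged sketch of the workaround (only the idea of forcing the bit when emitting the legacy stream is shown; the surrounding compat hook is omitted):

    /* pretend the source CPU has PPC_MEM_TLBIE in the pre-2.8 stream so
     * a POWER9 host can still migrate back to a POWER8 host */
    uint64_t compat_insns_flags = env->insns_flags | PPC_MEM_TLBIE;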
Signed-off-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
cpu_ppc_set_papr() does several things:
1) it sets up the virtual hypervisor interface
2) it prevents the cpu from ever entering hypervisor mode
3) it tells KVM that we're emulating a cpu in PAPR mode
and 4) it configures the LPCR and AMOR (hypervisor privileged registers)
so that TCG will behave correctly for PAPR guests, without
attempting to emulate the cpu in hypervisor mode
(1) & (2) make sense for any virtual hypervisor (if another one ever
exists).
(3) belongs more properly in the machine type specific to a PAPR guest, so
move it to spapr_cpu_init(). While we're at it, remove an ugly test on
kvm_enabled() by making kvmppc_set_papr() a safe no-op on non-KVM.
(4) also belongs more properly in the machine type specific code. (4) is
done by mangling the default values of the SPRs, so that they will be set
correctly at reset time. Manipulating usually-static parameters of the cpu
model like this is kind of ugly, especially since the values used really
have more to do with the platform than the cpu.
The spapr code already has places for PAPR specific initializations of
register state in spapr_cpu_reset(), so move this handling there.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Tested-by: Cédric Le Goater <clg@kaod.org>
In cpu_ppc_set_papr() the UPRT and GTSE bits of the LPCR default value are
initialized based on ppc64_radix_guest(). Which seems reasonable,
except that ppc64_radix_guest() is based on spapr->patb_entry which is
only set up in spapr_machine_reset, called _after_ cpu_ppc_set_papr() for
boot cpus. Well, and the fact that modifying the SPR default value for an
instance rather than a class is kind of yucky.
The initialization here is really only necessary or valid for
hotplugged cpus; the base cpu initialization already sets a value
that's good enough for the boot cpus until the guest uses an hcall to
configure its preferred MMU mode.
So, move this initialization to the rtas_start_cpu() path, at which point
ppc64_radix_guest() will have a sensible value, to make sure secondary cpus
come up in an MMU mode matching the existing cpus.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Tested-by: Cédric Le Goater <clg@kaod.org>
There are some fields in the cpu state which need to be updated when the
LPCR register is changed, which is done by ppc_hash64_update_rmls() and
ppc_hash64_update_vrma(). Code which alters env->spr[SPR_LPCR] needs to
call them afterwards to make sure the state is up to date.
That's easy to get wrong. The normal way of dealing with situations like
that is to use a helper which both updates the basic register value and the
derived state.
So, do that.
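A simplified sketch of such a helper (the real function may also mask the value against what the CPU class allows):

    void ppc_store_lpcr(PowerPCCPU *cpu, target_ulong val)
    {
        CPUPPCState *env = &cpu->env;

        env->spr[SPR_LPCR] = val;
        /* keep the derived state in sync with the new register value */
        ppc_hash64_update_rmls(cpu);
        ppc_hash64_update_vrma(cpu);
    }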
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Tested-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Greg Kurz <groug@kaod.org>
Current POWER cpus allow for a VRMA, a special mapping which describes a
guest's view of memory when in real mode (MMU off, from the guest's point
of view). Older cpus didn't have that, which meant that to support a guest
a special host-contiguous region of memory was needed to give the guest its
Real Mode Area (RMA).
KVM used to provide special calls to allocate a contiguous RMA for those
cases. This was useful in the early days of KVM on Power to allow it to be
tested on PowerPC 970 chips as used in Macintosh G5 machines. Now, those
machines are so old as to be almost irrelevant.
The normal qemu deprecation process would require this to be marked
deprecated then removed in 2 releases. However, this can only be used
with corresponding support in the host kernel - which was dropped
years ago (in c17b98cf "KVM: PPC: Book3S HV: Remove code for PPC970
processors" of 2014-12-03 to be precise). Therefore it should be ok
to drop this immediately.
Just to be clear this only affects *KVM HV* guests with PowerPC 970,
and those already require an ancient host kernel. TCG and KVM PR
guests with PowerPC 970 should still work.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Acked-by: Thomas Huth <thuth@redhat.com>
The Partition Table Control Register (PTCR) is a hypervisor privileged
SPR. It contains the host real address of the Partition Table and its
size.
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
commit e57ca75ce3 ("target/ppc: Manage external HPT via virtual
hypervisor") exported a set of methods to manipulate the HPT from the
core hash MMU. But SPR_SDR1 is still used under some circumstances to
get the base address of the HPT, which is incorrect for the sPAPR
machines.
Only the logging should be impacted.
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Drop TCGV_PTR_TO_NAT and TCGV_NAT_TO_PTR internal macros.
Add tcg_temp_local_new_ptr, tcg_gen_brcondi_ptr, tcg_gen_ext_i32_ptr,
tcg_gen_trunc_i64_ptr, tcg_gen_extu_ptr_i64, tcg_gen_trunc_ptr_i32.
Use inlines instead of macros where possible.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
floatx80_sin() and floatx80_cos() are derived from one
sincos() function. They both have unused code coming from
their common origin. Remove it.
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20180430170156.1860-2-laurent@vivier.eu>
return the result of packFloatx80() instead of
dropping it.
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20180430170156.1860-1-laurent@vivier.eu>
Make the TLBX MISS bit read-only.
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Make TLBSX write-only and guest-error log reads from it.
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Do not clobber the IMM register on reversed load/stores.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Fix trap checks for FPU insns when extended FPU insns are enabled.
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Respect MSR.PVR as read-only. We were wrongly overwriting the PVR bit.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
This patch fixes the decrement of the pointers for subx mem, mem instructions.
Without the patch, the pointers are decremented by the OS_* constant value
instead of by the corresponding data size.
Signed-off-by: Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Message-Id: <20180418064152.24606.71975.stgit@pasha-VirtualBox>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
We have a call to cpu_synchronize_state() on every kvm_arch_handle_exit().
Let's remove the ones that are no longer needed.
Remaining places (for s390x) are in
- target/s390x/sigp.c, on the target CPU
- target/s390x/cpu.c:s390_cpu_get_crash_info()
While at it, use kvm_cpu_synchronize_state() instead of
cpu_synchronize_state() in KVM code. (suggested by Thomas Huth)
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20180412093521.2469-1-david@redhat.com>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
cpu_ppc_set_papr() removes the EP and HV bits from the MSR mask. While
removing the HV bit makes sense (a cpu in PAPR mode should never be
emulated in hypervisor mode), the EP bit is just bizarre. Although it's
true that a papr mode guest shouldn't be able to change the exception
prefix, the MSR[EP] bit doesn't even exist on the cpus supported for PAPR
mode, so it's pointless to do anything with it here.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Thomas Huth <thuth@redhat.com>
The env->slb_nr field gives the size of the SLB (Segment Lookaside Buffer).
This is another static-after-initialization parameter of the specific
version of the 64-bit hash MMU in the CPU. So, this patch folds the field
into PPCHash64Options with the other hash MMU options.
This is a bit more complicated than the things previously put in there,
because slb_nr was foolishly included in the migration stream. So we need
some of the usual dance to handle backwards compatible migration.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Greg Kurz <groug@kaod.org>
These macros were introduced to deal with the fact that the mmu_model
field has bit flags mixed in with what's otherwise an enum of various mmu
types.
We've now eliminated all those flags except for one, and that one -
POWERPC_MMU_64 - is already included/compared in the MMU_VER macros. So,
we can get rid of those macros and just directly compare mmu_model values
in the places it was used.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Greg Kurz <groug@kaod.org>
The only place we test this flag is in conjunction with
ppc64_use_proc_tbl(). That checks for the LPCR_UPRT bit, which we already
ensure can't be set except on a machine with a v3 MMU (i.e. POWER9).
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Greg Kurz <groug@kaod.org>
The ci_large_pages boolean in CPUPPCState is only relevant to 64-bit hash
MMU machines, indicating whether it's possible to map large (> 4kiB) pages
as cache-inhibited (i.e. for IO, rather than memory). Fold it as another
flag into the PPCHash64Options structure.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Greg Kurz <groug@kaod.org>
Currently env->mmu_model is a bit of an unholy mess of an enum of distinct
MMU types, with various flag bits as well. This makes which bits of the
field should be compared pretty confusing.
Make a start on cleaning that up by moving two of the flag bits -
POWERPC_MMU_1TSEG and POWERPC_MMU_AMR - which are specific to the 64-bit
hash MMU into a new flags field in PPCHash64Options structure.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Greg Kurz <groug@kaod.org>
Currently some cpus set the hash64_opts field in the class structure, with
specific details of their variant of the 64-bit hash mmu. For the
remaining cpus with that mmu, ppc_hash64_realize() fills in defaults.
But there are only a couple of cpus that use those fallbacks, so just have
them set the hash64_opts field instead, simplifying the logic.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Greg Kurz <groug@kaod.org>
env->sps contains page size encoding information as an embedded structure.
Since this information is specific to 64-bit hash MMUs, split it out into
a separately allocated structure, to reduce the basic env size for other
cpus. Along the way we make a few other cleanups:
* Rename to PPCHash64Options which is more in line with qemu name
conventions, and reflects that we're going to merge some more hash64
mmu specific details in there in future. Also rename its
substructures to match qemu conventions.
* Move structure definitions to the mmu-hash64.[ch] files.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Initialization of the env->sps structure at the end of instance_init is
specific to the 64-bit hash MMU, so move the code into a helper function
in mmu-hash64.c.
We also create a corresponding function to be called at finalize time -
it's empty for now, but we'll need it shortly.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Greg Kurz <groug@kaod.org>
CPU definitions for cpus with the 64-bit hash MMU can include a table of
available pagesizes. If this isn't supplied, ppc_cpu_instance_init() will
fill it in with a fallback table based on the POWERPC_MMU_64K bit in mmu_model.
However, it turns out all the cpus which support 64K pages already include
an explicit table of page sizes, so there's no point to the fallback table
including 64k pages.
That removes the only place which tests POWERPC_MMU_64K, so we can remove
it. Which in turn allows some logic to be removed from
kvm_fixup_page_sizes().
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Greg Kurz <groug@kaod.org>
In most cases we prefer to pass a PowerPCCPU rather than the (embedded)
CPUPPCState.
For ppc_hash64_update_{rmls,vrma}() change to take "cpu" instead of "env".
For ppc_hash64_set_{dsi,isi}() remove the redundant "env" parameter.
In theory this makes more work for the functions, but since "cs", "cpu"
and "env" are related by at most constant offsets, the compiler should be
able to optimize out the difference at effectively zero cost.
helper_*() functions are left alone - since they're more closely tied to
the TCG generated code, passing "env" is still the standard there.
While we're there, fix an incorrect indentation.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
The #if isn't necessary, because there's a suitable one inside
ppc_cpu_is_valid(). We've already filtered for suitable cpu models in the
functions that search and register them. So by the time we get to realize
having an invalid one indicates a code error, not a user error, so an
assert() is more appropriate than error_setg().
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
Because of the various hooks called some variant on "init" - and the rather
greater number that used to exist - I'm always wondering when a function
called simply "*_init" or "*_initfn" will be called.
To make it easier on myself, and maybe others, rename the instance_init
hooks for ppc cpus to *_instance_init(). While we're at it rename the
realize time hooks to *_realize() (from *_realizefn()) which seems to be
the more common current convention.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Greg Kurz <groug@kaod.org>
There are a couple of places (one generic, one target-specific) where we need
to get the host page size associated with a particular memory backend. I
have some upcoming code which will add another place which wants this. So,
for convenience, add a helper function to calculate this.
host_memory_backend_pagesize() returns the host pagesize for a given
HostMemoryBackend object.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Greg Kurz <groug@kaod.org>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
qemu_mempath_getpagesize() gets the effective (host side) page size for
a block of memory backed by an mmap()ed file on the host. It requires
the mem_path parameter to be non-NULL.
This ends up meaning all the callers need a different case for handling
anonymous memory (for memory-backend-ram or default memory with -mem-path
is not specified).
We can make all those callers a little simpler by having
qemu_mempath_getpagesize() accept NULL, and treat that as the anonymous
memory case.
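An excerpt-style sketch of the simplification (signature details abbreviated; only the new early return is shown):

    size_t qemu_mempath_getpagesize(const char *mem_path)
    {
        if (!mem_path) {
            /* anonymous memory: just the normal host page size */
            return getpagesize();
        }
        /* ... existing statfs/hugetlbfs-based lookup, unchanged ... */
    }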
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Greg Kurz <groug@kaod.org>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
According to the Vector/SIMD extension documentation, bit 6, which is
currently masked, is valid (listed as a transient bit), but bits 7 and 8
should be reserved instead. Fix the mask to match this.
Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The normal gdb definition of the XER register is only 32 bit,
and that's what the current version of power64-core.xml also
says (seems copied from gdb's). But qemu's idea of the XER register
is target_ulong (in CPUPPCState, ppc_gdb_register_len and
ppc_cpu_gdb_read_register)
That mismatch leads to the following message when attaching
with gdb:
Truncated register 32 in remote 'g' packet
(and following on that qemu stops responding). The simple fix is
to tell the truth in the .xml file. But the better fix is to
actually make it 32bit on the wire, as old gdbs don't support
XML files for describing registers. Also the XER state in qemu
doesn't seem to use the high 32 bits, so sending it off to gdb
doesn't seem worthwhile.
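A sketch of the "32 bits on the wire" side of the fix, as it might look in ppc_cpu_gdb_read_register (the register index shown is illustrative):

    case 32 + 5:                     /* XER -- index illustrative only */
        return gdb_get_reg32(mem_buf, env->xer);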
Signed-off-by: Michael Matz <matz@suse.de>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This is a bug fix to ensure 64-bit reads of these registers don't read
adjacent data.
Signed-off-by: Aaron Lindsay <alindsay@codeaurora.org>
Message-id: 1523997485-1905-13-git-send-email-alindsay@codeaurora.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
It was shifted to the left one bit too few.
Signed-off-by: Aaron Lindsay <alindsay@codeaurora.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1523997485-1905-10-git-send-email-alindsay@codeaurora.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
During code generation, surround CPSR writes and exception returns which
call the EL change hooks with gen_io_start/end. The immediate need is
for the PMU to access the clock and icount during EL change to support
mode filtering.
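The pattern is roughly the one sketched below (an illustration of the idea, not the exact sites the patch touches):

    if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) {
        gen_io_start();
    }
    gen_helper_cpsr_write_eret(cpu_env, tmp);   /* may run EL change hooks */
    if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) {
        gen_io_end();
    }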
Signed-off-by: Aaron Lindsay <alindsay@codeaurora.org>
Message-id: 1523997485-1905-9-git-send-email-alindsay@codeaurora.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Because the design of the PMU requires that the counter values be
converted between their delta and guest-visible forms for mode
filtering, an additional hook which occurs before the EL is changed is
necessary.
Signed-off-by: Aaron Lindsay <alindsay@codeaurora.org>
Message-id: 1523997485-1905-8-git-send-email-alindsay@codeaurora.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This eliminates the need for fetching it from el_change_hook_opaque, and
allows for supporting multiple el_change_hooks without having to hack
something together to find the registered opaque belonging to GICv3.
Signed-off-by: Aaron Lindsay <alindsay@codeaurora.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1523997485-1905-6-git-send-email-alindsay@codeaurora.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This is in preparation for enabling counters other than PMCCNTR
Signed-off-by: Aaron Lindsay <alindsay@codeaurora.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1523997485-1905-5-git-send-email-alindsay@codeaurora.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
They share the same underlying state
Signed-off-by: Aaron Lindsay <alindsay@codeaurora.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1523997485-1905-3-git-send-email-alindsay@codeaurora.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
In commit 95695effe8 we changed the v7M/v8M stack
pop code to use a new v7m_stack_read() function that checks
whether the read should fail due to an MPU or bus abort.
We missed one call though, the one which reads the signature
word for the callee-saved register part of the frame.
Correct the omission.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180419142106.9694-1-peter.maydell@linaro.org
Remove a stale TODO comment -- we have now made the arm_ldl_ptw()
and arm_ldq_ptw() functions propagate physical memory read errors
out to their callers.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180419142151.9862-1-peter.maydell@linaro.org
The assumption in the cpu->max_features code is that anything
enabled on GET_SUPPORTED_CPUID should be enabled on "-cpu host".
This shouldn't be the case for FEAT_KVM_HINTS.
This adds a new FeatureWordInfo::no_autoenable_flags field, that
can be used to prevent FEAT_KVM_HINTS bits from being enabled
automatically.
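A rough sketch of how the new field could be consulted on the "-cpu host" expansion path (the surrounding loop is abbreviated and the local variable name is illustrative):

    /* only auto-enable bits that are not explicitly opted out */
    env->features[w] |= supported & ~feature_word_info[w].no_autoenable_flags;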
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <20180410211534.26079-1-ehabkost@redhat.com>
Tested-by: Wanpeng Li <wanpengli@tencent.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
68000 CPUs do not save format in the exception stack frame.
This patch adds feature checking to prevent format saving for 68000.
m68k_ret() already includes this modification; this patch fixes
the exception processing function too.
Signed-off-by: Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Message-Id: <20180413133041.29509.59064.stgit@pasha-VirtualBox>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
In icount mode, instructions that access io memory spaces in the middle
of the translation block invoke TB recompilation. After recompilation,
such instructions become last in the TB and are allowed to access io
memory spaces.
When the code includes instruction like i386 'xchg eax, 0xffffd080'
which accesses the APIC, QEMU goes into an infinite loop of recompilation.
This instruction includes two memory accesses - one read and one write.
After the first access, APIC calls cpu_report_tpr_access, which restores
the CPU state to get the current eip. But cpu_restore_state_from_tb
resets the cpu->can_do_io flag which makes the second memory access invalid.
Therefore the second memory access causes a recompilation of the block.
Then these operations repeat again and again.
This patch moves resetting cpu->can_do_io flag from
cpu_restore_state_from_tb to cpu_loop_exit* functions.
It also adds a parameter for cpu_restore_state which controls restoring
icount. There is no need to restore icount when we only query CPU state
without breaking the TB. Restoring it in such cases leads to the
incorrect flow of the virtual time.
In most cases the new parameter is true (icount should be recalculated).
But there are two cases in i386 and openrisc when the CPU state is only
queried without the need to break the TB. This patch fixes both of
these cases.
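A sketch of the resulting interface (the parameter name is taken from the description above, not verified against the final patch):

    bool cpu_restore_state(CPUState *cpu, uintptr_t host_pc, bool will_exit);

    /* a query-only caller: don't recalculate the icount position */
    cpu_restore_state(cs, GETPC(), false);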
Signed-off-by: Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
Message-Id: <20180409091320.12504.35329.stgit@pasha-VirtualBox>
[rth: Make can_do_io setting unconditional; move from cpu_exec;
make cpu_loop_exit_{noexc,restore} call cpu_loop_exit.]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Merge remote-tracking branch 'remotes/dgibson/tags/ppc-for-2.12-20180410' into staging
ppc patch queue 2018-04-10
Here's a rather late pull request with a handful of fixes for 2.12.
These have been blocked for some time, because I wasn't able to
complete my usual test set due to the SCSI problem fixed in 37c5174
"scsi-disk: Don't enlarge min_io_size to max_io_size".
Since we're in hard freeze, these are all bugfixes. Most are also
regressions, although in one case it's only a "regression" because a
longstanding bug has been exposed by a new machine type (sam460ex) in
the testcases. There are also a couple of sam460ex fixes that aren't
regressions since the board didn't exist before. On the flipside
though, they're low risk because they only touch board specific code
for a board that doesn't exist in any released version.
# gpg: Signature made Tue 10 Apr 2018 08:13:52 BST
# gpg: using RSA key 6C38CACA20D9B392
# gpg: Good signature from "David Gibson <david@gibson.dropbear.id.au>"
# gpg: aka "David Gibson (Red Hat) <dgibson@redhat.com>"
# gpg: aka "David Gibson (ozlabs.org) <dgibson@ozlabs.org>"
# gpg: aka "David Gibson (kernel.org) <dwg@kernel.org>"
# Primary key fingerprint: 75F4 6586 AE61 A66C C44E 87DC 6C38 CACA 20D9 B392
* remotes/dgibson/tags/ppc-for-2.12-20180410:
roms/u-boot-sam460ex: Change to qemu git mirror and update
sam460ex: Fix timer frequency and clock multipliers
tests/boot-serial: Test the sam460ex board
spapr: Initialize reserved areas list in FDT in H_CAS handler
target/ppc: Fix backwards migration of msr_mask
hw/misc/macio: Fix crash when listing device properties of macio device
target/ppc: Initialize lazy_tlb_flush correctly
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The parameters for tcg_gen_insn_start are target_ulong, which may be split
into two TCGArg parameters for storage in the opcode on 32-bit hosts.
Fixes the ARM target and its direct use of tcg_set_insn_param, which would
set the wrong argument in the 64-on-32 case.
Cc: qemu-stable@nongnu.org
Reported-by: alarson@ddci.com
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180410003558.2470-1-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Currently our PMSAv7 and ARMv7M MPU implementation cannot handle
MPU region sizes smaller than our TARGET_PAGE_SIZE. However we
report that in a slightly confusing way:
DRSR[3]: No support for MPU (sub)region alignment of 9 bits. Minimum is 10
The problem is not the alignment of the region, but its size;
tweak the error message to say so:
DRSR[3]: No support for MPU (sub)region size of 512 bytes. Minimum is 1024.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180405172554.27401-1-peter.maydell@linaro.org
Make sure we are not treating architecturally Undefined instructions
as a SWP, by verifying the opcodes as per section A8.8.229 of the ARMv7-A
specification. Bits [21:20] must be zero for this to be a SWP or SWPB.
We also choose to UNDEF for the architecturally UNPREDICTABLE case of
bits [11:8] not being zero.
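As a rough illustration of those two checks (a sketch only, not the actual QEMU decoder; the function name is made up):
#include <stdbool.h>
#include <stdint.h>
/* Given an instruction word that already matched the SWP/SWPB pattern on
 * bits [27:23] and [7:4], return true only for encodings we accept. */
static bool swp_encoding_ok(uint32_t insn)
{
    bool bits_21_20_zero = ((insn >> 20) & 0x3) == 0; /* required for SWP/SWPB */
    bool bits_11_8_zero  = ((insn >> 8) & 0xf) == 0;  /* UNPREDICTABLE -> UNDEF */
    return bits_21_20_zero && bits_11_8_zero;
}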
Signed-off-by: Onur Sahin <onursahin08@gmail.com>
[PMM: tweaked commit message]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
21b786f "PowerPC: Add TS bits into msr_mask" added the transaction states
to msr_mask for recent POWER CPUs to allow correct migration of machines
that are in certain interim transactional memory states.
This was correct, but unfortunately breaks backwards migration to
pseries-2.7 and earlier machine types, which (stupidly) transferred the
msr_mask in the migration stream and failed if it wasn't equal on each end.
This works around the problem by masking out the new MSR bits in the
compatibility code to send the msr_mask on old machine types.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Greg Kurz <groug@kaod.org>
Tested-by: Greg Kurz <groug@kaod.org>
Tested-by: Lukáš Doktor <ldoktor@redhat.com>
ppc_tr_init_disas_context() correctly sets lazy_tlb_flush to true on
certain CPU models. However, it leaves it uninitialized, instead of
setting it to false on all others.
It wasn't caught before now because we didn't have examples in the tests
that exercised this path. However it can now be caught using clang's
undefined behaviour sanitizer and the sam460ex board.
Suggested-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
The 2-byte VEX prefix implies a leading 0Fh opcode byte.
Signed-off-by: Eugene Minibaev <mail@kitsu.me>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
In order to guarantee compatibility on migration, QEMU should have
complete control over the features it announces to the guest via CPUID.
However, for a number of Hyper-V-related cpu properties, if the
corresponding feature is not supported by the underlying KVM, the
property is silently ignored and the feature is not announced to the
guest.
Refuse to start with an error instead.
Signed-off-by: Roman Kagan <rkagan@virtuozzo.com>
Message-Id: <20180330170209.20627-3-rkagan@virtuozzo.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
In order to guarantee compatibility on migration, QEMU should have
complete control over the features it announces to the guest via CPUID.
However, the availability of Hyper-V frequency MSRs
(HV_X64_MSR_TSC_FREQUENCY and HV_X64_MSR_APIC_FREQUENCY) depends solely
on the support for them in the underlying KVM.
Introduce "hv-frequencies" cpu property (off by default) which gives
QEMU full control over whether these MSRs are announced.
While at it, drop the redundant check of the cpu tsc frequency, and
decouple this feature from hv-time.
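For example (an illustrative command line in the spirit of the other examples here; the elided options are placeholders), the MSRs could then be exposed explicitly with something like:
qemu-system-x86_64 ... -cpu host,hv-frequencies=on ...
and they stay hidden whenever the property is left at its default of off.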
Signed-off-by: Roman Kagan <rkagan@virtuozzo.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <20180330170209.20627-2-rkagan@virtuozzo.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Implements the CPUID trap for CPUID 1 to include the
CPUID_EXT_HYPERVISOR flag in the ECX results. This was preventing some
older linux kernels from booting when trying to access MSRs that don't
make sense when virtualized.
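For context, the flag in question is bit 31 of ECX in CPUID leaf 1, which tells a guest kernel it is running under a hypervisor. A minimal sketch of the idea (not the actual WHPX exit-handler code; the helper name is made up):
#include <stdint.h>
#define CPUID_EXT_HYPERVISOR (1u << 31)   /* ECX bit 31 of CPUID leaf 1 */
/* When emulating the CPUID-1 exit, OR the hypervisor-present bit into the
 * ECX result so the guest knows it is virtualized. */
static void fixup_cpuid_leaf1_ecx(uint32_t *ecx)
{
    *ecx |= CPUID_EXT_HYPERVISOR;
}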
Signed-off-by: Justin Terry (VM) <juterry@microsoft.com>
Message-Id: <20180326170658.606-1-juterry@microsoft.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Let's simplify it a bit. Under some weird circumstances we would have
tried to recompute watchpoints when running under KVM. load_psw() is
called from do_restart_interrupt() during a SIGP RESTART if the target
CPU is STOPPED. Let's touch watchpoints only in the TCG case - where
they are used for PER emulation.
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20180409113019.14568-3-david@redhat.com>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
If we already triggered another exception, don't overwrite it with a
protection exception.
Only applies to old KVM instances without the virtual memory access
IOCTL in KVM.
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20180409113019.14568-2-david@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Manually having to use cpu_synchronize_state() is error prone. And as
Christian Borntraeger discovered, e.g. handle_diag() is currently
missing a cpu_synchronize_state(), as decode_basedisp_s() uses a
general purpose register value internally.
So let's do an overall cpu_synchronize_state(), which fixes at least the
one mentioned BUG. We will clean up the superfluous cpu_synchronize_state()
calls later.
We now also call it (although maybe not needed) for
- KVM_EXIT_S390_RESET -> s390_reipl_request()
- KVM_EXIT_DEBUG -> kvm_arch_handle_debug_exit()
- unmanageable/unimplemented intercepts
- ICPT_CPU_STOP -> do_stop_interrupt() -> cpu gets halted
- Scenarios where we inject an operation exception
- handle_stsi()
I don't think any of these are performance critical. Especially as we
have all information directly contained in kvm_run, there are no
additional IOCTLs to issue on modern kernels.
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20180406093552.13016-1-david@redhat.com>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
In commit 7073fbada7, the `andn` instruction
was implemented via `tcg_gen_andc` but passed the operands in the wrong
order:
- X86 defines `andn dest,src1,src2` as: dest = ~src1 & src2
- TCG defines `andc dest,src1,src2` as: dest = src1 & ~src2
The following simple test shows the issue:
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint32_t ret = 0;
    __asm (
        "mov $0xFF00, %%ecx\n"
        "mov $0x0F0F, %%eax\n"
        "andn %%ecx, %%eax, %%ecx\n"
        "mov %%ecx, %0\n"
        : "=r" (ret)
        :
        : "eax", "ecx");
    printf("%08X\n", ret);
    return 0;
}
This patch fixes the problem by simply swapping the order of the last two
arguments in `tcg_gen_andc_tl`.
Reported-by: Alexandro Sanchez Bach <alexandro@phi.nz>
Signed-off-by: Alexandro Sanchez Bach <alexandro@phi.nz>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The string returned by object_property_get_str() is dynamically allocated.
Fixes: d8575c6c02
Signed-off-by: Greg Kurz <groug@kaod.org>
Message-Id: <152231462116.69730.14119625999092384450.stgit@bahia.lan>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
This change is a workaround for a bug where mstatus.FS
is not correctly reporting dirty after operations that
modify floating point registers. This is a critical bug
for RISC-V in QEMU as it results in floating point
register file corruption when running SMP Linux due to
task migration and possibly uniprocessor Linux if
more than one process is using the FPU.
This workaround will return dirty if mstatus.FS is
switched from off to initial or clean. According to
the specification it is legal for an implementation
to return only off, or dirty.
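A minimal sketch of the idea (illustrative names only, not the actual QEMU code; the FS field encoding of Off/Initial/Clean/Dirty follows the RISC-V privileged specification):
/* mstatus.FS is a 2-bit field: 0=Off, 1=Initial, 2=Clean, 3=Dirty. */
enum { FS_OFF = 0, FS_INITIAL = 1, FS_CLEAN = 2, FS_DIRTY = 3 };
/* Workaround sketch: when software switches FS from Off to Initial or
 * Clean, report Dirty instead, which the specification permits. */
static unsigned fs_write_workaround(unsigned old_fs, unsigned new_fs)
{
    if (old_fs == FS_OFF && (new_fs == FS_INITIAL || new_fs == FS_CLEAN)) {
        return FS_DIRTY;
    }
    return new_fs;
}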
Cc: Palmer Dabbelt <palmer@sifive.com>
Cc: Sagar Karandikar <sagark@eecs.berkeley.edu>
Cc: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Cc: Peter Maydell <peter.maydell@linaro.org>
Cc: Alex Bennée <alex.bennee@linaro.org>
Cc: Richard Henderson <richard.henderson@linaro.org>
Cc: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Richard W.M. Jones <rjones@redhat.com>
Signed-off-by: Michael Clark <mjc@sifive.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
- Model borrowed from target/sh4/cpu.c
- Rewrote riscv_cpu_list to use object_class_get_list
- Dropped 'struct RISCVCPUInfo' and used TypeInfo array
- Replaced riscv_cpu_register_types with DEFINE_TYPES
- Marked base class as abstract
- Fixes -cpu list
Cc: Igor Mammedov <imammedo@redhat.com>
Cc: Sagar Karandikar <sagark@eecs.berkeley.edu>
Cc: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Cc: Palmer Dabbelt <palmer@sifive.com>
Signed-off-by: Michael Clark <mjc@sifive.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Re-ran the Coccinelle script scripts/coccinelle/err-bad-newline.cocci
and found new error_report() occurrences with '\n'.
Signed-off-by: Laurent Vivier <lvivier@redhat.com>
Message-Id: <20180323143202.28879-3-lvivier@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
The value of CCOUNT special register is calculated as time elapsed
since CCOUNT == 0 multiplied by the core frequency. In icount mode time
increment between consecutive instructions that don't involve time
warps is constant, but unless the result of multiplication of this
constant by the core frequency is a whole number the CCOUNT increment
between these instructions may not be constant. E.g. with icount=7 each
instruction takes 128ns, with core clock of 10MHz CCOUNT values for
consecutive instructions are:
502: (128 * 502 * 10000000) / 1000000000 = 642.56
503: (128 * 503 * 10000000) / 1000000000 = 643.84
504: (128 * 504 * 10000000) / 1000000000 = 645.12
I.e. the CCOUNT increments depend on the absolute time. This results in
varying CCOUNT differences for consecutive instructions in tests that
involve time warps and don't set CCOUNT explicitly.
Change the frequency of the core used in tests so that a clock cycle takes
exactly 64ns. Change the icount power used in tests to 6, so that each
instruction takes exactly 1 clock cycle. With these changes CCOUNT
increments only depend on the number of executed instructions and that's
what timer tests expect, so they work correctly.
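A quick standalone check of the arithmetic (not the actual test code; 64ns per cycle corresponds to a 15.625 MHz core clock, an assumption about the exact frequency chosen):
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    const uint64_t ns_per_insn = 64;   /* icount=6 -> 2^6 ns per instruction */
    const uint64_t core_hz = 15625000; /* one clock cycle == 64 ns exactly */
    for (uint64_t icount = 502; icount <= 504; icount++) {
        /* CCOUNT = elapsed_ns * core_hz / 1e9; the increment is now exactly 1 */
        uint64_t ccount = ns_per_insn * icount * core_hz / 1000000000;
        printf("%llu: %llu\n",
               (unsigned long long)icount, (unsigned long long)ccount);
    }
    return 0;
}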
Longer story:
http://lists.nongnu.org/archive/html/qemu-devel/2018-03/msg04326.html
Cc: Pavel Dovgaluk <Pavel.Dovgaluk@ispras.ru>
Cc: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Change #include <xtensa-isa.h> to #include "xtensa-isa.h" in imported
files to make references to local files consistent.
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Fix definitions of existing cores and core importing script to follow
the rule of naming non-top level source files.
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
A recent glibc change relies on the fact that the priv level encoded in the iaoq must be 3,
and computes an address based on that. QEMU had been ignoring the
priv level for user-only, which produced an incorrect address.
Reported-by: John David Anglin <dave.anglin@bell.net>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
This removes the additional call to WHvGetVirtualProcessorRegisters in
whpx_vcpu_post_run now that the WHV_VP_EXIT_CONTEXT is returned in all
WHV_RUN_VP_EXIT_CONTEXT structures.
Signed-off-by: Justin Terry (VM) <juterry@microsoft.com>
Message-Id: <1521039163-138-4-git-send-email-juterry@microsoft.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This handles a breaking change to WHvSetPartitionProperty, introduced in
Windows Insider SDK 17110, which now takes the 'in' PropertyCode on function
invocation. This parameter indicates the PropertyCode of the opaque
PropertyBuffer passed in on function invocation.
Also handles the removal of the PropertyCode field from the
WHV_PARTITION_PROPERTY struct, as it is now passed to the function directly.
Signed-off-by: Justin Terry (VM) <juterry@microsoft.com>
Message-Id: <1521039163-138-3-git-send-email-juterry@microsoft.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This handles a breaking change to WHvGetCapability, introduced in Windows
Insider SDK 17110, which adds the 'out' WrittenSizeInBytes parameter.
On return it specifies the safe length to read into the WHV_CAPABILITY
structure passed to the call.
Signed-off-by: Justin Terry (VM) <juterry@microsoft.com>
Message-Id: <1521039163-138-2-git-send-email-juterry@microsoft.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
For debug exceptions due to breakpoints or the BKPT instruction which
are taken to AArch32, the Fault Address Register is architecturally
UNKNOWN. We were using that as license to simply not set
env->exception.vaddress, but this isn't correct, because it will
expose to the guest whatever old value was in that field when
arm_cpu_do_interrupt_aarch32() writes it to the guest IFSR. That old
value might be a FAR for a previous guest EL2 or secure exception, in
which case we shouldn't show it to an EL1 or non-secure exception
handler. It might also be a non-deterministic value, which is bad
for record-and-replay.
Clear env->exception.vaddress before taking breakpoint debug
exceptions, to avoid this minor information leak.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180320134114.30418-5-peter.maydell@linaro.org
Now that we have a helper function specifically for the BRK and
BKPT instructions, we can set the exception.fsr there rather
than in arm_cpu_do_interrupt_aarch32(). This allows us to
use our new arm_debug_exception_fsr() helper.
In particular this fixes a bug where we were hardcoding the
short-form IFSR value, which is wrong if the target exception
level has LPAE enabled.
Fixes: https://bugs.launchpad.net/qemu/+bug/1756927
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180320134114.30418-4-peter.maydell@linaro.org
When a debug exception is taken to AArch32, it appears as a Prefetch
Abort, and the Instruction Fault Status Register (IFSR) must be set.
The IFSR has two possible formats, depending on whether LPAE is in
use. Factor out the code in arm_debug_excp_handler() which picks
an FSR value into its own utility function, update it to use
arm_fi_to_lfsc() and arm_fi_to_sfsc() rather than hard-coded constants,
and use the correct condition to select long or short format.
In particular this fixes a bug where we could select the short
format because we're at EL0 and the EL1 translation regime is
not using LPAE, but then route the debug exception to EL2 because
of MDCR_EL2.TDE and hand EL2 the wrong format FSR.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180320134114.30418-3-peter.maydell@linaro.org
The MDCR_EL2.TDE bit allows the exception level targeted by debug
exceptions to be set to EL2 for code executing at EL0. We handle
this in the arm_debug_target_el() function, but this is only used for
hardware breakpoint and watchpoint exceptions, not for the exception
generated when the guest executes an AArch32 BKPT or AArch64 BRK
instruction. We don't have enough information for a translate-time
equivalent of arm_debug_target_el(), so instead make BKPT and BRK
call a special purpose helper which can do the routing, rather than
the generic exception_with_syndrome helper.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180320134114.30418-2-peter.maydell@linaro.org
In the OE project, a 4.15 Linux kernel boot hang was observed under
single-cpu aarch64 qemu. Kernel code was in a loop waiting for
vtimer arrival, spinning in TCG-generated blocks, while the interrupt
was pending unprocessed. This happened because when qemu tried to
handle the vtimer interrupt the target had interrupts disabled; as a
result the flag indicating TCG exit, cpu->icount_decr.u16.high,
was cleared, but the arm_cpu_exec_interrupt function did not call
arm_cpu_do_interrupt to process the interrupt. Later, when the target
re-enabled interrupts, it happened without an exit into the main loop,
so the following code that waited for the result of the interrupt
execution ran in an infinite loop.
To solve the problem, instructions that operate on CPU sys state
(i.e. enable/disable interrupts), and are marked as DISAS_UPDATE,
should be considered a DISAS_EXIT variant and should be
forced to exit back to the main loop, so qemu has a chance
to process pending CPU state updates, including pending
interrupts.
This change brings consistency with how DISAS_UPDATE is treated
in the aarch32 case.
CC: Peter Maydell <peter.maydell@linaro.org>
CC: Alex Bennée <alex.bennee@linaro.org>
CC: qemu-stable@nongnu.org
Suggested-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Victor Kamensky <kamensky@cisco.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1521526368-1996-1-git-send-email-kamensky@cisco.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Since commit 46a99c9f73 ("s390x/cpumodel: model PTFF subfunctions
for Multiple-epoch facility") -cpu help no longer shows the MSA8
feature group. Turns out that we forgot to add the new MEPOCH_PTFF
group enum.
Fixes: 46a99c9f73 ("s390x/cpumodel: model PTFF subfunctions for Multiple-epoch facility")
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Found thanks to ASAN:
Direct leak of 16 byte(s) in 1 object(s) allocated from:
#0 0x7efe20417a38 in __interceptor_calloc (/lib64/libasan.so.4+0xdea38)
#1 0x7efe1f7b2f75 in g_malloc0 ../glib/gmem.c:124
#2 0x7efe1f7b3249 in g_malloc0_n ../glib/gmem.c:355
#3 0x558272879162 in sev_get_info /home/elmarco/src/qemu/target/i386/sev.c:414
#4 0x55827285113b in hmp_info_sev /home/elmarco/src/qemu/target/i386/monitor.c:684
#5 0x5582724043b8 in handle_hmp_command /home/elmarco/src/qemu/monitor.c:3333
Fixes: 63036314
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20180319175823.22111-1-marcandre.lureau@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
This version uses a constant size memory buffer sized for
the maximum possible ISA string length. It also uses g_new
instead of g_new0, uses more efficient logic to append
extensions and adds manual zero termination of the string.
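A rough sketch of that approach (illustrative only, not the actual QEMU code; the function name, extension ordering and buffer sizing are assumptions, and the real patch uses qemu_tolower() where this standalone demo uses tolower()):
#include <glib.h>
#include <stdio.h>
#include <stdint.h>
#include <ctype.h>

static char *riscv_isa_string_demo(uint32_t misa, int xlen)
{
    static const char ext_order[] = "IEMAFDQCLBJTPVNSU"; /* assumed ordering */
    /* "rv" + up to 3 digits + one letter per possible extension + NUL */
    char *isa = g_new(char, 5 + sizeof(ext_order));
    char *p = isa + sprintf(isa, "rv%d", xlen);
    for (const char *e = ext_order; *e; e++) {
        if (misa & (1u << (*e - 'A'))) {
            *p++ = tolower((unsigned char)*e);
        }
    }
    *p = '\0'; /* manual zero termination, the buffer was not zeroed */
    return isa;
}

int main(void)
{
    uint32_t misa = (1 << ('I' - 'A')) | (1 << ('M' - 'A')) | (1 << ('A' - 'A')) |
                    (1 << ('F' - 'A')) | (1 << ('D' - 'A')) | (1 << ('C' - 'A')) |
                    (1 << ('S' - 'A')) | (1 << ('U' - 'A'));
    char *s = riscv_isa_string_demo(misa, 64);
    printf("%s\n", s); /* prints rv64imafdcsu with the ordering above */
    g_free(s);
    return 0;
}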
Cc: Palmer Dabbelt <palmer@sifive.com>
Cc: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Michael Clark <mjc@sifive.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
[PMM: Use qemu_tolower() rather than tolower()]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
SRC_EA() and gen_extend() can return either a temporary
TCGv or a memory allocated one. Mark them when they are
allocated, and free them automatically at the end of the
instruction translation.
We want to free locally allocated TCGv to avoid
overflow in a sequence like:
0xc00ae406: movel %fp@(-132),%fp@(-268)
0xc00ae40c: movel %fp@(-128),%fp@(-264)
0xc00ae412: movel %fp@(-20),%fp@(-212)
0xc00ae418: movel %fp@(-16),%fp@(-208)
0xc00ae41e: movel %fp@(-60),%fp@(-220)
0xc00ae424: movel %fp@(-56),%fp@(-216)
0xc00ae42a: movel %fp@(-124),%fp@(-252)
0xc00ae430: movel %fp@(-120),%fp@(-248)
0xc00ae436: movel %fp@(-12),%fp@(-260)
0xc00ae43c: movel %fp@(-8),%fp@(-256)
0xc00ae442: movel %fp@(-52),%fp@(-276)
0xc00ae448: movel %fp@(-48),%fp@(-272)
...
A sequence like that can fill a lot of TCGv entries,
especially since 15fa08f845 ("tcg: Dynamically allocate TCGOps")
there is no limit on filling the TCGOps cache, so we can fill
the entire TCG variables array and overflow it.
Suggested-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20180319113544.704-3-laurent@vivier.eu>
This parameter will be needed to manage automatic release
of temporarily allocated TCG variables.
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20180319113544.704-2-laurent@vivier.eu>
Intel Processor Trace should be disabled when
CPUID.(EAX=14H,ECX=0H).ECX[bit 31] is set.
When this bit is set, generated packets which contain IP payloads
will have LIP values; otherwise, IP payloads will have RIP values.
Currently, the information in CPUID 14H is kept constant to make
live migration safe, and this bit is always 0 in the guest even
if the host supports LIP values.
A guest that sees the bit as 0 will expect IP payloads with RIP
values, but the host CPU will generate IP payloads with
LIP values if this bit is set in hardware.
To make sure the IP payload values are correct, Intel PT
should be disabled when bit[31] is set.
Signed-off-by: Luwei Kang <luwei.kang@intel.com>
Message-Id: <1520969191-18162-1-git-send-email-luwei.kang@intel.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
This patch was generated using the following Coccinelle script:
@@
expression Obj;
@@
(
- qobject_to_qnum(Obj)
+ qobject_to(QNum, Obj)
|
- qobject_to_qstring(Obj)
+ qobject_to(QString, Obj)
|
- qobject_to_qdict(Obj)
+ qobject_to(QDict, Obj)
|
- qobject_to_qlist(Obj)
+ qobject_to(QList, Obj)
|
- qobject_to_qbool(Obj)
+ qobject_to(QBool, Obj)
)
and a bit of manual fix-up for overly long lines and three places in
tests/check-qjson.c that Coccinelle did not find.
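In other words, call sites change along these lines (a C fragment for illustration):
    /* before */
    QDict *dict = qobject_to_qdict(obj);
    /* after */
    QDict *dict = qobject_to(QDict, obj);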
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Message-Id: <20180224154033.29559-4-mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
[eblake: swap order from qobject_to(o, X), rebase to master, also a fix
to latent false-positive compiler complaint about hw/i386/acpi-build.c]
Signed-off-by: Eric Blake <eblake@redhat.com>
Both do nothing, as for the first one all callers
(parse_cpu_model() and qmp_query_cpu_model_())
should provide a non-NULL value, so just abort if that is not the case.
While at it, drop cpu_common_class_by_name(), which is not needed
any more as every target has a CPUClass::class_by_name callback
by now, though abort in case a new arch forgets to define one.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Message-Id: <1518013857-4372-1-git-send-email-imammedo@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
cpu_init(cpu_model) was replaced by cpu_create(cpu_type), so
no users are left; remove it.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Acked-by: David Gibson <david@gibson.dropbear.id.au> (ppc)
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <1518000027-274608-6-git-send-email-imammedo@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
It will be used to provide the class for resolving CPU names when
parsing the cpu model in system and user emulation code.
Along with the change, add the target to the null-machine tests, so
that when the switch to CPU_RESOLVING_TYPE happens,
it ensures that the null-machine use case still works.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Laurent Vivier <laurent@vivier.eu> (m68k)
Acked-by: David Gibson <david@gibson.dropbear.id.au> (ppc)
Acked-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de> (tricore)
Message-Id: <1518000027-274608-4-git-send-email-imammedo@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
[ehabkost: Added macro to riscv too]
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
tlbsync also needs to check the Guest Translation Shootdown Enable
(GTSE) bit in the Logical Partition Control Register (LPCR) to
determine at which privilege level it can be run.
See commit c6fd28fd57 ("target/ppc: Update tlbie to check privilege
level based on GTSE")
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
- small cleanup for xtensa registers dumping (-d cpu);
- add support for debugging linux-user process with xtensa-linux-gdb
(as opposed to xtensa-elf-gdb), which can only access unprivileged
registers;
- enable MTTCG for target/xtensa;
- cleanup in linux-user/mmap area making sure that it works correctly
with limited 30-bit-wide user address space;
- import xtensa-specific definitions from the linux kernel,
conditionalize user-only/softmmu-only code and add handlers for
signals, exceptions, process/thread creation and core registers dumping.
-----BEGIN PGP SIGNATURE-----
iQJHBAABCAAxFiEEK2eFS5jlMn3N6xfYUfnMkfg/oEQFAlqr9NsTHGpjbXZia2Jj
QGdtYWlsLmNvbQAKCRBR+cyR+D+gRHjDD/9dQxuirsTjU+oO2OMU5YjDBF6Hy+KA
O4hJoWh/jNyUzgZOAtmpbZmuB1GJ5gNDhl5lifEFIWtAqf/qi/M87ibCQbdjFQ+t
sT+FVgSU9X16J9wBKtUPV4DBMeMvJenHtFlCCw6oZxF5cnqGXw7e4yQtn7/KI8jT
ymu7hiCaGJJ4ao/FG8KbIs3iSpQcfbIN7kEfuL92tMNjVWWTnNVhPVxyg3Bojkib
pRFELL/BO3Ud3P83BncA5TNp6O1rFwKRYBK9nwLGWrjFMEbomdT5LWSZuZK9UVN9
aLoC/GnvGCnvAth8E4L0dDOmyz9MRDJ5rYJoaxoEVYzvz8rexVyAjpC/zOrJVxuK
xrgandQtrFGkp5NJD6QpM92b7YDyR1w1s24KlehZivzHoN83cN3CuCHLWcqgicza
/x4r/OQ4uiSUTex2Cg2hVQJR6m1LkJKa94Mimrd7G/zCHSF/BDks170o5DpW7JT8
QWfYTtZg13auzPsgZmGE+/b1o5PBXhnlBPzD983X6u5cgS5RWyik3jhmp5rEx8wH
sxV5kvMb96JlUDCuwPTu9zJhJ3rqbWtCR7+4Sh1PCcsr6vVgsV0EZHAapwrG5GPp
pOxLlZ54ObK3oSW6SB8TnS1rEiGkBHMhSL1O6VdKOvAXFPCVZsIGBGTpuf6MEn6c
hRg0iBGQ6GMUUw==
=UCny
-----END PGP SIGNATURE-----
Merge remote-tracking branch 'remotes/xtensa/tags/20180316-xtensa' into staging
target/xtensa linux-user support.
- small cleanup for xtensa registers dumping (-d cpu);
- add support for debugging linux-user process with xtensa-linux-gdb
(as opposed to xtensa-elf-gdb), which can only access unprivileged
registers;
- enable MTTCG for target/xtensa;
- cleanup in linux-user/mmap area making sure that it works correctly
with limited 30-bit-wide user address space;
- import xtensa-specific definitions from the linux kernel,
conditionalize user-only/softmmu-only code and add handlers for
signals, exceptions, process/thread creation and core registers dumping.
# gpg: Signature made Fri 16 Mar 2018 16:46:19 GMT
# gpg: using RSA key 51F9CC91F83FA044
# gpg: Good signature from "Max Filippov <filippov@cadence.com>"
# gpg: aka "Max Filippov <max.filippov@cogentembedded.com>"
# gpg: aka "Max Filippov <jcmvbkbc@gmail.com>"
# Primary key fingerprint: 2B67 854B 98E5 327D CDEB 17D8 51F9 CC91 F83F A044
* remotes/xtensa/tags/20180316-xtensa:
MAINTAINERS: fix W: address for xtensa
qemu-binfmt-conf.sh: add qemu-xtensa
target/xtensa: add linux-user support
linux-user: drop unused target_msync function
linux-user: fix target_mprotect/target_munmap error return values
linux-user: fix assertion in shmdt
linux-user: fix mmap/munmap/mprotect/mremap/shmat
target/xtensa: support MTTCG
target/xtensa: use correct number of registers in gdbstub
target/xtensa: mark register windows in the dump
target/xtensa: dump correct physical registers
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
# Conflicts:
# linux-user/syscall.c
Import list of syscalls from the kernel source. Conditionalize code/data
that is only used with softmmu. Implement exception handlers. Implement
a signal handler (only the core registers for now, no coprocessors or TIE).
Cc: Riku Voipio <riku.voipio@iki.fi>
Cc: Laurent Vivier <laurent@vivier.eu>
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
* SCSI fix to pass maximum transfer size (Daniel Barboza)
* chardev fixes and improved iothread support (Daniel Berrangé, Peter)
* checkpatch tweak (Eric)
* make help tweak (Marc-André)
* make more PCI NICs available with -net or -nic (myself)
* change default q35 NIC to e1000e (myself)
* SCSI support for NDOB bit (myself)
* membarrier system call support (myself)
* SuperIO refactoring (Philippe)
* miscellaneous cleanups and fixes (Thomas)
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2.0.22 (GNU/Linux)
iQEcBAABAgAGBQJapqaMAAoJEL/70l94x66DQoUH/Rvg+a8giz/SrEA4P8D3Cb2z
4GNbNUUoy4oU0ltD5IAMskMwpOsvl1batE0D+pKIlfO9NV4+Cj2kpgo0p9TxoYqM
VCby3wRtx27zb5nVytC6M++iIKXmeEMqXmFw61I6umddNPSl4IR3hiHEE0DM+7dV
UPIOvJeEiazyQaw3Iw+ZctNn8dDBKc/+6oxP9xRcYTaZ6hB4G9RZkqGNNSLcJkk7
R0UotdjzIZhyWMOkjIwlpTF4sWv8gsYUV4bPYKMYho5B0Obda2dBM3I1kpA8yDa/
xZ5lheOaAVBZvM5aMIcaQPa65MO9hLyXFmhMOgyfpJhLBBz6Qpa4OLLI6DeTN+0=
=UAgA
-----END PGP SIGNATURE-----
Merge remote-tracking branch 'remotes/bonzini/tags/for-upstream' into staging
* Record-replay lockstep execution, log dumper and fixes (Alex, Pavel)
* SCSI fix to pass maximum transfer size (Daniel Barboza)
* chardev fixes and improved iothread support (Daniel Berrangé, Peter)
* checkpatch tweak (Eric)
* make help tweak (Marc-André)
* make more PCI NICs available with -net or -nic (myself)
* change default q35 NIC to e1000e (myself)
* SCSI support for NDOB bit (myself)
* membarrier system call support (myself)
* SuperIO refactoring (Philippe)
* miscellaneous cleanups and fixes (Thomas)
# gpg: Signature made Mon 12 Mar 2018 16:10:52 GMT
# gpg: using RSA key BFFBD25F78C7AE83
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>"
# gpg: aka "Paolo Bonzini <pbonzini@redhat.com>"
# Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4 E2F7 7E15 100C CD36 69B1
# Subkey fingerprint: F133 3857 4B66 2389 866C 7682 BFFB D25F 78C7 AE83
* remotes/bonzini/tags/for-upstream: (69 commits)
tcg: fix cpu_io_recompile
replay: update documentation
replay: save vmstate of the asynchronous events
replay: don't process async events when warping the clock
scripts/replay-dump.py: replay log dumper
replay: avoid recursive call of checkpoints
replay: check return values of fwrite
replay: push replay_mutex_lock up the call tree
replay: don't destroy mutex at exit
replay: make locking visible outside replay code
replay/replay-internal.c: track holding of replay_lock
replay/replay.c: bump REPLAY_VERSION again
replay: save prior value of the host clock
replay: added replay log format description
replay: fix save/load vm for non-empty queue
replay: fixed replay_enable_events
replay: fix processing async events
cpu-exec: fix exception_index handling
hw/i386/pc: Factor out the superio code
hw/alpha/dp264: Use the TYPE_SMC37C669_SUPERIO
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
# Conflicts:
# default-configs/i386-softmmu.mak
# default-configs/x86_64-softmmu.mak
* Update kernel headers (Gerd, myself)
* SEV support (Brijesh)
I have not tested non-x86 compilation, but I reordered the SEV patches
so that all non-x86-specific changes go first to catch any possible
issues (which weren't there anyway :)).
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2.0.22 (GNU/Linux)
iQEcBAABAgAGBQJap/4yAAoJEL/70l94x66DmPoH/igfzYkxFyIHFqzb/hQEut3e
IJA05u9DBSqqdSvL0UeLdUgyJTeDM3S5kKZqZ38BPHIudwOGtydoIM2utWtPSejf
Z+mS77+dSgchEMgf1gxmD0oZ5TrO/2pdOYfaZZuQuGmGLruKsDgz6vH3F87cfk8b
yJSJkoZkFc8C9SpwQERWYuhXn2fYFxSBFgEMc9xSFN+zqQUFqeIfOJhwZ+txjAUl
y1EKlhhVyjkxTLR++SkzhKIJ8D5cycpcY/H19gw3ghHviY/tGwNLot3bLRPbwCM6
QvrXDf4rhvFHTmmOfliCI5y6Xgj0u7IZv2fVoKXEtKk1qyfyD4ZnouYTaqP/U9I=
=Q4/y
-----END PGP SIGNATURE-----
Merge remote-tracking branch 'remotes/bonzini/tags/for-upstream-sev' into staging
* Migrate MSR_SMI_COUNT (Liran)
* Update kernel headers (Gerd, myself)
* SEV support (Brijesh)
I have not tested non-x86 compilation, but I reordered the SEV patches
so that all non-x86-specific changes go first to catch any possible
issues (which weren't there anyway :)).
# gpg: Signature made Tue 13 Mar 2018 16:37:06 GMT
# gpg: using RSA key BFFBD25F78C7AE83
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>"
# gpg: aka "Paolo Bonzini <pbonzini@redhat.com>"
# Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4 E2F7 7E15 100C CD36 69B1
# Subkey fingerprint: F133 3857 4B66 2389 866C 7682 BFFB D25F 78C7 AE83
* remotes/bonzini/tags/for-upstream-sev: (22 commits)
sev/i386: add sev_get_capabilities()
sev/i386: qmp: add query-sev-capabilities command
sev/i386: qmp: add query-sev-launch-measure command
sev/i386: hmp: add 'info sev' command
cpu/i386: populate CPUID 0x8000_001F when SEV is active
sev/i386: add migration blocker
sev/i386: finalize the SEV guest launch flow
sev/i386: add support to LAUNCH_MEASURE command
target/i386: encrypt bios rom
sev/i386: add command to encrypt guest memory region
sev/i386: add command to create launch memory encryption context
sev/i386: register the guest memory range which may contain encrypted data
sev/i386: add command to initialize the memory encryption context
include: add psp-sev.h header file
sev/i386: qmp: add query-sev command
target/i386: add Secure Encrypted Virtualization (SEV) object
kvm: introduce memory encryption APIs
kvm: add memory encryption context
docs: add AMD Secure Encrypted Virtualization (SEV)
machine: add memory-encryption option
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
- emit TCG barriers for MEMW, EXTW, S32RI and L32AI;
- do atomic_cmpxchg_i32 for S32C1I.
Cc: Emilio G. Cota <cota@braap.org>
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
System emulation should provide access to all registers, userspace
emulation should only provide access to unprivileged registers.
Record register flags from GDB register map definition, calculate both
num_regs and num_core_regs if either is zero. Use num_regs in system
emulation, num_core_regs in userspace emulation gdbstub.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Add arrows that mark the beginning of register windows and the position of
the current window in the windowed register file.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
xtensa_cpu_dump_state outputs CPU physical registers as is, without
synchronization from the current window. That may result in different values
being printed for the current window and the corresponding physical registers.
Synchronize physical registers from the window before dumping.
Cc: qemu-stable@nongnu.org
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
The function can be used to get the current SEV capabilities.
The capabilities include the platform Diffie-Hellman key (PDH) and certificate
chain. The key can be provided to external entities which want to
establish a trusted channel between the SEV firmware and the guest owner.
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The command can be used by libvirt to query the SEV capabilities.
Cc: "Daniel P. Berrangé" <berrange@redhat.com>
Cc: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
Cc: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The command can be used by libvirt to retrieve the measurement of an SEV guest.
This measurement is a signature of the memory contents that were encrypted
through LAUNCH_UPDATE_DATA.
Cc: "Daniel P. Berrangé" <berrange@redhat.com>
Cc: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
Cc: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The command can be used to show the SEV information when memory
encryption is enabled on an AMD platform.
Cc: Eric Blake <eblake@redhat.com>
Cc: "Daniel P. Berrangé" <berrange@redhat.com>
Cc: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
Cc: Markus Armbruster <armbru@redhat.com>
Reviewed-by: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
When SEV is enabled, CPUID 0x8000_001F should provide additional
information regarding the feature (such as which page table bit is used
to mark the pages as encrypted, etc.).
The details for the memory encryption CPUID leaf are available in the AMD APM
(https://support.amd.com/TechDocs/24594.pdf), Section E.4.17.
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
SEV guest migration is not implemented yet.
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The SEV launch flow requires us to issue the LAUNCH_FINISH command before the
guest is ready to run.
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
During machine creation we encrypted the guest bios image; the
LAUNCH_MEASURE command can be used to retrieve the measurement of
the encrypted memory region. This measurement is a signature of
the memory contents that can be sent to the guest owner as an
attestation that the memory was encrypted correctly by the firmware.
VM management tools like libvirt can query the measurement using
query-sev-launch-measure QMP command.
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The KVM_SEV_LAUNCH_UPDATE_DATA command is used to encrypt a guest memory
region using the VM Encryption Key created using LAUNCH_START.
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The KVM_SEV_LAUNCH_START command creates a new VM encryption key (VEK).
The encryption key created with the command will be used for encrypting
the bootstrap images (such as guest bios).
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
When SEV is enabled, the hardware encryption engine uses a tweak such
that two identical plaintexts at different locations will have
different ciphertexts. So swapping or moving the ciphertexts of two guest
pages will not result in the plaintexts being swapped. Hence relocating
the physical backing pages of an SEV guest requires some additional
steps in the KVM driver. The KVM_MEMORY_ENCRYPT_{UN,}REG_REGION ioctls can be
used to register/unregister the guest memory regions which may contain
encrypted data. The KVM driver will internally handle relocating the physical
backing pages of registered memory regions.
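For reference, a minimal sketch of registering such a region with KVM (illustrative only; error handling is omitted and the helper name is made up, but the ioctl and struct come from the kernel's linux/kvm.h):
#include <linux/kvm.h>
#include <sys/ioctl.h>
#include <stdint.h>
/* Register a guest RAM range that may hold encrypted data, so the KVM
 * driver can manage relocation of its physical backing pages. */
static int register_enc_region(int vm_fd, void *hva, uint64_t size)
{
    struct kvm_enc_region range = {
        .addr = (uintptr_t)hva, /* userspace address of the region */
        .size = size,
    };
    return ioctl(vm_fd, KVM_MEMORY_ENCRYPT_REG_REGION, &range);
}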
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
When memory encryption is enabled, the KVM_SEV_INIT command is used to
initialize the platform. The command loads the SEV related persistent
data from non-volatile storage and initializes the platform context.
This command should be issued first, before invoking any other guest
commands provided by the SEV firmware.
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Using a local m68k floatx80_cosh()
[copied from previous:
Written by Andreas Grabher for Previous, NeXT Computer Emulator.]
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Message-Id: <20180312202728.23790-12-laurent@vivier.eu>
Using a local m68k floatx80_sinh()
[copied from previous:
Written by Andreas Grabher for Previous, NeXT Computer Emulator.]
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Message-Id: <20180312202728.23790-11-laurent@vivier.eu>
Using local m68k floatx80_tanh() and floatx80_etoxm1()
[copied from previous:
Written by Andreas Grabher for Previous, NeXT Computer Emulator.]
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Message-Id: <20180312202728.23790-10-laurent@vivier.eu>
Using a local m68k floatx80_atanh()
[copied from previous:
Written by Andreas Grabher for Previous, NeXT Computer Emulator.]
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Message-Id: <20180312202728.23790-9-laurent@vivier.eu>
Using a local m68k floatx80_acos()
[copied from previous:
Written by Andreas Grabher for Previous, NeXT Computer Emulator.]
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Message-Id: <20180312202728.23790-8-laurent@vivier.eu>
Using a local m68k floatx80_asin()
[copied from previous:
Written by Andreas Grabher for Previous, NeXT Computer Emulator.]
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Message-Id: <20180312202728.23790-7-laurent@vivier.eu>
Using a local m68k floatx80_atan()
[copied from previous:
Written by Andreas Grabher for Previous, NeXT Computer Emulator.]
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Message-Id: <20180312202728.23790-6-laurent@vivier.eu>
Using a local m68k floatx80_cos()
[copied from previous:
Written by Andreas Grabher for Previous, NeXT Computer Emulator.]
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Message-Id: <20180312202728.23790-4-laurent@vivier.eu>
Using a local m68k floatx80_sin()
[copied from previous:
Written by Andreas Grabher for Previous, NeXT Computer Emulator.]
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Message-Id: <20180312202728.23790-3-laurent@vivier.eu>
Using a local m68k floatx80_tan()
[copied from previous:
Written by Andreas Grabher for Previous, NeXT Computer Emulator.]
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Message-Id: <20180312202728.23790-2-laurent@vivier.eu>
The QMP query command can be used to retrieve the SEV information when
memory encryption is enabled on an AMD platform.
Cc: Eric Blake <eblake@redhat.com>
Cc: "Daniel P. Berrangé" <berrange@redhat.com>
Cc: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
Cc: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Add a new memory encryption object 'sev-guest'. The object will be used
to create encrypted VMs on AMD EPYC CPUs. The object provides the properties
to pass the guest owner's public Diffie-Hellman key, guest policy and session
information required to create the memory encryption context within the
SEV firmware.
E.g. to launch an SEV guest:
# $QEMU \
-object sev-guest,id=sev0 \
-machine ....,memory-encryption=sev0
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This MSR returns the number of #SMIs that occurred on the
CPU since boot.
KVM commit 52797bf9a875 ("KVM: x86: Add emulation of MSR_SMI_COUNT")
introduced support for emulating this MSR.
This commit adds support for QEMU to save/load this
MSR for migration purposes.
Signed-off-by: Liran Alon <liran.alon@oracle.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Add Intel Processor Trace related definitions. Also add the
corresponding parts to kvm_get/set_msr and vmstate.
Signed-off-by: Chao Peng <chao.p.peng@linux.intel.com>
Signed-off-by: Luwei Kang <luwei.kang@intel.com>
Message-Id: <1520182116-16485-2-git-send-email-luwei.kang@intel.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Expose the Intel Processor Trace feature to the guest.
To make Intel PT live migration safe and to get the same CPUID information
with the same CPU model on different hosts, CPUID[14] is constant in this
patch. Intel PT using EPT is first supported in IceLake, so the CPUID[14]
values obtained on that machine are used as the default. Intel PT will be
disabled if a machine doesn't support this minimal feature list.
Signed-off-by: Chao Peng <chao.p.peng@linux.intel.com>
Signed-off-by: Luwei Kang <luwei.kang@intel.com>
Message-Id: <1520182116-16485-1-git-send-email-luwei.kang@intel.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Add the KVM_HINTS_DEDICATED performance hint; guests check this feature bit
to determine if they run on dedicated vCPUs, allowing optimizations such
as the usage of qspinlocks.
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
Message-Id: <1518185725-69559-1-git-send-email-wanpengli@tencent.com>
[ehabkost: Renamed property to kvm-hint-dedicated]
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Unify half a dozen copies of very similar code (the only difference being
whether comparisons were case-sensitive) and use it also in Tricore,
which did not do any sorting of CPU model names.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* i.MX: Add i.MX7 SOC implementation and i.MX7 Sabre board
* Report the correct core count in A53 L2CTLR on the ZynqMP board
* linux-user: preliminary SVE support work (signal handling)
* hw/arm/boot: fix memory leak in case of error loading ELF file
* hw/arm/boot: avoid reading off end of buffer if passed very
small image file
* hw/arm: Use more CONFIG switches for the object files
* target/arm: Add "-cpu max" support
* hw/arm/virt: Support -machine gic-version=max
* hw/sd: improve debug tracing
* hw/sd: sdcard: Add the Tuning Command (CMD 19)
* MAINTAINERS: add Philippe as odd-fixes maintainer for SD
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1
iQIcBAABCAAGBQJaosNHAAoJEDwlJe0UNgzexcoP/RHqkdKk91Dzg4MirndihMrJ
nCu4n1J68uEOt79SlS4ES+EVKmfPvo2DP94Kp8L4bjqiHuvSMQfjX7YnPzCvQKOC
Idz4BklbjYg3QP+UWFysoHvv5vXvytRhwu6LeVoTgebpBIwvKKyQh/89mwp1hKRm
ZkpdkTRP2lsQUCG36kYUiAyhcJH+9nxQBtecfYjpzKsQg49Piltt999l9c8VzTfw
yY72rEw4vFKFUjHfkbi0m2lPhZWIwGMoU0/qFNfIrMRi4vp6WDeQaRYgDgxpGfwy
ZCbHVQeuQg87xD48HQMoQO+F3iaCvbjllDKnqAL80W8NreAyKJX8e8Cz9FD2k0n5
RvDeQ6QOq5jzOW6uSDlJgT71kajiEzJH43TLLB6k7/mdJICt/JGU7EWVSP6C0fZz
cEjRLfx8JoZN4HmFy2f8K+IwdWpGkshzTVO1XmFsWZmSHUD+6vcUv9Nd1MP3ACGN
BPqmbjk2guiacuKs3gbOSCB1mLWXCu4HMAc6ppO1d3pVHRWaR9UAVCiYJNuXd/VU
dXELAbcP6WOwHteBxgwGnEALcgz40gx149+gePD3MLc3ImZEqWba91Ghp8T6NNSu
p8ZJqN7Dsow6mOxaS5iYfFfXdf8K13cTmB371Q+0IOi3Z0cO2CgEmFZ8fj5yKCSK
wjtYIFhwzFdWqzGb4e/Q
=CWr5
-----END PGP SIGNATURE-----
Merge remote-tracking branch 'remotes/pmaydell/tags/pull-target-arm-20180309' into staging
target-arm queue:
* i.MX: Add i.MX7 SOC implementation and i.MX7 Sabre board
* Report the correct core count in A53 L2CTLR on the ZynqMP board
* linux-user: preliminary SVE support work (signal handling)
* hw/arm/boot: fix memory leak in case of error loading ELF file
* hw/arm/boot: avoid reading off end of buffer if passed very
small image file
* hw/arm: Use more CONFIG switches for the object files
* target/arm: Add "-cpu max" support
* hw/arm/virt: Support -machine gic-version=max
* hw/sd: improve debug tracing
* hw/sd: sdcard: Add the Tuning Command (CMD 19)
* MAINTAINERS: add Philippe as odd-fixes maintainer for SD
# gpg: Signature made Fri 09 Mar 2018 17:24:23 GMT
# gpg: using RSA key 3C2525ED14360CDE
# gpg: Good signature from "Peter Maydell <peter.maydell@linaro.org>"
# gpg: aka "Peter Maydell <pmaydell@gmail.com>"
# gpg: aka "Peter Maydell <pmaydell@chiark.greenend.org.uk>"
# Primary key fingerprint: E1A5 C593 CD41 9DE2 8E83 15CF 3C25 25ED 1436 0CDE
* remotes/pmaydell/tags/pull-target-arm-20180309: (25 commits)
MAINTAINERS: Add entries for SD (SDHCI, SDBus, SDCard)
sdhci: Fix a typo in comment
sdcard: Add the Tuning Command (CMD19)
sdcard: Display which protocol is used when tracing (SD or SPI)
sdcard: Display command name when tracing CMD/ACMD
sdcard: Do not trace CMD55, except when we already expect an ACMD
hw/arm/virt: Support -machine gic-version=max
hw/arm/virt: Add "max" to the list of CPU types "virt" supports
target/arm: Make 'any' CPU just an alias for 'max'
target/arm: Add "-cpu max" support
target/arm: Move definition of 'host' cpu type into cpu.c
target/arm: Query host CPU features on-demand at instance init
arm: avoid heap-buffer-overflow in load_aarch64_image
arm: fix load ELF error leak
hw/arm: Use more CONFIG switches for the object files
aarch64-linux-user: Add support for SVE signal frame records
aarch64-linux-user: Add support for EXTRA signal frame records
aarch64-linux-user: Remove struct target_aux_context
aarch64-linux-user: Split out helpers for guest signal handling
linux-user: Implement aarch64 PR_SVE_SET/GET_VL
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Now we have a working '-cpu max', the linux-user-only
'any' CPU is pretty much the same thing, so implement it
that way.
For the moment we don't add any of the extra feature bits
to the system-emulation "max", because we don't set the
ID register bits we would need to to advertise those
features as present.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20180308130626.12393-5-peter.maydell@linaro.org
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Add support for "-cpu max" for ARM guests. This CPU type behaves
like "-cpu host" when KVM is enabled, and like a system CPU with
the maximum possible feature set otherwise. (Note that this means
it won't be migratable across versions, as we will likely add
features to it in future.)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20180308130626.12393-4-peter.maydell@linaro.org
Move the definition of the 'host' cpu type into cpu.c, where all the
other CPU types are defined. We can do this now that we've decoupled it
from the KVM-specific host feature probing. This means we now create
the type unconditionally (assuming we were built with KVM support at
all), but if you try to use it without -enable-kvm this will end
up in the "host cpu probe failed and KVM not enabled" path in
arm_cpu_realizefn(), which gives an appropriate error message.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20180308130626.12393-3-peter.maydell@linaro.org