Use gen_bstrins/gen_bstrpic to replace gen_rr_ms_ls.
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Song Gao <gaosong@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220930024510.800005-2-gaosong@loongson.cn>
Since commit 4047368938 "accel/tcg: Introduce tlb_set_page_full" we
have been seeing this assert when running Tock on the OpenTitan machine:
../accel/tcg/cputlb.c:1294: tlb_set_page_with_attrs: Assertion `is_power_of_2(size)' failed.
The issue is that pmp_get_tlb_size() would return a TLB size that wasn't
a power of 2. The size was also smaller than TARGET_PAGE_SIZE.
This patch ensures that any TLB size less than TARGET_PAGE_SIZE is
rounded down to 1 to ensure it's a valid size.
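As a minimal sketch of that rounding rule (the helper name and types here are illustrative, not the actual target/riscv code):
```
/* Sketch only: clamp a PMP-derived TLB size to a valid value.  Any size
 * smaller than TARGET_PAGE_SIZE is rounded down to 1, which is a valid
 * power-of-2 size for tlb_set_page_with_attrs(). */
static unsigned long clamp_pmp_tlb_size(unsigned long pmp_size,
                                        unsigned long target_page_size)
{
    return pmp_size < target_page_size ? 1 : pmp_size;
}
```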
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221012011449.506928-1-alistair.francis@opensource.wdc.com
Message-Id: <20221012011449.506928-1-alistair.francis@opensource.wdc.com>
Add support for saving/restoring extended save states when signals
are delivered. This allows using AVX, MPX or PKRU registers in
signal handlers.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The MSR_CORE_THREAD_COUNT MSR describes CPU package topology, such as the
number of threads and cores for a given package. This is information that QEMU has
readily available and can provide through the new user space MSR deflection
interface.
This patch propagates the existing hvf logic from patch 027ac0cb51
("target/i386/hvf: add rdmsr 35H MSR_CORE_THREAD_COUNT") to KVM.
Signed-off-by: Alexander Graf <agraf@csgraf.de>
Message-Id: <20221004225643.65036-4-agraf@csgraf.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
KVM has grown support to deflect arbitrary MSRs to user space since
Linux 5.10. For now we don't expect to make a lot of use of this
feature, so let's expose it the easiest way possible: with up to 16
individually maskable MSRs.
This patch adds a kvm_filter_msr() function that other code can call
to install a hook on KVM MSR reads or writes.
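A rough sketch of the hook-installation pattern described above; the typedefs, names, and the fixed-size table are illustrative assumptions, not the actual QEMU declarations:
```
#include <stdbool.h>
#include <stdint.h>

/* Sketch of an MSR filter registration API; names and signatures are
 * illustrative assumptions, not the exact QEMU prototypes. */
typedef bool (*msr_rdmsr_handler)(void *cpu, uint32_t msr, uint64_t *val);
typedef bool (*msr_wrmsr_handler)(void *cpu, uint32_t msr, uint64_t val);

struct msr_filter_entry {
    uint32_t msr;
    msr_rdmsr_handler rdmsr;
    msr_wrmsr_handler wrmsr;
};

#define MSR_FILTER_MAX 16   /* "up to 16 individually maskable MSRs" */

static struct msr_filter_entry msr_filters[MSR_FILTER_MAX];
static unsigned int msr_filter_count;

/* Install a hook for reads/writes of one MSR; returns false when full. */
static bool filter_msr_install(uint32_t msr, msr_rdmsr_handler rd,
                               msr_wrmsr_handler wr)
{
    if (msr_filter_count == MSR_FILTER_MAX) {
        return false;
    }
    msr_filters[msr_filter_count++] = (struct msr_filter_entry){ msr, rd, wr };
    return true;
}
```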
Signed-off-by: Alexander Graf <agraf@csgraf.de>
Message-Id: <20221004225643.65036-3-agraf@csgraf.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Intel CPUs starting with Haswell-E implement a new MSR called
MSR_CORE_THREAD_COUNT which exposes the number of threads and cores
inside of a package.
This MSR is used by XNU to populate internal data structures and not
implementing it prevents virtual machines with more than 1 vCPU from
booting if the emulated CPU generation is at least Haswell-E.
This patch propagates the existing hvf logic from patch 027ac0cb51
("target/i386/hvf: add rdmsr 35H MSR_CORE_THREAD_COUNT") to TCG.
Signed-off-by: Alexander Graf <agraf@csgraf.de>
Message-Id: <20221004225643.65036-2-agraf@csgraf.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20221001140935.465607-27-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Expand this function at each of its callers.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20221001140935.465607-26-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Create a tcg global temp for this, and use it instead of explicit stores.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20221001140935.465607-25-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20221001140935.465607-24-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
These functions have only one caller, and the logic is more
obvious this way.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20221001140935.465607-23-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
These functions are always passed aflag, so we might as well
read it from DisasContext directly. While we're at it, use
a common subroutine for these two functions.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20221001140935.465607-22-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
With gen_jmp_rel, we may chain between two translation blocks
which may only be separated because of TB size limits.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221001140935.465607-21-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20221001140935.465607-20-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
With gen_jmp_rel, we may chain to the next tb instead of merely
writing to eip and exiting. For repz, subtract cur_insn_len to
restart the current insn.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20221001140935.465607-19-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Create a common helper for pc-relative branches. The jmp jb insn
was missing a mask for CODE32. In all cases the CODE64 check was
incorrectly placed, allowing PREFIX_DATA to truncate %rip to 16 bits.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20221001140935.465607-18-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
We can set is_jmp early, using only one if, and let that
be overwritten by gen_rep*'s calls to gen_jmp_tb.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221001140935.465607-17-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Create helpers for loading the address of the next insn.
Use tcg_constant_* in adjacent code where convenient.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221001140935.465607-16-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Use i32 not int or tl for eip and cs arguments.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221001140935.465607-15-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Drop the unused dest argument to gen_jr().
Remove most of the calls to gen_jr, and use DISAS_JUMP.
Remove some unused loads of eip for lcall and ljmp.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221001140935.465607-14-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
All callers pass s->base.pc_next and s->pc, which we can just
as well compute within the functions. Pull out common helpers
and reduce the amount of code under macros.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221001140935.465607-13-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Create common routines for computing the length of the insn.
Use tcg_constant_i32 in the new function, while we're at it.
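A sketch of what such a routine boils down to, assuming the insn length is the distance between the decoder's current position and the start of the insn (names here are illustrative):
```
#include <stdint.h>

/* Sketch: length of the insn currently being translated, given the insn's
 * start address and the decoder's current position. */
static inline int cur_insn_len_sketch(uint64_t insn_start, uint64_t cur_pc)
{
    return (int)(cur_pc - insn_start);
}
```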
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221001140935.465607-12-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Replace lone calls to gen_eob() with the new enumerator.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221001140935.465607-11-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Replace sequences of gen_update_cc_op, gen_update_eip_next,
and gen_eob with the new is_jmp enumerator.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221001140935.465607-10-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Set is_jmp properly in gen_movl_seg_T0, so that the callers
need do nothing special.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221001140935.465607-9-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Add a few DISAS_TARGET_* aliases to reduce the number of
calls to gen_eob() and gen_eob_inhibit_irq(). So far,
only update i386_tr_translate_insn for exiting the block
because of single-step or previous inhibit irq.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221001140935.465607-8-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Sync EIP before exiting a translation block.
Replace all gen_jmp_im that use s->pc.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221001140935.465607-7-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Like gen_update_cc_op, sync EIP before doing something
that could raise an exception. Replace all gen_jmp_im
that use s->base.pc_next.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221001140935.465607-6-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
All callers pass s->base.pc_next and s->pc, which we can just as
well compute within the function. Adjust to use tcg_constant_i32
while we're at it.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221001140935.465607-5-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
All callers pass s->base.pc_next - s->cs_base, which we can just
as well compute within the function. Note the special case of
EXCP_VSYSCALL in which s->cs_base wasn't subtracted, but cs_base
is always zero in 64-bit mode, when vsyscall is used.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221001140935.465607-4-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Instead of returning the new pc, which is present in
DisasContext, return true if an insn was translated.
This is false when we detect a page crossing and must
undo the insn under translation.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20221001140935.465607-3-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The DisasContext member and the disas_insn local variable of
the same name are identical to DisasContextBase.pc_next.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20221001140935.465607-2-richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
There are cases in which a malicious virtual machine can cause the CPU to
get stuck (because event windows never open up), e.g. an infinite loop in
microcode when handling a nested #AC (CVE-2015-5307). No event window means
no event (NMI, SMI or IRQ) can be delivered, which leaves the CPU
unavailable to the host or other VMs. Notify VM exit is introduced to
mitigate such attacks: it generates a VM exit if no event window occurs in
VM non-root mode for a specified amount of time (the notify window).
A new KVM capability KVM_CAP_X86_NOTIFY_VMEXIT is exposed to user space
so that the user can query the capability and set the expected notify
window when creating VMs. The format of the argument when enabling this
capability is as follows:
Bits 63:32 - notify window as specified on the QEMU command line
Bits 31:0  - flags (e.g. KVM_X86_NOTIFY_VMEXIT_ENABLED is set to
enable the feature)
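A small sketch of assembling an argument with this layout; KVM_X86_NOTIFY_VMEXIT_ENABLED is named above, but its numeric value here is a placeholder assumption (see the kernel headers for the real definition):
```
#include <stdint.h>

#define KVM_X86_NOTIFY_VMEXIT_ENABLED (1u << 0)   /* placeholder value */

/* Bits 63:32 carry the notify window, bits 31:0 carry the flags. */
static uint64_t notify_vmexit_cap_arg(uint32_t notify_window)
{
    return ((uint64_t)notify_window << 32) | KVM_X86_NOTIFY_VMEXIT_ENABLED;
}
```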
Users can configure the feature by a new (x86 only) accel property:
qemu -accel kvm,notify-vmexit=run|internal-error|disable,notify-window=n
The default option for notify-vmexit is run, which enables the capability
and does nothing when the exit happens. The internal-error option raises a
KVM internal error when the exit happens. The disable option does not
enable the capability. The default value of notify-window is 0, and it is
valid only when notify-vmexit is not disabled. notify-window must be
non-negative; it is even safe to set it to zero, since an internal hardware
threshold is added to avoid false positives.
A notify VM exit may happen with VM_CONTEXT_INVALID set in the exit
qualification (although no cases are anticipated that would set this bit),
which means the VM context is corrupted. This is reflected in the flags of
the KVM_EXIT_NOTIFY exit. If the KVM_NOTIFY_CONTEXT_INVALID bit is set,
raise a KVM internal error unconditionally.
Acked-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Chenyi Qiang <chenyi.qiang@intel.com>
Message-Id: <20220929072014.20705-5-chenyi.qiang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Now we have an enum for the granule size, use it in the
ARMVAParameters struct instead of the using16k/using64k bools.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221003162315.2833797-3-peter.maydell@linaro.org
Arm CPUs support some subset of the granule (page) sizes 4K, 16K and
64K. The guest selects the one it wants using bits in the TCR_ELx
registers. If it tries to program these registers with a value that
is either reserved or which requests a size that the CPU does not
implement, the architecture requires that the CPU behaves as if the
field was programmed to some size that has been implemented.
Currently we don't implement this, and instead let the guest use any
granule size, even if the CPU ID register fields say it isn't
present.
Make aa64_va_parameters() check against the supported granule size
and force use of a different one if it is not implemented.
(A subsequent commit will make ARMVAParameters use the new enum
rather than the current pair of using16k/using64k bools.)
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20221003162315.2833797-2-peter.maydell@linaro.org
Adjust GetPhysAddrResult to fill in CPUTLBEntryFull,
so that it may be passed directly to tlb_set_page_full.
The change is large, but mostly mechanical. The major
non-mechanical change is page_size -> lg_page_size.
Most of the time this is obvious, and is related to
TARGET_PAGE_BITS.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20221001162318.153420-21-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Do not apply memattr or shareability for Stage2 translations.
Make sure to apply HCR_{DC,DCT} only to Regime_EL10, per the
pseudocode in AArch64.S1DisabledOutput.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20221001162318.153420-20-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-19-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Use arm_hcr_el2_eff_secstate instead of arm_hcr_el2_eff, so
that we use is_secure instead of the current security state.
These AT* operations have been broken since arm_hcr_el2_eff
gained a check for "el2 enabled" for Secure EL2.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-18-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
These subroutines did not need ENV for anything except
retrieving the effective value of HCR anyway.
We have computed the effective value of HCR in the callers,
and this will be especially important for interpreting HCR
in a non-current security state.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-17-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This value is unused.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20221001162318.153420-16-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Rename the argument to is_secure_ptr, and introduce a
local variable is_secure with the value. We only write
back to the pointer toward the end of the function.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-15-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
For page walking, we may require HCR for a security state
that is not "current".
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-14-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The effect of TGE does not only apply to non-secure state,
now that Secure EL2 exists.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-13-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Use a switch on mmu_idx for the a-profile indexes, instead of
three different if's vs regime_el and arm_mmu_idx_is_stage1_of_2.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-12-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
For a-profile aarch64, which does not bank system registers, it takes
quite a lot of code to switch between security states. In the process,
registers such as TCR_EL{1,2} must be swapped, which in itself requires
the flushing of softmmu tlbs. Therefore it doesn't buy us anything to
separate tlbs by security state.
Retain the distinction between Stage2 and Stage2_S.
This will be important as we implement FEAT_RME, and do not wish to
add a third set of mmu indexes for Realm state.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-11-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Use get_phys_addr_with_secure directly. For a-profile, this is the
one place where the value of is_secure may not equal arm_is_secure(env).
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-10-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This is the last use of regime_is_secure; remove it
entirely before changing the layout of ARMMMUIdx.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-9-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Remove the use of regime_is_secure from arm_tr_init_disas_context.
Instead, provide the value of v8m_secure directly from tb_flags.
Rather than use regime_is_secure, use the env->v7m.secure directly,
as per arm_mmu_idx_el.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-8-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Remove the use of regime_is_secure from v7m_read_half_insn, using
the new parameter instead.
As it happens, both callers pass true, propagated from the argument
to arm_v7m_mmu_idx_for_secstate which created the mmu_idx argument,
but that is a detail of v7m_handle_execute_nsc we need not expose
to the callee.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Retain the existing get_phys_addr interface using the security
state derived from mmu_idx. Move the kerneldoc comments to the
header file where they belong.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Remove the use of regime_is_secure from regime_translation_disabled,
using the new parameter instead.
This fixes a bug in S1_ptw_translate and get_phys_addr where we had
passed ARMMMUIdx_Stage2 and not ARMMMUIdx_Stage2_S to determine if
Stage2 is disabled, affecting FEAT_SEL2.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Pass the correct stage2 mmu_idx to regime_translation_disabled,
which we computed afterward.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20221001162318.153420-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Remove the use of regime_is_secure from get_phys_addr_lpae,
using the new parameter instead.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
While the stage2 call to get_phys_addr_lpae should never set
attrs.secure when given a non-secure input, it's just as easy
to make the final update to attrs.secure be unconditional and
false in the case of non-secure input.
Suggested-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221007152159.1414065-1-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The starting security state comes with the translation regime,
not the current state of arm_is_secure_below_el3().
Create a new local variable, s2walk_secure, which does not need
to be written back to result->attrs.secure -- we compute that
value later, after the S2 walk is complete.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20221001162318.153420-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Updates write_scr() to allow setting SCR_EL3.EnTP2 when FEAT_SME is
implemented. SCR_EL3 being a 64-bit register, valid_mask is changed
to uint64_t and the SCR_* constants in target/arm/cpu.h are extended
to 64-bit so that masking and bitwise not (~) behave as expected.
This enables booting Linux with Trusted Firmware-A at EL3 with
"-M virt,secure=on -cpu max".
Cc: qemu-stable@nongnu.org
Fixes: 78cb977666 ("target/arm: Enable SME for -cpu max")
Signed-off-by: Jerome Forissier <jerome.forissier@linaro.org>
Reviewed-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221004072354.27037-1-jerome.forissier@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Occasionally the KVM_CREATE_VM ioctl can return EINTR, even though
there is no pending signal to be taken. In commit 94ccff1338
we added a retry-on-EINTR loop to the KVM_CREATE_VM call in the
generic KVM code. Adopt the same approach for the use of the
ioctl in the Arm-specific KVM code (where we use it to create a
scratch VM for probing for various things).
For more information, see the mailing list thread:
https://lore.kernel.org/qemu-devel/8735e0s1zw.wl-maz@kernel.org/
Reported-by: Vitaly Chikunov <vt@altlinux.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Vitaly Chikunov <vt@altlinux.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Message-id: 20220930113824.1933293-1-peter.maydell@linaro.org
Several hypervisor capabilities in KVM are target-specific. When exposed
to QEMU users as accelerator properties (i.e. -accel kvm,prop=value), they
should not be available for all targets.
Add a hook for targets to add their own properties to -accel kvm, for
now no such property is defined.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20220929072014.20705-3-chenyi.qiang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
KVM never loses direct triple faults, i.e. those detected by hardware and
morphed into a VM-Exit by KVM. But for triple faults synthesized by KVM,
e.g. on the RSM path, if KVM exits to userspace before the request
is serviced, userspace could migrate the VM and lose the triple fault.
A new flag KVM_VCPUEVENT_VALID_TRIPLE_FAULT is defined to signal that
the event.triple_fault_pending field contains a valid state if the
KVM_CAP_X86_TRIPLE_FAULT_EVENT capability is enabled.
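A sketch of how user space might consume the new flag; the struct is simplified and the flag's numeric value is a placeholder, with the names taken from the text above rather than from the kernel headers:
```
#include <stdint.h>

#define KVM_VCPUEVENT_VALID_TRIPLE_FAULT_SKETCH (1u << 10)  /* placeholder */

struct vcpu_events_sketch {
    uint32_t flags;
    uint8_t  triple_fault_pending;
};

/* Only trust triple_fault_pending when the valid bit is set (and the
 * KVM_CAP_X86_TRIPLE_FAULT_EVENT capability was enabled beforehand). */
static int triple_fault_pending(const struct vcpu_events_sketch *ev)
{
    return (ev->flags & KVM_VCPUEVENT_VALID_TRIPLE_FAULT_SKETCH) &&
           ev->triple_fault_pending;
}
```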
Acked-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Chenyi Qiang <chenyi.qiang@intel.com>
Message-Id: <20220929072014.20705-2-chenyi.qiang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
It's always better to convey the type of a pointer if at all
possible. So let's add the DumpState typedef to typedefs.h and move
the dump note functions from the opaque pointers to DumpState
pointers.
Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
CC: Peter Maydell <peter.maydell@linaro.org>
CC: Cédric Le Goater <clg@kaod.org>
CC: Daniel Henrique Barboza <danielhb413@gmail.com>
CC: David Gibson <david@gibson.dropbear.id.au>
CC: Greg Kurz <groug@kaod.org>
CC: Palmer Dabbelt <palmer@dabbelt.com>
CC: Alistair Francis <alistair.francis@wdc.com>
CC: Bin Meng <bin.meng@windriver.com>
CC: Cornelia Huck <cohuck@redhat.com>
CC: Thomas Huth <thuth@redhat.com>
CC: Richard Henderson <richard.henderson@linaro.org>
CC: David Hildenbrand <david@redhat.com>
Acked-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20220811121111.9878-2-frankja@linux.ibm.com>
This helps us construct strings elsewhere before echoing to the
monitor. It avoids having to jump through hoops like:
monitor_printf(mon, "%s", s->str);
It will be useful in the following patches, but for now convert all
existing plain "%s" printfs to use the _puts API.
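For example, assuming the new helper is spelled monitor_puts(), the pattern above becomes:
```
/* Before: forced through a format string. */
monitor_printf(mon, "%s", s->str);

/* After: echo the prebuilt string directly (helper name assumed). */
monitor_puts(mon, s->str);
```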
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220929114231.583801-33-alex.bennee@linaro.org>
Bug fix in gen_tcg_funcs.py
Merge tag 'pull-hex-20221003' of https://github.com/quic/qemu into staging
Make store handling faster and more robust
Bug fix in gen_tcg_funcs.py
# -----BEGIN PGP SIGNATURE-----
#
# iQEzBAABCgAdFiEENjXHiM5iuR/UxZq0ewJE+xLeRCIFAmM7JS4ACgkQewJE+xLe
# RCJXxQf9ESfI6LVoB1VBsMs69WOHqhy1HUEVzM4Ku+CgDCNaFRRz7xFoy/sv4FOX
# D7h5aYVuCLrX/KfttV6V+1GXX/XIyjMN81uZZ8/eiCvjt7D/9fkrUxp9E1Gh6KlV
# Dci21OYjh4aStd4tXin0vPHN5wG+IuuYuSzj0Xvu8SzRjFYKsFkjfxPrVsm1zWvN
# G1FfiUJ6AveRf9SJVuMTmLHY7jo9hg0/tpm7YpnxlIgzDVZbZDa1yDwaLEg/m6AT
# GFHli/nOEsL1c6mbYmvVnGoSupjEj0+MfNIeOUrn8D5Gd66OgvU+FVVFBJQ4ZKi6
# ZuckxBjBE3d5XKyxCVryRA3at+WLYA==
# =ron6
# -----END PGP SIGNATURE-----
# gpg: Signature made Mon 03 Oct 2022 14:08:46 EDT
# gpg: using RSA key 3635C788CE62B91FD4C59AB47B0244FB12DE4422
# gpg: Good signature from "Taylor Simpson (Rock on) <tsimpson@quicinc.com>" [unknown]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 3635 C788 CE62 B91F D4C5 9AB4 7B02 44FB 12DE 4422
* tag 'pull-hex-20221003' of https://github.com/quic/qemu:
Hexagon (gen_tcg_funcs.py): avoid duplicated tcg code on A_CVI_NEW
Hexagon (target/hexagon) move store size tracking to translation
Hexagon (target/hexagon) Change decision to set pkt_has_store_s[01]
Hexagon (target/hexagon) add instruction attributes from archlib
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
The value previously chosen overlaps GUSA_MASK.
Rename all DELAY_SLOT_* and GUSA_* defines to emphasize
that they are included in TB_FLAGS. Add aliases for the
FPSCR and SR bits that are included in TB_FLAGS, so that
we don't accidentally reassign those bits.
Fixes: 4da06fb306 ("target/sh4: Implement prctl_unalign_sigbus")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/856
Reviewed-by: Yoshinori Sato <ysato@users.sourceforge.jp>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The availability of tb->pc will shortly be conditional.
Introduce accessor functions to minimize ifdefs.
Pass around a known pc to places like tcg_gen_code,
where the caller must already have the value.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
When PAGE_WRITE_INV is set when calling tlb_set_page,
we immediately set TLB_INVALID_MASK in order to force
tlb_fill to be called on the next lookup. Here in
probe_access_internal, we have just called tlb_fill
and eliminated true misses, thus the lookup must be valid.
This allows us to remove a warning comment from s390x.
There doesn't seem to be a reason to change the code though.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
This structure will shortly contain more than just
data for accessing MMIO. Rename the 'addr' member
to 'xlat_section' to more clearly indicate its purpose.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
There is no need to guard g_free(P) with if (P): g_free(NULL) is safe.
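For instance, the two forms below are equivalent, so the guard can simply be dropped:
```
/* Redundant guard. */
if (p) {
    g_free(p);
}

/* Equivalent, since g_free(NULL) is a no-op. */
g_free(p);
```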
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220923090428.93529-1-armbru@redhat.com>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Hexagon instructions with the A_CVI_NEW attribute produce a vector value
that can be used in the same packet. The python function responsible for
generating code for such instructions has a typo ("if" instead of
"elif"), which makes genptr_dst_write_ext() be executed twice, thus also
generating the same tcg code twice. Fortunately, this doesn't cause any
problems for correctness, but it is less efficient than it could be. Fix
it by using an "elif" and avoiding the unnecessary extra code gen.
Signed-off-by: Matheus Tavares Bernardino <quic_mathbern@quicinc.com>
Signed-off-by: Taylor Simpson <tsimpson@quicinc.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Taylor Simpson <tsimpson@quicinc.com>
Message-Id: <fa706b192b2a3a0ffbd399fa8dbf0d5b2c5b82d9.1664568492.git.quic_mathbern@quicinc.com>
New KVM_CLOCK flags were added in the kernel (commit c68dc1b577eabd5605c6c7c08f3e07ae18d30d5d):
```
+ #define KVM_CLOCK_VALID_FLAGS \
+ (KVM_CLOCK_TSC_STABLE | KVM_CLOCK_REALTIME | KVM_CLOCK_HOST_TSC)
case KVM_CAP_ADJUST_CLOCK:
- r = KVM_CLOCK_TSC_STABLE;
+ r = KVM_CLOCK_VALID_FLAGS;
```
kvm_has_adjust_clock_stable needs to handle additional flags,
so that s->clock_is_reliable can be true and kvmclock_current_nsec doesn't need to be called.
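A sketch of the adjusted check; the function shape is illustrative, but the flag name comes from the kernel hunk quoted above:
```
#include <linux/kvm.h>

/* With KVM_CAP_ADJUST_CLOCK now returning a set of flags rather than
 * exactly KVM_CLOCK_TSC_STABLE, test for the bit instead of comparing
 * the whole return value. */
static int adjust_clock_is_stable(int cap_ret)
{
    return (cap_ret & KVM_CLOCK_TSC_STABLE) != 0;
}
```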
Signed-off-by: Ray Zhang <zhanglei002@gmail.com>
Message-Id: <20220922100523.2362205-1-zhanglei002@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The store width is needed for packet commit, so it is stored in
ctx->store_width. Currently, it is set only when a store has a TCG
override instead of a QEMU helper. In the QEMU helper case,
ctx->store_width is not set, so we invoke a helper during packet commit
that uses the runtime store width.
This patch ensures ctx->store_width is set for all store instructions,
which improves performance because packet commit can generate the proper
TCG store rather than calling the generic helper.
We do this by
- Using the attributes from the instructions during translation to
  set ctx->store_width
- Removing the setting of ctx->store_width from genptr.c
Signed-off-by: Taylor Simpson <tsimpson@quicinc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220920080746.26791-3-tsimpson@quicinc.com>
We have found cases where pkt_has_store_s[01] is set incorrectly.
This leads to generating an unnecessary store that is left over
from a previous packet.
- Add an attribute to determine if an instruction is a scalar store;
  the attribute is attached to the fSTORE macro (hex_common.py)
- Update the logic in decode.c that sets pkt_has_store_s[01]
Signed-off-by: Taylor Simpson <tsimpson@quicinc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220920080746.26791-4-tsimpson@quicinc.com>
The imported files from the architecture library have added some
instruction attributes. Some of these will be used in a subsequent
patch for determining the size of a store.
Signed-off-by: Taylor Simpson <tsimpson@quicinc.com>
Acked-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220920080746.26791-2-tsimpson@quicinc.com>
SP_EL1 must be kept when EL3 is present but EL2 is not. Therefore mark
it with ARM_CP_EL3_NO_EL2_KEEP.
Cc: qemu-stable@nongnu.org
Fixes: 696ba37718 ("target/arm: Handle cpreg registration for missing EL")
Signed-off-by: Jerome Forissier <jerome.forissier@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220927120058.670901-1-jerome.forissier@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
cpu64.c has ended up in a slightly odd order -- it starts with the
initfns for most of the models-real-hardware CPUs; after that comes a
bunch of support code for SVE, SME, pauth and LPA2 properties. Then
come the initfns for the 'host' and 'max' CPU types, and then after
that one more models-real-hardware CPU initfn, for a64fx. (This
ordering is partly historical and partly required because a64fx needs
the SVE properties.)
Reorder the file into:
* CPU property support functions
* initfns for real hardware CPUs
* initfns for host and max
* class boilerplate
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Our SDCR_VALID_MASK doesn't include all of the bits which are defined
by the current architecture. In particular in commit 0b42f4fab9 we
forgot to add SCCD, which meant that an AArch32 guest couldn't
actually use the SCCD bit to disable counting in Secure state.
Add all the currently defined bits; we don't implement all of them,
but this makes them be reads-as-written, which is architecturally
valid and matches how we currently handle most of the others in the
mask.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220923123412.1214041-4-peter.maydell@linaro.org
In commit 01765386a8 we fixed a bug where we weren't correctly
bracketing changes to some registers with pmu_op_start() and
pmu_op_finish() calls for changes which affect whether the PMU
counters might be enabled. However, we missed the case of writes to
the AArch64 MDCR_EL3 register, because (unlike its AArch32
counterpart) they are currently done directly to the CPU state struct
without going through the sdcr_write() function.
Give MDCR_EL3 a writefn which handles the PMU start/finish calls.
The SDCR writefn then simplifies to "call the MDCR_EL3 writefn after
masking off the bits which don't exist in the AArch32 register".
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220923123412.1214041-3-peter.maydell@linaro.org
In commit 01765386a8 we made some system register write functions
call pmu_op_start()/pmu_op_finish(). This means that they now touch
timers, so for icount to work these registers must have the ARM_CP_IO
flag set.
This fixes a bug where when icount is enabled a guest that touches
MDCR_EL3, MDCR_EL2, PMCNTENSET_EL0 or PMCNTENCLR_EL0 would cause
QEMU to print an error message and exit, for example:
[ 2.495971] TCP: Hash tables configured (established 1024 bind 1024)
[ 2.496213] UDP hash table entries: 256 (order: 1, 8192 bytes)
[ 2.496386] UDP-Lite hash table entries: 256 (order: 1, 8192 bytes)
[ 2.496917] NET: Registered protocol family 1
qemu-system-aarch64: Bad icount read
Reported-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220923123412.1214041-2-peter.maydell@linaro.org
Include the IIR register (which holds the opcode of the failing
instruction) when dumping the hppa registers.
Signed-off-by: Helge Deller <deller@gmx.de>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220918194555.83535-7-deller@gmx.de>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Starting with RVV1.0, the original vf[w]redsum_vs instruction was renamed
to vf[w]redusum_vs. The distinction between ordered and unordered is also
more consistent with other instructions, although there is no difference
in implementation between the two for QEMU.
Signed-off-by: Yang Liu <liuyang22@iscas.ac.cn>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Message-Id: <20220817074802.20765-2-liuyang22@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Remove duplicate code by wrapping vfwredsum_vs's OP function.
Signed-off-by: Yang Liu <liuyang22@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Message-Id: <20220817074802.20765-1-liuyang22@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Type 6 trigger is similar to a type 2 trigger, but provides additional
functionality and should be used instead of type 2 in newer
implementations.
Signed-off-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Signed-off-by: Bin Meng <bmeng.cn@gmail.com>
Message-Id: <20220909134215.1843865-9-bmeng.cn@gmail.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Type 2 trigger cannot be fired in VU/VS modes.
Signed-off-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Signed-off-by: Bin Meng <bmeng.cn@gmail.com>
Message-Id: <20220909134215.1843865-8-bmeng.cn@gmail.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Trigger actions are shared among all triggers. Extract to a common
function.
Signed-off-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Signed-off-by: Bin Meng <bmeng.cn@gmail.com>
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
[bmeng: handle the DBG_ACTION_NONE case]
Signed-off-by: Bin Meng <bmeng.cn@gmail.com>
Message-Id: <20220909134215.1843865-7-bmeng.cn@gmail.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
tinfo.info:
One bit for each possible type enumerated in tdata1.
If the bit is set, then that type is supported by the currently
selected trigger.
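For example, a sketch of how such a bitmask is formed; trigger types 2 and 6 are the ones discussed in this series, and the helper name is illustrative:
```
#include <stdint.h>

/* One bit per tdata1 type supported by the selected trigger: a trigger
 * implementing type 2 and type 6 would report (1 << 2) | (1 << 6) = 0x44. */
static uint32_t tinfo_info_sketch(void)
{
    return (1u << 2) | (1u << 6);
}
```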
Signed-off-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Signed-off-by: Bin Meng <bmeng.cn@gmail.com>
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Message-Id: <20220909134215.1843865-6-bmeng.cn@gmail.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The value written to the tselect CSR should be limited to the number of
supported triggers.
Signed-off-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Signed-off-by: Bin Meng <bmeng.cn@gmail.com>
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Message-Id: <20220909134215.1843865-5-bmeng.cn@gmail.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Replace type2_trigger_t with the real tdata1, tdata2, and tdata3 CSRs,
which allows us to support more types of triggers in the future.
Signed-off-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Signed-off-by: Bin Meng <bmeng.cn@gmail.com>
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Message-Id: <20220909134215.1843865-4-bmeng.cn@gmail.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Introduce build_tdata1() to build tdata1 register content, which can be
shared among all types of triggers.
Signed-off-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Signed-off-by: Bin Meng <bmeng.cn@gmail.com>
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
[bmeng: moved RV{32,64}_DATA_MASK definition to this patch]
Signed-off-by: Bin Meng <bmeng.cn@gmail.com>
Message-Id: <20220909134215.1843865-3-bmeng.cn@gmail.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Current RISC-V debug assumes that only type 2 trigger is supported.
To allow more types of triggers to be supported in the future
(e.g. type 6 trigger, which is similar to type 2 trigger with additional
functionality), we should determine the trigger type from tdata1.type.
RV_MAX_TRIGGERS is also introduced in replacement of TRIGGER_TYPE2_NUM.
Signed-off-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Signed-off-by: Bin Meng <bmeng.cn@gmail.com>
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
[bmeng: fixed MXL_RV128 case, and moved macros to the following patch]
Signed-off-by: Bin Meng <bmeng.cn@gmail.com>
Message-Id: <20220909134215.1843865-2-bmeng.cn@gmail.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Now that M68K_FEATURE_M68000 has been renamed to M68K_FEATURE_M68K it is easier
to see that the privilege exception check is wrong: it is currently only generated
for ColdFire CPUs when in fact it should also be generated for Motorola CPUs from
the 68010 onwards.
Introduce a new M68K_FEATURE_MOVEFROMSR_PRIV feature which is set for all
non-Motorola CPUs, and for all Motorola CPUs from the 68010 onwards, and use
it to determine whether a privilege exception should be generated for the
MOVE-from-SR instruction.
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220925134804.139706-3-mark.cave-ayland@ilande.co.uk>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
There are already 32 feature bits in use, so change the size of the m68k
CPU features to uint64_t (along with the associated m68k_feature()
functions) to allow up to 64 feature bits to be used.
At the same time make use of the BIT_ULL() macro when reading/writing
the CPU feature bits to improve readability, and also update m68k_feature()
to return a bool rather than an int.
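A sketch of the accessor after the change; the CPU state struct is reduced to the one field that matters here, and BIT_ULL(n) is the usual (1ULL << n) helper:
```
#include <stdbool.h>
#include <stdint.h>

#define BIT_ULL(nr) (1ULL << (nr))

struct m68k_env_sketch {
    uint64_t features;   /* was 32-bit: only 32 feature bits fit */
};

static bool m68k_feature_sketch(const struct m68k_env_sketch *env,
                                unsigned int feature)
{
    return (env->features & BIT_ULL(feature)) != 0;
}
```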
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220925134804.139706-2-mark.cave-ayland@ilande.co.uk>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
After RISCVException enum is introduced, riscv_csrrw_debug() returns
RISCV_EXCP_NONE to indicate there's no error. The RISC-V vector GDB stub
should check the result against RISCV_EXCP_NONE instead of the value 0.
Otherwise, an 'E14' packet would be incorrectly reported for vector CSRs
when using "info reg vector" GDB command.
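Illustrative before/after of that check (surrounding variable names are assumed):
```
/* Before: success was tested against the literal 0, which does not match
 * the enum value returned on success. */
if (riscv_csrrw_debug(env, csr_num, &val, 0, 0) != 0) {
    return 0;   /* wrongly reported as an error ('E14') to GDB */
}

/* After: compare against the enum's "no exception" value. */
if (riscv_csrrw_debug(env, csr_num, &val, 0, 0) != RISCV_EXCP_NONE) {
    return 0;
}
```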
Signed-off-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Jim Shu <jim.shu@sifive.com>
Reviewed-by: Tommy Wu <tommy.wu@sifive.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Message-Id: <20220918083245.13028-1-frank.chang@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Instead of using our properties to set a config value which then might
be used to set the resetvec (depending on your timing), let's instead
just set the resetvec directly in the env struct.
This allows us to set the reset vec from the command line with:
-global driver=riscv.hart_array,property=resetvec,value=0x20000400
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220914101108.82571-2-alistair.francis@wdc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
While testing some changes to GDB's handling for the RISC-V registers
fcsr, fflags, and frm, I spotted that QEMU includes these registers
twice in the target description it sends to GDB, once in the fpu
feature, and once in the csr feature.
Right now things basically work OK, QEMU maps these registers onto two
different register numbers, e.g. fcsr maps to both 68 and 73, and GDB
can use either of these to access the register.
However, GDB's target descriptions don't really work this way, each
register should appear just once in a target description, mapping the
register name onto the number GDB should use when accessing the
register on the target. Duplicate register names actually result in
duplicate registers on the GDB side, however, as the registers have
the same name, the user can only access one of these registers.
Currently GDB has a hack in place, specifically for RISC-V, to spot
the duplicate copies of these three registers, and hide them from the
user, ensuring the user only ever sees a single copy of each.
In this commit I propose fixing this issue on the QEMU side, and in
the process, simplify the fpu register handling a little.
I think we should remove fflags, frm, and fcsr from the two (32-bit
and 64-bit) fpu feature xml files. These files will only contain the
32 core floating point register f0 to f31. The fflags, frm, and fcsr
registers will continue to be advertised in the csr feature as they
currently are.
With that change made, I will simplify riscv_gdb_get_fpu and
riscv_gdb_set_fpu, removing the extra handling for the 3 status
registers.
Signed-off-by: Andrew Burgess <aburgess@redhat.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <0fbf2a5b12e3210ff3867d5cf7022b3f3462c9c8.1661934573.git.aburgess@redhat.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
- modify the check for mcounteren to work in all lower-privileged modes
- modify the check for scounteren to work only when S mode is enabled
- distinguish the exception type raised by the check for scounteren between U
and VU mode
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20220817083756.12471-1-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The sideleg and sedeleg CSRs are no longer part of the RISC-V ISA
specification; they were part of the N extension, which has been removed
from the specification.
These commits removed all traces of these CSRs from the RISC-V spec
(https://github.com/riscv/riscv-isa-manual):
commit f8d27f805b65 ("Remove or downgrade more references to N extension (#674)")
commit b6cade07034d ("Remove N extension chapter for now")
Signed-off-by: Rahul Pathak <rpathak@ventanamicro.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20220824145255.400040-1-rpathak@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
If the ZPCI_OP ioctl reports that it is available and usable, then the
underlying KVM host will enable load/store intepretation for any guest
device without a SHM bit in the guest function handle. For a device that
will be using interpretation support, ensure the guest function handle
matches the host function handle; this value is re-checked every time the
guest issues a SET PCI FN to enable the guest device as it is the only
opportunity to reflect function handle changes.
By default, unless interpret=off is specified, interpretation support will
always be assumed and exploited if the necessary ioctl and features are
available on the host kernel. When these are unavailable, we will silently
revert to the interception model; this allows existing guest configurations
to work unmodified on hosts with and without zPCI interpretation support,
allowing QEMU to choose the best support model available.
Signed-off-by: Matthew Rosato <mjrosato@linux.ibm.com>
Acked-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20220902172737.170349-4-mjrosato@linux.ibm.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
In order for hosts running inside of TCG to initialize the kernel's
random number generator, we should support the PRNO_TRNG instruction,
backed in the usual way with the qemu_guest_getrandom helper. This is
confirmed working on Linux 5.19.
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Message-Id: <20220921100729.2942008-2-Jason@zx2c4.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
[thuth: turn prno-trng off in avocado test to avoid breaking it]
Signed-off-by: Thomas Huth <thuth@redhat.com>
In order to fully support MSA_EXT_5, we have to support the SHA-512
special instructions. So implement those.
The implementation began as something TweetNacl-like, and then was
adjusted to be useful here. It's not very beautiful, but it is quite
short and compact, which is what we're going for.
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
[ restructure, add missing exception, add comments, fixup CPU model ]
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20220922153820.221811-1-david@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
remove unused encodings
add fmin/fmax tests for signed zero
Merge tag 'pull-hex-20220919' of https://github.com/quic/qemu into staging
Hexagon update
remove unused encodings
add fmin/fmax tests for signed zero
# -----BEGIN PGP SIGNATURE-----
#
# iQEzBAABCgAdFiEENjXHiM5iuR/UxZq0ewJE+xLeRCIFAmMou7IACgkQewJE+xLe
# RCIYbQgAgjFujecgbbCJfBPVMmpTXNOgk+Jt3w+jfg7/WJRZuhxAU3xB2qpismUH
# 5MntMlFHAGOjlPXfg6U5AZFSw3RhlanH/RChHpVKuL6peOXFImIfEqdyVXHXfCuu
# FlpQFGwJ3Rs50UJhd7lVdlx0I7lup4E4X77hFvFcZQP6aNrt6Ic1Zq5eXhEq9k2A
# NnXol1R416JRT/senujYVvcTpgYVHlQCS+4dJEzKUqvFlTdo7lnAbPdjO8MPrz7B
# 0NgPUGjGZJ70Dcqvd1n8HePIU1YyKTlHJNaWyTlAmw4MECyHyAJnd64jEMNECDb5
# 0BrpHcY1HCt1Rh4QratemTfJglAJlA==
# =UUyr
# -----END PGP SIGNATURE-----
# gpg: Signature made Mon 19 Sep 2022 14:57:54 EDT
# gpg: using RSA key 3635C788CE62B91FD4C59AB47B0244FB12DE4422
# gpg: Good signature from "Taylor Simpson (Rock on) <tsimpson@quicinc.com>" [unknown]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 3635 C788 CE62 B91F D4C5 9AB4 7B02 44FB 12DE 4422
* tag 'pull-hex-20220919' of https://github.com/quic/qemu:
Hexagon (tests/tcg/hexagon): add fmin/fmax tests for signed zero
Hexagon (target/hexagon) remove unused encodings
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Remove the use of regime_is_secure from get_phys_addr_pmsav5.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220822152741.1617527-21-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Remove the use of regime_is_secure from get_phys_addr_pmsav7,
using the new parameter instead.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220822152741.1617527-19-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Remove the use of regime_is_secure from pmsav7_use_background_region,
using the new parameter instead.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220822152741.1617527-17-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Remove the use of regime_is_secure from get_phys_addr_pmsav8.
Since we already had a local variable named secure, use that.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220822152741.1617527-16-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Remove the use of regime_is_secure from get_phys_addr_v6,
passing the new parameter to the lookup instead.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220822152741.1617527-15-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Remove the use of regime_is_secure from get_phys_addr_v5,
passing the new parameter to the lookup instead.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
[PMM: Folded in definition of local is_secure in get_phys_addr(),
since I dropped the earlier patch that would have provided it]
Message-id: 20220822152741.1617527-14-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Remove the use of regime_is_secure from pmsav8_mpu_lookup,
passing the new parameter to the lookup instead.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220822152741.1617527-13-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Remove the use of regime_is_secure from v8m_security_lookup,
passing the new parameter to the lookup instead.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220822152741.1617527-12-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This can be made redundant with result->page_size by moving the basic
setting of page_size from get_phys_addr_pmsav8. We still need to overwrite
page_size when v8m_security_lookup signals a subpage.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220822152741.1617527-11-richard.henderson@linaro.org
[PMM: Update a comment that used to refer to is_subpage]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220822152741.1617527-10-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220822152741.1617527-9-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220822152741.1617527-8-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220822152741.1617527-7-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220822152741.1617527-6-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220822152741.1617527-5-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220822152741.1617527-4-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Combine 5 output pointer arguments from get_phys_addr
into a single struct. Adjust all callers.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220822152741.1617527-2-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
When requested, the alignment for VLD4.32 is 8 and not 16.
See ARM documentation about VLD4 encoding:
ebytes = 1 << UInt(size);
if size == '10' then
    alignment = if a == '0' then 1 else 8;
else
    alignment = if a == '0' then 1 else 4*ebytes;
Signed-off-by: Clément Chigot <chigot@adacore.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220914105058.2787404-1-chigot@adacore.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Merge tag 'pull-ppc-20220920' of https://gitlab.com/danielhb/qemu into staging
ppc patch queue for 2022-09-20:
This queue contains an implementation of PowerISA 3.1B hash insns, ppc
TCG insns cleanups and fixes, and miscellaneous fixes in the spapr and
pnv_phb models.
* tag 'pull-ppc-20220920' of https://gitlab.com/danielhb/qemu:
hw/ppc/spapr: Fix code style problems reported by checkpatch
hw/pci-host: pnv_phb{3, 4}: Fix heap out-of-bound access failure
hw/ppc: spapr: Use qemu_vfree() to free spapr->htab
target/ppc: Clear fpstatus flags on helpers missing it
target/ppc: Zero second doubleword of VSR registers for FPR insns
target/ppc: Set OV32 when OV is set
target/ppc: Zero second doubleword for VSX madd instructions
target/ppc: Set result to QNaN for DENBCD when VXCVI occurs
target/ppc: Zero second doubleword in DFP instructions
target/ppc: Remove unused xer_* macros
target/ppc: Remove extra space from s128 field in ppc_vsr_t
target/ppc: Merge fsqrt and fsqrts helpers
target/ppc: Move fsqrts to decodetree
target/ppc: Move fsqrt to decodetree
target/ppc: Implement hashstp and hashchkp
target/ppc: Implement hashst and hashchk
target/ppc: Add HASHKEYR and HASHPKEYR SPRs
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Any write to SR can change the security state so always call gen_exit_tb() when
this occurs. In particular MacOS makes use of andiw/oriw in a few places to
handle the switch between user and supervisor mode.
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220917112515.83905-5-mark.cave-ayland@ilande.co.uk>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
The M68K_FEATURE_M68000 feature is misleading in that its name suggests the feature
is defined just for Motorola 68000 CPUs, whilst in fact it is defined for all
Motorola 680X0 CPUs.
In order to avoid confusion with the other M68K_FEATURE_M680X0 constants which
define the features available for specific Motorola CPU models, rename
M68K_FEATURE_M68000 to M68K_FEATURE_M68K and add comments to clarify its usage.
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220917112515.83905-2-mark.cave-ayland@ilande.co.uk>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Writes to SR may change security state, which may involve
a swap of %ssp with %usp as reflected in %a7. Finish the
writeback of %sp@+ before swapping stack pointers.
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1206
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Message-Id: <20220913142818.7802-3-richard.henderson@linaro.org>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
First, we were writing to the entire SR register, instead
of only the flags portion. Second, we were not clearing C
as per the documentation (X was cleared via the 0xf mask).
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Message-Id: <20220913142818.7802-2-richard.henderson@linaro.org>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
This is slightly more complicated than cas,
because tas is allowed on data registers.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Message-Id: <20220829051746.227094-1-richard.henderson@linaro.org>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
In ppc emulation, exception flags are not cleared at the end of an
instruction. Instead, the next instruction is responsible for clearing
them before its emulation. However, some helpers are not doing that,
causing an issue where the previously set exception flags are being
used and leading to incorrect values being set in FPSCR.
Fix this by clearing fp_status before doing the instruction's 'real' work
for the following helpers that were missing this behavior (a sketch of the
pattern follows the list below):
- VSX_CVT_INT_TO_FP_VECTOR
- VSX_CVT_FP_TO_FP
- VSX_CVT_FP_TO_INT_VECTOR
- VSX_CVT_FP_TO_INT2
- VSX_CVT_FP_TO_INT
- VSX_CVT_FP_TO_FP_HP
- VSX_CVT_FP_TO_FP_VECTOR
- VSX_CMP
- VSX_ROUND
- xscvqpdp
- xscvdpsp[n]
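As a rough sketch of the pattern, using the existing helper_reset_fpstatus()
(the surrounding helper is purely illustrative, not the actual code):
    void helper_some_vsx_cvt(CPUPPCState *env, ppc_vsr_t *xt, ppc_vsr_t *xb)
    {
        helper_reset_fpstatus(env);   /* clear flags left over from the previous insn */
        /* ... then do the instruction's 'real' work ... */
    }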
Signed-off-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20220906125523.38765-9-victor.colombo@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
FPR registers are mapped to the first doubleword of the VSR registers.
Since PowerISA v3.1, the second doubleword of the target register
must be zeroed for FP instructions.
This patch does it by writing 0 to the second doubleword every time the
first doubleword is written using set_fpr.
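A minimal sketch of the idea in set_fpr (the offset helpers are the ones the
ppc translator already uses; treat the exact names as assumptions):
    static inline void set_fpr(int regno, TCGv_i64 src)
    {
        tcg_gen_st_i64(src, cpu_env, fpr_offset(regno));
        /* PowerISA v3.1: doubleword 1 of the corresponding VSR must read as zero */
        tcg_gen_st_i64(tcg_constant_i64(0), cpu_env, vsr64_offset(regno, false));
    }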
Signed-off-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20220906125523.38765-8-victor.colombo@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
According to PowerISA: "OV32 is set whenever OV is implicitly set, and
is set to the same value that OV is defined to be set to in 32-bit
mode".
This patch changes helper_update_ov_legacy to set/clear ov32 when
applicable.
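A minimal sketch of the intended behaviour, using the SO/OV/OV32 fields of
CPUPPCState (details illustrative):
    static inline void helper_update_ov_legacy(CPUPPCState *env, int ov)
    {
        if (ov) {
            env->so = env->ov = env->ov32 = 1;   /* OV32 follows OV when it is set */
        } else {
            env->ov = env->ov32 = 0;             /* and when it is cleared */
        }
    }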
Signed-off-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20220906125523.38765-7-victor.colombo@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
In 205eb5a89e we updated most VSX instructions to zero the
second doubleword, as is requested by PowerISA since v3.1.
However, VSX_MADD helper was left behind unchanged, while it
is also affected and should be fixed as well.
This patch applies the fix for MADD instructions.
Fixes: 205eb5a89e ("target/ppc: Change VSX instructions behavior to fill with zeros")
Signed-off-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20220906125523.38765-6-victor.colombo@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
According to the ISA, for instruction DENBCD:
"If an invalid BCD digit or sign code is detected in the source
operand, an invalid-operation exception (VXCVI) occurs."
In the Invalid Operation Exception section, there is the situation:
"When Invalid Operation Exception is disabled (VE=0) and Invalid
Operation occurs (...) If the operation is an (...) or format the
target FPR is set to a Quiet NaN". This was not being done in
QEMU.
This patch sets the result to QNaN when the instruction DENBCD causes
an Invalid Operation Exception.
Signed-off-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20220906125523.38765-5-victor.colombo@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Starting with PowerISA v3.1, the second doubleword of the registers
used to store results in DFP instructions is supposed to be zeroed.
From the ISA, chapter 7.2.1.1 Floating-Point Registers:
"""
Chapter 4. Floating-Point Facility provides 32 64-bit
FPRs. Chapter 5. Decimal Floating-Point also employs
FPRs in decimal floating-point (DFP) operations. When
VSX is implemented, the 32 FPRs are mapped to
doubleword 0 of VSRs 0-31. (...)
All instructions that operate on an FPR are redefined
to operate on doubleword element 0 of the
corresponding VSR. (...)
and the contents of doubleword element 1 of the
VSR corresponding to the target FPR or FPR pair for these
instructions are set to 0.
"""
Before, the result stored at doubleword 1 was said to be undefined.
With that, this patch changes the DFP facility to zero doubleword 1
when using set_dfp64 and set_dfp128. This fixes the behavior for ISA
3.1 while keeping the behavior correct for previous ones.
Signed-off-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20220906125523.38765-4-victor.colombo@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
The macros xer_ov, xer_ca, xer_ov32, and xer_ca32 are all unused and
hide the usage of env. Remove them.
Signed-off-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20220906125523.38765-3-victor.colombo@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Very trivial rogue space removal. There are two spaces between Int128
and s128 in the ppc_vsr_t struct, where there should be only one.
Signed-off-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20220906125523.38765-2-victor.colombo@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
These two helpers are almost identical, differing only in the softfloat
operation they call. Merge them into one using a macro.
Also, take this opportunity to capitalize the helper name as we moved
the instruction to decodetree in a previous patch.
Signed-off-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220905123746.54659-4-victor.colombo@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Implementation for instructions hashstp and hashchkp, the privileged
versions of hashst and hashchk, which were added in Power ISA 3.1B.
Signed-off-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Reviewed-by: Lucas Mateus Castro <lucas.araujo@eldorado.org.br>
Message-Id: <20220715205439.161110-4-victor.colombo@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Implementation for instructions hashst and hashchk, which were added
in Power ISA 3.1B.
It was decided to implement the hash algorithm from the ground up in this
patch, exactly as described in the Power ISA.
Signed-off-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Reviewed-by: Lucas Mateus Castro <lucas.araujo@eldorado.org.br>
Message-Id: <20220715205439.161110-3-victor.colombo@eldorado.org.br>
[danielhb: fix block comment in excp_helper.c]
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Add the Special Purpose Registers HASHKEYR and HASHPKEYR, which were
introduced by the Power ISA 3.1B. They are used by the new instructions
hashchk(p) and hashst(p).
The ISA states that the Operating System should generate the value for
these registers when creating a process, so it is its responsibility to
do so. We initialize them with 0 for qemu-softmmu, and set a random
64-bit value for linux-user.
Signed-off-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Reviewed-by: Lucas Mateus Castro <lucas.araujo@eldorado.org.br>
Message-Id: <20220715205439.161110-2-victor.colombo@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Remove encodings guarded by an ifdef that is not defined
Signed-off-by: Taylor Simpson <tsimpson@quicinc.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220606222327.7682-4-tsimpson@quicinc.com>
The "O" operand type in the Intel SDM needs to load an 8- to 64-bit
unsigned value, while insn_get is limited to 32 bits. Extract the code
out of disas_insn and into a separate function.
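One possible shape for the extracted function (the name insn_get_addr and its
exact signature are assumptions; the x86_ld*_code fetchers are the existing
translator helpers):
    static uint64_t insn_get_addr(CPUX86State *env, DisasContext *s, MemOp ot)
    {
        switch (ot) {
        case MO_8:  return x86_ldub_code(env, s);
        case MO_16: return x86_lduw_code(env, s);
        case MO_32: return x86_ldl_code(env, s);
        case MO_64: return x86_ldq_code(env, s);
        default:    g_assert_not_reached();
        }
    }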
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The later prefix wins if both are present; make it show in s->prefix too.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
INSERTQ is defined to not modify any bits in the lower 64 bits of the
destination, other than the ones being replaced with bits from the
source operand. QEMU instead is using unshifted bits from the source
for those bits.
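The intended semantics can be sketched as a masked insert (variable names are
illustrative):
    /* replace 'length' bits of dest starting at 'index' with the low bits of src */
    uint64_t mask = (length == 64) ? ~0ULL : (1ULL << length) - 1;
    dest = (dest & ~(mask << index)) | ((src & mask) << index);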
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The SSE4a instructions EXTRQ and INSERTQ have two bit-index operands, which can
be immediates or taken from an XMM register. In both cases, the fields are
6 bits wide and the top two bits in the byte are ignored. translate.c is
doing that correctly for the immediate case, but not for the XMM case, so
fix it.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Many instructions which load/store 128-bit values are supposed to
raise #GP when the memory operand isn't 16-byte aligned. This includes:
- Instructions explicitly requiring memory alignment (Exceptions Type 1
in the "AVX and SSE Instruction Exception Specification" section of
the SDM)
- Legacy SSE instructions that load/store 128-bit values (Exceptions
Types 2 and 4).
This change sets MO_ALIGN_16 on 128-bit memory accesses that require
16-byte alignment. It adds cpu_record_sigbus and cpu_do_unaligned_access
hooks that simulate a #GP exception in qemu-user and qemu-system,
respectively.
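On the translator side this can be sketched as adding the alignment flag to the
MemOp of the 128-bit access (t0 and the surrounding load are illustrative):
    /* request 16-byte alignment so a misaligned operand faults instead of succeeding */
    tcg_gen_qemu_ld_i64(t0, s->A0, s->mem_index, MO_LEUQ | MO_ALIGN_16);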
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/217
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Ricky Zhou <ricky@rzhou.org>
Message-Id: <20220830034816.57091-2-ricky@rzhou.org>
[Do not bother checking PREFIX_VEX, since AVX is not supported. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Update the ID registers for TCG's '-cpu max' to report a FEAT_PMUv3p5
compliant PMU.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220822132358.3524971-11-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
With FEAT_PMUv3p5, the event counters are now 64 bit, rather than 32
bit. (Previously, only the cycle counter could be 64 bit, and other
event counters were always 32 bits). For any given event counter,
whether the overflow event is noted for overflow from bit 31 or from
bit 63 is controlled by a combination of PMCR.LP, MDCR_EL2.HLP and
MDCR_EL2.HPMN.
Implement the 64-bit event counter handling. We choose to make our
counters always 64 bits, and mask out the top 32 bits on read or
write of PMXEVCNTR for CPUs which don't have FEAT_PMUv3p5.
(Note that the changes to pmenvcntr_op_start() and
pmenvcntr_op_finish() bring their logic closer into line with that of
pmccntr_op_start() and pmccntr_op_finish(), which already had to cope
with the overflow being either at 32 or 64 bits.)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220822132358.3524971-10-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
FEAT_PMUv3p5 introduces new bits which disable the cycle
counter from counting:
* MDCR_EL2.HCCD disables the counter when in EL2
* MDCR_EL3.SCCD disables the counter when Secure
Add the code to support these bits.
(Note that there is a third documented counter-disable
bit, MDCR_EL3.MCCD, which disables the counter when in
EL3. This is not present until FEAT_PMUv3p7, so is
out of scope for now.)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220822132358.3524971-9-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Our feature test functions that check the PMU version are named
isar_feature_{aa32,aa64,any}_pmu_8_{1,4}. This doesn't match the
current Arm ARM official feature names, which are FEAT_PMUv3p1 and
FEAT_PMUv3p4. Rename these functions to _pmuv3p1 and _pmuv3p4.
This commit was created with:
sed -i -e 's/pmu_8_/pmuv3p/g' target/arm/*.[ch]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220822132358.3524971-8-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
In pmccntr_op_finish() and pmevcntr_op_finish() we calculate the next
point at which we will get an overflow and need to fire the PMU
interrupt or set the overflow flag. We do this by calculating the
number of nanoseconds to the overflow event and then adding it to
qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL). However, we don't check
whether that signed addition overflows, which can happen if the next
PMU interrupt would happen massively far in the future (250 years or
more).
Since QEMU assumes that "when the QEMU_CLOCK_VIRTUAL rolls over" is
"never", the sensible behaviour in this situation is simply to not
try to set the timer if it would be beyond that point. Detect the
overflow, and skip setting the timer in that case.
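The fix can be sketched with the 64-bit signed-add overflow helper from
qemu/host-utils.h (variable names are illustrative):
    int64_t now = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
    int64_t overflow_at;
    if (!sadd64_overflow(now, overflow_in_ns, &overflow_at)) {
        timer_mod_anticipate_ns(cpu->pmu_timer, overflow_at);
    }
    /* if the addition overflows, the event is effectively "never": leave the timer unset */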
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220822132358.3524971-7-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The logic in pmu_counter_enabled() for handling the 'prohibit event
counting' bits MDCR_EL2.HPMD and MDCR_EL3.SPME is written in a way
that assumes that EL2 is never Secure. This used to be true, but the
architecture now permits Secure EL2, and QEMU can emulate this.
Refactor the prohibit logic so that we effectively OR together
the various prohibit bits when they apply, rather than trying to
construct an if-else ladder where any particular state of the CPU
ends up in exactly one branch of the ladder.
This fixes the Secure EL2 case and also is a better structure for
adding the PMUv8.5 bits MDCR_EL2.HCCD and MDCR_EL3.SCCD.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220822132358.3524971-6-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The architecture requires that if PMCR.LC is set (for a 64-bit cycle
counter) then PMCR.D (which enables the clock divider so the counter
ticks every 64 cycles rather than every cycle) should be ignored. We
were always honouring PMCR.D; fix the bug so we correctly ignore it
in this situation.
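A sketch of the corrected behaviour where the cycle count is computed (PMCRD
and PMCRLC are the existing bit masks in target/arm; the exact placement is
illustrative):
    /* sketch: PMCR.D (divide-by-64) must be ignored whenever PMCR.LC is set */
    if ((env->cp15.c9_pmcr & PMCRD) && !(env->cp15.c9_pmcr & PMCRLC)) {
        cycles /= 64;
    }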
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220822132358.3524971-5-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The PMU cycle and event counter infrastructure design requires that
operations on the PMU register fields are wrapped in pmu_op_start()
and pmu_op_finish() calls (or their more specific pmccntr and
pmevcntr equivalents). This includes any changes to registers which
affect whether the counter should be enabled or disabled, but we
forgot to do this.
The effect of this bug is that in sequences like:
* disable the cycle counter (PMCCNTR) using the PMCNTEN register
* write a value such as 0xfffff000 to the PMCCNTR
* restart the counter by writing to PMCNTEN
the value written to the cycle counter is corrupted, and it starts
counting from the wrong place. (Essentially, we fail to record that
the QEMU_CLOCK_VIRTUAL timestamp when the counter should be considered
to have started counting is the point when PMCNTEN is written to enable
the counter.)
Add the necessary bracketing calls, so that updates to the various
registers which affect whether the PMU is counting are handled
correctly.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220822132358.3524971-4-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
pmu_counter_mask() accidentally returns a value with bits [63:32]
set, because the expression it returns is evaluated as a signed value
that gets sign-extended to 64 bits. Force the whole expression to be
evaluated with 64-bit arithmetic with ULL suffixes.
The main effect of this bug was that a guest could write to the bits
in the high half of registers like PMCNTENSET_EL0 that are supposed
to be RES0.
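A sketch of the fixed mask computation (pmu_num_counters() is the existing
helper returning PMCR.N; treat the code as illustrative):
    static inline uint64_t pmu_counter_mask(CPUARMState *env)
    {
        /* bit 31 = cycle counter, bits [N-1:0] = event counters; keep [63:32] clear */
        return (1ULL << 31) | ((1ULL << pmu_num_counters(env)) - 1);
    }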
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220822132358.3524971-3-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
When the cycle counter overflows, we are intended to set bit 31 in PMOVSR
to indicate this. However a missing ULL suffix means that we end up
setting all of bits 63-31. Fix the bug.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220822132358.3524971-2-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Fix a missing space before a comment terminator.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220819110052.2942289-7-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The architectural feature FEAT_ETS (Enhanced Translation
Synchronization) is a set of tightened guarantees about memory
ordering involving translation table walks:
* if memory access RW1 is ordered-before memory access RW2 then it
is also ordered-before any translation table walk generated by RW2
that generates a translation fault, address size fault or access
fault
* TLB maintenance on non-exec-permission translations is guaranteed
complete after a DSB (ie it does not need the context
synchronization event that you have to have if you don’t have
FEAT_ETS)
For QEMU’s implementation we don’t reorder translation table walk
accesses, and we guarantee to finish the TLB maintenance as soon as
the TLB op is done (the tlb_flush functions will complete at the end
of the TB, and TLB ops always end the TB because they're sysreg
writes).
So we’re already compliant and all we need to do is say so in the ID
registers for the 'max' CPU.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220819110052.2942289-6-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
In Armv8.6, a new AArch32 ID register ID_DFR1 is defined; implement
it. We don't have any CPUs with features that they need to advertise
here yet, but plumbing in the ID register gives it the right name
when debugging and will help in future when we do add a CPU that
has non-zero ID_DFR1 fields.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220819110052.2942289-5-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
In Armv8.6 a new AArch32 ID register ID_MMFR5 is defined.
Implement this; we want to be able to use it to report to
the guest that we implement FEAT_ETS.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220819110052.2942289-4-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The code that reads the AArch32 ID registers from KVM in
kvm_arm_get_host_cpu_features() does so almost but not quite in
encoding order. Move the read of ID_PFR2 down so it's really in
encoding order.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220819110052.2942289-3-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
In the AArch32 ID register scheme, coprocessor registers with
encoding cp15, 0, c0, c{0-7}, {0-7} are all in the space covered by
what in v6 and v7 was called the "CPUID scheme", and are supposed to
RAZ if they're not allocated to a specific ID register. For our
pre-v8 CPUs we get this right, because the regdefs in
id_pre_v8_midr_cp_reginfo[] cover these RAZ requirements. However
for v8 we failed to put in the necessary patterns to cover this, so
we end up UNDEFing on everything we didn't have an ID register for.
This is a problem because in Armv8 some encodings in 0, c0, c3, {0-7}
are now being used for new ID registers, and guests might thus start
trying to read them. (We already have one of these: ID_PFR2.)
For v8 CPUs, we already have regdefs for 0, c0, c{0-2}, {0-7} (that
is, the space is completely allocated with no reserved spaces). Add
entries to v8_idregs[] covering 0, c0, c3, {0-7}:
* c3, {0-2} is the reserved AArch32 space corresponding to the
AArch64 MVFR[012]_EL1
* c3, {3,5,6,7} are reserved RAZ for both AArch32 and AArch64
(in fact some of these are given defined meanings in Armv8.6,
but we don't implement them yet)
* c3, 4 is ID_PFR2 (already defined)
We then programmatically add RAZ patterns for AArch32 for
0, c0, c{4..15}, {0-7}:
* c4-c7 are unused, and not shared with AArch64 (these
are the encodings corresponding to where the AArch64
specific ID registers live in the system register space)
* c8-c15 weren't required to RAZ in v6/v7, but v8 extends
the AArch32 reserved-should-RAZ space to cover these;
the equivalent area of the AArch64 sysreg space is not
defined as must-RAZ
Note that the architecture allows some registers in this space
to return an UNKNOWN value; we always return 0.
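As a sketch, one of the reserved-RAZ entries added for this space would look
roughly like this (the name and encoding are shown only for illustration):
    { .name = "RES_0_C0_C3_3", .cp = 15, .opc1 = 0, .crn = 0, .crm = 3, .opc2 = 3,
      .access = PL1_R, .type = ARM_CP_CONST, .resetvalue = 0 },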
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220819110052.2942289-2-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Add cortex A35 core and enable it for virt board.
Signed-off-by: Hao Wu <wuhaotsh@google.com>
Reviewed-by: Joe Komlodi <komlodi@google.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20220819002015.1663247-1-wuhaotsh@google.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The riscv target incorrectly enabled semihosting always, whether the
user asked for it or not. Call semihosting_enabled() passing the
correct value to the is_userspace argument, which fixes this and also
handles the userspace=on argument. Because we do this at translate
time, we no longer need to check the privilege level in
riscv_cpu_do_interrupt().
Note that this is a behaviour change: we used to default to
semihosting being enabled, and now the user must pass
"-semihosting-config enable=on" if they want it.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20220822141230.3658237-8-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Honour the commandline -semihosting-config userspace=on option,
instead of always permitting userspace semihosting calls in system
emulation mode, by passing the correct value to the is_userspace
argument of semihosting_enabled().
Note that this is a behaviour change: if the user wants to
do semihosting calls from userspace they must now specifically
enable them on the command line.
xtensa semihosting is not implemented for linux-user builds.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Max Filippov <jcmvbkbc@gmail.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220822141230.3658237-7-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Honour the commandline -semihosting-config userspace=on option,
instead of always permitting userspace semihosting calls in system
emulation mode, by passing the correct value to the is_userspace
argument of semihosting_enabled().
Note that this is a behaviour change: if the user wants to
do semihosting calls from userspace they must now specifically
enable them on the command line.
nios2 semihosting is not implemented for linux-user builds.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220822141230.3658237-6-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Honour the commandline -semihosting-config userspace=on option,
instead of always permitting userspace semihosting calls in system
emulation mode, by passing the correct value to the is_userspace
argument of semihosting_enabled().
Note that this is a behaviour change: if the user wants to
do semihosting calls from userspace they must now specifically
enable them on the command line.
MIPS semihosting is not implemented for linux-user builds.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220822141230.3658237-5-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Honour the commandline -semihosting-config userspace=on option,
instead of never permitting userspace semihosting calls in system
emulation mode, by passing the correct value to the is_userspace
argument of semihosting_enabled(), instead of manually checking and
always forbidding semihosting if the guest is in userspace.
(Note that target/m68k doesn't support semihosting at all
in the linux-user build.)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220822141230.3658237-4-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Honour the commandline -semihosting-config userspace=on option,
instead of never permitting userspace semihosting calls in system
emulation mode, by passing the correct value to the is_userspace
argument of semihosting_enabled(), instead of manually checking and
always forbidding semihosting if the guest is in userspace and this
isn't the linux-user build.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220822141230.3658237-3-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Currently our semihosting implementations generally prohibit use of
semihosting calls in system emulation from the guest userspace. This
is a very long standing behaviour justified originally "to provide
some semblance of security" (since code with access to the
semihosting ABI can do things like read and write arbitrary files on
the host system). However, it is sometimes useful to be able to run
trusted guest code which performs semihosting calls from guest
userspace, notably for test code. Add a command line suboption to
the existing semihosting-config option group so that you can
explicitly opt in to semihosting from guest userspace with
-semihosting-config userspace=on
(There is no equivalent option for the user-mode emulator, because
there by definition all code runs in userspace and has access to
semihosting already.)
This commit adds the infrastructure for the command line option and
adds a bool 'is_userspace' parameter to the function
semihosting_enabled() that target code can use to check
whether it should be permitting the semihosting call for userspace.
It mechanically makes all the callsites pass 'false', so they
continue checking "is semihosting enabled in general". Subsequent
commits will make each target that implements semihosting honour the
userspace=on option by passing the correct value and removing
whatever "don't do this for userspace" checking they were doing by
hand.
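A rough sketch of what a target call site looks like once converted (the
guest_is_in_userspace flag is a placeholder; each target computes it from its
own privilege state):
    /* enabled on the command line with: -semihosting-config enable=on,userspace=on */
    if (semihosting_enabled(guest_is_in_userspace)) {
        /* perform the semihosting call */
    }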
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220822141230.3658237-2-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The semihosting ABI used by m68k uses the GDB remote
protocol filesystem errnos.
Acked-by: Laurent Vivier <laurent@vivier.eu>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
This separates guest file descriptors from host file descriptors,
and utilizes shared infrastructure for integration with gdbstub.
Acked-by: Laurent Vivier <laurent@vivier.eu>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
This separates guest file descriptors from host file descriptors,
and utilizes shared infrastructure for integration with gdbstub.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The sscofpmf extension was ratified as a part of priv spec v1.12.
Mark the csr_ops accordingly.
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
Message-Id: <20220824221701.41932-6-atishp@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The QEMU virt machine can support a few cache events and cycle/instret counters.
It also supports counter overflow for these events.
Add a DT node so that OpenSBI and the Linux kernel are aware of the virt machine
capabilities. There are some dummy nodes added for testing as well.
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Atish Patra <atish.patra@wdc.com>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
Message-Id: <20220824221701.41932-5-atishp@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
QEMU can monitor the following cache-related PMU events through the
tlb_fill functions:
1. DTLB load/store miss
2. ITLB prefetch miss
Increment the PMU counter in the tlb_fill function.
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Tested-by: Heiko Stuebner <heiko@sntech.de>
Signed-off-by: Atish Patra <atish.patra@wdc.com>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
Message-Id: <20220824221701.41932-4-atishp@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
All the hpmcounters and the fixed counters (CY, IR, TM) can be represented
as a unified counter. Thus, the predicate function doesn't need to handle each
case separately.
Simplify the predicate function so that we just handle things differently
between RV32/RV64 and S/HS mode.
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
Message-Id: <20220824221701.41932-3-atishp@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The Sscofpmf ('Ss' for Privileged arch and Supervisor-level extensions,
and 'cofpmf' for Count OverFlow and Privilege Mode Filtering)
extension allows perf to handle overflow interrupts and provides filtering
support. This patch provides a framework for programmable
counters to leverage the extension. As the extension doesn't have any
provision for the overflow bit for fixed counters, the fixed events
can also be monitored using programmable counters. The underlying
cycle and instruction counters are always running. Thus,
a separate timer device is programmed to handle the overflow.
Tested-by: Heiko Stuebner <heiko@sntech.de>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Atish Patra <atish.patra@wdc.com>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
Message-Id: <20220824221701.41932-2-atishp@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The vstimecmp CSR allows the guest OS to program the next guest timer
interrupt directly. Thus, the hypervisor no longer needs to inject the
timer interrupt into the guest if vstimecmp is used. This was ratified
as a part of the Sstc extension.
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
Message-Id: <20220824221357.41070-4-atishp@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
stimecmp allows supervisor mode to update the stimecmp CSR directly
to program the next timer interrupt. This CSR is part of the Sstc
extension, which was ratified recently.
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
Message-Id: <20220824221357.41070-3-atishp@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Historically, mtime/mtimecmp have been part of the CPU because
they are per-hart entities. However, they actually belong to the ACLINT,
which is an MMIO device.
Move them to the ACLINT device. This also emulates the real hardware
more closely.
Reviewed-by: Anup Patel <anup@brainfault.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
Message-Id: <20220824221357.41070-2-atishp@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The arch review of the AIA spec is complete and we now have official
extension names for AIA: Smaia (M-mode AIA CSRs) and Ssaia (S-mode
AIA CSRs).
Refer to section 1.6 of the latest AIA v0.3.1 stable specification at
https://github.com/riscv/riscv-aia/releases/download/0.3.1-draft.32/riscv-interrupts-032.pdf
Based on the above, we update QEMU RISC-V to:
1) Have separate config options for the Smaia and Ssaia extensions,
which replace RISCV_FEATURE_AIA in the CPU features
2) Not generate the AIA INTC compatible string in the virt machine
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20220820042958.377018-1-apatel@ventanamicro.com
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
XVentanaCondOps is a Ventana custom extension. Add
its entry to the ISA extension array.
Signed-off-by: Rahul Pathak <rpathak@ventanamicro.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20220816045408.1231135-1-rpathak@ventanamicro.com
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Normally, riscv_csrrw_check is called when executing Zicsr instructions,
and we can only do access control for CSRs that exist. So the priority of
the CSR-related checks, from highest to lowest, should be as follows:
1) check whether Zicsr is supported: raise RISCV_EXCP_ILLEGAL_INST if not
2) check whether the CSR exists: raise RISCV_EXCP_ILLEGAL_INST if not
3) do access control: raise RISCV_EXCP_ILLEGAL_INST or RISCV_EXCP_VIRT_
INSTRUCTION_FAULT if not allowed
The predicates contain parts of the functionality of both 2) and 3), so they
need to be placed in the middle of riscv_csrrw_check. A sketch of the intended
ordering follows.
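A minimal sketch of that ordering (the config flag and csr_ops table are the
existing ones; treat the exact code as illustrative):
    RISCVCPU *cpu = env_archcpu(env);
    if (!cpu->cfg.ext_icsr) {                      /* 1) Zicsr implemented? */
        return RISCV_EXCP_ILLEGAL_INST;
    }
    if (!csr_ops[csrno].predicate) {               /* 2) does the CSR exist? */
        return RISCV_EXCP_ILLEGAL_INST;
    }
    return csr_ops[csrno].predicate(env, csrno);   /* 3) access control */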
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20220803123652.3700-1-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Added support for RISC-V PAUSE instruction from Zihintpause extension,
enabled by default.
Tested-by: Heiko Stuebner <heiko@sntech.de>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Dao Lu <daolu@rivosinc.com>
Message-Id: <20220725034728.2620750-2-daolu@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
According to the v-spec, mask agnostic behavior can either keep the
elements undisturbed or set the elements' bits to all 1s. To distinguish
between mask policies, QEMU should be able to simulate the mask
agnostic behavior as "set mask elements' bits to all 1s".
There are multiple possibilities for agnostic elements according to the
v-spec. The main intent of this patch set is to add an option that
can distinguish between mask policies. Setting agnostic elements to
all 1s allows QEMU to express this.
This commit adds the option 'rvv_ma_all_1s' to enable the behavior;
it is disabled by default.
Signed-off-by: eop Chen <eop.chen@sifive.com>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <165570784143.17634.35095816584573691-10@git.sr.ht>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
According to the v-spec, mask agnostic behavior can either keep the
elements undisturbed or set the elements' bits to all 1s. To distinguish
between mask policies, QEMU should be able to simulate the mask
agnostic behavior as "set mask elements' bits to all 1s".
There are multiple possibilities for agnostic elements according to the
v-spec. The main intent of this patch set is to add an option that
can distinguish between mask policies. Setting agnostic elements to
all 1s allows QEMU to express this.
This is the first commit regarding the optional mask agnostic
behavior. Follow-up commits will add this optional behavior
for all rvv instructions.
Signed-off-by: eop Chen <eop.chen@sifive.com>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <165570784143.17634.35095816584573691-1@git.sr.ht>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Just add 1 to the effective privilege level when in HS mode, then reuse
the check of 'effective_priv < csr_priv' in riscv_csrrw_check to replace
the privilege-level-related check in hmode. Then, hmode will only check
whether the H extension is supported (a sketch follows the list below).
When accessing hypervisor CSRs:
1) If accessing from M privilege level, the check of
'effective_priv < csr_priv' passes and returns hmode(...), which will return
RISCV_EXCP_ILLEGAL_INST when the H extension is not supported and
RISCV_EXCP_NONE otherwise.
2) If accessing from HS privilege level, effective_priv is incremented by 1,
the check passes, and hmode(...) is returned too.
3) If accessing from VS/VU privilege level, the check fails and
RISCV_EXCP_VIRT_INSTRUCTION_FAULT is returned.
4) If accessing from U privilege level, the check fails and
RISCV_EXCP_ILLEGAL_INST is returned.
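A minimal sketch of the effective-privilege adjustment (helper names are the
existing riscv ones; the exact code is illustrative):
    int effective_priv = env->priv;
    if (riscv_has_ext(env, RVH) &&
        env->priv == PRV_S && !riscv_cpu_virt_enabled(env)) {
        effective_priv++;   /* HS mode counts as one level above S */
    }
    if (effective_priv < csr_priv) {
        return riscv_cpu_virt_enabled(env) ? RISCV_EXCP_VIRT_INSTRUCTION_FAULT
                                           : RISCV_EXCP_ILLEGAL_INST;
    }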
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Message-Id: <20220718130955.11899-7-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Add a check for the implicit dependence between H and S.
CSRs that only exist in RV32 will not trigger a virtual instruction fault
when not in RV32, based on section 8.6.1 of the riscv-privileged spec
(draft-20220717).
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20220718130955.11899-6-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Add umode/umode32 predicate for mcounteren, menvcfg/menvcfgh
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Message-Id: <20220718130955.11899-5-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Fix the lines with over 80 characters.
Fix the lines which are obviously misaligned with other lines in the
same group.
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Message-Id: <20220718130955.11899-4-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Add check for "H depends on an I base integer ISA with 32 x registers"
which is stated at the beginning of chapter 8 of the riscv-privileged
spec(draft-20220717)
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Message-Id: <20220718130955.11899-3-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
There are 3 suggested privilege mode combinations listed in section 1.2
of the riscv-privileged spec (draft-20220717):
1) M; 2) M, U; 3) M, S, U
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Message-Id: <20220718130955.11899-2-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
- Zmmul is ratified and is now version 1.0
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20220710101546.3907-1-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
For rv128c shifts, a shamt of 0 is a shamt of 64, while for rv32c/rv64c
it stays 0 and is a hint instruction that does not change processor state.
For rv128c right shifts, the 6-bit shamt is in addition sign extended to
7 bits.
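The decode rules can be sketched as follows (shamt and is_right_shift are
placeholders for whatever the decoder already has in hand):
    if (get_xl(ctx) == MXL_RV128) {
        if (shamt == 0) {
            shamt = 64;          /* rv128c: an encoded shamt of 0 means 64 */
        } else if (is_right_shift && (shamt & 0x20)) {
            shamt |= 0x40;       /* sign-extend the 6-bit shamt to 7 bits */
        }
    }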
Signed-off-by: Frédéric Pétrot <frederic.petrot@univ-grenoble-alpes.fr>
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220710110451.245567-1-frederic.petrot@univ-grenoble-alpes.fr>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
We should disable extensions in riscv_cpu_realize() if minimum required
priv spec version is not satisfied. This also ensures that machines with
priv spec v1.11 (or lower) cannot enable H, V, and various multi-letter
extensions.
Fixes: a775398be2 ("target/riscv: Add isa extenstion strings to the device tree")
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Signed-off-by: Rahul Pathak <rpathak@ventanamicro.com>
Message-Id: <20220630061150.905174-3-apatel@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
We should write the transformed instruction encoding of the trapped
instruction into the [m|h]tinst CSR at the time of taking the trap, as
defined by the RISC-V privileged specification v1.12.
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Acked-by: dramforever <dramforever@live.com>
Message-Id: <20220630061150.905174-2-apatel@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Right now the translator stops right *after* the end of a page, which
breaks reporting of fault locations when the last instruction of a
multi-insn translation block crosses a page boundary.
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1155
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Acked-by: Ilya Leoshkevich <iii@linux.ibm.com>
Tested-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
These will be useful in properly ending the TB.
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Acked-by: Ilya Leoshkevich <iii@linux.ibm.com>
Tested-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Right now translator stops right *after* the end of a page, which
breaks reporting of fault locations when the last instruction of a
multi-insn translation block crosses a page boundary.
An implementation, like the one arm and s390x have, would require an
i386 length disassembler, which is burdensome to maintain. Another
alternative would be to single-step at the end of a guest page, but
this may come with a performance impact.
Fix by snapshotting disassembly state and restoring it after we figure
out we crossed a page boundary. This includes rolling back cc_op
updates and emitted ops.
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1143
Message-Id: <20220817150506.592862-4-iii@linux.ibm.com>
[rth: Simplify end-of-insn cross-page checks.]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Right now translator stops right *after* the end of a page, which
breaks reporting of fault locations when the last instruction of a
multi-insn translation block crosses a page boundary.
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220817150506.592862-3-iii@linux.ibm.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Pass these along to translator_loop -- pc may be used instead
of tb->pc, and host_pc is currently unused. Adjust all targets
at one time.
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Acked-by: Ilya Leoshkevich <iii@linux.ibm.com>
Tested-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The only user can easily use translator_lduw and
adjust the type to signed during the return.
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Acked-by: Ilya Leoshkevich <iii@linux.ibm.com>
Tested-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
When running SMP systems we sometimes were seeing lockups where
IPI interrupts were being raised but never handled.
This looks to be caused by 2 issues in the openrisc interrupt handling
logic.
1. After clearing an interrupt, the openrisc_cpu_set_irq handler will
always clear PICSR. This is not correct, as masked interrupts
should still be visible in PICSR.
2. After setting PICMR (the mask register), newly exposed interrupts should
cause an interrupt to be raised. This was not being done, so add it.
This patch fixes both issues.
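A sketch of the second fix in the PICMR write path (the surrounding SPR helper
is illustrative; picmr/picsr are the existing env fields):
    CPUState *cs = env_cpu(env);
    env->picmr = new_mask;
    if (env->picsr & env->picmr) {
        cpu_interrupt(cs, CPU_INTERRUPT_HARD);   /* a now-unmasked interrupt is pending */
    }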
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Stafford Horne <shorne@gmail.com>
This patch enables multithreaded TCG for OpenRISC. Since the or1k shared
synchronized timer can be updated from each vCPU via helpers, we use a
mutex to synchronize updates.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Stafford Horne <shorne@gmail.com>
When we are tracing, it's helpful to know which CPUs are getting
interrupted; add that detail to the log line.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Stafford Horne <shorne@gmail.com>
In commit f0655423ca ("target/openrisc: Reorg tlb lookup") data and
instruction TLB reads were combined. This broke debugger reads, where
we first tried to map using the data TLB and then fell back to the
instruction TLB.
This patch replicates this logic by first requesting a PAGE_READ
protection mapping then falling back to PAGE_EXEC.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Stafford Horne <shorne@gmail.com>
Make the AES vector helpers AVX ready
No functional changes to existing helpers
Signed-off-by: Paul Brook <paul@nowt.org>
Message-Id: <20220424220204.2493824-22-paul@nowt.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Make the pclmulqdq helper AVX ready
Signed-off-by: Paul Brook <paul@nowt.org>
Message-Id: <20220424220204.2493824-21-paul@nowt.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Rewrite the blendv helpers so that they can easily be extended to support
the AVX encodings, which make all 4 arguments explicit.
No functional changes to the existing helpers
Signed-off-by: Paul Brook <paul@nowt.org>
Message-Id: <20220424220204.2493824-20-paul@nowt.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Fix up various vector helpers that either trivially extend to 256 bit,
or don't have 256 bit variants.
No functional changes to existing helpers
Signed-off-by: Paul Brook <paul@nowt.org>
Message-Id: <20220424220204.2493824-19-paul@nowt.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Prepare the horizontal arithmetic vector helpers for AVX
These currently use a dummy Reg typed variable to store the result then
assign the whole register. This will cause 128 bit operations to corrupt
the upper half of the register, so replace it with explicit temporaries
and element assignments.
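A stand-alone illustration of the pattern for a horizontal add over the low
four floats (hypothetical Reg layout, not the real helper):

#include <stdio.h>

#define MAX_ELEMS 8     /* a full 256-bit ymm worth of floats */

typedef struct { float f[MAX_ELEMS]; } Reg;

/* Horizontal add over the low n elements (n == 4 for a 128-bit op).
 * Building the result in scalar temporaries and assigning only the low n
 * elements leaves f[n]..f[MAX_ELEMS-1] untouched, unlike the old
 * "Reg tmp; ...; *d = tmp;" pattern, which rewrites the whole register. */
static void haddps(Reg *d, const Reg *a, const Reg *b, int n)
{
    float r[MAX_ELEMS];
    int i;

    for (i = 0; i < n / 2; i++) {
        r[i]         = a->f[2 * i] + a->f[2 * i + 1];
        r[i + n / 2] = b->f[2 * i] + b->f[2 * i + 1];
    }
    for (i = 0; i < n; i++) {
        d->f[i] = r[i];
    }
}

int main(void)
{
    Reg x = { { 1, 2, 3, 4, 5, 6, 7, 8 } };
    haddps(&x, &x, &x, 4);          /* d aliases both sources */
    printf("%g %g %g %g | %g\n", x.f[0], x.f[1], x.f[2], x.f[3], x.f[4]);
    return 0;
}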
Signed-off-by: Paul Brook <paul@nowt.org>
Message-Id: <20220424220204.2493824-18-paul@nowt.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Make the dpps and dppd helpers AVX-ready
I can't see any obvious reason why dppd shouldn't work on 256 bit ymm
registers, but both AMD and Intel agree that it's xmm only.
Signed-off-by: Paul Brook <paul@nowt.org>
Message-Id: <20220424220204.2493824-17-paul@nowt.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
AVX includes an additional set of comparison predicates, some of which
our softfloat implementation does not expose as separate functions.
Rewrite the helpers in terms of floatN_compare for future extensibility.
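A rough stand-alone model of the idea (the enum and helper names are
illustrative, not the softfloat API): classify the operands with one compare,
then express each predicate as the set of relations it accepts:

#include <math.h>
#include <stdbool.h>
#include <stdio.h>

/* Mirrors the four possible outcomes of a floating-point compare. */
typedef enum { REL_LESS, REL_EQUAL, REL_GREATER, REL_UNORDERED } Relation;

static Relation compare(double a, double b)
{
    if (isnan(a) || isnan(b)) {
        return REL_UNORDERED;
    }
    return a < b ? REL_LESS : a > b ? REL_GREATER : REL_EQUAL;
}

/* Each predicate is just the set of relations that satisfies it. */
static bool cmp_eq(Relation r)    { return r == REL_EQUAL; }
static bool cmp_lt(Relation r)    { return r == REL_LESS; }
static bool cmp_le(Relation r)    { return r == REL_LESS || r == REL_EQUAL; }
static bool cmp_unord(Relation r) { return r == REL_UNORDERED; }

/* A CMPPS-style element result is all-ones or all-zeroes. */
static unsigned cmple_elem(float a, float b)
{
    return cmp_le(compare(a, b)) ? ~0u : 0u;
}

int main(void)
{
    printf("%d %d %d %08x\n", cmp_eq(compare(1, 1)), cmp_lt(compare(2, 1)),
           cmp_unord(compare(0.0 / 0.0, 1)), cmple_elem(1.0f, 2.0f));
    return 0;
}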
Signed-off-by: Paul Brook <paul@nowt.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220424220204.2493824-24-paul@nowt.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Prepare the "easy" floating point vector helpers for AVX
No functional changes to existing helpers.
Signed-off-by: Paul Brook <paul@nowt.org>
Message-Id: <20220424220204.2493824-16-paul@nowt.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
These helpers need to take special care to avoid overwriting source values
before the whole result has been calculated. Currently they use a dummy
Reg typed variable to store the result then assign the whole register.
This will cause 128 bit operations to corrupt the upper half of the register,
so replace it with explicit temporaries and element assignments.
Signed-off-by: Paul Brook <paul@nowt.org>
Message-Id: <20220424220204.2493824-14-paul@nowt.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
More preparatory work for AVX support in various integer vector helpers
No functional changes to existing helpers.
Signed-off-by: Paul Brook <paul@nowt.org>
Message-Id: <20220424220204.2493824-13-paul@nowt.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Rewrite the "simple" vector integer helpers in preperation for AVX support.
While the current code is able to use the same prototype for unary
(a = F(b)) and binary (a = F(b, c)) operations, future changes will cause
them to diverge.
No functional changes to existing helpers
Signed-off-by: Paul Brook <paul@nowt.org>
Message-Id: <20220424220204.2493824-12-paul@nowt.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Rewrite the vector shift helpers in preparation for AVX support (3-operand
form and 256-bit vectors).
For now keep the existing two operand interface.
No functional changes to existing helpers.
Signed-off-by: Paul Brook <paul@nowt.org>
Message-Id: <20220424220204.2493824-11-paul@nowt.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Use a union to store the various possible kinds of function pointers, and
access the correct one based on the flags.
SSEOpHelper_table6 and SSEOpHelper_table7 right now only have one case,
but this would change with AVX's 3- and 4-argument operations. Use
unions there too, to keep the code more similar for the three tables.
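A stand-alone sketch of the dispatch pattern (flag and type names are
illustrative):

#include <stdio.h>

#define OPF_3ARG 0x1    /* illustrative flag: helper takes an extra operand */

typedef void (*helper2_fn)(int d, int s);
typedef void (*helper3_fn)(int d, int s, int v);

typedef struct {
    unsigned flags;
    union {
        helper2_fn op2;    /* used when !(flags & OPF_3ARG) */
        helper3_fn op3;    /* used when  (flags & OPF_3ARG) */
    } fn;
} OpEntry;

static void do_add(int d, int s)          { printf("add %d %d\n", d, s); }
static void do_blend(int d, int s, int v) { printf("blend %d %d %d\n", d, s, v); }

static const OpEntry table[] = {
    { 0,        { .op2 = do_add } },
    { OPF_3ARG, { .op3 = do_blend } },
};

/* Pick the correct member of the union based on the flags. */
static void dispatch(const OpEntry *e, int d, int s, int v)
{
    if (e->flags & OPF_3ARG) {
        e->fn.op3(d, s, v);
    } else {
        e->fn.op2(d, s);
    }
}

int main(void)
{
    dispatch(&table[0], 0, 1, 2);
    dispatch(&table[1], 0, 1, 2);
    return 0;
}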
Extracted from a patch by Paul Brook <paul@nowt.org>.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
For AVX we're going to need both 128 bit (xmm) and 256 bit (ymm) variants of
floating point helpers. Add the register type suffix to the existing
*PS and *PD helpers (SS and SD variants are only valid on 128 bit vectors)
No functional changes.
Signed-off-by: Paul Brook <paul@nowt.org>
Message-Id: <20220424220204.2493824-15-paul@nowt.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Extracted from a patch by Paul Brook <paul@nowt.org>.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Put more flags to work to avoid hardcoding lists of opcodes. The op7 case
for SSE_OPF_CMP is included for homogeneity and because AVX needs it, but
it is never used by SSE or MMX.
Extracted from a patch by Paul Brook <paul@nowt.org>.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Handle 3DNOW instructions early to avoid complicating the MMX/SSE logic.
Signed-off-by: Paul Brook <paul@nowt.org>
Message-Id: <20220424220204.2493824-25-paul@nowt.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Add a flags field to each row in sse_op_table6 and sse_op_table7.
Initially this is only used as a replacement for the magic SSE41_SPECIAL
pointer. The other flags are mostly relevant for the AVX implementation
but can be applied to SSE as well.
Signed-off-by: Paul Brook <paul@nowt.org>
Message-Id: <20220424220204.2493824-6-paul@nowt.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Add a flags field to each row in sse_op_table1.
Initially this is only used as a replacement for the magic
SSE_SPECIAL and SSE_DUMMY pointers, the other flags are mostly
relevant for the AVX implementation but can be applied to SSE as well.
Signed-off-by: Paul Brook <paul@nowt.org>
Message-Id: <20220424220204.2493824-5-paul@nowt.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Add a convenience macro to get the address of an xmm_regs element within
CPUX86State.
This was originally going to be the basis of an implementation that broke
operations into 128 bit chunks. I scrapped that idea, so this is now a purely
cosmetic change. But I think a worthwhile one - it reduces the number of
function calls that need to be split over multiple lines.
No functional changes.
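A sketch of what such a convenience macro looks like, with a stand-in for
CPUX86State (names are illustrative, not the actual macro):

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

typedef union { uint8_t b[16]; uint64_t q[2]; } ZMMRegSketch;

typedef struct {
    ZMMRegSketch xmm_regs[16];
} CPUX86StateSketch;

/* One short macro instead of a long offsetof() expression at every call. */
#define XMM_OFFSET(reg) offsetof(CPUX86StateSketch, xmm_regs[reg])

int main(void)
{
    /* e.g. gen_op_ld(..., XMM_OFFSET(rm)) fits on one line */
    printf("%zu\n", XMM_OFFSET(3));
    return 0;
}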
Signed-off-by: Paul Brook <paul@nowt.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220424220204.2493824-9-paul@nowt.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Extracted from a patch by Paul Brook <paul@nowt.org>.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Write down explicitly the load/store sequence.
Extracted from a patch by Paul Brook <paul@nowt.org>.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Merge tag 'pull-ppc-20220831' of https://gitlab.com/danielhb/qemu into staging
ppc patch queue for 2022-08-31:
In the first 7.2 queue we have changes in the powernv pnv-phb handling,
the start of the QOMification of the ppc405 model, the removal of the
taihu machine, a new SLOF image and others.
# -----BEGIN PGP SIGNATURE-----
#
# iHUEABYKAB0WIQQX6/+ZI9AYAK8oOBk82cqW3gMxZAUCYw/AFgAKCRA82cqW3gMx
# ZI6XAP0d8m6r1JqKXPSfCwVYy+AfrwY7oZWYbeTqdamK6xHcUQD+JyCcFcogY4Vz
# YwvHLd9W2cqvoWiZ4tmkK4Mb0Xt0Xg4=
# =0uL/
# -----END PGP SIGNATURE-----
# gpg: Signature made Wed 31 Aug 2022 16:09:58 EDT
# gpg: using EDDSA key 17EBFF9923D01800AF2838193CD9CA96DE033164
# gpg: Good signature from "Daniel Henrique Barboza <danielhb413@gmail.com>" [unknown]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 17EB FF99 23D0 1800 AF28 3819 3CD9 CA96 DE03 3164
* tag 'pull-ppc-20220831' of https://gitlab.com/danielhb/qemu: (60 commits)
ppc4xx: Fix code style problems reported by checkpatch
ppc/ppc4xx: Fix sdram trace events
hw/ppc/Kconfig: Move imply before select
hw/ppc/sam460ex: Remove PPC405 dependency from sam460ex
ppc405: Move machine specific code to ppc405_boards.c
ppc/ppc405: QOM'ify FPGA
ppc/ppc405: Use an explicit I2C object
hw/intc/ppc-uic: Convert ppc-uic to a PPC4xx DCR device
ppc/ppc405: Use an embedded PPCUIC model in SoC state
ppc4xx: Rename ppc405-ebc to ppc4xx-ebc
ppc4xx: Move EBC model to ppc4xx_devs.c
ppc4xx: Rename ppc405-plb to ppc4xx-plb
ppc4xx: Move PLB model to ppc4xx_devs.c
ppc/ppc405: QOM'ify MAL
ppc/ppc405: QOM'ify PLB
ppc/ppc405: QOM'ify POB
ppc/ppc405: QOM'ify OPBA
ppc/ppc405: QOM'ify EBC
ppc/ppc405: QOM'ify DMA
ppc/ppc405: QOM'ify GPIO
...
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
The DPPS (Dot Product) instruction is defined to first sum pairs of
intermediate results, then sum those values to get the final result.
i.e. (A+B)+(C+D)
We incrementally sum the results, i.e. ((A+B)+C)+D, which can result
in incorrect rounding.
For consistency, also change the variable names to the ones used
in the Intel SDM and implement DPPD following the manual.
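For example, in single precision with round-to-nearest the two orderings can
already differ; a quick stand-alone check:

#include <stdio.h>

int main(void)
{
    float a = 16777216.0f;   /* 2^24 */
    float b = 1.0f, c = 1.0f, d = 1.0f;

    float pairwise    = (a + b) + (c + d);   /* what the ISA specifies */
    float incremental = ((a + b) + c) + d;   /* what the old helper did */

    /* prints 16777218.0 vs 16777216.0 with round-to-nearest */
    printf("%.1f vs %.1f\n", pairwise, incremental);
    return 0;
}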
Based on a patch by Paul Brook <paul@nowt.org>.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The computation must not overwrite either the destination or the source
before the last element has been computed.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
kvm_put_sregs2() fails to reset 'locked' CR4/CR0 bits upon vCPU reset when
it is in VMX root operation. Do kvm_put_msr_feature_control() before
kvm_put_sregs2() to (possibly) kick vCPU out of VMX root operation. It also
seems logical to do kvm_put_msr_feature_control() before
kvm_put_nested_state() and not after it, especially when 'real' nested
state is set.
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20220818150113.479917-3-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Make sure env->nested_state is cleaned up when a vCPU is reset, it may
be stale after an incoming migration, kvm_arch_put_registers() may
end up failing or putting vCPU in a weird state.
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20220818150113.479917-2-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This bit is not saved across interrupts, so we must
delay delivering the interrupt until the skip has
been processed.
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1118
Reviewed-by: Michael Rolnik <mrolnik@gmail.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
We cannot deliver two interrupts simultaneously;
the first interrupt handler must execute first.
Reviewed-by: Michael Rolnik <mrolnik@gmail.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
There is no need to go through cc->tcg_ops when
we know what value that must have.
Reviewed-by: Michael Rolnik <mrolnik@gmail.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
While there are no target-specific nonfaulting probes,
generic code may grow some uses at some point.
Note that the attrs argument was incorrect -- it should have
been MEMTXATTRS_UNSPECIFIED. Just use the simpler interface.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
When an overflow exception occurs and OE is set the intermediate result
should be adjusted (by subtracting from the exponent) to avoid rounding
to inf. The same applies to an underflow exception and UE (but adding
to the exponent). To do this, set fp_status.rebias_overflow when OE
is set and fp_status.rebias_underflow when UE is set, as the FPU will
recalculate in case of an overflow/underflow if the corresponding
rebias* flag is set.
Signed-off-by: Lucas Mateus Castro (alqotel) <lucas.araujo@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220805141522.412864-3-lucas.araujo@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
ppc_cpu_compare_class_pvr_mask() should match the best CPU class in the
family, because it is used by the KVM subsystem to find the host CPU
class. Since commit 03ae4133ab ("target-ppc: Add pvr_match()
callback"), it matches any class in the family (the first one in the
comparison list).
Since commit f30c843ced ("ppc/pnv: Introduce PowerNV machines with
fixed CPU models"), pnv has relied on pnv_match having these new
semantics to check machine compatibility with a CPU family.
Resolve this by adding a parameter to the pvr_match function to select
the best or any match, and restore the old behaviour for the KVM case.
Prior to this fix, e.g., a POWER9 DD2.3 KVM host matches to the
power9_v1.0 class (because that happens to be the first POWER9 family
CPU compared). After the patch, it matches the power9_v2.0 class.
This approach requires pnv_match contain knowledge of the CPU classes
implemented in the same family, which feels ugly. But pushing the 'best'
match down to the class would still require they know about one another
which is not obviously much better. For now this gets things working.
Fixes: 03ae4133ab ("target-ppc: Add pvr_match() callback")
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20220731013358.170187-1-npiggin@gmail.com>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
I2 is 16 bits, not 32.
Found by running valgrind's none/tests/s390x/traps.
Fixes: 1c26875182 ("target-s390: Implement COMPARE AND TRAP")
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Message-Id: <20220817161529.597414-1-iii@linux.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Add stfle 197 (processor-activity-instrumentation extension 1) to the
gen16 default model and fence it off for 7.1 and older.
Signed-off-by: Christian Borntraeger <borntraeger@linux.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Message-Id: <20220727135120.12784-1-borntraeger@linux.ibm.com>
Acked-by: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
The proberi assembler instruction checks the read/write access rights
for the page of a given address and shall return, in the target register,
a value of 1 if the test succeeds and 0 on failure.
But when run in linux-user mode, qemu currently simply returns the
return code of page_check_range() which returns 0 on success and -1 on
failure, which is the opposite of what proberi should return.
Fix it by checking the return code of page_check_range() and returning
the expected value.
The easiest way to reproduce the issue is by running
"/lib/ld.so.1 --version" in a chroot which fails without this patch.
At startup of ld.so the __canonicalize_funcptr_for_compare() function is
used to resolve the function address out of a function descriptor, which
fails because proberi (due to the wrong return code) seems to indicate
that the given address isn't accessible.
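The mapping itself is simple; a stand-alone sketch of the inversion, with
page_check_range() modelled by a stub that follows the 0 / -1 convention
described above:

#include <stdio.h>

/* page_check_range() convention: 0 on success, -1 on failure. */
static int fake_page_check_range(int accessible) { return accessible ? 0 : -1; }

/* proberi convention: 1 if the access is allowed, 0 if not. */
static int proberi_result(int check_ret) { return check_ret == 0 ? 1 : 0; }

int main(void)
{
    printf("mapped: %d, unmapped: %d\n",
           proberi_result(fake_page_check_range(1)),
           proberi_result(fake_page_check_range(0)));
    return 0;
}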
Signed-off-by: Helge Deller <deller@gmx.de>
1. Add some information about how to boot the LoongArch virt
machine with a UEFI BIOS and a Linux kernel, and how to access the
source code or binary files.
2. Move the explanation of LoongArch system emulation from
target/loongarch/README to docs/system/loongarch/loongson3.rst
Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn>
Reviewed-by: Song Gao <gaosong@loongson.cn>
Message-Id: <20220812091957.3338126-1-yangxiaojuan@loongson.cn>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The newly added neoverse-n1 CPU has ID register values which indicate
the presence of the Statistical Profiling Extension, because the real
hardware has this feature. QEMU's TCG emulation does not yet
implement SPE, though (not even as a minimal stub implementation), so
guests will crash if they try to use it because the SPE system
registers don't exist.
Force ID_AA64DFR0_EL1.PMSVer to 0 in CPU realize for TCG, so that
we don't advertise to the guest a feature that doesn't exist.
(We could alternatively do this by editing the value that
aarch64_neoverse_n1_initfn() sets for this ID register, but
suppressing the field in realize means we won't re-introduce this bug
when we add other CPUs that have SPE in hardware, such as the
Neoverse-V1.)
An example of a non-booting guest is current mainline Linux (5.19),
when booting in EL2 on the virt board (ie with -machine
virtualization=on).
Reported-by: Zenghui Yu <yuzenghui@huawei.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Zenghui Yu <yuzenghui@huawei.com>
Message-id: 20220811131127.947334-1-peter.maydell@linaro.org
All of the fpu operations are defined with TCG_CALL_NO_WG, but they
all modify FCSR0. The most efficient way to fix this is to remove
cpu_fcsr0, and instead use explicit load and store operations for the
two instructions that manipulate that value.
Acked-by: Qi Hu <huqi@loongson.cn>
Reviewed-by: Song Gao <gaosong@loongson.cn>
Reported-by: Feiyang Chen <chenfeiyang@loongson.cn>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Coverity notes that we forgot to check the error return from
lock_user() in one place in the handling of the UHI_plog semihosting
call. Add the missing error handling.
report_fault() is rather brutal in that it will call abort(), but
this is the same error-handling used in the rest of this file.
Resolves: Coverity CID 1490684
Fixes: ea4210600d ("target/mips: Avoid qemu_semihosting_log_out for UHI_plog")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220719191737.384744-1-peter.maydell@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
The GDB LoongArch FPU description uses an fcc register; update
gdb_set_fpu() and gdb_get_fpu() to match it.
Signed-off-by: Song Gao <gaosong@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20220805033523.1416837-6-gaosong@loongson.cn>
Rename loongarch-fpu64.xml to loongarch-fpu.xml and update
loongarch-fpu.xml to match upstream GDB [1]
[1]:https://github.com/bminor/binutils-gdb/blob/master/gdb/features/loongarch/fpu.xml
Signed-off-by: Song Gao <gaosong@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20220805033523.1416837-5-gaosong@loongson.cn>
The GDB LoongArch description adds an orig_a0 register, see base64.xml [1].
We should add orig_a0 to match upstream GDB.
[1]: https://github.com/bminor/binutils-gdb/blob/master/gdb/features/loongarch/base64.xml
Signed-off-by: Song Gao <gaosong@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20220805033523.1416837-2-gaosong@loongson.cn>
The SET_FPU_* macros are used to set the corresponding bits of fcsr.
Unfortunately they forget to write the result back, so fcsr's "CAUSE"
field is never updated. This patch fixes that bug.
Signed-off-by: Qi Hu <huqi@loongson.cn>
Reviewed-by: Song Gao <gaosong@loongson.cn>
Message-Id: <20220804132450.314329-1-huqi@loongson.cn>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
When the user queries CPU models via QMP there is a 'deprecated' flag
present; however, this is not done for the CLI '-cpu help' command.
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
When the user queries CPU models via QMP there is a 'deprecated' flag
present; however, this is not done for the CLI '-cpu help' command.
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
When the user queries CPU models via QMP there is a 'deprecated' flag
present; however, this is not done for the CLI '-cpu help' command.
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Delay generating the exception until after we know the
insn length, and record that length in env->error_code.
Fixes: 8ec7e3c53d ("target/mips: Use an exception for semihosting")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1126
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Merge tag 'pull-request-2022-08-01' of https://gitlab.com/thuth/qemu into staging
- Some fixes for various tests
- Improve wordings in some files
# -----BEGIN PGP SIGNATURE-----
#
# iQJFBAABCAAvFiEEJ7iIR+7gJQEY8+q5LtnXdP5wLbUFAmLn6aYRHHRodXRoQHJl
# ZGhhdC5jb20ACgkQLtnXdP5wLbWekg/+NVIT1jp3tcbfPIE6pB0vI/AhqN3i2hUd
# zfJ4V3rSe5tg54JpmuuSt542mp4BDM9bPfYcY/DYESWEtW0c9wv80iP/5LFdJF0G
# GYtk7Q4pRXvB32kF0v9OxjCEGPUeEqSRrDrsI/Ify5evEIhr55oHPnDjN/US1Bx+
# TIuVfmyz8jhSPHsUvZzfVyFxkHre1+BWDxgM3zxoHFIaWEscIPE1KhwRILbKIxWx
# MHpL8JLAneGFwljQoUAMCl7GzHkVna59RhqkbBJ+8iTaNGipQj9FhHZBo2CulO0J
# SR7scWowYN8Jt2FNMe3tcKM2xQn/2Fg2TEK4sp6q+hCXhJuvFfWFHBiFYTNpagFA
# LGgZmPfDr4uZtMEqY4AdEZdL14YZcoM9E/RpW7GhSvMHy73wOj16O8luH1bU0jtG
# 6X1VvAZlw8/Son1Tbq2CC6WejlMfJFXSzF6Fy6M7SflMPW44vJOs5uKdW405MYjE
# Pksbfz1rwoNfK+1qBNQop7SccgDRvPtlLf3lDAU9V/JHWVEITs1KTfyS+46U8jKA
# 9SVBzKuTpVd+aXvMgvMAmmqnyvUBPHJ9KcFq4vHNbIETsGaQsXu0Q6waBmpcK8YB
# KUL/g0EsdfhkpVVgKYZ4Bzj7shG6SKTdwc/lUcOt+wQuDrZZzaC+A2cu/6ReQN6T
# BIHtoaxTz8E=
# =K6RW
# -----END PGP SIGNATURE-----
# gpg: Signature made Mon 01 Aug 2022 07:56:38 AM PDT
# gpg: using RSA key 27B88847EEE0250118F3EAB92ED9D774FE702DB5
# gpg: issuer "thuth@redhat.com"
# gpg: Good signature from "Thomas Huth <th.huth@gmx.de>" [undefined]
# gpg: aka "Thomas Huth <thuth@redhat.com>" [undefined]
# gpg: aka "Thomas Huth <th.huth@posteo.de>" [unknown]
# gpg: aka "Thomas Huth <huth@tuxfamily.org>" [undefined]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 27B8 8847 EEE0 2501 18F3 EAB9 2ED9 D774 FE70 2DB5
* tag 'pull-request-2022-08-01' of https://gitlab.com/thuth/qemu:
tests/qtest/migration-test: Run the dirty ring tests only with the x86 target
trivial: Fix duplicated words
misc: fix commonly doubled up words
tests/unit/test-qga: Replace the word 'blacklist' in the guest agent unit test
migration-test: Allow test to run without uffd
migration-test: Use migrate_ensure_converge() for auto-converge
tests/tcg/linux-test: Fix random hangs in test_socket
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The test for the IF block indicates no ID registers are exposed, much
less host support for SVE. Move the SVE probe into the ELSE block.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220726045828.53697-4-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Because we weren't setting this flag, our probe of ID_AA64ZFR0
was always returning zero. This also obviates the adjustment
of ID_AA64PFR0, which had sanitized the SVE field.
The effects of the bug are not visible, because the only thing that
ID_AA64ZFR0 is used for within qemu at present is tcg translation.
The other tests for SVE within KVM are via ID_AA64PFR0.SVE.
Reported-by: Zenghui Yu <yuzenghui@huawei.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220726045828.53697-3-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Indication for support for SVE will not depend on whether we
perform the query on the main kvm_state or the temp vcpu.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220726045828.53697-2-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Some files wrongly contain the same word twice in a row.
One of them should be removed or replaced.
Message-Id: <20220722145859.1952732-1-thuth@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20220707163720.1421716-5-berrange@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Thomas Huth <thuth@redhat.com>
VyV operand is only used in the vshuff and vdeal instructions. These
instructions write to both VyV and VxV operands. In the case where
both operands are the same register, we need a separate location for
VyV. We use the existing vtmp field in CPUHexagonState.
Test case added in tests/tcg/hexagon/hvx_misc.c
Signed-off-by: Taylor Simpson <tsimpson@quicinc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220718230320.24444-2-tsimpson@quicinc.com>
1. Rename 'loongson3.c' to 'virt.c' and change the meson.build file.
2. Rename 'loongson3.rst' to 'virt.rst'.
Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn>
Message-Id: <20220729073018.27037-2-yangxiaojuan@loongson.cn>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
perror() is designed to append the decoded errno value to a
string. This, however, only makes sense if we called something that
actually sets errno prior to that.
For the callers that check for split irqchip support that is not the
case, and we end up with confusing error messages that end in
"success". Use error_report() instead.
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Message-Id: <20220728142446.438177-1-cohuck@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Merge tag 'pull-ppc-20220728' of https://gitlab.com/danielhb/qemu into staging
ppc patch queue for 2022-07-28:
Short queue with 2 Coverity fixes and one fix for the 'wait' insn,
which causes hangs if the guest kernel uses the most up-to-date wait
opcode.
- target/ppc:
- implement new wait variants to fix guest hang when using the new opcode
- ppc440_uc: initialize length passed to cpu_physical_memory_map()
- spapr_nvdimm: check if spapr_drc_index() returns NULL
# -----BEGIN PGP SIGNATURE-----
#
# iHUEABYKAB0WIQQX6/+ZI9AYAK8oOBk82cqW3gMxZAUCYuK8VgAKCRA82cqW3gMx
# ZOc7AQDPMsFY9NHNqJ3O0MiX4Qoy8IGUreZ9dzZSS3zT1nxtEAD+Lwl0/aGO+dk+
# +NiIO80A5Agy/0g8PHie4qR3EqHEnwA=
# =Q4eR
# -----END PGP SIGNATURE-----
# gpg: Signature made Thu 28 Jul 2022 09:41:58 AM PDT
# gpg: using EDDSA key 17EBFF9923D01800AF2838193CD9CA96DE033164
# gpg: Good signature from "Daniel Henrique Barboza <danielhb413@gmail.com>" [unknown]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 17EB FF99 23D0 1800 AF28 3819 3CD9 CA96 DE03 3164
* tag 'pull-ppc-20220728' of https://gitlab.com/danielhb/qemu:
target/ppc: Implement new wait variants
hw/ppc/ppc440_uc: Initialize length passed to cpu_physical_memory_map()
hw/ppc: check if spapr_drc_index() returns NULL in spapr_nvdimm.c
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
ISA v2.06 adds new variations of wait, specified by the WC field. These
are not all compatible with the prior wait implementation, because they
add additional conditions that cause the processor to resume, which can
cause software to hang or run very slowly.
At this moment, with the current wait implementation and a pseries guest
using a mainline kernel with the new wait opcodes [1], QEMU hangs during boot if
more than one CPU is present:
qemu-system-ppc64 -M pseries,x-vof=on -cpu POWER10 -smp 2 -nographic
-kernel zImage.pseries -no-reboot
QEMU will exit (as there's no filesystem) if the test "passes", or hang
during boot if it hits the bug.
ISA v3.0 changed the wait opcode and removed the new variants (retaining
the WC field but making non-zero values reserved).
ISA v3.1 added new WC values to the new wait opcode, and added a PL
field.
This patch implements the new wait encoding and supports WC variants
with no-op implementations, which provides basic correctness as
explained in comments.
[1] https://lore.kernel.org/all/20220720132132.903462-1-npiggin@gmail.com/
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Tested-by: Joel Stanley <joel@jms.id.au>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20220720133352.904263-1-npiggin@gmail.com>
[danielhb: added information about the bug being fixed]
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
We got to talking about how Zmmul and M interact with each other
https://github.com/riscv/riscv-isa-manual/issues/869 , and it turns out
that QEMU's behavior is slightly wrong: having Zmmul and M is a legal
combination, it just means that the multiplication instructions are
supported even when M is disabled at runtime via misa.
This just stops M from being overridden based on Zmmul; with that, the
other checks for the multiplication instructions work as per the ISA.
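In other words, the multiply instructions should be accepted when either
extension allows them; a stand-alone model of the check (illustrative, not
the exact QEMU predicate):

#include <stdbool.h>

/* Illustrative model: misa_m is the runtime-writable misa.M bit. */
struct cfg { bool ext_m; bool ext_zmmul; };

static bool mul_insns_ok(const struct cfg *cfg, bool misa_m)
{
    /* Zmmul alone is enough; otherwise M must be present and enabled. */
    return cfg->ext_zmmul || (cfg->ext_m && misa_m);
}

int main(void)
{
    struct cfg c = { .ext_m = true, .ext_zmmul = true };
    return mul_insns_ok(&c, false) ? 0 : 1;   /* Zmmul keeps mul legal */
}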
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20220714180033.22385-1-palmer@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
In commit 7390e0e9ab, we added support for SME loads and stores.
Unlike SVE loads and stores, these include handling of 128-bit
elements. The SME load/store functions call down into the existing
sve_cont_ldst_elements() function, which uses the element size MO_*
value as an index into the pred_esz_masks[] array. Because this code
path now has to handle MO_128, we need to add an extra element to the
array.
This bug was spotted by Coverity because it meant we were reading off
the end of the array.
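For reference, the shape of the array after the change (one predicate bit per
vector byte, so the MO_128 entry keeps a bit every 16 bytes; the values below
show the pattern rather than quoting the source):

#include <stdint.h>

/* One predicate bit per vector byte; for element size 1 << esz bytes only
 * every (1 << esz)-th bit is significant.  MO_128 (esz == 4) needs a fifth
 * entry so SME's 128-bit element loads/stores stay inside the array. */
static const uint64_t pred_esz_masks[5] = {
    0xffffffffffffffffull,   /* MO_8   */
    0x5555555555555555ull,   /* MO_16  */
    0x1111111111111111ull,   /* MO_32  */
    0x0101010101010101ull,   /* MO_64  */
    0x0001000100010001ull,   /* MO_128 */
};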
Resolves: Coverity CID 1490539, 1490541, 1490543, 1490544, 1490545,
1490546, 1490548, 1490549, 1490550, 1490551, 1490555, 1490557,
1490558, 1490560, 1490561, 1490563
Fixes: 7390e0e9ab ("target/arm: Implement SME LD1, ST1")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220718100144.3248052-1-peter.maydell@linaro.org
Merge tag 'pull-hex-20220719-1' of https://github.com/quic/qemu into staging
Recall that the semantics of a Hexagon mem_noshuf packet are that the
store effectively happens before the load. There are two bug fixes
in this series.
# gpg: Signature made Tue 19 Jul 2022 22:25:19 BST
# gpg: using RSA key 3635C788CE62B91FD4C59AB47B0244FB12DE4422
# gpg: Good signature from "Taylor Simpson (Rock on) <tsimpson@quicinc.com>" [undefined]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 3635 C788 CE62 B91F D4C5 9AB4 7B02 44FB 12DE 4422
* tag 'pull-hex-20220719-1' of https://github.com/quic/qemu:
Hexagon (target/hexagon) fix bug in mem_noshuf load exception
Hexagon (target/hexagon) fix store w/mem_noshuf & predicated load
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The semantics of a mem_noshuf packet are that the store effectively
happens before the load. However, in cases where the load raises an
exception, we cannot simply execute the store first.
This change adds a probe to check that the load will not raise an
exception before executing the store.
If the load is predicated, this requires special handling. We check
the condition before performing the probe. Since we need the EA to
perform the check, we move the GET_EA portion inside CHECK_NOSHUF_PRED.
Test case added in tests/tcg/hexagon/mem_noshuf_exception.c
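Conceptually, the helper sequence now probes the load address (under its
predicate) before the store is performed; a stand-alone model of the
ordering, with setjmp/longjmp standing in for the MMU fault:

#include <setjmp.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static jmp_buf fault;
static uint32_t mem;              /* the store's target location */

/* Stand-in for the MMU probe: a bad address faults before any state changes. */
static void probe_load(bool load_addr_ok)
{
    if (!load_addr_ok) {
        longjmp(fault, 1);
    }
}

/* mem_noshuf semantics: the store is architecturally "first", but the load's
 * fault must be taken before the store becomes visible. */
static void mem_noshuf_packet(bool pred, bool load_addr_ok, uint32_t store_val)
{
    if (pred) {
        probe_load(load_addr_ok); /* check the predicate, then probe the EA */
    }
    mem = store_val;              /* store */
    if (pred) {
        /* load would happen here */
    }
}

int main(void)
{
    if (setjmp(fault)) {
        printf("load faulted, store suppressed: mem=%u\n", mem);
        return 0;
    }
    mem_noshuf_packet(true, false, 42);
    return 0;
}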
Suggested-by: Alessandro Di Federico <ale@rev.ng>
Suggested-by: Anton Johansson <anjo@rev.ng>
Signed-off-by: Taylor Simpson <tsimpson@quicinc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220707210546.15985-3-tsimpson@quicinc.com>
Call the CHECK_NOSHUF macro multiple times: once in the
fGEN_TCG_PRED_LOAD() and again in fLOAD().
Before this commit, a packet with a store and a predicated
load with mem_noshuf that gets encoded like this:
{ P0 = cmp.eq(R17,#0x0)
memw(R18+#0x0) = R2
if (!P0.new) R3 = memw(R17+#0x4) }
... would end up generating a branch over both the load
and the store like so:
...
brcond_i32 loc17,$0x0,eq,$L1
mov_i32 loc18,store_addr_1
qemu_st_i32 store_val32_1,store_addr_1,leul,0
qemu_ld_i32 loc16,loc7,leul,0
set_label $L1
...
Test cases added to tests/tcg/hexagon/mem_noshuf.c
Co-authored-by: Taylor Simpson <tsimpson@quicinc.com>
Signed-off-by: Brian Cain <bcain@quicinc.com>
Signed-off-by: Taylor Simpson <tsimpson@quicinc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220707210546.15985-2-tsimpson@quicinc.com>
Add a LoongArch flattened device tree, adding a CPU device node, a firmware
cfg node and a PCIe node into it, and create the FDT ROM memory region.
For now the FDT information is not complete, since only the UEFI BIOS uses
the FDT; the Linux kernel does not. The LoongArch Linux kernel uses the
ACPI tables, which are complete in the QEMU virt machine.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn>
Message-Id: <20220712083206.4187715-7-yangxiaojuan@loongson.cn>
[rth: Set TARGET_NEED_FDT, add fdt to meson.build]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
We should return zero when the invalid exception is raised and the operation involves a NaN.
Signed-off-by: Song Gao <gaosong@loongson.cn>
Message-Id: <20220716085426.3098060-4-gaosong@loongson.cn>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
We should configure cpucfg[20] to set the values for the scache's ways, sets,
and size arguments when the LoongArch CPU is initialized. However, the old
code wrote the 'sets argument' twice, so change one of them to the 'size
argument'.
Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220715064829.1521482-1-yangxiaojuan@loongson.cn>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The valid index range of the cpucfg array is 0 to ARRAY_SIZE(cpucfg)-1,
so accessing cpucfg[] with an index bigger than the maximum must be
forbidden.
Fix coverity CID: 1489760
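A sketch of the kind of guard described (a simplified fragment, assuming
out-of-range reads simply yield 0):

/* cpucfg read helper: any index outside [0, ARRAY_SIZE(cpucfg)) reads as 0. */
if (index >= ARRAY_SIZE(env->cpucfg)) {
    return 0;
}
return env->cpucfg[index];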
Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220715060740.1500628-6-yangxiaojuan@loongson.cn>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Fix out-of-bounds errors when accessing the excp_names[] array. The valid
index range of excp_names is 0 to ARRAY_SIZE(excp_names)-1. However, the
general code does not consider the maximum boundary.
Fix coverity CID: 1489758
Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220715060740.1500628-4-yangxiaojuan@loongson.cn>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The cpu_model argument may already have the '-loongarch-cpu' suffix,
e.g. when using the default for the LS7A1000 machine. Look the class up
with the cpu_model as given first; if that fails, try again with the
suffix appended. Validate that the object created by the function is
derived from the proper base class.
Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220715060740.1500628-2-yangxiaojuan@loongson.cn>
[rth: Try without and then with the suffix, to avoid testsuite breakage.]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
vfmin_res() / vfmax_res() are trying to check whether a and b are both
zeroes, but in reality they check that they are the same kind of zero.
This causes incorrect results when comparing positive and negative
zeroes.
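A stand-alone illustration of the difference using raw IEEE bit tests rather
than the softfloat helpers: +0.0 and -0.0 are both zeroes, but they are not
the same kind of zero.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static uint64_t bits(double d) { uint64_t u; memcpy(&u, &d, sizeof(u)); return u; }

/* Zero regardless of sign: exponent and fraction are all zero. */
static bool is_zero(double d) { return (bits(d) & ~(1ull << 63)) == 0; }

int main(void)
{
    double a = 0.0, b = -0.0;

    bool same_kind = bits(a) == bits(b);          /* analogous to the bug */
    bool both_zero = is_zero(a) && is_zero(b);    /* what the helper needs */

    printf("same kind of zero: %d, both zeroes: %d\n", same_kind, both_zero);
    return 0;
}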
Fixes: da4807527f ("s390x/tcg: Implement VECTOR FP (MAXIMUM|MINIMUM)")
Co-developed-by: Ulrich Weigand <ulrich.weigand@de.ibm.com>
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Message-Id: <20220713182612.3780050-2-iii@linux.ibm.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
According to PowerISA 3.1B, Book III 6.7.6 programming note, the
page directory base addresses are expected to be aligned to their
size. Real hardware seems to rely on that and will access the
wrong address if they are misaligned. This results in a
translation failure even if the page tables seem to be properly
populated.
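A stand-alone sketch of the check being added (field names illustrative):
the directory base must be aligned to the directory's own size, i.e.
2^nls entries of 8 bytes.

#include <stdbool.h>
#include <stdint.h>

/* A radix page directory with 2^nls 8-byte entries must have its base
 * aligned to its own size; real hardware ignores the low bits otherwise. */
static bool pgdir_base_valid(uint64_t base, unsigned nls)
{
    uint64_t size = (uint64_t)8 << nls;
    return (base & (size - 1)) == 0;
}

int main(void) { return pgdir_base_valid(0x10000, 9) ? 0 : 1; }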
Signed-off-by: Leandro Lupori <leandro.lupori@eldorado.org.br>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20220628133959.15131-4-leandro.lupori@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Check if the number and size of Radix levels are valid on
POWER9/POWER10 CPUs, according to the supported Radix Tree
Configurations described in their User Manuals.
Signed-off-by: Leandro Lupori <leandro.lupori@eldorado.org.br>
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220628133959.15131-3-leandro.lupori@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Check if partition and process tables are properly aligned, in
their size, according to PowerISA 3.1B, Book III 6.7.6 programming
note. Hardware and KVM also raise an exception in these cases.
Signed-off-by: Leandro Lupori <leandro.lupori@eldorado.org.br>
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220628133959.15131-2-leandro.lupori@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
When using "-machine none", env->tb_env is not allocated, causing the
segmentation fault reported in issue #85 (launchpad bug #811683). To
avoid this problem, check if the pointer != NULL before calling the
methods to print TBU/TBL/DECR.
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/85
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20220714172343.80539-1-matheus.ferst@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Equivalent to CHK_SV and CHK_HV, but can be used in decodetree methods.
Reviewed-by: Leandro Lupori <leandro.lupori@eldorado.org.br>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Signed-off-by: Lucas Coutinho <lucas.coutinho@eldorado.org.br>
Message-Id: <20220701133507.740619-3-lucas.coutinho@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
GEN_PRIV and related CHK_* macros just assumed that a variable named
"ctx" would be in scope when they are used, and that it would be a
pointer to DisasContext. Change these macros to receive the pointer
explicitly.
Reviewed-by: Leandro Lupori <leandro.lupori@eldorado.org.br>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Signed-off-by: Lucas Coutinho <lucas.coutinho@eldorado.org.br>
Message-Id: <20220701133507.740619-2-lucas.coutinho@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
This initial version supports the invalidation of one or all
TLB entries. Flushing by PID/LPID, or based on process/partition
scope is not supported, because it would make using the
generic QEMU TLB implementation hard. In these cases, all
entries are flushed.
Signed-off-by: Leandro Lupori <leandro.lupori@eldorado.org.br>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20220712193741.59134-3-leandro.lupori@eldorado.org.br>
[danielhb: moved 'set' declaration to TLBIE_RIC_PWC block]
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Also decode RIC, PRS and R operands.
Signed-off-by: Leandro Lupori <leandro.lupori@eldorado.org.br>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20220712193741.59134-2-leandro.lupori@eldorado.org.br>
[danielhb: mark bit 31 in @X_tlbie pattern as ignored]
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
The 'error' argument of gen_inval_exception will be or-ed with
POWERPC_EXCP_INVAL, so it should always be a constant prefixed with
POWERPC_EXCP_INVAL_. No functional change is intended,
spr_write_excp_vector is only used by register_BookE_sprs, and
powerpc_excp_booke ignores the lower 4 bits of the error code on
POWERPC_EXCP_INVAL exceptions.
Also, take the opportunity to replace printf with qemu_log_mask.
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20220627141104.669152-7-matheus.ferst@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
A call to "gen_(hv)priv_exception" should use POWERPC_EXCP_PRIV_* as the
'error' argument instead of POWERPC_EXCP_INVAL_*, and POWERPC_EXCP_FU is
an exception type, not an exception error code. To correctly set
FSCR[IC], we should raise Facility Unavailable with this exception type
and IC value as the error code.
Fixes: 565cb10967 ("target/ppc: add user read/write functions for MMCR0")
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20220627141104.669152-6-matheus.ferst@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
POWERPC_EXCP_INVAL should only be or-ed with other constants prefixed
with POWERPC_EXCP_INVAL_. Also, take the opportunity to move both
helpers under #if !defined(CONFIG_USER_ONLY) as the instructions that
use them are privileged.
No functional change is intended, the lower 4 bits of the error code are
ignored by all powerpc_excp_* methods on POWERPC_EXCP_INVAL exceptions.
Reported-by: Laurent Vivier <laurent@vivier.eu>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20220627141104.669152-5-matheus.ferst@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
The only PowerPC implementations with these insns were the 460 and 460F,
which had their definitions removed in [1].
[1] 7ff26aa6c6 ("target/ppc: Remove unused PPC 460 and 460F definitions")
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220627141104.669152-4-matheus.ferst@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Testing on a POWER9 DD2.3, we observed that the Linux kernel delivers a
signal with si_code ILL_PRVOPC (5) when a userspace application tries to
use slbfee. To obtain this behavior on linux-user, we should use
POWERPC_EXCP_PRIV with POWERPC_EXCP_PRIV_OPC.
No functional change is intended for softmmu targets as
gen_hvpriv_exception uses the same 'exception' argument
(POWERPC_EXCP_HV_EMU) for raise_exception_*, and the powerpc_excp_*
methods do not use lower bits of the exception error code when handling
POWERPC_EXCP_{INVAL,PRIV}.
Reported-by: Laurent Vivier <laurent@vivier.eu>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20220627141104.669152-3-matheus.ferst@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
The code in linux-user/ppc/cpu_loop.c expects POWERPC_EXCP_PRIV
exception with error POWERPC_EXCP_PRIV_OPC or POWERPC_EXCP_PRIV_REG,
while POWERPC_EXCP_INVAL_SPR is expected in POWERPC_EXCP_INVAL
exceptions. This mismatch caused an EXCP_DUMP with the message "Unknown
privilege violation (03)", as seen in [1].
[1] https://gitlab.com/qemu-project/qemu/-/issues/588
Fixes: 9b2fadda3e ("ppc: Rework generation of priv and inval interrupts")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/588
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20220627141104.669152-2-matheus.ferst@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Some systems have /proc/device-tree/cpus/../clock-frequency. However,
this is not the expected path for a CPU device tree directory.
Signed-off-by: Murilo Opsfelder Araujo <muriloo@linux.ibm.com>
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20220712210810.35514-1-muriloo@linux.ibm.com>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
The architecture requires that for faults on loads and stores which
do writeback, the syndrome information does not have the ISS
instruction syndrome information (i.e. ISV is 0). We got this wrong
for the load and store instructions covered by disas_ldst_reg_imm9().
Calculate iss_valid correctly so that if the insn is a writeback one
it is false.
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1057
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220715123323.1550983-1-peter.maydell@linaro.org
In regime_tcr() we return the appropriate TCR register for the
translation regime. For Secure EL2, we return the VSTCR_EL2 value,
but in this translation regime some fields that control behaviour are
in VTCR_EL2. When this code was originally written (as the comment
notes), QEMU didn't care about any of those fields, but we have since
added support for features such as LPA2 which do need the values from
those fields.
Synthesize a TCR value by merging in the relevant VTCR_EL2 fields to
the VSTCR_EL2 value.
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1103
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220714132303.1287193-8-peter.maydell@linaro.org
Change the representation of the TCR_EL* registers in the CPU state
struct from struct TCR to uint64_t. This allows us to drop the
custom vmsa_ttbcr_raw_write() function, moving the "enforce RES0"
checks to their more usual location in the writefn
vmsa_ttbcr_write(). We also don't need the resetfn any more.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220714132303.1287193-7-peter.maydell@linaro.org
Change the representation of the VSTCR_EL2 and VTCR_EL2 registers in
the CPU state struct from struct TCR to uint64_t.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220714132303.1287193-6-peter.maydell@linaro.org
We have a bug in our handling of accesses to the AArch32 VTCR
register on big-endian hosts: we were not adjusting the part of the
uint64_t field within TCR that the generated code would access. That
can be done with offsetoflow32(), by using an ARM_CP_STATE_BOTH cpreg
struct, or by defining a full set of read/write/reset functions --
the various other TCR cpreg structs used one or another of those
strategies, but for VTCR we did not, so on a big-endian host VTCR
accesses would touch the wrong half of the register.
Use offsetoflow32() in the VTCR register struct. This works even
though the field in the CPU struct is currently a struct TCR, because
the first field in that struct is the uint64_t raw_tcr.
None of the other TCR registers have this bug -- either they are
AArch64 only, or else they define resetfn, writefn, etc, and
expect to be passed the full struct pointer.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220714132303.1287193-5-peter.maydell@linaro.org
The only caller of regime_tcr() is now regime_tcr_value(); fold the
two together, and use the shorter and more natural 'regime_tcr'
name for the new function.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220714132303.1287193-4-peter.maydell@linaro.org
In get_level1_table_address(), instead of using precalculated values
of mask and base_mask from the TCR struct, calculate them directly
(in the same way we currently do in vmsa_ttbcr_raw_write() to
populate the TCR struct fields).
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220714132303.1287193-3-peter.maydell@linaro.org
The regime_tcr() function returns a pointer to a struct TCR
corresponding to the TCR controlling a translation regime. The
struct TCR has the raw value of the register, plus two fields mask
and base_mask which are used as a small optimization in the case of
32-bit short-descriptor lookups. Almost all callers of regime_tcr()
only want the raw register value. Define and use a new
regime_tcr_value() function which returns only the raw 64-bit
register value.
This is a preliminary to removing the 32-bit short descriptor
optimization -- it only saves a handful of bit operations, which is
tiny compared to the overhead of doing a page table walk at all, and
the TCR struct is awkward and makes fixing
https://gitlab.com/qemu-project/qemu/-/issues/1103 unnecessarily
difficult.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220714132303.1287193-2-peter.maydell@linaro.org
The documentation for PROT_MTE says that it cannot be cleared
by mprotect. Further, the implementation of the VM_ARCH_CLEAR bit
contains PROT_BTI, confirming that that bit should be cleared.
Introduce PAGE_TARGET_STICKY to allow target/arch/cpu.h to control
which bits may be reset during page_set_flags. This is sort of the
opposite of VM_ARCH_CLEAR, but works better with qemu's PAGE_* bits
that are separate from PROT_* bits.
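A sketch of the intended page_set_flags behaviour under this scheme (constant
values illustrative): sticky target bits survive unless the mapping is being
invalidated.

#define PAGE_VALID          0x0008
#define PAGE_TARGET_1       0x0200   /* e.g. BTI: may be cleared by mprotect */
#define PAGE_TARGET_2       0x0400   /* e.g. MTE: must survive mprotect */

/* Per-target definition (target/arch/cpu.h): bits that stick across
 * page_set_flags(), i.e. may not be reset by a later mprotect(). */
#define PAGE_TARGET_STICKY  PAGE_TARGET_2

static int update_page_flags(int old_flags, int new_flags)
{
    /* Unless the page is being invalidated entirely, carry the sticky
     * target bits over from the previous mapping. */
    if (new_flags & PAGE_VALID) {
        new_flags |= old_flags & PAGE_TARGET_STICKY;
    }
    return new_flags;
}

int main(void)
{
    int old = PAGE_VALID | PAGE_TARGET_1 | PAGE_TARGET_2;
    return update_page_flags(old, PAGE_VALID) & PAGE_TARGET_2 ? 0 : 1;
}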
Reported-by: Vitaly Buka <vitalybuka@google.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220711031420.17820-1-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We were only checking for SVE disabled and not taking into
account PSTATE.SM to check SME disabled, which resulted in
vectors being incorrectly truncated.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220713045848.217364-3-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
When PSTATE.SM is set, VL = SVL even if SVE is disabled.
This is visible in kselftest ssve-test.
Reported-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220713045848.217364-2-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Pass through RDPID and RDTSCP support in CPUID if host supports it.
Correctly detect if CPU_BASED_TSC_OFFSET and CPU_BASED2_RDTSCP would
be supported in primary and secondary processor-based VM-execution
controls. Enable RDTSCP in secondary processor controls if RDTSCP
support is indicated in CPUID.
Signed-off-by: Cameron Esfahani <dirty@apple.com>
Message-Id: <20220214185605.28087-7-f4bug@amsat.org>
Tested-by: Silvio Moioli <moio@suse.com>
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1011
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Inline these macros into the only two callers.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220628111701.677216-9-richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
With semihosting_get_arg, we already have a check vs argc, so
there's no point replicating it -- just check the result vs NULL.
Merge copy_argn_to_target into its caller.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220628111701.677216-8-richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Always log the assert locally. Do not report_fault, but
instead include the fact of the fault in the assertion.
Don't bother freeing allocated strings before the abort().
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220628111701.677216-6-richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Use semihost_sys_write and/or qemu_semihosting_console_write
for implementing plog. When using gdbstub, copy the temp
string below the stack so that gdb has a guest address from
which to perform the log.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220628111701.677216-5-richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
This separates guest file descriptors from host file descriptors,
and utilizes shared infrastructure for integration with gdbstub.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220628111701.677216-4-richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
We don't implement it with _WIN32 hosts, and the syscall
is missing from the gdb remote file i/o interface.
Since we can't implement it universally, drop it.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220628111701.677216-3-richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
The UHI specification does not have an EFAULT value,
and further specifies that "undefined UHI operations
should not return control to the target".
So, log the error and abort.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220628111701.677216-2-richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
This patch introduces Octeon-specific decoder and implements
check-bit-and-jump instructions.
Signed-off-by: Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <165572672705.167724.16667636081912075906.stgit@pasha-ThinkPad-X280>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
This patch adds decodetree for Cavium Octeon extension and
an instruction set extension flag for using it in CPU models.
Signed-off-by: Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <165572672162.167724.13656301229517693806.stgit@pasha-ThinkPad-X280>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Enable SME, TPIDR2_EL0, and FA64 if supported by the cpu.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-45-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
There's no reason to set CPACR_EL1.ZEN if SVE is disabled.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-44-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Note that SME remains effectively disabled for user-only,
because we do not yet set CPACR_EL1.SMEN. This needs to
wait until the kernel ABI is implemented.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-33-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We can handle both exception entry and exception return by
hooking into aarch64_sve_change_el.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-32-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This is an SVE instruction that operates using the SVE vector
length but is present only if SME is implemented.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-31-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This is an SVE instruction that operates using the SVE vector
length but is present only if SME is implemented.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-30-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This is an SVE instruction that operates using the SVE vector
length but is present only if SME is implemented.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-29-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This is SMOPA, SUMOPA, USMOPA_s, UMOPA, for both Int8 and Int16.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-28-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-27-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-26-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-25-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-24-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We can reuse the SVE functions for LDR and STR, passing in the
base of the ZA vector and a zero offset.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-23-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Add a TCGv_ptr base argument, which will be cpu_env for SVE.
We will reuse this for SME save and restore array insns.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-22-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We cannot reuse the SVE functions for LD[1-4] and ST[1-4],
because those functions accept only a Zreg register number.
For SME, we want to pass a pointer into ZA storage.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-21-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We can reuse the SVE functions for implementing moves to/from
horizontal tile slices, but we need new ones for moves to/from
vertical tile slices.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-20-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-19-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
These SME instructions are nominally within the SVE decode space,
so we add them to sve.decode and translate-sve.c.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-18-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The pseudocode for CheckSVEEnabled gains a check for Streaming
SVE mode, and for SME present but SVE absent.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-17-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
These functions will be used to verify that the cpu
is in the correct state for a given instruction.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-16-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Mark these as non-streaming instructions, which should trap
if full a64 support is not enabled in streaming mode.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-15-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Mark these as non-streaming instructions, which should trap
if full a64 support is not enabled in streaming mode.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-14-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Mark these as non-streaming instructions, which should trap if full
a64 support is not enabled in streaming mode. In this case, introduce
PRF_ns (prefetch non-streaming) to handle the checks.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-13-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Mark these as non-streaming instructions, which should trap
if full a64 support is not enabled in streaming mode.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-12-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Mark these as non-streaming instructions, which should trap
if full a64 support is not enabled in streaming mode.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-11-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Mark these as non-streaming instructions, which should trap
if full a64 support is not enabled in streaming mode.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-10-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Mark these as non-streaming instructions, which should trap
if full a64 support is not enabled in streaming mode.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-9-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Mark these as non-streaming instructions, which should trap
if full a64 support is not enabled in streaming mode.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-8-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Mark these as non-streaming instructions, which should trap
if full a64 support is not enabled in streaming mode.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Mark these as non-streaming instructions, which should trap
if full a64 support is not enabled in streaming mode.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Mark ADR as a non-streaming instruction, which should trap
if full a64 support is not enabled in streaming mode.
Removing entries from sme-fa64.decode is an easy way to see
what remains to be done.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This new behaviour is in the ARM pseudocode function
AArch64.CheckFPAdvSIMDEnabled, which applies to AArch32
via AArch32.CheckAdvSIMDOrFPEnabled when the EL to which
the trap would be delivered is in AArch64 mode.
Given that ARMv9 drops support for AArch32 outside EL0, the trap EL
detection ought to be trivially true, but the pseudocode still contains
a number of conditions, and QEMU has not yet committed to dropping A32
support for EL[12] when v9 features are present.
Since the computation of SME_TRAP_NONSTREAMING is necessarily different
for the two modes, we might as well preserve bits within TBFLAG_ANY and
allocate separate bits within TBFLAG_A32 and TBFLAG_A64 instead.
Note that DDI0616A.a has typos for bits [22:21] of LD1RO in the table
of instructions illegal in streaming mode.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This includes the build rules for the decoder, and the
new file for translation, but excludes any instructions.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Dump SVCR, plus use the correct access check for Streaming Mode.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Merge tag 'pull-request-2022-07-07' of https://gitlab.com/thuth/qemu into staging
* Check validity of the address in the SET PREFIX instruction
* Fix booting from devices that use 4k sectors, but are not like DASDs
* Re-evaluate pending interrupts after EXECUTE of certain instructions
* tag 'pull-request-2022-07-07' of https://gitlab.com/thuth/qemu:
target/s390x: Exit tb after executing ex_value
target/s390x: Remove DISAS_PC_STALE_NOCHAIN
target/s390x: Remove DISAS_PC_STALE
target/s390x: Remove DISAS_GOTO_TB
pc-bios/s390-ccw: Update the s390-ccw bios binaries with the virtio-blk fixes
pc-bios/s390-ccw/netboot.mak: Ignore Clang's warnings about GNU extensions
pc-bios/s390-ccw/virtio: Remove "extern" keyword from prototypes
pc-bios/s390-ccw/virtio-blkdev: Request the right feature bits
pc-bios/s390-ccw: Split virtio-scsi code from virtio_blk_setup_device()
pc-bios/s390-ccw/virtio: Beautify the code for reading virtqueue configuration
pc-bios/s390-ccw/virtio: Read device config after feature negotiation
pc-bios/s390-ccw/virtio: Set missing status bits while initializing
pc-bios/s390-ccw/virtio-blkdev: Remove virtio_assume_scsi()
pc-bios/s390-ccw/virtio-blkdev: Simplify/fix virtio_ipl_disk_is_valid()
pc-bios/s390-ccw/bootmap: Improve the guessing logic in zipl_load_vblk()
pc-bios/s390-ccw/virtio: Introduce a macro for the DASD block size
pc-bios/s390-ccw: Add a proper prototype for main()
target/s390x/tcg: SPX: check validity of new prefix
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
In commit 39a1fd2528 we fixed a bug in the handling of LPAE block
descriptors where we weren't correctly zeroing out some RES0 bits.
However this fix has a bug because the calculation of the mask is
done at the wrong width: in
descaddr &= ~(page_size - 1);
page_size is a target_ulong, so in the 'qemu-system-arm' binary it is
only 32 bits, and the effect is that we always zero out the top 32
bits of the calculated address. Fix the calculation by forcing the
mask to be calculated with the same type as descaddr.
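As a hedged, standalone illustration of the truncation (not the actual QEMU
hunk; the constants are invented for the demo):
    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef uint32_t target_ulong;   /* as in a qemu-system-arm build */

    int main(void)
    {
        uint64_t descaddr = 0x1234ff000ULL;     /* output address above 4GB */
        target_ulong page_size = 0x200000;      /* 2MB block */
        uint64_t buggy = descaddr & ~(page_size - 1);           /* mask is 32-bit: clears bits [63:32] */
        uint64_t fixed = descaddr & ~(uint64_t)(page_size - 1); /* mask matches descaddr's width */
        printf("buggy=0x%" PRIx64 " fixed=0x%" PRIx64 "\n", buggy, fixed);
        return 0;
    }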
This only affects 32-bit CPUs which support LPAE (e.g. cortex-a15)
when used on board models which put RAM or devices above the 4GB
mark and when the 'qemu-system-arm' executable is being used.
It was also masked in 7.0 by the main bug reported in
https://gitlab.com/qemu-project/qemu/-/issues/1078 where the
virt board incorrectly does not enable 'highmem' for 32-bit CPUs.
The workaround is to use 'qemu-system-aarch64' with the same
command line.
Reported-by: He Zhe <zhe.he@windriver.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220627134620.3190252-1-peter.maydell@linaro.org
Fixes: 39a1fd2528 ("target/arm: Fix handling of LPAE block descriptors")
Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The architecture defines the OS DoubleLock as a register which
(similarly to the OS Lock) suppresses debug events for use in CPU
powerdown sequences. This functionality is required in Arm v7 and
v8.0; from v8.2 it becomes optional and in v9 it must not be
implemented.
Currently in QEMU we implement the OSDLR_EL1 register as a NOP. This
is wrong both for the "feature implemented" and the "feature not
implemented" cases: if the feature is implemented then the DLK bit
should read as written and cause suppression of debug exceptions, and
if it is not implemented then the bit must be RAZ/WI.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Starting with v7 of the debug architecture, there are three extra
ID registers that add information on top of that provided in
DBGDIDR. These are DBGDEVID, DBGDEVID1 and DBGDEVID2. In the
v7 debug architecture, DBGDEVID is optional, present only if
DBGDIDR.DEVID_imp is set. In v7.1 all three must be present.
Implement the missing registers. Note that we only need to set the
values in the ARMISARegisters struct for the CPUs Cortex-A7, A15,
A53, A57 and A72 (plus the 32-bit 'max' which uses the Cortex-A53
values): earlier CPUs didn't implement v7 of the architecture, and
our other 64-bit CPUs (Cortex-A76, Neoverse-N1 and A64fx) don't have
AArch32 support at EL1.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220630194116.3438513-5-peter.maydell@linaro.org
The "OS Lock" in the Arm debug architecture is a way for software
to suppress debug exceptions while it is trying to power down
a CPU and save the state of the breakpoint and watchpoint
registers. In QEMU we implemented the support for writing
the OS Lock bit via OSLAR_EL1 and reading it via OSLSR_EL1,
but didn't implement the actual behaviour.
The required behaviour with the OS Lock set is:
* debug exceptions (apart from BKPT insns) are suppressed
* some MDSCR_EL1 bits allow write access to the corresponding
EDSCR external debug status register that they shadow
(we can ignore this because we don't implement external debug)
* similarly with the OSECCR_EL1 which shadows the EDECCR
(but we don't implement OSECCR_EL1 anyway)
Implement the missing behaviour of suppressing debug
exceptions.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220630194116.3438513-4-peter.maydell@linaro.org
The target/arm/helper.c file is very long and is a grabbag of all
kinds of functionality. We have already a debug_helper.c which has
code for implementing architectural debug. Move the code which
defines the debug-related system registers out to this file also.
This affects the define_debug_regs() function and the various
functions and arrays which are used only by it.
The functions raw_write() and arm_mdcr_el2_eff() and
define_debug_regs() now need to be global rather than local to
helper.c; everything else is pure code movement.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220630194116.3438513-3-peter.maydell@linaro.org
Before moving debug system register helper functions to a
different file, fix the code style issues (mostly block
comment syntax) so checkpatch doesn't complain about the
code-motion patch.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220630194116.3438513-2-peter.maydell@linaro.org
Fixes a bug where we were not honoring MTE from user-only
SVE. Copy the user-only MTE logic from allocation_tag_mem
into sve_probe_page.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The comment was correct, but the test was not:
disable mte if tagged is *not* set.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
When EXECUTE sets ex_value to interrupt the constructed instruction,
we implicitly disable interrupts so that the value is not corrupted.
Exit to the main loop after execution, so that we re-evaluate any
pending interrupts.
Reported-by: Sven Schnelle <svens@linux.ibm.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220702060228.420454-5-richard.henderson@linaro.org>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Replace this with a flag: exit_to_mainloop.
We can now control the exit for each of DISAS_TOO_MANY,
DISAS_PC_UPDATED, and DISAS_PC_CC_UPDATED, and fold in
the check for PER.
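A hedged sketch of the shape this takes at tb_stop time (the field and the
exact call sites are assumptions, not the real hunk):
    /* assumed: dc->exit_to_mainloop set by PER or by EXECUTE handling */
    if (dc->exit_to_mainloop) {
        tcg_gen_exit_tb(NULL, 0);        /* return to the main loop so pending
                                            interrupts are re-evaluated */
    } else {
        tcg_gen_lookup_and_goto_ptr();   /* normal indirect TB chaining */
    }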
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220702060228.420454-4-richard.henderson@linaro.org>
Signed-off-by: Thomas Huth <thuth@redhat.com>
There is nothing to distinguish this from DISAS_TOO_MANY.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220702060228.420454-3-richard.henderson@linaro.org>
Signed-off-by: Thomas Huth <thuth@redhat.com>
There is nothing to distinguish this from DISAS_NORETURN.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220702060228.420454-2-richard.henderson@linaro.org>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Commit 80d11f4467 ("Add definitions for Freescale PowerPC implementations")
changed core type of MPC8555 and MPC8560 from e500v1 to e500v2.
But both MPC8555 and MPC8560 have just e500v1 cores; there are no features
of e500v2 cores. This can be verified by reading the NXP documentation:
https://www.nxp.com/docs/en/data-sheet/MPC8555EEC.pdf
https://www.nxp.com/docs/en/data-sheet/MPC8560EC.pdf
https://www.nxp.com/docs/en/reference-manual/MPC8555ERM.pdf
https://www.nxp.com/docs/en/reference-manual/MPC8560RM.pdf
Therefore fix core type of MPC8555 and MPC8560 back to e500v1.
Just for completeness, here is a list of all Motorola/Freescale/NXP
processors which were released and have e500v1 or e500v2 cores:
e500v1: MPC8540 MPC8541 MPC8555 MPC8560
e500v2: BSC9131 BSC9132
C291 C292 C293
MPC8533 MPC8535 MPC8536 MPC8543 MPC8544 MPC8545 MPC8547
MPC8548 MPC8567 MPC8568 MPC8569 MPC8572
P1010 P1011 P1012 P1013 P1014 P1015 P1016 P1020 P1021
P1022 P1024 P1025 P2010 P2020
Sorted alphabetically; not by release date / generation / feature set.
All this is from public information available on NXP website.
It seems that QEMU supports only a subset of the MPC85xx processors.
Historically, processors with e500 cores carry the mpc85xx family codename,
and a lot of software places them in an mpc85xx architecture subdirectory.
Note that GCC uses -mcpu=8540 option for specifying e500v1 core and
-mcpu=8548 option for specifying e500v2 core.
So sometimes (mpc)8540 is alias for e500v1 and (mpc)8548 is alias for
e500v2.
Fixes: 80d11f4467 ("Add definitions for Freescale PowerPC implementations")
Signed-off-by: Pali Rohár <pali@kernel.org>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20220703195029.23793-1-pali@kernel.org>
[danielhb: added more context in the commit msg]
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
QEMU emulates a *lot* of PowerPC-based machines - having a CPU
that is named "default" and cannot be used with most of those
machines sounds just wrong. Thus let's remove this old and confusing
alias now.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20220705151030.662140-1-thuth@redhat.com>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
All ppc CPUs represent hardware that exists in the real world, i.e.: we
do not have a "max" CPU with all possible emulated features enabled.
Return the default CPU type for the machine because that has a greater
chance of being useful as the "max" CPU.
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1038
Cc: Cédric Le Goater <clg@kaod.org>
Cc: Daniel Henrique Barboza <danielhb413@gmail.com>
Cc: Daniel P. Berrangé <berrange@redhat.com>
Cc: Greg Kurz <groug@kaod.org>
Cc: Matheus K. Ferst <matheus.ferst@eldorado.org.br>
Cc: Thomas Huth <thuth@redhat.com>
Signed-off-by: Murilo Opsfelder Araujo <muriloo@linux.ibm.com>
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Message-Id: <20220628205513.81917-1-muriloo@linux.ibm.com>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Implements the Convert Declets To Binary Coded Decimal instruction.
Since libdecnumber doesn't expose the methods for direct conversion
(decDigitsFromDPD, DPD2BCD, etc), a positive decimal32 with zero
exponent is used as an intermediate value to convert the declets.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Signed-off-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Message-Id: <20220629162904.105060-12-victor.colombo@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Implements the Convert Binary Coded Decimal To Declets instruction.
Since libdecnumber doesn't expose the methods for direct conversion
(decDigitsToDPD, BCD2DPD, etc.), the BCD values are converted to
decimal32 format, from which the declets are extracted.
Where the behavior is undefined, we try to match the result observed in
a POWER9 DD2.3.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Signed-off-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Message-Id: <20220629162904.105060-11-victor.colombo@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Adds an insns_flags2 for the BCD assist instructions introduced in
Power ISA 2.06. These instructions are not listed in the manuals for
e5500[1] and e6500[2], so the flag is only added for POWER7/8/9/10
models.
[1] https://www.nxp.com/files-static/32bit/doc/ref_manual/EREF_RM.pdf
[2] https://www.nxp.com/docs/en/reference-manual/E6500RM.pdf
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Signed-off-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220629162904.105060-9-victor.colombo@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Some lines in insn32.decode have inconsistent alignment when compared
to others.
Fix this by changing the alignment of some lines, making it more
consistent throughout the file.
Signed-off-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220629162904.105060-2-victor.colombo@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
It keeps repeating; move it to the header. This uses __builtin_ffsll() so
that the macros can be used inside other #define directives.
This is not using the QEMU's FIELD macros as this would require changing
all such macros found in skiboot (the PPC PowerNV firmware).
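A hedged sketch of the macro style being moved (skiboot-style names; the
exact definitions in the header may differ):
    #define PPC_BIT(bit)        (0x8000000000000000ULL >> (bit))
    #define PPC_BITMASK(bs, be) ((PPC_BIT(bs) - PPC_BIT(be)) + PPC_BIT(bs))
    /* __builtin_ffsll() keeps these usable inside other #define directives */
    #define GETFIELD(m, v)      (((v) & (m)) >> (__builtin_ffsll(m) - 1))
    #define SETFIELD(m, v, val) \
        (((v) & ~(m)) | ((((uint64_t)(val)) << (__builtin_ffsll(m) - 1)) & (m)))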
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20220628080544.1509428-1-aik@ozlabs.ru>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
And also move the insn to decodetree and remove the now unused
avr_qw_not, avr_qw_cmpu, and avr_qw_add methods.
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Reviewed-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Message-Id: <20220606150037.338931-8-matheus.ferst@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
And also move the insns to decodetree.
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Reviewed-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Message-Id: <20220606150037.338931-7-matheus.ferst@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
And also move the insn to decodetree.
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Reviewed-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Message-Id: <20220606150037.338931-6-matheus.ferst@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
And also move the insn to decodetree.
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Reviewed-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Message-Id: <20220606150037.338931-5-matheus.ferst@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
And also move the insns to decodetree and remove the now unused
avr_qw_addc method.
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Reviewed-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Message-Id: <20220606150037.338931-4-matheus.ferst@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
And also move the insn to decodetree.
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Reviewed-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Message-Id: <20220606150037.338931-3-matheus.ferst@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Also drop VECTOR_FOR_INORDER_I usage since there is no need to access
the elements in any particular order, and move the instruction to
decodetree.
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Reviewed-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Message-Id: <20220606150037.338931-2-matheus.ferst@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
FPSCR_* bit values in QEMU are in the 'inverted' order from what Power
ISA defines (e.g. FPSCR.FI is bit 46 but is defined as 17 in cpu.h).
Now that the PPC_BIT_NR macro was introduced to fix this situation for the
MSR bits, we can use it for the FPSCR bits too.
Also, adjust the comments to make them fit in 80 columns.
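For illustration, a hedged sketch of the conversion (using the FI example
from above):
    #define PPC_BIT_NR(bit)  (63 - (bit))        /* ISA numbers bit 0 as the MSB */
    #define FPSCR_FI         PPC_BIT_NR(46)      /* ISA bit 46 -> QEMU bit 17 */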
Signed-off-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220622193203.127698-1-victor.colombo@eldorado.org.br>
[danielhb: fixed 'exceptio' typo in target/ppc/cpu.h]
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
According to the architecture, SET PREFIX must try to access the new
prefix area and recognize an addressing exception if the area is not
accessible.
For QEMU, this check prevents a crash in cpu_map_lowcore after an
inaccessible prefix area has been set.
Signed-off-by: Janis Schoetterl-Glausch <scgl@linux.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Message-Id: <20220630094340.3646279-1-scgl@linux.ibm.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Merge tag 'pull-request-2022-07-05' of https://gitlab.com/thuth/qemu into staging
* Fix memory leak in test-cutils
* Fix edk2/opensbi jobs to not run automatically by accident
* Improve timings in the migration qtest
* Remove libvixl disassembler
* Add ukrainian translation
* Require a recent version of libpng
* tag 'pull-request-2022-07-05' of https://gitlab.com/thuth/qemu:
include/qemu/host-utils: Remove unused code in the *_overflow wrappers
meson.build: Require a recent version of libpng
po: add ukrainian translation
disas: Remove libvixl disassembler
tests: use consistent bandwidth/downtime limits in migration tests
tests: increase migration test converge downtime to 30 seconds
tests: wait for migration completion before looking for STOP event
tests: wait max 120 seconds for migration test status changes
gitlab-ci: Extend timeout for ubuntu-20.04-s390x-all to 75m
gitlab: honour QEMU_CI variable in edk2/opensbi jobs
gitlab: tweak comments in edk2/opensbi jobs
gitlab: normalize indentation in edk2/opensbi rules
tests/fp: Do not build softfloat3 tests if TCG is disabled
tests: fix test-cutils leaks
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
We should make sure that the TLB is clean when the CPU is reset.
Signed-off-by: Song Gao <gaosong@loongson.cn>
Message-Id: <20220705070950.2364243-1-gaosong@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The disassembly via capstone should be superior to our old vixl
sources nowadays, so let's finally cut this old disassembler out
of the QEMU source tree.
Message-Id: <20220603164249.112459-1-thuth@redhat.com>
Tested-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Thomas Huth <thuth@redhat.com>
There is an error like this when running a Linux kernel:
tcg_handle_interrupt: assertion failed: (qemu_mutex_iothread_locked()).
calling stack:
#0 in raise () at /lib64/libc.so.6
#1 in abort () at /lib64/libc.so.6
#2 in g_assertion_message_expr.cold () at /lib64/libglib-2.0.so.0
#3 in g_assertion_message_expr () at /lib64/libglib-2.0.so.0
#4 in tcg_handle_interrupt (cpu=0x632000030800, mask=2) at ../accel/tcg/tcg-accel-ops.c:79
#5 in cpu_interrupt (cpu=0x632000030800, mask=2) at ../softmmu/cpus.c:248
#6 in loongarch_cpu_set_irq (opaque=0x632000030800, irq=11, level=0)
at ../target/loongarch/cpu.c:100
#7 in helper_csrwr_ticlr (env=0x632000039440, val=1) at ../target/loongarch/csr_helper.c:85
#8 in code_gen_buffer ()
#9 in cpu_tb_exec (cpu=0x632000030800, itb=0x7fff946ac280, tb_exit=0x7ffe4fcb6c30)
at ../accel/tcg/cpu-exec.c:358
Hold the iothread mutex around loongarch_cpu_set_irq() in csrwr_ticlr() to
fix the bug.
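A hedged sketch of the fix in helper_csrwr_ticlr() (the IRQ constant and
the surrounding code are assumptions):
    qemu_mutex_lock_iothread();
    loongarch_cpu_set_irq(env_cpu(env), IRQ_TIMER, 0);  /* ends up in cpu_interrupt() */
    qemu_mutex_unlock_iothread();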
Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220701093407.2150607-10-yangxiaojuan@loongson.cn>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
According to the LoongArch CSR manual, the VS field (bits 18:16) of the
ECFG register means that the number of instructions between exception
entries is 2^VS.
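A hedged sketch of how the spacing could be applied when computing the
vectored entry point (field, register and variable names are assumptions):
    unsigned vs = FIELD_EX64(env->CSR_ECFG, CSR_ECFG, VS);  /* bits 18:16 */
    unsigned vec_size = vs ? (1U << vs) * 4 : 0;            /* 2^VS insns, 4 bytes each */
    env->pc = env->CSR_EENTRY + ecode * vec_size;           /* ecode: exception index */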
Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220701093407.2150607-9-yangxiaojuan@loongson.cn>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Some functions and structure members differ between softmmu mode and
user mode, so we need to adjust them to support user mode.
Signed-off-by: Song Gao <gaosong@loongson.cn>
Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220624031049.1716097-12-gaosong@loongson.cn>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Song Gao <gaosong@loongson.cn>
Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220624031049.1716097-11-gaosong@loongson.cn>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Raise EXCCODE_BCE instead of EXCCODE_ADEM for helper_asrtle_d/asrtgt_d.
Signed-off-by: Song Gao <gaosong@loongson.cn>
Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220624031049.1716097-10-gaosong@loongson.cn>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
loongarch_cpu_do_interrupt() should update CSR_BADV for some EXCCODE.
Signed-off-by: Song Gao <gaosong@loongson.cn>
Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220624031049.1716097-9-gaosong@loongson.cn>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
We can use CSR_BADV to replace badaddr.
Signed-off-by: Song Gao <gaosong@loongson.cn>
Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220624031049.1716097-8-gaosong@loongson.cn>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The latest AIA draft v0.3.0 defines a relatively simpler scheme for
default priority assignments where:
1) local interrupts 24 to 31 and 48 to 63 are reserved for custom use
and have implementation specific default priority.
2) remaining local interrupts 0 to 23 and 32 to 47 have a recommended
(not mandatory) priority assignments.
We update the default priority table and hviprio mapping as-per above.
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20220616031543.953776-3-apatel@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Based on architecture review committee feedback, the [m|s|vs]seteienum,
[m|s|vs]clreienum, [m|s|vs]seteipnum, and [m|s|vs]clreipnum CSRs are
removed in the latest AIA draft v0.3.0 specification.
(Refer, https://github.com/riscv/riscv-aia/releases/tag/0.3.0-draft.31)
These CSRs were mostly for software convenience and software can always
use [m|s|vs]iselect and [m|s|vs]ireg CSRs to update the IMSIC interrupt
file bits.
We update the IMSIC CSR emulation as-per above to match the latest AIA
draft specification.
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20220616031543.953776-2-apatel@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Set the minimum priv spec version for mcountinhibit to v1.11 so that it
is not available for v1.10 (or lower).
Fixes: eab4776b2bad ("target/riscv: Add support for hpmcounters/hpmevents")
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20220628101737.786681-3-apatel@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
riscv_cpu_realize() sets the priv spec version to v1.12 when
"env->priv_ver == 0" (i.e. the default v1.10) because the enum
value of priv spec v1.10 is zero.
Due to above issue, the sifive_u machine will see priv spec v1.12
instead of priv spec v1.10.
To fix this issue, we set the latest priv spec version (i.e. v1.12)
for the base rv64/rv32 cpu, and riscv_cpu_realize() will override the
priv spec version only when "cpu->cfg.priv_spec != NULL".
Fixes: 7100fe6c24 ("target/riscv: Enable privileged spec version 1.12")
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Atish Patra <atishp@rivosinc.com>
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Message-Id: <20220611080107.391981-2-apatel@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The Ibex CPU supports version 1.11 of the priv spec [1], so let's
correct that in QEMU as well.
1: https://ibex-core.readthedocs.io/en/latest/01_overview/compliance.html
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Message-Id: <20220629233102.275181-3-alistair.francis@opensource.wdc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
There is nothing in the RISC-V spec that mandates version 1.12 is
required for ePMP and there is currently hardware [1] that implements
ePMP (a draft version though) with the 1.11 priv spec.
1: https://ibex-core.readthedocs.io/en/latest/01_overview/compliance.html
Fixes: a4b2fa4331 ("target/riscv: Introduce privilege version field in the CSR ops.")
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Message-Id: <20220629233102.275181-2-alistair.francis@opensource.wdc.com>
mcycle/minstret are actually WARL registers and can be written with any
given value. With the SBI PMU extension, they will be used to store an
initial value provided by the supervisor OS. QEMU also needs to prohibit
the counter increment if mcountinhibit is set.
Support mcycle/minstret through generic counter infrastructure.
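A hedged sketch of the inhibit check (structure and field names are
assumptions):
    /* only advance the counter when its mcountinhibit bit is clear */
    if (!(env->mcountinhibit & BIT(ctr_idx))) {
        counter->mhpmcounter_val += delta;
    }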
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Atish Patra <atish.patra@wdc.com>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
Message-Id: <20220620231603.2547260-8-atishp@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
With the SBI PMU extension, users can use any of the available hpmcounters
to track perf events based on the value written to the mhpmevent CSR.
Add read/write functionality for these CSRs.
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Signed-off-by: Atish Patra <atish.patra@wdc.com>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
Message-Id: <20220620231603.2547260-7-atishp@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
As per the privilege specification v1.11, mcountinhibit allows starting and
stopping a PMU counter selectively.
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Atish Patra <atish.patra@wdc.com>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
Message-Id: <20220620231603.2547260-6-atishp@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The RISC-V privilege specification provides the flexibility to implement
any number of the 29 programmable counters. However, QEMU implements all
of them.
Make this configurable through the pmu config parameter, which now indicates
how many programmable counters the CPU should implement.
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Atish Patra <atish.patra@wdc.com>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
Message-Id: <20220620231603.2547260-5-atishp@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The PMU counters are supported via cpu config "Counters" which doesn't
indicate the correct purpose of those counters.
Rename the config property to pmu to indicate that these counters
are performance monitoring counters. This aligns with cpu options for
ARM architecture as well.
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Atish Patra <atish.patra@wdc.com>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
Message-Id: <20220620231603.2547260-4-atishp@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Currently, the predicate function for PMU-related CSRs only works if
virtualization is enabled. It also does not check mcounteren bits before
cycle/minstret/hpmcounterx access.
Support supervisor mode access in the predicate function as well.
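A hedged sketch of the added S-mode check (names are assumptions):
    /* an S-mode access also requires the matching mcounteren bit */
    if (env->priv == PRV_S && !get_field(env->mcounteren, BIT(ctr_index))) {
        return RISCV_EXCP_ILLEGAL_INST;
    }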
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Signed-off-by: Atish Patra <atish.patra@wdc.com>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
Message-Id: <20220620231603.2547260-3-atishp@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The predicate function calculates the counter index incorrectly for
hpmcounterx. Fix the counter index to reflect the correct CSR number.
Fixes: e39a8320b0 ("target/riscv: Support the Virtual Instruction fault")
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Signed-off-by: Atish Patra <atish.patra@wdc.com>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
Message-Id: <20220620231603.2547260-2-atishp@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
For a TOR entry to match, the start address must be lower than the end
address. Normally this is always the case, but correct code might still
run into the following scenario:
Initial state:
pmpaddr3 = 0x2000 pmp3cfg = OFF
pmpaddr4 = 0x3000 pmp4cfg = TOR
Execution:
1. write 0x40ff to pmpaddr3
2. write 0x32ff to pmpaddr4
3. set pmp3cfg to NAPOT with a read-modify-write on pmpcfg0
4. set pmp4cfg to NAPOT with a read-modify-write on pmpcfg1
When (2) is emulated, a call to pmp_update_rule() creates a negative
range for pmp4 as pmp4cfg is still set to TOR. And when (3) is emulated,
a call to tlb_flush() is performed, causing pmp_get_tlb_size() to return
a very creatively large TLB size for pmp4. This, in turn, may result in
accesses to non-existent/uninitialized memory regions and a fault, so that
(4) ends up never being executed.
This is in m-mode with MPRV unset, meaning that unlocked PMP entries
should have no effect. Therefore such a behavior based on PMP content
is very unexpected.
Make sure no negative PMP range can be created, whether explicitly by
the emulated code or implicitly like the above.
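A hedged, standalone sketch of the idea (not the QEMU hunk): collapse an
inverted TOR range so it can never cover a "negative" region:
    static void tor_range(uint64_t prev_addr, uint64_t this_addr,
                          uint64_t *sa, uint64_t *ea)
    {
        *sa = prev_addr << 2;          /* pmpaddr registers hold address >> 2 */
        *ea = (this_addr << 2) - 1;
        if (*sa > *ea) {
            *sa = *ea = 0;             /* degenerate range instead of a huge one */
        }
    }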
Signed-off-by: Nicolas Pitre <nico@fluxnic.net>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <3oq0sqs1-67o0-145-5n1s-453o118804q@syhkavp.arg>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The set of instructions that require decode_save_opc for
unwinding is really fairly small -- only insns that can
raise ILLEGAL_INSN at runtime. This includes CSR, anything
that uses a *new* fp rounding mode, and many privileged insns.
Since unwind info is stored as the difference from the
previous insn, storing a 0 for most insns minimizes the
size of the unwind info.
Booting a Debian kernel image up to the missing-rootfs panic yields
- gen code size 22226819/1026886656
+ gen code size 21601907/1026886656
on 41k TranslationBlocks, a savings of 610kB or a bit less than 3%.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20220604231004.49990-4-richard.henderson@linaro.org>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The function doesn't set mtval, it sets badaddr. Move the set
of badaddr directly into gen_exception_inst_addr_mis and use
generate_exception.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20220604231004.49990-3-richard.henderson@linaro.org>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
While we set env->bins when unwinding for ILLEGAL_INST,
from e.g. csrrw, we weren't setting it for immediately
illegal instructions.
Add a testcase for mtval via both exception paths.
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1060
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20220604231004.49990-2-richard.henderson@linaro.org>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Commit 57c108b864 introduced gen_set_gpr(), which already contains
a check for whether the destination register is 'zero'. The checks in auipc
and lui are then redundant. This patch removes those checks.
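For reference, a hedged sketch of the check already done by gen_set_gpr()
(simplified; the real helper does more):
    static void gen_set_gpr(DisasContext *ctx, int reg_num, TCGv t)
    {
        if (reg_num != 0) {                       /* writes to x0/'zero' are dropped */
            tcg_gen_mov_tl(cpu_gpr[reg_num], t);
        }
    }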
Signed-off-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20220610165517.47517-1-victor.colombo@eldorado.org.br>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Semihosting is not enabled for nios2-linux-user.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reorg nios2_semi_return_* to gdb_syscall_complete_cb.
Use the 32-bit version normally, and the 64-bit version
for HOSTED_LSEEK.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
We don't implement it with _WIN32 hosts, and the syscalls
are missing from the gdb remote file i/o interface.
Since we can't implement them universally, drop them.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
From the Unified Hosting Interface, MD01069 Reference Manual,
version 1.1.6, 06 July 2015.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Within do_interrupt, we hold the iothread lock, which
is required for Chardev access for the console, and for
the round trip for use_gdb_syscalls().
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
While we had a call to do_m68k_semihosting in linux-user, it
wasn't actually reachable. We don't include DISAS_INSN(halt)
as an instruction except in system mode.
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reorg m68k_semi_return_* to gdb_syscall_complete_cb.
Use the 32-bit version normally, and the 64-bit version
for HOSTED_LSEEK.
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Change 'ret' to uint64_t. This resolves a FIXME in the
m68k and nios2 semihosting that we've lost data.
Change 'err' to int. There is nothing target-specific
about the width of the errno value.
Reviewed-by: Luc Michel <lmichel@kalray.eu>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Move the ARM and RISCV specific helpers into
their own header file.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Luc Michel <lmichel@kalray.eu>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
We have two copies of these structures, and require them
in semihosting/ going forward.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
There were 3 copies of these flags. Place them in the
file with gdb_do_syscall, with which they belong.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Perform the cleanup in the FIXME comment in common_semi_gdb_syscall.
Do not modify guest registers until the syscall is complete,
which in the gdbstub case is asynchronous.
In the synchronous non-gdbstub case, use common_semi_set_ret
to set the result. Merge set_swi_errno into common_semi_cb.
Rely on the latter for combined return value / errno setting.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
We have a subdirectory for semihosting; move this file out of exec.
Rename to emphasize the contents are a replacement for the functions
in linux-user/bsd-user uaccess.c.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
In machvirt_init we create a cpu but do not fully initialize it.
Thus the propagation of V7VE to LPAE has not been done, and we
compute the wrong value for some v7 cpus, e.g. cortex-a15.
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1078
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reported-by: He Zhe <zhe.he@windriver.com>
Message-id: 20220619001541.131672-3-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Move the code from hw/arm/virt.c that is supposed
to handle v7 into the one function.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reported-by: He Zhe <zhe.he@windriver.com>
Message-id: 20220619001541.131672-2-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We will need these functions in translate-sme.c.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220620175235.60881-21-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We need SVL separate from VL for RDSVL et al, as well as
ZA storage loads and stores, which do not require PSTATE.SM.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220620175235.60881-20-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
When Streaming SVE mode is enabled, the size is taken from
SMCR_ELx instead of ZCR_ELx. The format is shared, but the
set of vector lengths is not. Further, Streaming SVE does
not require any particular length to be supported.
Adjust sve_vqm1_for_el to pass the current value of PSTATE.SM
to the new function.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220620175235.60881-19-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Mirror the properties for SVE. The main difference is
that any arbitrary set of powers of 2 may be supported,
and not the stricter constraints that apply to SVE.
Include a property to control FEAT_SME_FA64, as failing
to restrict the runtime to the proper subset of insns
could be a major point for bugs.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220620175235.60881-18-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
These functions are not used outside cpu64.c,
so make them static.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220620175235.60881-17-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Drop the aa32-only inline fallbacks,
and just use a couple of ifdefs.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220620175235.60881-16-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Rename from cpu_arm_{get,set}_sve_default_vec_len,
and take the pointer to default_vq from opaque.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220620175235.60881-15-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Rename from cpu_arm_{get,set}_sve_vq, and take the
ARMVQMap as the opaque parameter.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220620175235.60881-14-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Pull the three sve_vq_* values into a structure.
This will be reused for SME.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220620175235.60881-13-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Keep all of the error messages together. This does mean that
when setting many sve length properties we'll only generate
one error, but we only really need one.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220620175235.60881-12-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
These two instructions are aliases of MSR (immediate).
Use the two helpers to properly implement svcr_write.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220620175235.60881-11-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Place this late in the resettable section of the structure,
to keep the most common element offsets from being > 64k.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220620175235.60881-10-richard.henderson@linaro.org
[PMM: expanded comment on zarray[] format]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
These are required to determine if various insns
are allowed to issue.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220620175235.60881-9-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Implement the streaming mode identification register, and the
two streaming priority registers. For QEMU, they are all RES0.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220620175235.60881-8-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
These cpregs control the streaming vector length and whether the
full a64 instruction set is allowed while in streaming mode.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220620175235.60881-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This cpreg is used to access two new bits of PSTATE
that are not visible via any other mechanism.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220620175235.60881-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This will be used for controlling access to SME cpregs.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220620175235.60881-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This will be used for raising various traps for SME.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220620175235.60881-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This is CheckSMEAccess, which is the basis for a set of
related tests for various SME cpregs and instructions.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220620175235.60881-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This register is part of SME, but isn't closely related to the
rest of the extension.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220620175235.60881-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Some features such as running in EL3 or running M profile code are
incompatible with virtualization as QEMU implements it today. To prevent
users from picking invalid configurations on other virt solutions like
Hvf, let's run the same checks there too.
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1073
Signed-off-by: Alexander Graf <agraf@csgraf.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220620192242.70573-2-agraf@csgraf.de
[PMM: Allow qtest accelerator too; tweak comment]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
CPUClass::memory_rw_debug() holds a callback for GDB memory access.
If not provided, cpu_memory_rw_debug() is used by the GDB stub.
Drop avr_cpu_memory_rw_debug() which does nothing special.
Signed-off-by: Bin Meng <bmeng.cn@gmail.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220322095004.70682-1-bmeng.cn@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The 'resume_as_sreset' attribute of a cpu is set when a thread is
entering a stop state on ppc books. It causes the thread to be
re-routed to vector 0x100 when woken up by an exception. So it must be
cleared on reset or a thread might be re-routed unexpectedly after a
reset, when it was not in a stop state and/or when the appropriate
exception handler isn't set up yet.
Using skiboot, it can be tested by resetting the system when it is
quiet and most threads are idle and in stop state.
After the reset occurs, skiboot elects a primary thread and all the
others wait in secondary_wait. The primary thread does all the system
initialization from main_cpu_entry() and at some point, the
decrementer interrupt starts ticking. The exception vector for the
decrementer interrupt is in place, so that shouldn't be a
problem. However, if that primary thread was in stop state prior to
the reset, and because the resume_as_sreset parameter is still set,
it is re-routed to exception vector 0x100. Which, at that time, is
still defined as the entry point for BML. So that primary thread
restarts as new and ends up being treated like any other secondary
thread. All threads are now waiting in secondary_wait.
It results in a full system hang with no message on the console, as
the uart hasn't been init'ed yet. It's actually not obvious to realise
what's happening without tracing reset (-d cpu_reset). The fix is
simply to clear the 'resume_as_sreset' attribute on reset.
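A minimal sketch of the fix, assuming it lives in the ppc cpu reset
path (the exact reset function is not spelled out here):

    /* Clear the flag on reset so a later wakeup is not re-routed
     * to the 0x100 system reset vector. */
    env->resume_as_sreset = false;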
Signed-off-by: Frederic Barrat <fbarrat@linux.ibm.com>
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20220617095222.612212-1-fbarrat@linux.ibm.com>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Commit c29018cc73 added an env->fpscr OR operation using a ternary
that checks if 'error' is not zero:
env->fpscr |= error ? FP_FEX : 0;
However, in the current body of do_fpscr_check_status(), 'error' is
guaranteed to be non-zero at that point. The result is that Coverity
is less than pleased:
Control flow issues (DEADCODE)
Execution cannot reach the expression "0ULL" inside this statement:
"env->fpscr |= (error ? 1073...".
Remove the ternary and always make env->fpscr |= FP_FEX.
Cc: Lucas Mateus Castro (alqotel) <lucas.araujo@eldorado.org.br>
Cc: Richard Henderson <richard.henderson@linaro.org>
Fixes: Coverity CID 1489442
Fixes: c29018cc73 ("target/ppc: Implemented xvf*ger*")
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Reviewed-by: Lucas Mateus Castro (alqotel) <lucas.araujo@eldorado.org.br>
Message-Id: <20220602191048.137511-1-danielhb413@gmail.com>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Coverity is not thrilled about the multiply operations being done in
ger_rank8() and ger_rank2(), giving an error like the following:
Integer handling issues (OVERFLOW_BEFORE_WIDEN)
Potentially overflowing expression "sextract32(a, 4 * i, 4) *
sextract32(b, 4 * i, 4)" with type "int" (32 bits, signed) is evaluated
using 32-bit arithmetic, and then used in a context that expects an
expression of type "int64_t" (64 bits, signed).
Fix both instances where this occurs by adding an int64_t cast to the
first operand, forcing the result to be 64-bit.
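A sketch of the kind of change applied (the accumulator name is
illustrative):

    /* before: the multiply is evaluated in 32-bit arithmetic and
     * only then widened to int64_t */
    psum += sextract32(a, 4 * i, 4) * sextract32(b, 4 * i, 4);

    /* after: cast the first operand so the multiply itself is 64-bit */
    psum += (int64_t)sextract32(a, 4 * i, 4) * sextract32(b, 4 * i, 4);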
Fixes: Coverity CID 1489444, 1489443
Fixes: 345531533f ("target/ppc: Implemented xvi*ger* instructions")
Cc: Lucas Mateus Castro (alqotel) <lucas.araujo@eldorado.org.br>
Cc: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Lucas Mateus Castro (alqotel) <lucas.araujo@eldorado.org.br>
Message-Id: <20220602141449.118173-1-danielhb413@gmail.com>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
The extract64 arguments are not endian dependent as they are only used
for bitwise operations. The current behavior in little-endian hosts is
correct; since the indexes in VRB are in PowerISA-ordering, we should
always invert the value before calling extract64. Also, using the VsrD
macro, we can have a single EXTRACT_BIT definition for big and
little-endian with the correct behavior.
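A sketch of what such a single definition could look like (the exact
macro shape in the tree may differ):

    /* PowerISA bit i of a vector register: index the doubleword with
     * VsrD() and flip the bit number, since extract64() counts from
     * the least significant bit. */
    #define EXTRACT_BIT(avr, i) \
        extract64((avr)->VsrD((i) / 64), 63 - ((i) % 64), 1)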
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220601125355.1266165-1-matheus.ferst@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Implement the following PowerISA v3.1 instructions:
vmodsw: Vector Modulo Signed Word
vmoduw: Vector Modulo Unsigned Word
vmodsd: Vector Modulo Signed Doubleword
vmodud: Vector Modulo Unsigned Doubleword
Signed-off-by: Lucas Mateus Castro (alqotel) <lucas.araujo@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220525134954.85056-8-lucas.araujo@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Implement the following PowerISA v3.1 instructions:
vdivesw: Vector Divide Extended Signed Word
vdiveuw: Vector Divide Extended Unsigned Word
Signed-off-by: Lucas Mateus Castro (alqotel) <lucas.araujo@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220525134954.85056-4-lucas.araujo@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Implement the following PowerISA v3.1 instructions:
vdivsw: Vector Divide Signed Word
vdivuw: Vector Divide Unsigned Word
vdivsd: Vector Divide Signed Doubleword
vdivud: Vector Divide Unsigned Doubleword
Signed-off-by: Lucas Mateus Castro (alqotel) <lucas.araujo@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220525134954.85056-2-lucas.araujo@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Switch statements for the code segments that handle nanoMIPS
instruction pools P.LL, P.SC, P.SHIFT, P.LS.S1, P.LS.E0, PP.LSXS
do not have a proper default case, resulting in no reserved
instruction exception being generated for certain illegal opcodes.
Fix this by adding default cases for these switch statements that
trigger reserved instruction exception.
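Each affected switch gains a default case along these lines (a sketch):

    default:
        /* Unknown opcode within the pool: raise Reserved Instruction. */
        gen_reserved_instruction(ctx);
        break;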
Signed-off-by: Stefan Pejic <stefan.pejic@syrmia.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220504110403.613168-7-stefan.pejic@syrmia.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
nanoMIPS ISA does not support unaligned memory access. Adjust
DisasContext's default_tcg_memop_mask to reflect this.
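A sketch of the adjustment, assuming it sits in the nanoMIPS
DisasContext setup:

    /* nanoMIPS requires natural alignment for memory accesses */
    ctx->default_tcg_memop_mask = MO_ALIGN;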
Signed-off-by: Dragan Mladjenovic <dragan.mladjenovic@syrmia.com>
Signed-off-by: Stefan Pejic <stefan.pejic@syrmia.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220504110403.613168-6-stefan.pejic@syrmia.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
If both rs and rt are the same register, the nanoMIPS instruction
BNEC[32] rs, rt, address is equivalent to NOP (branch is not taken and
there is no delay slot). This commit provides such behavior. Without
this commit, this scenario results in incorrect behavior.
Signed-off-by: Dragan Mladjenovic <dragan.mladjenovic@syrmia.com>
Signed-off-by: Stefan Pejic <stefan.pejic@syrmia.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220504110403.613168-5-stefan.pejic@syrmia.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
There are currently two problems related to the emulation of the
instruction BPOSGE32C.
The nanoMIPS instruction BPOSGE32C belongs to DSP R3 instructions
(actually, as of now, it is the only instruction of DSP R3). The
presence of DSP R3 instructions in QEMU is indicated by the flag
MIPS_HFLAG_DSP_R3 (0x20000000). This flag is currently being properly
set in CPUMIPSState's hflags (for example, for I7200 nanoMIPS CPU).
However, it is not propagated to DisasContext's hflags, since the flag
MIPS_HFLAG_DSP_R3 is not set in MIPS_HFLAG_TMASK (while similar flags
MIPS_HFLAG_DSP_R2 and MIPS_HFLAG_DSP are set in this mask, so
check_dsp_r2() and check_dsp() work fine). This means the function
check_dsp_r3() currently does not work properly, and the emulation of
BPOSGE32C cannot work properly either.
Change MIPS_HFLAG_TMASK from 0x1F5807FF to 0x3F5807FF (logical OR
with 0x20000000) to fix this.
Additionally, check_cp1_enabled() is currently incorrectly called
while emulating BPOSGE32C. BPOSGE32C is in the same pool (P.BR1) as
FPU branch instructions BC1EQZC and BC1NEZC, but it is not a part of FPU
(CP1) instructions, and check_cp1_enabled() should not be involved
while emulating BPOSGE32C.
Rearrange invocations of check_cp1_enabled() within P.BR1 pool
handling to affect only BC1EQZC and BC1NEZC emulation, and not
BPOSGE32C emulation.
Signed-off-by: Dragan Mladjenovic <dragan.mladjenovic@syrmia.com>
Signed-off-by: Stefan Pejic <stefan.pejic@syrmia.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220504110403.613168-4-stefan.pejic@syrmia.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
The field rs in the instruction EXTRV_S.H rt, ac, rs is specified in
nanoMIPS documentation as opcode[20..16]. It is, however, erroneously
considered as opcode[25..21] in the current QEMU implementation. In
function gen_pool32axf_2_nanomips_insn(), the variable v0_t corresponds
to rt/opcode[25..21], and v1_t corresponds to rs/opcode[20..16], and
v0_t is by mistake passed to the helper gen_helper_extr_s_h().
Use v1_t rather than v0_t in the invocation of gen_helper_extr_s_h()
to fix this.
Signed-off-by: Dragan Mladjenovic <dragan.mladjenovic@syrmia.com>
Signed-off-by: Stefan Pejic <stefan.pejic@syrmia.com>
Fixes: 8b3698b294 ("target/mips: Add emulation of DSP ASE for nanoMIPS")
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220504110403.613168-3-stefan.pejic@syrmia.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
The field ac in nanoMIPS instruction MTHLIP rs, ac is specified in
nanoMIPS documentation as opcode[15..14] (2 bits). However, in the
current QEMU code, the corresponding argument passed to the helper
gen_helper_mthlip() has the value of opcode[15..11] (5 bits). Right
shift the value of this argument by three bits to fix this.
Signed-off-by: Stefan Pejic <stefan.pejic@syrmia.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220504110403.613168-2-stefan.pejic@syrmia.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Fix the FTRUNC_S and FTRUNC_U trans helper problem.
Fixes: 5c5b64000c ("target/mips: Convert MSA 2RF instruction format to decodetree")
Signed-off-by: nihui <shuizhuyuanluo@126.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220503144241.289239-1-shuizhuyuanluo@126.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
This patch fixes the issue that helper_msa_st_b() writes the high
64-bit data to where the low 64 bits reside, leaving the high 64 bits
undefined.
Fixes: 68ad9260e0 ("target/mips: Use 8-byte memory ops for msa load/store")
Signed-off-by: Ni Hui <shuizhuyuanluo@126.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220504023319.12923-1-shuizhuyuanluo@126.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Only for MSA COPY_U/COPY_S with wd equal to zero do we treat the
instruction as a NOP. Move this special rule into the COPY_U and
COPY_S trans functions.
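A sketch of the special case as it would sit at the top of the trans
functions (the decodetree field name wd is assumed):

    if (a->wd == 0) {
        /* COPY_U/COPY_S with wd == 0 is treated as a NOP. */
        return true;
    }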
Fixes: 97fe675519 ("target/mips: Convert MSA COPY_S and INSERT opcodes to decodetree")
Signed-off-by: Ni Hui <shuizhuyuanluo@126.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220503130708.272850-4-shuizhuyuanluo@126.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Fix the issue that the condition on check_msa_enabled(ctx) is
reversed, which causes a segfault when an MSA elm_fn op is
encountered.
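In code, the reversed condition amounts to roughly (a sketch):

    /* before: bail out when MSA *is* enabled, fall through otherwise */
    if (check_msa_enabled(ctx)) {
        return true;
    }

    /* after: bail out when MSA is not enabled */
    if (!check_msa_enabled(ctx)) {
        return true;
    }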
Fixes: 2f2745c81a ("target/mips: Convert MSA COPY_U opcode to decodetree")
Fixes: 97fe675519 ("target/mips: Convert MSA COPY_S and INSERT opcodes to decodetree")
Signed-off-by: Ni Hui <shuizhuyuanluo@126.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220503130708.272850-3-shuizhuyuanluo@126.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Actually look into the dfe structure data so that df_extract_val()
and df_extract_df() can return immediates and data fields other than
BYTE.
Fixes: 4701d23aef ("target/mips: Convert MSA BIT instruction format to decodetree")
Signed-off-by: Ni Hui <shuizhuyuanluo@126.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220503130708.272850-2-shuizhuyuanluo@126.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Fix the SAT_S and SAT_U trans helper confusion.
Fixes: 4701d23aef ("target/mips: Convert MSA BIT instruction format to decodetree")
Signed-off-by: Ni Hui <shuizhuyuanluo@126.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220503130708.272850-1-shuizhuyuanluo@126.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Bit 31 (M) of the WatchHi register is a read-only bit indicating
whether the next WatchHi register is present. It must not be reset
during user writes to the register.
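A sketch of a write that preserves the read-only M bit (the register
storage name and selector are illustrative):

    /* Keep bit 31 (M) from the current value; take the rest from the
     * guest write. */
    uint32_t mask = 1u << 31;
    env->CP0_WatchHi[sel] = (env->CP0_WatchHi[sel] & mask) | (val & ~mask);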
Signed-off-by: Marcin Nowakowski <marcin.nowakowski@fungible.com>
Reviewed-by: David Daney <david.daney@fungible.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@fungible.com>
Message-Id: <20220511212953.74738-1-philmd@fungible.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Since DDI0487F.a, the RW bit is RAO/WI. When specifically
targeting such a cpu, e.g. cortex-a76, it is legitimate to
ignore the bit within the secure monitor.
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1062
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220609214657.1217913-3-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Because reset always initializes the AA64 version, SCR_EL3,
test the mode of EL3 instead of the type of the cpreg.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220609214657.1217913-2-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We were using arm_is_secure and is_a64, which are
tests against the current EL, as opposed to
arm_el_is_aa64 and arm_is_secure_below_el3, which
can be applied to a different EL than current.
Consolidate the two tests.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220609202901.1177572-24-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This function is no longer used outside debug_helper.c.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220609202901.1177572-23-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Handle the debug vs current el exception test in one place.
Leave EXCP_BKPT alone, since that treats debug < current differently.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220609202901.1177572-22-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This function is no longer used. At the same time, remove
DisasContext.secure_routed_to_el3, as it in turn becomes unused.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220609202901.1177572-21-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
With the helper we can use exception_target_el at runtime,
instead of default_exception_el at translate time.
While we're at it, remove the DisasContext parameter from
gen_exception, as it is no longer used.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220609202901.1177572-20-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Split out a common helper function for gen_exception_el
and gen_exception_insn_el_v.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220609202901.1177572-19-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Create a new wrapper function that passes the default
exception target to gen_exception_el.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220609202901.1177572-18-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220609202901.1177572-17-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This function is not required by any other translation file.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220609202901.1177572-16-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We no longer need this value during translation,
as it is now handled within the helpers.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220609202901.1177572-15-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Move the computation from gen_swstep_exception into a helper.
This fixes a bug when:
- MDSCR_EL1.KDE == 1 to enable debug exceptions within EL_D itself
- we singlestep an ERET from EL_D to some lower EL
Previously we were computing 'same el' based on the EL which
executed the ERET instruction, whereas it ought to be computed
based on the EL to which ERET returned. This happens naturally
with the new helper, which runs after EL has been changed.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220609202901.1177572-14-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Create a new wrapper function that passes the default
exception target to gen_exception_insn_el.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220609202901.1177572-13-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220609202901.1177572-12-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Create a function below gen_exception_insn that takes
the target_el as a TCGv_i32, replacing gen_exception_el.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220609202901.1177572-11-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Rename to helper_exception_with_syndrome_el, to emphasize
that the target el is a parameter.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220609202901.1177572-10-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This function is now only used in debug_helper.c, so there is
no reason to have a declaration in a header.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220609202901.1177572-9-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220609202901.1177572-8-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Use the accessor rather than the raw structure member.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220609202901.1177572-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Move arm_generate_debug_exceptions and its two subroutines,
{aa32,aa64}_generate_debug_exceptions into debug_helper.c,
and the one interface declaration to internals.h.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220609202901.1177572-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Move the function to debug_helper.c, and the
declaration to internals.h.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220609202901.1177572-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Move the function to op_helper.c, near raise_exception.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220609202901.1177572-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
With ARMv8, this field is always RES0.
With ARMv7, targeting EL2 and TA=0, it is always 0xA.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220609202901.1177572-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220609202901.1177572-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
When running a 32-bit guest with an e64 vmv.v.x and vl_eq_vlmax set to
true, the `tcg_debug_assert(vece <= MO_32)` will be triggered inside
tcg_gen_gvec_dup_i32().
This patch checks that condition and instead uses tcg_gen_gvec_dup_i64()
where required.
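A sketch of the idea (the destination offsets and surrounding checks
are illustrative, not the verbatim patch):

    if (s->sew == MO_64) {
        /* The i32 path cannot duplicate a 64-bit element on a 32-bit
         * guest, so widen the scalar and use the i64 variant. */
        TCGv_i64 s1_i64 = tcg_temp_new_i64();
        tcg_gen_ext_tl_i64(s1_i64, s1);
        tcg_gen_gvec_dup_i64(s->sew, dofs, maxsz, maxsz, s1_i64);
        tcg_temp_free_i64(s1_i64);
    } else {
        tcg_gen_gvec_dup_tl(s->sew, dofs, maxsz, maxsz, s1);
    }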
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1028
Suggested-by: Robert Bu <robert.bu@gmail.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220608234701.369536-1-alistair.francis@opensource.wdc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
There are currently two types of RISC-V CPUs:
- Generic CPUs (base or any) that allow complete customisation
- "Named" CPUs that match existing hardware
Users can use the base CPUs to customise the extensions that they want, for
example -cpu rv64,v=true.
We originally exposed these as part of the named CPUs as well, but that was
by accident.
Exposing the CPU properties to named CPUs means that we accidentally
enable extensions that don't exist on the CPUs by default. For example
the SiFive E CPU currently supports the zba extension, which is a bug.
This patch instead only exposes the CPU extensions to the generic CPUs.
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Message-Id: <20220608061437.314434-1-alistair.francis@opensource.wdc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
According to the v-spec, tail-agnostic behavior can either keep the
elements undisturbed or set the elements' bits to all 1s. To
distinguish between the tail policies, QEMU should be able to simulate
the tail-agnostic behavior as "set tail elements' bits to all 1s".
There are multiple possibilities for agnostic elements according to
the v-spec. The main intent of this patch set is to add an option that
can distinguish between tail policies. Setting agnostic elements to
all 1s allows QEMU to express this.
This commit adds the option 'rvv_ta_all_1s' to enable the behavior;
it is disabled by default.
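For example, the behavior could be turned on from the command line
along these lines (illustrative invocation):

    -cpu rv64,v=true,rvv_ta_all_1s=true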
Signed-off-by: eop Chen <eop.chen@sifive.com>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <165449614532.19704.7000832880482980398-16@git.sr.ht>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The tail elements in the destination mask register are updated under
a tail-agnostic policy.
Signed-off-by: eop Chen <eop.chen@sifive.com>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <165449614532.19704.7000832880482980398-14@git.sr.ht>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Compare instructions write mask registers, and so always operate
under a tail-agnostic policy.
Signed-off-by: eop Chen <eop.chen@sifive.com>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <165449614532.19704.7000832880482980398-12@git.sr.ht>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Compare instructions write mask registers, and so always operate
under a tail-agnostic policy.
Signed-off-by: eop Chen <eop.chen@sifive.com>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <165449614532.19704.7000832880482980398-9@git.sr.ht>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
`vmadc` and `vmsbc` produce a mask value, so they always operate
with a tail-agnostic policy.
Signed-off-by: eop Chen <eop.chen@sifive.com>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <165449614532.19704.7000832880482980398-7@git.sr.ht>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The destination register of unit-stride mask load and store
instructions is always written with a tail-agnostic policy.
A vector segment load / store instruction may contain fractional lmul
with nf * lmul > 1. The rest of the elements in the last register should
be treated as tail elements.
Signed-off-by: eop Chen <eop.chen@sifive.com>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <165449614532.19704.7000832880482980398-6@git.sr.ht>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
According to the v-spec, tail-agnostic behavior can either keep the
elements undisturbed or set the elements' bits to all 1s. To
distinguish between the tail policies, QEMU should be able to simulate
the tail-agnostic behavior as "set tail elements' bits to all 1s".
There are multiple possibilities for agnostic elements according to
the v-spec. The main intent of this patch set is to add an option that
can distinguish between tail policies. Setting agnostic elements to
all 1s allows QEMU to express this.
This is the first commit regarding the optional tail agnostic
behavior. Follow-up commits will add this optional behavior
for all rvv instructions.
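A sketch of the per-helper pattern this series introduces (ETYPE and
H() follow the conventions of vector_helper.c; the exact helper used
in the tree may differ):

    /* After the body elements have been written, handle the tail. */
    if (vta != 0) {
        for (i = env->vl; i < vlmax; i++) {
            /* tail-agnostic: set the whole tail element to all 1s */
            *((ETYPE *)vd + H(i)) = (ETYPE)-1;
        }
    }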
Signed-off-by: eop Chen <eop.chen@sifive.com>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <165449614532.19704.7000832880482980398-5@git.sr.ht>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
According to v-spec (section 5.4):
When vstart ≥ vl, there are no body elements, and no elements are
updated in any destination vector register group, including that
no tail elements are updated with agnostic values.
vmsbf.m, vmsif.m, vmsof.m, viota.m, vcompress instructions themselves
require vstart to be zero. So they don't need the early exit.
Signed-off-by: eop Chen <eop.chen@sifive.com>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <165449614532.19704.7000832880482980398-4@git.sr.ht>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
No functional change intended in this commit.
Signed-off-by: eop Chen <eop.chen@sifive.com>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <165449614532.19704.7000832880482980398-3@git.sr.ht>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
No functional change intended in this commit.
Signed-off-by: eop Chen <eop.chen@sifive.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <165449614532.19704.7000832880482980398-2@git.sr.ht>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
No functional change intended in this commit.
Signed-off-by: eop Chen <eop.chen@sifive.com>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <165449614532.19704.7000832880482980398-1@git.sr.ht>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Add an MXL_RV128 case in two switches so that no error is triggered when
using the -cpu x-rv128 option.
Signed-off-by: Frédéric Pétrot <frederic.petrot@univ-grenoble-alpes.fr>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Message-Id: <20220602155246.38837-1-frederic.petrot@univ-grenoble-alpes.fr>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Whether or not VSEIP is pending isn't reflected in env->mip and must
instead be determined from hstatus.vgein and hgeip. As a result a
CPU in WFI won't wake on a VSEIP, which violates the WFI behavior as
specified in the privileged ISA. Just use riscv_cpu_all_pending()
instead, which already accounts for VSEIP.
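A sketch of the resulting wake-up test (placement illustrative):

    /* A hart in WFI should wake if *any* interrupt is pending,
     * including VSEIP derived from hstatus.vgein and hgeip. */
    return riscv_cpu_all_pending(env) != 0;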
Signed-off-by: Andrew Bresticker <abrestic@rivosinc.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20220531210544.181322-1-abrestic@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Add support for the zmmul extension v0.1. This extension includes all
multiplication operations from the M extension but not the divide ops.
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20220531030732.3850-1-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
This register is allocated from the existing block of id registers,
so it is already RES0 for cpus that do not implement SME.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220607203306.657998-21-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This will be used for implementing FEAT_SME.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220607203306.657998-20-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We will need this over in sme_helper.c.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220607203306.657998-19-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Move the data to vec_helper.c and the inline to vec_internal.h.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220607203306.657998-18-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Use the function instead of the array directly.
Because the function performs its own masking, via the uint8_t
parameter, we need to do nothing extra within the users: the bits
above the first 2 (_uh) or 4 (_uw) will be discarded by assignment
to the local bmask variables, and of course _uq uses the entire
uint64_t result.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220607203306.657998-17-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Put the inline function near the array declaration.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220607203306.657998-16-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Export all of the support functions for performing bulk
fault analysis on a set of elements at contiguous addresses
controlled by a predicate.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220607203306.657998-15-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Begin creation of sve_ldst_internal.h by moving the primitives
that access host and tlb memory.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220607203306.657998-14-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This will be used for both Normal and Streaming SVE, and the value
does not necessarily come from ZCR_ELx. While we're at it, emphasize
the units in which the value is returned.
Patch produced by
git grep -l sve_zcr_len_for_el | \
xargs -n1 sed -i 's/sve_zcr_len_for_el/sve_vqm1_for_el/g'
and then adding a function comment.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220607203306.657998-13-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The bitmap need only hold 15 bits; bitmap is over-complicated.
We can simplify operations quite a bit with plain logical ops.
The introduction of SVE_VQ_POW2_MAP eliminates the need for
looping in order to search for powers of two. Simply perform
the logical ops and use count leading or trailing zeros as
required to find the result.
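For example, the largest supported power-of-two length not exceeding
a given vq can be found without looping (a sketch; the map layout,
with bit N-1 standing for VQ N, is illustrative):

    uint32_t powers = vq_map & SVE_VQ_POW2_MAP;
    uint32_t usable = powers & MAKE_64BIT_MASK(0, vq);
    int best_vq = 32 - clz32(usable);   /* highest set bit, 1-based */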
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220607203306.657998-12-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This function is used only once, and will need modification
for Streaming SVE mode.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220607203306.657998-11-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We don't need to constrain the value set in zcr_el[1],
because it will be done by sve_zcr_len_for_el.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220607203306.657998-10-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This check is buried within arm_hcr_el2_eff(), but since we
have to have the explicit check for CPTR_EL2.TZ, we might as
well just check it once at the beginning of the block.
Once this is done, we can test HCR_EL2.{E2H,TGE} directly,
rather than going through arm_hcr_el2_eff().
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220607203306.657998-9-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The ARM pseudocode function CheckNormalSVEEnabled uses this
predicate now, and I think it's a bit clearer.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220607203306.657998-8-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The ARM pseudocode function NVL uses this predicate now,
and I think it's a bit clearer. Simplify the pseudocode
condition by noting that IsInHost is always false for EL1.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220607203306.657998-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This (newish) ARM pseudocode function is easier to work with
than open-coded tests for HCR_E2H etc. Use of the function
will be staged into the code base in parts.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220607203306.657998-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Instead of checking these bits in fp_exception_el and
also in sve_exception_el, document that we must compare
the results. The only place where we have not already
checked that FP EL is zero is in rebuild_hflags_a64.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220607203306.657998-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We handle this routing in raise_exception. Promoting the value early
means that we can't directly compare FPEXC_EL and SVEEXC_EL.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220607203306.657998-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Add an interface function to extract the digested vector length
rather than the raw zcr_el[1] value. This fixes an incorrect
return from do_prctl_set_vl where we didn't take into account
the set of vector lengths supported by the cpu.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220607203306.657998-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
With SME, the vector length does not only come from ZCR_ELx.
Comment that this is either NVL or SVL, like the pseudocode.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220607203306.657998-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The use of ARM_CPU to recover env from cs calls
object_class_dynamic_cast, which shows up on the profile.
This is pointless, because all callers already have env, and
the reverse operation, env_cpu, is only pointer arithmetic.
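In code, the substitution is roughly:

    /* before: recover the CPU via a QOM dynamic cast, visible in
     * profiles */
    ARMCPU *cpu = ARM_CPU(cs);
    CPUARMState *env = &cpu->env;

    /* after: callers already hold env; the reverse direction is just
     * pointer arithmetic */
    CPUState *cs = env_cpu(env);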
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220604040607.269301-29-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220604040607.269301-28-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220604040607.269301-27-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220604040607.269301-26-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220604040607.269301-25-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220604040607.269301-24-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220604040607.269301-23-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220604040607.269301-22-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220604040607.269301-21-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220604040607.269301-20-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220604040607.269301-19-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
These functions are used for both page table walking and for
deciding what format in which to deliver exception results.
Since ptw.c is only present for system mode, put the functions
into tlb_helper.c.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220604040607.269301-18-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Move the ptw load functions, plus 3 common subroutines:
S1_ptw_translate, ptw_attrs_are_device, and regime_translation_big_endian.
This also allows get_phys_addr_lpae to become static again.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220604040607.269301-17-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220604040607.269301-16-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
There are a handful of helpers for combine_cacheattrs
that we can move at the same time as the main entry point.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220604040607.269301-15-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220604040607.269301-14-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220604040607.269301-13-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This function has one private helper, v8m_is_sau_exempt,
so move that at the same time.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220604040607.269301-12-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220604040607.269301-11-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This is the final user of get_phys_addr_pmsav7_default
within helper.c, so make it static within ptw.c.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220604040607.269301-10-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220604040607.269301-9-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220604040607.269301-8-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220604040607.269301-7-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220604040607.269301-6-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220604040607.269301-5-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220604040607.269301-4-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Begin moving all of the page table walking functions
out of helper.c, starting with get_phys_addr().
Create a temporary header file, "ptw.h", in which to
share declarations between the two C files while we
are moving functions.
Move a few declarations to "internals.h", which will
remain used by multiple C files.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220604040607.269301-3-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Move the decl from ptw.h to internals.h. Provide an inline
version for user-only, just as we do for arm_stage1_mmu_idx.
Move an endif down to make the definition in helper.c be
system only.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220604040607.269301-2-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We have about 30 instances of the typo/variant spelling 'writeable',
and over 500 of the more common 'writable'. Standardize on the
latter.
Change produced with:
sed -i -e 's/\([Ww][Rr][Ii][Tt]\)[Ee]\([Aa][Bb][Ll][Ee]\)/\1\2/g' $(git grep -il writeable)
and then hand-undoing the instance in linux-headers/linux/kvm.h.
Most of these changes are in comments or documentation; the
exceptions are:
* a local variable in accel/hvf/hvf-accel-ops.c
* a local variable in accel/kvm/kvm-all.c
* the PMCR_WRITABLE_MASK macro in target/arm/internals.h
* the EPT_VIOLATION_GPA_WRITABLE macro in target/i386/hvf/vmcs.h
(which is never used anywhere)
* the AR_TYPE_WRITABLE_MASK macro in target/i386/hvf/vmx.h
(which is never used anywhere)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Stefan Weil <sw@weilnetz.de>
Message-id: 20220505095015.2714666-1-peter.maydell@linaro.org
The FEAT_DoubleFault extension adds the following:
* All external aborts on instruction fetches and translation table
walks for instruction fetches must be synchronous. For QEMU this
is already true.
* SCR_EL3 has a new bit NMEA which disables the masking of SError
interrupts by PSTATE.A when the SError interrupt is taken to EL3.
For QEMU we only need to make the bit writable, because we have no
sources of SError interrupts.
* SCR_EL3 has a new bit EASE which causes synchronous external
aborts taken to EL3 to be taken at the same entry point as SError.
(Note that this does not mean that they are SErrors for purposes
of PSTATE.A masking or that the syndrome register reports them as
SErrors: it just means that the vector offset is different.)
* The existing SCTLR_EL3.IESB has an effective value of 1 when
SCR_EL3.NMEA is 1. For QEMU this is a no-op because we don't need
different behaviour based on IESB (we don't need to do anything to
ensure that error exceptions are synchronized).
So for QEMU the things we need to change are:
* Make SCR_EL3.{NMEA,EASE} writable
* When taking a synchronous external abort at EL3, adjust the
vector entry point if SCR_EL3.EASE is set
* Advertise the feature in the ID registers
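As a rough sketch of the vector-entry adjustment described above (the
SCR_EASE bit name and the abort test are assumptions; 0x180 is the
usual AArch64 SError vector offset):

    if (new_el == 3 && (env->cp15.scr_el3 & SCR_EASE) &&
        exception_is_sync_external_abort(cs->exception_index)) { /* illustrative */
        addr += 0x180;   /* take the SError entry instead of the sync one */
    }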
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220531151431.949322-1-peter.maydell@linaro.org
The architectural feature RASv1p1 introduces the following new
features:
* new registers ERXPFGCDN_EL1, ERXPFGCTL_EL1 and ERXPFGF_EL1
* new bits in the fine-grained trap registers that control traps
for these new registers
* new trap bits HCR_EL2.FIEN and SCR_EL3.FIEN that control traps
for ERXPFGCDN_EL1, ERXPFGCTL_EL1, ERXPFGP_EL1
* a larger number of the ERXMISC<n>_EL1 registers
* the format of ERR<n>STATUS registers changes
The architecture permits that if ERRIDR_EL1.NUM is 0 (as it is for
QEMU) then all these new registers may UNDEF, and the HCR_EL2.FIEN
and SCR_EL3.FIEN bits may be RES0. We don't have any ERR<n>STATUS
registers (again, because ERRIDR_EL1.NUM is 0). QEMU does not yet
implement the fine-grained-trap extension. So there is nothing we
need to implement to be compliant with the feature spec. Make the
'max' CPU report the feature in its ID registers, and document it.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220531114258.855804-1-peter.maydell@linaro.org
Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn>
Signed-off-by: Song Gao <gaosong@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220606124333.2060567-42-yangxiaojuan@loongson.cn>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn>
Signed-off-by: Song Gao <gaosong@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220606124333.2060567-40-yangxiaojuan@loongson.cn>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Emulate a 3A5000 board using the new LoongArch instructions.
The 3A5000 belongs to the Loongson3 series of processors.
The board consists of a 3A5000 cpu model and the virt
bridge. The real 3A5000 board is quite complicated and
provides many functions; for now, in tcg softmmu mode,
only some of them are emulated.
For more detailed information, see
https://github.com/loongson/LoongArch-Documentation
Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn>
Signed-off-by: Song Gao <gaosong@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220606124333.2060567-31-yangxiaojuan@loongson.cn>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
This includes:
- RDTIME{L/H}.W
- RDTIME.D
Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn>
Signed-off-by: Song Gao <gaosong@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220606124333.2060567-30-yangxiaojuan@loongson.cn>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
This includes:
- IOCSR{RD/WR}.{B/H/W/D}
Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn>
Signed-off-by: Song Gao <gaosong@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220606124333.2060567-27-yangxiaojuan@loongson.cn>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn>
Signed-off-by: Song Gao <gaosong@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220606124333.2060567-25-yangxiaojuan@loongson.cn>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn>
Signed-off-by: Song Gao <gaosong@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220606124333.2060567-24-yangxiaojuan@loongson.cn>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn>
Signed-off-by: Song Gao <gaosong@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220606124333.2060567-23-yangxiaojuan@loongson.cn>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn>
Signed-off-by: Song Gao <gaosong@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220606124333.2060567-22-yangxiaojuan@loongson.cn>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn>
Signed-off-by: Song Gao <gaosong@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220606124333.2060567-21-yangxiaojuan@loongson.cn>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn>
Signed-off-by: Song Gao <gaosong@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220606124333.2060567-20-yangxiaojuan@loongson.cn>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn>
Signed-off-by: Song Gao <gaosong@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220606124333.2060567-19-yangxiaojuan@loongson.cn>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Song Gao <gaosong@loongson.cn>
Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220606124333.2060567-18-yangxiaojuan@loongson.cn>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
This patch adds support for disassembling via option '-d in_asm'.
Signed-off-by: Song Gao <gaosong@loongson.cn>
Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220606124333.2060567-17-yangxiaojuan@loongson.cn>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
This includes:
- FCMP.cond.{S/D}
Signed-off-by: Song Gao <gaosong@loongson.cn>
Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220606124333.2060567-12-yangxiaojuan@loongson.cn>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
This patch adds main translation routines and
basic functions for translation.
Signed-off-by: Song Gao <gaosong@loongson.cn>
Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220606124333.2060567-4-yangxiaojuan@loongson.cn>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
This patch adds target state header, target definitions
and initialization routines.
Signed-off-by: Song Gao <gaosong@loongson.cn>
Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220606124333.2060567-3-yangxiaojuan@loongson.cn>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
This patch gives an introduction to the LoongArch target.
Signed-off-by: Song Gao <gaosong@loongson.cn>
Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220606124333.2060567-2-yangxiaojuan@loongson.cn>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
When QEMU is started with '-cpu host,host-cache-info=on', it
passes through the host's number of logical processors sharing a cache
and the number of processor cores in the physical package. QEMU already
fixes up the latter to correctly reflect the number of cores configured
for the VM; however, the number of logical processors sharing a cache
still comes from the host CPU, which confuses a guest started with:
-machine q35,accel=kvm \
-cpu host,host-cache-info=on,l3-cache=off \
-smp 20,sockets=2,dies=1,cores=10,threads=1 \
-numa node,nodeid=0,memdev=ram-node0 \
-numa node,nodeid=1,memdev=ram-node1 \
-numa cpu,socket-id=0,node-id=0 \
-numa cpu,socket-id=1,node-id=1
on 2 socket Xeon 4210R host with 10 cores per socket
with CPUID[04H]:
...
--- cache 3 ---
cache type = unified cache (3)
cache level = 0x3 (3)
self-initializing cache level = true
fully associative cache = false
maximum IDs for CPUs sharing cache = 0x1f (31)
maximum IDs for cores in pkg = 0xf (15)
...
that doesn't match the number of logical processors the VM was
configured with, and as a result a RHEL 9.0 guest complains:
sched: CPU #10's llc-sibling CPU #0 is not on the same node! [node: 1 != 0]. Ignoring dependency.
WARNING: CPU: 10 PID: 0 at arch/x86/kernel/smpboot.c:421 topology_sane.isra.0+0x67/0x80
...
Call Trace:
set_cpu_sibling_map+0x176/0x590
start_secondary+0x5b/0x150
secondary_startup_64_no_verify+0xc2/0xcb
Fix it by capping the maximum number of logical processors to the
configured number of vcpus per socket.
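The capping amounts to something like the following self-contained
sketch; the helper name is hypothetical and the real change lives in
QEMU's CPUID encoding code:

#include <stdint.h>

/* CPUID.04H EAX[25:14] holds "maximum number of addressable IDs for
 * logical processors sharing this cache", stored as value - 1.  Cap the
 * host-provided value at the number of vCPUs configured per socket. */
static uint32_t cap_cache_sharing_ids(uint32_t host_eax,
                                      uint32_t vcpus_per_socket)
{
    uint32_t ids = ((host_eax >> 14) & 0xfff) + 1;

    if (ids > vcpus_per_socket) {
        ids = vcpus_per_socket;
    }
    return (host_eax & ~(0xfffu << 14)) | (((ids - 1) & 0xfff) << 14);
}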
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=2088311
Message-Id: <20220524151020.2541698-3-imammedo@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
According to Intel, the resulting bits 31-26 in EAX of CPUID[EAX=04H]
should be:
"
**** The nearest power-of-2 integer that is not smaller than (1 + EAX[31:26]) is the number of unique
Core_IDs reserved for addressing different processor cores in a physical package. Core ID is a subset of
bits of the initial APIC ID.
"
Ensure that the value stored in EAX[31:26] always meets this condition.
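One way to satisfy that rule is sketched below; the helper name is
hypothetical and the actual patch may compute the value differently:

#include <stdint.h>

/* Encode a core count into CPUID.04H EAX[31:26] so that the nearest
 * power of two not smaller than (1 + EAX[31:26]) equals the number of
 * addressable Core IDs: round up to a power of two and store value - 1. */
static uint32_t encode_core_ids_eax(uint32_t nr_cores)
{
    uint32_t pow2 = 1;

    while (pow2 < nr_cores) {
        pow2 <<= 1;
    }
    return ((pow2 - 1) & 0x3f) << 26;
}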
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Message-Id: <20220524151020.2541698-2-imammedo@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The previous patch used the wrong count setting with the index value, which
read the wrong value from CPUID(EAX=12,ECX=0):EAX. As a result, the SGX1
instruction can't be exposed to the VM and the SGX device can't work in the VM.
Fixes: d19d6ffa07 ("target/i386: introduce helper to access supported CPUID")
Signed-off-by: Yang Zhong <yang.zhong@intel.com>
Message-Id: <20220530131834.1222801-1-yang.zhong@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The correct A20 masking is done if paging is enabled (protected mode) but it
seems to have been forgotten in real mode. For example from the AMD64 APM Vol. 2
section 1.2.4:
> If the sum of the segment base and effective address carries over into bit 20,
> that bit can be optionally truncated to mimic the 20-bit address wrapping of the
> 8086 processor by using the A20M# input signal to mask the A20 address bit.
Most BIOSes will enable the A20 line on boot, but I found that after
disabling the A20 line, the correct wrapping wasn't taking place.
`handle_mmu_fault' in target/i386/tcg/sysemu/excp_helper.c seems to be the
culprit. In real mode, it fills the TLB with the raw unmasked address.
However, in protected mode, the `mmu_translate' function does the correct
A20 masking.
The fix then should be to just apply the A20 mask in the first branch of the if
statement.
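The masking itself is simple; as a self-contained sketch (illustrative
names, the real fix is in QEMU's real-mode TLB fill path):

#include <stdint.h>

/* With the A20 line masked, bit 20 of the address is forced to zero so
 * real-mode addresses wrap like on the 8086. */
static uint64_t apply_a20_mask(uint64_t addr, int a20_enabled)
{
    uint64_t mask = ~(uint64_t)0;

    if (!a20_enabled) {
        mask &= ~(1ULL << 20);
    }
    return addr & mask;
}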
Signed-off-by: Stephen Michael Jothen <sjothen@gmail.com>
Message-Id: <Yo5MUMSz80jXtvt9@air-old.local>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Storage key controlled protection is currently not honored when
emulating instructions.
If available, enable key protection for the MEM_OP ioctl, thereby
enabling it for the s390_cpu_virt_mem_* functions, when using kvm.
As a result, the emulation of the following instructions honors storage
keys:
* CLP
The Synch I/O CLP command would need special handling in order
to support storage keys, but is currently not supported.
* CHSC
Performing commands asynchronously would require special
handling, but commands are currently always synchronous.
* STSI
* TSCH
Must (and does) not change channel if terminated due to
protection.
* MSCH
Suppressed on protection, works because fetching instruction.
* SSCH
Suppressed on protection, works because fetching instruction.
* STSCH
* STCRW
Suppressed on protection; this works because no partial store is
possible, since the operand cannot span multiple pages.
* PCISTB
* MPCIFC
* STPCIFC
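For reference, a key-checked MEM_OP access looks roughly like the sketch
below; the QEMU plumbing around it is omitted and the patch's actual flag
handling may differ:

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Issue a key-checked logical read on behalf of the guest.
 * KVM_S390_MEMOP_F_SKEY_PROTECTION asks KVM to honor storage keys,
 * using the access key passed in 'key'. */
static int mem_op_read_with_key(int vcpu_fd, uint64_t gaddr, void *buf,
                                uint32_t len, uint8_t key)
{
    struct kvm_s390_mem_op ksmo = {
        .gaddr = gaddr,
        .size  = len,
        .op    = KVM_S390_MEMOP_LOGICAL_READ,
        .buf   = (uint64_t)(uintptr_t)buf,
        .key   = key,
        .flags = KVM_S390_MEMOP_F_SKEY_PROTECTION,
    };

    return ioctl(vcpu_fd, KVM_S390_MEM_OP, &ksmo);
}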
Signed-off-by: Janis Schoetterl-Glausch <scgl@linux.ibm.com>
Message-Id: <20220506153956.2217601-3-scgl@linux.ibm.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
One less P needed.
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <20220523115123.150340-1-dgilbert@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Also mark raise_exception_ra and raise_exception as noreturn, lest we
generate a warning about helper_raise_exception returning.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220602013401.303699-18-richard.henderson@linaro.org>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
TPF stands for "trap false", and is a long-form nop for ColdFire.
Re-use the immediate consumption code from trapcc; the insn will
already expand to a nop because of the TCG_COND_NEVER test
within do_trapcc.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220602013401.303699-12-richard.henderson@linaro.org>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
According to the M68040 Users Manual, section 8.4.1, Four word
stack frame (format 0), includes Illegal Instruction. Use the
correct frame format, which does not use the ADDR argument.
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220602013401.303699-10-richard.henderson@linaro.org>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
According to the M68040 Users Manual, section 8.4.3,
Six word stack frame (format 2), Trace (and others) is
supposed to record the next insn in PC and the address
of the trapping instruction in ADDRESS.
Create gen_raise_exception_format2 to record the trapping
pc in env->mmu.ar. Update m68k_interrupt_all to pass the
value to do_stack_frame. Update cpu_loop to handle EXCP_TRACE.
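The expander can look roughly like this sketch, based on the description
above (my reading of the change, assuming the usual m68k translate.c
context; the real code may differ):

/* Stash the trapping instruction's address where the interrupt code can
 * find it for the format 2 frame, then raise the exception with PC
 * already pointing at the next instruction. */
static void gen_raise_exception_format2(DisasContext *s, int excp,
                                        target_ulong addr)
{
    tcg_gen_st_i32(tcg_constant_i32(addr), cpu_env,
                   offsetof(CPUM68KState, mmu.ar));
    gen_raise_exception(excp);
    s->base.is_jmp = DISAS_NORETURN;
}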
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220602013401.303699-9-richard.henderson@linaro.org>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
According to the M68040 Users Manual, section 8.4.3,
Six word stack frame (format 2), Zero Div (and others)
is supposed to record the next insn in PC and the
address of the trapping instruction in ADDRESS.
While the N, Z and V flags are documented to be undefined on DIV0,
the C flag is documented as always cleared.
Update helper_div* to take the instruction length as an argument
and use raise_exception_format2. Hoist the reset of the C flag
above the division by zero check.
Update m68k_interrupt_all to pass mmu.ar to do_stack_frame.
Update cpu_loop to pass mmu.ar to siginfo.si_addr, as the
kernel does in trap_c().
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220602013401.303699-8-richard.henderson@linaro.org>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
According to the M68040 Users Manual, section 8.4.3,
Six word stack frame (format 2), CHK, CHK2 (and others)
are supposed to record the next insn in PC and the
address of the trapping instruction in ADDRESS.
Create a raise_exception_format2 function to centralize recording
of the trapping pc in mmu.ar, plus advancing to the next insn.
Update m68k_interrupt_all to pass mmu.ar to do_stack_frame.
Update cpu_loop to pass mmu.ar to siginfo.si_addr, as the
kernel does in trap_c().
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220602013401.303699-7-richard.henderson@linaro.org>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
The only value this variable holds is now env->pc.
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220602013401.303699-6-richard.henderson@linaro.org>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Add parentheses to clarify the intended grouping of & within &&.
Remove the assignment to sr in a function call argument -- note that
sr is unused after the call, so the assignment was never needed;
only the result of the & expression was.
Suggested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220602013401.303699-4-richard.henderson@linaro.org>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Replace an if ladder with a switch for clarity.
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220602013401.303699-3-richard.henderson@linaro.org>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Rather than adjust the PC in all of the consumers, raise
the exception with the correct PC in the first place.
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220602013401.303699-2-richard.henderson@linaro.org>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
We now have individual checks on all insns within disas_sve.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-115-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
For all remaining trans_* functions that do not already
have a check, add one now.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-114-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-113-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-112-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Rename from do_sve2_shr_narrow and hoist the sve2
check into the TRANS_FEAT macro.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-111-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Rename from do_sve2_shll_tb and hoist the sve2
check into the TRANS_FEAT macro.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-110-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Rename from do_sve2_narrow_extract and hoist the sve2
check into the TRANS_FEAT macro.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-109-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-108-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Since 636ddeb15c, we do not require rd == ra.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-107-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-106-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-105-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-104-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-103-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-102-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-101-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-100-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Rename the function to match other expansion functions and
move to be adjacent. Split out gen_gvec_fpst_zzzp as a
helper while we're at it.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-99-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-98-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-97-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-96-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-95-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Rename the function to match other expansion functions and
move to be adjacent. Split out gen_gvec_fpst_zzp as a
helper while we're at it.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-94-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Simplify indexing of this array. This will allow folding
of the illegal esz == 0 into the normal fn == NULL check.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-93-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Rename do_zz_fp to gen_gvec_fpst_arg_zz, and move up.
Split out gen_gvec_fpst_zz as a helper while we're at it.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-92-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-91-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-90-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-89-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-88-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-87-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Rename the function to match gen_gvec_ool_arg_zzz,
and move to be adjacent. Split out gen_gvec_fpst_zzz
as a helper while we're at it.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-86-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-85-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-84-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-83-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-82-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This alias is defined on EOR (predicates). While the
same operation could be performed with NAND or NOR,
only bother with the official alias.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-81-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Combined with the check already present in gen_mov_p,
we can simplify some special cases in trans_AND_pppp
and trans_BIC_pppp.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-80-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Being able to specify the feature predicate in TRANS_FEAT
makes it easier to split trans_FMMLA by element size,
which also happens to simplify the decode.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-79-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Use these for the several varieties of floating-point
multiply-add instructions.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-78-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-77-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-76-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-75-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Remove the unparsed extractions in trans_CPY_{m,z}_i which are intended
to reject an 8-bit shift of an 8-bit constant for 8-bit element.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-74-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Remove the unparsed extractions in trans_ADD_zzi, trans_SUBR_zzi,
and do_zzi_sat which are intended to reject an 8-bit shift of an
8-bit constant for 8-bit element.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-73-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Remove the unparsed extraction in trans_DUP_i,
which is intended to reject an 8-bit shift of
an 8-bit constant for 8-bit element.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-72-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-71-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-70-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-69-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-68-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-67-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-66-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-65-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-64-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-63-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-62-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-61-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Convert SVE translation functions using do_zip*
to use TRANS_FEAT and gen_gvec_ool_arg_zzz.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-60-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-59-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This is in line with how we treat uzp, and will
eliminate the special case code during translation.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-58-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-57-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-56-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-55-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-54-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-53-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-52-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-51-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-50-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-49-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Remove the DO_ZPZZZ macro, as it had just the two uses.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-48-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-47-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Share code between the various shifts using arg_rpri_esz.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-46-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-45-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-44-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Convert SVE translation functions using do_sve2_fn2i
to use TRANS_FEAT and gen_gvec_fn_arg_zzi.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-43-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We have two places that perform this particular operation.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-42-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The check is already done in gen_gvec_ool_zzzp,
which is called by do_sel_z; remove from callers.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-41-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-40-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We have two places that perform this particular operation.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-39-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Convert SVE translation functions using do_sve2_zzzz_fn
to use TRANS_FEAT and gen_gvec_fn_arg_zzzz.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-38-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Merge gen_gvec_fn_zzzz with the sve access check and the
dereference of arg_rrrr_esz.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-37-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The decode for RAX1 sets esz to MO_8, because that's what
we use by default for "no esz present". We changed that
to MO_64 during translation because it is more logical for
the operation. However, the esz argument to gen_gvec_rax1
is unused and forces MO_64 within that function, so there
is no need to do it here as well.
Simplify to use gen_gvec_fn_arg_zzz and TRANS_FEAT.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-36-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Convert SVE translation functions using do_sve2_fn_zzz
to use TRANS_FEAT and gen_gvec_fn_arg_zzz.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-35-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Convert SVE translation functions directly using
gen_gvec_fn_arg_zzz to TRANS_FEAT.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-34-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Two uses of gen_gvec_fn_zzz can pass on arg_rrr_esz instead.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-33-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Rename the function to match gen_gvec_fn_zzz,
and move to be adjacent.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-32-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-31-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
There is only one caller for gen_gvec_fn_zz; inline it.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-30-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Convert SVE translation functions using do_sve2_zpzz_ool
to use TRANS_FEAT and gen_gvec_ool_arg_zpzz.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-29-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Convert SVE translation functions directly using
gen_gvec_ool_arg_zpzz to TRANS_FEAT.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-28-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Use gen_gvec_ool_arg_zpzz instead of gen_gvec_ool_zzzp
when the arguments come from arg_rprr_esz.
Replaces do_zpzz_ool.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-27-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-26-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Convert some SVE translation functions using
gen_gvec_ool_arg_zpzi to TRANS_FEAT.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-25-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Rename the function to match gen_gvec_ool_arg_zpz,
and move to be adjacent.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-24-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Convert SVE translation functions using do_sve2_zpz_data
to use TRANS_FEAT and gen_gvec_ool_arg_zpz.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-23-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Convert SVE translation functions directly using
gen_gvec_ool_arg_zpz to TRANS_FEAT.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-22-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Use gen_gvec_ool_arg_zpz instead of gen_gvec_ool_zzp
when the arguments come from arg_rpr_esz.
Replaces do_zpz_ool.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-21-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-20-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This is the last direct user of tcg_gen_gvec_4_ool.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-19-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Convert SVE translation functions using do_sve2_zzw_data
to use TRANS_FEAT and gen_gvec_ool_arg_zzz.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-18-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Convert SVE translation functions using do_sve2_zzzz_data
to use TRANS_FEAT and gen_gvec_ool_{zzzz,zzxz}.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-17-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Convert SVE translation functions using do_sve2_zzz_data
to use TRANS_FEAT and gen_gvec_ool_zzz.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-16-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Convert SVE translation functions directly using
gen_gvec_ool_arg_zzxz to TRANS_FEAT. Also include
BFDOT_zzxz, which was using gen_gvec_ool_zzzz.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-15-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Rename the function to match gen_gvec_ool_arg_zzzz,
and move to be adjacent.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-14-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Convert SVE translation functions directly using
gen_gvec_ool_arg_zzzz to TRANS_FEAT.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-13-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Convert SVE translation functions using do_sve2_zzzz_ool
to use TRANS_FEAT and gen_gvec_ool_arg_zzzz.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-12-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Use gen_gvec_ool_arg_zzzz instead of gen_gvec_ool_zzzz
when the arguments come from arg_rrrr_esz.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-11-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Convert SVE translation functions directly using
gen_gvec_ool_zzzz to TRANS_FEAT.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-10-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-9-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Convert SVE translation functions using do_sve2_zzz_ool
to use TRANS_FEAT and gen_gvec_ool_arg_zzz.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-8-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Convert SVE translation functions using
gen_gvec_ool_arg_zzz to TRANS_FEAT.
Remove trivial wrappers do_aese, do_sm4.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-7-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Use gen_gvec_ool_arg_zzz instead of gen_gvec_ool_zzz
when the arguments come from arg_rrr_esz.
Replaces do_zzw_ool and do_zzz_data_ool.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-6-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-5-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Convert SVE translation functions using gen_gvec_ool_zz to TRANS_FEAT.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-4-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-3-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Steal the idea for these leaf function expanders from PowerPC.
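The expanders are macros of roughly the following shape (paraphrased;
treat the exact expansion as an approximation of the real definitions):

/* Approximate shape of the expanders; the real macros may differ. */
#define TRANS(NAME, FUNC, ...) \
    static bool trans_##NAME(DisasContext *s, arg_##NAME *a) \
    { return FUNC(s, __VA_ARGS__); }

#define TRANS_FEAT(NAME, FEAT, FUNC, ...) \
    static bool trans_##NAME(DisasContext *s, arg_##NAME *a) \
    { return dc_isar_feature(FEAT, s) && FUNC(s, __VA_ARGS__); }

A decodetree pattern then gets its trans_* function from a one-line
TRANS or TRANS_FEAT use instead of a hand-written wrapper.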
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220527181907.189259-2-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Fix the following errors when building HVF on macOS AArch64:
target/arm/hvf/hvf.c:586:15: error: unknown type name 'ARMCPRegInfo'; did you mean 'ARMCPUInfo'?
const ARMCPRegInfo *ri;
^~~~~~~~~~~~
ARMCPUInfo
target/arm/cpu-qom.h:38:3: note: 'ARMCPUInfo' declared here
} ARMCPUInfo;
^
target/arm/hvf/hvf.c:589:14: error: implicit declaration of function 'get_arm_cp_reginfo' is invalid in C99 [-Werror,-Wimplicit-function-declaration]
ri = get_arm_cp_reginfo(arm_cpu->cp_regs, key);
^
target/arm/hvf/hvf.c:589:12: warning: incompatible integer to pointer conversion assigning to 'const ARMCPUInfo *' (aka 'const struct ARMCPUInfo *') from 'int' [-Wint-conversion]
ri = get_arm_cp_reginfo(arm_cpu->cp_regs, key);
^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
target/arm/hvf/hvf.c:591:26: error: no member named 'type' in 'struct ARMCPUInfo'
assert(!(ri->type & ARM_CP_NO_RAW));
~~ ^
/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/assert.h:99:25: note: expanded from macro 'assert'
(__builtin_expect(!(e), 0) ? __assert_rtn(__func__, __ASSERT_FILE_NAME, __LINE__, #e) : (void)0)
^
target/arm/hvf/hvf.c:591:33: error: use of undeclared identifier 'ARM_CP_NO_RAW'
assert(!(ri->type & ARM_CP_NO_RAW));
^
1 warning and 4 errors generated.
Fixes: cf7c6d1004 ("target/arm: Split out cpregs.h")
Reported-by: Duncan Bayne <duncan@bayne.id.au>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220525161926.34233-1-philmd@fungible.com
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1029
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Implement the following PowerISA v3.1 instructions:
xxmfacc: VSX Move From Accumulator
xxmtacc: VSX Move To Accumulator
xxsetaccz: VSX Set Accumulator to Zero
The PowerISA 3.1 mentions that for the current version of the
architecture, "the hardware implementation provides the effect of ACC[i]
and VSRs 4*i to 4*i + 3 logically containing the same data" and "The
Accumulators introduce no new logical state at this time" (page 501).
For now it seems unnecessary to create new structures, so this patch
just uses ACC[i] as VSRs 4*i to 4*i+3, and therefore moves to and from
accumulators are no-ops.
Signed-off-by: Lucas Mateus Castro (alqotel) <lucas.araujo@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220524140537.27451-2-lucas.araujo@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
This allows an x86 host to no-op lwsyncs, and a ppc host can use lwsync
rather than sync.
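In TCG terms the ordering looks roughly like this sketch (my
illustration; the actual patch plumbs this through the ppc translator):

#include "tcg/tcg-op.h"

/* lwsync orders every combination except store->load, so the barrier can
 * omit TCG_MO_ST_LD.  On a TSO host such as x86 the remaining orderings
 * already hold, so the backend emits nothing; a ppc host can emit lwsync
 * instead of a full sync. */
static inline void gen_lwsync_sketch(void)
{
    tcg_gen_mb(TCG_MO_LD_LD | TCG_MO_LD_ST | TCG_MO_ST_ST | TCG_BAR_SC);
}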
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220519135908.21282-5-npiggin@gmail.com>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
The generated eieio memory ordering semantics do not match the
instruction definition in the architecture. Add a big comment to
explain this strange instruction and correct the memory ordering
behaviour.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220519135908.21282-2-npiggin@gmail.com>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Move vmsumshm and vmsumshs to decodetree, declare vmsumshm helper with
TCG_CALL_NO_RWG, and drop the unused env argument.
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220517123929.284511-13-matheus.ferst@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Move vmsumuhm and vmsumuhs to decodetree, declare vmsumuhm helper with
TCG_CALL_NO_RWG, and drop the unused env argument.
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220517123929.284511-12-matheus.ferst@eldorado.org.br>
[danielhb: added #undef VMSUMUHM to fix ppc64 build]
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Move vmsumubm and vmsummbm to decodetree, declare both helpers with
TCG_CALL_NO_RWG, and drop the unused env argument.
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220517123929.284511-11-matheus.ferst@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Move xxextractuw and xxinsertw to decodetree, declare both helpers with
TCG_CALL_NO_RWG, and drop the unused env argument.
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220517123929.284511-9-matheus.ferst@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Move xvxsigsp to decodetree, declare helper_xvxsigsp with
TCG_CALL_NO_RWG, and drop the unused env argument.
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220517123929.284511-8-matheus.ferst@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Move xscvspdpn to decodetree, declare helper_xscvspdpn with
TCG_CALL_NO_RWG_SE and drop the unused env argument.
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220517123929.284511-7-matheus.ferst@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
fsel doesn't change FPSCR and CR1 is handled by gen_set_cr1_from_fpscr,
so helper_fsel doesn't need the env argument and can be declared with
TCG_CALL_NO_RWG_SE. We also take this opportunity to move the insn to
decodetree.
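The declaration in helper.h then has roughly the following shape; the
exact helper name and operand order follow the real patch and may differ:

/* A 3-operand fsel helper that neither reads nor writes TCG globals and
 * has no side effects, so calls to it can be dead-code eliminated. */
DEF_HELPER_FLAGS_3(FSEL, TCG_CALL_NO_RWG_SE, i64, i64, i64, i64)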
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220517123929.284511-6-matheus.ferst@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Helpers of VSX instructions without cpu_env as an argument do not access
globals.
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220517123929.284511-5-matheus.ferst@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Helpers of BCD instructions only access the VSRs supplied by the
TCGv_ptr arguments, no globals are accessed.
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220517123929.284511-4-matheus.ferst@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Helpers of vector instructions without cpu_env as an argument do not
access globals.
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220517123929.284511-3-matheus.ferst@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
The FI bit fix used the sfprf flag as the set_fi parameter for
do_float_check_status where applicable. Now, this patch renames that
flag to sfifprf to reflect this dual usage.
Signed-off-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Rashmica Gupta <rashmica.g@gmail.com>
Message-Id: <20220517161522.36132-4-victor.colombo@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
This patch fixes another not-so-clear situation in Power ISA
regarding the inexact bits in FPSCR. The ISA states that:
"""
When Overflow Exception is disabled (OE=0) and an
Overflow Exception occurs, the following actions are
taken:
...
2. Inexact Exception is set
XX <- 1
...
FI is set to 1
...
"""
However, when tested on Power 9 hardware, some instructions that
trigger OX don't set the FI bit:
xvcvdpsp(0x4050533fcdb7b95ff8d561c40bf90996) = FI: CLEARED -> CLEARED
xvnmsubmsp(0xf3c0c1fc8f3230, 0xbeaab9c5) = FI: CLEARED -> CLEARED
(just a few examples. Other instructions are also affected)
The root cause seems to be that only instructions that list the FI bit
under "Special Registers Altered" should modify it.
Today, QEMU does not behave like the hardware:
xvcvdpsp(0x4050533fcdb7b95ff8d561c40bf90996) = FI: CLEARED -> SET
xvnmsubmsp(0xf3c0c1fc8f3230, 0xbeaab9c5) = FI: CLEARED -> SET
(all tests assume FI is cleared beforehand)
Fix this by making float_overflow_excp() return float_flag_inexact
if it should update the inexact flags.
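A minimal sketch of the idea (the mask names below are illustrative,
not the exact QEMU definitions, and trap handling is elided):

    /* Returns the extra softfloat flags the caller should fold into FPSCR.
     * With OE=0 the overflow also implies inexact (XX and FI); with OE=1
     * the enabled-exception path is taken and FI is left to the
     * instruction's own "Special Registers Altered" handling. */
    static int float_overflow_excp(CPUPPCState *env)
    {
        env->fpscr |= FPSCR_OX_MASK;        /* sticky overflow bit */
        if (env->fpscr & FPSCR_OE_MASK) {
            /* enabled overflow exception: no implicit inexact update */
            return 0;
        }
        return float_flag_inexact;          /* caller sets XX and FI */
    }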
Signed-off-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Rashmica Gupta <rashmica.g@gmail.com>
Message-Id: <20220517161522.36132-3-victor.colombo@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
According to Power ISA, the FI bit in FPSCR is non-sticky.
This means that if an instruction is said to modify the FI bit, then
it should be set or cleared depending on the result of the
instruction. Otherwise, it should be kept as was before.
However, the following inconsistency was found when comparing results
from the hardware (tested on both a Power 9 processor and in
Power 10 Mambo):
(FI bit is set before the execution of the instruction)
Hardware: xscmpeqdp(0xff..ff, 0xff..ff) = FI: SET -> SET
QEMU: xscmpeqdp(0xff..ff, 0xff..ff) = FI: SET -> CLEARED
As the FI bit is non-sticky, and xscmpeqdp does not list it as a field
that is changed by the instruction, it should not be changed after its
execution.
This is happening to multiple instructions in the vsx implementations.
If the ISA does not list the FI bit as altered for a particular
instruction, then it should be kept as it was before the instruction.
QEMU is not following this behavior. Affected instructions include:
- xv* (all vsx-vector instructions);
- xscmp*, xsmax*, xsmin*;
- xstdivdp and similars;
(to identify the affected instructions, just search the ISA for
instructions that do not list FI in "Special Registers Altered")
Most instructions use the function do_float_check_status() to commit
changes in the inexact flag. So the fix is to add a parameter to it
that will control if the bit FI should be changed or not.
All users of do_float_check_status() are then modified to provide this
argument, controlling if that specific instruction changes bit FI or
not.
Some macro helpers are responsible both for instructions that change FI
and for instructions that aren't supposed to change it. This seems to
always overlap with the sfprf flag, so reuse that flag for this purpose
where applicable.
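A sketch of the resulting shape of do_float_check_status (the FI mask
name is illustrative and the real function also handles traps and other
status bits):

    static void do_float_check_status(CPUPPCState *env, bool change_fi,
                                      uintptr_t raddr)
    {
        int status = get_float_exception_flags(&env->fp_status);

        /* ... overflow/underflow handling elided ... */

        if (change_fi) {
            /* FI is non-sticky: only set or clear it from this result when
             * the instruction lists FI in "Special Registers Altered" */
            if (status & float_flag_inexact) {
                env->fpscr |= FPSCR_FI_MASK;
            } else {
                env->fpscr &= ~FPSCR_FI_MASK;
            }
        }
    }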
Signed-off-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220517161522.36132-2-victor.colombo@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Commit 74c4912f09 changed check_tlb_flush() to use
tlb_flush_all_cpus_synced() instead of calling tlb_flush() on each
CPU. However, as side effect of this, a CPU executing a ptesync
after a tlbie will have its TLB flushed only after exiting its
current Translation Block (TB).
This causes memory accesses to invalid pages to succeed, if they
happen to be on the same TB as the ptesync.
To fix this, use tlb_flush_all_cpus() instead, that immediately
flushes the TLB of the CPU executing the ptesync instruction.
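A rough sketch of the change in check_tlb_flush() (simplified; the real
helper's structure may differ slightly):

    void check_tlb_flush(CPUPPCState *env, bool global)
    {
        CPUState *cs = env_cpu(env);

        if (global && (env->tlb_need_flush & TLB_NEED_GLOBAL_FLUSH)) {
            env->tlb_need_flush &= ~TLB_NEED_GLOBAL_FLUSH;
            /* non-synced variant: this CPU's TLB is flushed right away,
             * before the rest of the current TB can run */
            tlb_flush_all_cpus(cs);
        } else if (env->tlb_need_flush & TLB_NEED_LOCAL_FLUSH) {
            env->tlb_need_flush &= ~TLB_NEED_LOCAL_FLUSH;
            tlb_flush(cs);
        }
    }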
Fixes: 74c4912f09 ("target/ppc: Fix synchronization of mttcg with broadcast TLB flushes")
Signed-off-by: Leandro Lupori <leandro.lupori@eldorado.org.br>
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220503163904.22575-1-leandro.lupori@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Zero selects all cpu features in disas/m68k.c,
which is really what we want -- not limited to 68040.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Message-Id: <20220430170225.326447-2-richard.henderson@linaro.org>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Hyper-V TLFS allows for L0 and L1 hypervisors to collaborate on L2's
TLB flush hypercalls handling. With the correct setup, L2's TLB flush
hypercalls can be handled by L0 directly, without the need to exit to
L1.
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20220525115949.1294004-6-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
KVM has kind of supported "extended GVA ranges" (up to 4095 additional
GFNs per hypercall) since the implementation of the Hyper-V PV TLB flush
feature (Linux 4.18), as a full TLB flush was always performed
regardless of the request. The "Extended GVA ranges for TLB flush
hypercalls" feature bit wasn't exposed then. Now, as KVM gains support
for fine-grained TLB flush handling, exposing this feature starts making
sense.
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20220525115949.1294004-5-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The Hyper-V specification allows passing parameters for certain
hypercalls in XMM registers ("XMM Fast Hypercall Input"). When the
feature is in use, hypercalls are processed faster because KVM can avoid
reading the guest's memory.
KVM supports the feature since v5.14.
Rename HV_HYPERCALL_{PARAMS_XMM_AVAILABLE -> XMM_INPUT_AVAILABLE} to
comply with KVM.
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20220525115949.1294004-4-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The newly introduced enlightenment allows L0 (KVM) and L1 (Hyper-V)
hypervisors to collaborate to avoid unnecessary updates to the L2
MSR-Bitmap upon vmexits.
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20220525115949.1294004-3-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Previously, HV_CPUID_NESTED_FEATURES.EAX CPUID leaf was handled differently
as it was only used to encode the supported eVMCS version range. In fact,
there are also feature (e.g. Enlightened MSR-Bitmap) bits there. In
preparation to adding these features, move HV_CPUID_NESTED_FEATURES leaf
handling to hv_build_cpuid_leaf() and drop now-unneeded 'hyperv_nested'.
No functional change intended.
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20220525115949.1294004-2-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Since KVM commit 5f76f6f5ff96 ("KVM: nVMX: Do not expose MPX VMX controls when guest MPX disabled")
it is not possible to disable MPX on a "-cpu host" just by adding "-mpx"
there if the host CPU does indeed support MPX.
QEMU will fail to set MSR_IA32_VMX_TRUE_{EXIT,ENTRY}_CTLS MSRs in this case
and so trigger an assertion failure.
Instead, besides "-mpx" one has to explicitly add also
"-vmx-exit-clear-bndcfgs" and "-vmx-entry-load-bndcfgs" to QEMU command
line to make it work, which is a bit convoluted.
Make the MPX-related bits in FEAT_VMX_{EXIT,ENTRY}_CTLS dependent on MPX
being actually enabled so such workarounds are no longer necessary.
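A sketch of how this can be expressed with the feature-dependency table
in target/i386/cpu.c (modelled on how other VMX dependencies are written
there; the exact entries in the patch may differ):

    {
        .from = { FEAT_7_0_EBX,        CPUID_7_0_EBX_MPX },
        .to   = { FEAT_VMX_EXIT_CTLS,  VMX_VM_EXIT_CLEAR_BNDCFGS },
    },
    {
        .from = { FEAT_7_0_EBX,        CPUID_7_0_EBX_MPX },
        .to   = { FEAT_VMX_ENTRY_CTLS, VMX_VM_ENTRY_LOAD_BNDCFGS },
    },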
Signed-off-by: Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
Message-Id: <51aa2125c76363204cc23c27165e778097c33f0b.1653323077.git.maciej.szmigiero@oracle.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Zicsr/Zifencei are no longer part of 'I' since ISA version 20190608, so
to fully express the capability of the CPU they should be exposed in the
isa_string.
Signed-off-by: Hongren (Zenithal) Zheng <i@zenithal.me>
Tested-by: Jiatai He <jiatai2021@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <YoTqwpfrodveJ7CR@Sun>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Currently, the [m|s]tval CSRs are set with trapping instruction encoding
only for illegal instruction traps taken at the time of instruction
decoding.
In the RISC-V world, a valid instruction might also trap as an illegal
or virtual instruction based on trapping bits in various CSRs (such as
mstatus.TVM or hstatus.VTVM).
We improve setting of [m|s]tval CSRs for all types of illegal and
virtual instruction traps.
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20220511144528.393530-4-apatel@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Currently, QEMU does not set hstatus.GVA bit for traps taken from
HS-mode into HS-mode which breaks the Xvisor nested MMU test suite
on QEMU. This was working previously.
This patch updates riscv_cpu_do_interrupt() to fix the above issue.
Fixes: 86d0c45739 ("target/riscv: Fixup setting GVA")
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20220511144528.393530-3-apatel@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
When hypervisor and VS CSRs are accessed from VS-mode or VU-mode,
the riscv_csrrw_check() function should generate a virtual instruction
trap instead of an illegal instruction trap.
Fixes: 0a42f4c440 ("target/riscv: Fix CSR perm checking for HS mode")
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Message-Id: <20220511144528.393530-2-apatel@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
"mimpid" cpu option was mistyped to "mipid".
Fixes: 9951ba94 ("target/riscv: Support configuarable marchid, mvendorid, mipid CSR values")
Signed-off-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20220523153147.15371-1-frank.chang@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
- setting ext_g will implicitly set ext_i
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20220518012611.6772-1-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
We should separate "check" and "configure" steps as possible.
This commit separates both steps except vector/Zfinx-related checks.
Signed-off-by: Tsukasa OI <research_trasio@irq.a4lg.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <c3145fa37a529484cf3047c8cb9841e9effad4b0.1652583332.git.research_trasio@irq.a4lg.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
QEMU allowed inconsistent configurations that made floating point
arithmetic effectively unusable.
This commit adds certain checks for consistent FP arithmetic:
- F requires Zicsr
- Zfinx requires Zicsr
- Zfh/Zfhmin require F
- D requires F
- V requires D
Because F/D/Zicsr are enabled by default (and an error will not occur
unless one or more of the prerequisites are manually disabled), this
commit just requires the user to give consistent combinations.
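A sketch of what these checks can look like in the cpu realize path
(error handling simplified; field names follow the RISCVCPUConfig style
used elsewhere in this series):

    if (cpu->cfg.ext_f && !cpu->cfg.ext_icsr) {
        error_setg(errp, "F extension requires Zicsr");
        return;
    }
    if ((cpu->cfg.ext_zfh || cpu->cfg.ext_zfhmin) && !cpu->cfg.ext_f) {
        error_setg(errp, "Zfh/Zfhmin extensions require F extension");
        return;
    }
    if (cpu->cfg.ext_d && !cpu->cfg.ext_f) {
        error_setg(errp, "D extension requires F extension");
        return;
    }
    if (cpu->cfg.ext_v && !cpu->cfg.ext_d) {
        error_setg(errp, "V extension requires D extension");
        return;
    }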
Signed-off-by: Tsukasa OI <research_trasio@irq.a4lg.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <00e7b1c6060dab32ac7d49813b1ca84d3eb63298.1652583332.git.research_trasio@irq.a4lg.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
On ISA version 20190608 or later, "G" expands to "IMAFD_Zicsr_Zifencei".
Since both "Zicsr" and "Zifencei" are enabled by default and "G" is
supposed to be (virtually) enabled as well, it should be safe to change
its expansion.
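A sketch of the expanded handling (field names as used in the
surrounding RISC-V patches; not copied verbatim from the patch):

    if (cpu->cfg.ext_g) {
        /* "G" = IMAFD_Zicsr_Zifencei on ISA 20190608 and later */
        cpu->cfg.ext_i = true;
        cpu->cfg.ext_m = true;
        cpu->cfg.ext_a = true;
        cpu->cfg.ext_f = true;
        cpu->cfg.ext_d = true;
        cpu->cfg.ext_icsr = true;
        cpu->cfg.ext_ifencei = true;
    }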
Signed-off-by: Tsukasa OI <research_trasio@irq.a4lg.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <d1b5be550a2893a0fd32c928f832d2ff7bfafe35.1652583332.git.research_trasio@irq.a4lg.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Because "G" virtual extension expands to "IMAFD", we cannot separately
disable extensions like "F" or "D" without disabling "G". Because all
"IMAFD" are enabled by default, it's harmless to disable "G" by default.
Signed-off-by: Tsukasa OI <research_trasio@irq.a4lg.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <cab7205f1d7668f642fa242386543334af6bc1bd.1652583332.git.research_trasio@irq.a4lg.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Because ext_? members are boolean variables, operator `&&' should be
used instead of `&'.
Signed-off-by: Tsukasa OI <research_trasio@irq.a4lg.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Message-Id: <91633f8349253656dd08bc8dc36498a9c7538b10.1652583332.git.research_trasio@irq.a4lg.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Because some operating systems don't correctly parse long ISA extension
strings, this commit adds a short-isa-string boolean option to disable
generating long ISA extension strings in the Device Tree.
For instance, enabling Zfinx and Zdinx extensions and booting Linux (5.17 or
earlier) with FPU support caused a kernel panic.
Operating systems for which short-isa-string might be helpful:
1. Linux (5.17 or earlier)
2. FreeBSD (at least 14.0-CURRENT)
3. OpenBSD (at least current development version)
Signed-off-by: Tsukasa OI <research_trasio@irq.a4lg.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <7c1fe5f06b0a7646a47e9bcdddb1042bb60c69c8.1652181972.git.research_trasio@irq.a4lg.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
This commit moves the ISA string conversion for the Zhinx and Zhinxmin
extensions. Because the "H" extension category is ordered after "V",
their ordering becomes valid (in canonical order).
Signed-off-by: Tsukasa OI <research_trasio@irq.a4lg.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <7a988aedb249b6709f9ce5464ff359b60958ca54.1652181972.git.research_trasio@irq.a4lg.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Vector whole register load instructions have EEW encoded in the opcode,
so we shouldn't take SEW here. Vector whole register store instructions
are always EEW=8.
Signed-off-by: eop Chen <eop.chen@sifive.com>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <165181414065.18540.14828125053334599921-0@git.sr.ht>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
VS mode access to hypervisor CSRs should generate virtual, not illegal,
instruction exceptions.
Don't return early and indicate an illegal instruction exception when
accessing a hypervisor CSR from VS mode. Instead, fall through to the
`hmode` predicate to return the correct virtual instruction exception.
Signed-off-by: Dylan Reid <dgreid@rivosinc.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20220506165456.297058-1-dgreid@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Live migration can happen when Arch LBR LBREn bit is cleared,
e.g., when migration happens after guest entered SMM mode.
In this case, we still need to migrate Arch LBR MSRs.
Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
Message-Id: <20220517155024.33270-1-weijiang.yang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
We had a few CPTR_* bits defined, but missed quite a few.
Complete all of the fields up to ARMv9.2.
Use FIELD_EX64 instead of manual extract32.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220517054850.177016-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This feature adds a new register, HCRX_EL2, which controls
many of the newer AArch64 features. So far the register is
effectively RES0, because none of the new features are done.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220517054850.177016-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
As per the description of the HCR_EL2.APK field in the ARMv8 ARM,
Pointer Authentication key accesses should only be trapped to Secure
EL2 if it is enabled.
Signed-off-by: Florian Lugou <florian.lugou@provenrun.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220517145242.1215271-1-florian.lugou@provenrun.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Currently we give all the v7-and-up CPUs a PMU with 4 counters. This
means that we don't provide the 6 counters that are required by the
Arm BSA (Base System Architecture) specification if the CPU supports
the Virtualization extensions.
Instead of having a single PMCR_NUM_COUNTERS, make each CPU type
specify the PMCR reset value (obtained from the appropriate TRM), and
use the 'N' field of that value to define the number of counters
provided.
This means that we now supply 6 counters instead of 4 for:
Cortex-A9, Cortex-A15, Cortex-A53, Cortex-A57, Cortex-A72,
Cortex-A76, Neoverse-N1, '-cpu max'
This CPU goes from 4 to 8 counters:
A64FX
These CPUs remain with 4 counters:
Cortex-A7, Cortex-A8
This CPU goes down from 4 to 3 counters:
Cortex-R5
Note that because we now use the PMCR reset value of the specific
implementation, we no longer set the LC bit out of reset. This has
an UNKNOWN value out of reset for all cores with any AArch32 support,
so guest software should be setting it anyway if it wants it.
This change was originally landed in commit f7fb73b8cd (during
the 6.0 release cycle) but was then reverted by commit
21c2dd77a6 before that release because it did not work with KVM.
This version fixes that by creating the scratch vCPU in
kvm_arm_get_host_cpu_features() with the KVM_ARM_VCPU_PMU_V3 feature
if KVM supports it, and then only asking KVM for the PMCR_EL0 value
if the vCPU has a PMU.
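For reference, the counter count comes from the N field of that reset
value (PMCR_EL0.N is bits [15:11]); a hypothetical helper extracting it
might look like:

    static inline unsigned pmcr_num_counters(uint64_t reset_pmcr)
    {
        /* PMCR_EL0.N, bits [15:11]: number of event counters implemented */
        return extract64(reset_pmcr, 11, 5);
    }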
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
[PMM: Added the correct value for a64fx]
Message-id: 20220513122852.4063586-1-peter.maydell@linaro.org
In commit 88ce6c6ee8 we switched from directly fishing the number
of breakpoints and watchpoints out of the ID register fields to
abstracting out functions to do this job, but we forgot to delete the
now-obsolete comment in define_debug_regs() about the relation
between the ID field value and the actual number of breakpoints and
watchpoints. Delete the obsolete comment.
Reported-by: CHRIS HOWARD <cvz185@web.de>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220513131801.4082712-1-peter.maydell@linaro.org
Give all the debug registers their correct names including the
index, rather than having multiple registers all with the
same name string, which is confusing when viewed over the
gdbstub interface.
Signed-off-by: CHRIS HOWARD <cvz185@web.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 4127D8CA-D54A-47C7-A039-0DB7361E30C0@web.de
[PMM: expanded commit message]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Make the GICv3 set its number of bits of physical priority from the
implementation-specific value provided in the CPU state struct, in
the same way we already do for virtual priority bits. Because this
would be a migration compatibility break, we provide a property
force-8-bit-prio which is enabled for 7.0 and earlier versioned board
models to retain the legacy "always use 8 bits" behaviour.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220512151457.3899052-6-peter.maydell@linaro.org
Message-id: 20220506162129.2896966-5-peter.maydell@linaro.org
The unsupported_encoding() macro logs a LOG_UNIMP message and then
generates code to raise the usual exception for an unallocated
encoding. Back when we were still implementing the A64 decoder this
was helpful for flagging up when guest code was using something we
hadn't yet implemented. Now that we completely cover the A64 instruction
set it is barely used. The only remaining uses are for five
instructions whose semantics are "UNDEF, unless being run under
external halting debug":
* HLT (when not being used for semihosting)
* DCPS1, DCPS2, DCPS3
* DRPS
QEMU doesn't implement external halting debug, so for us the UNDEF is
the architecturally correct behaviour (because it's not possible to
execute these instructions with halting debug enabled). The
LOG_UNIMP doesn't serve a useful purpose; replace these uses of
unsupported_encoding() with unallocated_encoding(), and delete the
macro.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220509160443.3561604-1-peter.maydell@linaro.org
The Armv8.4 feature FEAT_IDST specifies that exceptions generated by
read accesses to the feature ID space should report a syndrome code
of 0x18 (EC_SYSTEMREGISTERTRAP) rather than 0x00 (EC_UNCATEGORIZED).
The feature ID space is defined to be:
op0 == 3, op1 == {0,1,3}, CRn == 0, CRm == {0-7}, op2 == {0-7}
In our implementation we might return the EC_UNCATEGORIZED syndrome
value for a system register access in four cases:
* no reginfo struct in the hashtable
* cp_access_ok() fails (ie ri->access doesn't permit the access)
* ri->accessfn returns CP_ACCESS_TRAP_UNCATEGORIZED at runtime
* ri->type includes ARM_CP_RAISES_EXC, and the readfn raises
an UNDEF exception at runtime
We have very few regdefs that set ARM_CP_RAISES_EXC, and none of
them are in the feature ID space. (In the unlikely event that any
are added in future they would need to take care of setting the
correct syndrome themselves.) This patch deals with the other
three cases, and enables FEAT_IDST for AArch64 -cpu max.
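A sketch of the predicate for the feature ID space described above
(QEMU's actual check may be structured differently):

    static bool is_feature_id_space(int op0, int op1, int crn, int crm,
                                    int op2)
    {
        /* op0 == 3, op1 in {0,1,3}, CRn == 0, CRm in 0..7, op2 in 0..7 */
        return op0 == 3
            && (op1 == 0 || op1 == 1 || op1 == 3)
            && crn == 0
            && crm <= 7
            && op2 <= 7;
    }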
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220509155457.3560724-1-peter.maydell@linaro.org
Enable the FEAT_S2FWB for -cpu max. Since FEAT_S2FWB requires that
CLIDR_EL1.{LoUU,LoUIS} are zero, we explicitly squash these (the
inherited CLIDR_EL1 value from the Cortex-A57 has them as 1).
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220505183950.2781801-5-peter.maydell@linaro.org
Implement the handling of FEAT_S2FWB; the meat of this is in the new
combined_attrs_fwb() function which combines S1 and S2 attributes
when HCR_EL2.FWB is set.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220505183950.2781801-4-peter.maydell@linaro.org
Factor out the part of combine_cacheattrs() that is specific to
handling HCR_EL2.FWB == 0. This is the part where we combine the
memory type and cacheability attributes.
The "force Outer Shareable for Device or Normal Inner-NC Outer-NC"
logic remains in combine_cacheattrs() because it holds regardless
(this is the equivalent of the pseudocode EffectiveShareability()
function).
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220505183950.2781801-3-peter.maydell@linaro.org
In the original Arm v8 two-stage translation, both stage 1 and stage
2 specify memory attributes (memory type, cacheability,
shareability); these are then combined to produce the overall memory
attributes for the whole stage 1+2 access. In QEMU we implement this
by having get_phys_addr() fill in an ARMCacheAttrs struct, and we
convert both the stage 1 and stage 2 attribute bit formats to the
same encoding (an 8-bit attribute value matching the MAIR_EL1 fields,
plus a 2-bit shareability value).
The new FEAT_S2FWB feature allows the guest to enable a different
interpretation of the attribute bits in the stage 2 descriptors.
These bits can now be used to control details of how the stage 1 and
2 attributes should be combined (for instance they can say "always
use the stage 1 attributes" or "ignore the stage 1 attributes and
always be Device memory"). This means we need to pass the raw bit
information for stage 2 down to the function which combines the stage
1 and stage 2 information.
Add a field to ARMCacheAttrs that indicates whether the attrs field
should be interpreted as MAIR format, or as the raw stage 2 attribute
bits from the descriptor, and store the appropriate values when
filling in cacheattrs.
We only need to interpret the attrs field in a few places:
* in do_ats_write(), where we know to expect a MAIR value
(there is no ATS instruction to do a stage-2-only walk)
* in S1_ptw_translate(), where we want to know whether the
combined S1 + S2 attributes indicate Device memory that
should provoke a fault
* in combine_cacheattrs(), which does the S1 + S2 combining
Update those places accordingly.
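A sketch of the extended struct (the exact field name in QEMU may differ
from what is shown here):

    typedef struct ARMCacheAttrs {
        /* 8-bit MAIR-format attributes, or the raw stage 2 descriptor bits */
        unsigned int attrs:8;
        unsigned int shareability:2;   /* as in the VMSAv8-64 descriptors */
        bool is_s2_format;             /* true: attrs are raw stage 2 bits */
    } ARMCacheAttrs;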
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220505183950.2781801-2-peter.maydell@linaro.org
Merge tag 'for_upstream' of git://git.kernel.org/pub/scm/virt/kvm/mst/qemu into staging
virtio,pc,pci: fixes,cleanups,features
most of CXL support
fixes, cleanups all over the place
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
# -----BEGIN PGP SIGNATURE-----
#
# iQFDBAABCAAtFiEEXQn9CHHI+FuUyooNKB8NuNKNVGkFAmKCuLIPHG1zdEByZWRo
# YXQuY29tAAoJECgfDbjSjVRpdDUH/12SmWaAo+0+SdIHgWFFxsmg3t/EdcO38fgi
# MV+GpYdbp6TlU3jdQhrMZYmFdkVVydBdxk93ujCLbFS0ixTsKj31j0IbZMfdcGgv
# SLqnV+E3JdHqnGP39q9a9rdwYWyqhkgHoldxilIFW76ngOSapaZVvnwnOMAMkf77
# 1LieL4/Xq7N9Ho86Zrs3IczQcf0czdJRDaFaSIu8GaHl8ELyuPhlSm6CSqqrEEWR
# PA/COQsLDbLOMxbfCi5v88r5aaxmGNZcGbXQbiH9qVHw65nlHyLH9UkNTdJn1du1
# f2GYwwa7eekfw/LCvvVwxO1znJrj02sfFai7aAtQYbXPvjvQiqA=
# =xdSk
# -----END PGP SIGNATURE-----
# gpg: Signature made Mon 16 May 2022 01:48:50 PM PDT
# gpg: using RSA key 5D09FD0871C8F85B94CA8A0D281F0DB8D28D5469
# gpg: issuer "mst@redhat.com"
# gpg: Good signature from "Michael S. Tsirkin <mst@kernel.org>" [undefined]
# gpg: aka "Michael S. Tsirkin <mst@redhat.com>" [undefined]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 0270 606B 6F3C DF3D 0B17 0970 C350 3912 AFBE 8E67
# Subkey fingerprint: 5D09 FD08 71C8 F85B 94CA 8A0D 281F 0DB8 D28D 5469
* tag 'for_upstream' of git://git.kernel.org/pub/scm/virt/kvm/mst/qemu: (86 commits)
vhost-user-scsi: avoid unlink(NULL) with fd passing
virtio-net: don't handle mq request in userspace handler for vhost-vdpa
vhost-vdpa: change name and polarity for vhost_vdpa_one_time_request()
vhost-vdpa: backend feature should set only once
vhost-net: fix improper cleanup in vhost_net_start
vhost-vdpa: fix improper cleanup in net_init_vhost_vdpa
virtio-net: align ctrl_vq index for non-mq guest for vhost_vdpa
virtio-net: setup vhost_dev and notifiers for cvq only when feature is negotiated
hw/i386/amd_iommu: Fix IOMMU event log encoding errors
hw/i386: Make pic a property of common x86 base machine type
hw/i386: Make pit a property of common x86 base machine type
include/hw/pci/pcie_host: Correct PCIE_MMCFG_SIZE_MAX
include/hw/pci/pcie_host: Correct PCIE_MMCFG_BUS_MASK
docs/vhost-user: Clarifications for VHOST_USER_ADD/REM_MEM_REG
vhost-user: more master/slave things
virtio: add vhost support for virtio devices
virtio: drop name parameter for virtio_init()
virtio/vhost-user: dynamically assign VhostUserHostNotifiers
hw/virtio/vhost-user: don't suppress F_CONFIG when supported
include/hw: start documenting the vhost API
...
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Merge tag 'for-upstream' of https://gitlab.com/bonzini/qemu into staging
* fix WHPX debugging
* misc qga-vss fixes
* remove the deprecated CPU model 'Icelake-Client'
* support for x86 architectural LBR
* remove deprecated properties
* replace deprecated -soundhw with -audio
# -----BEGIN PGP SIGNATURE-----
#
# iQFIBAABCAAyFiEE8TM4V0tmI4mGbHaCv/vSX3jHroMFAmJ/hZ4UHHBib256aW5p
# QHJlZGhhdC5jb20ACgkQv/vSX3jHroN2Igf/bFs+yluOikt0eFNmXYnshrGBWPXr
# oam0iumPox34vTzZnjpSjF6tJGxHWOgi+wbgIvbwOYHA/ONxx8akW580j+1VhEWa
# X29VyUzjZBffgFtmlF4fM74/ELYm7s4c1a1/D9TpVP6Dr0fSWbMujbx4dfeVstvf
# sONN+A8sVxaNdV9QKPE6BvqfMlPLoCiigrOetf6iY1KuUtkQDF8xDB0MdzdutqAQ
# szAtQ0rrzjxDx9EuGN1SECFM1/riDUbtOOoA9g2C7gGKrx3/iUc6pzrkIcAfWLFK
# xXbH7+6Wynia0cbUxnrvRdY4daMIxm4N3wUvN7szXgF9kxYxeQcsdgGsNA==
# =n4lu
# -----END PGP SIGNATURE-----
# gpg: Signature made Sat 14 May 2022 03:34:06 AM PDT
# gpg: using RSA key F13338574B662389866C7682BFFBD25F78C7AE83
# gpg: issuer "pbonzini@redhat.com"
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" [undefined]
# gpg: aka "Paolo Bonzini <pbonzini@redhat.com>" [undefined]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4 E2F7 7E15 100C CD36 69B1
# Subkey fingerprint: F133 3857 4B66 2389 866C 7682 BFFB D25F 78C7 AE83
* tag 'for-upstream' of https://gitlab.com/bonzini/qemu: (23 commits)
configure: remove duplicate help messages
configure: remove another dead variable
build: remove useless dependency
introduce -audio as a replacement for -soundhw
soundhw: move help handling to vl.c
soundhw: unify initialization for ISA and PCI soundhw
soundhw: extract soundhw help to a separate function
soundhw: remove ability to create multiple soundcards
rng: make opened property read-only
crypto: make loaded property read-only
target/i386: Support Arch LBR in CPUID enumeration
target/i386: introduce helper to access supported CPUID
target/i386: Enable Arch LBR migration states in vmstate
target/i386: Add MSR access interface for Arch LBR
target/i386: Add XSAVES support for Arch LBR
target/i386: Enable support for XSAVES based features
target/i386: Add kvm_get_one_msr helper
target/i386: Add lbr-fmt vPMU option to support guest LBR
qdev-properties: Add a new macro with bitmask check for uint64_t property
i386/cpu: Remove the deprecated cpu model 'Icelake-Client'
...
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The check on x86ms->apic_id_limit in pc_machine_done() had two problems.
Firstly, we need KVM to support the X2APIC API in order to allow IRQ
delivery to APICs >= 255. So we need to call/check kvm_enable_x2apic(),
which was done elsewhere in *some* cases but not all.
Secondly, microvm needs the same check. So move it from pc_machine_done()
to x86_cpus_init() where it will work for both.
The check in kvm_cpu_instance_init() is now redundant and can be dropped.
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
Acked-by: Claudio Fontana <cfontana@suse.de>
Message-Id: <20220314142544.150555-1-dwmw2@infradead.org>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Merge tag 'or1k-pull-request-20220515' of https://github.com/stffrdhrn/qemu into staging
OpenRISC Fixes for 7.0
- A few or1ksim fixes and enhancements
- A fix for OpenRISC tcg backend around delay slot handling
# -----BEGIN PGP SIGNATURE-----
#
# iQIzBAABCAAdFiEE2cRzVK74bBA6Je/xw7McLV5mJ+QFAmKAWKsACgkQw7McLV5m
# J+SoFg/8Dlrc2BqjjXw9gpaQ18+3BRI6dMVPqHA22VJks88gykH7UWLUbrCxtKnS
# SBcIcpzu17nKDdfwYWndqCr0UBM/zM3JzFrTv4QEhTEg6Np7lSM2KVNEhjBPVGoW
# A7QOjPFrwItWOfAx6hrcczpj+L50iKuWeMW0XnEfqSeDYisxZcSp2yMoe5h3y7bF
# tlpo+ha/ir/fd2kMlFrQlPWYiWkWM05RLJJOlXhdRMF7hrW5qlHqEB/SVykUTf7V
# 6fqOFvY6r3vE5OFm0Scgf/k2AJIxwV8qXkBJ5/egv+ZqUidZBQ9nXtOw++vF2AWp
# eKoU2/c2XIxiF1Xdpgdi6a/CxlLqrr9jraQROB3GpaL9zGQvd//wUCg0F+QLicLv
# avq4lvNmnat89aXj1DQ+DWpLy0zaZFGmxsPR+KeBJ2wkuEJ3Vd4+uiuAyXnm9M8D
# wEE8mgFQYsTL1WlgHF4uNTDIx8OLS+4gYlBE3tffRksxyLLwzKHHgAfLdNZvhfx8
# QZBuPy+yyO8zjr3RUVUArBs/ukZHP1QwDE6uxmPKV34tvVEbFVeSFY3a1LmYV3w5
# mZNALNqf+h5Dq5Qo7f7cGNMrzhL53GTWPNX0MK5+SBDZF3/fpPZyvCr4Zd69Z5tD
# +YClfWBv8HPjdUf+IFHqyE8rURw/sgNvgB76GpalwcUYXRr7zTM=
# =tmP4
# -----END PGP SIGNATURE-----
# gpg: Signature made Sat 14 May 2022 06:34:35 PM PDT
# gpg: using RSA key D9C47354AEF86C103A25EFF1C3B31C2D5E6627E4
# gpg: Good signature from "Stafford Horne <shorne@gmail.com>" [unknown]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: D9C4 7354 AEF8 6C10 3A25 EFF1 C3B3 1C2D 5E66 27E4
* tag 'or1k-pull-request-20220515' of https://github.com/stffrdhrn/qemu:
target/openrisc: Do not reset delay slot flag on early tb exit
hw/openrisc: use right OMPIC size variable
hw/openrisc: support 4 serial ports in or1ksim
hw/openrisc: page-align FDT address
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
This was found when running the Linux crypto algorithm selftests used by
WireGuard: the tests would randomly fail. Investigation showed that the
issue was triggered by a tick timer interrupt raised while executing a
delay slot instruction at a page boundary.
This happened when handling the TB_EXIT_REQUESTED case in cpu_tb_exec.
OpenRISC doesn't implement synchronize_from_tb, so set_pc was being used
as a fallback. The OpenRISC set_pc implementation clears dflag, which
caused the exception handling logic to not account for the delay slot.
This was the bug: when execution resumed after the interrupt was
handled, it resumed in the wrong place.
Fix this by implementing synchronize_from_tb, which simply updates the
pc and does not clear the delay slot flag.
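A sketch of the new hook, which essentially just restores the PC without
touching dflag:

    static void openrisc_cpu_synchronize_from_tb(CPUState *cs,
                                                 const TranslationBlock *tb)
    {
        OpenRISCCPU *cpu = OPENRISC_CPU(cs);

        /* Unlike openrisc_cpu_set_pc(), do not clear env.dflag here, so
         * the delay-slot state survives a TB_EXIT_REQUESTED exit. */
        cpu->env.pc = tb->pc;
    }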
Reported-by: Jason A. Donenfeld <Jason@zx2c4.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Stafford Horne <shorne@gmail.com>
If CPUID.(EAX=07H, ECX=0):EDX[19] is set to 1, the processor
supports Architectural LBRs. In this case, CPUID leaf 01CH
indicates details of the Architectural LBRs capabilities.
XSAVE support for Architectural LBRs is enumerated in
CPUID.(EAX=0DH, ECX=0FH).
Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
Message-Id: <20220215195258.29149-9-weijiang.yang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The Arch LBR record MSRs and control MSRs will be migrated
to destination guest if the vcpus were running with Arch
LBR active.
Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
Message-Id: <20220215195258.29149-8-weijiang.yang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
In the first generation of Arch LBR, the maximum supported Arch LBR
depth is 32, and both host and guest use this value to set the depth
MSR. This simplifies the implementation, given the side effect of a
host/guest depth MSR mismatch: XRSTORS will reset all recording MSRs
to 0 if the saved depth does not match MSR_ARCH_LBR_DEPTH.
In most cases Arch LBR is not active, so check the control bit before
saving/restoring the big chunk of Arch LBR MSRs.
Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
Message-Id: <20220215195258.29149-7-weijiang.yang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Define Arch LBR bit in XSS and save/restore structure
for XSAVE area size calculation.
Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
Message-Id: <20220215195258.29149-6-weijiang.yang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
There are some new features, including Arch LBR, that depend on
XSAVES/XRSTORS support; the new instructions save/restore data based on
the feature bits enabled in XCR0 | XSS.
This patch adds the basic support for related CPUID enumeration
and meanwhile changes the name from FEAT_XSAVE_COMP_{LO|HI} to
FEAT_XSAVE_XCR0_{LO|HI} to differentiate clearly the feature
bits in XCR0 and those in XSS.
Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
Message-Id: <20220215195258.29149-5-weijiang.yang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
When trying to get one MSR from KVM, I found there was no existing
interface for it, while kvm_put_one_msr() is there. So here comes this
patch. It removes the redundant preparation code before finally calling
the KVM_GET_MSRS ioctl.
No functional change intended.
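A sketch of the helper, mirroring kvm_put_one_msr() (the exact signature
and return type in the patch may differ):

    static int kvm_get_one_msr(X86CPU *cpu, int index, uint64_t *value)
    {
        int ret;

        kvm_msr_buf_reset(cpu);
        kvm_msr_entry_add(cpu, index, 0);

        ret = kvm_vcpu_ioctl(CPU(cpu), KVM_GET_MSRS, cpu->kvm_msr_buf);
        if (ret < 0) {
            return ret;
        }
        assert(ret == 1);
        *value = cpu->kvm_msr_buf->entries[0].data;
        return ret;
    }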
Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
Message-Id: <20220215195258.29149-4-weijiang.yang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The Last Branch Recording (LBR) is a performance monitor unit (PMU)
feature on Intel processors which records a running trace of the most
recent branches taken by the processor in the LBR stack. This option
indicates the LBR format to enable for guest perf.
The LBR feature is enabled if the following conditions are met:
1) KVM is enabled and the PMU is enabled.
2) The msr-based-feature IA32_PERF_CAPABILITIES is supported on KVM.
3) The lbr_fmt value returned from the above MSR is non-zero.
4) The guest vcpu model supports FEAT_1_ECX.CPUID_EXT_PDCM.
5) The user-provided lbr-fmt value doesn't violate its bitmask (0x3f).
6) The target guest LBR format matches that of the host.
Co-developed-by: Like Xu <like.xu@linux.intel.com>
Signed-off-by: Like Xu <like.xu@linux.intel.com>
Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
Message-Id: <20220215195258.29149-3-weijiang.yang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Icelake is the codename for Intel's 3rd generation Xeon Scalable server
processors. There have never been client variants, so this
"Icelake-Client" CPU model was added in error.
It has been deprecated since v5.2; now it's time to remove it completely
from the code.
Signed-off-by: Robert Hoo <robert.hu@linux.intel.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Message-Id: <1647247859-4947-1-git-send-email-robert.hu@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This patch fixes the following error that would occur when trying to resume
a WHPX-accelerated VM from a breakpoint:
qemu: WHPX: Failed to set interrupt state registers, hr=c0350005
The error arises from an incorrect CR8 value being passed to
WHvSetVirtualProcessorRegisters() that doesn't match the
value set via WHvSetVirtualProcessorInterruptControllerState2().
Signed-off-by: Ivan Shcherbakov <ivan@sysprogs.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
When cache_info_passthrough is requested, QEMU passes the host values
of the cache information CPUID leaves down to the guest. However,
it blindly assumes that the CPUID leaf exists on the host, and this
cannot be guaranteed: for example, KVM has recently started to
synthesize AMD leaves up to 0x80000021 in order to provide accurate
CPU bug information to guests.
Querying a nonexistent host leaf fills the output arguments of
host_cpuid with data that (albeit deterministic) is nonsensical
as cache information, namely the data in the highest Intel CPUID
leaf. If said highest leaf is not ECX-dependent, this can even
cause an infinite loop when kvm_arch_init_vcpu prepares the input
to KVM_SET_CPUID2. The infinite loop is only terminated by an
abort() when the array gets full.
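The guard boils down to checking the host's maximum CPUID leaf before
passing anything through; a hypothetical wrapper (the name is invented
here) illustrating the idea:

    static void host_cpuid_checked(uint32_t leaf, uint32_t count,
                                   uint32_t *eax, uint32_t *ebx,
                                   uint32_t *ecx, uint32_t *edx)
    {
        uint32_t max_leaf;

        /* Maximum leaf of the same range (basic or extended) on the host */
        host_cpuid(leaf & 0x80000000, 0, &max_leaf, NULL, NULL, NULL);
        if (leaf > max_leaf) {
            /* Leaf not implemented on the host: return zeroes instead of
             * whatever the highest implemented leaf happens to contain. */
            *eax = *ebx = *ecx = *edx = 0;
            return;
        }
        host_cpuid(leaf, count, eax, ebx, ecx, edx);
    }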
Reported-by: Maxim Levitsky <mlevitsk@redhat.com>
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Cleaned up with scripts/clean-header-guards.pl.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20220506134911.2856099-5-armbru@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
We commonly define the header guard symbol without an explicit value.
Normalize the exceptions.
Done with scripts/clean-header-guards.pl.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20220506134911.2856099-4-armbru@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Leading underscores are ill-advised because such identifiers are
reserved. Trailing underscores are merely ugly. Strip both.
Our header guards commonly end in _H. Normalize the exceptions.
Macros should be ALL_CAPS. Normalize the exception.
Done with scripts/clean-header-guards.pl.
include/hw/xen/interface/ and tools/virtiofsd/ left alone, because
these were imported from Xen and libfuse respectively.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20220506134911.2856099-3-armbru@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Header guard symbols should match their file name to make guard
collisions less likely.
Cleaned up with scripts/clean-header-guards.pl, followed by some
renaming of new guard symbols picked by the script to better ones.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20220506134911.2856099-2-armbru@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
[Change to generated file ebpf/rss.bpf.skeleton.h backed out]
Enable the n1 for virt and sbsa board use.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-25-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Enable the a76 for virt and sbsa board use.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-24-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This extension concerns not merging memory access, which TCG does
not implement. Thus we can trivially enable this feature.
Add a comment to handle_hint for the DGH instruction, but no code.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-23-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This extension concerns cache speculation, which TCG does
not implement. Thus we can trivially enable this feature.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-22-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
There is no branch prediction in TCG, therefore there is no
need to actually include the context number into the predictor.
Therefore all we need to do is add the state for SCXTNUM_ELx.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-21-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This extension concerns branch speculation, which TCG does
not implement. Thus we can trivially enable this feature.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-20-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This feature is AArch64 only, and applies to physical SErrors,
which QEMU does not implement, thus the feature is a nop.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-19-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-18-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Check for and defer any pending virtual SError.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-17-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Virtual SError exceptions are raised by setting HCR_EL2.VSE,
and are routed to EL1 just like other virtual exceptions.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-16-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Enable writes to the TERR and TEA bits when RAS is enabled.
These bits are otherwise RES0.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-15-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Add only the system registers required to implement zero error
records. This means that all values for ERRSELR are out of range,
which means that it and all of the indexed error record registers
need not be implemented.
Add the EL2 registers required for injecting virtual SError.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-14-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This extension concerns changes to the External Debug interface,
with Secure and Non-secure access to the debug registers, and all
of it is outside the scope of QEMU. Indicating support for this
is mandatory with FEAT_SEL2, which we do implement.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-13-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The only portion of FEAT_Debugv8p2 that is relevant to QEMU
is CONTEXTIDR_EL2, which is also conditionally implemented
with FEAT_VHE. The rest of the debug extension concerns the
External debug interface, which is outside the scope of QEMU.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-12-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Use FIELD_DP{32,64} to manipulate id_pfr1 and id_aa64pfr0
during arm_cpu_realizefn.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-11-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Update the legacy feature names to the current names.
Provide feature names for id changes that were not marked.
Sort the field updates into increasing bitfield order.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-10-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Share the code to set AArch32 max features so that we no
longer have code drift between qemu{-system,}-{arm,aarch64}.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-9-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We set this for qemu-system-aarch64, but failed to do so
for the strictly 32-bit emulation.
Fixes: 3bec78447a ("target/arm: Provide ARMv8.4-PMU in '-cpu max'")
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-8-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Instead of starting with cortex-a15 and adding v8 features to
a v7 cpu, begin with a v8 cpu stripped of its aarch64 features.
This fixes the long-standing to-do where we only enabled v8
features for user-only.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Previously we were defining some of these in user-only mode,
but none of them are accessible from user-only, therefore
define them only in system mode.
This will shortly be used from cpu_tcg.c also.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This register is present for either VHE or Debugv8p2.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Drop zcr_no_el2_reginfo and merge the 3 registers into one array,
now that ZCR_EL2 can be squashed to RES0 and ZCR_EL3 dropped
while registering.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Drop el3_no_el2_cp_reginfo, el3_no_el2_v8_cp_reginfo, and the local
vpidr_regs definition, and rely on the squashing to ARM_CP_CONST
while registering for v8.
This is a behavior change for v7 cpus with Security Extensions and
without Virtualization Extensions, in that the virtualization cpregs
are now correctly not present. This would be a migration compatibility
break, except that we have an existing bug in which migration of 32-bit
cpus with Security Extensions enabled does not work.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
More gracefully handle cpregs when EL2 and/or EL3 are missing.
If the reg is entirely inaccessible, do not register it at all.
If the reg is for EL2, and EL3 is present but EL2 is not,
either discard, squash to res0, const, or keep unchanged.
Per rule RJFFP, mark the 4 aarch32 hypervisor access registers
with ARM_CP_EL3_NO_EL2_KEEP, and mark all of the EL2 address
translation and tlb invalidation "regs" ARM_CP_EL3_NO_EL2_UNDEF.
Mark the 2 virtualization processor id regs ARM_CP_EL3_NO_EL2_C_NZ.
This will simplify cpreg registration for conditional arm features.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Capstone should be superior to the old libopcode disassembler,
so we can drop the old file nowadays.
Message-Id: <20220505173619.488350-1-thuth@redhat.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Capstone should be superior to the old libopcode disassembler,
so we can drop the old file nowadays.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220412165836.355850-4-thuth@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Capstone should be superior to the old libopcode disassembler, so
we can drop the old file nowadays.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220412165836.355850-3-thuth@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Add support for xcr0 in order to be able to enable xsave/xrstor. This by
itself is not sufficient to enable xsave/xrstor; the WHPX XSAVE APIs
also need to be hooked up.
Signed-off-by: Sunil Muthuswamy <sunilmut@microsoft.com>
Message-Id: <MW2PR2101MB1116F07C07A26FD7A7ED8DCFC0780@MW2PR2101MB1116.namprd21.prod.outlook.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
We don't model caches, so for l*ct opcodes return tags with all bits
(including Valid) set to 0. For all other opcodes don't do anything.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Create clock input for the xtensa CPU device and initialize its
frequency to the default core frequency specified in the config.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
This is the core used in e.g. ESP8266 chips. It was imported using
import_core.sh, with the required files sourced from
https://github.com/espressif/xtensa-overlays
core-lx106.c was generated by the script; the only change is removing
the reference to core-matmap.h, which doesn't seem to be available.
Signed-off-by: Simon Safar <simon@simonsafar.com>
Reviewed-by: Max Filippov <jcmvbkbc@gmail.com>
Message-Id: <20220423040835.29254-1-simon@simonsafar.com>
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
- gen_jumpi passes target PC to the helper;
- gen_callw_slot uses callinc (1..3);
- gen_brcondi passes immediate field (less than 32 different possible
values) to the helper;
- disas_xtensa_insn passes PC to the helpers;
- translate_entry passes PC, stack register number (0..15) and stack
frame size to the helper;
- gen_check_exclusive passes PC and boolean flag to the helper;
- test_exceptions_retw passes PC to the helper;
- gen_check_atomctl passes PC to the helper;
- translate_ssai passes immediate shift amount (0..31) to the helper;
- gen_waiti passes next PC and an immediate (0..15) to the helper;
use tcg_constant_* for the constants listed above. Fold the gen_waiti body
into translate_waiti as it's the only user.
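The transformation is mechanical; a minimal sketch (the helper name is
made up for illustration, not taken from the xtensa translator):

    /* before: a temporary is allocated and must be freed */
    TCGv_i32 tmp = tcg_const_i32(imm);
    gen_helper_example(cpu_env, tmp);               /* hypothetical helper */
    tcg_temp_free_i32(tmp);

    /* after: constant temps are pooled and never freed explicitly */
    gen_helper_example(cpu_env, tcg_constant_i32(imm));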
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
FPU conversion opcodes pass scale (range 0..15) and rounding mode to
their helpers. Use tcg_constant_* for them.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Numbered special registers are small arrays of consecutive SRs. Use
tcg_constant_* for the SR index.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
dtlb is a boolean flag, use tcg_constant_* for it.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Use tcg_constant_* for exception number, exception cause, debug cause
code and exception PC.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Replace tcg_const_* for numeric literals with tcg_constant_*.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
pc and w are allocated with tcg_const_i32 but not freed in
gen_window_check. Use tcg_constant_i32 for them both.
Fixes: 2db59a76c4 ("target-xtensa: record available window in TB flags")
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Today we have the issue where MSR_* values are the 'inverted order'
bit numbers from what the ISA specifies, e.g. MSR_LE is bit 63 but
is defined as 0 in QEMU.
Add a macro to convert from QEMU order to ISA order.
This solution requires fewer changes than using the already defined
PPC_BIT macro, which would turn MSR_* into masks instead of bit
numbers.
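One plausible shape for such a conversion, shown only as a sketch (the
macro name here is an assumption, not necessarily the one in the patch):

    /* QEMU numbers MSR bits from the LSB (MSR_LE == 0); the ISA numbers
     * them from the MSB of the 64-bit MSR (LE == bit 63). */
    #define MSR_ISA_BIT_NR(qemu_bit)   (63 - (qemu_bit))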
Signed-off-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Acked-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220504210541.115256-23-victor.colombo@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Add FIELD macros for MSR bits that had an unused msr_* macro before.
Signed-off-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Acked-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220504210541.115256-22-victor.colombo@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
msr_de macro hides the usage of env->msr, which is a bad
behavior. Substitute it with FIELD_EX64 calls that explicitly use
env->msr as a parameter.
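The substitution pattern used throughout this series, as a sketch
(assuming a FIELD(MSR, DE, ...) definition from the earlier FIELD patch):

    /* before: the macro silently dereferences a hidden 'env' */
    bool debug_enabled = msr_de;

    /* after: the CPU state being read is explicit */
    bool debug_enabled = FIELD_EX64(env->msr, MSR, DE);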
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220504210541.115256-21-victor.colombo@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
msr_hv macro hides the usage of env->msr, which is a bad
behavior. Substitute it with FIELD_EX64 calls that explicitly use
env->msr as a parameter.
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220504210541.115256-20-victor.colombo@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
msr_ts macro hides the usage of env->msr, which is a bad
behavior. Substitute it with FIELD_EX64 calls that explicitly use
env->msr as a parameter.
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220504210541.115256-19-victor.colombo@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
msr_fe0 and msr_fe1 macros hide the usage of env->msr, which is a bad
behavior. Substitute it with FIELD_EX64 calls that explicitly use
env->msr as a parameter.
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220504210541.115256-18-victor.colombo@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
msr_ep macro hides the usage of env->msr, which is a bad behavior.
Substitute it with FIELD_EX64 calls that explicitly use env->msr
as a parameter.
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220504210541.115256-17-victor.colombo@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
msr_dr macro hides the usage of env->msr, which is a bad behavior.
Substitute it with FIELD_EX64 calls that explicitly use env->msr
as a parameter.
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220504210541.115256-16-victor.colombo@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
msr_ir macro hides the usage of env->msr, which is a bad behavior.
Substitute it with FIELD_EX64 calls that explicitly use env->msr
as a parameter.
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220504210541.115256-15-victor.colombo@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
msr_cm macro hides the usage of env->msr, which is a bad behavior.
Substitute it with FIELD_EX64 calls that explicitly use env->msr
as a parameter.
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220504210541.115256-14-victor.colombo@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
msr_fp macro hides the usage of env->msr, which is a bad behavior.
Substitute it with FIELD_EX64 calls that explicitly use env->msr
as a parameter.
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220504210541.115256-13-victor.colombo@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
msr_gs macro hides the usage of env->msr, which is a bad behavior.
Substitute it with FIELD_EX64 calls that explicitly use env->msr
as a parameter.
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220504210541.115256-12-victor.colombo@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
msr_me macro hides the usage of env->msr, which is a bad behavior.
Substitute it with FIELD_EX64 calls that explicitly use env->msr
as a parameter.
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220504210541.115256-11-victor.colombo@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
msr_pow macro hides the usage of env->msr, which is a bad behavior.
Substitute it with FIELD_EX64 calls that explicitly use env->msr
as a parameter.
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220504210541.115256-10-victor.colombo@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
msr_ce macro hides the usage of env->msr, which is a bad behavior.
Substitute it with FIELD_EX64 calls that explicitly use env->msr
as a parameter.
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220504210541.115256-9-victor.colombo@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
msr_ee macro hides the usage of env->msr, which is a bad behavior.
Substitute it with FIELD_EX64 calls that explicitly use env->msr
as a parameter.
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220504210541.115256-8-victor.colombo@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
msr_ile macro hides the usage of env->msr, which is a bad behavior.
Substitute it with FIELD_EX64 calls that explicitly use env->msr
as a parameter.
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220504210541.115256-7-victor.colombo@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
msr_ds macro hides the usage of env->msr, which is a bad behavior.
Substitute it with FIELD_EX64 calls that explicitly use env->msr
as a parameter.
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220504210541.115256-6-victor.colombo@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
msr_le macro hides the usage of env->msr, which is a bad behavior.
Substitute it with FIELD_EX64 calls that explicitly use env->msr
as a parameter.
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220504210541.115256-5-victor.colombo@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
msr_pr macro hides the usage of env->msr, which is a bad behavior.
Substitute it with FIELD_EX64 calls that explicitly use env->msr
as a parameter.
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220504210541.115256-4-victor.colombo@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Some msr_* macros are not used anywhere. Remove them as part of
the work to remove all hidden usage of *env.
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Message-Id: <20220504210541.115256-3-victor.colombo@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
The fpscr_* macros hide the usage of *env behind them.
Substitute the usage of these macros with `env->fpscr & FP_*` to make
the code cleaner.
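Sketch of the substitution, using FP_ZE as an arbitrary example flag:

    /* before: hidden use of env */
    bool ze = fpscr_ze;

    /* after: the CPU state being read is explicit */
    bool ze = env->fpscr & FP_ZE;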
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Message-Id: <20220504210541.115256-2-victor.colombo@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Per E500 core reference manual [1], chapter 8.4.4 "Branch Taken Debug
Event" and chapter 8.4.5 "Instruction Complete Debug Event":
"A branch taken debug event occurs if both MSR[DE] and DBCR0[BRT]
are set ... Branch taken debug events are not recognized if MSR[DE]
is cleared when the branch instruction executes."
"An instruction complete debug event occurs when any instruction
completes execution so long as MSR[DE] and DBCR0[ICMP] are both
set ... Instruction complete debug events are not recognized if
MSR[DE] is cleared at the time of the instruction execution."
The current code does not check the MSR.DE bit before setting the
HFLAGS_SE and HFLAGS_BE flags, which would cause an immediate debug
interrupt to be generated, e.g. when the DBCR0.ICMP bit is set by
guest software while MSR.DE is not set.
[1] https://www.nxp.com/docs/en/reference-manual/E500CORERM.pdf
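A sketch of the intended gating in the hflags computation; HFLAGS_SE and
HFLAGS_BE are the flags named above, while the DBCR0 spellings below are
assumptions rather than quotes from the patch:

    /* Only request the debug-event machinery when MSR[DE] is set. */
    if (env->msr & (1ull << MSR_DE)) {
        target_ulong dbcr0 = env->spr[SPR_BOOKE_DBCR0];  /* assumed SPR name */
        if (dbcr0 & DBCR0_ICMP) {                        /* assumed bit name */
            hflags |= 1 << HFLAGS_SE;
        }
        if (dbcr0 & DBCR0_BRT) {                         /* assumed bit name */
            hflags |= 1 << HFLAGS_BE;
        }
    }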
Signed-off-by: Bin Meng <bin.meng@windriver.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Lucas Mateus Castro <lucas.araujo@eldorado.org.br>
Message-Id: <20220421011729.1148727-1-bmeng.cn@gmail.com>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Init the struct to avoid Valgrind complaints about uninitialized bytes,
such as this one:
==39549== Syscall param ioctl(generic) points to uninitialised byte(s)
==39549== at 0x55864E4: ioctl (in /usr/lib64/libc.so.6)
==39549== by 0xD1F7EF: kvm_vm_ioctl (kvm-all.c:3035)
==39549== by 0xAF8F5B: kvm_get_radix_page_info (kvm.c:276)
==39549== by 0xB00533: kvmppc_host_cpu_class_init (kvm.c:2369)
==39549== by 0xD3DCE7: type_initialize (object.c:366)
==39549== by 0xD3FACF: object_class_foreach_tramp (object.c:1071)
==39549== by 0x502757B: g_hash_table_foreach (in /usr/lib64/libglib-2.0.so.0.7000.5)
==39549== by 0xD3FC1B: object_class_foreach (object.c:1093)
==39549== by 0xB0141F: kvm_ppc_register_host_cpu_type (kvm.c:2613)
==39549== by 0xAF87E7: kvm_arch_init (kvm.c:157)
==39549== by 0xD1E2A7: kvm_init (kvm-all.c:2595)
==39549== by 0x8E6E93: accel_init_machine (accel-softmmu.c:39)
==39549== Address 0x1fff00e208 is on thread 1's stack
==39549== in frame #2, created by kvm_get_radix_page_info (kvm.c:267)
==39549== Uninitialised value was created by a stack allocation
==39549== at 0xAF8EE8: kvm_get_radix_page_info (kvm.c:267)
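The fix amounts to zero-initializing the structure before handing it to
the ioctl; a sketch (struct and ioctl names inferred from the backtrace
above, treat them as assumptions):

    struct kvm_ppc_rmmu_info rmmu_info = { };  /* zero-init keeps Valgrind quiet */
    ret = kvm_vm_ioctl(kvm_state, KVM_PPC_GET_RMMU_INFO, &rmmu_info);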
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20220331001717.616938-5-danielhb413@gmail.com>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Init 'sregs' to avoid Valgrind complaints about uninitialized bytes
from kvmppc_put_books_sregs():
==54059== Thread 3:
==54059== Syscall param ioctl(generic) points to uninitialised byte(s)
==54059== at 0x55864E4: ioctl (in /usr/lib64/libc.so.6)
==54059== by 0xD1FA23: kvm_vcpu_ioctl (kvm-all.c:3053)
==54059== by 0xAFB18B: kvmppc_put_books_sregs (kvm.c:891)
==54059== by 0xAFB47B: kvm_arch_put_registers (kvm.c:949)
==54059== by 0xD1EDA7: do_kvm_cpu_synchronize_post_init (kvm-all.c:2766)
==54059== by 0x481AF3: process_queued_cpu_work (cpus-common.c:343)
==54059== by 0x4EF247: qemu_wait_io_event_common (cpus.c:412)
==54059== by 0x4EF343: qemu_wait_io_event (cpus.c:436)
==54059== by 0xD21E83: kvm_vcpu_thread_fn (kvm-accel-ops.c:54)
==54059== by 0xFFEBF3: qemu_thread_start (qemu-thread-posix.c:556)
==54059== by 0x54E6DC3: start_thread (in /usr/lib64/libc.so.6)
==54059== by 0x5596C9F: clone (in /usr/lib64/libc.so.6)
==54059== Address 0x799d1cc is on thread 3's stack
==54059== in frame #2, created by kvmppc_put_books_sregs (kvm.c:851)
==54059== Uninitialised value was created by a stack allocation
==54059== at 0xAFAEB0: kvmppc_put_books_sregs (kvm.c:851)
This happens because Valgrind does not consider the 'sregs'
initialization done by kvm_vcpu_ioctl() at the end of the function.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20220331001717.616938-4-danielhb413@gmail.com>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
'lpcr' is filled in by kvm_get_one_reg(). Valgrind doesn't
understand that and reports warnings such as this one for the function:
==55240== Thread 1:
==55240== Conditional jump or move depends on uninitialised value(s)
==55240== at 0xB011E4: kvmppc_enable_cap_large_decr (kvm.c:2546)
==55240== by 0x92F28F: cap_large_decr_cpu_apply (spapr_caps.c:523)
==55240== by 0x930C37: spapr_caps_cpu_apply (spapr_caps.c:921)
==55240== by 0x955D3B: spapr_reset_vcpu (spapr_cpu_core.c:73)
==55240== by 0x95612B: spapr_cpu_core_reset (spapr_cpu_core.c:209)
==55240== by 0x95619B: spapr_cpu_core_reset_handler (spapr_cpu_core.c:218)
==55240== by 0xD3605F: qemu_devices_reset (reset.c:69)
==55240== by 0x92112B: spapr_machine_reset (spapr.c:1641)
==55240== by 0x4FBD63: qemu_system_reset (runstate.c:444)
==55240== by 0x62812B: qdev_machine_creation_done (machine.c:1247)
==55240== by 0x5064C3: qemu_machine_creation_done (vl.c:2725)
==55240== by 0x5065DF: qmp_x_exit_preconfig (vl.c:2748)
==55240== Uninitialised value was created by a stack allocation
==55240== at 0xB01158: kvmppc_enable_cap_large_decr (kvm.c:2540)
Init 'lpcr' to avoid this warning.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20220331001717.616938-3-danielhb413@gmail.com>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Valgrind isn't convinced that we are initializing the values we assign
to env->spr[spr] because it doesn't understand that the 'val' union is
being written by the kvm_vcpu_ioctl() that follows (via struct
kvm_one_reg).
This results in Valgrind complaining about uninitialized values every
time we use env->spr in a conditional, like this instance:
==707578== Thread 1:
==707578== Conditional jump or move depends on uninitialised value(s)
==707578== at 0xA10A40: hreg_compute_hflags_value (helper_regs.c:106)
==707578== by 0xA10C9F: hreg_compute_hflags (helper_regs.c:173)
==707578== by 0xA110F7: hreg_store_msr (helper_regs.c:262)
==707578== by 0xA051A3: ppc_cpu_reset (cpu_init.c:7168)
==707578== by 0xD4730F: device_transitional_reset (qdev.c:799)
==707578== by 0xD4A11B: resettable_phase_hold (resettable.c:182)
==707578== by 0xD49A77: resettable_assert_reset (resettable.c:60)
==707578== by 0xD4994B: resettable_reset (resettable.c:45)
==707578== by 0xD458BB: device_cold_reset (qdev.c:296)
==707578== by 0x48FBC7: cpu_reset (cpu-common.c:114)
==707578== by 0x97B5EB: spapr_reset_vcpu (spapr_cpu_core.c:38)
==707578== by 0x97BABB: spapr_cpu_core_reset (spapr_cpu_core.c:209)
==707578== Uninitialised value was created by a stack allocation
==707578== at 0xB11F08: kvm_get_one_spr (kvm.c:543)
Initializing 'val' has no impact on the logic and makes the Valgrind output
more bearable.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20220331001717.616938-2-danielhb413@gmail.com>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
The ARMv8 manual defines that PMUSERENR_EL0.ER enables read-access
to both PMXEVCNTR_EL0 and PMEVCNTR<n>_EL0 registers, however,
we only use it for PMXEVCNTR_EL0. Extend to PMEVCNTR<n>_EL0 as well.
Signed-off-by: Alex Zuepke <alex.zuepke@tum.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220428132717.84190-1-alex.zuepke@tum.de
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Add the aa64 predicate for detecting RAS support from id registers.
We already have the aa32 version from the M-profile work.
Add the 'any' predicate for testing both aa64 and aa32.
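A sketch of the aa64 predicate, following the existing isar_feature_*
style (field spelling assumed):

    static inline bool isar_feature_aa64_ras(const ARMISARegisters *id)
    {
        return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, RAS) != 0;
    }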
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220501055028.646596-34-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220501055028.646596-24-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Since e03b56863d, our host endian indicator is unconditionally
set, which means that we can use a normal C condition.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220501055028.646596-20-richard.henderson@linaro.org
[PMM: quote correct git hash in commit message]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Put the block comments into the current coding style.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220501055028.646596-19-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Perform the override check early, so that it is still done
even when we decide to discard an unreachable cpreg.
Use assert, not printf+abort.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220501055028.646596-18-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Computing isbanked only once makes the code
a bit easier to read.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220501055028.646596-17-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Bool is a more appropriate type for these variables.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220501055028.646596-16-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Put most of the value writeback to the same place,
and improve the comment that goes with them.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220501055028.646596-15-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Move the computation of key to the top of the function.
Hoist the resolution of cp as well, as an input to the
computation of key.
This will be required by a subsequent patch.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220501055028.646596-14-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Simplify freeing cp_regs hash table entries by using a single
allocation for the entire value.
This fixes a theoretical bug if we were to ever free the entire
hash table, because we've been installing string literal constants
into the cpreg structure in define_arm_vh_e2h_redirects_aliases.
However, at present we only free entries created for AArch32
wildcard cpregs which get overwritten by more specific cpregs,
so this bug is never exposed.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220501055028.646596-13-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Cast the uint32_t key into a gpointer directly, which
allows us to avoid allocating storage for each key.
Use g_hash_table_lookup when we already have a gpointer
(e.g. for callbacks like count_cpreg), or when using
get_arm_cp_reginfo would require casting away const.
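A minimal illustration of the lookup pattern (the wrapper function here
is hypothetical, not a quote from helper.c):

    static const ARMCPRegInfo *lookup_by_key(ARMCPU *cpu, uint32_t key)
    {
        /* The 32-bit key fits in a pointer, so no heap-allocated key is needed. */
        return g_hash_table_lookup(cpu->cp_regs, (gpointer)(uintptr_t)key);
    }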
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220501055028.646596-12-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The new_key field is always non-zero -- drop the if.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220501055028.646596-11-richard.henderson@linaro.org
[PMM: reinstated dropped PL3_RW mask]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Give this enum a name and use it in ARMCPRegInfo and add_cpreg_to_hashtable.
Add the enumerator ARM_CP_SECSTATE_BOTH to clarify how 0
is handled in define_one_arm_cp_reg_with_opaque.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220501055028.646596-10-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Give this enum a name and use it in ARMCPRegInfo,
add_cpreg_to_hashtable and define_one_arm_cp_reg_with_opaque.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220501055028.646596-9-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Create a typedef as well, and use it in ARMCPRegInfo.
This won't be perfect for debugging, but it'll nicely
display the most common cases.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220501055028.646596-8-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Standardize on g_assert_not_reached() for "should not happen".
Retain abort() when preceded by fprintf or error_report.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220501055028.646596-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Instead of defining ARM_CP_FLAG_MASK to remove flags,
define ARM_CP_SPECIAL_MASK to isolate special cases.
Sort the specials to the low bits. Use an enum.
Split the large comment block so as to document each
value separately.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220501055028.646596-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
These particular data structures are not modified at runtime.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220501055028.646596-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Remove a possible source of error by removing REGINFO_SENTINEL
and using ARRAY_SIZE (conveniently hidden inside a macro) to
find the end of the set of regs being registered or modified.
The space saved by not having the extra array element reduces
the executable's .data.rel.ro section by about 9k.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220501055028.646596-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Rearrange the values of the enumerators of CPAccessResult
so that we may directly extract the target el. For the two
special cases in access_check_cp_reg, use CPAccessResult.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220501055028.646596-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Move ARMCPRegInfo and all related declarations to a new
internal header, out of the public cpu.h.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220501055028.646596-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This controls whether the PACI{A,B}SP instructions trap with BTYPE=3
(indirect branch from register other than x16/x17). The linux kernel
sets this in bti_enable().
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/998
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220427042312.294300-1-richard.henderson@linaro.org
[PMM: remove stray change to makefile comment]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Capstone should be superior to the old libopcode disassembler,
so we can drop the old file nowadays.
Message-Id: <20220412165836.355850-2-thuth@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Thomas Huth <thuth@redhat.com>
[ dh: take care of compat machines ]
Signed-off-by: David Miller <dmiller423@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20220428094708.84835-13-david@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Miller <dmiller423@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Tested-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Message-Id: <20220428094708.84835-12-david@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Miller <dmiller423@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Tested-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Message-Id: <20220428094708.84835-11-david@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Miller <dmiller423@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Tested-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Message-Id: <20220428094708.84835-10-david@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Miller <dmiller423@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Tested-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20220428094708.84835-9-david@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Miller <dmiller423@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Tested-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Message-Id: <20220428094708.84835-8-david@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Miller <dmiller423@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Tested-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Message-Id: <20220428094708.84835-7-david@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Miller <dmiller423@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Tested-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Message-Id: <20220428094708.84835-6-david@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Before we were able to bump up the qemu CPU model to a z13, we included
some experimental features during development in the "max" model only.
Nowadays, the "max" model corresponds exactly to the "qemu" CPU model
of the latest QEMU machine under TCG.
Let's remove all the special casing, effectively making both models
match completely from now on, and clean up.
Signed-off-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220428094708.84835-4-david@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
We don't include the "msa5" feature in the "qemu" model because it
generates a warning. The PoP states:
"The message-security-assist extension 5 requires
the secure-hash-algorithm (SHA-512) capabilities of
the message-security-assist extension 2 as a prerequisite. (March, 2015)"
As SHA-512 won't be supported in the near future, let's just drop the
feature from the "max" model. This avoids the warning and allows us for
making the "max" model match the "qemu" model (except for compat
machines). We don't lose much, as we only implement the function stubs
for MSA, excluding any real subfunctions.
Signed-off-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/897
Message-Id: <20220428094708.84835-3-david@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Fixes: 0e0a5b49ad ("s390x/tcg: Implement VECTOR STORE WITH LENGTH")
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Tested-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: David Miller <dmiller423@gmail.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Message-Id: <20220428094708.84835-2-david@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
- add zbk* and zk* strings to isa_edata_arr
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Tested-by: Jiatai He <jiatai2021@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20220426095204.24142-1-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Two non-contiguous PTEs can be mapped to contiguous paddrs. In this
case, walk_pte will erroneously merge them.
Enforce the split by also tracking the virtual base address.
Let's say we have the mapping:
0x81200000 -> 0x89623000 (4K)
0x8120f000 -> 0x89624000 (4K)
Before, walk_pte would have shown:
vaddr paddr size attr
---------------- ---------------- ---------------- -------
0000000081200000 0000000089623000 0000000000002000 rwxu-ad
as it only checks for contiguous paddrs. With this patch, it becomes:
vaddr paddr size attr
---------------- ---------------- ---------------- -------
0000000081200000 0000000089623000 0000000000001000 rwxu-ad
000000008120f000 0000000089624000 0000000000001000 rwxu-ad
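A sketch of the tightened merge condition (identifiers are illustrative,
not the actual monitor code):

    /* Only coalesce when the physical AND the virtual ranges are both
     * contiguous and the attributes match. */
    if (paddr == last_paddr + last_size &&
        vaddr == last_vaddr + last_size &&      /* new: track the virtual base */
        attr == last_attr) {
        last_size += pgsize;
    } else {
        dump_entry(last_vaddr, last_paddr, last_size, last_attr);  /* hypothetical */
        last_vaddr = vaddr;
        last_paddr = paddr;
        last_size = pgsize;
        last_attr = attr;
    }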
Signed-off-by: Ralf Ramsauer <ralf.ramsauer@oth-regensburg.de>
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20220423215907.673663-1-ralf.ramsauer@oth-regensburg.de>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20220423023510.30794-15-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
- add SEED CSR which must be accessed with a read-write instruction:
A read-only instruction such as CSRRS/CSRRC with rs1=x0 or CSRRSI/CSRRCI
with uimm=0 will raise an illegal instruction exception.
- add USEED, SSEED fields for MSECCFG CSR
Co-authored-by: Ruibo Lu <luruibo2000@163.com>
Co-authored-by: Zewen Ye <lustrew@foxmail.com>
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20220423023510.30794-13-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
- add sm3p0, sm3p1, sm4ed and sm4ks instructions
Co-authored-by: Ruibo Lu <luruibo2000@163.com>
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20220423023510.30794-12-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
- add sha512sum0, sha512sig0, sha512sum1 and sha512sig1 instructions
Co-authored-by: Zewen Ye <lustrew@foxmail.com>
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20220423023510.30794-11-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
- add sha256sig0, sha256sig1, sha256sum0 and sha256sum1 instructions
Co-authored-by: Zewen Ye <lustrew@foxmail.com>
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20220423023510.30794-9-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
- add aes32esmi, aes32esi, aes32dsmi and aes32dsi instructions
Co-authored-by: Zewen Ye <lustrew@foxmail.com>
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20220423023510.30794-7-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
- share it between target/arm and target/riscv
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220423023510.30794-6-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
- add xperm4 and xperm8 instructions
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20220423023510.30794-5-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
- reuse partial instructions of zbc extension, update extension check for them
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220423023510.30794-4-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
- reuse partial instructions of zbb extension, update extension check for them
- add brev8, pack, packh, packw, unzip, zip instructions
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220423023510.30794-3-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20220423023510.30794-2-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Allow the user to set the core's marchid, mvendorid and mimpid CSRs
through the -cpu command line option.
The default values of marchid and mimpid are built from QEMU's version
numbers.
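For example (property values below are purely illustrative):

    qemu-system-riscv64 -M virt -cpu rv64,mvendorid=0x489,marchid=0x8000000000000007,mimpid=0x10000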
Signed-off-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Jim Shu <jim.shu@sifive.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Message-Id: <20220422040436.2233-1-frank.chang@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Merge tag 'pull-target-arm-20220428' of https://git.linaro.org/people/pmaydell/qemu-arm into staging
target-arm queue:
* refactor to use tcg_constant where appropriate
* Advertise support for FEAT_TTL and FEAT_BBM level 2
* smmuv3: Cache event fault record
* smmuv3: Add space in guest error message
* smmuv3: Advertise support for SMMUv3.2-BBML2
* tag 'pull-target-arm-20220428' of https://git.linaro.org/people/pmaydell/qemu-arm: (54 commits)
hw/arm/smmuv3: Advertise support for SMMUv3.2-BBML2
target/arm: Advertise support for FEAT_BBM level 2
target/arm: Advertise support for FEAT_TTL
hw/arm/smmuv3: Add space in guest error message
hw/arm/smmuv3: Cache event fault record
target/arm: Use field names for accessing DBGWCRn
target/arm: Disable cryptographic instructions when neon is disabled
target/arm: Use tcg_constant for vector descriptor
target/arm: Use tcg_constant for do_brk{2,3}
target/arm: Use tcg_constant for predicate descriptors
target/arm: Use tcg_constant in do_zzi_{sat, ool}, do_fp_imm
target/arm: Use tcg_constant in SUBR
target/arm: Use tcg_constant in LD1, ST1
target/arm: Use tcg_constant in WHILE
target/arm: Use tcg_constant in do_clast_scalar
target/arm: Use tcg_constant in {incr, wrap}_last_active
target/arm: Use tcg_constant in FCPY, CPY
target/arm: Use tcg_constant in SINCDEC, INCDEC
target/arm: Use tcg_constant for trans_INDEX_*
target/arm: Use tcg_constant in trans_CSEL
...
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The description in the Arm ARM of the requirements of FEAT_BBM is
admirably clear on the guarantees it provides software, but slightly
more obscure on what that means for implementations. The description
of the equivalent SMMU feature in the SMMU specification (IHI0070D.b
section 3.21.1) is perhaps a bit more detailed and includes some
example valid implementation choices. (The SMMU version of this
feature is slightly tighter than the CPU version: the CPU is permitted
to raise TLB Conflict aborts in some situations that the SMMU may
not. This doesn't matter for QEMU because we don't want to do TLB
Conflict aborts anyway.)
The informal summary of FEAT_BBM is that it is about permitting an OS
to switch a range of memory between "covered by a huge page" and
"covered by a sequence of normal pages" without having to engage in
the 'break-before-make' dance that has traditionally been
necessary. The 'break-before-make' sequence is:
* replace the old translation table entry with an invalid entry
* execute a DSB insn
* execute a broadcast TLB invalidate insn
* execute a DSB insn
* write the new translation table entry
* execute a DSB insn
The point of this is to ensure that no TLB can simultaneously contain
TLB entries for the old and the new entry, which would traditionally
be UNPREDICTABLE (allowing the CPU to generate a TLB Conflict fault
or to use a random mishmash of values from the old and the new
entry). FEAT_BBM level 2 says "for the specific case where the only
thing that changed is the size of the block, the TLB is guaranteed
not to do weird things even if there are multiple entries for an
address", which means that software can now do:
* replace old translation table entry with new entry
* DSB
* broadcast TLB invalidate
* DSB
As the SMMU spec notes, valid ways to do this include:
* if there are multiple entries in the TLB for an address,
choose one of them and use it, ignoring the others
* if there are multiple entries in the TLB for an address,
throw them all out and do a page table walk to get a new one
QEMU's page table walk implementation for Arm CPUs already meets the
requirements for FEAT_BBM level 2. When we cache an entry in our TCG
TLB, we do so only for the specific (non-huge) page that the address
is in, and there is no way for the TLB data structure to ever have
more than one TLB entry for that page. (We handle huge pages only in
that we track what part of the address space is covered by huge pages
so that a TLB invalidate operation for an address in a huge page
results in an invalidation of the whole TLB.) We ignore the Contiguous
bit in page table entries, so we don't have to do anything for the
parts of FEAT_BBM that deal with changes to the Contiguous bit.
FEAT_BBM level 2 also requires that the nT bit in block descriptors
must be ignored; since commit 39a1fd2528 we do this.
It's therefore safe for QEMU to advertise FEAT_BBM level 2 by
setting ID_AA64MMFR2_EL1.BBM to 2.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220426160422.2353158-3-peter.maydell@linaro.org
The Arm FEAT_TTL architectural feature allows the guest to provide an
optional hint in an AArch64 TLB invalidate operation about which
translation table level holds the leaf entry for the address being
invalidated. QEMU's TLB implementation doesn't need that hint, and
we correctly ignore the (previously RES0) bits in TLB invalidate
operation values that are now used for the TTL field. So we can
simply advertise support for it in our 'max' CPU.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220426160422.2353158-2-peter.maydell@linaro.org
While defining these names, use the correct field width of 5 not 4 for
DBGWCR.MASK. This typo prevented setting a watchpoint larger than 32k.
Reported-by: Chris Howard <cvz185@web.de>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20220427051926.295223-1-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
As of now, the cryptographic instruction ISAR fields are never cleared,
so we can end up with a CPU that has cryptographic instructions but no
floating-point/neon instructions, which is not a possible configuration
according to the Arm specifications.
In QEMU, we have 3 kinds of cpus regarding cryptographic instructions:
+ no support
+ cortex-a57/a72: cryptographic extension is optional,
floating-point/neon is not.
+ cortex-a53: cryptographic extension is optional as well as
floating-point/neon. But cryptographic requires
floating-point/neon support.
Therefore we can safely clear the ISAR fields when neon is disabled.
Note that other Arm cpus seem to follow this. For example cortex-a55 is
like cortex-a53 and cortex-a76/cortex-a710 are like cortex-a57/a72.
Signed-off-by: Damien Hedde <damien.hedde@greensocs.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220427090117.6954-1-damien.hedde@greensocs.com
[PMM: fixed commit message typos]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220426163043.100432-48-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
In these cases, 't' did double-duty as zero source and
temporary destination. Split the two uses and narrow
the scope of the temp.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220426163043.100432-47-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
In these cases, 't' did double-duty as zero source and
temporary destination. Split the two uses.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220426163043.100432-46-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220426163043.100432-45-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220426163043.100432-44-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220426163043.100432-43-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220426163043.100432-42-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220426163043.100432-41-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220426163043.100432-40-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220426163043.100432-39-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220426163043.100432-38-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220426163043.100432-37-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220426163043.100432-36-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220426163043.100432-35-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220426163043.100432-34-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220426163043.100432-33-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220426163043.100432-32-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220426163043.100432-31-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220426163043.100432-30-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220426163043.100432-29-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220426163043.100432-28-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220426163043.100432-27-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220426163043.100432-26-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220426163043.100432-25-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220426163043.100432-24-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220426163043.100432-23-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Finish conversion of the file to tcg_constant_*.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220426163043.100432-22-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220426163043.100432-21-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220426163043.100432-20-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220426163043.100432-19-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220426163043.100432-18-richard.henderson@linaro.org
[PMM: Restore incorrectly removed free of t_false in disas_fp_csel()]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Existing temp usage treats t1 as both zero and as a
temporary. Rearrange to only require one temporary,
so remove t1 and rename t2.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220426163043.100432-17-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220426163043.100432-16-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220426163043.100432-15-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220426163043.100432-14-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220426163043.100432-13-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220426163043.100432-12-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220426163043.100432-11-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220426163043.100432-10-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220426163043.100432-9-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220426163043.100432-8-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220426163043.100432-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220426163043.100432-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Note that tmp was doing double-duty as zero
and then later as a temporary in its own right.
Split the use of 0 to a new variable 'zero'.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220426163043.100432-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220426163043.100432-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220426163043.100432-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220426163043.100432-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The abs1 function in ops_sse.h only works correctly when the result fits
in a signed int. This is fine most of the time because we're only dealing
with byte-sized values.
However, the pcmp_elen helper function uses abs1 to calculate the absolute
value of a cpu register. This incorrectly truncates to 32 bits, and will
give the wrong answer for the most negative value.
Fix by open-coding the saturation check before taking the absolute value.
Signed-off-by: Paul Brook <paul@nowt.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
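A minimal sketch of the open-coded check, assuming the value is being
clamped against an element-count limit (names are illustrative, not the
actual ops_sse.h code):

    static int64_t sat_abs(int64_t val, int64_t limit)
    {
        /* Saturate first, so the most negative value never reaches abs. */
        if (val > limit || val < -limit) {
            return limit;
        }
        return val < 0 ? -val : val;
    }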
Coverity warns that 14 << data32 may overflow with respect
to the target_ulong to which it is subsequently added.
We know this wasn't true because data32 is in [1,2],
but the suggested fix is perfectly fine.
Fixes: Coverity CID 1487135, 1487256
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Damien Hedde <damien.hedde@greensocs.com>
Message-Id: <20220401184635.327423-1-richard.henderson@linaro.org>
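A hedged illustration of that kind of fix (variable names are
placeholders): perform the shift in the wide destination type rather
than in int.

    /* Placeholder names; the point is the cast before the shift. */
    static uint64_t advance(uint64_t base, unsigned data32)
    {
        return base + ((uint64_t)14 << data32);
    }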
Coverity rightly reports that 0xff << pos can overflow.
This would affect the ICMH instruction.
Fixes: Coverity CID 1487161
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20220401193659.332079-1-richard.henderson@linaro.org>
The exception return address for nios2 is the instruction
after the one that was executing at the time of the exception.
We have so far implemented this by advancing the pc during the
process of raising the exception. It is perhaps a little less
confusing to do this advance in the translator (and helpers)
when raising the exception in the first place, so that we may
more closely match kernel sources.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20220421151735.31996-58-richard.henderson@linaro.org>
This is the cpu side of the operation. Register one irq line,
called EIC. Split out the rather different processing to a
separate function.
Delay initialization of gpio irqs until realize. We need to
provide a window after init in which the board can set eic_present.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20220421151735.31996-57-richard.henderson@linaro.org>
When CRS = 0, we restore from estatus; otherwise from sstatus.
Update for the new CRS.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220421151735.31996-56-richard.henderson@linaro.org>
Implement these out of line, so that tcg global temps
(aka the architectural registers) are synced back to
tcg storage as required. This makes sure that we get
the proper results when status.PRS == status.CRS.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220421151735.31996-55-richard.henderson@linaro.org>
Do not actually enable them so far, in terms of being able
to change the current register set, but add all of the
plumbing to address them. Do not enable them for user-only.
Add an env->regs pointer that handles the indirection to
the current register set. The naming of the pointer hides
the difference between old and new, user-only and sysemu.
From the notes on wrprs, which states that r0 must be initialized
before use in shadow register sets, infer that R_ZERO is *not*
hardwired to zero in shadow register sets, but that it is still
read-only. Introduce tbflags bit R0_0 to track that it has been
properly set to zero. Adjust load_gpr to reflect this.
At the same time we might as well special case crs == 0 to avoid
the indirection through env->regs during translation as well; this
is intended to be the most common case for non-interrupt handlers.
Init env->regs at reset.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20220421151735.31996-54-richard.henderson@linaro.org>
Indirect branches, plus eret and bret optionally raise
an exception when branching to a misaligned address.
The exception is required when an mmu is enabled, but
enable it always because the fallback behaviour is not
documented (though presumably it discards low bits).
For the purposes of the linux-user cpu loop, if EXCP_UNALIGN
(misaligned data) were to arrive, it would be treated the
same as EXCP_UNALIGND (misaligned destination). See the
!defined(CONFIG_NIOS2_ALIGNMENT_TRAP) block in kernel/traps.c.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220421151735.31996-53-richard.henderson@linaro.org>
Use lookup_and_goto_ptr for indirect chaining between TBs.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220421151735.31996-52-richard.henderson@linaro.org>
Depending on the reason for ending the TB, we can chain
to the next TB because the PC is constant.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220421151735.31996-51-richard.henderson@linaro.org>
Rather than force all callers to set this, do it
within the subroutine.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220421151735.31996-50-richard.henderson@linaro.org>
Split out a function to perform an indirect branch.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220421151735.31996-49-richard.henderson@linaro.org>
Unaligned traps are optional, but required with an mmu.
Turn them on always, because the fallback behaviour is undefined.
Enable alignment checks in the config file.
Unwind the guest pc properly from do_unaligned_access.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220421151735.31996-48-richard.henderson@linaro.org>
There's nothing about EH that affects translation,
so there's no need to include it in tb->flags.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220421151735.31996-47-richard.henderson@linaro.org>
Constrain all references to cpu_R[] to load_gpr and dest_gpr.
This will be required for supporting shadow register sets.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220421151735.31996-46-richard.henderson@linaro.org>
Do as little work as possible within the macro.
Split out helper functions and pass in arguments instead.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Rename the macro from gen_r_mul, because these are the multiply
variants that produce a high-part result. Do as little work as
possible within the macro; split out helper functions and pass
in arguments instead.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Split the macro in two, one for reg/imm and one for reg/reg.
Do as little work as possible within the macros; split out
helper functions and pass in arguments instead.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Do as little work as possible within the macro.
Split out helper functions and pass in arguments instead.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Do as little work as possible within the macro.
Split out helper functions and pass in arguments instead.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Currently the structures are anonymous within the macro.
Pull them out to standalone types.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20220421151735.31996-44-richard.henderson@linaro.org>
Replace current uses of tcg_const_tl, and remove the frees.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220421151735.31996-43-richard.henderson@linaro.org>
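An illustrative before/after of the conversion (operands are generic);
tcg_constant_tl() returns a read-only, cached constant that must not be
freed:

    /* before */
    TCGv t = tcg_const_tl(imm);
    tcg_gen_add_tl(dest, src, t);
    tcg_temp_free(t);

    /* after */
    tcg_gen_add_tl(dest, src, tcg_constant_tl(imm));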
Division may (optionally) raise a division exception.
Since the linux kernel has been prepared for this for
some time, enable it by default.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220421151735.31996-42-richard.henderson@linaro.org>
This interrupt bit is never set, so testing it in
nios2_cpu_has_work is pointless.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220421151735.31996-41-richard.henderson@linaro.org>
Without EIC, this bit is RES1. So set the bit at reset,
and add it to the readonly fields of CR_STATUS.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220421151735.31996-40-richard.henderson@linaro.org>
Copy the existing cpu_index into the space reserved for CR_CPUID.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220421151735.31996-39-richard.henderson@linaro.org>
Create an array of masks which detail the writable and readonly
bits for each control register. Apply them when writing to
control registers, including the write to status during eret.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20220421151735.31996-38-richard.henderson@linaro.org>
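A hedged sketch of how such masks are typically applied on a
control-register write (names are assumptions, not the actual nios2
code):

    /* Only the bits marked writable take the new value; the rest keep
     * their existing (read-only) value. */
    static uint32_t ctrl_write_masked(uint32_t old, uint32_t val,
                                      uint32_t writable)
    {
        return (old & ~writable) | (val & writable);
    }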
The 4 lower bits, D, PERM, BAD, DBL, are unconditionally set on any
exception with EH=0, or so says Table 42 (Processor Status After
Taking Exception).
We currently do not set PERM or BAD at all, and only set/clear
DBL for tlb miss, and do not clear DBL for any other exception.
It is a bit confusing to set D in tlb_fill and the rest during
do_interrupt, so move the setting of D to do_interrupt as well.
To do this, split EXCP_TLBD into two cases, EXCP_TLB_X and EXCP_TLB_D,
which allows us to distinguish them during do_interrupt. Choose
a value for EXCP_TLB_D such that when truncated it produces the
correct value for exception.CAUSE.
Rename EXCP_TLB[RWX] to EXCP_PERM_[RWX], to emphasize that the
exception is permissions related. Rename EXCP_SUPER[AD] to
EXCP_SUPERA_[DX] to emphasize that they are both "supervisor
address" exceptions, data and execute.
Retain the setting of tlbmisc.WE for the fast-tlb-miss path, as it
is being relied upon, but remove it from the permission path.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220421151735.31996-37-richard.henderson@linaro.org>
The register is entirely read-only for software, and we do not
implement ECC, so we need not deposit the cause into an existing
value; just create a new value from scratch.
Furthermore, exception.CAUSE is not written for break exceptions.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220421151735.31996-36-richard.henderson@linaro.org>
While some of the plumbing for misaligned data is present, in the form
of nios2_cpu_do_unaligned_access, the hook will not be called because
TARGET_ALIGNED_ONLY is not set in configs/targets/nios2-softmmu.mak.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20220421151735.31996-35-richard.henderson@linaro.org>
Performing this early means that we can merge more cases
within the non-logging switch statement.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220421151735.31996-34-richard.henderson@linaro.org>
Split out do_exception and do_iic_irq to handle bulk of the interrupt and
exception processing. Parameterize the changes required to cpu state.
The status.EH bit, which protects some data against double-faults,
is only present with the MMU. Several exception cases did not check
for status.EH being set, as required.
The status.IH bit, which had been set by EXCP_IRQ, is exclusive to
the external interrupt controller, which we do not yet implement.
The internal interrupt controller, when the MMU is also present,
sets the status.EH bit.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220421151735.31996-33-richard.henderson@linaro.org>
Decode 'break 1' during translation, rather than doing
it again during exception processing.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220421151735.31996-32-richard.henderson@linaro.org>
These symbols become available to the debugger.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220421151735.31996-31-richard.henderson@linaro.org>
Use FIELD_EX32 and FIELD_DP32 instead of managing the
masking by hand.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220421151735.31996-30-richard.henderson@linaro.org>
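For illustration, this is the registerfields.h pattern being adopted
(the register and field names here are made up, not the real nios2
definitions):

    FIELD(EXAMPLE_REG, MODE, 4, 2)      /* bits [5:4], purely illustrative */

    mode = FIELD_EX32(val, EXAMPLE_REG, MODE);     /* extract the field   */
    val  = FIELD_DP32(val, EXAMPLE_REG, MODE, 1);  /* deposit a new value */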
WE is the architectural name of the field, not WR.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220421151735.31996-29-richard.henderson@linaro.org>
Retain the helper macros for single bit fields as aliases to
the longer R_*_MASK names. Use FIELD_EX32 and FIELD_DP32
instead of manually manipulating the fields.
Since we're rewriting the references to CR_TLBACC_IGN_* anyway,
we correct the name of this field to IG, which is its name in
the official CPU documentation.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220421151735.31996-28-richard.henderson@linaro.org>
Use FIELD_EX32 and FIELD_DP32 instead of manual manipulation
of the fields.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220421151735.31996-27-richard.henderson@linaro.org>
Use FIELD_DP32 instead of manual shifting and masking.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220421151735.31996-26-richard.henderson@linaro.org>
Add all fields; retain the helper macros for single bit fields.
So far there are no uses of the multi-bit status fields.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220421151735.31996-25-richard.henderson@linaro.org>
Do not print control registers for user-only mode.
Rename reserved control registers to "resN", where
N is the control register index.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220421151735.31996-24-richard.henderson@linaro.org>
Place the control registers into their own array, env->ctrl[].
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220421151735.31996-23-richard.henderson@linaro.org>
This function is unused. The real computation of this value
is located in nios2_cpu_exec_interrupt.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220421151735.31996-22-richard.henderson@linaro.org>
We don't need to reference them often, and when we do it
is just as easy to load/store from cpu_env directly.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220421151735.31996-20-richard.henderson@linaro.org>
We had failed to copy BSTATUS back to STATUS, and to diagnose
supervisor-only use. The spec is light on the specifics of the
implementation of bret, but it is an easy assumption that
the restore into STATUS should work the same as eret.
Therefore, reuse the existing helper_eret.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reported-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220421151735.31996-19-richard.henderson@linaro.org>
The implementation of eret will become much more complex
with the introduction of shadow registers.
[rth: Split out of a larger patch for shadow register sets.
Directly exit to the cpu loop from the helper.]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Amir Gonnen <amir.gonnen@neuroblade.ai>
Message-Id: <20220303153906.2024748-3-amir.gonnen@neuroblade.ai>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220421151735.31996-18-richard.henderson@linaro.org>
It is cleaner to have a separate name for this variable.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220421151735.31996-17-richard.henderson@linaro.org>
Split NUM_CORE_REGS into components that can be used elsewhere.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Amir Gonnen <amir.gonnen@neuroblade.ai>
Message-Id: <20220303153906.2024748-3-amir.gonnen@neuroblade.ai>
[rth: Split out of a larger patch for shadow register sets.]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220421151735.31996-16-richard.henderson@linaro.org>
Whether the cpu is in user-mode or not is something that we
know at translation-time. We do not need to generate code
after having raised an exception.
Suggested-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220421151735.31996-15-richard.henderson@linaro.org>
The eret instruction is only allowed in supervisor mode.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Amir Gonnen <amir.gonnen@neuroblade.ai>
Message-Id: <20220303153906.2024748-2-amir.gonnen@neuroblade.ai>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220421151735.31996-14-richard.henderson@linaro.org>
Remove the #ifdef !defined(CONFIG_USER_ONLY) that surrounds
the whole file, and move helper.c to nios2_softmmu_ss.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20220421151735.31996-12-richard.henderson@linaro.org>
Since f5ef0e518d, we have a real page mapped for kuser,
which means the special casing for SIGSEGV can go away.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20220421151735.31996-11-richard.henderson@linaro.org>
Since 7827168471, this function is unused for user-only,
when the TCGCPUOps.do_interrupt hook itself became system-only.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20220421151735.31996-10-richard.henderson@linaro.org>
The last change to this file was made in 2012, so it seems
it is not really used anymore, and the content is likely
very out of date now.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220412113824.297108-1-thuth@redhat.com>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Use tcg_constant_{i32,i64} as appropriate throughout.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The operation we're performing with the movcond
is either min/max depending on cond -- simplify.
Use tcg_constant_i64 while we're at it.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Use tcg_constant_{i32,i64} as appropriate throughout.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Use tcg_constant_{i32,i64} as appropriate throughout.
This fixes a bug in trans_VSCCLRM() where we were leaking a TCGv.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The length of the previous insn may be computed from
the difference of start and end addresses.
Use tcg_constant_i32 while we're at it.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Use tcg_gen_umin_i32 instead of tcg_gen_movcond_i32.
Use tcg_constant_i32 while we're at it.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
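A hedged before/after sketch of the simplification ('bound' is an
illustrative constant):

    /* before: clamp via movcond against a temporary */
    TCGv_i32 max = tcg_const_i32(bound);
    tcg_gen_movcond_i32(TCG_COND_LEU, dest, src, max, src, max);
    tcg_temp_free_i32(max);

    /* after: clamp via unsigned minimum with a constant */
    tcg_gen_umin_i32(dest, src, tcg_constant_i32(bound));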
Instead of computing
tmp1 = shift & 0xff;
dest = (tmp1 > 0x1f ? 0 : value) << (tmp1 & 0x1f)
use
tmpd = value << (shift & 0x1f);
dest = shift & 0xe0 ? 0 : tmpd;
which has a flatter dependency tree.
Use tcg_constant_i32 while we're at it.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
For aa32, the function has a parameter to use the new el.
For aa64, that never happens.
Use tcg_constant_i32 while we're at it.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Common code for reset_btype and set_btype.
Use tcg_constant_i32.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This function is incorrect in that it does not properly consider
CPTR_EL2.FPEN. We've already got another mechanism for raising
an FPU access trap: ARM_CP_FPU, so use that instead.
Remove CP_ACCESS_TRAP_FP_EL{2,3}, which becomes unused.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Bool is a more appropriate type for this value.
Adjust the assignments to use true/false.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Bool is a more appropriate type for this value.
Move the member down in the struct to keep the
bool type members together and remove a hole.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Currently we assume all fields are 32-bit.
Prepare for fields of a single byte, using sizeof_field().
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
[PMM: use sizeof_field() instead of raw sizeof()]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Bool is a more appropriate type for this value.
Adjust the assignments to use true/false.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Bool is a more appropriate type for this value.
Move the member down in the struct to keep the
bool type members together and remove a hole.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Update SCTLR_ELx fields per ARM DDI0487 H.a.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Update SCR_EL3 fields per ARM DDI0487 H.a.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Update isar fields per ARM DDI0487 H.a.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
In a GICv3, it is impossible for the GIC to deliver a VIRQ or VFIQ to
the CPU unless the CPU has EL2, because VIRQ and VFIQ are only
configurable via EL2-only system registers. Moreover, in our
implementation we were only calculating and updating the state of the
VIRQ and VFIQ lines in gicv3_cpuif_virt_irq_fiq_update() when those
EL2 system registers changed. We were therefore able to assert in
arm_cpu_set_irq() that we didn't see a VIRQ or VFIQ line update if
EL2 wasn't present.
This assumption no longer holds with GICv4:
* even if the CPU does not have EL2 the guest is able to cause the
GIC to deliver a virtual LPI by programming the ITS (which is a
silly thing for it to do, but possible)
* because we now need to recalculate the state of the VIRQ and VFIQ
lines in more cases than just "some EL2 GIC sysreg was written",
we will see calls to arm_cpu_set_irq() for "VIRQ is 0, VFIQ is 0"
even if the guest is not using the virtual LPI parts of the ITS
Remove the assertions, and instead simply ignore the state of the
VIRQ and VFIQ lines if the CPU does not have EL2.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220408141550.1271295-6-peter.maydell@linaro.org
Merge tag 'pull-riscv-to-apply-20220422-1' of github.com:alistair23/qemu into staging
First RISC-V PR for QEMU 7.1
* Add support for Ibex SPI to OpenTitan
* Add support for privileged spec version 1.12.0
* Use privileged spec version 1.12.0 for virt machine by default
* Allow software access to MIP SEIP
* Add initial support for the Sdtrig extension
* Optimisations for vector extensions
* Improvements to the misa ISA string
* Add isa extension strings to the device tree
* Don't allow `-bios` options with KVM machines
* Fix NAPOT range computation overflow
* Fix DT property mmu-type when CPU mmu option is disabled
* Make RISC-V ACLINT mtime MMIO register writable
* Add and enable native debug feature
* Support 64bit fdt address.
* tag 'pull-riscv-to-apply-20220422-1' of github.com:alistair23/qemu: (31 commits)
hw/riscv: boot: Support 64bit fdt address.
hw/core: tcg-cpu-ops.h: Update comments of debug_check_watchpoint()
target/riscv: cpu: Enable native debug feature
target/riscv: machine: Add debug state description
target/riscv: csr: Hook debug CSR read/write
target/riscv: cpu: Add a config option for native debug
target/riscv: debug: Implement debug related TCGCPUOps
hw/intc: riscv_aclint: Add reset function of ACLINT devices
hw/intc: Make RISC-V ACLINT mtime MMIO register writable
hw/intc: Support 32/64-bit mtimecmp and mtime accesses in RISC-V ACLINT
hw/intc: Add .impl.[min|max]_access_size declaration in RISC-V ACLINT
hw/riscv: virt: fix DT property mmu-type when CPU mmu option is disabled
target/riscv/pmp: fix NAPOT range computation overflow
hw/riscv: virt: Exit if the user provided -bios in combination with KVM
target/riscv: Use cpu_loop_exit_restore directly from mmu faults
target/riscv: fix start byte for vmv<nf>r.v when vstart != 0
target/riscv: Add isa extenstion strings to the device tree
target/riscv: misa to ISA string conversion fix
target/riscv: optimize helper for vmv<nr>r.v
target/riscv: optimize condition assign for scale < 0
...
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Turn on native debug feature by default for all CPUs.
Signed-off-by: Bin Meng <bin.meng@windriver.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20220421003324.1134983-6-bmeng.cn@gmail.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Add a subsection to machine.c to migrate debug CSR state.
Signed-off-by: Bin Meng <bin.meng@windriver.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20220421003324.1134983-5-bmeng.cn@gmail.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
This adds debug CSR read/write support to the RISC-V CSR RW table.
Signed-off-by: Bin Meng <bin.meng@windriver.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20220421003324.1134983-4-bmeng.cn@gmail.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Add a config option to enable support for native M-mode debug.
This is disabled by default and can be enabled with 'debug=true'.
Signed-off-by: Bin Meng <bin.meng@windriver.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20220421003324.1134983-3-bmeng.cn@gmail.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Implement .debug_excp_handler, .debug_check_{breakpoint, watchpoint}
TCGCPUOps and hook them into riscv_tcg_ops.
Signed-off-by: Bin Meng <bin.meng@windriver.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20220421003324.1134983-2-bmeng.cn@gmail.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The RISC-V privileged spec defines that mtime is exposed as a
memory-mapped machine-mode read-write register. However, as QEMU uses
the host monotonic timer as its timer source, mtime has so far been
read-only in the RISC-V ACLINT.
This patch makes mtime writable by recording the delta between the
mtime value being written and the timer value at the time mtime is
written. The delta is then added back whenever the timer value is
retrieved.
Signed-off-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Jim Shu <jim.shu@sifive.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20220420080901.14655-4-frank.chang@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
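A hedged sketch of the delta scheme (field and helper names are
assumptions, not the actual hw/intc/riscv_aclint.c code):

    static uint64_t mtime_read(RISCVAclintMTimerState *s)
    {
        return host_timer_value(s) + s->time_delta;
    }

    static void mtime_write(RISCVAclintMTimerState *s, uint64_t value)
    {
        /* Remember how far the guest-written mtime is from the host timer. */
        s->time_delta = value - host_timer_value(s);
    }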
There is an overflow with the current code where a pmpaddr value of
0x1fffffff is decoded as sa=0 and ea=0 whereas it should be sa=0 and
ea=0xffffffff.
Fix that by simplifying the computation. There is in fact no need for
ctz64() nor special case for -1 to achieve proper results.
Signed-off-by: Nicolas Pitre <nico@fluxnic.net>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <rq81o86n-17ps-92no-p65o-79o88476266@syhkavp.arg>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
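A sketch of the simplified decode consistent with the example above (the
exact helper in target/riscv/pmp.c may differ in detail): fold in the two
implicit low address bits first, then the trailing-ones arithmetic yields
the start and end addresses directly.

    static void napot_decode(target_ulong a, target_ulong *sa,
                             target_ulong *ea)
    {
        a = (a << 2) | 0x3;
        *sa = a & (a + 1);   /* clear the trailing 1s -> start address */
        *ea = a | (a + 1);   /* extend past them      -> end address   */
    }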
The riscv_raise_exception function stores its argument into
exception_index and then exits to the main loop. When we
have already set exception_index, we can just exit directly.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20220401125948.79292-2-richard.henderson@linaro.org>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The spec for vmv<nf>r.v says: 'the instructions operate as if EEW=SEW,
EMUL = NREG, effective length evl= EMUL * VLEN/SEW.'
So the start byte for vstart != 0 should take sew into account.
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20220330021316.18223-1-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The Linux kernel parses the ISA extensions from the "riscv,isa" DT
property. Until now it parsed only the single-letter base extensions.
A generic ISA extension parsing framework was proposed[1] recently that
can parse multi-letter ISA extensions as well.
Generate the extended ISA string by appending each available ISA
extension to the "riscv,isa" string if it is enabled, so that the kernel
can process it.
[1] https://lkml.org/lkml/2022/2/15/263
Reviewed-by: Anup Patel <anup@brainfault.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Tested-by: Bin Meng <bmeng.cn@gmail.com>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
Suggested-by: Heiko Stubner <heiko@sntech.de>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
Message-Id: <20220329195657.1725425-1-atishp@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Some bits in RISC-V `misa' CSR should not be reflected in the ISA
string. For instance, `S' and `U' (represents existence of supervisor
and user mode, respectively) in `misa' CSR must not be copied since
neither `S' nor `U' are valid single-letter extensions.
This commit also removes all reserved/dropped single-letter "extensions"
from the list.
- "B": Not going to be a single-letter extension (misa.B is reserved).
- "J": Not going to be a single-letter extension (misa.J is reserved).
- "K": Not going to be a single-letter extension (misa.K is reserved).
- "L": Dropped.
- "N": Dropped.
- "T": Dropped.
It also clarifies that the variable `riscv_single_letter_exts' is a
single-letter extension order list.
Signed-off-by: Tsukasa OI <research_trasio@irq.a4lg.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <4a4c11213a161a7eedabe46abe58b351bb0e2ef2.1648473008.git.research_trasio@irq.a4lg.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
LEN is not used in the GEN_VEXT_VMV_WHOLE macro, so the vmv<nr>r.v
instructions can share the same helper.
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20220325085902.29500-2-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
For some cases, scale is always less than or equal to 0, since lmul is not larger than 3.
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20220325085902.29500-1-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
This adds initial support for the Sdtrig extension via the Trigger
Module, as defined in the RISC-V Debug Specification [1].
Only "Address / Data Match" trigger (type 2) is implemented as of now,
which is mainly used for hardware breakpoint and watchpoint. The number
of type 2 triggers implemented is 2, which is the number that we can
find in the SiFive U54/U74 cores.
[1] https://github.com/riscv/riscv-debug-spec/raw/master/riscv-debug-stable.pdf
Signed-off-by: Bin Meng <bin.meng@windriver.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20220315065529.62198-2-bmeng.cn@gmail.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The RISC-V specification states that:
"Supervisor-level external interrupts are made pending based on the
logical-OR of the software-writable SEIP bit and the signal from the
external interrupt controller."
We currently only allow either the interrupt controller or software to
set the bit, which is incorrect.
This patch removes the miclaim mask when writing MIP to allow M-mode
software to inject interrupts, even with an interrupt controller.
We then also need to keep track of which source is setting MIP_SEIP. The
final value is an OR of both, so we add two bools and use them to keep
track of the current state. This way either source can change without
losing the correct value.
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/904
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220317061817.3856850-3-alistair.francis@opensource.wdc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
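A hedged sketch of the two-source tracking (the bool names are
assumptions):

    static void riscv_update_seip(CPURISCVState *env)
    {
        /* MIP_SEIP is the OR of the software-written bit and the level
         * driven by the external interrupt controller. */
        if (env->software_seip || env->external_seip) {
            env->mip |= MIP_SEIP;
        } else {
            env->mip &= ~MIP_SEIP;
        }
    }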
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220317061817.3856850-2-alistair.francis@opensource.wdc.com>
The virt machine now uses privileged specification version 1.12.
All other machines continue to use the default version defined for that
machine unless it is explicitly changed to 1.12 by the user.
This commit enforces the privilege version for csrs introduced in
v1.12 or after.
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
Message-Id: <20220303185440.512391-7-atishp@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The RISC-V privileged specification v1.12 defines a few execution
environment configuration CSRs that can be used to enable/disable
extensions per privilege level.
Add basic support for these CSRs.
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
Message-Id: <20220303185440.512391-6-atishp@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The RISC-V privileged specification v1.12 introduced mconfigptr, which
holds the physical address of a configuration data structure. As QEMU
doesn't have a configuration data structure, mconfigptr reads as zero,
which is valid per the priv spec.
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
Message-Id: <20220303185440.512391-5-atishp@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
To allow/disallow CSR access based on the privilege spec, a new field
is introduced in csr_ops. It also adds the privileged specification
version (v1.12) for the CSRs introduced in v1.12. This includes the
newly ratified extensions such as Vector and Hypervisor, and the
secconfig CSR.
However, it doesn't enforce the privilege version in this commit.
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
Message-Id: <20220303185440.512391-4-atishp@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Add the definition for ratified privileged specification version v1.12
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
Message-Id: <20220303185440.512391-3-atishp@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Currently, the privileged specification versions are defined in
a complex manner for no benefit.
Simplify this by changing them to a simple enum.
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
Message-Id: <20220303185440.512391-2-atishp@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
`cpu_pc` at this point does not necessarily point to the current
instruction (i.e., the wait instruction being translated), so it is
incorrect to calculate the new value of `cpu_pc` based on it. It must
instead be updated with `ctx->base.pc_next`, which contains the correct
address of the next instruction.
This change fixes the wait instruction skipping the subsequent branch
when used in an idle loop like this:
0: wait
bra.b 0b
brk // should be unreachable
Signed-off-by: Tomoaki Kawada <i@yvt.jp>
Reviewed-by: Yoshinori Sato <ysato@users.sourceforge.jp>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220417060224.2131788-1-i@yvt.jp>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
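A hedged sketch of the corrected sequence in the RX translator (the
helper name is an assumption):

    /* Point PC at the instruction after 'wait' before sleeping, using
     * the translator's own notion of the next insn rather than cpu_pc. */
    tcg_gen_movi_i32(cpu_pc, ctx->base.pc_next);
    gen_helper_wait(cpu_env);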
This patch fixes the implementation of the wait instruction to
implicitly update PSW.I as required by the ISA specification.
Signed-off-by: Tomoaki Kawada <i@yvt.jp>
Reviewed-by: Yoshinori Sato <ysato@users.sourceforge.jp>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220417045937.2128699-1-i@yvt.jp>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
We properly perform this swap in helper_set_psw for MVTC,
but we missed doing so for the CLRPSW/SETPSW of the U bit.
Reported-by: Tomoaki Kawada <i@yvt.jp>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Yoshinori Sato <ysato@users.sourceforge.jp>
Message-Id: <20220417165130.695085-5-richard.henderson@linaro.org>
Have one check in move_to_cr instead of one in each
function that calls move_to_cr.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Yoshinori Sato <ysato@users.sourceforge.jp>
Message-Id: <20220417165130.695085-4-richard.henderson@linaro.org>
With this, we don't need movcond to determine
which stack pointer is current.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Yoshinori Sato <ysato@users.sourceforge.jp>
Message-Id: <20220417165130.695085-3-richard.henderson@linaro.org>
Copy tb->flags into ctx->tb_flags; we'll want to modify
this value throughout the tb in future.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Yoshinori Sato <ysato@users.sourceforge.jp>
Message-Id: <20220417165130.695085-2-richard.henderson@linaro.org>
G_NORETURN was introduced in glib 2.68; fall back to G_GNUC_NORETURN in
glib-compat.
Note that this attribute must be placed before the function declaration
(bringing a bit of consistency to its usage in the QEMU codebase).
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Warner Losh <imp@bsdimp.com>
Message-Id: <20220420132624.2439741-20-marcandre.lureau@redhat.com>
This patch adds tcg accessors for 2 SPRs which were missing on P10:
- the TBU40 register is used to write the upper 40 bits of the
timebase register. It is used by kvm to update the timebase when
entering/exiting the guest on P9 and above. The missing definition was
causing erratic decrementer interrupts in a pseries/kvm guest running
in a powernv10/tcg host, typically resulting in hangs.
- the missing DPDES SPR was found through code inspection. It exists
unchanged on P10.
Both existed on previous versions of the processor and a bit of git
archaeology hints that they were added while the P10 model was already
being worked on so they may have simply fallen through the cracks.
Signed-off-by: Frederic Barrat <fbarrat@linux.ibm.com>
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220411125900.352028-1-fbarrat@linux.ibm.com>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Implement the following PowerISA v3.1 instructions:
xscvqpsqz: VSX Scalar Convert with round to zero Quad-Precision to
Signed Quadword
xscvqpuqz: VSX Scalar Convert with round to zero Quad-Precision to
Unsigned Quadword
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220330175932.6995-9-matheus.ferst@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Implement the following PowerISA v3.1 instructions:
xscvsqqp: VSX Scalar Convert with round Signed Quadword to
Quad-Precision
xscvuqqp: VSX Scalar Convert with round Unsigned Quadword to
Quad-Precision format
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220330175932.6995-8-matheus.ferst@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Merge tag 'pull-log-20220420' of https://gitlab.com/rth7680/qemu into staging
Clean up log locking.
Use the FILE* from qemu_log_trylock more often.
Support per-thread log files with -d tid.
* tag 'pull-log-20220420' of https://gitlab.com/rth7680/qemu: (39 commits)
util/log: Support per-thread log files
util/log: Limit RCUCloseFILE to file closing
util/log: Rename QemuLogFile to RCUCloseFILE
util/log: Combine two logfile closes
util/log: Hoist the eval of is_daemonized in qemu_set_log_internal
util/log: Rename qemu_logfile_mutex to global_mutex
util/log: Rename qemu_logfile to global_file
util/log: Rename logfilename to global_filename
util/log: Remove qemu_log_close
softmmu: Use qemu_set_log_filename_flags
linux-user: Use qemu_set_log_filename_flags
bsd-user: Use qemu_set_log_filename_flags
util/log: Introduce qemu_set_log_filename_flags
sysemu/os-win32: Test for and use _lock_file/_unlock_file
include/qemu/log: Move entire implementation out-of-line
include/exec/log: Do not reference QemuLogFile directly
tests/unit: Do not reference QemuLogFile directly
linux-user: Expand log_page_dump inline
bsd-user: Expand log_page_dump inline
util/log: Drop call to setvbuf
...
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
This header only defines the tcg_allowed variable and the tcg_enabled()
function - which are not required in many files that include this
header. Drop the #include statement there.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20220315144107.1012530-1-thuth@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
This is redundant with the logging done in cpu_common_reset.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220417183019.755276-16-richard.henderson@linaro.org>
We have fetched and locked the logfile in translator_loop.
Pass the filepointer down to the disas_log hook so that it
need not be fetched and locked again.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220417183019.755276-13-richard.henderson@linaro.org>
Inside qemu_log, we perform qemu_log_trylock/unlock, which need
not be done if we have already performed the lock beforehand.
Always check the result of qemu_log_trylock -- only checking
qemu_loglevel_mask races with the acquisition of the lock on
the logfile.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220417183019.755276-10-richard.henderson@linaro.org>
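A hedged usage sketch of the locked-FILE* pattern described above:

    FILE *logfile = qemu_log_trylock();
    if (logfile) {
        /* The lock is held; write directly to the returned FILE*. */
        fprintf(logfile, "something worth logging\n");
        qemu_log_unlock(logfile);
    }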
This function can fail, which makes it more like ftrylockfile
or pthread_mutex_trylock than flockfile or pthread_mutex_lock,
so rename it.
To closer match the other trylock functions, release rcu_read_lock
along the failure path, so that qemu_log_unlock need not be called
on failure.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220417183019.755276-8-richard.henderson@linaro.org>
This code appears to be trying to make sure there is a logfile.
But that's already true -- the logfile will either be set by -D,
or will be stderr. In either case, not appropriate here.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220417183019.755276-3-richard.henderson@linaro.org>
During the conversion to the gdb_get_reg128 helpers the high and low
parts of the XMM register were inadvertently swapped. This causes
reads of the register to report the incorrect value to gdb.
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/971
Fixes: b7b8756a9c (target/i386: use gdb_get_reg helpers)
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Cc: qemu-stable@nongnu.org
Message-Id: <20220419091020.3008144-25-alex.bennee@linaro.org>
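A sketch consistent with the fix described above, assuming the
gdb_get_reg128(buf, high, low) argument order:

    return gdb_get_reg128(mem_buf,
                          env->xmm_regs[n].ZMM_Q(1),   /* high 64 bits */
                          env->xmm_regs[n].ZMM_Q(0));  /* low 64 bits  */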
In commit b7711471f5 in 2014 we refactored the handling of the x86
vector registers so that instead of separate structs XMMReg, YMMReg
and ZMMReg for representing the 16-byte, 32-byte and 64-byte width
vector registers and multiple fields in the CPU state, we have a
single type (XMMReg, later renamed to ZMMReg) and a single struct
field (xmm_regs). However, in 2017 in commit c97d6d2cdf some of
the old struct types and CPU state fields got added back, when we
merged in the hvf support (which had developed in a separate fork
that had presumably not had the refactoring of b7711471f5), as part
of code handling xsave. Commit f585195ec0 then almost immediately
dropped that xsave code again in favour of sharing the xsave handling
with KVM, but forgot to remove the now unused CPU state fields and
struct types.
Delete the unused types and CPUState fields.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20220412110047.1497190-1-peter.maydell@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The i386 target consolidates all vector registers so that instead of
XMMReg, YMMReg and ZMMReg structs there is a single ZMMReg that can
fit all of SSE, AVX and AVX512.
When TCG copies data from and to the SSE registers, it uses the
full 64-byte width. This is not a correctness issue because TCG
never lets guest code see beyond the first 128 bits of the ZMM
registers, however it causes uninitialized stack memory to
make it to the CPU's migration stream.
Fix it by only copying the low 16 bytes of the ZMMReg union into
the destination register.
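A minimal sketch of the idea, with illustrative names (dest_xmm and the loop
bound are assumptions, not taken from the patch):

    /* sketch: copy only the low 128 bits of each vector register,
     * not the whole 64-byte ZMMReg union */
    for (int i = 0; i < CPU_NB_REGS; i++) {
        memcpy(&dest_xmm[i], &env->xmm_regs[i], 16);
    }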
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Jon Doron <arilou@gmail.com>
Reviewed-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Message-Id: <20220216102500.692781-5-arilou@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
SynDbg commands can come from two different flows:
1. Hypercalls: in this mode the data being sent consists of fully
encapsulated network packets.
2. SynDbg-specific MSRs: in this mode only the data that needs to be
transferred is passed.
Signed-off-by: Jon Doron <arilou@gmail.com>
Reviewed-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Message-Id: <20220216102500.692781-4-arilou@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Add all required definitions for hyperv synthetic debugger interface.
Signed-off-by: Jon Doron <arilou@gmail.com>
Reviewed-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Message-Id: <20220216102500.692781-3-arilou@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Below is the updated version of the patch adding debugging support to WHPX.
It incorporates feedback from Alex Bennée and Peter Maydell regarding not
changing the emulation logic depending on the gdb connection status.
Instead of checking for an active gdb connection to determine whether QEMU
should intercept the INT1 exceptions, it now checks whether any breakpoints
have been set, or whether gdb has explicitly requested one or more CPUs to
do single-stepping. Having none of these conditions present now has the same
effect as not using gdb at all.
Message-Id: <0e7f01d82e9e$00e9c360$02bd4a20$@sysprogs.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The types are no longer used in bswap.h since commit
f930224fff ("bswap.h: Remove unused float-access functions"); there
isn't much sense in keeping them there and having a dependency on fpu/.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20220323155743.1585078-29-marcandre.lureau@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Since the implementation unit is page-vary.c.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220323155743.1585078-24-marcandre.lureau@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Replace the global variables with inlined helper functions. getpagesize() is very
likely annotated with a "const" function attribute (at least with glibc), and thus
optimization should apply even better.
This avoids the need for a constructor initialization too.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20220323155743.1585078-12-marcandre.lureau@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Convert the TARGET_WORDS_BIGENDIAN macro, similarly to what was done
with HOST_BIG_ENDIAN. The new TARGET_BIG_ENDIAN macro is either 0 or 1,
and thus should always be defined to prevent misuse.
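For illustration only (hypothetical snippet, not taken from the patch), the
usage difference is roughly:

    /* old: value-less define, only usable with the preprocessor */
    #ifdef TARGET_WORDS_BIGENDIAN
        val = bswap32(val);
    #endif

    /* new: always defined as 0 or 1, also usable in plain C conditions */
    if (TARGET_BIG_ENDIAN) {
        val = bswap32(val);
    }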
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Suggested-by: Halil Pasic <pasic@linux.ibm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220323155743.1585078-8-marcandre.lureau@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Replace a config-time define with a compile-time conditional
define (compatible with clang and gcc) that must be declared prior to
its usage. This avoids having a global configure-time define, and also
prevents bad usage if the config header wasn't included before.
This can help to make some code independent from qemu too.
gcc supports __BYTE_ORDER__ from about 4.6 and clang from 3.2.
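The resulting definition is presumably along these lines (sketch based only on
the compiler macros mentioned above):

    #if defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
    #define HOST_BIG_ENDIAN 1
    #else
    #define HOST_BIG_ENDIAN 0
    #endif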
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
[ For the s390x parts I'm involved in ]
Acked-by: Halil Pasic <pasic@linux.ibm.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220323155743.1585078-7-marcandre.lureau@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
GLib's g_get_real_time() is an alternative to gettimeofday() which allows
us to simplify our code.
For semihosting, a few bits are lost on POSIX host, but this shouldn't
be a big concern.
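For example, a typical gettimeofday() call can be replaced like this
(illustrative snippet; g_get_real_time() returns microseconds since the Unix
epoch as a gint64):

    /* before: needs <sys/time.h> */
    struct timeval tv;
    gettimeofday(&tv, NULL);

    /* after: needs <glib.h> */
    gint64 now = g_get_real_time();          /* microseconds */
    time_t secs = now / G_USEC_PER_SEC;
    long usecs = now % G_USEC_PER_SEC;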
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Message-Id: <20220307070401.171986-5-marcandre.lureau@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Add a new field 'cpu0-id' to the response of query-sev-capabilities QMP
command. The value of the field is the base64-encoded unique ID of CPU0
(socket 0), which can be used to retrieve the signed CEK of the CPU from
AMD's Key Distribution Service (KDS).
Signed-off-by: Dov Murik <dovmurik@linux.ibm.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20220228093014.882288-1-dovmurik@linux.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This is a last minute RISC-V PR for 7.0.
It includes a fix to avoid leaking "no translation" TLB entries, which
incorrectly cached uncacheable bare-metal entries and would break Linux
boot while single stepping. As the fix is pretty straightforward (flush
the cache more often) it's being pulled in for 7.0.
At the same time I have included a RISC-V vector extension fixup patch.
Merge tag 'pull-riscv-to-apply-20220401' of github.com:alistair23/qemu into staging
Sixth RISC-V PR for QEMU 7.0
This is a last minute RISC-V PR for 7.0.
It includes a fix to avoid leaking "no translation" TLB entries, which
incorrectly cached uncacheable bare-metal entries and would break Linux
boot while single stepping. As the fix is pretty straightforward (flush
the cache more often) it's being pulled in for 7.0.
At the same time I have included a RISC-V vector extension fixup patch.
# gpg: Signature made Fri 01 Apr 2022 00:33:58 BST
# gpg: using RSA key F6C4AC46D4934868D3B8CE8F21E10D29DF977054
# gpg: Good signature from "Alistair Francis <alistair@alistair23.me>" [full]
# Primary key fingerprint: F6C4 AC46 D493 4868 D3B8 CE8F 21E1 0D29 DF97 7054
* tag 'pull-riscv-to-apply-20220401' of github.com:alistair23/qemu:
target/riscv: rvv: Add missing early exit condition for whole register load/store
target/riscv: Avoid leaking "no translation" TLB entries
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
In gen_store_exclusive(), if the host does not have a cmpxchg128
primitive then we generate bad code for STXP for storing two 64-bit
values. We generate a call to the exit_atomic helper, which never
returns, and set is_jmp to DISAS_NORETURN. However, this is
forgetting that we have already emitted a brcond that jumps over this
call for the case where we don't hold the exclusive. The effect is
that we don't generate any code to end the TB for the
exclusive-not-held execution path, which falls into the "exit with
TB_EXIT_REQUESTED" code that gen_tb_end() emits. This then causes an
assert at runtime when cpu_loop_exec_tb() sees an EXIT_REQUESTED TB
return that wasn't for an interrupt or icount.
In particular, you can hit this case when using the clang sanitizers
and trying to run the xlnx-versal-virt acceptance test in 'make
check-acceptance'. This bug was masked until commit 848126d11e
("meson: move int128 checks from configure") because we used to set
CONFIG_CMPXCHG128=1 and avoid the buggy codepath, but after that we
do not.
Fix the bug by not setting is_jmp. The code after the exit_atomic
call up to the fail_label is dead, but TCG is smart enough to
eliminate it. We do need to set 'tmp' to some valid value, though
(in the same way the exit_atomic-using code in tcg/tcg-op.c does).
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/953
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220331150858.96348-1-peter.maydell@linaro.org
As per the AArch64.S2Walk() pseudo-code in the ARMv8 ARM, the final
decision as to the output address's PA space based on the SA/SW/NSA/NSW
bits needs to take the input IPA's PA space into account, and not the
PA space of the result of the stage 2 walk itself.
Signed-off-by: Idan Horowitz <idan.horowitz@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220327093427.1548629-4-idan.horowitz@gmail.com
[PMM: fixed commit message typo]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
As per the AArch64.SS2InitialTTWState() pseudo-code in the ARMv8 ARM, the
initial PA space used for stage 2 table walks is assigned based on the SW
and NSW bits of the VSTCR and VTCR registers.
This was already implemented for the recursive stage 2 page table walks
in S1_ptw_translate(), but was missing for the final stage 2 walk.
Signed-off-by: Idan Horowitz <idan.horowitz@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220327093427.1548629-3-idan.horowitz@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
As per the AArch64.SS2OutputPASpace() pseudo-code in the ARMv8 ARM, when the
PA space of the IPA is non secure, the output PA space is secure if and only
if all of the bits VTCR.<NSW, NSA>, VSTCR.<SW, SA> are not set.
Signed-off-by: Idan Horowitz <idan.horowitz@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220327093427.1548629-2-idan.horowitz@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
While not mentioned anywhere in the actual specification text, the
HCR_EL2.ATA bit is treated as '1' when EL2 is disabled at the current
security state. This can be observed in the pseudo-code implementation
of AArch64.AllocationTagAccessIsEnabled().
Signed-off-by: Idan Horowitz <idan.horowitz@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220328173107.311267-1-idan.horowitz@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reported by Paul Eggert in
https://lists.gnu.org/archive/html/bug-gnulib/2021-09/msg00050.html
This program currently prints different results when run with TCG instead
of running on real s390x hardware:
#include <stdio.h>

int overflow_32 (int x, int y)
{
    int sum;
    return __builtin_sub_overflow (x, y, &sum);
}

int overflow_64 (long long x, long long y)
{
    long sum;
    return __builtin_sub_overflow (x, y, &sum);
}

int a1 = 0;
int b1 = -2147483648;
long long a2 = 0L;
long long b2 = -9223372036854775808L;

int main ()
{
    {
        int a = a1;
        int b = b1;
        printf ("a = 0x%x, b = 0x%x\n", a, b);
        printf ("no_overflow = %d\n", ! overflow_32 (a, b));
    }
    {
        long long a = a2;
        long long b = b2;
        printf ("a = 0x%llx, b = 0x%llx\n", a, b);
        printf ("no_overflow = %d\n", ! overflow_64 (a, b));
    }
}
Signed-off-by: Bruno Haible <bruno@clisp.org>
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/618
Message-Id: <20220323162621.139313-3-thuth@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
This program currently prints different results when run with TCG instead
of running on real s390x hardware:
#include <stdio.h>

int overflow_32 (int x, int y)
{
    int sum;
    return ! __builtin_add_overflow (x, y, &sum);
}

int overflow_64 (long long x, long long y)
{
    long sum;
    return ! __builtin_add_overflow (x, y, &sum);
}

int a1 = -2147483648;
int b1 = -2147483648;
long long a2 = -9223372036854775808L;
long long b2 = -9223372036854775808L;

int main ()
{
    {
        int a = a1;
        int b = b1;
        printf ("a = 0x%x, b = 0x%x\n", a, b);
        printf ("no_overflow = %d\n", overflow_32 (a, b));
    }
    {
        long long a = a2;
        long long b = b2;
        printf ("a = 0x%llx, b = 0x%llx\n", a, b);
        printf ("no_overflow = %d\n", overflow_64 (a, b));
    }
}
Signed-off-by: Bruno Haible <bruno@clisp.org>
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/616
Message-Id: <20220323162621.139313-2-thuth@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
According to v-spec (section 7.9):
The instructions operate with an effective vector length, evl=NFIELDS*VLEN/EEW,
regardless of current settings in vtype and vl. The usual property that no
elements are written if vstart ≥ vl does not apply to these instructions.
Instead, no elements are written if vstart ≥ evl.
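A minimal sketch of the missing early exit (illustrative names, not the exact
helper code from this patch):

    /* whole-register load/store: evl = NFIELDS * VLEN / EEW */
    uint32_t evl = nf * (vlen / eew);
    if (env->vstart >= evl) {
        return;                 /* no elements are written */
    }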
Signed-off-by: eop Chen <eop.chen@sifive.com>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <164762720573.18409.3931931227997483525-0@git.sr.ht>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The ISA doesn't allow bare mappings to be cached, as the caches are
translations and bare mappings are not translated. We cache these
translations in QEMU in order to utilize the TLB code, but that leaks
out to the guest.
Suggested-by: phantom@zju.edu.cn # no name in the From field
Fixes: 1e0d985fa9 ("target/riscv: Only flush TLB if SATP.ASID changes")
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20220330165913.8836-1-palmer@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
This file didn't have any non-trivial update since it was initially
added in 2006, and looking at the content, it seems incredibly outdated,
saying e.g. "The sh4 target is not ready at all yet for integration in
qemu" or "A sh4 user-mode has also somewhat started but will be worked
on afterwards"... Sounds like nobody is interested in this README file
anymore, so let's simply remove it now.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Yoshinori Sato <ysato@users.sourceforge.jp>
Message-Id: <20220329151955.472306-1-thuth@redhat.com>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
This bug has probably been lurking there for so long that I cannot even
git-blame my way to the commit that first introduced it.
Anyway, because n32 is also TARGET_MIPS64, the address space range
cannot be determined by looking at TARGET_MIPS64 alone. Fix this by only
declaring 48-bit address spaces for n64, or the n32 user emulation will
happily hand out memory ranges beyond the 31-bit limit and crash.
Confirmed to make the minimal reproducing example in the linked issue
behave.
Closes: https://gitlab.com/qemu-project/qemu/-/issues/939
Cc: Philippe Mathieu-Daudé <f4bug@amsat.org>
Cc: Aurelien Jarno <aurelien@aurel32.net>
Cc: Jiaxun Yang <jiaxun.yang@flygoat.com>
Cc: Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>
Signed-off-by: WANG Xuerui <xen0n@gentoo.org>
Tested-by: Andreas K. Huettel <dilfridge@gentoo.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220328035942.3299661-1-xen0n@gentoo.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
When the xsmadd* insns were moved to decodetree, the helper arguments
were reordered to better match the PowerISA description. The same macro
is used to declare xvmadd* helpers, but the translation macro of these
insns was not changed accordingly.
Reported-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Fixes: e4318ab2e4 ("target/ppc: move xs[n]madd[am][ds]p/xs[n]msub[am][ds]p to decodetree")
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Tested-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Message-Id: <20220325111851.718966-1-matheus.ferst@eldorado.org.br>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Both of these functions missed handling the TLB_MMIO flag
during the conversion to handle MTE.
Fixes: 10a85e2c8a ("target/arm: Reuse sve_probe_page for gather loads")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/925
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220324010932.190428-1-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Some versions of Windows hang on reboot if their TSC value is greater
than 2^54. The calibration of the Hyper-V reference time overflows
and fails; as a result the processors' clock sources are out of sync.
The issue is that the TSC _should_ be reset to 0 on CPU reset and
QEMU tries to do that. However, KVM special cases writing 0 to the
TSC and thinks that QEMU is trying to hot-plug a CPU, which is
correct the first time through but not later. Thwart this valiant
effort and reset the TSC to 1 instead, but only if the CPU has been
run once.
For this to work, env->tsc has to be moved to the part of CPUArchState
that is not zeroed at the beginning of x86_cpu_reset.
Reported-by: Vadim Rozenfeld <vrozenfe@redhat.com>
Supersedes: <20220324082346.72180-1-pbonzini@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
High bits in the immediate operand of SSE comparisons are ignored, they
do not result in an undefined opcode exception. This is mentioned
explicitly in the Intel documentation.
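In other words, only the low bits of the immediate select the comparison
predicate. A hedged sketch of the intended behaviour (not the actual decoder
code):

    /* only imm8[2:0] selects the SSE comparison predicate;
     * higher immediate bits are ignored instead of raising #UD */
    static int sse_cmp_predicate(int imm8)
    {
        return imm8 & 7;
    }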
Reported-by: sonicadvance1@gmail.com
Closes: https://gitlab.com/qemu-project/qemu/-/issues/184
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Some AMD processors expose the PKRU extended save state even if they do not have
the related PKU feature in CPUID. Worse, when they do they report a size of
64, whereas the expected size of the PKRU extended save state is 8, therefore
the esa->size == eax assertion does not hold.
The state is already ignored by KVM_GET_SUPPORTED_CPUID because it
was not enabled in the host XCR0. However, QEMU kvm_cpu_xsave_init()
runs before QEMU invokes arch_prctl() to enable dynamically-enabled
save states such as XTILEDATA, and KVM_GET_SUPPORTED_CPUID hides save
states that have yet to be enabled. Therefore, kvm_cpu_xsave_init()
needs to consult the host CPUID instead of KVM_GET_SUPPORTED_CPUID,
and dies with an assertion failure.
When setting up the ExtSaveArea array to match the host, ignore features that
KVM does not report as supported. This will cause QEMU to skip the incorrect
CPUID leaf instead of tripping the assertion.
Closes: https://gitlab.com/qemu-project/qemu/-/issues/916
Reported-by: Daniel P. Berrangé <berrange@redhat.com>
Analyzed-by: Yang Zhong <yang.zhong@intel.com>
Reported-by: Peter Krempa <pkrempa@redhat.com>
Tested-by: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
On a physical machine, when an SRAR error occurs, the
IA32_MCG_STATUS RIPV bit is set, but qemu does not set this
bit. When qemu injects an SRAR error into a virtual machine, the
guest kernel just calls do_machine_check() to kill the
current task, but does not call memory_failure() to isolate the faulty
page, which will cause the faulty page to be allocated and used
repeatedly. If it is used by the guest kernel, it will cause
the virtual machine to crash.
Signed-off-by: luofei <luofei@unicloud.com>
Message-Id: <20220120084634.131450-1-luofei@unicloud.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Fix vCPU hot-unplug related leak reported by Valgrind:
==132362== 4,096 bytes in 1 blocks are definitely lost in loss record 8,440 of 8,549
==132362== at 0x4C3B15F: memalign (vg_replace_malloc.c:1265)
==132362== by 0x4C3B288: posix_memalign (vg_replace_malloc.c:1429)
==132362== by 0xB41195: qemu_try_memalign (memalign.c:53)
==132362== by 0xB41204: qemu_memalign (memalign.c:73)
==132362== by 0x7131CB: kvm_init_xsave (kvm.c:1601)
==132362== by 0x7148ED: kvm_arch_init_vcpu (kvm.c:2031)
==132362== by 0x91D224: kvm_init_vcpu (kvm-all.c:516)
==132362== by 0x9242C9: kvm_vcpu_thread_fn (kvm-accel-ops.c:40)
==132362== by 0xB2EB26: qemu_thread_start (qemu-thread-posix.c:556)
==132362== by 0x7EB2159: start_thread (in /usr/lib64/libpthread-2.28.so)
==132362== by 0x9D45DD2: clone (in /usr/lib64/libc-2.28.so)
Reported-by: Mark Kanda <mark.kanda@oracle.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Mark Kanda <mark.kanda@oracle.com>
Message-Id: <20220322120522.26200-1-philippe.mathieu.daude@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The instruction description says "It is loaded without rounding
errors." which implies we should have the widest rounding mode
possible.
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/888
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220315121251.2280317-4-alex.bennee@linaro.org>
g_new(T, n) is neater than g_malloc(sizeof(T) * n). It's also safer,
for two reasons. One, it catches multiplication overflowing size_t.
Two, it returns T * rather than void *, which lets the compiler catch
more type errors.
This commit only touches allocations with size arguments of the form
sizeof(T).
Patch created mechanically with:
$ spatch --in-place --sp-file scripts/coccinelle/use-g_new-etc.cocci \
--macro-file scripts/cocci-macro-file.h FILES...
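For illustration, with a hypothetical Foo type the transformation looks like:

    /* before: multiplication can overflow size_t, returns void * */
    Foo *array = g_malloc(sizeof(Foo) * n);

    /* after: overflow-checked, returns Foo * */
    Foo *array = g_new(Foo, n);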
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Acked-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <20220315144156.1595462-4-armbru@redhat.com>
Reviewed-by: Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
Power ISA v3.1 formalizes the previously undefined result in
words 1 and 3 to be a copy of the result in words 0 and 2.
This affects: xvcvsxdsp, xvcvuxdsp, xvcvdpsp.
And the previously undefined result in word 1 to be a copy of
the result in word 0.
This affects: xscvdpsp.
Signed-off-by: Lucas Coutinho <lucas.coutinho@eldorado.org.br>
Message-Id: <20220316200427.3410437-1-lucas.coutinho@eldorado.org.br>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Power ISA v3.1 formalizes the previously undefined result in
words 1 and 3 to be a copy of the result in words 0 and 2.
This affects: xscvdpsxws, xscvdpuxws, xvcvdpsxws, xvcvdpuxws.
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/852
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
[ clg: checkpatch fixes ]
Message-Id: <20220315053934.377519-1-richard.henderson@linaro.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
KVM support for AMX includes a new system attribute, KVM_X86_XCOMP_GUEST_SUPP.
Commit 19db68ca68 ("x86: Grant AMX permission for guest", 2022-03-15) however
did not fully consider the behavior on older kernels. First, it warns
too aggressively. Second, it invokes the KVM_GET_DEVICE_ATTR ioctl
unconditionally and then uses the "bitmask" variable, which remains
uninitialized if the ioctl fails. Third, kvm_ioctl returns -errno rather
than -1 on errors.
While at it, explain why the ioctl is needed and KVM_GET_SUPPORTED_CPUID
is not enough.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Make the rvbar property settable after realize. This is done
in preparation to model the ZynqMP's runtime configurable rvbar.
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 20220316164645.2303510-3-edgar.iglesias@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
For M-profile, the fault address is not always exposed to the guest
in a fault register (for instance the BFAR bus fault address register
is only updated for bus faults on data accesses, not instruction
accesses). Currently we log the address only if we're putting it
into a particular guest-visible register. Since we always have it,
log it generically, to make logs of i-side faults a bit clearer.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20220315204306.2797684-3-peter.maydell@linaro.org
Currently the CPU_LOG_INT logging misses some useful information
about loads from the vector table. Add logging where we load vector
table entries. This is particularly helpful for cases where the user
has accidentally not put a vector table in their image at all, which
can result in confusing guest crashes at startup.
Here's an example of the new logging for a case where
the vector table contains garbage:
Loaded reset SP 0x0 PC 0x0 from vector table
Loaded reset SP 0xd008f8df PC 0xf000bf00 from vector table
Taking exception 3 [Prefetch Abort] on CPU 0
...with CFSR.IACCVIOL
...BusFault with BFSR.STKERR
...taking pending nonsecure exception 3
...loading from element 3 of non-secure vector table at 0xc
...loaded new PC 0x20000558
----------------
IN:
0x20000558: 08000079 stmdaeq r0, {r0, r3, r4, r5, r6}
(The double reset logging is the result of our long-standing
"CPUs all get reset twice" weirdness; it looks a bit ugly
but it'll go away if we ever fix that :-))
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20220315204306.2797684-2-peter.maydell@linaro.org
LPAE descriptors come in three forms:
* table descriptors, giving the address of the next level page table
* page descriptors, which occur only at level 3 and describe the
mapping of one page (which might be 4K, 16K or 64K)
* block descriptors, which occur at higher page table levels, and
describe the mapping of huge pages
QEMU's page-table-walk code treats block and page entries
identically, simply ORing in a number of bits from the input virtual
address that depends on the level of the page table that we stopped
at; we depend on the previous masking of descaddr with descaddrmask
to have already cleared out the low bits of the descriptor word.
This is not quite right: the address field in a block descriptor is
smaller, and so there are bits which are valid address bits in a page
descriptor or a table descriptor but which are not supposed to be
part of the address in a block descriptor, and descaddrmask does not
clear them. We previously mostly got away with this because those
descriptor bits are RES0; however with FEAT_BBM (part of Armv8.4)
block descriptor bit 16 is defined to be the nT bit. No emulated
QEMU CPU has FEAT_BBM yet, but if the host CPU has it then we might
see it when using KVM or hvf.
Explicitly zero out all the descaddr bits we're about to OR vaddr
bits into.
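Conceptually the fix is a mask applied before the OR, roughly like this
(sketch with illustrative variable names; the real code derives the size from
the level at which the walk stopped):

    /* clear every descriptor bit that the vaddr bits will occupy, so a
     * stray bit such as nT (bit 16 of a block descriptor) cannot leak
     * into the output address */
    descaddr &= ~(page_size - 1);
    descaddr |= (address & (page_size - 1));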
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/790
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220304165628.2345765-1-peter.maydell@linaro.org
When arm_is_el2_enabled was introduced, we missed
updating pauth_check_trap.
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/788
Fixes: e6ef016926 ("target/arm: use arm_is_el2_enabled() where applicable")
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20220315021205.342768-1-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
For both ldnt1 and stnt1, the meaning of the Rn and Rm are different
from ld1 and st1: the vector and integer registers are reversed, and
the integer register 31 refers to XZR instead of SP.
Secondly, the 64-bit version of ldnt1 was being interpreted as
32-bit unpacked unscaled offset instead of 64-bit unscaled offset,
which discarded the upper 32 bits of the address coming from
the vector argument.
Thirdly, validate that the memory element size is in range for the
vector element size for ldnt1. For ld1, we do this via independent
decode patterns, but for ldnt1 we need to do it manually.
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/826
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220308031655.240710-1-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
When RI2 is 0x80000000, qemu enters an infinite loop instead of jumping
backwards. Fix by adding a missing cast, like in in2_ri2().
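The arithmetic can be reproduced in plain C (standalone illustration, not the
translator code itself): without widening before scaling, the halfword offset
wraps to zero and the branch targets its own address.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t pc = 0x10000;
    uint32_t ri2 = 0x80000000u;   /* halfword offset field */

    uint64_t wrong = pc + ri2 * 2;                    /* 32-bit multiply wraps to 0 */
    uint64_t right = pc + (int64_t)(int32_t)ri2 * 2;  /* widen first: jump 4 GiB back */

    printf("wrong=0x%llx right=0x%llx\n",
           (unsigned long long)wrong, (unsigned long long)right);
    return 0;
}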
Fixes: 7233f2ed17 ("target-s390: Convert BRANCH ON CONDITION")
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Message-Id: <20220314104232.675863-3-iii@linux.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Thomas Huth <thuth@redhat.com>
When RI2 is 0x80000000, qemu enters an infinite loop instead of jumping
backwards. Fix by adding a missing cast, like in in2_ri2().
Fixes: 8ac33cdb8b ("Convert BRANCH AND SAVE")
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Message-Id: <20220314104232.675863-2-iii@linux.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Thomas Huth <thuth@redhat.com>
- Remove various build warnings
- Fix building with modules on macOS
- Fix mouse/keyboard GUI interactions
Merge tag 'darwin-20220315' of https://github.com/philmd/qemu into staging
Darwin-based host patches
- Remove various build warnings
- Fix building with modules on macOS
- Fix mouse/keyboard GUI interactions
# gpg: Signature made Tue 15 Mar 2022 12:52:19 GMT
# gpg: using RSA key FAABE75E12917221DCFD6BB2E3E32C2CDEADC0DE
# gpg: Good signature from "Philippe Mathieu-Daudé (F4BUG) <f4bug@amsat.org>" [unknown]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: FAAB E75E 1291 7221 DCFD 6BB2 E3E3 2C2C DEAD C0DE
* tag 'darwin-20220315' of https://github.com/philmd/qemu: (21 commits)
MAINTAINERS: Volunteer to maintain Darwin-based hosts support
ui/cocoa: add option to swap Option and Command
ui/cocoa: capture all keys and combos when mouse is grabbed
ui/cocoa: release mouse when user switches away from QEMU window
ui/cocoa: add option to disable left-command forwarding to guest
ui/cocoa: Constify qkeycode translation arrays
configure: Pass filtered QEMU_OBJCFLAGS to meson
meson: Log QEMU_CXXFLAGS content in summary
meson: Resolve the entitlement.sh script once for good
osdep: Avoid using Clang-specific __builtin_available()
audio: Rename coreaudio extension to use Objective-C compiler
coreaudio: Always return 0 in handle_voice_change
audio: Log context for audio bug
audio/dbus: Fix building with modules on macOS
audio/coreaudio: Remove a deprecation warning on macOS 12
block/file-posix: Remove a deprecation warning on macOS 12
hvf: Remove deprecated hv_vcpu_flush() calls
hvf: Make hvf_get_segments() / hvf_put_segments() local
hvf: Use standard CR0 and CR4 register definitions
tests/fp/berkeley-testfloat-3: Ignore ignored #pragma directives
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* VSS header fixes (Marc-André)
* 5-level EPT support (Vitaly)
* AMX support (Jing Liu & Yang Zhong)
* Bundle changes to MSI routes (Longpeng)
* More precise emulation of #SS (Gareth)
* Disable ASAN testing
Merge tag 'for-upstream' of https://gitlab.com/bonzini/qemu into staging
* whpx fixes in preparation for GDB support (Ivan)
* VSS header fixes (Marc-André)
* 5-level EPT support (Vitaly)
* AMX support (Jing Liu & Yang Zhong)
* Bundle changes to MSI routes (Longpeng)
* More precise emulation of #SS (Gareth)
* Disable ASAN testing
# gpg: Signature made Tue 15 Mar 2022 10:51:00 GMT
# gpg: using RSA key F13338574B662389866C7682BFFBD25F78C7AE83
# gpg: issuer "pbonzini@redhat.com"
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" [full]
# gpg: aka "Paolo Bonzini <pbonzini@redhat.com>" [full]
# Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4 E2F7 7E15 100C CD36 69B1
# Subkey fingerprint: F133 3857 4B66 2389 866C 7682 BFFB D25F 78C7 AE83
* tag 'for-upstream' of https://gitlab.com/bonzini/qemu: (22 commits)
gitlab-ci: do not run tests with address sanitizer
KVM: SVM: always set MSR_AMD64_TSC_RATIO to default value
i386: Add Icelake-Server-v6 CPU model with 5-level EPT support
x86: Support XFD and AMX xsave data migration
x86: add support for KVM_CAP_XSAVE2 and AMX state migration
x86: Add AMX CPUIDs enumeration
x86: Add XFD faulting bit for state components
x86: Grant AMX permission for guest
x86: Add AMX XTILECFG and XTILEDATA components
x86: Fix the 64-byte boundary enumeration for extended state
linux-headers: include missing changes from 5.17
target/i386: Throw a #SS when loading a non-canonical IST
target/i386: only include bits in pg_mode if they are not ignored
kvm/msi: do explicit commit when adding msi routes
kvm-irqchip: introduce new API to support route change
update meson-buildoptions.sh
qga/vss: update informative message about MinGW
qga/vss-win32: check old VSS SDK headers
meson: fix generic location of vss headers
vmxcap: Add 5-level EPT bit
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
When building on macOS 11 [*], we get:
In file included from ../target/i386/hvf/hvf.c:59:
../target/i386/hvf/vmx.h:174:5: error: 'hv_vcpu_flush' is deprecated: first deprecated in macOS 11.0 - This API has no effect and always returns HV_UNSUPPORTED [-Werror,-Wdeprecated-declarations]
hv_vcpu_flush(vcpu);
^
/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/System/Library/Frameworks/Hypervisor.framework/Headers/hv.h:364:20: note: 'hv_vcpu_flush' has been explicitly marked deprecated here
extern hv_return_t hv_vcpu_flush(hv_vcpuid_t vcpu)
^
Since this call "has no effect", simply remove it ¯\_(ツ)_/¯
Not very useful deprecation doc:
https://developer.apple.com/documentation/hypervisor/1441386-hv_vcpu_flush
[*] Also 10.15 (Catalina):
https://lore.kernel.org/qemu-devel/Yd3DmSqZ1SiJwd7P@roolebo.dev/
Reviewed-by: Roman Bolshakov <r.bolshakov@yadro.com>
Tested-by: Roman Bolshakov <r.bolshakov@yadro.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Both hvf_get_segments/hvf_put_segments() functions are only
used within x86hvf.c: do not declare them as public API.
Reviewed-by: Roman Bolshakov <r.bolshakov@yadro.com>
Tested-by: Roman Bolshakov <r.bolshakov@yadro.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
No need to have our own definitions of these registers.
Signed-off-by: Cameron Esfahani <dirty@apple.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
* Removal of user-created PHB devices
* Avocado fixes for --disable-tcg
* Instruction and Radix MMU fixes
Merge tag 'pull-ppc-20220314' of https://github.com/legoater/qemu into staging
ppc-7.0 queue :
* Removal of user-created PHB devices
* Avocado fixes for --disable-tcg
* Instruction and Radix MMU fixes
# gpg: Signature made Mon 14 Mar 2022 15:16:07 GMT
# gpg: using RSA key A0F66548F04895EBFE6B0B6051A343C7CFFBECA1
# gpg: Good signature from "Cédric Le Goater <clg@kaod.org>" [undefined]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: A0F6 6548 F048 95EB FE6B 0B60 51A3 43C7 CFFB ECA1
* tag 'pull-ppc-20220314' of https://github.com/legoater/qemu:
ppc/pnv: Remove user-created PHB{3,4,5} devices
ppc/pnv: Always create the PHB5 PEC devices
ppc/pnv: Introduce a pnv-phb5 device to match root port
ppc/xive2: Make type Xive2EndSource not user creatable
target/ppc: fix xxspltw for big endian hosts
target/ppc: fix ISI fault cause for Radix MMU
avocado/ppc_virtex_ml507.py: check TCG accel in test_ppc_virtex_ml507()
avocado/ppc_prep_40p.py: check TCG accel in all tests
avocado/ppc_mpc8544ds.py: check TCG accel in test_ppc_mpc8544ds()
avocado/ppc_bamboo.py: check TCG accel in test_ppc_bamboo()
avocado/ppc_74xx.py: check TCG accel for all tests
avocado/ppc_405.py: check TCG accel in test_ppc_ref405ep()
avocado/ppc_405.py: remove test_ppc_taihu()
avocado/boot_linux_console.py: check TCG accel in test_ppc_mac99()
avocado/boot_linux_console.py: check TCG accel in test_ppc_g3beige()
avocado/replay_kernel.py: make tcg-icount check in run_vm()
avocado/boot_linux_console.py: check tcg accel in test_ppc64_e500
avocado/boot_linux_console.py: check for tcg in test_ppc_powernv8/9
qtest/meson.build: check CONFIG_TCG for boot-serial-test in qtests_ppc
qtest/meson.build: check CONFIG_TCG for prom-env-test in qtests_ppc
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Even when the feature is not supported in guest CPUID,
still set the MSR to the default value, which will
be the only value KVM will accept in this case.
Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
Message-Id: <20220223115824.319821-1-mlevitsk@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Windows 11 with WSL2 enabled (Hyper-V) fails to boot with Icelake-Server
{-v5} CPU model but boots well with '-cpu host'. Apparently, it expects
5-level paging and 5-level EPT support to come as a pair, but QEMU's
Icelake-Server CPU model lacks the latter. Introduce 'Icelake-Server-v6'
CPU model with 'vmx-page-walk-5' enabled by default.
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20220221145316.576138-1-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
XFD (eXtended Feature Disable) allows enabling a
feature in the xsave state while preventing specific
user threads from using the feature.
Support saving and restoring the XFD MSRs if CPUID.D.1.EAX[4]
enumerates them as valid, and likewise migrate the MSRs and
the related xsave state as necessary.
Signed-off-by: Zeng Guang <guang.zeng@intel.com>
Signed-off-by: Wei Wang <wei.w.wang@intel.com>
Signed-off-by: Yang Zhong <yang.zhong@intel.com>
Message-Id: <20220217060434.52460-8-yang.zhong@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
When dynamic xfeatures (e.g. AMX) are used by the guest, the xsave
area would be larger than 4KB. KVM_GET_XSAVE2 and KVM_SET_XSAVE
under KVM_CAP_XSAVE2 work with an xsave buffer larger than 4KB.
Always use the new ioctls under KVM_CAP_XSAVE2 when KVM supports it.
Signed-off-by: Jing Liu <jing2.liu@intel.com>
Signed-off-by: Zeng Guang <guang.zeng@intel.com>
Signed-off-by: Wei Wang <wei.w.wang@intel.com>
Signed-off-by: Yang Zhong <yang.zhong@intel.com>
Message-Id: <20220217060434.52460-7-yang.zhong@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Add AMX primary feature bits XFD and AMX_TILE to
enumerate the CPU's AMX capability. Meanwhile, add
AMX TILE and TMUL CPUID leaf and subleaves which
exist when AMX TILE is present to provide the maximum
capability of TILE and TMUL.
Signed-off-by: Jing Liu <jing2.liu@intel.com>
Signed-off-by: Yang Zhong <yang.zhong@intel.com>
Message-Id: <20220217060434.52460-6-yang.zhong@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Intel introduces the XFD faulting mechanism for extended
XSAVE features to dynamically enable the features at
runtime. If CPUID (EAX=0Dh, ECX=n, n>1).ECX[2] is set
as 1, it indicates support for XFD faulting of this
state component.
Signed-off-by: Jing Liu <jing2.liu@intel.com>
Signed-off-by: Yang Zhong <yang.zhong@intel.com>
Message-Id: <20220217060434.52460-5-yang.zhong@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Kernel allocates 4K xstate buffer by default. For XSAVE features
which require large state component (e.g. AMX), Linux kernel
dynamically expands the xstate buffer only after the process has
acquired the necessary permissions. Those are called dynamically-
enabled XSAVE features (or dynamic xfeatures).
There are separate permissions for native tasks and guests.
QEMU should request the guest permissions for dynamic xfeatures
which will be exposed to the guest. This only needs to be done
once, before the first vcpu is created.
KVM implemented a new ARCH_GET_XCOMP_SUPP system attribute API to
get the host-side supported_xcr0, so QEMU can decide whether it can
request permission for dynamically enabled XSAVE features.
https://lore.kernel.org/all/20220126152210.3044876-1-pbonzini@redhat.com/
Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Yang Zhong <yang.zhong@intel.com>
Signed-off-by: Jing Liu <jing2.liu@intel.com>
Message-Id: <20220217060434.52460-4-yang.zhong@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The AMX TILECFG register and the TMMx tile data registers are
saved/restored via XSAVE, respectively in state component 17
(64 bytes) and state component 18 (8192 bytes).
Add AMX feature bits to x86_ext_save_areas array to set
up AMX components. Add structs that define the layout of
AMX XSAVE areas and use QEMU_BUILD_BUG_ON to validate the
structs sizes.
Signed-off-by: Jing Liu <jing2.liu@intel.com>
Signed-off-by: Yang Zhong <yang.zhong@intel.com>
Message-Id: <20220217060434.52460-3-yang.zhong@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The extended state subleaves (EAX=0Dh, ECX=n, n>1).ECX[1]
indicate whether the extended state component is located
on the next 64-byte boundary following the preceding state
component when the compacted format of an XSAVE area is
used.
Right now, they are all zero because no supported component
needed the bit to be set, but the upcoming AMX feature will
use it. Fix the subleaves value according to KVM's supported
cpuid.
Signed-off-by: Jing Liu <jing2.liu@intel.com>
Signed-off-by: Yang Zhong <yang.zhong@intel.com>
Message-Id: <20220217060434.52460-2-yang.zhong@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Loading a non-canonical address into rsp when handling an interrupt or
performing a far call should raise a #SS not a #GP.
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/870
Signed-off-by: Gareth Webb <gareth.webb@umbralsoftware.co.uk>
Message-Id: <164529651121.25406.15337137068584246397-0@git.sr.ht>
[Move get_pg_mode to seg_helper.c for user-mode emulators. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
LA57/PKE/PKS is only relevant in 64-bit mode, and NXE is only relevant if
PAE is in use. Since there is code that checks PG_MODE_LA57 to determine
the canonicality of addresses, make sure that the bit is not set by
mistake in 32-bit mode. While it would not be a problem because 32-bit
addresses by definition fit in both 48-bit and 57-bit address spaces,
it is nicer if get_pg_mode() actually returns whether a feature is enabled,
and it allows a few simplifications in the page table walker.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
We invoke the kvm_irqchip_commit_routes() for each addition to MSI route
table, which is not efficient if we are adding lots of routes in some cases.
This patch lets callers invoke the kvm_irqchip_commit_routes(), so the
callers can decide how to optimize.
[1] https://lists.gnu.org/archive/html/qemu-devel/2021-11/msg00967.html
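A conceptual sketch of the intended usage pattern, with a hypothetical helper
name standing in for the non-committing route addition:

    /* add all routes first, then flush the table to KVM once */
    for (unsigned i = 0; i < nvectors; i++) {
        add_msi_route_nocommit(i);            /* hypothetical helper */
    }
    kvm_irqchip_commit_routes(kvm_state);     /* single KVM_SET_GSI_ROUTING */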
Signed-off-by: Longpeng <longpeng2@huawei.com>
Message-Id: <20220222141116.2091-3-longpeng2@huawei.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This fixes the following error triggered when stopping and resuming a 64-bit
Linux kernel via gdb:
qemu-system-x86_64.exe: WHPX: Failed to set virtual processor context, hr=c0350005
The previous logic for synchronizing the values did not take into account
that the lower 4 bits of the CR8 register, containing the priority level,
mapped to bits 7:4 of the APIC.TPR register (see section 10.8.6.1 of
Volume 3 of Intel 64 and IA-32 Architectures Software Developer's Manual).
This caused WHvSetVirtualProcessorRegisters() to fail with an error,
effectively preventing GDB from changing the guest context.
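The bit relationship is small enough to show directly (illustrative helpers,
not the WHPX code itself):

    /* CR8 carries the priority class in bits 3:0; APIC.TPR keeps the
     * same class in bits 7:4 (Intel SDM Vol. 3, section 10.8.6.1) */
    static inline uint8_t cr8_to_tpr(uint64_t cr8) { return (cr8 & 0xf) << 4; }
    static inline uint64_t tpr_to_cr8(uint8_t tpr) { return tpr >> 4; }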
Signed-off-by: Ivan Shcherbakov <ivan@sysprogs.com>
Message-Id: <010b01d82874$bb4ef160$31ecd420$@sysprogs.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Make sure that pausing the VM while in 64-bit mode will set the
HF_CS64_MASK flag in env->hflags (see x86_update_hflags() in
target/i386/cpu.c).
Without it, the code in gdbstub.c would only use the 32-bit register values
when debugging 64-bit targets, making debugging effectively impossible.
Signed-off-by: Ivan Shcherbakov <ivan@sysprogs.com>
Message-Id: <00f701d82874$68b02000$3a106000$@sysprogs.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Fix a typo in the host endianness macro and add a simple test to detect
regressions.
Fixes: 9bb0048ec6 ("target/ppc: convert xxspltw to vector operations")
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220310172047.61094-1-matheus.ferst@eldorado.org.br>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Fix Instruction Storage Interrupt (ISI) fault cause for Radix MMU,
when caused by missing PAGE_EXEC permission, to be
SRR1_NOEXEC_GUARD instead of DSISR_PROTFAULT.
This matches POWER9 hardware behavior.
Fixes: d5fee0bbe6 ("target/ppc: Implement ISA V3.00 radix page fault handler")
Signed-off-by: Leandro Lupori <leandro.lupori@eldorado.org.br>
Message-Id: <20220309192756.145283-1-leandro.lupori@eldorado.org.br>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
When building with clang version 13.0.0 (eg. Fedora 13.0.0-3.fc35),
two unused variables introduced by the macros GATHER_FUNCTION and
SCATTER_FUNCTION cause the build to fail due to
[-Werror -Wunused-variable].
Signed-off-by: Zongyuan Li <zongyuan.li@smartx.com>
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/831
Message-Id: <20220124064339.56027-1-zongyuan.li@smartx.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Taylor Simpson <tsimpson@quicinc.com>
Signed-off-by: Taylor Simpson <tsimpson@quicinc.com>
On Hexagon, c4 is an alias for predicate registers P3:0. If we assign to
c4 inside a packet with reads from predicate registers, the predicate
reads should get the old values.
Test case added to tests/tcg/hexagon/preg_alias.c
Co-authored-by: Michael Lambert <mlambert@cuicinc.com>
Signed-off-by: Taylor Simpson <tsimpson@quicinc.com>
Message-Id: <20220210021556.9217-13-tsimpson@quicinc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Fix typo that checked for 32 bit nan instead of 64 bit
Test case added in tests/tcg/hexagon/usr.c
Signed-off-by: Taylor Simpson <tsimpson@quicinc.com>
Message-Id: <20220210021556.9217-11-tsimpson@quicinc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
The float??_minnum implementation differs from Hexagon for SNaN,
it returns NaN, but Hexagon returns the other input. So, we use
float??_minimum_number.
Test cases added to tests/tcg/hexagon/fpstuff.c
Signed-off-by: Taylor Simpson <tsimpson@quicinc.com>
Message-Id: <20220308190410.22355-1-tsimpson@quicinc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
The arch_sf_recip_common function was calling float32_getexp which
adjusts for denorm, but we actually need the raw exponent bits.
This function is called from 3 instructions
sfrecipa
sffixupn
sffixupd
Test cases added to tests/tcg/hexagon/fpstuff.c
Signed-off-by: Taylor Simpson <tsimpson@quicinc.com>
Message-Id: <20220210021556.9217-6-tsimpson@quicinc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Instead of checking for nan arguments, use float??_unordered_quiet
Test cases added in a subsequent patch to more extensively test USR bits.
Signed-off-by: Taylor Simpson <tsimpson@quicinc.com>
Message-Id: <20220210021556.9217-4-tsimpson@quicinc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Versions V3 and earlier should treat the "K_const" and "length" values
as unsigned.
Modified circ_test_v3() in tests/tcg/hexagon/circ.c to reproduce the bug
Signed-off-by: Michael Lambert <mlambert@quicinc.com>
Signed-off-by: Taylor Simpson <tsimpson@quicinc.com>
Message-Id: <20220210021556.9217-2-tsimpson@quicinc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
* cleanups of qemu_oom_check() and qemu_memalign()
* target/arm/translate-neon: UNDEF if VLD1/VST1 stride bits are non-zero
* target/arm/translate-neon: Simplify align field check for VLD3
* GICv3 ITS: add more trace events
* GICv3 ITS: implement 8-byte accesses properly
* GICv3: fix minor issues with some trace/log messages
* ui/cocoa: Use the standard about panel
* target/arm: Provide cpu property for controling FEAT_LPA2
* hw/arm/virt: Disable LPA2 for -machine virt-6.2
Merge remote-tracking branch 'remotes/pmaydell/tags/pull-target-arm-20220307' into staging
target-arm queue:
* cleanups of qemu_oom_check() and qemu_memalign()
* target/arm/translate-neon: UNDEF if VLD1/VST1 stride bits are non-zero
* target/arm/translate-neon: Simplify align field check for VLD3
* GICv3 ITS: add more trace events
* GICv3 ITS: implement 8-byte accesses properly
* GICv3: fix minor issues with some trace/log messages
* ui/cocoa: Use the standard about panel
* target/arm: Provide cpu property for controling FEAT_LPA2
* hw/arm/virt: Disable LPA2 for -machine virt-6.2
# gpg: Signature made Mon 07 Mar 2022 16:46:06 GMT
# gpg: using RSA key E1A5C593CD419DE28E8315CF3C2525ED14360CDE
# gpg: issuer "peter.maydell@linaro.org"
# gpg: Good signature from "Peter Maydell <peter.maydell@linaro.org>" [ultimate]
# gpg: aka "Peter Maydell <pmaydell@gmail.com>" [ultimate]
# gpg: aka "Peter Maydell <pmaydell@chiark.greenend.org.uk>" [ultimate]
# Primary key fingerprint: E1A5 C593 CD41 9DE2 8E83 15CF 3C25 25ED 1436 0CDE
* remotes/pmaydell/tags/pull-target-arm-20220307:
hw/arm/virt: Disable LPA2 for -machine virt-6.2
target/arm: Provide cpu property for controling FEAT_LPA2
ui/cocoa: Use the standard about panel
hw/intc/arm_gicv3_cpuif: Fix register names in ICV_HPPIR read trace event
hw/intc/arm_gicv3: Fix missing spaces in error log messages
hw/intc/arm_gicv3: Specify valid and impl in MemoryRegionOps
hw/intc/arm_gicv3_its: Add trace events for table reads and writes
hw/intc/arm_gicv3_its: Add trace events for commands
target/arm/translate-neon: Simplify align field check for VLD3
target/arm/translate-neon: UNDEF if VLD1/VST1 stride bits are non-zero
osdep: Move memalign-related functions to their own header
util: Put qemu_vfree() in memalign.c
util: Use meson checks for valloc() and memalign() presence
util: Share qemu_try_memalign() implementation between POSIX and Windows
meson.build: Don't misdetect posix_memalign() on Windows
util: Return valid allocation for qemu_try_memalign() with zero size
util: Unify implementations of qemu_memalign()
util: Make qemu_oom_check() a static function
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Since the previous commit 9ea89876f9d ("target/mips: Fix cycle
counter timing calculations"), MIPSCPU::cp0_count_rate is not
used anymore. We don't need it since it is already expressed
as mips_def_t::CCRes. Remove the duplicate and clean up.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211213102340.1847248-1-f4bug@amsat.org>
The cp0_count_ns value is calculated from the CP0_COUNT_RATE_DEFAULT
constant in target/mips/cpu.c. The cycle counter resolution is defined
per-CPU in target/mips/cpu-defs.c.inc; use this value for calculating
cp0_count_ns. This fixes timing problems on guest OSs for the 20Kc CPU
which has a CCRes of 1.
Signed-off-by: Simon Burge <simonb@NetBSD.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20211213135125.18378-1-simonb@NetBSD.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
There is a Linux kernel bug present until v5.12 that prevents
booting with FEAT_LPA2 enabled. As a workaround for TCG, allow
the feature to be disabled from -cpu max.
Since this kernel bug is present in the Fedora 31 image that
we test in avocado, disable lpa2 on the command-line.
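Assuming the new property is exposed under the name "lpa2" (the exact spelling
is not given in this message), the command-line workaround is presumably along
the lines of:

    -cpu max,lpa2=off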
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
For VLD3 (single 3-element structure to one lane), there is no
alignment specification and the alignment bits in the instruction
must be zero. This is bit [4] for the size=0 and size=1 cases, and
bits [5:4] for the size=2 case. We do this check correctly in
VLDST_single(), but we write it a bit oddly: in the 'case 3' code we
check for bit 0 of a->align (bit [4] of the insn), and then we fall
through to the 'case 2' code which checks bit 1 of a->align (bit [5]
of the insn) in the size 2 case. Replace this with just checking "is
a->align non-zero" for VLD3, which lets us drop the fall-through and
put the cases in this switch in numerical order.
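The resulting check is roughly the following (sketch of the reordered switch
arm, not the verbatim patch):

    case 3:
        if (a->align) {
            return false;       /* any alignment bits set -> UNDEF */
        }
        break;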
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220303113741.2156877-3-peter.maydell@linaro.org
For VLD1/VST1 (single element to one lane) we are only accessing one
register, and so the 'stride' is meaningless. The bits that would
specify stride (insn bit [4] for size=1, bit [6] for size=2) are
specified to be zero in the encoding (which would correspond to a
stride of 1 for VLD2/VLD3/VLD4 etc), and we must UNDEF if they are
not.
We failed to make this check, which meant that we would incorrectly
handle some instruction patterns as loads or stores instead of
UNDEFing them. Enforce that stride == 1 for the nregs == 1 case.
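A sketch of the new check, assuming a decoded a->stride field in
VLDST_single():

    if (nregs == 1 && a->stride != 1) {
        /* stride bits must be zero for the single-register forms: UNDEF */
        return false;
    }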
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/890
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220303113741.2156877-2-peter.maydell@linaro.org
Move the various memalign-related functions out of osdep.h and into
their own header, which we include only where they are used.
While we're doing this, add some brief documentation comments.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20220226180723.1706285-10-peter.maydell@linaro.org
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220305233415.64627-3-philippe.mathieu.daude@gmail.com>
ArchCPU is our interface with target-specific code. Use it as
a forward-declared opaque pointer (abstract type), having its
structure defined by each target.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220214183144.27402-15-f4bug@amsat.org>
Replace the boilerplate code to declare CPU QOM types
and macros, and forward-declare the CPU instance type.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220214183144.27402-14-f4bug@amsat.org>
While CPUState is our interface with generic code, CPUArchState is
our interface with target-specific code. Use CPUArchState as an
abstract type, defined by each target.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220214183144.27402-13-f4bug@amsat.org>
HexagonCPU field parent_class is of type CPUClass, which
is declared in "hw/core/cpu.h".
Reviewed-by: Damien Hedde <damien.hedde@greensocs.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220214183144.27402-11-f4bug@amsat.org>
These target-specific files use the target-specific CPU state
but do not include "cpu.h", resulting in errors such as:
../target/riscv/pmp.h:61:23: error: unknown type name 'CPURISCVState'
void pmpcfg_csr_write(CPURISCVState *env, uint32_t reg_index,
^
../target/nios2/mmu.h:43:18: error: unknown type name 'CPUNios2State'
void mmu_flip_um(CPUNios2State *env, unsigned int um);
^
../target/microblaze/mmu.h:88:19: error: unknown type name 'CPUMBState'; did you mean 'CPUState'?
uint32_t mmu_read(CPUMBState *env, bool ea, uint32_t rn);
^~~~~~~~~~
CPUState
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220214183144.27402-10-f4bug@amsat.org>
excp_helper.c requires "exec/exec-all.h" for tlb_set_page_with_attrs()
and misc_helper.c for tlb_flush().
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220214183144.27402-8-f4bug@amsat.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220207075426.81934-18-f4bug@amsat.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220207075426.81934-17-f4bug@amsat.org>
Add cpu_thread_is_idle() to AccelOps, and implement it for the
KVM / WHPX accelerators.
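For KVM the hook can be as small as the sketch below (a vCPU halted
in the kernel must not be treated as idle by the main loop):

    static bool kvm_cpu_thread_is_idle(CPUState *cpu)
    {
        return !kvm_halt_in_kernel();
    }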
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220207075426.81934-11-f4bug@amsat.org>
Mirror "sysemu/kvm.h" #ifdef'ry to define CONFIG_HAX_IS_POSSIBLE,
expose hax_allowed to hax_enabled() macro.
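The resulting pattern looks roughly like this (a sketch mirroring the
kvm.h approach, not a verbatim excerpt of the hax header):

    #ifdef CONFIG_HAX_IS_POSSIBLE
    extern bool hax_allowed;
    #define hax_enabled() (hax_allowed)
    #else
    #define hax_enabled() (0)
    #endif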
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220207075426.81934-9-f4bug@amsat.org>
Some ISA v2.03 Vector Multiply instructions were incorrectly marked
as ISA v2.07 only. This patch fixes that.
Fixes: 80eca687c8 ("target/ppc: moved vector even and odd multiplication to decodetree")
Reported-by: Howard Spoelstra <hsp.cat7@gmail.com>
Suggested-by: Fabiano Rosas <farosas@linux.ibm.com>
Signed-off-by: Lucas Mateus Castro (alqotel) <lucas.araujo@eldorado.org.br>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220304175156.2012315-2-matheus.ferst@eldorado.org.br>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Change VSX Scalar Multiply-Add/Subtract Type-A/M Single Precision
helpers to use float64r32_muladd. This method should correctly handle
all rounding modes, so the workaround for float_round_nearest_even can
be dropped.
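The core of such a helper then reduces to something like the sketch
below (operand and flag names are illustrative):

    /* fused multiply-add rounded directly to single precision;
     * float64r32_muladd applies the current rounding mode itself,
     * so no round-to-nearest workaround is needed */
    t.VsrD(0) = float64r32_muladd(xa->VsrD(0), b->VsrD(0), c->VsrD(0),
                                  maddflgs, &tstat);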
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20220304165417.1981159-3-matheus.ferst@eldorado.org.br>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
* Fixup checks for ext_zb[abcs]
* Add AIA support for virt machine
* Increase maximum number of CPUs in virt machine
* Fixup OpenTitan SPI address
* Add support for zfinx, zdinx and zhinx{min} extensions
Merge remote-tracking branch 'remotes/alistair/tags/pull-riscv-to-apply-20220303' into staging
Fifth RISC-V PR for QEMU 7.0
* Fixup checks for ext_zb[abcs]
* Add AIA support for virt machine
* Increase maximum number of CPUs in virt machine
* Fixup OpenTitan SPI address
* Add support for zfinx, zdinx and zhinx{min} extensions
# gpg: Signature made Thu 03 Mar 2022 05:26:55 GMT
# gpg: using RSA key F6C4AC46D4934868D3B8CE8F21E10D29DF977054
# gpg: Good signature from "Alistair Francis <alistair@alistair23.me>" [full]
# Primary key fingerprint: F6C4 AC46 D493 4868 D3B8 CE8F 21E1 0D29 DF97 7054
* remotes/alistair/tags/pull-riscv-to-apply-20220303:
target/riscv: expose zfinx, zdinx, zhinx{min} properties
target/riscv: add support for zhinx/zhinxmin
target/riscv: add support for zdinx
target/riscv: add support for zfinx
target/riscv: hardwire mstatus.FS to zero when enable zfinx
target/riscv: add cfg properties for zfinx, zdinx and zhinx{min}
hw: riscv: opentitan: fixup SPI addresses
hw/riscv: virt: Increase maximum number of allowed CPUs
docs/system: riscv: Document AIA options for virt machine
hw/riscv: virt: Add optional AIA IMSIC support to virt machine
hw/intc: Add RISC-V AIA IMSIC device emulation
hw/riscv: virt: Add optional AIA APLIC support to virt machine
target/riscv: fix inverted checks for ext_zb[abcs]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Previously, we would avoid setting CPU_INTERRUPT_HARD when interrupts
are disabled at a particular point in time, instead queuing the value
into cpu->irq_pending. This is more complicated than required.
Instead, set CPU_INTERRUPT_HARD any time there is a pending interrupt,
and exclusively check for interrupts disabled in nios2_cpu_exec_interrupt.
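The check then lives in one place; a sketch using the usual nios2
register and bit names:

    static bool nios2_cpu_exec_interrupt(CPUState *cs, int interrupt_request)
    {
        Nios2CPU *cpu = NIOS2_CPU(cs);
        CPUNios2State *env = &cpu->env;

        /* only take the interrupt if PIE is set, i.e. interrupts enabled */
        if ((interrupt_request & CPU_INTERRUPT_HARD) &&
            (env->regs[CR_STATUS] & CR_STATUS_PIE)) {
            cs->exception_index = EXCP_IRQ;
            nios2_cpu_do_interrupt(cs);
            return true;
        }
        return false;
    }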
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
It was never correct to be able to write to ipending.
Until the rest of the irq code is tidied, the read of
ipending will generate an "unnecessary" mask.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Create three separate functions for the three separate registers.
Avoid extra dispatch through op_helper.c.
Dispatch to the correct function in translation.
Clean up the ifdefs in wrctl.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
This will avoid having to replicate the check to additional cases.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
We can thus remove an ifdef covering the entire file.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
This functionality can be had via plugins, if desired.
In the meantime, it is unused code.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Co-authored-by: ardxwe <ardxwe@gmail.com>
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20220211043920.28981-7-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Co-authored-by: ardxwe <ardxwe@gmail.com>
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20220211043920.28981-2-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
While changing to the use of cfg_ptr, the conditions for REQUIRE_ZB[ABCS]
inadvertently became inverted and slipped through the initial testing (which
used RV64GC_XVentanaCondOps as a target).
This fixes the regression.
Tested against SPEC2017 w/ GCC 12 (prerelease) for RV64GC_zba_zbb_zbc_zbs.
Fixes: f2a32bec8f ("target/riscv: access cfg structure through DisasContext")
Signed-off-by: Philipp Tomsich <philipp.tomsich@vrull.eu>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20220203153946.2676353-1-philipp.tomsich@vrull.eu>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
When we're using KVM, the PSCI implementation is provided by the
kernel, but QEMU has to tell the guest about it via the device tree.
Currently we look at the KVM_CAP_ARM_PSCI_0_2 capability to determine
if the kernel is providing at least PSCI 0.2, but if the kernel
provides a newer version than that we will still only tell the guest
it has PSCI 0.2. (This is fairly harmless; it just means the guest
won't use newer parts of the PSCI API.)
The kernel exposes the specific PSCI version it is implementing via
the ONE_REG API; use this to report in the dtb that the PSCI
implementation is 1.0-compatible if appropriate. (The device tree
binding currently only distinguishes "pre-0.2", "0.2-compatible" and
"1.0-compatible".)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Akihiko Odaki <akihiko.odaki@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Message-id: 20220224134655.1207865-1-peter.maydell@linaro.org
This feature widens physical addresses (and intermediate physical
addresses for 2-stage translation) from 48 to 52 bits, when using
4k or 16k pages.
This introduces the DS bit to TCR_ELx, which is RES0 unless the
page size is enabled and supports LPA2, resulting in the effective
value of DS for a given table walk. The DS bit changes the format
of the page table descriptor slightly, moving the PS field out to
TCR so that all pages have the same sharability and repurposing
those bits of the page table descriptor for the highest bits of
the output address.
Do not yet enable FEAT_LPA2; we need extra plumbing to avoid
tickling an old kernel bug.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220301215958.157011-17-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We support 16k pages, but do not advertise that in ID_AA64MMFR0.
The value 0 in the TGRAN*_2 fields indicates that stage2 lookups defer
to the same support as stage1 lookups. This setting is deprecated, so
indicate support for all stage2 page sizes directly.
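A sketch of the corresponding ID register update for '-cpu max'
(FIELD_DP64 and the ID_AA64MMFR0 field names are QEMU's existing
definitions; a field value of 2 means "supported at stage2"):

    t = cpu->isar.id_aa64mmfr0;
    t = FIELD_DP64(t, ID_AA64MMFR0, TGRAN16, 1);    /* 16k pages supported */
    t = FIELD_DP64(t, ID_AA64MMFR0, TGRAN4_2, 2);   /* 4k at stage2 */
    t = FIELD_DP64(t, ID_AA64MMFR0, TGRAN16_2, 2);  /* 16k at stage2 */
    t = FIELD_DP64(t, ID_AA64MMFR0, TGRAN64_2, 2);  /* 64k at stage2 */
    cpu->isar.id_aa64mmfr0 = t;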
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220301215958.157011-16-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
For FEAT_LPA2, we will need other ARMVAParameters, which themselves
depend on the translation granule in use. We might as well validate
that the given TG matches; the architecture "does not require that
the instruction invalidates any entries" if this is not true.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220301215958.157011-15-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The shift of the BaseADDR field depends on the translation
granule in use.
Fixes: 84940ed825 ("target/arm: Add support for FEAT_TLBIRANGE")
Reported-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220301215958.157011-14-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Merge tlbi_aa64_range_get_length and tlbi_aa64_range_get_base,
returning a structure containing both results. Pass in the
ARMMMUIdx, rather than the digested two_ranges boolean.
This is in preparation for FEAT_LPA2, where the interpretation
of 'value' depends on the effective value of DS for the regime.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220301215958.157011-13-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
With FEAT_LPA2, rather than introducing translation level 4,
we introduce level -1, below the current level 0. Extend
arm_fi_to_lfsc to handle these faults.
Assert that this new translation level does not leak into
fault types for which it is not defined, which allows some
masking of fi->level to be removed.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220301215958.157011-12-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This feature widens physical addresses (and intermediate physical
addresses for 2-stage translation) from 48 to 52 bits, when using
64k pages. The only thing left at this point is to handle the
extra bits in the TTBR and in the table descriptors.
Note that PAR_EL1 and HPFAR_EL2 are nominally extended, but we don't
mask out the high bits when writing to those registers, so no changes
are required there.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220301215958.157011-11-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This feature is relatively small, as it applies only to
64k pages and thus requires no additional changes to the
table descriptor walking algorithm, only a change to the
minimum TSZ (which is the inverse of the maximum virtual
address space size).
Note that this feature widens VBAR_ELx, but we already
treat the register as being 64 bits wide.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220301215958.157011-10-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The original A.a revision of the AArch64 ARM required that we
force-extend the addresses in these registers from 49 bits.
This language has been loosened via a combination of IMPLEMENTATION
DEFINED and CONSTRAINED UNPREDICTABLE to allow consideration of
the entire aligned address.
This means that we do not have to consider whether or not FEAT_LVA
is enabled, and decide from which bit an address might need to be
extended.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220301215958.157011-9-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This field controls the output (intermediate) physical address size
of the translation process. V8 requires that we raise an AddressSize
fault if the page tables are programmed incorrectly, such that any
intermediate descriptor address, or the final translated address,
is out of range.
Add a PS field to ARMVAParameters, and properly compute outputsize
in get_phys_addr_lpae. Test the descaddr as extracted from TTBR
and from page table entries.
Restrict descaddrmask so that we won't raise the fault for v7.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220301215958.157011-8-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The macro is a bit more readable than the inlined computation.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220301215958.157011-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Pass down the width of the output address from translation.
For now this is still just PAMax, but a subsequent patch will
compute the correct value from TCR_ELx.{I}PS.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220301215958.157011-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We will shortly share parts of this function with other portions
of address translation.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220301215958.157011-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Without FEAT_LVA, the behaviour of programming an invalid value
is IMPLEMENTATION DEFINED. With FEAT_LVA, programming an invalid
minimum value requires a Translation fault.
It is most self-consistent to choose to generate the fault always.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220301215958.157011-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Set this as the kernel would, to 48 bits, to keep the computation
of the address space correct for PAuth.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220301215958.157011-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
handle_simd_shift_fpint_conv() was accidentally freeing the TCG
temporary tcg_fpstatus too early, before the last use of it. Move
the free down to where it belongs.
Signed-off-by: Wentao_Liang <Wentao_Liang_g@163.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
[PMM: cleaned up commit message]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Support the latest PSCI on TCG and HVF. A 64-bit function called from
AArch32 now returns NOT_SUPPORTED, which is necessary to adhere to SMC
Calling Convention 1.0. It is still not compliant with SMCCC 1.3,
since the implementations do not provide all of the mandatory functions.
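Condensed, the new rule amounts to the following sketch in the common
PSCI call handling (constant names from QEMU's kvm-consts.h; the real
patch routes this through the normal error path):

    if (!is_a64(env) && (param[0] & QEMU_PSCI_0_2_64BIT)) {
        /* SMC64/HVC64 function id invoked from AArch32 */
        ret = QEMU_PSCI_RET_NOT_SUPPORTED;
    }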
Signed-off-by: Akihiko Odaki <akihiko.odaki@gmail.com>
Message-id: 20220213035753.34577-1-akihiko.odaki@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
[PMM: update MISMATCH_CHECK checks on PSCI_VERSION macros to match]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Refactor VSX_SCALAR_CMP_DP, changing its name to VSX_SCALAR_CMP and
prepare the helper to be used for quadword comparisons.
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220225210936.1749575-41-matheus.ferst@eldorado.org.br>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
xscmpnedp was added in ISA v3.0 but removed in v3.0B. This patch
removes this instruction as it was not in the final version of v3.0.
Signed-off-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Acked-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20220225210936.1749575-40-matheus.ferst@eldorado.org.br>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Implement the following PowerISA v3.1 instructions:
vcmpgtsq: Vector Compare Greater Than Signed Quadword
vcmpgtuq: Vector Compare Greater Than Unsigned Quadword
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20220225210936.1749575-13-matheus.ferst@eldorado.org.br>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Move the following instructions to decodetree:
vextsb2w: Vector Extend Sign Byte To Word
vextsh2w: Vector Extend Sign Halfword To Word
vextsb2d: Vector Extend Sign Byte To Doubleword
vextsh2d: Vector Extend Sign Halfword To Doubleword
vextsw2d: Vector Extend Sign Word To Doubleword
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Lucas Coutinho <lucas.coutinho@eldorado.org.br>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20220225210936.1749575-8-matheus.ferst@eldorado.org.br>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Based on [1] by Lijun Pan <ljp@linux.ibm.com>, which was never merged
into master.
[1]: https://lists.gnu.org/archive/html/qemu-ppc/2020-07/msg00419.html
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20220225210936.1749575-7-matheus.ferst@eldorado.org.br>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Based on [1] by Lijun Pan <ljp@linux.ibm.com>, which was never merged
into master.
[1]: https://lists.gnu.org/archive/html/qemu-ppc/2020-07/msg00419.html
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20220225210936.1749575-6-matheus.ferst@eldorado.org.br>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Changed vmulhuw, vmulhud, vmulhsw, vmulhsd to not
use helpers.
Signed-off-by: Lucas Mateus Castro (alqotel) <lucas.araujo@eldorado.org.br>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220225210936.1749575-5-matheus.ferst@eldorado.org.br>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
New macros that add FLAGS and FLAGS2 checking were added for
both TRANS and TRANS64.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Luis Pires <luis.pires@eldorado.org.br>
[ferst: - TRANS_FLAGS2 instead of TRANS_FLAGS_E
- Use the new macros in load/store vector insns ]
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20220225210936.1749575-2-matheus.ferst@eldorado.org.br>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
This patch adds support for the EBB exceptions that are triggered by
Performance Monitor alerts. This happens when a Performance Monitor
alert occurs and MMCR0_EBE, BESCR_PME and BESCR_GE are set.
fire_PMC_interrupt() will execute the raise_ebb_perfm_exception() helper
which will check for MMCR0_EBE, BESCR_PME and BESCR_GE bits. If all bits
are set, do_ebb() will attempt to trigger a PERFM EBB event.
If the EBB facility is enabled in both FSCR and HFSCR we consider that
the EBB is valid and set BESCR_PMEO. After that, if we're running in
problem state, fire a POWERPC_EXCP_PERFM_EBB immediately. Otherwise we'll
queue a PPC_INTERRUPT_EBB.
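In rough pseudocode (SPR and bit names as used in the description above;
not a verbatim excerpt of the helper):

    if ((env->spr[SPR_POWER_MMCR0] & MMCR0_EBE) &&
        (env->spr[SPR_BESCR] & BESCR_PME) &&
        (env->spr[SPR_BESCR] & BESCR_GE)) {
        /* the FSCR/HFSCR facility checks described above happen in do_ebb() */
        do_ebb(env, POWERPC_EXCP_PERFM_EBB);
    }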
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220225101140.1054160-5-danielhb413@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
PPC_INTERRUPT_EBB is a new interrupt that will be used to deliver EBB
exceptions that had to be postponed because the thread wasn't in problem
state at the time the event-based branch was supposed to occur.
ISA 3.1 also defines two EBB exceptions: Performance Monitor EBB
exception and External EBB exception. They are being added as
POWERPC_EXCP_PERFM_EBB and POWERPC_EXCP_EXTERNAL_EBB.
PPC_INTERRUPT_EBB will check BESCR bits to see the EBB type that
occurred and trigger the appropriate exception. Both exceptions are
doing the same thing in this first implementation: clear BESCR_GE and
enter the branch with env->nip retrieved from SPR_EBBHR.
The checks being done by the interrupt code are msr_pr and BESCR_GE
states. All other checks (EBB facility check, BESCR_PME bit, specific
bits related to the event type) must be done beforehand.
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20220225101140.1054160-4-danielhb413@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
There are still PMU exclusive bits to handle in fire_PMC_interrupt()
before implementing the EBB support. Let's finalize it now to avoid
dealing with PMU and EBB logic at the same time in the next patches.
fire_PMC_interrupt() will fire a Performance Monitor alert depending on
MMCR0_PMAE. If we are required to freeze the timers (MMCR0_FCECE) we'll
also need to update summaries and delete the existing overflow timers.
In all cases we're going to update the cycle counters.
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20220225101140.1054160-3-danielhb413@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
This is a TCG-exclusive helper. Gating it with CONFIG_TCG and changing
meson.build accordingly will prevent problems with --disable-tcg and
--disable-linux-user later on.
We're also changing the uses of !kvm_enabled() to tcg_enabled() to avoid
adding "defined(CONFIG_TCG)" ifdefs, since tcg_enabled() will default
to false with --disable-tcg and the block will always be
skipped.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20220225101140.1054160-2-danielhb413@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The dh_alias redirect is intended to handle TCG types as distinguished
from C types. TCG does not distinguish signed int from unsigned int,
because they are the same size. However, we need to retain this
distinction for dh_typecode, lest we fail to extend abi types properly
for the host call parameters.
This bug was detected when running the 'arm' emulator on an s390
system. The s390 uses TCG_TARGET_EXTEND_ARGS which triggers code
in tcg_gen_callN to extend 32 bit values to 64 bits; the incorrect
sign data in the typemask for each argument caused the values to be
extended as unsigned values.
This simple program exhibits the problem:
#include <stdio.h>
#include <stdlib.h>

static volatile int num = -9;
static volatile int den = -5;

int main(void)
{
    int quo = num / den;   /* signed division: -9 / -5 == 1 */
    printf("num %d den %d quo %d\n", num, den, quo);
    exit(0);
}
When run on the broken qemu, this results in:
num -9 den -5 quo 0
The correct result is:
num -9 den -5 quo 1
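The distinction matters because zero- and sign-extending a negative
32-bit argument produce different 64-bit register values, e.g.:

    int32_t v = -9;
    uint64_t zext = (uint32_t)v;  /* 0x00000000fffffff7 - the broken behaviour */
    int64_t  sext = v;            /* 0xfffffffffffffff7 - what the helper expects */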
Fixes: 7319d83a73 ("tcg: Combine dh_is_64bit and dh_is_signed to dh_typecode")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/876
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reported-by: Christian Ehrhardt <christian.ehrhardt@canonical.com>
Tested-by: Christian Ehrhardt <christian.ehrhardt@canonical.com>
Tested-by: Keith Packard <keithp@keithp.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
TCG implements everything we need to run basic z15 OS+software
Signed-off-by: David Miller <dmiller423@gmail.com>
Message-Id: <20220223223117.66660-3-dmiller423@gmail.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
implements:
AND WITH COMPLEMENT (NCRK, NCGRK)
NAND (NNRK, NNGRK)
NOT EXCLUSIVE OR (NXRK, NXGRK)
NOR (NORK, NOGRK)
OR WITH COMPLEMENT (OCRK, OCGRK)
SELECT (SELR, SELGR)
SELECT HIGH (SELFHR)
MOVE RIGHT TO LEFT (MVCRL)
POPULATION COUNT (POPCNT)
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/737
Signed-off-by: David Miller <dmiller423@gmail.com>
Message-Id: <20220223223117.66660-2-dmiller423@gmail.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Thomas Huth <thuth@redhat.com>
We previously loaded into in1, but in1 is not filled during
disassembly and is hence always zero. This leads to an assertion failure:
qemu-system-s390x: /home/nrb/qemu/include/tcg/tcg.h:654: temp_idx:
Assertion `n >= 0 && n < tcg_ctx->nb_temps' failed.
Instead, use in2_la2_m64a to load from storage into in2 and pass that to
the helper, which matches what we already do for SCKC.
This fixes the SCK test I sent here under TCG:
<https://www.spinics.net/lists/kvm/msg265169.html>
Fixes: 9dc67537 ("s390x/tcg: implement SET CLOCK ")
Suggested-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Nico Boehr <nrb@linux.ibm.com>
Message-Id: <20220126084201.774457-1-nrb@linux.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
* Some small fixes for the qtests
* Misc header cleanups by Philippe
Merge remote-tracking branch 'remotes/thuth-gitlab/tags/pull-request-2022-02-21' into staging
* Improve virtio-net failover test
* Some small fixes for the qtests
* Misc header cleanups by Philippe
# gpg: Signature made Mon 21 Feb 2022 11:40:37 GMT
# gpg: using RSA key 27B88847EEE0250118F3EAB92ED9D774FE702DB5
# gpg: issuer "thuth@redhat.com"
# gpg: Good signature from "Thomas Huth <th.huth@gmx.de>" [full]
# gpg: aka "Thomas Huth <thuth@redhat.com>" [full]
# gpg: aka "Thomas Huth <huth@tuxfamily.org>" [full]
# gpg: aka "Thomas Huth <th.huth@posteo.de>" [unknown]
# Primary key fingerprint: 27B8 8847 EEE0 2501 18F3 EAB9 2ED9 D774 FE70 2DB5
* remotes/thuth-gitlab/tags/pull-request-2022-02-21: (25 commits)
hw/tricore: Remove unused and incorrect header
hw/m68k/mcf: Add missing 'exec/hwaddr.h' header
exec/exec-all: Move 'qemu/log.h' include in units requiring it
softmmu/runstate: Clean headers
linux-user: Add missing "qemu/timer.h" include
target: Add missing "qemu/timer.h" include
core/ptimers: Remove unnecessary 'sysemu/cpus.h' include
exec/ramblock: Add missing includes
qtest: Add missing 'hw/qdev-core.h' include
hw/acpi/memory_hotplug: Remove unused 'hw/acpi/pc-hotplug.h' header
hw/remote: Add missing include
hw/tpm: Clean includes
scripts: Remove the old switch-timer-api script
tests/qtest: failover: migration abort test with failover off
tests/qtest: failover: test migration if the guest doesn't support failover
tests/qtest: failover: check migration with failover off
tests/qtest: failover: check missing guest feature
tests/qtest: failover: check the feature is correctly provided
tests/qtest: failover: use a macro for check_one_card()
tests/qtest: failover: clean up pathname of tests
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The last use of ENV_OFFSET was removed in 5e1401969b
("cpu: Move icount_decr to CPUNegativeOffsetState");
the commit of target/rx came in just afterward.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Yoshinori Sato <ysato@users.sourceforge.jp>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220203001252.37982-1-f4bug@amsat.org>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
There is no 'vr' field in AVRCPUClass.
Likely a copy/paste typo from CRISCPUClass ;)
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220122001036.83267-1-f4bug@amsat.org>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
The "hardware version" machinery (qemu_set_hw_version(),
qemu_hw_version(), and the QEMU_HW_VERSION define) is used by fewer
than 10 files. Move it out from osdep.h into a new
qemu/hw-version.h.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220208200856.3558249-6-peter.maydell@linaro.org
Currently we don't allow guests under hvf to use the PAuth extension,
because we don't have any special code to handle it, and therefore
in arm_cpu_pauth_finalize() we sanitize the ID_AA64ISAR1 value
the guest sees, clearing the PAuth-related fields.
Add support for this in the same way that KVM does it, by defaulting
to "PAuth enabled" if the host CPU has it and allowing the user to
disable it via '-cpu pauth=no' on the command line.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Alexander Graf <agraf@csgraf.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220204165506.2846058-7-peter.maydell@linaro.org
Currently when using hvf we mishandle '-cpu max': we fall through to
the TCG version of its initfn, which then sets a lot of feature bits
that the real host CPU doesn't have. The hvf accelerator code then
exposes these bogus ID register values to the guest because it
doesn't check that the host really has the features.
Make '-cpu host' be like '-cpu max' for hvf, as we do with kvm.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Alexander Graf <agraf@csgraf.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220204165506.2846058-6-peter.maydell@linaro.org
Now that the if() branch of the condition in aarch64_max_initfn()
returns early, we don't need to keep the rest of the code in
the function inside an else block. Remove the else, unindenting
that code.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Alexander Graf <agraf@csgraf.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220204165506.2846058-5-peter.maydell@linaro.org
Currently for KVM the intention is that '-cpu max' and '-cpu host'
are the same thing, but because we did this with two separate
pieces of code they have got a little bit out of sync. Specifically,
'max' has a 'sve-max-vq' property, and 'host' does not.
Bring the two together by having the initfn for 'max' actually
call the initfn for 'host'. This will result in 'max' no longer
exposing the 'sve-max-vq' property when using KVM.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Alexander Graf <agraf@csgraf.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220204165506.2846058-4-peter.maydell@linaro.org
Use the aarch64_cpu_register() machinery to register the 'host' CPU
type. This doesn't gain us anything functionally, but it does mean
that the code for initializing it looks more like that for the other
CPU types, in that its initfn then doesn't need to call
arm_cpu_post_init() (because aarch64_cpu_instance_init() does that
for it).
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Alexander Graf <agraf@csgraf.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220204165506.2846058-3-peter.maydell@linaro.org
Now that KVM has dropped AArch32 host support, the 'host' CPU type is
always AArch64, and we can move it to cpu64.c. This move will allow
us to share code between it and '-cpu max', which should behave
the same as '-cpu host' when using KVM or HVF.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Alexander Graf <agraf@csgraf.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220204165506.2846058-2-peter.maydell@linaro.org
Recent Linux versions added support to read ID_AA64ISAR2_EL1. On M1,
those reads trap into QEMU which handles them as faults.
However, AArch64 ID registers should always read as RES0. Let's
handle them accordingly.
This fixes booting Linux 5.17 guests.
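A sketch of the effect in the sysreg read path (the case label below is
illustrative; the actual patch covers the whole unimplemented ID register
range rather than a single named register):

    case SYSREG_ID_AA64ISAR2_EL1:   /* illustrative label */
        val = 0;                    /* unimplemented ID registers read as zero */
        break;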
Cc: qemu-stable@nongnu.org
Reported-by: Ivan Babrou <ivan@cloudflare.com>
Signed-off-by: Alexander Graf <agraf@csgraf.de>
Message-id: 20220209124135.69183-2-agraf@csgraf.de
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We are parsing the syndrome field for sysregs in multiple places across
the hvf code, but repeat shift/mask operations with hard-coded constants
every time. This is an error-prone approach and makes it harder to reason
about the correctness of these operations.
Let's introduce macros that allow us to unify the constants used as well
as create new helpers to extract fields from the sysreg value.
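For instance, something along these lines (the shift value follows the
ESR ISS layout for MSR/MRS traps; treat the exact macro names as
illustrative):

    #define SYSREG_OP0_SHIFT   20
    #define SYSREG_OP0_MASK    0x3
    #define SYSREG_OP0(sysreg) (((sysreg) >> SYSREG_OP0_SHIFT) & SYSREG_OP0_MASK)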
Suggested-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Alexander Graf <agraf@csgraf.de>
Reviewed-by: Cameron Esfahani <dirty@apple.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20220209124135.69183-1-agraf@csgraf.de
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Many files use "qemu/log.h" declarations but neglect to include
it (they inherit it via "exec/exec-all.h"). "exec/exec-all.h" is
a core component and shouldn't be used that way. Move the
"qemu/log.h" inclusion locally to each unit requiring it.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20220207082756.82600-10-f4bug@amsat.org>
Signed-off-by: Thomas Huth <thuth@redhat.com>
timer_new_ns(), cpu_get_host_ticks() and NANOSECONDS_PER_SECOND are
declared in "qemu/timer.h".
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20220207082756.82600-8-f4bug@amsat.org>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Let's leave cpu_init with just generic CPU initialization and
QOM-related functions.
The rest of the SPR registration functions will be moved in the
following patches along with the code that uses them. These are only
the commonly used ones.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20220216162426.1885923-28-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
These will need to be accessed from other files once we move the CPUs
code to separate files.
The check_pow_hid0 and check_pow_hid0_74xx are too specific to be
moved to a header so I'll deal with them later when splitting this
code between the multiple CPU families.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20220216162426.1885923-27-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Put the SPR registration macros in a header that is accessible outside
of cpu_init.c. The following patches will move CPU-specific code to
separate files and will need to access it.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20220216162426.1885923-26-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The following patches will move CPU-specific code into separate files,
so expose the most used SPR registration functions:
register_sdr1_sprs | 22 callers
register_low_BATs | 20 callers
register_non_embedded_sprs | 19 callers
register_high_BATs | 10 callers
register_thrm_sprs | 8 callers
register_usprgh_sprs | 6 callers
register_6xx_7xx_soft_tlb | only 3 callers, but it helps to
keep the soft TLB code consistent.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20220216162426.1885923-25-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The initial intent of the spr_tcg header was to expose the spr_read|write
callbacks that are only used by TCG code. However, although these
routines are TCG-specific, the KVM code needs access to env->sprs,
whose creation is currently coupled to the callback registration.
We are probably not going to decouple SPR creation and TCG callback
registration any time soon, so let's rename the header to spr_common
to accommodate the register_*_sprs functions that will be moved out of
cpu_init.c in the following patches.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20220216162426.1885923-24-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
This function registers just one SPR and has only two callers, so open
code it.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20220216162426.1885923-23-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The important part of this function is that it applies to non-embedded
CPUs, not that it also applies to the 601. We removed support for the
601 anyway, so rename this function.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20220216162426.1885923-22-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The init_proc_755 function is identical to the 745 one except for the
755-specific registers. I think it is worth it to make them share
code.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20220216162426.1885923-21-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
init_proc_603 is defined after init_proc_e300, so I had to move some
code around to make it work.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20220216162426.1885923-19-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
This is done to improve init_proc readability and to make subsequent
patches that touch this code a bit cleaner.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20220216162426.1885923-18-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
This is done to improve init_proc readability and to make subsequent
patches that touch this code a bit cleaner.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20220216162426.1885923-17-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
This is just to have the 755-specific registers contained in a function,
instead of leaving them open-coded in init_proc_755. It makes init_proc
easier to read and keeps later patches that touch this code a bit
cleaner.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20220216162426.1885923-16-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The 745 and 755 can share the HID registration, so move it all into
register_755_sprs, which applies for both CPUs.
Also rename that function to register_745_sprs, since the 745 is the
earliest of the two. This will help with separating 755-specific
registers in a subsequent patch.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20220216162426.1885923-14-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Move some of the 440 registers that are being repeated in the 440*
CPUs to register_440_sprs.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20220216162426.1885923-11-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
We're considering these two to be from different CPU families, so
duplicate some code to keep them separate.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20220216162426.1885923-10-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
We're considering these two to be in different CPU families (6xx and
7xx), so keep their SPR registration separate.
The code was copied into register_G2_sprs and the common function was
renamed to apply only to the 755.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20220216162426.1885923-9-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Make sure that every register_*_sprs function only has calls to
spr_register* to register individual SPRs. Do not allow nesting. This
makes the code easier to follow and a look at init_proc_* should
suffice to know what SPRs a CPU has.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20220216162426.1885923-6-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Now that the 601 was removed, all of our CPUs have a timebase, so that
can be moved into the common function.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20220216162426.1885923-5-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The top level init_proc calls register_generic_sprs but also registers
some other SPRs outside of that function. Let's group everything into
a single place.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20220216162426.1885923-4-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The G2LE CPU initialization code is the same as the G2. Use the latter
for both.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20220216162426.1885923-3-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The /* XXX : not implemented */ comments all over cpu_init are
confusing and ambiguous.
Do they mean not implemented by QEMU, not implemented in a specific
access mode? Not implemented by the CPU? Do they apply to just the
register right after or to a whole block? Do they mean we have an
action to take in the future to implement these? Are they only
informative?
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20220216162426.1885923-2-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Introduce virtual hypervisor methods that can support a "Nested KVM HV"
implementation using the bare metal 2-level radix MMU, and using HV
exceptions to return from H_ENTER_NESTED (rather than cause interrupts).
HV exceptions can now be raised in the TCG spapr machine when running a
nested KVM HV guest. The main ones are the lev==1 syscall, the hdecr,
hdsi and hisi, hv fu and hv emu, and h_virt external interrupts.
HV exceptions are intercepted in the exception handler code and instead
of causing interrupts in the guest and switching the machine to HV mode,
they go to the vhyp where it may exit the H_ENTER_NESTED hcall with the
interrupt vector number as the return value, as required by the hcall API.
Address translation is provided by the 2-level page table walker that is
implemented for the bare metal radix MMU. The partition scope page table
is pointed to the L1's partition scope by the get_pate vhc method.
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20220216102545.1808018-9-npiggin@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
This moves the logic to reset the QEMU exception state into its own
function.
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
[ clg: checkpatch fixes ]
Message-Id: <20220216102545.1808018-8-npiggin@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The virtual hypervisor currently always intercepts and handles
hypercalls but with a future change this will not always be the case.
Add a helper for the test so the logic is abstracted from the mechanism.
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Message-Id: <20220216102545.1808018-7-npiggin@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
In preparation for implementing a full partition table option for
vhyp, update the get_pate method to take an lpid and return a
success/fail indicator.
The spapr implementation currently just asserts that lpid is always 0
and always returns success.
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
[ clg: checkpatch fixes ]
Message-Id: <20220216102545.1808018-6-npiggin@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The radix on vhyp MMU uses a single-level radix table walk, with the
partition scope mapping provided by the flat QEMU machine memory.
A subsequent change will use the two-level radix walk on vhyp in some
situations, so provide a helper which can abstract that logic.
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Message-Id: <20220216102545.1808018-5-npiggin@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Invalid or missing partition table entry exceptions should cause HV
interrupts. HDSISR is set to bad MMU config, which is consistent with
the ISA and experimentally matches what POWER9 generates.
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
[ clg: checkpatch fixes ]
Message-Id: <20220216102545.1808018-2-npiggin@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
- add PTE_PBMT bits: It uses two PTE bits, but otherwise has no effect on QEMU, since QEMU is sequentially consistent and doesn't model PMAs currently
- add PTE_PBMT bit check for inner PTE
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Anup Patel <anup@brainfault.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20220204022658.18097-6-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
- sinval.vma, hinval.vvma and hinval.gvma do the same as sfence.vma, hfence.vvma and hfence.gvma, apart from the extension check
- sfence.w.inval and sfence.inval.ir do nothing other than the extension check
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Anup Patel <anup@brainfault.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20220204022658.18097-5-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
- add PTE_N bit
- add PTE_N bit check for inner PTE
- update address translation to support 64KiB continuous region (napot_bits = 4)
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Anup Patel <anup@brainfault.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20220204022658.18097-4-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
For non-leaf PTEs, the D, A, and U bits are reserved for future standard use.
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Anup Patel <anup@brainfault.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20220204022658.18097-3-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The highest bits of the PTE have been used for svpbmt (see [1], [2]), so
we need to ignore them. They cannot be part of the ppn.
1: The RISC-V Instruction Set Manual, Volume II: Privileged Architecture
4.4 Sv39: Page-Based 39-bit Virtual-Memory System
4.5 Sv48: Page-Based 48-bit Virtual-Memory System
2: https://github.com/riscv/virtual-memory/blob/main/specs/663-Svpbmt-diff.pdf
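Conceptually (the attribute mask name below is hypothetical; PTE_PPN_SHIFT
is QEMU's existing definition):

    /* drop the high attribute bits (Svnapot N, Svpbmt PBMT) before
     * extracting the physical page number */
    ppn = (pte & ~(target_ulong)PTE_ATTR_MASK) >> PTE_PPN_SHIFT;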
Signed-off-by: Guo Ren <ren_guo@c-sky.com>
Reviewed-by: Liu Zhiwei <zhiwei_liu@c-sky.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Cc: Bin Meng <bmeng.cn@gmail.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20220204022658.18097-2-liweiwei@iscas.ac.cn>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
We add "x-aia" command-line option for RISC-V HART using which
allows users to force enable CPU AIA CSRs without changing the
interrupt controller available in RISC-V machine.
Signed-off-by: Anup Patel <anup.patel@wdc.com>
Signed-off-by: Anup Patel <anup@brainfault.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Message-id: 20220204174700.534953-18-anup@brainfault.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The AIA specification defines IMSIC interface CSRs for easy access
to the per-HART IMSIC registers without using indirect xiselect and
xireg CSRs. This patch implements the AIA IMSIC interface CSRs.
Signed-off-by: Anup Patel <anup.patel@wdc.com>
Signed-off-by: Anup Patel <anup@brainfault.org>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Message-id: 20220204174700.534953-16-anup@brainfault.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The AIA specification introduces new [m|s|vs]topi CSRs for
reporting pending local IRQ number and associated IRQ priority.
Signed-off-by: Anup Patel <anup.patel@wdc.com>
Signed-off-by: Anup Patel <anup@brainfault.org>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Message-id: 20220204174700.534953-14-anup@brainfault.org
[ Changed by AF:
- Fixup indentation
]
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The AIA specification adds interrupt filtering support for M-mode
and HS-mode. Using AIA interrupt filtering, M-mode and HS-mode can
take local interrupt 13 or above and selectively inject the same local
interrupt into lower privilege modes.
At the moment we don't have any local interrupts above 12, so we add a
dummy implementation (i.e. reads return zero and writes are ignored) of
the AIA interrupt filtering CSRs.
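The dummy handlers amount to the usual read-zero/write-ignore pattern,
roughly (sketch; csr.c already has helpers along these lines):

static RISCVException read_zero(CPURISCVState *env, int csrno,
                                target_ulong *val)
{
    *val = 0;
    return RISCV_EXCP_NONE;
}

static RISCVException write_ignore(CPURISCVState *env, int csrno,
                                   target_ulong val)
{
    return RISCV_EXCP_NONE;
}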
Signed-off-by: Anup Patel <anup.patel@wdc.com>
Signed-off-by: Anup Patel <anup@brainfault.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Message-id: 20220204174700.534953-13-anup@brainfault.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The AIA hvictl and hviprioX CSRs allow the hypervisor to control
interrupts visible at VS-level. This patch implements the AIA hvictl
and hviprioX CSRs.
Signed-off-by: Anup Patel <anup.patel@wdc.com>
Signed-off-by: Anup Patel <anup@brainfault.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Message-id: 20220204174700.534953-12-anup@brainfault.org
[ Changes by AF:
- Fix possible uninitialised variable error in rmw_sie()
]
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The AIA specification adds new CSRs for RV32 so that a RISC-V hart can
support 64 local interrupts on both RV32 and RV64.
Signed-off-by: Anup Patel <anup.patel@wdc.com>
Signed-off-by: Anup Patel <anup@brainfault.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Message-id: 20220204174700.534953-11-anup@brainfault.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The AIA spec defines a programmable 8-bit priority for each local
interrupt at M-level, S-level and VS-level, so we extend local
interrupt processing to consider AIA interrupt priorities. The AIA CSRs
which help software configure local interrupt priorities will be added
by subsequent patches.
Signed-off-by: Anup Patel <anup.patel@wdc.com>
Signed-off-by: Anup Patel <anup@brainfault.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20220204174700.534953-10-anup@brainfault.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The AIA device emulation (such as AIA IMSIC) should be able to set
(or provide) AIA ireg read-modify-write callback for each privilege
level of a RISC-V HART.
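A sketch of the shape of the registration interface described above
(the exact prototype is an assumption based on this description):

/* Device emulation (e.g. an IMSIC) registers one read-modify-write
 * callback per privilege level of the hart. */
void riscv_cpu_set_aia_ireg_rmw_fn(CPURISCVState *env, uint32_t priv,
                                   int (*rmw_fn)(void *arg,
                                                 target_ulong reg,
                                                 target_ulong *val,
                                                 target_ulong new_val,
                                                 target_ulong write_mask),
                                   void *rmw_fn_arg);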
Signed-off-by: Anup Patel <anup.patel@wdc.com>
Signed-off-by: Anup Patel <anup@brainfault.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Message-id: 20220204174700.534953-9-anup@brainfault.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The RISC-V AIA specification extends RISC-V local interrupts and
introduces new CSRs. This patch adds defines for the new AIA CSRs.
Signed-off-by: Anup Patel <anup.patel@wdc.com>
Signed-off-by: Anup Patel <anup@brainfault.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Message-id: 20220204174700.534953-8-anup@brainfault.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
We define a CPU feature for AIA CSR support in RISC-V CPUs which
can be set by machine/device emulation. The RISC-V CSR emulation
will also check this feature for emulating AIA CSRs.
Signed-off-by: Anup Patel <anup.patel@wdc.com>
Signed-off-by: Anup Patel <anup@brainfault.org>
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Message-id: 20220204174700.534953-7-anup@brainfault.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The machine or device emulation should be able to force-set certain
CPU features because:
1) We can have certain CPU features which are in general optional
but implemented by the RISC-V CPUs on the machine.
2) We can have devices which require a certain CPU feature. For example,
AIA IMSIC devices expect AIA CSRs implemented by RISC-V CPUs.
Signed-off-by: Anup Patel <anup.patel@wdc.com>
Signed-off-by: Anup Patel <anup@brainfault.org>
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Message-id: 20220204174700.534953-6-anup@brainfault.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Guest external interrupts from an interrupt controller are delivered
only when the Guest/VM is running (i.e. V=1). This means any guest
external interrupt which is triggered while the Guest/VM is not running
(i.e. V=0) will be missed on QEMU, resulting in the Guest responding
sluggishly to serial console input and other I/O events.
To solve this, we check for and inject pending interrupts after setting
V=1.
Signed-off-by: Anup Patel <anup.patel@wdc.com>
Signed-off-by: Anup Patel <anup@brainfault.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Message-id: 20220204174700.534953-5-anup@brainfault.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The hgeie and hgeip CSRs are required for emulating an external
interrupt controller capable of injecting virtual external interrupts
into a Guest/VM running at VS-level.
Signed-off-by: Anup Patel <anup.patel@wdc.com>
Signed-off-by: Anup Patel <anup@brainfault.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Message-id: 20220204174700.534953-4-anup@brainfault.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
A hypervisor can optionally take guest external interrupts using the
SGEIP bit of the hip and hie CSRs.
Signed-off-by: Anup Patel <anup.patel@wdc.com>
Signed-off-by: Anup Patel <anup@brainfault.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Message-id: 20220204174700.534953-3-anup@brainfault.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The guest should be able to set the vill bit as part of vsetvl.
Currently we may set env->vill to 1 in the vsetvl helper, but there
is nowhere that we set it to 0, so once it transitions to 1 it's stuck
there until the system is reset.
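A sketch of the intended helper behaviour (variable names other than
env->vill and env->vtype are illustrative):

if (vtype_is_supported) {
    env->vtype = new_vtype;
    env->vill = 0;          /* clear vill on a successful vsetvl */
} else {
    env->vill = 1;          /* only set vill when vtype is unsupported */
    env->vtype = 0;
    env->vl = 0;
}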
Signed-off-by: LIU Zhiwei <zhiwei_liu@c-sky.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20220201064601.41143-1-zhiwei_liu@c-sky.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
This adds the decoder and translation for the XVentanaCondOps custom
extension (vendor-defined by Ventana Micro Systems), which is
documented at https://github.com/ventanamicro/ventana-custom-extensions/releases/download/v1.0.0/ventana-custom-extensions-v1.0.0.pdf
This commit also adds a guard function (has_XVentanaCondOps_p) and
registers the decoder function in the table of decoders, enabling
support for the XVentanaCondOps extension.
Signed-off-by: Philipp Tomsich <philipp.tomsich@vrull.eu>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20220202005249.3566542-7-philipp.tomsich@vrull.eu>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
To split up the decoder into multiple functions (both to support
vendor-specific opcodes in separate files and to simplify maintenance
of orthogonal extensions), this changes decode_op to iterate over a
table of decoders predicated on guard functions.
This commit only adds the new structure and the table, allowing for
the easy addition of additional decoders in the future.
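The table has roughly the following shape (struct layout and function
names are illustrative, following the series description):

/* Each entry pairs a guard predicate with a decoder; decode_opc walks
 * the table and uses the first decoder whose guard accepts the current
 * CPU configuration. */
static const struct {
    bool (*guard_func)(DisasContext *);
    bool (*decode_func)(DisasContext *, uint32_t);
} decoders[] = {
    { always_true_p,         decode_insn32 },
    { has_XVentanaCondOps_p, decode_XVentanaCondOps },
};

for (size_t i = 0; i < ARRAY_SIZE(decoders); i++) {
    if (decoders[i].guard_func(ctx) &&
        decoders[i].decode_func(ctx, opcode)) {
        return;
    }
}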
Signed-off-by: Philipp Tomsich <philipp.tomsich@vrull.eu>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20220202005249.3566542-6-philipp.tomsich@vrull.eu>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The Zb[abcs] support code still uses the RISCV_CPU macros to access
the configuration information (i.e., check whether an extension is
available/enabled). Now that we provide this information directly
from DisasContext, we can access this directly via the cfg_ptr field.
Signed-off-by: Philipp Tomsich <philipp.tomsich@vrull.eu>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220202005249.3566542-5-philipp.tomsich@vrull.eu>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The implementation in trans_{rvi,rvv,rvzfh}.c.inc accesses the shallow
copies (in DisasContext) of some of the elements available in the
RISCVCPUConfig structure. This commit redirects accesses to use the
cfg_ptr copied into DisasContext and removes the shallow copies.
Signed-off-by: Philipp Tomsich <philipp.tomsich@vrull.eu>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220202005249.3566542-4-philipp.tomsich@vrull.eu>
[ Changes by AF:
- Fixup checkpatch failures
]
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
As the number of extensions grows, copying them individually into the
DisasContext will scale less and less well; instead, we populate a
pointer to the RISCVCPUConfig structure in the DisasContext.
This adds an extra indirection when checking for the availability of
an extension (compared to copying the fields into DisasContext).
While not a performance problem today, we can always (shallow) copy
the entire structure into the DisasContext (instead of putting a
pointer to it) if this is ever deemed necessary.
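As a sketch (the cfg_ptr member name is per this series; the rest of
the context structure is elided):

struct DisasContext {
    /* ... other per-translation state ... */
    const RISCVCPUConfig *cfg_ptr;   /* set once at translation start */
};

/* An extension check is then one extra pointer dereference: */
if (!ctx->cfg_ptr->ext_zbb) {
    return false;
}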
Signed-off-by: Philipp Tomsich <philipp.tomsich@vrull.eu>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220202005249.3566542-3-philipp.tomsich@vrull.eu>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Philipp Tomsich <philipp.tomsich@vrull.eu>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220202005249.3566542-2-philipp.tomsich@vrull.eu>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
The addition of uxl support in gdbstub added a few checks on the
maximum register length, but omitted MXL_RV128, an experimental
feature. This patch makes rv128 behave as rv64, as previously.
Signed-off-by: Frédéric Pétrot <frederic.petrot@univ-grenoble-alpes.fr>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: LIU Zhiwei <zhiwei_liu@c-sky.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20220124202456.420258-1-frederic.petrot@univ-grenoble-alpes.fr
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
I think these have been wrong since f193c7979c (do not depend on
thunk.h - more log items). Fix them so as not to confuse other
debugging.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220204204335.1689602-26-alex.bennee@linaro.org>
ISA v3.1 changed the behavior of some VSX instructions by specifying
what the other words/doublewords of the result should contain when the
result occupies only one word/doubleword. e.g. xsmaxdp operates on
doubleword 0 and saves the result also in doubleword 0.
Before, the second doubleword result was undefined according to the
ISA, but now it's stated that it should be zeroed.
Even though the result was undefined before, hardware implementing
these instructions already filled these fields with 0s. Changing every
ISA version in QEMU to this behavior makes the results match what
happens in hardware.
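For a scalar DP operation such as xsmaxdp this boils down to something
like the following in the helper (VsrD() is the existing accessor; the
rest of the helper is elided):

/* The result goes in doubleword 0; doubleword 1 is now architecturally
 * zero rather than undefined, matching existing hardware. */
xt->VsrD(0) = result;
xt->VsrD(1) = 0;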
Signed-off-by: Víctor Colombo <victor.colombo@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220204181944.65063-1-victor.colombo@eldorado.org.br>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
We don't really need to check the exception model while applying AIL;
we can check the lpcr_mask for the presence of LPCR_AIL/LPCR_HAIL.
This removes one more instance of passing the exception model ID
around.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220207183036.1507882-5-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
We currently abort QEMU during the dispatch of an interrupt if we try
to set MSR_HV without having MSR_HVB in the msr_mask. I think we
should verify this for all MSR bits. There is no reason to ever have a
MSR bit set if the corresponding bit is not set in that CPU's
msr_mask.
Note that this is not about the emulated code setting reserved
bits. We clear the new_msr when starting to dispatch an exception, so
if we end up with bits not present in the msr_mask that is a QEMU
programming error.
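Conceptually the check becomes a one-line assertion (sketch; the real
code may print more context before aborting):

/* Any bit set in new_msr that the CPU's msr_mask does not allow is a
 * QEMU programming error, not something guest code can cause. */
assert((new_msr & ~env->msr_mask) == 0);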
I kept the HSRR verification for BookS because it is the only CPU
family that has HSRRs.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220207183036.1507882-4-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Make the cpu-specific powerpc_excp_* functions a bit simpler by moving
the bounds check and logging to powerpc_excp.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20220207183036.1507882-3-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Now that all CPU families have their own separate exception
dispatching code we can remove powerpc_excp_legacy.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20220207183036.1507882-2-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The 7xx CPUs don't have alternate/hypervisor Save and Restore
Registers, so we can set SRR0 and SRR1 directly.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220204173430.1457358-11-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
This code applies only to the 7xx CPUs, so we can remove the switch
statement.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220204173430.1457358-10-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
There is no HV support in the 7xx.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220204173430.1457358-9-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Remove the BookE code and add a comment explaining why we need to keep
hypercall support even though this CPU does not have a hypervisor
mode.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220204173430.1457358-8-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
There's no ESR in the 7xx.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220204173430.1457358-7-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
There is no MSR_HV in the 7xx so remove the LPES0 handling.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220204173430.1457358-6-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
There's no MSR_HV in the 7xx.
Also remove 40x and BookE code.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220204173430.1457358-5-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Differences from the generic powerpc_excp code:
- Not BookE, so some MSR bits are cleared at interrupt dispatch;
- No MSR_HV;
- No power saving states;
- No Hypervisor Emulation Assistance;
- Not 64 bits;
- No System call vectored;
- No Alternate Interrupt Location.
Exceptions used:
POWERPC_EXCP_ALIGN
POWERPC_EXCP_DECR
POWERPC_EXCP_DLTLB
POWERPC_EXCP_DSI
POWERPC_EXCP_DSTLB
POWERPC_EXCP_EXTERNAL
POWERPC_EXCP_FPU
POWERPC_EXCP_IABR
POWERPC_EXCP_IFTLB
POWERPC_EXCP_ISI
POWERPC_EXCP_MCHECK
POWERPC_EXCP_PERFM
POWERPC_EXCP_PROGRAM
POWERPC_EXCP_RESET
POWERPC_EXCP_SMI
POWERPC_EXCP_SYSCALL
POWERPC_EXCP_THERM
POWERPC_EXCP_TRACE
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220204173430.1457358-4-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Introduce a new powerpc_excp function specific for PowerPC 7xx CPUs
(740, 745, 750, 750cl, 750cx, 750fx, 750gx, 755). This commit copies
powerpc_excp_legacy verbatim so the next one has a clean diff.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220204173430.1457358-3-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Since we've split the exception code by exception model, the exception
model IDs are becoming less useful. These two can be merged.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220204173430.1457358-2-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The 6xx CPUs don't have alternate/hypervisor Save and Restore
Registers, so we can set SRR0 and SRR1 directly.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220203200957.1434641-12-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
This code applies only to the 6xx CPUs, so we can remove the switch
statement.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220203200957.1434641-11-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
There is no HV support in the 6xx.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220203200957.1434641-10-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
There is no Hypervisor mode in the 6xx CPUs.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220203200957.1434641-9-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
There's no ESR in the 6xx CPUs.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220203200957.1434641-8-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
There's no Hypervisor mode in the 6xx, so remove all LPES0 logic.
Also remove BookE IRQ code.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220203200957.1434641-7-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
There's no MSR_HV in the 6xx CPUs.
Also remove the 40x and BookE code.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220203200957.1434641-6-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
This only applies to the G2s; the other 6xx CPUs will not have this
vector registered.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220203200957.1434641-5-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Introduce a new powerpc_excp function specific for PowerPC 6xx CPUs
(603, 604, G2, MPC5xx, MPC8xx). This commit copies powerpc_excp_legacy
verbatim so the next one has a clean diff.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220203200957.1434641-3-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
We don't need three separate exception model IDs for the 603, 604 and
G2.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220203200957.1434641-2-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The PowerPC 601 processor is the first generation of processors to
implement the PowerPC architecture. It was designed as a bridge
processor and also could execute most of the instructions of the
previous POWER architecture. It was found on the first Macs and IBM
RS/6000 workstations.
There is not much interest in keeping the CPU model of this
POWER-PowerPC bridge processor. We have the 603 and 604 CPU models of
the 60x family which implement the complete PowerPC instruction set.
Cc: "Hervé Poussineau" <hpoussin@reactos.org>
Cc: Laurent Vivier <laurent@vivier.eu>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220203142756.1302515-1-clg@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
ppc_radix64_partition_scoped_xlate() logs the host page protection
bits variable, but at that point it is uninitialized; the value is only
set later in ppc_radix64_check_prot(). Remove the output.
Fixes: Coverity CID 1468942
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Greg Kurz <groug@kaod.org>
Message-Id: <20220203142145.1301749-1-clg@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
There is no MSR_HV in BookE, so remove all of the HV logic.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220128224018.1228062-12-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Remove the switch as this function applies to BookE only.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220128224018.1228062-11-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
QEMU does not support BookE as a hypervisor.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220128224018.1228062-10-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
BookE has no DSISR or DAR. The proper registers ESR and DEAR were
already set at this point.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220128224018.1228062-9-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
There is no LPES0 in BookE and no MSR_HV.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220128224018.1228062-8-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The SRR1 should be set to the MSR value. There are no diagnostic bits
in the SRR1 for BookE.
Note that this fixes a bug where MSR_GS would be set and Linux would
go into KVM code when there's no KVM guest.
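In other words, the dispatch code should simply save the raw MSR
(sketch; the SPR index variable comes from the surrounding function):

/* BookE: SRR1 gets the unmodified MSR; no diagnostic bits are ORed in
 * as on other families. */
env->spr[srr1] = msr;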
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220128224018.1228062-7-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
There is no DSISR or DAR in BookE. Change to ESR and DEAR.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220128224018.1228062-6-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
There's no MSR_HV in BookE.
Also remove 40x code.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220128224018.1228062-5-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Introduce a new powerpc_excp function specific for BookE CPUs. This
commit copies powerpc_excp_legacy verbatim so the next one has a clean
diff.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220128224018.1228062-2-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
This CPU was partially removed due to lack of support in 2017 by commit
aef7796057 ("ppc: remove non implemented cpu models").
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20220128221611.1221715-1-farosas@linux.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The recently introduced debug tests in kvm-unit-tests exposed an error
in our handling of singlestep caused by stale hflags. This is caught by
--enable-debug-tcg when running the tests.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reported-by: Andrew Jones <drjones@redhat.com>
Tested-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220202122353.457084-1-alex.bennee@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The SMCCC 1.3 spec section 5.2 says
The Unknown SMC Function Identifier is a sign-extended value of (-1)
that is returned in the R0, W0 or X0 registers. An implementation must
return this error code when it receives:
* An SMC or HVC call with an unknown Function Identifier
* An SMC or HVC call for a removed Function Identifier
* An SMC64/HVC64 call from AArch32 state
To comply with these statements, let's always return -1 when we encounter
an unknown HVC or SMC call.
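In QEMU terms the fallback is simply (sketch for the AArch64 case; the
AArch32 path writes regs[0] instead):

/* Unknown SMC/HVC Function Identifier: return -1 per SMCCC 1.3
 * section 5.2. */
env->xregs[0] = -1;
return;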
[PMM:
This is a reinstatement of commit 9fcd15b919, previously
reverted in commit 4825eaae4fdd56fba0f; we can do this now that we
have arranged for all the affected board models to not enable the
PSCI emulation if they are running guest code at EL3. This avoids
the regressions that caused us to revert the change for 7.0.]
Signed-off-by: Alexander Graf <agraf@csgraf.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Niek Linnenbank <nieklinnenbank@gmail.com>
Tested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Tested-by: Cédric Le Goater <clg@kaod.org>
Tested-by: Niek Linnenbank <nieklinnenbank@gmail.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We want to allow the psci-conduit property to be set after realize,
because the parts of the code which are best placed to decide whether
it's OK to enable QEMU's builtin PSCI emulation (the board code and the
arm_load_kernel() function) are distant from the code which creates and
realizes CPUs (typically inside an SoC object's init and realize
methods) and run afterwards.
Since the DEFINE_PROP_* macros don't have support for creating
properties which can be changed after realize, change the property to
be created with object_property_add_uint32_ptr(), which is what we
already use in this function for creating settable-after-realize
properties like init-svtor and init-nsvtor.
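A sketch of the resulting call (object_property_add_uint32_ptr is the
existing QOM API; the exact flags passed here are an assumption):

object_property_add_uint32_ptr(obj, "psci-conduit",
                               &cpu->psci_conduit,
                               OBJ_PROP_FLAG_READWRITE);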
Note that it doesn't conceptually make sense to change the setting of
the property after the machine has been completely initialized, because
this would mean that the behaviour of the machine when first started
would differ from its behaviour when the system is subsequently reset.
(It would also require the underlying state to be migrated, which we
don't do.)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Tested-by: Cédric Le Goater <clg@kaod.org>
Message-id: 20220127154639.2090164-2-peter.maydell@linaro.org
Use the named bit rather than a bare extract32.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Zenghui Yu <yuzenghui@huawei.com>
Message-id: 20220127063428.30212-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
When HCR_EL2.E2H is set, the format of CPTR_EL2 changes to
look more like CPACR_EL1, with ZEN and FPEN fields instead
of TZ and TFP fields.
Reported-by: Zenghui Yu <yuzenghui@huawei.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220127063428.30212-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Extract entire fields for ZEN and FPEN, rather than testing specific bits.
This makes it easier to follow the code versus the ARM spec.
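For example, the FPEN handling can read the whole two-bit field and
dispatch on its value (sketch; the FIELD definition name is an
assumption in QEMU's usual FIELD() style):

bool trap_el0 = false, trap_el1 = false;

switch (FIELD_EX64(env->cp15.cpacr_el1, CPACR_EL1, FPEN)) {
case 0:
case 2:
    trap_el0 = trap_el1 = true;   /* trap at EL0 and EL1 */
    break;
case 1:
    trap_el0 = true;              /* trap at EL0 only */
    break;
case 3:
    break;                        /* no trap */
}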
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Zenghui Yu <yuzenghui@huawei.com>
Message-id: 20220127063428.30212-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Merge remote-tracking branch 'remotes/hdeller/tags/hppa-updates-pull-request' into staging
Fixes and updates for hppa target
This patchset fixes some important bugs in the hppa artist graphics driver:
- Fix artist graphics for HP-UX and Linux
- Mouse cursor fixes for HP-UX
- Fix draw_line() function on artist graphic
and it adds new qemu features for hppa:
- Allow up to 16 emulated CPUs (instead of 8)
- Add support for an emulated TOC/NMI button
A new Seabios-hppa firmware is included as well:
- Update SeaBIOS-hppa to VERSION 3
- New opt/hostid fw_cfg option to change hostid
- Add opt/console fw_cfg option to select default console
- Added 16x32 font to STI firmware
Signed-off-by: Helge Deller <deller@gmx.de>
# gpg: Signature made Wed 02 Feb 2022 18:08:34 GMT
# gpg: using EDDSA key BCE9123E1AD29F07C049BBDEF712B510A23A0F5F
# gpg: Good signature from "Helge Deller <deller@gmx.de>" [unknown]
# gpg: aka "Helge Deller <deller@kernel.org>" [unknown]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 4544 8228 2CD9 10DB EF3D 25F8 3E5F 3D04 A7A2 4603
# Subkey fingerprint: BCE9 123E 1AD2 9F07 C049 BBDE F712 B510 A23A 0F5F
* remotes/hdeller/tags/hppa-updates-pull-request:
hw/display/artist: Fix draw_line() artefacts
hw/display/artist: Mouse cursor fixes for HP-UX
hw/display/artist: rewrite vram access mode handling
hppa: Add support for an emulated TOC/NMI button.
hw/hppa: Allow up to 16 emulated CPUs
seabios-hppa: Update SeaBIOS-hppa to VERSION 3
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Almost all PA-RISC machines have either a button that is labeled with 'TOC' or
a BMC/GSP function to trigger a TOC. TOC is a non-maskable interrupt that is
sent to the processor. This can be used for diagnostic purposes like obtaining
a stack trace/register dump or to enter KDB/KGDB in Linux.
This patch adds support for such an emulated TOC button.
It wires up the qemu monitor "nmi" command to trigger a TOC. For that it
provides the hppa_nmi function which is assigned to the nmi_monitor_handler
function pointer. When called it raises the EXCP_TOC hardware interrupt in the
hppa_cpu_do_interrupt() function. The interrupt function then calls the
architecturally defined TOC function in SeaBIOS-hppa firmware (at fixed address
0xf0000000).
According to the PA-RISC PDC specification, the SeaBIOS firmware then
writes the CPU registers into PIM (processor internal memory) for later
analysis. In order to write all registers, it needs to know the
contents of the CPU "shadow registers" and the IASQ- and IAOQ-back
values. The IAOQ/IASQ values are provided by qemu in shadow registers
when entering the SeaBIOS TOC function.
This patch adds a new artificial opcode "getshadowregs" (0xfffdead2)
which restores the original values of the shadow registers. With this
opcode SeaBIOS can store those registers as well into PIM before
calling an OS-provided TOC handler.
To trigger a TOC, switch to the qemu monitor with Ctrl-A C and type in
the command "nmi". After the TOC has started the OS debugger, exit the
qemu monitor with Ctrl-A C.
Signed-off-by: Helge Deller <deller@gmx.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Merge remote-tracking branch 'remotes/quintela-gitlab/tags/migration-20220128-pull-request' into staging
Migration Pull request (Take 2)
Hi
This time I have disabled vmstate canary patches from Dave Gilbert.
Let's see if it works.
Later, Juan.
# gpg: Signature made Fri 28 Jan 2022 18:30:25 GMT
# gpg: using RSA key 1899FF8EDEBF58CCEE034B82F487EF185872D723
# gpg: Good signature from "Juan Quintela <quintela@redhat.com>" [full]
# gpg: aka "Juan Quintela <quintela@trasno.org>" [full]
# Primary key fingerprint: 1899 FF8E DEBF 58CC EE03 4B82 F487 EF18 5872 D723
* remotes/quintela-gitlab/tags/migration-20220128-pull-request: (36 commits)
migration: Move temp page setup and cleanup into separate functions
migration: Simplify unqueue_page()
migration: Add postcopy_has_request()
migration: Enable UFFD_FEATURE_THREAD_ID even without blocktime feat
migration: No off-by-one for pss->page update in host page size
migration: Tally pre-copy, downtime and post-copy bytes independently
migration: Introduce ram_transferred_add()
migration: Don't return for postcopy_send_discard_bm_ram()
migration: Drop return code for disgard ram process
migration: Do chunk page in postcopy_each_ram_send_discard()
migration: Drop postcopy_chunk_hostpages()
migration: Don't return for postcopy_chunk_hostpages()
migration: Drop dead code of ram_debug_dump_bitmap()
migration/ram: clean up unused comment.
migration: Report the error returned when save_live_iterate fails
migration/migration.c: Remove the MIGRATION_STATUS_ACTIVE when migration finished
migration/migration.c: Avoid COLO boot in postcopy migration
migration/migration.c: Add missed default error handler for migration state
Remove unnecessary minimum_version_id_old fields
multifd: Rename pages_used to normal_pages
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The 602 was derived from the PowerPC 603, seemingly for the gaming
market. It was hardly used, and no firmware supporting the CPU could be
found. Drop support.
Signed-off-by: Cédric Le Goater <clg@kaod.org>
The migration code will not look at a VMStateDescription's
minimum_version_id_old field unless that VMSD has set the
load_state_old field to something non-NULL. (The purpose of
minimum_version_id_old is to specify what migration version is needed
for the code in the function pointed to by load_state_old to be able
to handle it on incoming migration.)
We have exactly one VMSD which still has a load_state_old,
in the PPC CPU; every other VMSD which sets minimum_version_id_old
is doing so unnecessarily. Delete all the unnecessary ones.
Commit created with:
sed -i '/\.minimum_version_id_old/d' $(git grep -l '\.minimum_version_id_old')
with the one legitimate use then hand-edited back in.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
It missed vmstate_ppc_cpu.
The exception caused by an SVC instruction may be taken to AArch32
Hyp mode for two reasons:
* HCR.TGE indicates that exceptions from EL0 should trap to EL2
* we were already in Hyp mode
The entrypoint in the vector table to be used differs in these two
cases: for an exception routed to Hyp mode from EL0, we enter at the
common 0x14 "hyp trap" entrypoint. For SVC from Hyp mode to Hyp
mode, we enter at the 0x08 (svc/hvc trap) entrypoint.
In the v8A Arm ARM pseudocode this is done in AArch32.TakeSVCException.
QEMU incorrectly routed both of these exceptions to the 0x14
entrypoint. Correct the entrypoint for SVC from Hyp to Hyp by making
use of the existing logic which handles "normal entrypoint for
Hyp-to-Hyp, otherwise 0x14" for traps like UNDEF and data/prefetch
aborts (reproduced here since it's outside the visible context
in the diff for this commit):
if (arm_current_el(env) != 2 && addr < 0x14) {
addr = 0x14;
}
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220117131953.3936137-1-peter.maydell@linaro.org
In an SMP system it can be unclear which CPU is taking an exception;
add the CPU index (which is the same value used in the TCG 'Trace
%d:' logging) to the "Taking exception" log line to clarify it.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220122182444.724087-2-peter.maydell@linaro.org