Merge tag 'pull-target-arm-20240531' of https://git.linaro.org/people/pmaydell/qemu-arm into staging

target-arm:
 * hw/intc/arm_gic: Fix set pending of PPIs
 * hw/intc/arm_gic: Fix writes to GICD_ITARGETSRn
 * xilinx_zynq: Add cache controller
 * xilinx_zynq: Support up to two CPU cores
 * tests/avocado: update sbsa-ref firmware
 * sbsa-ref: move to Neoverse-N2 as default
 * More decodetree conversion of A64 ASIMD insns
 * docs/system/target-arm: Re-alphabetize board list
 * Implement FEAT WFxT and enable for '-cpu max'
 * hw/usb/hcd-ohci: Fix #1510, #303: pid not IN or OUT

# -----BEGIN PGP SIGNATURE-----
#
# iQJNBAABCAA3FiEE4aXFk81BneKOgxXPPCUl7RQ2DN4FAmZZvHgZHHBldGVyLm1h
# eWRlbGxAbGluYXJvLm9yZwAKCRA8JSXtFDYM3uArEACZgk0hqKtRcEzwdJi7w7ax
# ta/Iyl7AA+ngmh0qcE8QX8rzZhcGcKhsaQ8dNESMIBqVi1fS0hmNrIUWhXqmvNmZ
# 07WJvQx7Ki9YNX02frjkRZTwWozsbW8uoaXgnngFK93PNh/IoQBRP5T/LIZ5t3d7
# 7I/O/tnS/LZrL6wtP4EbRIEvZ4dfJe3X+uSCHSF8iOYrJLrZCsy/ItJqzY6Y0f96
# iUoOfXjrYH2hM9VkJGHIGy1r9nYRkCxXREQh7ahw/z6mv0nIB1YTS1eR0dH9D1yM
# afdby8iPN7k+f3en+2dHfyPjani4vPd1/k9mgLnQtVLOHrdw2APs1Q59YwYhunhe
# ZC0Fcp6jBSkcI6LHRY0bRtY0U3SBPrfkSD5sJrNH1obnsSvizeSU3uCq1QmKRCRY
# FuARmE77ywY8CURiqfwPSrC/ecSnamueIQNKNPZVQ5ve3dbokp/Gr1eJgcq80ovK
# wIKmNhJq60qBcj2zQ1aw1PP3+zvbZ/rl2j0abGbxBH3Kkp9AvALDiLRMciazVWph
# vbx7e1Y90Zrs3ap1AAUFUyWexYPNvZWmSGOaWv6Wdt+1Yf/YDW9wrwjVd3eRG9rM
# vgNMrccysBUNDpS4s0KSbqLy9AsjqAa41SiKipWFBekUyQFboNpTNfDNCspIPj9m
# dnI4fyXkVmSCYFiW2akmjg==
# =Jy5P
# -----END PGP SIGNATURE-----
# gpg: Signature made Fri 31 May 2024 05:03:04 AM PDT
# gpg:                using RSA key E1A5C593CD419DE28E8315CF3C2525ED14360CDE
# gpg:                issuer "peter.maydell@linaro.org"
# gpg: Good signature from "Peter Maydell <peter.maydell@linaro.org>" [full]
# gpg:                 aka "Peter Maydell <pmaydell@gmail.com>" [full]
# gpg:                 aka "Peter Maydell <pmaydell@chiark.greenend.org.uk>" [full]
# gpg:                 aka "Peter Maydell <peter@archaic.org.uk>" [unknown]

* tag 'pull-target-arm-20240531' of https://git.linaro.org/people/pmaydell/qemu-arm: (43 commits)
  hw/usb/hcd-ohci: Fix #1510, #303: pid not IN or OUT
  target/arm: Implement FEAT WFxT and enable for '-cpu max'
  accel/tcg: Make TCGCPUOps::cpu_exec_halt return bool for whether to halt
  docs/system/target-arm: Re-alphabetize board list
  target/arm: Disable SVE extensions when SVE is disabled
  target/arm: Convert FCSEL to decodetree
  target/arm: Convert FMADD, FMSUB, FNMADD, FNMSUB to decodetree
  target/arm: Convert SQDMULH, SQRDMULH to decodetree
  target/arm: Tidy SQDMULH, SQRDMULH (vector)
  target/arm: Convert MLA, MLS to decodetree
  target/arm: Convert MUL, PMUL to decodetree
  target/arm: Convert SABA, SABD, UABA, UABD to decodetree
  target/arm: Convert SMAX, SMIN, UMAX, UMIN to decodetree
  target/arm: Convert SRHADD, URHADD to decodetree
  target/arm: Convert SRHADD, URHADD to gvec
  target/arm: Convert SHSUB, UHSUB to decodetree
  target/arm: Convert SHSUB, UHSUB to gvec
  target/arm: Convert SHADD, UHADD to decodetree
  target/arm: Convert SHADD, UHADD to gvec
  target/arm: Use TCG_COND_TSTNE in gen_cmtst_vec
  ...

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
commit 74abb45dac
Author: Richard Henderson <richard.henderson@linaro.org>
Date:   2024-05-31 11:10:10 -07:00

33 changed files with 2034 additions and 1532 deletions


@@ -682,11 +682,14 @@ static inline bool cpu_handle_halt(CPUState *cpu)
#ifndef CONFIG_USER_ONLY
if (cpu->halted) {
const TCGCPUOps *tcg_ops = cpu->cc->tcg_ops;
bool leave_halt;
if (tcg_ops->cpu_exec_halt) {
tcg_ops->cpu_exec_halt(cpu);
leave_halt = tcg_ops->cpu_exec_halt(cpu);
} else {
leave_halt = cpu_has_work(cpu);
}
if (!cpu_has_work(cpu)) {
if (!leave_halt) {
return true;
}
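The hunk above changes the cpu_exec_halt contract from a void hook into one
that returns whether the CPU should leave halt, falling back to
cpu_has_work() when the hook is absent. As a minimal sketch (hypothetical
"foo" target, not part of this series), a hook that wants only the default
behaviour would be:

/* Hypothetical target hook: no special halt processing needed. */
static bool foo_cpu_exec_halt(CPUState *cs)
{
    /* Leave halt exactly when there is runnable work. */
    return cpu_has_work(cs);
}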


@@ -146,6 +146,7 @@ the following architecture extensions:
- FEAT_UAO (Unprivileged Access Override control)
- FEAT_VHE (Virtualization Host Extensions)
- FEAT_VMID16 (16-bit VMID)
- FEAT_WFxT (WFE and WFI instructions with timeout)
- FEAT_XNX (Translation table stage 2 Unprivileged Execute-never)
For information on the specifics of these extensions, please refer


@@ -86,16 +86,16 @@ undocumented; you can get a complete list by running
arm/bananapi_m2u.rst
arm/b-l475e-iot01a.rst
arm/sabrelite
arm/highbank
arm/digic
arm/cubieboard
arm/emcraft-sf2
arm/highbank
arm/musicpal
arm/gumstix
arm/mainstone
arm/kzm
arm/nrf
arm/nseries
arm/nrf
arm/nuvoton
arm/imx25-pdk
arm/orangepi
@@ -107,8 +107,8 @@ undocumented; you can get a complete list by running
arm/stellaris
arm/stm32
arm/virt
arm/xlnx-versal-virt
arm/xenpvh
arm/xlnx-versal-virt
Emulated CPU architecture support
=================================


@@ -370,6 +370,7 @@ config ZYNQ
select A9MPCORE
select CADENCE # UART
select PFLASH_CFI02
select PL310 # cache controller
select PL330
select SDHCI
select SSI_M25P80


@@ -891,7 +891,7 @@ static void sbsa_ref_class_init(ObjectClass *oc, void *data)
mc->init = sbsa_ref_init;
mc->desc = "QEMU 'SBSA Reference' ARM Virtual Machine";
mc->default_cpu_type = ARM_CPU_TYPE_NAME("neoverse-n1");
mc->default_cpu_type = ARM_CPU_TYPE_NAME("neoverse-n2");
mc->valid_cpu_types = valid_cpu_types;
mc->max_cpus = 512;
mc->pci_allow_0_address = true;


@@ -84,9 +84,12 @@ static const int dma_irqs[8] = {
0xe3401000 + ARMV7_IMM16(extract32((val), 16, 16)), /* movt r1 ... */ \
0xe5801000 + (addr)
#define ZYNQ_MAX_CPUS 2
struct ZynqMachineState {
MachineState parent;
Clock *ps_clk;
ARMCPU *cpu[ZYNQ_MAX_CPUS];
};
static void zynq_write_board_setup(ARMCPU *cpu,
@@ -176,13 +179,13 @@ static inline int zynq_init_spi_flashes(uint32_t base_addr, qemu_irq irq,
static void zynq_init(MachineState *machine)
{
ZynqMachineState *zynq_machine = ZYNQ_MACHINE(machine);
ARMCPU *cpu;
MemoryRegion *address_space_mem = get_system_memory();
MemoryRegion *ocm_ram = g_new(MemoryRegion, 1);
DeviceState *dev, *slcr;
SysBusDevice *busdev;
qemu_irq pic[64];
int n;
unsigned int smp_cpus = machine->smp.cpus;
/* max 2GB ram */
if (machine->ram_size > 2 * GiB) {
@@ -190,22 +193,27 @@ static void zynq_init(MachineState *machine)
exit(EXIT_FAILURE);
}
cpu = ARM_CPU(object_new(machine->cpu_type));
for (n = 0; n < smp_cpus; n++) {
Object *cpuobj = object_new(machine->cpu_type);
/* By default A9 CPUs have EL3 enabled. This board does not
* currently support EL3 so the CPU EL3 property is disabled before
* realization.
*/
if (object_property_find(OBJECT(cpu), "has_el3")) {
object_property_set_bool(OBJECT(cpu), "has_el3", false, &error_fatal);
/*
* By default A9 CPUs have EL3 enabled. This board does not currently
* support EL3 so the CPU EL3 property is disabled before realization.
*/
if (object_property_find(cpuobj, "has_el3")) {
object_property_set_bool(cpuobj, "has_el3", false, &error_fatal);
}
object_property_set_int(cpuobj, "midr", ZYNQ_BOARD_MIDR,
&error_fatal);
object_property_set_int(cpuobj, "reset-cbar", MPCORE_PERIPHBASE,
&error_fatal);
qdev_realize(DEVICE(cpuobj), NULL, &error_fatal);
zynq_machine->cpu[n] = ARM_CPU(cpuobj);
}
object_property_set_int(OBJECT(cpu), "midr", ZYNQ_BOARD_MIDR,
&error_fatal);
object_property_set_int(OBJECT(cpu), "reset-cbar", MPCORE_PERIPHBASE,
&error_fatal);
qdev_realize(DEVICE(cpu), NULL, &error_fatal);
/* DDR remapped to address zero. */
memory_region_add_subregion(address_space_mem, 0, machine->ram);
@@ -237,14 +245,19 @@ static void zynq_init(MachineState *machine)
sysbus_mmio_map(SYS_BUS_DEVICE(slcr), 0, 0xF8000000);
dev = qdev_new(TYPE_A9MPCORE_PRIV);
qdev_prop_set_uint32(dev, "num-cpu", 1);
qdev_prop_set_uint32(dev, "num-cpu", smp_cpus);
busdev = SYS_BUS_DEVICE(dev);
sysbus_realize_and_unref(busdev, &error_fatal);
sysbus_mmio_map(busdev, 0, MPCORE_PERIPHBASE);
sysbus_connect_irq(busdev, 0,
qdev_get_gpio_in(DEVICE(cpu), ARM_CPU_IRQ));
sysbus_connect_irq(busdev, 1,
qdev_get_gpio_in(DEVICE(cpu), ARM_CPU_FIQ));
zynq_binfo.gic_cpu_if_addr = MPCORE_PERIPHBASE + 0x100;
sysbus_create_varargs("l2x0", MPCORE_PERIPHBASE + 0x2000, NULL);
for (n = 0; n < smp_cpus; n++) {
DeviceState *cpudev = DEVICE(zynq_machine->cpu[n]);
sysbus_connect_irq(busdev, (2 * n) + 0,
qdev_get_gpio_in(cpudev, ARM_CPU_IRQ));
sysbus_connect_irq(busdev, (2 * n) + 1,
qdev_get_gpio_in(cpudev, ARM_CPU_FIQ));
}
for (n = 0; n < 64; n++) {
pic[n] = qdev_get_gpio_in(dev, n);
@@ -349,7 +362,7 @@ static void zynq_init(MachineState *machine)
zynq_binfo.board_setup_addr = BOARD_SETUP_ADDR;
zynq_binfo.write_board_setup = zynq_write_board_setup;
arm_load_kernel(cpu, machine, &zynq_binfo);
arm_load_kernel(zynq_machine->cpu[0], machine, &zynq_binfo);
}
static void zynq_machine_class_init(ObjectClass *oc, void *data)
@@ -361,7 +374,7 @@ static void zynq_machine_class_init(ObjectClass *oc, void *data)
MachineClass *mc = MACHINE_CLASS(oc);
mc->desc = "Xilinx Zynq Platform Baseboard for Cortex-A9";
mc->init = zynq_init;
mc->max_cpus = 1;
mc->max_cpus = ZYNQ_MAX_CPUS;
mc->no_sdcard = 1;
mc->ignore_memory_transaction_failures = true;
mc->valid_cpu_types = valid_cpu_types;


@@ -1308,12 +1308,15 @@ static void gic_dist_writeb(void *opaque, hwaddr offset,
for (i = 0; i < 8; i++) {
if (value & (1 << i)) {
int mask = (irq < GIC_INTERNAL) ? (1 << cpu)
: GIC_DIST_TARGET(irq + i);
if (s->security_extn && !attrs.secure &&
!GIC_DIST_TEST_GROUP(irq + i, 1 << cpu)) {
continue; /* Ignore Non-secure access of Group0 IRQ */
}
GIC_DIST_SET_PENDING(irq + i, GIC_DIST_TARGET(irq + i));
GIC_DIST_SET_PENDING(irq + i, mask);
}
}
} else if (offset < 0x300) {
@@ -1407,6 +1410,13 @@ static void gic_dist_writeb(void *opaque, hwaddr offset,
value = ALL_CPU_MASK;
}
s->irq_target[irq] = value & ALL_CPU_MASK;
if (irq >= GIC_INTERNAL && s->irq_state[irq].pending) {
/*
* Changing the target of an interrupt that is currently
* pending updates the set of CPUs it is pending on.
*/
s->irq_state[irq].pending = value & ALL_CPU_MASK;
}
}
} else if (offset < 0xf00) {
/* Interrupt Configuration. */
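With this fix, retargeting an interrupt that is already pending also moves
its pending state. A worked example with hypothetical values: SPI 42 is
pending and targeted at CPU0 (mask 0x1); the guest writes 0x2 (CPU1) to its
GICD_ITARGETSRn byte; afterwards both masks read back as 0x2, so the
interrupt is delivered to CPU1 instead of staying latched for CPU0:

/* Illustrative state transition (hypothetical values, not patch code):
 *   s->irq_target[42]:        0x1 -> 0x2
 *   s->irq_state[42].pending: 0x1 -> 0x2  (updated by the new hunk above)
 */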


@@ -927,6 +927,11 @@ static int ohci_service_td(OHCIState *ohci, struct ohci_ed *ed)
case OHCI_TD_DIR_SETUP:
str = "setup";
pid = USB_TOKEN_SETUP;
if (OHCI_BM(ed->flags, ED_EN) > 0) { /* setup only allowed to ep 0 */
trace_usb_ohci_td_bad_pid(str, ed->flags, td.flags);
ohci_die(ohci);
return 1;
}
break;
default:
trace_usb_ohci_td_bad_direction(dir);


@@ -28,6 +28,7 @@ usb_ohci_iso_td_data_overrun(int ret, ssize_t len) "DataOverrun %d > %zu"
usb_ohci_iso_td_data_underrun(int ret) "DataUnderrun %d"
usb_ohci_iso_td_nak(int ret) "got NAK/STALL %d"
usb_ohci_iso_td_bad_response(int ret) "Bad device response %d"
usb_ohci_td_bad_pid(const char *s, uint32_t edf, uint32_t tdf) "Bad pid %s: ed.flags 0x%x td.flags 0x%x"
usb_ohci_port_attach(int index) "port #%d"
usb_ohci_port_detach(int index) "port #%d"
usb_ohci_port_wakeup(int index) "port #%d"


@@ -115,8 +115,19 @@ struct TCGCPUOps {
void (*do_interrupt)(CPUState *cpu);
/** @cpu_exec_interrupt: Callback for processing interrupts in cpu_exec */
bool (*cpu_exec_interrupt)(CPUState *cpu, int interrupt_request);
/** @cpu_exec_halt: Callback for handling halt in cpu_exec */
void (*cpu_exec_halt)(CPUState *cpu);
/**
* @cpu_exec_halt: Callback for handling halt in cpu_exec.
*
* The target CPU should do any special processing here that it needs
* to do when the CPU is in the halted state.
*
* Return true to indicate that the CPU should now leave halt, false
* if it should remain in the halted state.
*
* If this method is not provided, the default is to do nothing, and
* to leave halt if cpu_has_work() returns true.
*/
bool (*cpu_exec_halt)(CPUState *cpu);
/**
* @tlb_fill: Handle a softmmu tlb miss
*


@@ -571,6 +571,11 @@ static inline bool isar_feature_aa64_i8mm(const ARMISARegisters *id)
return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, I8MM) != 0;
}
static inline bool isar_feature_aa64_wfxt(const ARMISARegisters *id)
{
return FIELD_EX64(id->id_aa64isar2, ID_AA64ISAR2, WFXT) >= 2;
}
static inline bool isar_feature_aa64_hbc(const ARMISARegisters *id)
{
return FIELD_EX64(id->id_aa64isar2, ID_AA64ISAR2, BC) != 0;


@@ -1132,6 +1132,35 @@ static bool arm_cpu_virtio_is_big_endian(CPUState *cs)
return arm_cpu_data_is_big_endian(env);
}
#ifdef CONFIG_TCG
static bool arm_cpu_exec_halt(CPUState *cs)
{
bool leave_halt = cpu_has_work(cs);
if (leave_halt) {
/* We're about to come out of WFI/WFE: disable the WFxT timer */
ARMCPU *cpu = ARM_CPU(cs);
if (cpu->wfxt_timer) {
timer_del(cpu->wfxt_timer);
}
}
return leave_halt;
}
#endif
static void arm_wfxt_timer_cb(void *opaque)
{
ARMCPU *cpu = opaque;
CPUState *cs = CPU(cpu);
/*
* We expect the CPU to be halted; this will cause arm_cpu_has_work()
* to return true (so we will come out of halt even with no other
* pending interrupt), and the TCG accelerator's cpu_exec_interrupt()
* function auto-clears the CPU_INTERRUPT_EXITTB flag for us.
*/
cpu_interrupt(cs, CPU_INTERRUPT_EXITTB);
}
#endif
static void arm_disas_set_info(CPUState *cpu, disassemble_info *info)
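The timer callback above is the wakeup half of FEAT_WFxT; the arming half is
helper_wfit, whose declaration (DEF_HELPER_2(wfit, void, env, i64)) appears
in the helper.h hunk below but whose body is not in this dump. A hedged
sketch of what arming could look like, assuming the operand is an absolute
CNTVCT_EL0 deadline in counter ticks; the names and conversion details here
are illustrative, not the actual implementation:

/* Hypothetical sketch only: arm wfxt_timer for a WFIT deadline. */
void HELPER(wfit)(CPUARMState *env, uint64_t deadline)
{
    ARMCPU *cpu = env_archcpu(env);
    /* Current virtual count, as for a CNTVCT_EL0 read. */
    uint64_t now = gt_get_countervalue(env) - gt_virt_cnt_offset(env);

    if (deadline > now) {
        /* Convert the remaining ticks to ns and arm the per-CPU timer. */
        timer_mod(cpu->wfxt_timer,
                  qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) +
                  (deadline - now) * gt_cntfrq_period_ns(cpu));
    }
    /* ...then enter halt via the normal WFI path (checks/traps omitted). */
}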
@@ -1877,6 +1906,9 @@ static void arm_cpu_finalizefn(Object *obj)
if (cpu->pmu_timer) {
timer_free(cpu->pmu_timer);
}
if (cpu->wfxt_timer) {
timer_free(cpu->wfxt_timer);
}
#endif
}
@@ -2369,6 +2401,13 @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
#endif
}
#ifndef CONFIG_USER_ONLY
if (tcg_enabled() && cpu_isar_feature(aa64_wfxt, cpu)) {
cpu->wfxt_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL,
arm_wfxt_timer_cb, cpu);
}
#endif
if (tcg_enabled()) {
/*
* Don't report some architectural features in the ID registers
@@ -2625,6 +2664,7 @@ static const TCGCPUOps arm_tcg_ops = {
#else
.tlb_fill = arm_cpu_tlb_fill,
.cpu_exec_interrupt = arm_cpu_exec_interrupt,
.cpu_exec_halt = arm_cpu_exec_halt,
.do_interrupt = arm_cpu_do_interrupt,
.do_transaction_failed = arm_cpu_do_transaction_failed,
.do_unaligned_access = arm_cpu_do_unaligned_access,


@@ -866,6 +866,9 @@ struct ArchCPU {
* pmu_op_finish() - it does not need other handling during migration
*/
QEMUTimer *pmu_timer;
/* Timer used for WFxT timeouts */
QEMUTimer *wfxt_timer;
/* GPIO outputs for generic timer */
qemu_irq gt_timer_outputs[NUM_GTIMERS];
/* GPIO output for GICv3 maintenance interrupt signal */


@@ -109,7 +109,11 @@ void arm_cpu_sve_finalize(ARMCPU *cpu, Error **errp)
* No explicit bits enabled, and no implicit bits from sve-max-vq.
*/
if (!cpu_isar_feature(aa64_sve, cpu)) {
/* SVE is disabled and so are all vector lengths. Good. */
/*
* SVE is disabled and so are all vector lengths. Good.
* Disable all SVE extensions as well.
*/
cpu->isar.id_aa64zfr0 = 0;
return;
}


@@ -2665,7 +2665,7 @@ static CPAccessResult gt_stimer_access(CPUARMState *env,
}
}
static uint64_t gt_get_countervalue(CPUARMState *env)
uint64_t gt_get_countervalue(CPUARMState *env)
{
ARMCPU *cpu = env_archcpu(env);
@@ -2800,7 +2800,7 @@ static uint64_t gt_cnt_read(CPUARMState *env, const ARMCPRegInfo *ri)
return gt_get_countervalue(env) - gt_phys_cnt_offset(env);
}
static uint64_t gt_virt_cnt_offset(CPUARMState *env)
uint64_t gt_virt_cnt_offset(CPUARMState *env)
{
uint64_t hcr;


@@ -53,6 +53,7 @@ DEF_HELPER_2(exception_pc_alignment, noreturn, env, tl)
DEF_HELPER_1(setend, void, env)
DEF_HELPER_2(wfi, void, env, i32)
DEF_HELPER_1(wfe, void, env)
DEF_HELPER_2(wfit, void, env, i64)
DEF_HELPER_1(yield, void, env)
DEF_HELPER_1(pre_hvc, void, env)
DEF_HELPER_2(pre_smc, void, env, i32)
@@ -268,50 +269,6 @@ DEF_HELPER_FLAGS_2(fjcvtzs, TCG_CALL_NO_RWG, i64, f64, ptr)
DEF_HELPER_FLAGS_3(check_hcr_el2_trap, TCG_CALL_NO_WG, void, env, i32, i32)
/* neon_helper.c */
DEF_HELPER_FLAGS_3(neon_qadd_u8, TCG_CALL_NO_RWG, i32, env, i32, i32)
DEF_HELPER_FLAGS_3(neon_qadd_s8, TCG_CALL_NO_RWG, i32, env, i32, i32)
DEF_HELPER_FLAGS_3(neon_qadd_u16, TCG_CALL_NO_RWG, i32, env, i32, i32)
DEF_HELPER_FLAGS_3(neon_qadd_s16, TCG_CALL_NO_RWG, i32, env, i32, i32)
DEF_HELPER_FLAGS_3(neon_qadd_u32, TCG_CALL_NO_RWG, i32, env, i32, i32)
DEF_HELPER_FLAGS_3(neon_qadd_s32, TCG_CALL_NO_RWG, i32, env, i32, i32)
DEF_HELPER_FLAGS_3(neon_uqadd_s8, TCG_CALL_NO_RWG, i32, env, i32, i32)
DEF_HELPER_FLAGS_3(neon_uqadd_s16, TCG_CALL_NO_RWG, i32, env, i32, i32)
DEF_HELPER_FLAGS_3(neon_uqadd_s32, TCG_CALL_NO_RWG, i32, env, i32, i32)
DEF_HELPER_FLAGS_3(neon_uqadd_s64, TCG_CALL_NO_RWG, i64, env, i64, i64)
DEF_HELPER_FLAGS_3(neon_sqadd_u8, TCG_CALL_NO_RWG, i32, env, i32, i32)
DEF_HELPER_FLAGS_3(neon_sqadd_u16, TCG_CALL_NO_RWG, i32, env, i32, i32)
DEF_HELPER_FLAGS_3(neon_sqadd_u32, TCG_CALL_NO_RWG, i32, env, i32, i32)
DEF_HELPER_FLAGS_3(neon_sqadd_u64, TCG_CALL_NO_RWG, i64, env, i64, i64)
DEF_HELPER_3(neon_qsub_u8, i32, env, i32, i32)
DEF_HELPER_3(neon_qsub_s8, i32, env, i32, i32)
DEF_HELPER_3(neon_qsub_u16, i32, env, i32, i32)
DEF_HELPER_3(neon_qsub_s16, i32, env, i32, i32)
DEF_HELPER_3(neon_qsub_u32, i32, env, i32, i32)
DEF_HELPER_3(neon_qsub_s32, i32, env, i32, i32)
DEF_HELPER_3(neon_qadd_u64, i64, env, i64, i64)
DEF_HELPER_3(neon_qadd_s64, i64, env, i64, i64)
DEF_HELPER_3(neon_qsub_u64, i64, env, i64, i64)
DEF_HELPER_3(neon_qsub_s64, i64, env, i64, i64)
DEF_HELPER_2(neon_hadd_s8, i32, i32, i32)
DEF_HELPER_2(neon_hadd_u8, i32, i32, i32)
DEF_HELPER_2(neon_hadd_s16, i32, i32, i32)
DEF_HELPER_2(neon_hadd_u16, i32, i32, i32)
DEF_HELPER_2(neon_hadd_s32, s32, s32, s32)
DEF_HELPER_2(neon_hadd_u32, i32, i32, i32)
DEF_HELPER_2(neon_rhadd_s8, i32, i32, i32)
DEF_HELPER_2(neon_rhadd_u8, i32, i32, i32)
DEF_HELPER_2(neon_rhadd_s16, i32, i32, i32)
DEF_HELPER_2(neon_rhadd_u16, i32, i32, i32)
DEF_HELPER_2(neon_rhadd_s32, s32, s32, s32)
DEF_HELPER_2(neon_rhadd_u32, i32, i32, i32)
DEF_HELPER_2(neon_hsub_s8, i32, i32, i32)
DEF_HELPER_2(neon_hsub_u8, i32, i32, i32)
DEF_HELPER_2(neon_hsub_s16, i32, i32, i32)
DEF_HELPER_2(neon_hsub_u16, i32, i32, i32)
DEF_HELPER_2(neon_hsub_s32, s32, s32, s32)
DEF_HELPER_2(neon_hsub_u32, i32, i32, i32)
DEF_HELPER_2(neon_pmin_u8, i32, i32, i32)
DEF_HELPER_2(neon_pmin_s8, i32, i32, i32)
DEF_HELPER_2(neon_pmin_u16, i32, i32, i32)
@@ -351,6 +308,32 @@ DEF_HELPER_3(neon_qrshl_u32, i32, env, i32, i32)
DEF_HELPER_3(neon_qrshl_s32, i32, env, i32, i32)
DEF_HELPER_3(neon_qrshl_u64, i64, env, i64, i64)
DEF_HELPER_3(neon_qrshl_s64, i64, env, i64, i64)
DEF_HELPER_FLAGS_5(neon_sqshl_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(neon_sqshl_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(neon_sqshl_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(neon_sqshl_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(neon_uqshl_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(neon_uqshl_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(neon_uqshl_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(neon_uqshl_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(neon_sqrshl_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(neon_sqrshl_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(neon_sqrshl_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(neon_sqrshl_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(neon_uqrshl_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(neon_uqrshl_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(neon_uqrshl_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(neon_uqrshl_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_srshl_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_srshl_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_srshl_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_srshl_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_urshl_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_urshl_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_urshl_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_urshl_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_2(neon_add_u8, i32, i32, i32)
DEF_HELPER_2(neon_add_u16, i32, i32, i32)
@@ -836,6 +819,22 @@ DEF_HELPER_FLAGS_5(gvec_sqsub_s, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_sqsub_d, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_usqadd_b, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_usqadd_h, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_usqadd_s, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_usqadd_d, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_suqadd_b, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_suqadd_h, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_suqadd_s, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_suqadd_d, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(gvec_fmlal_a32, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
@@ -970,6 +969,16 @@ DEF_HELPER_FLAGS_5(neon_sqrdmulh_h, TCG_CALL_NO_RWG,
DEF_HELPER_FLAGS_5(neon_sqrdmulh_s, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(neon_sqdmulh_idx_h, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(neon_sqdmulh_idx_s, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(neon_sqrdmulh_idx_h, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(neon_sqrdmulh_idx_s, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(sve2_sqdmulh_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(sve2_sqdmulh_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(sve2_sqdmulh_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)


@@ -1770,4 +1770,12 @@ bool check_watchpoint_in_range(int i, target_ulong addr);
CPUWatchpoint *find_hw_watchpoint(CPUState *cpu, target_ulong addr);
int insert_hw_watchpoint(target_ulong addr, target_ulong len, int type);
int delete_hw_watchpoint(target_ulong addr, target_ulong len, int type);
/* Return the current value of the system counter in ticks */
uint64_t gt_get_countervalue(CPUARMState *env);
/*
* Return the currently applicable offset between the system counter
* and CNTVCT_EL0 (this will be either 0 or the value of CNTVOFF_EL2).
*/
uint64_t gt_virt_cnt_offset(CPUARMState *env);
#endif
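Exporting these two helpers lets the WFxT code compute the same virtual count
the guest reads. As a sketch of how they compose, mirroring the existing
gt_virt_cnt_read() behaviour (illustrative, not part of the hunk):

/* Illustrative: CNTVCT_EL0 as guest software currently sees it. */
static uint64_t current_cntvct(CPUARMState *env)
{
    return gt_get_countervalue(env) - gt_virt_cnt_offset(env);
}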


@@ -242,6 +242,25 @@ static const VMStateDescription vmstate_irq_line_state = {
}
};
static bool wfxt_timer_needed(void *opaque)
{
ARMCPU *cpu = opaque;
/* We'll only have the timer object if FEAT_WFxT is implemented */
return cpu->wfxt_timer;
}
static const VMStateDescription vmstate_wfxt_timer = {
.name = "cpu/wfxt-timer",
.version_id = 1,
.minimum_version_id = 1,
.needed = wfxt_timer_needed,
.fields = (const VMStateField[]) {
VMSTATE_TIMER_PTR(wfxt_timer, ARMCPU),
VMSTATE_END_OF_LIST()
}
};
static bool m_needed(void *opaque)
{
ARMCPU *cpu = opaque;
@@ -957,6 +976,7 @@ const VMStateDescription vmstate_arm_cpu = {
#endif
&vmstate_serror,
&vmstate_irq_line_state,
&vmstate_wfxt_timer,
NULL
}
};


@@ -32,6 +32,7 @@
&rr_e rd rn esz
&rrr_e rd rn rm esz
&rrx_e rd rn rm idx esz
&rrrr_e rd rn rm ra esz
&qrr_e q rd rn esz
&qrrr_e q rd rn rm esz
&qrrx_e q rd rn rm idx esz
@@ -42,8 +43,11 @@
@rr_sd ........ ... ..... ...... rn:5 rd:5 &rr_e esz=%esz_sd
@rrr_h ........ ... rm:5 ...... rn:5 rd:5 &rrr_e esz=1
@rrr_d ........ ... rm:5 ...... rn:5 rd:5 &rrr_e esz=3
@rrr_sd ........ ... rm:5 ...... rn:5 rd:5 &rrr_e esz=%esz_sd
@rrr_hsd ........ ... rm:5 ...... rn:5 rd:5 &rrr_e esz=%esz_hsd
@rrr_e ........ esz:2 . rm:5 ...... rn:5 rd:5 &rrr_e
@r2r_e ........ esz:2 . ..... ...... rm:5 rd:5 &rrr_e rn=%rd
@rrx_h ........ .. .. rm:4 .... . . rn:5 rd:5 &rrx_e esz=1 idx=%hlm
@rrx_s ........ .. . rm:5 .... . . rn:5 rd:5 &rrx_e esz=2 idx=%hl
@@ -59,6 +63,7 @@
@qrrr_h . q:1 ...... ... rm:5 ...... rn:5 rd:5 &qrrr_e esz=1
@qrrr_sd . q:1 ...... ... rm:5 ...... rn:5 rd:5 &qrrr_e esz=%esz_sd
@qrrr_e . q:1 ...... esz:2 . rm:5 ...... rn:5 rd:5 &qrrr_e
@qr2r_e . q:1 ...... esz:2 . ..... ...... rm:5 rd:5 &qrrr_e rn=%rd
@qrrx_h . q:1 .. .... .. .. rm:4 .... . . rn:5 rd:5 \
&qrrx_e esz=1 idx=%hlm
@@ -225,6 +230,10 @@ ERETA 1101011 0100 11111 00001 m:1 11111 11111 &reta # ERETAA, ERETAB
NOP 1101 0101 0000 0011 0010 ---- --- 11111
}
# System instructions with register argument
WFET 1101 0101 0000 0011 0001 0000 000 rd:5
WFIT 1101 0101 0000 0011 0001 0000 001 rd:5
# Barriers
CLREX 1101 0101 0000 0011 0011 ---- 010 11111
@@ -744,6 +753,35 @@ FRECPS_s 0101 1110 0.1 ..... 11111 1 ..... ..... @rrr_sd
FRSQRTS_s 0101 1110 110 ..... 00111 1 ..... ..... @rrr_h
FRSQRTS_s 0101 1110 1.1 ..... 11111 1 ..... ..... @rrr_sd
SQADD_s 0101 1110 ..1 ..... 00001 1 ..... ..... @rrr_e
UQADD_s 0111 1110 ..1 ..... 00001 1 ..... ..... @rrr_e
SQSUB_s 0101 1110 ..1 ..... 00101 1 ..... ..... @rrr_e
UQSUB_s 0111 1110 ..1 ..... 00101 1 ..... ..... @rrr_e
SUQADD_s 0101 1110 ..1 00000 00111 0 ..... ..... @r2r_e
USQADD_s 0111 1110 ..1 00000 00111 0 ..... ..... @r2r_e
SSHL_s 0101 1110 111 ..... 01000 1 ..... ..... @rrr_d
USHL_s 0111 1110 111 ..... 01000 1 ..... ..... @rrr_d
SRSHL_s 0101 1110 111 ..... 01010 1 ..... ..... @rrr_d
URSHL_s 0111 1110 111 ..... 01010 1 ..... ..... @rrr_d
SQSHL_s 0101 1110 ..1 ..... 01001 1 ..... ..... @rrr_e
UQSHL_s 0111 1110 ..1 ..... 01001 1 ..... ..... @rrr_e
SQRSHL_s 0101 1110 ..1 ..... 01011 1 ..... ..... @rrr_e
UQRSHL_s 0111 1110 ..1 ..... 01011 1 ..... ..... @rrr_e
ADD_s 0101 1110 111 ..... 10000 1 ..... ..... @rrr_d
SUB_s 0111 1110 111 ..... 10000 1 ..... ..... @rrr_d
CMGT_s 0101 1110 111 ..... 00110 1 ..... ..... @rrr_d
CMHI_s 0111 1110 111 ..... 00110 1 ..... ..... @rrr_d
CMGE_s 0101 1110 111 ..... 00111 1 ..... ..... @rrr_d
CMHS_s 0111 1110 111 ..... 00111 1 ..... ..... @rrr_d
CMTST_s 0101 1110 111 ..... 10001 1 ..... ..... @rrr_d
CMEQ_s 0111 1110 111 ..... 10001 1 ..... ..... @rrr_d
SQDMULH_s 0101 1110 ..1 ..... 10110 1 ..... ..... @rrr_e
SQRDMULH_s 0111 1110 ..1 ..... 10110 1 ..... ..... @rrr_e
### Advanced SIMD scalar pairwise
FADDP_s 0101 1110 0011 0000 1101 10 ..... ..... @rr_h
@@ -857,6 +895,53 @@ BSL_v 0.10 1110 011 ..... 00011 1 ..... ..... @qrrr_b
BIT_v 0.10 1110 101 ..... 00011 1 ..... ..... @qrrr_b
BIF_v 0.10 1110 111 ..... 00011 1 ..... ..... @qrrr_b
SQADD_v 0.00 1110 ..1 ..... 00001 1 ..... ..... @qrrr_e
UQADD_v 0.10 1110 ..1 ..... 00001 1 ..... ..... @qrrr_e
SQSUB_v 0.00 1110 ..1 ..... 00101 1 ..... ..... @qrrr_e
UQSUB_v 0.10 1110 ..1 ..... 00101 1 ..... ..... @qrrr_e
SUQADD_v 0.00 1110 ..1 00000 00111 0 ..... ..... @qr2r_e
USQADD_v 0.10 1110 ..1 00000 00111 0 ..... ..... @qr2r_e
SSHL_v 0.00 1110 ..1 ..... 01000 1 ..... ..... @qrrr_e
USHL_v 0.10 1110 ..1 ..... 01000 1 ..... ..... @qrrr_e
SRSHL_v 0.00 1110 ..1 ..... 01010 1 ..... ..... @qrrr_e
URSHL_v 0.10 1110 ..1 ..... 01010 1 ..... ..... @qrrr_e
SQSHL_v 0.00 1110 ..1 ..... 01001 1 ..... ..... @qrrr_e
UQSHL_v 0.10 1110 ..1 ..... 01001 1 ..... ..... @qrrr_e
SQRSHL_v 0.00 1110 ..1 ..... 01011 1 ..... ..... @qrrr_e
UQRSHL_v 0.10 1110 ..1 ..... 01011 1 ..... ..... @qrrr_e
ADD_v 0.00 1110 ..1 ..... 10000 1 ..... ..... @qrrr_e
SUB_v 0.10 1110 ..1 ..... 10000 1 ..... ..... @qrrr_e
CMGT_v 0.00 1110 ..1 ..... 00110 1 ..... ..... @qrrr_e
CMHI_v 0.10 1110 ..1 ..... 00110 1 ..... ..... @qrrr_e
CMGE_v 0.00 1110 ..1 ..... 00111 1 ..... ..... @qrrr_e
CMHS_v 0.10 1110 ..1 ..... 00111 1 ..... ..... @qrrr_e
CMTST_v 0.00 1110 ..1 ..... 10001 1 ..... ..... @qrrr_e
CMEQ_v 0.10 1110 ..1 ..... 10001 1 ..... ..... @qrrr_e
SHADD_v 0.00 1110 ..1 ..... 00000 1 ..... ..... @qrrr_e
UHADD_v 0.10 1110 ..1 ..... 00000 1 ..... ..... @qrrr_e
SHSUB_v 0.00 1110 ..1 ..... 00100 1 ..... ..... @qrrr_e
UHSUB_v 0.10 1110 ..1 ..... 00100 1 ..... ..... @qrrr_e
SRHADD_v 0.00 1110 ..1 ..... 00010 1 ..... ..... @qrrr_e
URHADD_v 0.10 1110 ..1 ..... 00010 1 ..... ..... @qrrr_e
SMAX_v 0.00 1110 ..1 ..... 01100 1 ..... ..... @qrrr_e
UMAX_v 0.10 1110 ..1 ..... 01100 1 ..... ..... @qrrr_e
SMIN_v 0.00 1110 ..1 ..... 01101 1 ..... ..... @qrrr_e
UMIN_v 0.10 1110 ..1 ..... 01101 1 ..... ..... @qrrr_e
SABD_v 0.00 1110 ..1 ..... 01110 1 ..... ..... @qrrr_e
UABD_v 0.10 1110 ..1 ..... 01110 1 ..... ..... @qrrr_e
SABA_v 0.00 1110 ..1 ..... 01111 1 ..... ..... @qrrr_e
UABA_v 0.10 1110 ..1 ..... 01111 1 ..... ..... @qrrr_e
MUL_v 0.00 1110 ..1 ..... 10011 1 ..... ..... @qrrr_e
PMUL_v 0.10 1110 001 ..... 10011 1 ..... ..... @qrrr_b
MLA_v 0.00 1110 ..1 ..... 10010 1 ..... ..... @qrrr_e
MLS_v 0.10 1110 ..1 ..... 10010 1 ..... ..... @qrrr_e
SQDMULH_v 0.00 1110 ..1 ..... 10110 1 ..... ..... @qrrr_e
SQRDMULH_v 0.10 1110 ..1 ..... 10110 1 ..... ..... @qrrr_e
### Advanced SIMD scalar x indexed element
FMUL_si 0101 1111 00 .. .... 1001 . 0 ..... ..... @rrx_h
@@ -875,6 +960,12 @@ FMULX_si 0111 1111 00 .. .... 1001 . 0 ..... ..... @rrx_h
FMULX_si 0111 1111 10 . ..... 1001 . 0 ..... ..... @rrx_s
FMULX_si 0111 1111 11 0 ..... 1001 . 0 ..... ..... @rrx_d
SQDMULH_si 0101 1111 01 .. .... 1100 . 0 ..... ..... @rrx_h
SQDMULH_si 0101 1111 10 .. .... 1100 . 0 ..... ..... @rrx_s
SQRDMULH_si 0101 1111 01 .. .... 1101 . 0 ..... ..... @rrx_h
SQRDMULH_si 0101 1111 10 . ..... 1101 . 0 ..... ..... @rrx_s
### Advanced SIMD vector x indexed element
FMUL_vi 0.00 1111 00 .. .... 1001 . 0 ..... ..... @qrrx_h
@@ -897,3 +988,31 @@ FMLAL_vi 0.00 1111 10 .. .... 0000 . 0 ..... ..... @qrrx_h
FMLSL_vi 0.00 1111 10 .. .... 0100 . 0 ..... ..... @qrrx_h
FMLAL2_vi 0.10 1111 10 .. .... 1000 . 0 ..... ..... @qrrx_h
FMLSL2_vi 0.10 1111 10 .. .... 1100 . 0 ..... ..... @qrrx_h
MUL_vi 0.00 1111 01 .. .... 1000 . 0 ..... ..... @qrrx_h
MUL_vi 0.00 1111 10 . ..... 1000 . 0 ..... ..... @qrrx_s
MLA_vi 0.10 1111 01 .. .... 0000 . 0 ..... ..... @qrrx_h
MLA_vi 0.10 1111 10 . ..... 0000 . 0 ..... ..... @qrrx_s
MLS_vi 0.10 1111 01 .. .... 0100 . 0 ..... ..... @qrrx_h
MLS_vi 0.10 1111 10 . ..... 0100 . 0 ..... ..... @qrrx_s
SQDMULH_vi 0.00 1111 01 .. .... 1100 . 0 ..... ..... @qrrx_h
SQDMULH_vi 0.00 1111 10 . ..... 1100 . 0 ..... ..... @qrrx_s
SQRDMULH_vi 0.00 1111 01 .. .... 1101 . 0 ..... ..... @qrrx_h
SQRDMULH_vi 0.00 1111 10 . ..... 1101 . 0 ..... ..... @qrrx_s
# Floating-point conditional select
FCSEL 0001 1110 .. 1 rm:5 cond:4 11 rn:5 rd:5 esz=%esz_hsd
# Floating-point data-processing (3 source)
@rrrr_hsd .... .... .. . rm:5 . ra:5 rn:5 rd:5 &rrrr_e esz=%esz_hsd
FMADD 0001 1111 .. 0 ..... 0 ..... ..... ..... @rrrr_hsd
FMSUB 0001 1111 .. 0 ..... 1 ..... ..... ..... @rrrr_hsd
FNMADD 0001 1111 .. 1 ..... 0 ..... ..... ..... @rrrr_hsd
FNMSUB 0001 1111 .. 1 ..... 1 ..... ..... ..... @rrrr_hsd


@@ -1168,6 +1168,7 @@ void aarch64_max_tcg_initfn(Object *obj)
t = cpu->isar.id_aa64isar2;
t = FIELD_DP64(t, ID_AA64ISAR2, MOPS, 1); /* FEAT_MOPS */
t = FIELD_DP64(t, ID_AA64ISAR2, BC, 1); /* FEAT_HBC */
t = FIELD_DP64(t, ID_AA64ISAR2, WFXT, 2); /* FEAT_WFxT */
cpu->isar.id_aa64isar2 = t;
t = cpu->isar.id_aa64pfr0;


@@ -29,11 +29,32 @@ static void gen_gvec_fn3_qc(uint32_t rd_ofs, uint32_t rn_ofs, uint32_t rm_ofs,
{
TCGv_ptr qc_ptr = tcg_temp_new_ptr();
tcg_debug_assert(opr_sz <= sizeof_field(CPUARMState, vfp.qc));
tcg_gen_addi_ptr(qc_ptr, tcg_env, offsetof(CPUARMState, vfp.qc));
tcg_gen_gvec_3_ptr(rd_ofs, rn_ofs, rm_ofs, qc_ptr,
opr_sz, max_sz, 0, fn);
}
void gen_gvec_sqdmulh_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
{
static gen_helper_gvec_3_ptr * const fns[2] = {
gen_helper_neon_sqdmulh_h, gen_helper_neon_sqdmulh_s
};
tcg_debug_assert(vece >= 1 && vece <= 2);
gen_gvec_fn3_qc(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, fns[vece - 1]);
}
void gen_gvec_sqrdmulh_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
{
static gen_helper_gvec_3_ptr * const fns[2] = {
gen_helper_neon_sqrdmulh_h, gen_helper_neon_sqrdmulh_s
};
tcg_debug_assert(vece >= 1 && vece <= 2);
gen_gvec_fn3_qc(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, fns[vece - 1]);
}
void gen_gvec_sqrdmlah_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
{
@@ -933,21 +954,17 @@ void gen_gvec_mls(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
/* CMTST : test is "if (X & Y != 0)". */
static void gen_cmtst_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
{
tcg_gen_and_i32(d, a, b);
tcg_gen_negsetcond_i32(TCG_COND_NE, d, d, tcg_constant_i32(0));
tcg_gen_negsetcond_i32(TCG_COND_TSTNE, d, a, b);
}
void gen_cmtst_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
{
tcg_gen_and_i64(d, a, b);
tcg_gen_negsetcond_i64(TCG_COND_NE, d, d, tcg_constant_i64(0));
tcg_gen_negsetcond_i64(TCG_COND_TSTNE, d, a, b);
}
static void gen_cmtst_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
{
tcg_gen_and_vec(vece, d, a, b);
tcg_gen_dupi_vec(vece, a, 0);
tcg_gen_cmp_vec(TCG_COND_NE, vece, d, d, a);
tcg_gen_cmp_vec(TCG_COND_TSTNE, vece, d, a, b);
}
void gen_gvec_cmtst(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
@@ -1217,21 +1234,113 @@ void gen_gvec_sshl(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
}
static void gen_uqadd_vec(unsigned vece, TCGv_vec t, TCGv_vec sat,
void gen_gvec_srshl(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
{
static gen_helper_gvec_3 * const fns[] = {
gen_helper_gvec_srshl_b, gen_helper_gvec_srshl_h,
gen_helper_gvec_srshl_s, gen_helper_gvec_srshl_d,
};
tcg_debug_assert(vece <= MO_64);
tcg_gen_gvec_3_ool(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, 0, fns[vece]);
}
void gen_gvec_urshl(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
{
static gen_helper_gvec_3 * const fns[] = {
gen_helper_gvec_urshl_b, gen_helper_gvec_urshl_h,
gen_helper_gvec_urshl_s, gen_helper_gvec_urshl_d,
};
tcg_debug_assert(vece <= MO_64);
tcg_gen_gvec_3_ool(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, 0, fns[vece]);
}
void gen_neon_sqshl(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
{
static gen_helper_gvec_3_ptr * const fns[] = {
gen_helper_neon_sqshl_b, gen_helper_neon_sqshl_h,
gen_helper_neon_sqshl_s, gen_helper_neon_sqshl_d,
};
tcg_debug_assert(vece <= MO_64);
tcg_gen_gvec_3_ptr(rd_ofs, rn_ofs, rm_ofs, tcg_env,
opr_sz, max_sz, 0, fns[vece]);
}
void gen_neon_uqshl(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
{
static gen_helper_gvec_3_ptr * const fns[] = {
gen_helper_neon_uqshl_b, gen_helper_neon_uqshl_h,
gen_helper_neon_uqshl_s, gen_helper_neon_uqshl_d,
};
tcg_debug_assert(vece <= MO_64);
tcg_gen_gvec_3_ptr(rd_ofs, rn_ofs, rm_ofs, tcg_env,
opr_sz, max_sz, 0, fns[vece]);
}
void gen_neon_sqrshl(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
{
static gen_helper_gvec_3_ptr * const fns[] = {
gen_helper_neon_sqrshl_b, gen_helper_neon_sqrshl_h,
gen_helper_neon_sqrshl_s, gen_helper_neon_sqrshl_d,
};
tcg_debug_assert(vece <= MO_64);
tcg_gen_gvec_3_ptr(rd_ofs, rn_ofs, rm_ofs, tcg_env,
opr_sz, max_sz, 0, fns[vece]);
}
void gen_neon_uqrshl(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
{
static gen_helper_gvec_3_ptr * const fns[] = {
gen_helper_neon_uqrshl_b, gen_helper_neon_uqrshl_h,
gen_helper_neon_uqrshl_s, gen_helper_neon_uqrshl_d,
};
tcg_debug_assert(vece <= MO_64);
tcg_gen_gvec_3_ptr(rd_ofs, rn_ofs, rm_ofs, tcg_env,
opr_sz, max_sz, 0, fns[vece]);
}
void gen_uqadd_bhs(TCGv_i64 res, TCGv_i64 qc, TCGv_i64 a, TCGv_i64 b, MemOp esz)
{
uint64_t max = MAKE_64BIT_MASK(0, 8 << esz);
TCGv_i64 tmp = tcg_temp_new_i64();
tcg_gen_add_i64(tmp, a, b);
tcg_gen_umin_i64(res, tmp, tcg_constant_i64(max));
tcg_gen_xor_i64(tmp, tmp, res);
tcg_gen_or_i64(qc, qc, tmp);
}
void gen_uqadd_d(TCGv_i64 res, TCGv_i64 qc, TCGv_i64 a, TCGv_i64 b)
{
TCGv_i64 t = tcg_temp_new_i64();
tcg_gen_add_i64(t, a, b);
tcg_gen_movcond_i64(TCG_COND_LTU, res, t, a,
tcg_constant_i64(UINT64_MAX), t);
tcg_gen_xor_i64(t, t, res);
tcg_gen_or_i64(qc, qc, t);
}
static void gen_uqadd_vec(unsigned vece, TCGv_vec t, TCGv_vec qc,
TCGv_vec a, TCGv_vec b)
{
TCGv_vec x = tcg_temp_new_vec_matching(t);
tcg_gen_add_vec(vece, x, a, b);
tcg_gen_usadd_vec(vece, t, a, b);
tcg_gen_cmp_vec(TCG_COND_NE, vece, x, x, t);
tcg_gen_or_vec(vece, sat, sat, x);
tcg_gen_xor_vec(vece, x, x, t);
tcg_gen_or_vec(vece, qc, qc, x);
}
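The new _bhs/_d scalar expanders and the reworked vector expander above all
follow one saturation pattern: compute the exact result, clamp it, then OR
the XOR of the exact and clamped values into QC, so QC becomes non-zero
exactly when saturation happened. Scalar model of the unsigned byte case
(illustrative C, assuming <stdint.h>):

/* Model of gen_uqadd_bhs at MO_8: clamp, then record any saturation. */
static uint8_t uqadd8(uint8_t a, uint8_t b, uint64_t *qc)
{
    unsigned t = a + b;                 /* exact sum, no wraparound */
    uint8_t res = t > 0xff ? 0xff : t;  /* the umin clamp */
    *qc |= t ^ res;                     /* non-zero iff clamped */
    return res;
}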
void gen_gvec_uqadd_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
{
static const TCGOpcode vecop_list[] = {
INDEX_op_usadd_vec, INDEX_op_cmp_vec, INDEX_op_add_vec, 0
INDEX_op_usadd_vec, INDEX_op_add_vec, 0
};
static const GVecGen4 ops[4] = {
{ .fniv = gen_uqadd_vec,
@@ -1250,30 +1359,68 @@ void gen_gvec_uqadd_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
.opt_opc = vecop_list,
.vece = MO_32 },
{ .fniv = gen_uqadd_vec,
.fni8 = gen_uqadd_d,
.fno = gen_helper_gvec_uqadd_d,
.write_aofs = true,
.opt_opc = vecop_list,
.vece = MO_64 },
};
tcg_debug_assert(opr_sz <= sizeof_field(CPUARMState, vfp.qc));
tcg_gen_gvec_4(rd_ofs, offsetof(CPUARMState, vfp.qc),
rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
}
static void gen_sqadd_vec(unsigned vece, TCGv_vec t, TCGv_vec sat,
void gen_sqadd_bhs(TCGv_i64 res, TCGv_i64 qc, TCGv_i64 a, TCGv_i64 b, MemOp esz)
{
int64_t max = MAKE_64BIT_MASK(0, (8 << esz) - 1);
int64_t min = -1ll - max;
TCGv_i64 tmp = tcg_temp_new_i64();
tcg_gen_add_i64(tmp, a, b);
tcg_gen_smin_i64(res, tmp, tcg_constant_i64(max));
tcg_gen_smax_i64(res, res, tcg_constant_i64(min));
tcg_gen_xor_i64(tmp, tmp, res);
tcg_gen_or_i64(qc, qc, tmp);
}
void gen_sqadd_d(TCGv_i64 res, TCGv_i64 qc, TCGv_i64 a, TCGv_i64 b)
{
TCGv_i64 t0 = tcg_temp_new_i64();
TCGv_i64 t1 = tcg_temp_new_i64();
TCGv_i64 t2 = tcg_temp_new_i64();
tcg_gen_add_i64(t0, a, b);
/* Compute signed overflow indication into T1 */
tcg_gen_xor_i64(t1, a, b);
tcg_gen_xor_i64(t2, t0, a);
tcg_gen_andc_i64(t1, t2, t1);
/* Compute saturated value into T2 */
tcg_gen_sari_i64(t2, a, 63);
tcg_gen_xori_i64(t2, t2, INT64_MAX);
tcg_gen_movcond_i64(TCG_COND_LT, res, t1, tcg_constant_i64(0), t2, t0);
tcg_gen_xor_i64(t0, t0, res);
tcg_gen_or_i64(qc, qc, t0);
}
static void gen_sqadd_vec(unsigned vece, TCGv_vec t, TCGv_vec qc,
TCGv_vec a, TCGv_vec b)
{
TCGv_vec x = tcg_temp_new_vec_matching(t);
tcg_gen_add_vec(vece, x, a, b);
tcg_gen_ssadd_vec(vece, t, a, b);
tcg_gen_cmp_vec(TCG_COND_NE, vece, x, x, t);
tcg_gen_or_vec(vece, sat, sat, x);
tcg_gen_xor_vec(vece, x, x, t);
tcg_gen_or_vec(vece, qc, qc, x);
}
void gen_gvec_sqadd_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
{
static const TCGOpcode vecop_list[] = {
INDEX_op_ssadd_vec, INDEX_op_cmp_vec, INDEX_op_add_vec, 0
INDEX_op_ssadd_vec, INDEX_op_add_vec, 0
};
static const GVecGen4 ops[4] = {
{ .fniv = gen_sqadd_vec,
@@ -1292,30 +1439,53 @@ void gen_gvec_sqadd_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
.write_aofs = true,
.vece = MO_32 },
{ .fniv = gen_sqadd_vec,
.fni8 = gen_sqadd_d,
.fno = gen_helper_gvec_sqadd_d,
.opt_opc = vecop_list,
.write_aofs = true,
.vece = MO_64 },
};
tcg_debug_assert(opr_sz <= sizeof_field(CPUARMState, vfp.qc));
tcg_gen_gvec_4(rd_ofs, offsetof(CPUARMState, vfp.qc),
rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
}
static void gen_uqsub_vec(unsigned vece, TCGv_vec t, TCGv_vec sat,
void gen_uqsub_bhs(TCGv_i64 res, TCGv_i64 qc, TCGv_i64 a, TCGv_i64 b, MemOp esz)
{
TCGv_i64 tmp = tcg_temp_new_i64();
tcg_gen_sub_i64(tmp, a, b);
tcg_gen_smax_i64(res, tmp, tcg_constant_i64(0));
tcg_gen_xor_i64(tmp, tmp, res);
tcg_gen_or_i64(qc, qc, tmp);
}
void gen_uqsub_d(TCGv_i64 res, TCGv_i64 qc, TCGv_i64 a, TCGv_i64 b)
{
TCGv_i64 t = tcg_temp_new_i64();
tcg_gen_sub_i64(t, a, b);
tcg_gen_movcond_i64(TCG_COND_LTU, res, a, b, tcg_constant_i64(0), t);
tcg_gen_xor_i64(t, t, res);
tcg_gen_or_i64(qc, qc, t);
}
static void gen_uqsub_vec(unsigned vece, TCGv_vec t, TCGv_vec qc,
TCGv_vec a, TCGv_vec b)
{
TCGv_vec x = tcg_temp_new_vec_matching(t);
tcg_gen_sub_vec(vece, x, a, b);
tcg_gen_ussub_vec(vece, t, a, b);
tcg_gen_cmp_vec(TCG_COND_NE, vece, x, x, t);
tcg_gen_or_vec(vece, sat, sat, x);
tcg_gen_xor_vec(vece, x, x, t);
tcg_gen_or_vec(vece, qc, qc, x);
}
void gen_gvec_uqsub_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
{
static const TCGOpcode vecop_list[] = {
INDEX_op_ussub_vec, INDEX_op_cmp_vec, INDEX_op_sub_vec, 0
INDEX_op_ussub_vec, INDEX_op_sub_vec, 0
};
static const GVecGen4 ops[4] = {
{ .fniv = gen_uqsub_vec,
@@ -1334,30 +1504,68 @@ void gen_gvec_uqsub_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
.write_aofs = true,
.vece = MO_32 },
{ .fniv = gen_uqsub_vec,
.fni8 = gen_uqsub_d,
.fno = gen_helper_gvec_uqsub_d,
.opt_opc = vecop_list,
.write_aofs = true,
.vece = MO_64 },
};
tcg_debug_assert(opr_sz <= sizeof_field(CPUARMState, vfp.qc));
tcg_gen_gvec_4(rd_ofs, offsetof(CPUARMState, vfp.qc),
rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
}
static void gen_sqsub_vec(unsigned vece, TCGv_vec t, TCGv_vec sat,
void gen_sqsub_bhs(TCGv_i64 res, TCGv_i64 qc, TCGv_i64 a, TCGv_i64 b, MemOp esz)
{
int64_t max = MAKE_64BIT_MASK(0, (8 << esz) - 1);
int64_t min = -1ll - max;
TCGv_i64 tmp = tcg_temp_new_i64();
tcg_gen_sub_i64(tmp, a, b);
tcg_gen_smin_i64(res, tmp, tcg_constant_i64(max));
tcg_gen_smax_i64(res, res, tcg_constant_i64(min));
tcg_gen_xor_i64(tmp, tmp, res);
tcg_gen_or_i64(qc, qc, tmp);
}
void gen_sqsub_d(TCGv_i64 res, TCGv_i64 qc, TCGv_i64 a, TCGv_i64 b)
{
TCGv_i64 t0 = tcg_temp_new_i64();
TCGv_i64 t1 = tcg_temp_new_i64();
TCGv_i64 t2 = tcg_temp_new_i64();
tcg_gen_sub_i64(t0, a, b);
/* Compute signed overflow indication into T1 */
tcg_gen_xor_i64(t1, a, b);
tcg_gen_xor_i64(t2, t0, a);
tcg_gen_and_i64(t1, t1, t2);
/* Compute saturated value into T2 */
tcg_gen_sari_i64(t2, a, 63);
tcg_gen_xori_i64(t2, t2, INT64_MAX);
tcg_gen_movcond_i64(TCG_COND_LT, res, t1, tcg_constant_i64(0), t2, t0);
tcg_gen_xor_i64(t0, t0, res);
tcg_gen_or_i64(qc, qc, t0);
}
static void gen_sqsub_vec(unsigned vece, TCGv_vec t, TCGv_vec qc,
TCGv_vec a, TCGv_vec b)
{
TCGv_vec x = tcg_temp_new_vec_matching(t);
tcg_gen_sub_vec(vece, x, a, b);
tcg_gen_sssub_vec(vece, t, a, b);
tcg_gen_cmp_vec(TCG_COND_NE, vece, x, x, t);
tcg_gen_or_vec(vece, sat, sat, x);
tcg_gen_xor_vec(vece, x, x, t);
tcg_gen_or_vec(vece, qc, qc, x);
}
void gen_gvec_sqsub_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
{
static const TCGOpcode vecop_list[] = {
INDEX_op_sssub_vec, INDEX_op_cmp_vec, INDEX_op_sub_vec, 0
INDEX_op_sssub_vec, INDEX_op_sub_vec, 0
};
static const GVecGen4 ops[4] = {
{ .fniv = gen_sqsub_vec,
@@ -1376,11 +1584,14 @@ void gen_gvec_sqsub_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
.write_aofs = true,
.vece = MO_32 },
{ .fniv = gen_sqsub_vec,
.fni8 = gen_sqsub_d,
.fno = gen_helper_gvec_sqsub_d,
.opt_opc = vecop_list,
.write_aofs = true,
.vece = MO_64 },
};
tcg_debug_assert(opr_sz <= sizeof_field(CPUARMState, vfp.qc));
tcg_gen_gvec_4(rd_ofs, offsetof(CPUARMState, vfp.qc),
rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
}
@@ -1670,3 +1881,435 @@ void gen_gvec_uminp(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
tcg_debug_assert(vece <= MO_32);
tcg_gen_gvec_3_ool(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, 0, fns[vece]);
}
static void gen_shadd8_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
{
TCGv_i64 t = tcg_temp_new_i64();
tcg_gen_and_i64(t, a, b);
tcg_gen_vec_sar8i_i64(a, a, 1);
tcg_gen_vec_sar8i_i64(b, b, 1);
tcg_gen_andi_i64(t, t, dup_const(MO_8, 1));
tcg_gen_vec_add8_i64(d, a, b);
tcg_gen_vec_add8_i64(d, d, t);
}
static void gen_shadd16_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
{
TCGv_i64 t = tcg_temp_new_i64();
tcg_gen_and_i64(t, a, b);
tcg_gen_vec_sar16i_i64(a, a, 1);
tcg_gen_vec_sar16i_i64(b, b, 1);
tcg_gen_andi_i64(t, t, dup_const(MO_16, 1));
tcg_gen_vec_add16_i64(d, a, b);
tcg_gen_vec_add16_i64(d, d, t);
}
static void gen_shadd_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
{
TCGv_i32 t = tcg_temp_new_i32();
tcg_gen_and_i32(t, a, b);
tcg_gen_sari_i32(a, a, 1);
tcg_gen_sari_i32(b, b, 1);
tcg_gen_andi_i32(t, t, 1);
tcg_gen_add_i32(d, a, b);
tcg_gen_add_i32(d, d, t);
}
static void gen_shadd_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
{
TCGv_vec t = tcg_temp_new_vec_matching(d);
tcg_gen_and_vec(vece, t, a, b);
tcg_gen_sari_vec(vece, a, a, 1);
tcg_gen_sari_vec(vece, b, b, 1);
tcg_gen_and_vec(vece, t, t, tcg_constant_vec_matching(d, vece, 1));
tcg_gen_add_vec(vece, d, a, b);
tcg_gen_add_vec(vece, d, d, t);
}
void gen_gvec_shadd(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
{
static const TCGOpcode vecop_list[] = {
INDEX_op_sari_vec, INDEX_op_add_vec, 0
};
static const GVecGen3 g[] = {
{ .fni8 = gen_shadd8_i64,
.fniv = gen_shadd_vec,
.opt_opc = vecop_list,
.vece = MO_8 },
{ .fni8 = gen_shadd16_i64,
.fniv = gen_shadd_vec,
.opt_opc = vecop_list,
.vece = MO_16 },
{ .fni4 = gen_shadd_i32,
.fniv = gen_shadd_vec,
.opt_opc = vecop_list,
.vece = MO_32 },
};
tcg_debug_assert(vece <= MO_32);
tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &g[vece]);
}
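gen_gvec_shadd and the related expansions below avoid needing a wider
intermediate by using the identity (x + y) >> 1 == (x >> 1) + (y >> 1) +
(x & y & 1): halving each operand first discards the two low bits, whose
carry is 1 exactly when both are set. The rounding forms (SRHADD/URHADD) use
(x | y) & 1 as the rounding term, and the halving subtractions (SHSUB/UHSUB)
subtract the borrow (~x & y) & 1, which is what their andc computes. Scalar
check for signed bytes (illustrative, assumes arithmetic right shift):

/* (x + y) >> 1 without ever forming the 9-bit sum. */
static int8_t shadd8(int8_t x, int8_t y)
{
    return (x >> 1) + (y >> 1) + (x & y & 1);
}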
static void gen_uhadd8_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
{
TCGv_i64 t = tcg_temp_new_i64();
tcg_gen_and_i64(t, a, b);
tcg_gen_vec_shr8i_i64(a, a, 1);
tcg_gen_vec_shr8i_i64(b, b, 1);
tcg_gen_andi_i64(t, t, dup_const(MO_8, 1));
tcg_gen_vec_add8_i64(d, a, b);
tcg_gen_vec_add8_i64(d, d, t);
}
static void gen_uhadd16_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
{
TCGv_i64 t = tcg_temp_new_i64();
tcg_gen_and_i64(t, a, b);
tcg_gen_vec_shr16i_i64(a, a, 1);
tcg_gen_vec_shr16i_i64(b, b, 1);
tcg_gen_andi_i64(t, t, dup_const(MO_16, 1));
tcg_gen_vec_add16_i64(d, a, b);
tcg_gen_vec_add16_i64(d, d, t);
}
static void gen_uhadd_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
{
TCGv_i32 t = tcg_temp_new_i32();
tcg_gen_and_i32(t, a, b);
tcg_gen_shri_i32(a, a, 1);
tcg_gen_shri_i32(b, b, 1);
tcg_gen_andi_i32(t, t, 1);
tcg_gen_add_i32(d, a, b);
tcg_gen_add_i32(d, d, t);
}
static void gen_uhadd_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
{
TCGv_vec t = tcg_temp_new_vec_matching(d);
tcg_gen_and_vec(vece, t, a, b);
tcg_gen_shri_vec(vece, a, a, 1);
tcg_gen_shri_vec(vece, b, b, 1);
tcg_gen_and_vec(vece, t, t, tcg_constant_vec_matching(d, vece, 1));
tcg_gen_add_vec(vece, d, a, b);
tcg_gen_add_vec(vece, d, d, t);
}
void gen_gvec_uhadd(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
{
static const TCGOpcode vecop_list[] = {
INDEX_op_shri_vec, INDEX_op_add_vec, 0
};
static const GVecGen3 g[] = {
{ .fni8 = gen_uhadd8_i64,
.fniv = gen_uhadd_vec,
.opt_opc = vecop_list,
.vece = MO_8 },
{ .fni8 = gen_uhadd16_i64,
.fniv = gen_uhadd_vec,
.opt_opc = vecop_list,
.vece = MO_16 },
{ .fni4 = gen_uhadd_i32,
.fniv = gen_uhadd_vec,
.opt_opc = vecop_list,
.vece = MO_32 },
};
tcg_debug_assert(vece <= MO_32);
tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &g[vece]);
}
static void gen_shsub8_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
{
TCGv_i64 t = tcg_temp_new_i64();
tcg_gen_andc_i64(t, b, a);
tcg_gen_vec_sar8i_i64(a, a, 1);
tcg_gen_vec_sar8i_i64(b, b, 1);
tcg_gen_andi_i64(t, t, dup_const(MO_8, 1));
tcg_gen_vec_sub8_i64(d, a, b);
tcg_gen_vec_sub8_i64(d, d, t);
}
static void gen_shsub16_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
{
TCGv_i64 t = tcg_temp_new_i64();
tcg_gen_andc_i64(t, b, a);
tcg_gen_vec_sar16i_i64(a, a, 1);
tcg_gen_vec_sar16i_i64(b, b, 1);
tcg_gen_andi_i64(t, t, dup_const(MO_16, 1));
tcg_gen_vec_sub16_i64(d, a, b);
tcg_gen_vec_sub16_i64(d, d, t);
}
static void gen_shsub_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
{
TCGv_i32 t = tcg_temp_new_i32();
tcg_gen_andc_i32(t, b, a);
tcg_gen_sari_i32(a, a, 1);
tcg_gen_sari_i32(b, b, 1);
tcg_gen_andi_i32(t, t, 1);
tcg_gen_sub_i32(d, a, b);
tcg_gen_sub_i32(d, d, t);
}
static void gen_shsub_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
{
TCGv_vec t = tcg_temp_new_vec_matching(d);
tcg_gen_andc_vec(vece, t, b, a);
tcg_gen_sari_vec(vece, a, a, 1);
tcg_gen_sari_vec(vece, b, b, 1);
tcg_gen_and_vec(vece, t, t, tcg_constant_vec_matching(d, vece, 1));
tcg_gen_sub_vec(vece, d, a, b);
tcg_gen_sub_vec(vece, d, d, t);
}
void gen_gvec_shsub(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
{
static const TCGOpcode vecop_list[] = {
INDEX_op_sari_vec, INDEX_op_sub_vec, 0
};
static const GVecGen3 g[4] = {
{ .fni8 = gen_shsub8_i64,
.fniv = gen_shsub_vec,
.opt_opc = vecop_list,
.vece = MO_8 },
{ .fni8 = gen_shsub16_i64,
.fniv = gen_shsub_vec,
.opt_opc = vecop_list,
.vece = MO_16 },
{ .fni4 = gen_shsub_i32,
.fniv = gen_shsub_vec,
.opt_opc = vecop_list,
.vece = MO_32 },
};
assert(vece <= MO_32);
tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &g[vece]);
}
static void gen_uhsub8_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
{
TCGv_i64 t = tcg_temp_new_i64();
tcg_gen_andc_i64(t, b, a);
tcg_gen_vec_shr8i_i64(a, a, 1);
tcg_gen_vec_shr8i_i64(b, b, 1);
tcg_gen_andi_i64(t, t, dup_const(MO_8, 1));
tcg_gen_vec_sub8_i64(d, a, b);
tcg_gen_vec_sub8_i64(d, d, t);
}
static void gen_uhsub16_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
{
TCGv_i64 t = tcg_temp_new_i64();
tcg_gen_andc_i64(t, b, a);
tcg_gen_vec_shr16i_i64(a, a, 1);
tcg_gen_vec_shr16i_i64(b, b, 1);
tcg_gen_andi_i64(t, t, dup_const(MO_16, 1));
tcg_gen_vec_sub16_i64(d, a, b);
tcg_gen_vec_sub16_i64(d, d, t);
}
static void gen_uhsub_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
{
TCGv_i32 t = tcg_temp_new_i32();
tcg_gen_andc_i32(t, b, a);
tcg_gen_shri_i32(a, a, 1);
tcg_gen_shri_i32(b, b, 1);
tcg_gen_andi_i32(t, t, 1);
tcg_gen_sub_i32(d, a, b);
tcg_gen_sub_i32(d, d, t);
}
static void gen_uhsub_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
{
TCGv_vec t = tcg_temp_new_vec_matching(d);
tcg_gen_andc_vec(vece, t, b, a);
tcg_gen_shri_vec(vece, a, a, 1);
tcg_gen_shri_vec(vece, b, b, 1);
tcg_gen_and_vec(vece, t, t, tcg_constant_vec_matching(d, vece, 1));
tcg_gen_sub_vec(vece, d, a, b);
tcg_gen_sub_vec(vece, d, d, t);
}
void gen_gvec_uhsub(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
{
static const TCGOpcode vecop_list[] = {
INDEX_op_shri_vec, INDEX_op_sub_vec, 0
};
static const GVecGen3 g[4] = {
{ .fni8 = gen_uhsub8_i64,
.fniv = gen_uhsub_vec,
.opt_opc = vecop_list,
.vece = MO_8 },
{ .fni8 = gen_uhsub16_i64,
.fniv = gen_uhsub_vec,
.opt_opc = vecop_list,
.vece = MO_16 },
{ .fni4 = gen_uhsub_i32,
.fniv = gen_uhsub_vec,
.opt_opc = vecop_list,
.vece = MO_32 },
};
assert(vece <= MO_32);
tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &g[vece]);
}
static void gen_srhadd8_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
{
TCGv_i64 t = tcg_temp_new_i64();
tcg_gen_or_i64(t, a, b);
tcg_gen_vec_sar8i_i64(a, a, 1);
tcg_gen_vec_sar8i_i64(b, b, 1);
tcg_gen_andi_i64(t, t, dup_const(MO_8, 1));
tcg_gen_vec_add8_i64(d, a, b);
tcg_gen_vec_add8_i64(d, d, t);
}
static void gen_srhadd16_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
{
TCGv_i64 t = tcg_temp_new_i64();
tcg_gen_or_i64(t, a, b);
tcg_gen_vec_sar16i_i64(a, a, 1);
tcg_gen_vec_sar16i_i64(b, b, 1);
tcg_gen_andi_i64(t, t, dup_const(MO_16, 1));
tcg_gen_vec_add16_i64(d, a, b);
tcg_gen_vec_add16_i64(d, d, t);
}
static void gen_srhadd_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
{
TCGv_i32 t = tcg_temp_new_i32();
tcg_gen_or_i32(t, a, b);
tcg_gen_sari_i32(a, a, 1);
tcg_gen_sari_i32(b, b, 1);
tcg_gen_andi_i32(t, t, 1);
tcg_gen_add_i32(d, a, b);
tcg_gen_add_i32(d, d, t);
}
static void gen_srhadd_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
{
TCGv_vec t = tcg_temp_new_vec_matching(d);
tcg_gen_or_vec(vece, t, a, b);
tcg_gen_sari_vec(vece, a, a, 1);
tcg_gen_sari_vec(vece, b, b, 1);
tcg_gen_and_vec(vece, t, t, tcg_constant_vec_matching(d, vece, 1));
tcg_gen_add_vec(vece, d, a, b);
tcg_gen_add_vec(vece, d, d, t);
}
void gen_gvec_srhadd(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
{
static const TCGOpcode vecop_list[] = {
INDEX_op_sari_vec, INDEX_op_add_vec, 0
};
static const GVecGen3 g[] = {
{ .fni8 = gen_srhadd8_i64,
.fniv = gen_srhadd_vec,
.opt_opc = vecop_list,
.vece = MO_8 },
{ .fni8 = gen_srhadd16_i64,
.fniv = gen_srhadd_vec,
.opt_opc = vecop_list,
.vece = MO_16 },
{ .fni4 = gen_srhadd_i32,
.fniv = gen_srhadd_vec,
.opt_opc = vecop_list,
.vece = MO_32 },
};
assert(vece <= MO_32);
tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &g[vece]);
}
static void gen_urhadd8_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
{
TCGv_i64 t = tcg_temp_new_i64();
tcg_gen_or_i64(t, a, b);
tcg_gen_vec_shr8i_i64(a, a, 1);
tcg_gen_vec_shr8i_i64(b, b, 1);
tcg_gen_andi_i64(t, t, dup_const(MO_8, 1));
tcg_gen_vec_add8_i64(d, a, b);
tcg_gen_vec_add8_i64(d, d, t);
}
static void gen_urhadd16_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
{
TCGv_i64 t = tcg_temp_new_i64();
tcg_gen_or_i64(t, a, b);
tcg_gen_vec_shr16i_i64(a, a, 1);
tcg_gen_vec_shr16i_i64(b, b, 1);
tcg_gen_andi_i64(t, t, dup_const(MO_16, 1));
tcg_gen_vec_add16_i64(d, a, b);
tcg_gen_vec_add16_i64(d, d, t);
}
static void gen_urhadd_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
{
TCGv_i32 t = tcg_temp_new_i32();
tcg_gen_or_i32(t, a, b);
tcg_gen_shri_i32(a, a, 1);
tcg_gen_shri_i32(b, b, 1);
tcg_gen_andi_i32(t, t, 1);
tcg_gen_add_i32(d, a, b);
tcg_gen_add_i32(d, d, t);
}
static void gen_urhadd_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
{
TCGv_vec t = tcg_temp_new_vec_matching(d);
tcg_gen_or_vec(vece, t, a, b);
tcg_gen_shri_vec(vece, a, a, 1);
tcg_gen_shri_vec(vece, b, b, 1);
tcg_gen_and_vec(vece, t, t, tcg_constant_vec_matching(d, vece, 1));
tcg_gen_add_vec(vece, d, a, b);
tcg_gen_add_vec(vece, d, d, t);
}
void gen_gvec_urhadd(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
{
static const TCGOpcode vecop_list[] = {
INDEX_op_shri_vec, INDEX_op_add_vec, 0
};
static const GVecGen3 g[] = {
{ .fni8 = gen_urhadd8_i64,
.fniv = gen_urhadd_vec,
.opt_opc = vecop_list,
.vece = MO_8 },
{ .fni8 = gen_urhadd16_i64,
.fniv = gen_urhadd_vec,
.opt_opc = vecop_list,
.vece = MO_16 },
{ .fni4 = gen_urhadd_i32,
.fniv = gen_urhadd_vec,
.opt_opc = vecop_list,
.vece = MO_32 },
};
assert(vece <= MO_32);
tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &g[vece]);
}


@@ -188,3 +188,184 @@ void gen_gvec_bcax(unsigned vece, uint32_t d, uint32_t n, uint32_t m,
tcg_gen_gvec_4(d, n, m, a, oprsz, maxsz, &op);
}
/*
* Set @res to the correctly saturated result.
* Set @qc non-zero if saturation occurred.
*/
void gen_suqadd_bhs(TCGv_i64 res, TCGv_i64 qc,
TCGv_i64 a, TCGv_i64 b, MemOp esz)
{
TCGv_i64 max = tcg_constant_i64((1ull << ((8 << esz) - 1)) - 1);
TCGv_i64 t = tcg_temp_new_i64();
tcg_gen_add_i64(t, a, b);
tcg_gen_smin_i64(res, t, max);
tcg_gen_xor_i64(t, t, res);
tcg_gen_or_i64(qc, qc, t);
}
void gen_suqadd_d(TCGv_i64 res, TCGv_i64 qc, TCGv_i64 a, TCGv_i64 b)
{
TCGv_i64 max = tcg_constant_i64(INT64_MAX);
TCGv_i64 t = tcg_temp_new_i64();
/* Maximum value that can be added to @a without overflow. */
tcg_gen_sub_i64(t, max, a);
/* Constrain addend so that the next addition never overflows. */
tcg_gen_umin_i64(t, t, b);
tcg_gen_add_i64(res, a, t);
tcg_gen_xor_i64(t, t, b);
tcg_gen_or_i64(qc, qc, t);
}
static void gen_suqadd_vec(unsigned vece, TCGv_vec t, TCGv_vec qc,
TCGv_vec a, TCGv_vec b)
{
TCGv_vec max =
tcg_constant_vec_matching(t, vece, (1ull << ((8 << vece) - 1)) - 1);
TCGv_vec u = tcg_temp_new_vec_matching(t);
/* Maximum value that can be added to @a without overflow. */
tcg_gen_sub_vec(vece, u, max, a);
/* Constrain addend so that the next addition never overflows. */
tcg_gen_umin_vec(vece, u, u, b);
tcg_gen_add_vec(vece, t, u, a);
/* Compute QC by comparing the adjusted @b. */
tcg_gen_xor_vec(vece, u, u, b);
tcg_gen_or_vec(vece, qc, qc, u);
}
void gen_gvec_suqadd_qc(unsigned vece, uint32_t rd_ofs,
uint32_t rn_ofs, uint32_t rm_ofs,
uint32_t opr_sz, uint32_t max_sz)
{
static const TCGOpcode vecop_list[] = {
INDEX_op_add_vec, INDEX_op_sub_vec, INDEX_op_umin_vec, 0
};
static const GVecGen4 ops[4] = {
{ .fniv = gen_suqadd_vec,
.fno = gen_helper_gvec_suqadd_b,
.opt_opc = vecop_list,
.write_aofs = true,
.vece = MO_8 },
{ .fniv = gen_suqadd_vec,
.fno = gen_helper_gvec_suqadd_h,
.opt_opc = vecop_list,
.write_aofs = true,
.vece = MO_16 },
{ .fniv = gen_suqadd_vec,
.fno = gen_helper_gvec_suqadd_s,
.opt_opc = vecop_list,
.write_aofs = true,
.vece = MO_32 },
{ .fniv = gen_suqadd_vec,
.fni8 = gen_suqadd_d,
.fno = gen_helper_gvec_suqadd_d,
.opt_opc = vecop_list,
.write_aofs = true,
.vece = MO_64 },
};
tcg_debug_assert(opr_sz <= sizeof_field(CPUARMState, vfp.qc));
tcg_gen_gvec_4(rd_ofs, offsetof(CPUARMState, vfp.qc),
rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
}
void gen_usqadd_bhs(TCGv_i64 res, TCGv_i64 qc,
TCGv_i64 a, TCGv_i64 b, MemOp esz)
{
TCGv_i64 max = tcg_constant_i64(MAKE_64BIT_MASK(0, 8 << esz));
TCGv_i64 zero = tcg_constant_i64(0);
TCGv_i64 tmp = tcg_temp_new_i64();
tcg_gen_add_i64(tmp, a, b);
tcg_gen_smin_i64(res, tmp, max);
tcg_gen_smax_i64(res, res, zero);
tcg_gen_xor_i64(tmp, tmp, res);
tcg_gen_or_i64(qc, qc, tmp);
}
void gen_usqadd_d(TCGv_i64 res, TCGv_i64 qc, TCGv_i64 a, TCGv_i64 b)
{
TCGv_i64 tmp = tcg_temp_new_i64();
TCGv_i64 tneg = tcg_temp_new_i64();
TCGv_i64 tpos = tcg_temp_new_i64();
TCGv_i64 max = tcg_constant_i64(UINT64_MAX);
TCGv_i64 zero = tcg_constant_i64(0);
tcg_gen_add_i64(tmp, a, b);
/* If @b is positive, saturate if (a + b) < a, aka unsigned overflow. */
tcg_gen_movcond_i64(TCG_COND_LTU, tpos, tmp, a, max, tmp);
/* If @b is negative, saturate if a < -b, i.e. the subtraction would be negative. */
tcg_gen_neg_i64(tneg, b);
tcg_gen_movcond_i64(TCG_COND_LTU, tneg, a, tneg, zero, tmp);
/* Select correct result from sign of @b. */
tcg_gen_movcond_i64(TCG_COND_LT, res, b, zero, tneg, tpos);
tcg_gen_xor_i64(tmp, tmp, res);
tcg_gen_or_i64(qc, qc, tmp);
}
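
A standalone scalar model of the movcond selection in gen_usqadd_d (illustrative names, not QEMU API):

#include <stdbool.h>
#include <stdint.h>

static uint64_t usqadd64(uint64_t a, int64_t b, bool *sat)
{
    if (b >= 0) {
        uint64_t r = a + (uint64_t)b;    /* the tpos path */
        *sat = r < a;                    /* unsigned overflow */
        return *sat ? UINT64_MAX : r;
    }
    uint64_t nb = -(uint64_t)b;          /* |b| without signed overflow */
    *sat = a < nb;                       /* the tneg path: result below zero */
    return *sat ? 0 : a - nb;
}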
static void gen_usqadd_vec(unsigned vece, TCGv_vec t, TCGv_vec qc,
TCGv_vec a, TCGv_vec b)
{
TCGv_vec u = tcg_temp_new_vec_matching(t);
TCGv_vec z = tcg_constant_vec_matching(t, vece, 0);
/* Compute unsigned saturation of add for +b and sub for -b. */
tcg_gen_neg_vec(vece, t, b);
tcg_gen_usadd_vec(vece, u, a, b);
tcg_gen_ussub_vec(vece, t, a, t);
/* Select the correct result depending on the sign of b. */
tcg_gen_cmpsel_vec(TCG_COND_LT, vece, t, b, z, t, u);
/* Compute QC by comparing against the non-saturated result. */
tcg_gen_add_vec(vece, u, a, b);
tcg_gen_xor_vec(vece, u, u, t);
tcg_gen_or_vec(vece, qc, qc, u);
}
void gen_gvec_usqadd_qc(unsigned vece, uint32_t rd_ofs,
uint32_t rn_ofs, uint32_t rm_ofs,
uint32_t opr_sz, uint32_t max_sz)
{
static const TCGOpcode vecop_list[] = {
INDEX_op_neg_vec, INDEX_op_add_vec,
INDEX_op_usadd_vec, INDEX_op_ussub_vec,
INDEX_op_cmpsel_vec, 0
};
static const GVecGen4 ops[4] = {
{ .fniv = gen_usqadd_vec,
.fno = gen_helper_gvec_usqadd_b,
.opt_opc = vecop_list,
.write_aofs = true,
.vece = MO_8 },
{ .fniv = gen_usqadd_vec,
.fno = gen_helper_gvec_usqadd_h,
.opt_opc = vecop_list,
.write_aofs = true,
.vece = MO_16 },
{ .fniv = gen_usqadd_vec,
.fno = gen_helper_gvec_usqadd_s,
.opt_opc = vecop_list,
.write_aofs = true,
.vece = MO_32 },
{ .fniv = gen_usqadd_vec,
.fni8 = gen_usqadd_d,
.fno = gen_helper_gvec_usqadd_d,
.opt_opc = vecop_list,
.write_aofs = true,
.vece = MO_64 },
};
tcg_debug_assert(opr_sz <= sizeof_field(CPUARMState, vfp.qc));
tcg_gen_gvec_4(rd_ofs, offsetof(CPUARMState, vfp.qc),
rn_ofs, rm_ofs, opr_sz, max_sz, &ops[vece]);
}


@@ -102,37 +102,12 @@ VCGE_U_3s 1111 001 1 0 . .. .... .... 0011 . . . 1 .... @3same
VSHL_S_3s 1111 001 0 0 . .. .... .... 0100 . . . 0 .... @3same_rev
VSHL_U_3s 1111 001 1 0 . .. .... .... 0100 . . . 0 .... @3same_rev
# Insns operating on 64-bit elements (size!=0b11 handled elsewhere)
# The _rev suffix indicates that Vn and Vm are reversed (as explained
# by the comment for the @3same_rev format).
@3same_64_rev .... ... . . . 11 .... .... .... . q:1 . . .... \
&3same vm=%vn_dp vn=%vm_dp vd=%vd_dp size=3
{
VQSHL_S64_3s 1111 001 0 0 . .. .... .... 0100 . . . 1 .... @3same_64_rev
VQSHL_S_3s 1111 001 0 0 . .. .... .... 0100 . . . 1 .... @3same_rev
}
{
VQSHL_U64_3s 1111 001 1 0 . .. .... .... 0100 . . . 1 .... @3same_64_rev
VQSHL_U_3s 1111 001 1 0 . .. .... .... 0100 . . . 1 .... @3same_rev
}
{
VRSHL_S64_3s 1111 001 0 0 . .. .... .... 0101 . . . 0 .... @3same_64_rev
VRSHL_S_3s 1111 001 0 0 . .. .... .... 0101 . . . 0 .... @3same_rev
}
{
VRSHL_U64_3s 1111 001 1 0 . .. .... .... 0101 . . . 0 .... @3same_64_rev
VRSHL_U_3s 1111 001 1 0 . .. .... .... 0101 . . . 0 .... @3same_rev
}
{
VQRSHL_S64_3s 1111 001 0 0 . .. .... .... 0101 . . . 1 .... @3same_64_rev
VQRSHL_S_3s 1111 001 0 0 . .. .... .... 0101 . . . 1 .... @3same_rev
}
{
VQRSHL_U64_3s 1111 001 1 0 . .. .... .... 0101 . . . 1 .... @3same_64_rev
VQRSHL_U_3s 1111 001 1 0 . .. .... .... 0101 . . . 1 .... @3same_rev
}
VQSHL_S_3s 1111 001 0 0 . .. .... .... 0100 . . . 1 .... @3same_rev
VQSHL_U_3s 1111 001 1 0 . .. .... .... 0100 . . . 1 .... @3same_rev
VRSHL_S_3s 1111 001 0 0 . .. .... .... 0101 . . . 0 .... @3same_rev
VRSHL_U_3s 1111 001 1 0 . .. .... .... 0101 . . . 0 .... @3same_rev
VQRSHL_S_3s 1111 001 0 0 . .. .... .... 0101 . . . 1 .... @3same_rev
VQRSHL_U_3s 1111 001 1 0 . .. .... .... 0101 . . . 1 .... @3same_rev
VMAX_S_3s 1111 001 0 0 . .. .... .... 0110 . . . 0 .... @3same
VMAX_U_3s 1111 001 1 0 . .. .... .... 0110 . . . 0 .... @3same


@@ -6,10 +6,11 @@
*
* This code is licensed under the GNU GPL v2.
*/
#include "qemu/osdep.h"
#include "qemu/osdep.h"
#include "cpu.h"
#include "exec/helper-proto.h"
#include "tcg/tcg-gvec-desc.h"
#include "fpu/softfloat.h"
#include "vec_internal.h"
@@ -117,6 +118,29 @@ NEON_VOP_BODY(vtype, n)
uint32_t HELPER(glue(neon_,name))(CPUARMState *env, uint32_t arg1, uint32_t arg2) \
NEON_VOP_BODY(vtype, n)
#define NEON_GVEC_VOP2(name, vtype) \
void HELPER(name)(void *vd, void *vn, void *vm, uint32_t desc) \
{ \
intptr_t i, opr_sz = simd_oprsz(desc); \
vtype *d = vd, *n = vn, *m = vm; \
for (i = 0; i < opr_sz / sizeof(vtype); i++) { \
NEON_FN(d[i], n[i], m[i]); \
} \
clear_tail(d, opr_sz, simd_maxsz(desc)); \
}
#define NEON_GVEC_VOP2_ENV(name, vtype) \
void HELPER(name)(void *vd, void *vn, void *vm, void *venv, uint32_t desc) \
{ \
intptr_t i, opr_sz = simd_oprsz(desc); \
vtype *d = vd, *n = vn, *m = vm; \
CPUARMState *env = venv; \
for (i = 0; i < opr_sz / sizeof(vtype); i++) { \
NEON_FN(d[i], n[i], m[i]); \
} \
clear_tail(d, opr_sz, simd_maxsz(desc)); \
}
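For orientation, instantiating NEON_GVEC_VOP2(gvec_srshl_b, int8_t) with the NEON_FN definition used later in this file expands, modulo HELPER() name mangling, to roughly:

void helper_gvec_srshl_b(void *vd, void *vn, void *vm, uint32_t desc)
{
    intptr_t i, opr_sz = simd_oprsz(desc);
    int8_t *d = vd, *n = vn, *m = vm;

    for (i = 0; i < opr_sz / sizeof(int8_t); i++) {
        /* NEON_FN: signed rounding shift, per element */
        d[i] = do_sqrshl_bhs(n[i], (int8_t)m[i], 8, true, NULL);
    }
    clear_tail(d, opr_sz, simd_maxsz(desc));
}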
/* Pairwise operations. */
/* For 32-bit elements each segment only contains a single element, so
the elementwise and pairwise operations are the same. */
@@ -155,414 +179,6 @@ uint32_t HELPER(glue(neon_,name))(uint32_t arg) \
return arg; \
}
#define NEON_USAT(dest, src1, src2, type) do { \
uint32_t tmp = (uint32_t)src1 + (uint32_t)src2; \
if (tmp != (type)tmp) { \
SET_QC(); \
dest = ~0; \
} else { \
dest = tmp; \
}} while(0)
#define NEON_FN(dest, src1, src2) NEON_USAT(dest, src1, src2, uint8_t)
NEON_VOP_ENV(qadd_u8, neon_u8, 4)
#undef NEON_FN
#define NEON_FN(dest, src1, src2) NEON_USAT(dest, src1, src2, uint16_t)
NEON_VOP_ENV(qadd_u16, neon_u16, 2)
#undef NEON_FN
#undef NEON_USAT
uint32_t HELPER(neon_qadd_u32)(CPUARMState *env, uint32_t a, uint32_t b)
{
uint32_t res = a + b;
if (res < a) {
SET_QC();
res = ~0;
}
return res;
}
uint64_t HELPER(neon_qadd_u64)(CPUARMState *env, uint64_t src1, uint64_t src2)
{
uint64_t res;
res = src1 + src2;
if (res < src1) {
SET_QC();
res = ~(uint64_t)0;
}
return res;
}
#define NEON_SSAT(dest, src1, src2, type) do { \
int32_t tmp = (uint32_t)src1 + (uint32_t)src2; \
if (tmp != (type)tmp) { \
SET_QC(); \
if (src2 > 0) { \
tmp = (1 << (sizeof(type) * 8 - 1)) - 1; \
} else { \
tmp = 1 << (sizeof(type) * 8 - 1); \
} \
} \
dest = tmp; \
} while(0)
#define NEON_FN(dest, src1, src2) NEON_SSAT(dest, src1, src2, int8_t)
NEON_VOP_ENV(qadd_s8, neon_s8, 4)
#undef NEON_FN
#define NEON_FN(dest, src1, src2) NEON_SSAT(dest, src1, src2, int16_t)
NEON_VOP_ENV(qadd_s16, neon_s16, 2)
#undef NEON_FN
#undef NEON_SSAT
uint32_t HELPER(neon_qadd_s32)(CPUARMState *env, uint32_t a, uint32_t b)
{
uint32_t res = a + b;
if (((res ^ a) & SIGNBIT) && !((a ^ b) & SIGNBIT)) {
SET_QC();
res = ~(((int32_t)a >> 31) ^ SIGNBIT);
}
return res;
}
uint64_t HELPER(neon_qadd_s64)(CPUARMState *env, uint64_t src1, uint64_t src2)
{
uint64_t res;
res = src1 + src2;
if (((res ^ src1) & SIGNBIT64) && !((src1 ^ src2) & SIGNBIT64)) {
SET_QC();
res = ((int64_t)src1 >> 63) ^ ~SIGNBIT64;
}
return res;
}
/* Unsigned saturating accumulate of signed value
*
* Op1/Rn is treated as signed
* Op2/Rd is treated as unsigned
*
* Explicit casting is used to ensure the correct sign extension of
* inputs. The result is treated as an unsigned value and saturated as such.
*
* We use a macro for the 8/16 bit cases which expects signed integers of va,
* vb, and vr for interim calculation and an unsigned 32 bit result value r.
*/
#define USATACC(bits, shift) \
do { \
va = sextract32(a, shift, bits); \
vb = extract32(b, shift, bits); \
vr = va + vb; \
if (vr > UINT##bits##_MAX) { \
SET_QC(); \
vr = UINT##bits##_MAX; \
} else if (vr < 0) { \
SET_QC(); \
vr = 0; \
} \
r = deposit32(r, shift, bits, vr); \
} while (0)
uint32_t HELPER(neon_uqadd_s8)(CPUARMState *env, uint32_t a, uint32_t b)
{
int16_t va, vb, vr;
uint32_t r = 0;
USATACC(8, 0);
USATACC(8, 8);
USATACC(8, 16);
USATACC(8, 24);
return r;
}
uint32_t HELPER(neon_uqadd_s16)(CPUARMState *env, uint32_t a, uint32_t b)
{
int32_t va, vb, vr;
uint64_t r = 0;
USATACC(16, 0);
USATACC(16, 16);
return r;
}
#undef USATACC
uint32_t HELPER(neon_uqadd_s32)(CPUARMState *env, uint32_t a, uint32_t b)
{
int64_t va = (int32_t)a;
int64_t vb = (uint32_t)b;
int64_t vr = va + vb;
if (vr > UINT32_MAX) {
SET_QC();
vr = UINT32_MAX;
} else if (vr < 0) {
SET_QC();
vr = 0;
}
return vr;
}
uint64_t HELPER(neon_uqadd_s64)(CPUARMState *env, uint64_t a, uint64_t b)
{
uint64_t res;
res = a + b;
/* We only need to look at the pattern of SIGN bits to detect
* +ve/-ve saturation
*/
if (~a & b & ~res & SIGNBIT64) {
SET_QC();
res = UINT64_MAX;
} else if (a & ~b & res & SIGNBIT64) {
SET_QC();
res = 0;
}
return res;
}
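The sign-bit pattern tests in the deleted helper above can be validated exhaustively at a narrower width; a throwaway 8-bit check (illustrative only; @a is the signed operand, @b the unsigned one):

#include <assert.h>
#include <stdint.h>

int main(void)
{
    for (int ai = 0; ai < 256; ai++) {
        for (int bi = 0; bi < 256; bi++) {
            uint8_t a = ai, b = bi, res = a + b;
            int ref = (int8_t)a + bi;   /* signed @a plus unsigned @b */
            uint8_t want = ref < 0 ? 0 : ref > 255 ? 255 : ref;

            if (~a & b & ~res & 0x80) {
                res = 0xff;             /* positive saturation */
            } else if (a & ~b & res & 0x80) {
                res = 0;                /* negative saturation */
            }
            assert(res == want);
        }
    }
    return 0;
}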
/* Signed saturating accumulate of unsigned value
*
* Op1/Rn is treated as unsigned
* Op2/Rd is treated as signed
*
* The result is treated as a signed value and saturated as such
*
* We use a macro for the 8/16 bit cases which expects signed integers of va,
* vb, and vr for interim calculation and an unsigned 32 bit result value r.
*/
#define SSATACC(bits, shift) \
do { \
va = extract32(a, shift, bits); \
vb = sextract32(b, shift, bits); \
vr = va + vb; \
if (vr > INT##bits##_MAX) { \
SET_QC(); \
vr = INT##bits##_MAX; \
} else if (vr < INT##bits##_MIN) { \
SET_QC(); \
vr = INT##bits##_MIN; \
} \
r = deposit32(r, shift, bits, vr); \
} while (0)
uint32_t HELPER(neon_sqadd_u8)(CPUARMState *env, uint32_t a, uint32_t b)
{
int16_t va, vb, vr;
uint32_t r = 0;
SSATACC(8, 0);
SSATACC(8, 8);
SSATACC(8, 16);
SSATACC(8, 24);
return r;
}
uint32_t HELPER(neon_sqadd_u16)(CPUARMState *env, uint32_t a, uint32_t b)
{
int32_t va, vb, vr;
uint32_t r = 0;
SSATACC(16, 0);
SSATACC(16, 16);
return r;
}
#undef SSATACC
uint32_t HELPER(neon_sqadd_u32)(CPUARMState *env, uint32_t a, uint32_t b)
{
int64_t res;
int64_t op1 = (uint32_t)a;
int64_t op2 = (int32_t)b;
res = op1 + op2;
if (res > INT32_MAX) {
SET_QC();
res = INT32_MAX;
} else if (res < INT32_MIN) {
SET_QC();
res = INT32_MIN;
}
return res;
}
uint64_t HELPER(neon_sqadd_u64)(CPUARMState *env, uint64_t a, uint64_t b)
{
uint64_t res;
res = a + b;
/* We only need to look at the pattern of SIGN bits to detect an overflow */
if (((a & res)
| (~b & res)
| (a & ~b)) & SIGNBIT64) {
SET_QC();
res = INT64_MAX;
}
return res;
}
#define NEON_USAT(dest, src1, src2, type) do { \
uint32_t tmp = (uint32_t)src1 - (uint32_t)src2; \
if (tmp != (type)tmp) { \
SET_QC(); \
dest = 0; \
} else { \
dest = tmp; \
}} while(0)
#define NEON_FN(dest, src1, src2) NEON_USAT(dest, src1, src2, uint8_t)
NEON_VOP_ENV(qsub_u8, neon_u8, 4)
#undef NEON_FN
#define NEON_FN(dest, src1, src2) NEON_USAT(dest, src1, src2, uint16_t)
NEON_VOP_ENV(qsub_u16, neon_u16, 2)
#undef NEON_FN
#undef NEON_USAT
uint32_t HELPER(neon_qsub_u32)(CPUARMState *env, uint32_t a, uint32_t b)
{
uint32_t res = a - b;
if (res > a) {
SET_QC();
res = 0;
}
return res;
}
uint64_t HELPER(neon_qsub_u64)(CPUARMState *env, uint64_t src1, uint64_t src2)
{
uint64_t res;
if (src1 < src2) {
SET_QC();
res = 0;
} else {
res = src1 - src2;
}
return res;
}
#define NEON_SSAT(dest, src1, src2, type) do { \
int32_t tmp = (uint32_t)src1 - (uint32_t)src2; \
if (tmp != (type)tmp) { \
SET_QC(); \
if (src2 < 0) { \
tmp = (1 << (sizeof(type) * 8 - 1)) - 1; \
} else { \
tmp = 1 << (sizeof(type) * 8 - 1); \
} \
} \
dest = tmp; \
} while(0)
#define NEON_FN(dest, src1, src2) NEON_SSAT(dest, src1, src2, int8_t)
NEON_VOP_ENV(qsub_s8, neon_s8, 4)
#undef NEON_FN
#define NEON_FN(dest, src1, src2) NEON_SSAT(dest, src1, src2, int16_t)
NEON_VOP_ENV(qsub_s16, neon_s16, 2)
#undef NEON_FN
#undef NEON_SSAT
uint32_t HELPER(neon_qsub_s32)(CPUARMState *env, uint32_t a, uint32_t b)
{
uint32_t res = a - b;
if (((res ^ a) & SIGNBIT) && ((a ^ b) & SIGNBIT)) {
SET_QC();
res = ~(((int32_t)a >> 31) ^ SIGNBIT);
}
return res;
}
uint64_t HELPER(neon_qsub_s64)(CPUARMState *env, uint64_t src1, uint64_t src2)
{
uint64_t res;
res = src1 - src2;
if (((res ^ src1) & SIGNBIT64) && ((src1 ^ src2) & SIGNBIT64)) {
SET_QC();
res = ((int64_t)src1 >> 63) ^ ~SIGNBIT64;
}
return res;
}
#define NEON_FN(dest, src1, src2) dest = (src1 + src2) >> 1
NEON_VOP(hadd_s8, neon_s8, 4)
NEON_VOP(hadd_u8, neon_u8, 4)
NEON_VOP(hadd_s16, neon_s16, 2)
NEON_VOP(hadd_u16, neon_u16, 2)
#undef NEON_FN
int32_t HELPER(neon_hadd_s32)(int32_t src1, int32_t src2)
{
int32_t dest;
dest = (src1 >> 1) + (src2 >> 1);
if (src1 & src2 & 1)
dest++;
return dest;
}
uint32_t HELPER(neon_hadd_u32)(uint32_t src1, uint32_t src2)
{
uint32_t dest;
dest = (src1 >> 1) + (src2 >> 1);
if (src1 & src2 & 1)
dest++;
return dest;
}
#define NEON_FN(dest, src1, src2) dest = (src1 + src2 + 1) >> 1
NEON_VOP(rhadd_s8, neon_s8, 4)
NEON_VOP(rhadd_u8, neon_u8, 4)
NEON_VOP(rhadd_s16, neon_s16, 2)
NEON_VOP(rhadd_u16, neon_u16, 2)
#undef NEON_FN
int32_t HELPER(neon_rhadd_s32)(int32_t src1, int32_t src2)
{
int32_t dest;
dest = (src1 >> 1) + (src2 >> 1);
if ((src1 | src2) & 1)
dest++;
return dest;
}
uint32_t HELPER(neon_rhadd_u32)(uint32_t src1, uint32_t src2)
{
uint32_t dest;
dest = (src1 >> 1) + (src2 >> 1);
if ((src1 | src2) & 1)
dest++;
return dest;
}
#define NEON_FN(dest, src1, src2) dest = (src1 - src2) >> 1
NEON_VOP(hsub_s8, neon_s8, 4)
NEON_VOP(hsub_u8, neon_u8, 4)
NEON_VOP(hsub_s16, neon_s16, 2)
NEON_VOP(hsub_u16, neon_u16, 2)
#undef NEON_FN
int32_t HELPER(neon_hsub_s32)(int32_t src1, int32_t src2)
{
int32_t dest;
dest = (src1 >> 1) - (src2 >> 1);
if ((~src1) & src2 & 1)
dest--;
return dest;
}
uint32_t HELPER(neon_hsub_u32)(uint32_t src1, uint32_t src2)
{
uint32_t dest;
dest = (src1 >> 1) - (src2 >> 1);
if ((~src1) & src2 & 1)
dest--;
return dest;
}
#define NEON_FN(dest, src1, src2) dest = (src1 < src2) ? src1 : src2
NEON_POP(pmin_s8, neon_s8, 4)
NEON_POP(pmin_u8, neon_u8, 4)
@@ -590,11 +206,23 @@ NEON_VOP(shl_s16, neon_s16, 2)
#define NEON_FN(dest, src1, src2) \
(dest = do_sqrshl_bhs(src1, (int8_t)src2, 8, true, NULL))
NEON_VOP(rshl_s8, neon_s8, 4)
NEON_GVEC_VOP2(gvec_srshl_b, int8_t)
#undef NEON_FN
#define NEON_FN(dest, src1, src2) \
(dest = do_sqrshl_bhs(src1, (int8_t)src2, 16, true, NULL))
NEON_VOP(rshl_s16, neon_s16, 2)
NEON_GVEC_VOP2(gvec_srshl_h, int16_t)
#undef NEON_FN
#define NEON_FN(dest, src1, src2) \
(dest = do_sqrshl_bhs(src1, (int8_t)src2, 32, true, NULL))
NEON_GVEC_VOP2(gvec_srshl_s, int32_t)
#undef NEON_FN
#define NEON_FN(dest, src1, src2) \
(dest = do_sqrshl_d(src1, (int8_t)src2, true, NULL))
NEON_GVEC_VOP2(gvec_srshl_d, int64_t)
#undef NEON_FN
uint32_t HELPER(neon_rshl_s32)(uint32_t val, uint32_t shift)
@@ -610,11 +238,23 @@ uint64_t HELPER(neon_rshl_s64)(uint64_t val, uint64_t shift)
#define NEON_FN(dest, src1, src2) \
(dest = do_uqrshl_bhs(src1, (int8_t)src2, 8, true, NULL))
NEON_VOP(rshl_u8, neon_u8, 4)
NEON_GVEC_VOP2(gvec_urshl_b, uint8_t)
#undef NEON_FN
#define NEON_FN(dest, src1, src2) \
(dest = do_uqrshl_bhs(src1, (int8_t)src2, 16, true, NULL))
NEON_VOP(rshl_u16, neon_u16, 2)
NEON_GVEC_VOP2(gvec_urshl_h, uint16_t)
#undef NEON_FN
#define NEON_FN(dest, src1, src2) \
(dest = do_uqrshl_bhs(src1, (int8_t)src2, 32, true, NULL))
NEON_GVEC_VOP2(gvec_urshl_s, int32_t)
#undef NEON_FN
#define NEON_FN(dest, src1, src2) \
(dest = do_uqrshl_d(src1, (int8_t)src2, true, NULL))
NEON_GVEC_VOP2(gvec_urshl_d, int64_t)
#undef NEON_FN
uint32_t HELPER(neon_rshl_u32)(uint32_t val, uint32_t shift)
@@ -630,11 +270,23 @@ uint64_t HELPER(neon_rshl_u64)(uint64_t val, uint64_t shift)
#define NEON_FN(dest, src1, src2) \
(dest = do_uqrshl_bhs(src1, (int8_t)src2, 8, false, env->vfp.qc))
NEON_VOP_ENV(qshl_u8, neon_u8, 4)
NEON_GVEC_VOP2_ENV(neon_uqshl_b, uint8_t)
#undef NEON_FN
#define NEON_FN(dest, src1, src2) \
(dest = do_uqrshl_bhs(src1, (int8_t)src2, 16, false, env->vfp.qc))
NEON_VOP_ENV(qshl_u16, neon_u16, 2)
NEON_GVEC_VOP2_ENV(neon_uqshl_h, uint16_t)
#undef NEON_FN
#define NEON_FN(dest, src1, src2) \
(dest = do_uqrshl_bhs(src1, (int8_t)src2, 32, false, env->vfp.qc))
NEON_GVEC_VOP2_ENV(neon_uqshl_s, uint32_t)
#undef NEON_FN
#define NEON_FN(dest, src1, src2) \
(dest = do_uqrshl_d(src1, (int8_t)src2, false, env->vfp.qc))
NEON_GVEC_VOP2_ENV(neon_uqshl_d, uint64_t)
#undef NEON_FN
uint32_t HELPER(neon_qshl_u32)(CPUARMState *env, uint32_t val, uint32_t shift)
@@ -650,11 +302,23 @@ uint64_t HELPER(neon_qshl_u64)(CPUARMState *env, uint64_t val, uint64_t shift)
#define NEON_FN(dest, src1, src2) \
(dest = do_sqrshl_bhs(src1, (int8_t)src2, 8, false, env->vfp.qc))
NEON_VOP_ENV(qshl_s8, neon_s8, 4)
NEON_GVEC_VOP2_ENV(neon_sqshl_b, int8_t)
#undef NEON_FN
#define NEON_FN(dest, src1, src2) \
(dest = do_sqrshl_bhs(src1, (int8_t)src2, 16, false, env->vfp.qc))
NEON_VOP_ENV(qshl_s16, neon_s16, 2)
NEON_GVEC_VOP2_ENV(neon_sqshl_h, int16_t)
#undef NEON_FN
#define NEON_FN(dest, src1, src2) \
(dest = do_sqrshl_bhs(src1, (int8_t)src2, 32, false, env->vfp.qc))
NEON_GVEC_VOP2_ENV(neon_sqshl_s, int32_t)
#undef NEON_FN
#define NEON_FN(dest, src1, src2) \
(dest = do_sqrshl_d(src1, (int8_t)src2, false, env->vfp.qc))
NEON_GVEC_VOP2_ENV(neon_sqshl_d, int64_t)
#undef NEON_FN
uint32_t HELPER(neon_qshl_s32)(CPUARMState *env, uint32_t val, uint32_t shift)
@@ -690,11 +354,23 @@ uint64_t HELPER(neon_qshlu_s64)(CPUARMState *env, uint64_t val, uint64_t shift)
#define NEON_FN(dest, src1, src2) \
(dest = do_uqrshl_bhs(src1, (int8_t)src2, 8, true, env->vfp.qc))
NEON_VOP_ENV(qrshl_u8, neon_u8, 4)
NEON_GVEC_VOP2_ENV(neon_uqrshl_b, uint8_t)
#undef NEON_FN
#define NEON_FN(dest, src1, src2) \
(dest = do_uqrshl_bhs(src1, (int8_t)src2, 16, true, env->vfp.qc))
NEON_VOP_ENV(qrshl_u16, neon_u16, 2)
NEON_GVEC_VOP2_ENV(neon_uqrshl_h, uint16_t)
#undef NEON_FN
#define NEON_FN(dest, src1, src2) \
(dest = do_uqrshl_bhs(src1, (int8_t)src2, 32, true, env->vfp.qc))
NEON_GVEC_VOP2_ENV(neon_uqrshl_s, uint32_t)
#undef NEON_FN
#define NEON_FN(dest, src1, src2) \
(dest = do_uqrshl_d(src1, (int8_t)src2, true, env->vfp.qc))
NEON_GVEC_VOP2_ENV(neon_uqrshl_d, uint64_t)
#undef NEON_FN
uint32_t HELPER(neon_qrshl_u32)(CPUARMState *env, uint32_t val, uint32_t shift)
@@ -710,11 +386,23 @@ uint64_t HELPER(neon_qrshl_u64)(CPUARMState *env, uint64_t val, uint64_t shift)
#define NEON_FN(dest, src1, src2) \
(dest = do_sqrshl_bhs(src1, (int8_t)src2, 8, true, env->vfp.qc))
NEON_VOP_ENV(qrshl_s8, neon_s8, 4)
NEON_GVEC_VOP2_ENV(neon_sqrshl_b, int8_t)
#undef NEON_FN
#define NEON_FN(dest, src1, src2) \
(dest = do_sqrshl_bhs(src1, (int8_t)src2, 16, true, env->vfp.qc))
NEON_VOP_ENV(qrshl_s16, neon_s16, 2)
NEON_GVEC_VOP2_ENV(neon_sqrshl_h, int16_t)
#undef NEON_FN
#define NEON_FN(dest, src1, src2) \
(dest = do_sqrshl_bhs(src1, (int8_t)src2, 32, true, env->vfp.qc))
NEON_GVEC_VOP2_ENV(neon_sqrshl_s, int32_t)
#undef NEON_FN
#define NEON_FN(dest, src1, src2) \
(dest = do_sqrshl_d(src1, (int8_t)src2, true, env->vfp.qc))
NEON_GVEC_VOP2_ENV(neon_sqrshl_d, int64_t)
#undef NEON_FN
uint32_t HELPER(neon_qrshl_s32)(CPUARMState *env, uint32_t val, uint32_t shift)


@@ -409,6 +409,60 @@ void HELPER(wfi)(CPUARMState *env, uint32_t insn_len)
#endif
}
void HELPER(wfit)(CPUARMState *env, uint64_t timeout)
{
#ifdef CONFIG_USER_ONLY
/*
* WFI in the user-mode emulator is technically permitted but not
* something any real-world code would do. AArch64 Linux kernels
* trap it via SCTLR_EL1.nTWI and make it an (expensive) NOP;
* AArch32 kernels don't trap it, so it will delay a bit.
* For QEMU, make it a NOP here, because trying to raise EXCP_HLT
* would trigger an abort.
*/
return;
#else
ARMCPU *cpu = env_archcpu(env);
CPUState *cs = env_cpu(env);
int target_el = check_wfx_trap(env, false);
/* The WFIT should time out when CNTVCT_EL0 >= the specified value. */
uint64_t cntval = gt_get_countervalue(env);
uint64_t offset = gt_virt_cnt_offset(env);
uint64_t cntvct = cntval - offset;
uint64_t nexttick;
if (cpu_has_work(cs) || cntvct >= timeout) {
/*
* Don't bother to go into our "low power state" if
* we would just wake up immediately.
*/
return;
}
if (target_el) {
env->pc -= 4;
raise_exception(env, EXCP_UDEF, syn_wfx(1, 0xe, 0, false),
target_el);
}
if (uadd64_overflow(timeout, offset, &nexttick)) {
nexttick = UINT64_MAX;
}
if (nexttick > INT64_MAX / gt_cntfrq_period_ns(cpu)) {
/*
* If the timeout is too long for the signed 64-bit range
* of a QEMUTimer, let it expire early.
*/
timer_mod_ns(cpu->wfxt_timer, INT64_MAX);
} else {
timer_mod(cpu->wfxt_timer, nexttick);
}
cs->exception_index = EXCP_HLT;
cs->halted = 1;
cpu_loop_exit(cs);
#endif
}
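The deadline arithmetic in HELPER(wfit) reduces to the counter-offset identity below (a restatement of the code above, not additional semantics):

/*
 * cntvct = cntval - offset                       guest-visible virtual count
 * wake when  cntvct >= timeout
 *        <=> cntval >= timeout + offset == nexttick  (clamped at UINT64_MAX)
 * host deadline in ns = nexttick * gt_cntfrq_period_ns(cpu),
 * capped at INT64_MAX for QEMUTimer.
 */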
void HELPER(wfe)(CPUARMState *env)
{
/* This is a hint instruction that is semantically different

File diff suppressed because it is too large


@@ -198,6 +198,20 @@ void gen_gvec_eor3(unsigned vece, uint32_t d, uint32_t n, uint32_t m,
void gen_gvec_bcax(unsigned vece, uint32_t d, uint32_t n, uint32_t m,
uint32_t a, uint32_t oprsz, uint32_t maxsz);
void gen_suqadd_bhs(TCGv_i64 res, TCGv_i64 qc,
TCGv_i64 a, TCGv_i64 b, MemOp esz);
void gen_suqadd_d(TCGv_i64 res, TCGv_i64 qc, TCGv_i64 a, TCGv_i64 b);
void gen_gvec_suqadd_qc(unsigned vece, uint32_t rd_ofs,
uint32_t rn_ofs, uint32_t rm_ofs,
uint32_t opr_sz, uint32_t max_sz);
void gen_usqadd_bhs(TCGv_i64 res, TCGv_i64 qc,
TCGv_i64 a, TCGv_i64 b, MemOp esz);
void gen_usqadd_d(TCGv_i64 res, TCGv_i64 qc, TCGv_i64 a, TCGv_i64 b);
void gen_gvec_usqadd_qc(unsigned vece, uint32_t rd_ofs,
uint32_t rn_ofs, uint32_t rm_ofs,
uint32_t opr_sz, uint32_t max_sz);
void gen_sve_ldr(DisasContext *s, TCGv_ptr, int vofs, int len, int rn, int imm);
void gen_sve_str(DisasContext *s, TCGv_ptr, int vofs, int len, int rn, int imm);


@@ -794,6 +794,12 @@ DO_3SAME(VQADD_S, gen_gvec_sqadd_qc)
DO_3SAME(VQADD_U, gen_gvec_uqadd_qc)
DO_3SAME(VQSUB_S, gen_gvec_sqsub_qc)
DO_3SAME(VQSUB_U, gen_gvec_uqsub_qc)
DO_3SAME(VRSHL_S, gen_gvec_srshl)
DO_3SAME(VRSHL_U, gen_gvec_urshl)
DO_3SAME(VQSHL_S, gen_neon_sqshl)
DO_3SAME(VQSHL_U, gen_neon_uqshl)
DO_3SAME(VQRSHL_S, gen_neon_sqrshl)
DO_3SAME(VQRSHL_U, gen_neon_uqrshl)
/* These insns are all gvec_bitsel but with the inputs in various orders. */
#define DO_3SAME_BITSEL(INSN, O1, O2, O3) \
@@ -835,6 +841,12 @@ DO_3SAME_NO_SZ_3(VPMAX_S, gen_gvec_smaxp)
DO_3SAME_NO_SZ_3(VPMIN_S, gen_gvec_sminp)
DO_3SAME_NO_SZ_3(VPMAX_U, gen_gvec_umaxp)
DO_3SAME_NO_SZ_3(VPMIN_U, gen_gvec_uminp)
DO_3SAME_NO_SZ_3(VHADD_S, gen_gvec_shadd)
DO_3SAME_NO_SZ_3(VHADD_U, gen_gvec_uhadd)
DO_3SAME_NO_SZ_3(VHSUB_S, gen_gvec_shsub)
DO_3SAME_NO_SZ_3(VHSUB_U, gen_gvec_uhsub)
DO_3SAME_NO_SZ_3(VRHADD_S, gen_gvec_srhadd)
DO_3SAME_NO_SZ_3(VRHADD_U, gen_gvec_urhadd)
#define DO_3SAME_CMP(INSN, COND) \
static void gen_##INSN##_3s(unsigned vece, uint32_t rd_ofs, \
@@ -912,51 +924,6 @@ DO_SHA2(SHA256H, gen_helper_crypto_sha256h)
DO_SHA2(SHA256H2, gen_helper_crypto_sha256h2)
DO_SHA2(SHA256SU1, gen_helper_crypto_sha256su1)
#define DO_3SAME_64(INSN, FUNC) \
static void gen_##INSN##_3s(unsigned vece, uint32_t rd_ofs, \
uint32_t rn_ofs, uint32_t rm_ofs, \
uint32_t oprsz, uint32_t maxsz) \
{ \
static const GVecGen3 op = { .fni8 = FUNC }; \
tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, oprsz, maxsz, &op); \
} \
DO_3SAME(INSN, gen_##INSN##_3s)
#define DO_3SAME_64_ENV(INSN, FUNC) \
static void gen_##INSN##_elt(TCGv_i64 d, TCGv_i64 n, TCGv_i64 m) \
{ \
FUNC(d, tcg_env, n, m); \
} \
DO_3SAME_64(INSN, gen_##INSN##_elt)
DO_3SAME_64(VRSHL_S64, gen_helper_neon_rshl_s64)
DO_3SAME_64(VRSHL_U64, gen_helper_neon_rshl_u64)
DO_3SAME_64_ENV(VQSHL_S64, gen_helper_neon_qshl_s64)
DO_3SAME_64_ENV(VQSHL_U64, gen_helper_neon_qshl_u64)
DO_3SAME_64_ENV(VQRSHL_S64, gen_helper_neon_qrshl_s64)
DO_3SAME_64_ENV(VQRSHL_U64, gen_helper_neon_qrshl_u64)
#define DO_3SAME_32(INSN, FUNC) \
static void gen_##INSN##_3s(unsigned vece, uint32_t rd_ofs, \
uint32_t rn_ofs, uint32_t rm_ofs, \
uint32_t oprsz, uint32_t maxsz) \
{ \
static const GVecGen3 ops[4] = { \
{ .fni4 = gen_helper_neon_##FUNC##8 }, \
{ .fni4 = gen_helper_neon_##FUNC##16 }, \
{ .fni4 = gen_helper_neon_##FUNC##32 }, \
{ 0 }, \
}; \
tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, oprsz, maxsz, &ops[vece]); \
} \
static bool trans_##INSN##_3s(DisasContext *s, arg_3same *a) \
{ \
if (a->size > 2) { \
return false; \
} \
return do_3same(s, a, gen_##INSN##_3s); \
}
/*
* Some helper functions need to be passed the tcg_env. In order
* to use those with the gvec APIs like tcg_gen_gvec_3() we need
@@ -969,67 +936,12 @@ DO_3SAME_64_ENV(VQRSHL_U64, gen_helper_neon_qrshl_u64)
FUNC(d, tcg_env, n, m); \
}
#define DO_3SAME_32_ENV(INSN, FUNC) \
WRAP_ENV_FN(gen_##INSN##_tramp8, gen_helper_neon_##FUNC##8); \
WRAP_ENV_FN(gen_##INSN##_tramp16, gen_helper_neon_##FUNC##16); \
WRAP_ENV_FN(gen_##INSN##_tramp32, gen_helper_neon_##FUNC##32); \
static void gen_##INSN##_3s(unsigned vece, uint32_t rd_ofs, \
uint32_t rn_ofs, uint32_t rm_ofs, \
uint32_t oprsz, uint32_t maxsz) \
{ \
static const GVecGen3 ops[4] = { \
{ .fni4 = gen_##INSN##_tramp8 }, \
{ .fni4 = gen_##INSN##_tramp16 }, \
{ .fni4 = gen_##INSN##_tramp32 }, \
{ 0 }, \
}; \
tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, oprsz, maxsz, &ops[vece]); \
} \
static bool trans_##INSN##_3s(DisasContext *s, arg_3same *a) \
{ \
if (a->size > 2) { \
return false; \
} \
return do_3same(s, a, gen_##INSN##_3s); \
}
DO_3SAME_32(VHADD_S, hadd_s)
DO_3SAME_32(VHADD_U, hadd_u)
DO_3SAME_32(VHSUB_S, hsub_s)
DO_3SAME_32(VHSUB_U, hsub_u)
DO_3SAME_32(VRHADD_S, rhadd_s)
DO_3SAME_32(VRHADD_U, rhadd_u)
DO_3SAME_32(VRSHL_S, rshl_s)
DO_3SAME_32(VRSHL_U, rshl_u)
DO_3SAME_32_ENV(VQSHL_S, qshl_s)
DO_3SAME_32_ENV(VQSHL_U, qshl_u)
DO_3SAME_32_ENV(VQRSHL_S, qrshl_s)
DO_3SAME_32_ENV(VQRSHL_U, qrshl_u)
#define DO_3SAME_VQDMULH(INSN, FUNC) \
WRAP_ENV_FN(gen_##INSN##_tramp16, gen_helper_neon_##FUNC##_s16); \
WRAP_ENV_FN(gen_##INSN##_tramp32, gen_helper_neon_##FUNC##_s32); \
static void gen_##INSN##_3s(unsigned vece, uint32_t rd_ofs, \
uint32_t rn_ofs, uint32_t rm_ofs, \
uint32_t oprsz, uint32_t maxsz) \
{ \
static const GVecGen3 ops[2] = { \
{ .fni4 = gen_##INSN##_tramp16 }, \
{ .fni4 = gen_##INSN##_tramp32 }, \
}; \
tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, oprsz, maxsz, &ops[vece - 1]); \
} \
static bool trans_##INSN##_3s(DisasContext *s, arg_3same *a) \
{ \
if (a->size != 1 && a->size != 2) { \
return false; \
} \
return do_3same(s, a, gen_##INSN##_3s); \
}
{ return a->size >= 1 && a->size <= 2 && do_3same(s, a, FUNC); }
DO_3SAME_VQDMULH(VQDMULH, qdmulh)
DO_3SAME_VQDMULH(VQRDMULH, qrdmulh)
DO_3SAME_VQDMULH(VQDMULH, gen_gvec_sqdmulh_qc)
DO_3SAME_VQDMULH(VQRDMULH, gen_gvec_sqrdmulh_qc)
#define WRAP_FP_GVEC(WRAPNAME, FPST, FUNC) \
static void WRAPNAME(unsigned vece, uint32_t rd_ofs, \


@@ -459,6 +459,31 @@ void gen_gvec_sshl(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
void gen_gvec_ushl(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
void gen_gvec_srshl(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
void gen_gvec_urshl(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
void gen_neon_sqshl(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
void gen_neon_uqshl(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
void gen_neon_sqrshl(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
void gen_neon_uqrshl(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
void gen_gvec_shadd(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
void gen_gvec_uhadd(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
void gen_gvec_shsub(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
void gen_gvec_uhsub(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
void gen_gvec_srhadd(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
void gen_gvec_urhadd(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
void gen_cmtst_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b);
void gen_ushl_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b);
@@ -466,12 +491,27 @@ void gen_sshl_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b);
void gen_ushl_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b);
void gen_sshl_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b);
void gen_uqadd_bhs(TCGv_i64 res, TCGv_i64 qc,
TCGv_i64 a, TCGv_i64 b, MemOp esz);
void gen_uqadd_d(TCGv_i64 d, TCGv_i64 q, TCGv_i64 a, TCGv_i64 b);
void gen_gvec_uqadd_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
void gen_sqadd_bhs(TCGv_i64 res, TCGv_i64 qc,
TCGv_i64 a, TCGv_i64 b, MemOp esz);
void gen_sqadd_d(TCGv_i64 d, TCGv_i64 q, TCGv_i64 a, TCGv_i64 b);
void gen_gvec_sqadd_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
void gen_uqsub_bhs(TCGv_i64 res, TCGv_i64 qc,
TCGv_i64 a, TCGv_i64 b, MemOp esz);
void gen_uqsub_d(TCGv_i64 d, TCGv_i64 q, TCGv_i64 a, TCGv_i64 b);
void gen_gvec_uqsub_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
void gen_sqsub_bhs(TCGv_i64 res, TCGv_i64 qc,
TCGv_i64 a, TCGv_i64 b, MemOp esz);
void gen_sqsub_d(TCGv_i64 d, TCGv_i64 q, TCGv_i64 a, TCGv_i64 b);
void gen_gvec_sqsub_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
@@ -499,6 +539,10 @@ void gen_gvec_sri(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
void gen_gvec_sli(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
int64_t shift, uint32_t opr_sz, uint32_t max_sz);
void gen_gvec_sqdmulh_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
void gen_gvec_sqrdmulh_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
void gen_gvec_sqrdmlah_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
void gen_gvec_sqrdmlsh_qc(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,


@@ -311,6 +311,38 @@ void HELPER(neon_sqrdmulh_h)(void *vd, void *vn, void *vm,
clear_tail(d, opr_sz, simd_maxsz(desc));
}
void HELPER(neon_sqdmulh_idx_h)(void *vd, void *vn, void *vm,
void *vq, uint32_t desc)
{
intptr_t i, j, opr_sz = simd_oprsz(desc);
int idx = simd_data(desc);
int16_t *d = vd, *n = vn, *m = (int16_t *)vm + H2(idx);
for (i = 0; i < opr_sz / 2; i += 16 / 2) {
int16_t mm = m[i];
for (j = 0; j < 16 / 2; ++j) {
d[i + j] = do_sqrdmlah_h(n[i + j], mm, 0, false, false, vq);
}
}
clear_tail(d, opr_sz, simd_maxsz(desc));
}
void HELPER(neon_sqrdmulh_idx_h)(void *vd, void *vn, void *vm,
void *vq, uint32_t desc)
{
intptr_t i, j, opr_sz = simd_oprsz(desc);
int idx = simd_data(desc);
int16_t *d = vd, *n = vn, *m = (int16_t *)vm + H2(idx);
for (i = 0; i < opr_sz / 2; i += 16 / 2) {
int16_t mm = m[i];
for (j = 0; j < 16 / 2; ++j) {
d[i + j] = do_sqrdmlah_h(n[i + j], mm, 0, false, true, vq);
}
}
clear_tail(d, opr_sz, simd_maxsz(desc));
}
void HELPER(sve2_sqrdmlah_h)(void *vd, void *vn, void *vm,
void *va, uint32_t desc)
{
@@ -474,6 +506,38 @@ void HELPER(neon_sqrdmulh_s)(void *vd, void *vn, void *vm,
clear_tail(d, opr_sz, simd_maxsz(desc));
}
void HELPER(neon_sqdmulh_idx_s)(void *vd, void *vn, void *vm,
void *vq, uint32_t desc)
{
intptr_t i, j, opr_sz = simd_oprsz(desc);
int idx = simd_data(desc);
int32_t *d = vd, *n = vn, *m = (int32_t *)vm + H4(idx);
for (i = 0; i < opr_sz / 4; i += 16 / 4) {
int32_t mm = m[i];
for (j = 0; j < 16 / 4; ++j) {
d[i + j] = do_sqrdmlah_s(n[i + j], mm, 0, false, false, vq);
}
}
clear_tail(d, opr_sz, simd_maxsz(desc));
}
void HELPER(neon_sqrdmulh_idx_s)(void *vd, void *vn, void *vm,
void *vq, uint32_t desc)
{
intptr_t i, j, opr_sz = simd_oprsz(desc);
int idx = simd_data(desc);
int32_t *d = vd, *n = vn, *m = (int32_t *)vm + H4(idx);
for (i = 0; i < opr_sz / 4; i += 16 / 4) {
int32_t mm = m[i];
for (j = 0; j < 16 / 4; ++j) {
d[i + j] = do_sqrdmlah_s(n[i + j], mm, 0, false, true, vq);
}
}
clear_tail(d, opr_sz, simd_maxsz(desc));
}
void HELPER(sve2_sqrdmlah_s)(void *vd, void *vn, void *vm,
void *va, uint32_t desc)
{
@@ -1555,6 +1619,14 @@ DO_SAT(gvec_sqsub_b, int, int8_t, int8_t, -, INT8_MIN, INT8_MAX)
DO_SAT(gvec_sqsub_h, int, int16_t, int16_t, -, INT16_MIN, INT16_MAX)
DO_SAT(gvec_sqsub_s, int64_t, int32_t, int32_t, -, INT32_MIN, INT32_MAX)
DO_SAT(gvec_usqadd_b, int, uint8_t, int8_t, +, 0, UINT8_MAX)
DO_SAT(gvec_usqadd_h, int, uint16_t, int16_t, +, 0, UINT16_MAX)
DO_SAT(gvec_usqadd_s, int64_t, uint32_t, int32_t, +, 0, UINT32_MAX)
DO_SAT(gvec_suqadd_b, int, int8_t, uint8_t, +, INT8_MIN, INT8_MAX)
DO_SAT(gvec_suqadd_h, int, int16_t, uint16_t, +, INT16_MIN, INT16_MAX)
DO_SAT(gvec_suqadd_s, int64_t, int32_t, uint32_t, +, INT32_MIN, INT32_MAX)
#undef DO_SAT
void HELPER(gvec_uqadd_d)(void *vd, void *vq, void *vn,
@@ -1645,6 +1717,62 @@ void HELPER(gvec_sqsub_d)(void *vd, void *vq, void *vn,
clear_tail(d, oprsz, simd_maxsz(desc));
}
void HELPER(gvec_usqadd_d)(void *vd, void *vq, void *vn,
void *vm, uint32_t desc)
{
intptr_t i, oprsz = simd_oprsz(desc);
uint64_t *d = vd, *n = vn, *m = vm;
bool q = false;
for (i = 0; i < oprsz / 8; i++) {
uint64_t nn = n[i];
int64_t mm = m[i];
uint64_t dd = nn + mm;
if (mm < 0) {
if (nn < (uint64_t)-mm) {
dd = 0;
q = true;
}
} else {
if (dd < nn) {
dd = UINT64_MAX;
q = true;
}
}
d[i] = dd;
}
if (q) {
uint32_t *qc = vq;
qc[0] = 1;
}
clear_tail(d, oprsz, simd_maxsz(desc));
}
void HELPER(gvec_suqadd_d)(void *vd, void *vq, void *vn,
void *vm, uint32_t desc)
{
intptr_t i, oprsz = simd_oprsz(desc);
uint64_t *d = vd, *n = vn, *m = vm;
bool q = false;
for (i = 0; i < oprsz / 8; i++) {
int64_t nn = n[i];
uint64_t mm = m[i];
int64_t dd = nn + mm;
if (mm > (uint64_t)(INT64_MAX - nn)) {
dd = INT64_MAX;
q = true;
}
d[i] = dd;
}
if (q) {
uint32_t *qc = vq;
qc[0] = 1;
}
clear_tail(d, oprsz, simd_maxsz(desc));
}
#define DO_SRA(NAME, TYPE) \
void HELPER(NAME)(void *vd, void *vn, uint32_t desc) \


@@ -39,7 +39,7 @@ QEMU_BUILD_BUG_ON(TCG_PHYS_ADDR_BITS > TARGET_PHYS_ADDR_SPACE_BITS);
*/
void x86_cpu_do_interrupt(CPUState *cpu);
#ifndef CONFIG_USER_ONLY
void x86_cpu_exec_halt(CPUState *cpu);
bool x86_cpu_exec_halt(CPUState *cpu);
bool x86_need_replay_interrupt(int interrupt_request);
bool x86_cpu_exec_interrupt(CPUState *cpu, int int_req);
#endif


@@ -128,7 +128,7 @@ void x86_cpu_do_interrupt(CPUState *cs)
}
}
void x86_cpu_exec_halt(CPUState *cpu)
bool x86_cpu_exec_halt(CPUState *cpu)
{
if (cpu->interrupt_request & CPU_INTERRUPT_POLL) {
X86CPU *x86_cpu = X86_CPU(cpu);
@@ -138,6 +138,7 @@ void x86_cpu_exec_halt(CPUState *cpu)
cpu_reset_interrupt(cpu, CPU_INTERRUPT_POLL);
bql_unlock();
}
return cpu_has_work(cpu);
}
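The bool result lets the generic exec loop ask the target whether the halt should end; a hypothetical caller sketch (assumed shape only, not the actual QEMU loop):

if (cpu->halted) {
    if (!x86_cpu_exec_halt(cpu)) {
        return true;    /* no work pending: stay halted */
    }
    cpu->halted = 0;    /* interrupt delivered: resume execution */
}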
bool x86_need_replay_interrupt(int interrupt_request)


@@ -37,18 +37,18 @@ class Aarch64SbsarefMachine(QemuSystemTest):
Used components:
- Trusted Firmware 2.10.2
- Tianocore EDK2 stable202402
- Tianocore EDK2-platforms commit 085c2fb
- Trusted Firmware 2.11.0
- Tianocore EDK2 stable202405
- Tianocore EDK2-platforms commit 4bbd0ed
"""
# Secure BootRom (TF-A code)
fs0_xz_url = (
"https://artifacts.codelinaro.org/artifactory/linaro-419-sbsa-ref/"
"20240313-116475/edk2/SBSA_FLASH0.fd.xz"
"20240528-140808/edk2/SBSA_FLASH0.fd.xz"
)
fs0_xz_hash = "637593749cc307dea7dc13265c32e5d020267552f22b18a31850b8429fc5e159"
fs0_xz_hash = "fa6004900b67172914c908b78557fec4d36a5f784f4c3dd08f49adb75e1892a9"
tar_xz_path = self.fetch_asset(fs0_xz_url, asset_hash=fs0_xz_hash,
algorithm='sha256')
archive.extract(tar_xz_path, self.workdir)
@@ -57,9 +57,9 @@ class Aarch64SbsarefMachine(QemuSystemTest):
# Non-secure rom (UEFI and EFI variables)
fs1_xz_url = (
"https://artifacts.codelinaro.org/artifactory/linaro-419-sbsa-ref/"
"20240313-116475/edk2/SBSA_FLASH1.fd.xz"
"20240528-140808/edk2/SBSA_FLASH1.fd.xz"
)
fs1_xz_hash = "cb0a5e8cf5e303c5d3dc106cfd5943ffe9714b86afddee7164c69ee1dd41991c"
fs1_xz_hash = "5f3747d4000bc416d9641e33ff4ac60c3cc8cb74ca51b6e932e58531c62eb6f7"
tar_xz_path = self.fetch_asset(fs1_xz_url, asset_hash=fs1_xz_hash,
algorithm='sha256')
archive.extract(tar_xz_path, self.workdir)
@@ -98,15 +98,15 @@ class Aarch64SbsarefMachine(QemuSystemTest):
# AP Trusted ROM
wait_for_console_pattern(self, "Booting Trusted Firmware")
wait_for_console_pattern(self, "BL1: v2.10.2(release):")
wait_for_console_pattern(self, "BL1: v2.11.0(release):")
wait_for_console_pattern(self, "BL1: Booting BL2")
# Trusted Boot Firmware
wait_for_console_pattern(self, "BL2: v2.10.2(release)")
wait_for_console_pattern(self, "BL2: v2.11.0(release)")
wait_for_console_pattern(self, "Booting BL31")
# EL3 Runtime Software
wait_for_console_pattern(self, "BL31: v2.10.2(release)")
wait_for_console_pattern(self, "BL31: v2.11.0(release)")
# Non-trusted Firmware
wait_for_console_pattern(self, "UEFI firmware (version 1.0")