Let's reset m_last_used for each register that will be used
in an instruction before we start allocating any of them,
so that one of the earlier allocations doesn't spill a
register that we want in a later allocation. (We must still
also increment/reset m_last_used in R and RW, otherwise we
end up in trouble when emulating lmw/stmw since those access
more guest registers than there are available host registers.)
This should ensure that the asserts added earlier in this
pull request are never triggered.
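As a rough illustration of the scheme (hypothetical names and structure, not Dolphin's actual Arm64 register cache), the idea is to zero the last-used counters of every guest register the instruction will touch before the first allocation, while the per-access increment/reset still keeps the counters honest within long instructions like lmw/stmw:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

struct GuestRegInfo
{
  uint32_t m_last_used = 0;  // how long ago this guest register was last touched
};

class RegCacheSketch
{
public:
  // Called once per PPC instruction, before any host register is allocated,
  // so that the spill heuristic never evicts a register the same
  // instruction is about to use.
  void ResetLastUsedForInstruction(const std::vector<size_t>& regs_used)
  {
    for (size_t guest_reg : regs_used)
      m_guest[guest_reg].m_last_used = 0;
  }

  // R/RW-style accesses still increment/reset the counters as well;
  // lmw/stmw touch more guest registers than there are host registers,
  // so the counters must keep being refreshed while such an instruction
  // is being emulated.
  void OnAccess(size_t guest_reg)
  {
    for (GuestRegInfo& reg : m_guest)
      reg.m_last_used++;
    m_guest[guest_reg].m_last_used = 0;
  }

  // Spill victim: the guest register that has gone unused the longest.
  size_t PickSpillVictim() const
  {
    size_t victim = 0;
    for (size_t i = 1; i < m_guest.size(); ++i)
    {
      if (m_guest[i].m_last_used > m_guest[victim].m_last_used)
        victim = i;
    }
    return victim;
  }

private:
  std::vector<GuestRegInfo> m_guest = std::vector<GuestRegInfo>(32);
};
```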
If register pressure is high when allocating registers,
Arm64FPRCache may spill a guest register which we are going
to allocate later during the current instruction. Spilling
has the side effect of turning the register into double
precision, which will have bad consequences if we are
assuming that it is single precision, so let's add some
asserts to detect if that ever happens.
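A minimal sketch of the kind of check being added (hypothetical names, not the actual Arm64FPRCache code): any path that assumes a cached guest FPR is still single precision asserts on its recorded precision, so a spill that silently converted it to double precision is caught immediately:

```cpp
#include <cassert>

// Precision states a cached guest FPR can be in (simplified).
enum class FprType
{
  Single,
  Double,
};

struct CachedFpr
{
  FprType type = FprType::Double;
  int host_reg = -1;
};

// Hypothetical accessor used by code paths that assume single precision.
// If a spill during this instruction's register allocation has already
// converted the value to double precision, fail loudly instead of
// silently reading it with the wrong precision.
inline int GetAssumingSingle(const CachedFpr& fpr)
{
  assert(fpr.type == FprType::Single &&
         "guest FPR was converted to double precision by a spill");
  return fpr.host_reg;
}
```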
PR #2663 added a Jit64 implementation of dcbX and a fast path to skip JIT cache invalidation. Unfortunately, there is a mismatch between address spaces in this optimization: it tests the effective address (with the top 3 bits cleared) against the valid block bitset, which is intended to be indexed by physical address. While this works in the common case, it fails (for example) when the effective address is in the 7E... region (a.k.a. "fake VMEM"). It may also fall apart under more complex memory mapping scenarios requiring full MMU emulation.
The good news is that even without this fast path, the underlying call to JitInterface::InvalidateICache() still does very little work in the common case. It correctly translates the effective address to a physical address which it tests against the valid block bitset, skipping invalidation if it is not necessary. As such, the cost of removing the fast path should not be too high.
The Jit64 implementation is retained, though all it does now is emit a call. This is marginally more efficient than a simple interpreter fallback, which would involve an extra call. The JitArm64 implementation has also been fixed.
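A simplified sketch of the mismatch and of the correct check (the names, sizes, and translation stub below are made up for illustration; only JitInterface::InvalidateICache is the real function being described):

```cpp
#include <bitset>
#include <cstdint>
#include <optional>

// Valid-block bitset indexed by *physical* address, one bit per 32-byte
// line of the first 32 MiB of physical memory (sizes made up for this sketch).
static std::bitset<(32 * 1024 * 1024) / 32> s_valid_blocks;

static bool IsBlockValid(uint32_t physical_address)
{
  const uint32_t index = physical_address / 32;
  return index < s_valid_blocks.size() && s_valid_blocks[index];
}

// Placeholder for the MMU's effective-to-physical translation; the real
// thing consults BATs/page tables. Returning nullopt just means "not
// mapped" for the purposes of this sketch.
static std::optional<uint32_t> TranslateEffectiveToPhysical(uint32_t effective_address)
{
  (void)effective_address;
  return std::nullopt;
}

// What the removed fast path effectively did: clear the top 3 bits of the
// effective address and use the result as if it were a physical address.
// For an effective address like 0x7E001234 this indexes the wrong bit.
static bool FastPathCheck(uint32_t effective_address)
{
  return IsBlockValid(effective_address & 0x1FFFFFFF);
}

// What JitInterface::InvalidateICache does conceptually: translate first,
// then consult the bitset, and skip the invalidation work entirely when
// no compiled block covers the physical address.
static void InvalidateICacheSketch(uint32_t effective_address)
{
  const std::optional<uint32_t> physical = TranslateEffectiveToPhysical(effective_address);
  if (!physical || !IsBlockValid(*physical))
    return;  // common case: nothing to invalidate

  // ... invalidate the affected JIT blocks here ...
}
```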
The game Happy Feet is fixed by this change, as it loads code in the 7E... address region and depends upon JIT cache invalidation in response to dcbf.
https://bugs.dolphin-emu.org/issues/12133
At least on some CPUs (I found out about this from the
Arm Cortex-A76 Software Optimization Guide), using X30
with BLR is one cycle slower than using another register.
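For reference, this is purely a register-choice change: BLR accepts any general-purpose register as its target, and only the Rn field of the encoding differs. X30 still receives the return address either way. The helper below (illustrative, not Dolphin's emitter) just shows the two encodings:

```cpp
#include <cstdint>
#include <cstdio>

// Encode A64 "BLR Xn": base opcode 0xD63F0000 with the target register
// index in bits [9:5].
constexpr uint32_t EncodeBLR(unsigned reg_index)
{
  return 0xD63F0000u | ((reg_index & 0x1Fu) << 5);
}

int main()
{
  std::printf("BLR X30: %08X\n", EncodeBLR(30));  // 0xD63F03C0 (one cycle slower on the A76)
  std::printf("BLR X8:  %08X\n", EncodeBLR(8));   // 0xD63F0100 (any other register avoids the penalty)
  return 0;
}
```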