Let's reset m_last_used for each register that will be used
in an instruction before we start allocating any of them,
so that an earlier allocation doesn't spill a register that
we want in a later allocation. (We must still also
increment/reset m_last_used in R and RW, since otherwise we
end up in trouble when emulating lmw/stmw, which access
more guest registers than there are available host registers.)
This should ensure that the asserts added earlier in this
pull request are never triggered.
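A rough sketch of the idea, using made-up names rather than the real
register cache interface (assume a spill picks the register with the
stalest m_last_used counter):

#include <cstddef>
#include <cstdint>
#include <vector>

struct FakeRegCache
{
  // One counter per guest register; the register with the stalest counter
  // is the one that gets spilled when we run out of host registers.
  std::vector<std::uint32_t> m_last_used;

  void ResetLastUsed(std::size_t guest_reg) { m_last_used[guest_reg] = 0; }
  void Allocate(std::size_t guest_reg) { /* may spill the stalest register */ }

  void AllocateForInstruction(const std::vector<std::size_t>& regs)
  {
    // First pass: mark every register the instruction needs as freshly
    // used, so none of them look like spill candidates.
    for (std::size_t reg : regs)
      ResetLastUsed(reg);
    // Second pass: allocate. Earlier allocations can now only spill
    // registers the instruction doesn't care about (lmw/stmw excepted,
    // as noted above).
    for (std::size_t reg : regs)
      Allocate(reg);
  }
};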
If register pressure is high when allocating registers,
Arm64FPRCache may spill a guest register that we are going to
allocate later during the current instruction, and spilling has
the side effect of turning it into double precision. This will
have bad consequences if we are assuming that it is single
precision, so let's add some asserts to detect if that ever happens.
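The asserts amount to something like this (made-up names, not the actual
Arm64FPRCache code):

#include <cassert>
#include <cstddef>
#include <vector>

enum class Precision { Single, Double };

struct FakeFPRCache
{
  // Which precision each guest FPR is currently stored in; a spill
  // flushes the register to memory as double precision.
  std::vector<Precision> m_precision;

  void AssertStillSingle(std::size_t guest_reg) const
  {
    // Fires if an earlier allocation in the same instruction spilled this
    // register and thereby converted it to double behind our back.
    assert(m_precision[guest_reg] == Precision::Single &&
           "guest FPR was turned into double precision by a spill");
  }
};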
Maybe a little wonky, but I needed a way to run unit tests
on AArch64, and this was the best way of running them on
Android that I could come up with.
For certain occurrences of nandx/norx, we declare a ReadWrite constraint
on the destination register, even though the old value of the destination
register is irrelevant. This false dependency would force the RegCache
to generate a redundant MOV when the destination register wasn't already
assigned to a host register: in each of the examples below, the first MOV
exists only to load the stale destination value.
Example 1:
BF 00 00 00 00 mov edi,0
8B FE mov edi,esi
F7 D7 not edi
Example 2:
8B 7D 80 mov edi,dword ptr [rbp-80h]
8B FE mov edi,esi
F7 D7 not edi
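The fix boils down to only declaring the dependency when it is real.
A sketch with made-up names (the real constraint API differs):

enum class Constraint { Read, Write, ReadWrite };

// nandx/norx compute rA = ~(rS op rB). The old value of rA only matters
// when rA aliases one of the sources; otherwise a plain Write constraint
// suffices, and the RegCache no longer emits the first MOV shown in the
// examples above.
Constraint DestConstraint(int a, int s, int b)
{
  const bool old_value_needed = (a == s || a == b);
  return old_value_needed ? Constraint::ReadWrite : Constraint::Write;
}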
FinalizeCarryOverflow didn't maintain XER[OV/SO] properly due to an
oversight. Here's the code it would generate:
0: 9c pushf
1: 80 65 3b fe and BYTE PTR [rbp+0x3b],0xfe
5: 71 04 jno b <jno>
7: c6 45 3b 03 mov BYTE PTR [rbp+0x3b],0x3
000000000000000b <jno>:
b: 9d popf
At first glance it seems reasonable. The host flags are carefully
preserved with PUSHF. The AND instruction clears XER[OV]. Next, a
conditional branch checks the host's overflow flag and, if needed, skips
over a MOV that sets XER[OV/SO]. Finally, the host flags are restored
with POPF.
However, the AND instruction also clears the host's overflow flag. As a
result, the branch that follows it is always taken and the MOV is always
skipped. The end result is that XER[OV] is always cleared while XER[SO]
is left unchanged.
Putting POPF immediately after the AND would fix this, but we already
have GenerateOverflow doing it correctly (and without the PUSHF/POPF
shenanigans too). So let's just use that instead.
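For reference, the trick is simply to test the host overflow flag before
anything clobbers it. Roughly, in Dolphin-style emitter calls (names are
approximations, not a verbatim copy of GenerateOverflow):

FixupBranch no_overflow = J_CC(CC_NO);    // host OF is still valid here
MOV(8, PPCSTATE(xer_so_ov), Imm8(0x3));   // overflow: set XER[SO] and XER[OV]
FixupBranch done = J();
SetJumpTarget(no_overflow);
AND(8, PPCSTATE(xer_so_ov), Imm8(0xfe));  // no overflow: clear XER[OV], keep XER[SO]
SetJumpTarget(done);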