Someone thought it would be a good idea to have the location as the first argument on the instruction.
Changed it to match the order in which the instruction is actually disassembled.
Optimistically assume used GQRs are 0 in blocks that only use one GQR,
bailing at the start of the block and recompiling if that assumption fails.
Many games use almost entirely unquantized stores (e.g. Rebel Strike, Sonic
Colors), so this will likely be a big performance improvement for games
that make heavy use of paired singles.
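A minimal sketch of the idea, with hypothetical names (the real JIT emits the equivalent check as machine code at block entry):

    #include <cstdint>

    // Hypothetical sketch: a block compiled under the "GQR is 0" assumption
    // re-checks the register on entry; GQR == 0 means plain unquantized
    // float loads/stores, which is what the fast path was compiled for.
    bool GQRAssumptionHolds(const std::uint32_t* gqr, int used_gqr)
    {
      return gqr[used_gqr] == 0;
    }

    // At block entry (pseudocode):
    //   if (!GQRAssumptionHolds(ppc_state.gqr, used_gqr))
    //     BailOutAndRecompile();  // hypothetical helper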
Won't work with all games, but provides a nice way to spend extra CPU to make
a variable framerate game faster (e.g. Spyro or The Last Story), or to make
a game use less CPU at the cost of a lower framerate (e.g. Rogue Leader).
If we have a shift amount that is the full width of the source register, then we have an invalid instruction.
This can happen when dealing with a couple of PowerPC instructions.
This same adjustment is already in the ARMv7 emitter.
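A minimal sketch of the guard, assuming 32-bit operands (the real emitter works on its own register types):

    #include <cstdint>

    // PowerPC can produce a shift amount equal to the register width, but
    // AArch64 cannot encode it (and C++ leaves it undefined), so special-
    // case it: shifting every bit out always yields zero.
    std::uint32_t ShiftLeftGuarded(std::uint32_t value, unsigned amount)
    {
      constexpr unsigned kBits = 32;
      if (amount >= kBits)
        return 0;  // would otherwise be an invalid AArch64 encoding
      return value << amount;
    }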
Fixes issues with negative offsets in loadstore instructions.
Adds ADRP/ADR instructions.
Optimizes the MOVI2R function to take advantage of ADRP on pointers, which can reduce a three-instruction sequence to a single instruction (see the sketch below).
Adds GPR push/pop operations for ABI-related things.
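Roughly how the ADRP fast path can be decided (a sketch, not the actual MOVI2R code): ADRP encodes a signed 21-bit delta in 4 KiB pages, so any pointer within ±4 GiB of the code can be materialized with one ADRP (plus an ADD if the low 12 bits are non-zero) instead of a MOVZ/MOVK chain.

    #include <cstdint>

    // Sketch: can `target` be reached from `pc` with a single ADRP?
    bool CanUseADRP(std::uint64_t pc, std::uint64_t target)
    {
      // ADRP's immediate is a signed 21-bit count of 4 KiB pages.
      const std::int64_t page_delta =
          static_cast<std::int64_t>(target >> 12) -
          static_cast<std::int64_t>(pc >> 12);
      return page_delta >= -(1 << 20) && page_delta < (1 << 20);
    }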
The reason this didn't break is that bitwise instructions like VPAND,
VANDPS, and VANDPD do the exact same thing. The only difference is the
data type they are intended for.
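For illustration, the intrinsic forms of the three instructions, all computing the same 128-bit AND:

    #include <emmintrin.h>

    // PAND/ANDPS/ANDPD differ only in the data type they are "for" (which
    // can affect execution domain latency, not results); the bits that
    // come out are identical.
    __m128i AndInteger(__m128i a, __m128i b) { return _mm_and_si128(a, b); }  // (V)PAND
    __m128  AndSingle(__m128 a, __m128 b)    { return _mm_and_ps(a, b); }     // (V)ANDPS
    __m128d AndDouble(__m128d a, __m128d b)  { return _mm_and_pd(a, b); }     // (V)ANDPD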
The builtin byteswap routines cause critical failure on AArch64 when built with the Android toolchain.
I didn't experience this issue when building for Linux in a local qemu
chroot, so it seems to be specific to the Android toolchain when
targeting AArch64.
Use our generic version instead.
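The generic version is along these lines (a portable sketch, not necessarily Dolphin's exact code), sidestepping the builtin entirely:

    #include <cstdint>

    // Portable 32-bit byteswap; compilers typically recognize this pattern
    // and emit a single REV/BSWAP anyway.
    inline std::uint32_t GenericSwap32(std::uint32_t value)
    {
      return ((value & 0xFF000000u) >> 24) |
             ((value & 0x00FF0000u) >> 8)  |
             ((value & 0x0000FF00u) << 8)  |
             ((value & 0x000000FFu) << 24);
    }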
If the inputs are both float singles, and the top half is known to be identical
to the bottom half, we can use packed arithmetic instead of scalar to skip
the movddup.
This is slower on a few rather old CPUs, as well as on Atom/Silvermont,
so detect Atom and disable it in that case.
Also avoid PPC_FP on stores if we know that the output came from a float op.
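In intrinsic form, the movddup-skipping idea above looks roughly like this (illustrative; the JIT emits the equivalent instructions):

    #include <pmmintrin.h>

    // Scalar path: add in the low lane, then movddup to keep both halves
    // identical (paired singles keep the same value in ps0 and ps1).
    __m128d AddScalarThenDup(__m128d a, __m128d b)
    {
      return _mm_movedup_pd(_mm_add_sd(a, b));
    }

    // Packed path: if both lanes of each input are already identical, the
    // packed add produces the duplicated result directly, one op fewer.
    __m128d AddPacked(__m128d a, __m128d b)
    {
      return _mm_add_pd(a, b);
    }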
Move the JITed function/basic-block registration logic out of the CPU
subsystem in order to add JIT registration to JITed DSP and
Video/VertexLoader code.
This is necessary in order to add /tmp/perf-$pid.map support to other
JITed code, as they all need to write to the same file.
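The shared file follows perf's simple "start size name" line format; a minimal sketch of a common registration path (names hypothetical):

    #include <cstddef>
    #include <cstdint>
    #include <cstdio>
    #include <unistd.h>

    // Append one symbol line to /tmp/perf-$pid.map so `perf` can resolve
    // JITed code addresses. All JITs (CPU, DSP, VertexLoader) go through
    // the same file handle.
    void RegisterJITCode(const void* start, std::size_t size, const char* name)
    {
      static FILE* s_map_file = [] {
        char path[64];
        std::snprintf(path, sizeof(path), "/tmp/perf-%d.map", getpid());
        return std::fopen(path, "w");
      }();
      if (s_map_file)
      {
        std::fprintf(s_map_file, "%llx %zx %s\n",
                     (unsigned long long)(std::uintptr_t)start, size, name);
        std::fflush(s_map_file);
      }
    }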
When we cleaned up the code to calculate the shm_position and total_mem
in one step, we sometimes skipped over certain views because they were
Wii-only. When computing the total memory, we'd read it off the last
view, whether or not that view had been skipped. Since the Wii-only
views come last, the last view's shm_position was still 0 when it was
skipped, causing us to map a 0-sized field. Fix this by explicitly
returning the total size from MemoryMap_InitializeViews.
Additionally, the shm_position was being calculated incorrectly because
it was adding up the shm_position *before* the mirror, rather than after
it. Fix this by adopting a scheme similar to what we had before.
The code to calculate the offsets into the SHM file wasn't properly
respecting the skip flags, causing it to calculate offsets beyond
the end of the SHM file.
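A sketch of the fixed layout pass, with simplified types (the real function also handles mirrors and the output pointers):

    #include <cstddef>
    #include <vector>

    struct View
    {
      std::size_t size = 0;
      std::size_t shm_position = 0;
      bool wii_only = false;
    };

    // Lay out each view's offset into the SHM file, skipping Wii-only
    // views on GameCube, and return the running total explicitly instead
    // of reading it back off the (possibly skipped) last view.
    std::size_t InitializeViews(std::vector<View>& views, bool wii)
    {
      std::size_t shm_position = 0;
      for (View& view : views)
      {
        if (view.wii_only && !wii)
          continue;  // skipped views must not advance or record the offset
        view.shm_position = shm_position;
        shm_position += view.size;
      }
      return shm_position;  // total size of the SHM file
    }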
This code was ported from out_ptr, which was a double-pointer, and
wanted to double-check that the proper arena was actually allocated.
When I ported it to store the pointer directly in the view regardless
of whether out_ptr was non-NULL, I got confused here and instead made
the code free the arena only if the first byte was non-zero.
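Illustratively (hypothetical names), the difference between the two checks:

    #include <cstdint>

    // Bug: with the pointer stored directly, the leftover dereference
    // tests the first mapped byte, not whether the mapping exists.
    void ReleaseViewBuggy(std::uint8_t* view)
    {
      if (*view)  // frees only when the first byte happens to be non-zero
        delete[] view;
    }

    // Fix: test the pointer itself.
    void ReleaseViewFixed(std::uint8_t* view)
    {
      if (view)
        delete[] view;
    }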
This code originally tried to map the "low space" for the Gamecube's
memory layout, but has since expanded to map all of the easily
mappable memory on the system. Change the name to "GrabSHMSegment" to
indicate that we're looking for a shared memory segment we can map into
our process space.
These are effectively unused, since the memmap already maps them in one
place. For 32-bit, they might have some slight advantage, but we already
special-case the regular "high-mem" pointer for 32-bit, so just use the
one we already have...
This is a higher level, more concise wrapper for bitsets which supports
efficiently counting and iterating over set bits. It's similar to
std::bitset, but the latter does not support efficient iteration (and at
least in libc++, the count algorithm is subpar, not that it really
matters). The converted uses include both bitsets and, notably,
considerably less efficient regular arrays (for in/out registers in
PPCAnalyst).
Unfortunately, this may slightly pessimize unoptimized builds.
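The core trick is iterating only over the set bits; a minimal sketch of the technique (GCC/Clang builtins assumed, not the actual class):

    #include <cstdint>

    // Visit each set bit from lowest to highest. Cost is proportional to
    // the popcount, unlike scanning every index as std::bitset requires.
    template <typename Func>
    void ForEachSetBit(std::uint32_t bits, Func&& f)
    {
      while (bits != 0)
      {
        f(__builtin_ctz(bits));  // index of the lowest set bit
        bits &= bits - 1;        // clear it
      }
    }

    // Usage: ForEachSetBit(0b1010u, [](int i) { /* i = 1, then 3 */ });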
We weren't dropping a newline character from the string; we were cutting off the last character of the hardware name.
This fixes my TK1 being called 'lagun' when its name is 'laguna'.
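The fix amounts to trimming conditionally instead of unconditionally (a sketch):

    #include <string>

    // Drop a trailing newline if present; never chop an arbitrary last
    // character ('laguna' stays 'laguna').
    void StripTrailingNewline(std::string& s)
    {
      if (!s.empty() && s.back() == '\n')
        s.pop_back();
    }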
GCC has optimized this using the exact same code since 4.7 or 4.8.
Android builds fall back to the __linux__ route.
No need to keep these around anymore since we aren't building on an old GCC version.
Before this commit, the two were reversed ("cpu_string" had the brand, e.g. "AuthenticAMD"; and "brand_string" had the CPU type, e.g. "AMD Phenom II X4 925").