This change was made because, with the previous method of dumping audio, the mixer handled switching the RL samples emitted by the DSP to LR, and thus provided the proper channel orientation. Since we're now dumping directly from PushSamples() and PushStreamingSamples(), the right channel was being written to the left channel of the wave file and vice versa.
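As a rough sketch of what the dump path has to do now (the function below is hypothetical, not the actual AudioCommon code, and assumes 16-bit interleaved stereo samples):

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // The DSP emits interleaved samples as R, L, but a stereo wave file
    // expects L, R, so swap each pair when dumping straight from
    // PushSamples()/PushStreamingSamples().
    void DumpStereoSamples(const int16_t* dsp_samples, std::size_t num_frames,
                           std::vector<int16_t>* wave_data)
    {
      for (std::size_t i = 0; i < num_frames; ++i)
      {
        const int16_t right = dsp_samples[2 * i];     // right comes first from the DSP
        const int16_t left = dsp_samples[2 * i + 1];  // then left
        wave_data->push_back(left);                   // the wave file wants left first
        wave_data->push_back(right);
      }
    }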
Initialize now just takes the handle directly. Reinitialize is added because it is much more straightforward than doing the Shutdown/Initialize sequence manually.
No known games, not even Wii MMU games like Cars 2 and Toy Story 3, use custom
BATs. This makes BAT resolution a waste of time. BAT handling could be
optimized significantly, but as long as nothing uses it, it's easier to just
disable it.
Should significantly improve performance in MMU-heavy games.
This also fixes an issue where it would show the ARMv7 JIT recompiler on AArch64, as well as the now non-existent ARM JITIL.
Also fixes an issue where it would show the x86 JIT recompilers on non-x86 platforms.
Should be at least a bit better than the previous LRU approach. Currently
has two basic components: whether a register is dirty (dirty registers need
to be stored, so clobbering them hurts more) and how many other registers will
be used between now and the next time a register gets used.
Also don't pre-load values that don't need to be in registers.
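As a rough illustration of that kind of scoring (the names, weights, and data layout below are made up for the sketch, not taken from the actual register cache):

    #include <array>
    #include <cstddef>
    #include <limits>

    struct HostRegState
    {
      bool in_use = false;
      bool dirty = false;        // must be stored back before it can be clobbered
      std::size_t next_use = 0;  // instructions until the guest value is needed again
    };

    // Pick the host register that is cheapest to evict: prefer registers that
    // are clean and whose values will not be needed again for a long time.
    std::size_t PickVictim(const std::array<HostRegState, 32>& regs)
    {
      std::size_t best = 0;
      int best_score = std::numeric_limits<int>::min();
      for (std::size_t i = 0; i < regs.size(); ++i)
      {
        if (!regs[i].in_use)
          return i;  // a free register never costs anything
        // Higher score = better victim: far-away next use, not dirty.
        const int score = static_cast<int>(regs[i].next_use) - (regs[i].dirty ? 2 : 0);
        if (score > best_score)
        {
          best_score = score;
          best = i;
        }
      }
      return best;
    }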
apply() changes the in-memory instance of SharedPreferences and writes to disk asynchronously, whereas commit() writes synchronously. Since these writes happen on the UI thread, they should be asynchronous.
Previously it did the opposite of what it was supposed to; when checked, it'd
turn block linking on, and when unchecked, it'd turn it off.
Also update JITIL's block linking disabling in debug mode to match the behavior
of the regular JIT.
This was a subtle bug I introduced when I removed an LDR in one of the ComputeCarry functions.
Since I wasn't loading the XER value prior to the operations, doing a BIC tmp, tmp, 1 would clear the first bit in our temp register but retain the rest of the "random" data in it. That would then be saved into xer_ca, which the Interpreter later uses to generate the XER value without any masking. Our XER generation helper functions don't do any masking because they only expect a single bit's worth of data in xer_ca, with the rest being zero.
So now only one bit of data is stored in xer_ca by the ARMv7 JIT recompiler. There is also a slight optimization in the ComputeCarry function used on the immediate path: there wasn't any reason to load xer_ca there, since it now only contains one bit of data.
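A small C++ sketch of the failure mode (the helper below is hypothetical and the real code is ARM assembly in the JIT; the carry is assumed to sit at bit 29 of XER, as on PowerPC):

    #include <cstdint>

    // Mirrors the interpreter's expectation: xer_ca must be 0 or 1, because
    // nothing masks it when XER is assembled.
    uint32_t BuildXER(uint32_t xer_rest, uint32_t xer_ca)
    {
      return xer_rest | (xer_ca << 29);  // stray bits in xer_ca leak straight into XER
    }

    int main()
    {
      const uint32_t tmp = 0xDEADBEEF;    // temp register full of leftover data
      const uint32_t bad_ca = tmp & ~1u;  // "BIC tmp, tmp, #1": clears only bit 0
      const uint32_t good_ca = 0;         // what a cleared carry should actually be

      // The first call sets spurious XER bits; the second leaves XER clean.
      return BuildXER(0, bad_ca) != BuildXER(0, good_ca);
    }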
There's no need for the preprocessor checks for wx, since this is used in wx code. Also, this being a part of the InputCommon namespace is kind of wrong.
It only ever did anything on 32-bit OS X.
Anyway, it wasn't even on the right functions, and these days
ABI_PushRegistersAndAdjustStack should handle maintaining the ABI
correctly.
wxGetActiveWindow is implemented as "return NULL" on OS X, while
wxWindow::FindFocus works. On Windows, the difference is in the use of
GetActiveWindow() vs. GetForegroundWindow(). An MSDN comment says:
> A system has only one active window, which GetForegroundWindow()
> returns. GetActiveWindow() seems to return the same window as
> GetForegroundWindow() if the foreground window belongs to the current
> thread. Otherwise, it always returns null, rather than the topmost
> window of the calling thread.
Since we are on the GUI thread, it shouldn't make any difference.
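For illustration (a hypothetical call site, not a specific Dolphin one), picking a parent window for a message box ends up looking like this:

    #include <wx/wx.h>

    // Use the focused window as the parent instead of wxGetActiveWindow(),
    // which is stubbed out to return NULL on OS X.
    void ShowErrorBox(const wxString& message)
    {
      wxWindow* parent = wxWindow::FindFocus();
      wxMessageBox(message, "Error", wxOK | wxICON_ERROR, parent);
    }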
This noticeably includes GL_ARB_get_program_binary, which was previously
thought unsupported on OS X. Well, actually, the OS X implementation is
trivial and reports 0 binary formats (as of 10.10; this is hardcoded in
GLEngine, by the way), but at least it'll work if it's fixed someday.
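As a hedged illustration (not the actual shader-cache code), a caller still has to check how many binary formats the driver exposes before relying on the extension:

    #include <GL/glew.h>

    // Even when GL_ARB_get_program_binary is advertised, the driver may expose
    // zero binary formats; in that case a program-binary cache has nothing it
    // can store and should fall back gracefully.
    bool SupportsProgramBinaries()
    {
      GLint num_formats = 0;
      glGetIntegerv(GL_NUM_PROGRAM_BINARY_FORMATS, &num_formats);
      return num_formats > 0;  // OS X (as of 10.10) reports 0 here
    }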
It now affects the GPU determinism mode as well as some miscellaneous
things that were calling IsNetPlayRunning. Probably incomplete.
Notably, this can change while paused, if the user starts recording a
movie. The movie code appears to have been missing locking between
setting g_playMode and doing other things, which probably had a small
chance of causing crashes or even desynced movies; fix that with
PauseAndLock.
The next commit will add a hidden config variable to override GPU
determinism mode.
It's a relatively big commit (less big with -w), but it's hard to test
any of this separately...
The basic problem is that in netplay or movies, the state of the CPU
must be deterministic, including when the game receives notification
that the GPU has processed FIFO data. Dual core mode notifies the game
whenever the GPU thread actually gets around to doing the work, so it
isn't deterministic. Single core mode is deterministic because it notifies the game
'instantly' (after processing the data synchronously), but it's too slow
for many systems and games.
My old dc-netplay branch worked as follows: everything worked as normal
except the state of the CP registers was a lie, and the CPU thread only
delivered results when idle detection triggered (waiting for the GPU if
they weren't ready at that point). Usually, a game is idle iff all the
work for the frame has been done, except for a small amount of work
depending on the GPU result, so neither the CPU nor the GPU waiting on
the other affected performance much. However, it's possible that the
game could be waiting for some earlier interrupt, and any of several
games which, for whatever reason, never went into a detectable idle
(even when I tried to improve the detection) would never receive results
at all. (The current method should have better compatibility, but it
also has slightly higher overhead and breaks some other things, so I
want to reimplement this, hopefully with less impact on the code, in the
future.)
With this commit, the basic idea is that the CPU thread acts as if the
work has been done instantly, like single core mode, but actually hands
it off asynchronously to the GPU thread (after backing up some data that
the game might change in memory before it's actually done). Since the
work isn't done, any feedback from the GPU to the CPU, such as real
XFB/EFB copies (virtual are OK), EFB pokes, performance queries, etc. is
broken; but most games work with these options disabled, and there is no
need to try to detect what the CPU thread is doing.
Technically: when the flag g_use_deterministic_gpu_thread (currently
stuck on) is on, the CPU thread calls RunGpu like in single core mode.
This function synchronously copies the data from the FIFO to the
internal video buffer and updates the CP registers, interrupts, etc.
However, instead of the regular ReadDataFromFifo followed by running the
opcode decoder, it runs ReadDataFromFifoOnCPU ->
OpcodeDecoder_Preprocess, which relatively quickly scans through the
FIFO data, detects SetFinish calls etc., which are immediately fired,
and saves certain associated data from memory (e.g. display lists) in
AuxBuffers (a parallel stream to the main FIFO, which is a bit slow at
the moment), before handing the data off to the GPU thread to actually
render. That makes up the bulk of this commit.
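A heavily simplified sketch of that CPU-thread path (the function names come from the description above, but the signatures, globals, and stub bodies are placeholders rather than the real Fifo.cpp code):

    #include <cstdint>

    using u8 = std::uint8_t;
    using u32 = std::uint32_t;

    static bool g_use_deterministic_gpu_thread = true;  // currently stuck on

    // Placeholder stand-ins for the real functions referenced above.
    static u8* ReadDataFromFifo(u32 /*read_ptr*/) { return nullptr; }
    static u8* ReadDataFromFifoOnCPU(u32 /*read_ptr*/) { return nullptr; }
    static void OpcodeDecoder_Run(u8* /*end*/) {}
    static void OpcodeDecoder_Preprocess(u8* /*end*/) {}

    static void RunGpu(u32 fifo_read_ptr)
    {
      if (g_use_deterministic_gpu_thread)
      {
        // Copy the FIFO data into the internal video buffer and update the CP
        // registers/interrupts as if the GPU work had already been done...
        u8* write_ptr = ReadDataFromFifoOnCPU(fifo_read_ptr);
        // ...then quickly scan it: fire SetFinish and friends immediately and
        // stash display lists etc. into AuxBuffers so the GPU thread can render
        // the stream later even if the game changes that memory in the meantime.
        OpcodeDecoder_Preprocess(write_ptr);
      }
      else
      {
        // Single core: read and fully execute the opcodes synchronously.
        u8* write_ptr = ReadDataFromFifo(fifo_read_ptr);
        OpcodeDecoder_Run(write_ptr);
      }
    }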
In various circumstances, including the aforementioned EFB pokes and
performance queries as well as swap requests (i.e. the end of a frame -
we don't want the CPU potentially pumping out frames too quickly and the
GPU falling behind*), SyncGPU is called to wait for actual completion.
The overhead mainly comes from OpcodeDecoder_Preprocess (which is,
again, synchronous), as well as the actual copying.
Currently, display lists and such are escrowed from main memory even
though they usually won't change over the course of a frame, and
textures are not even though they might, resulting in a small chance of
graphical glitches. When the texture locking (i.e. fault on write) code
lands, I can make this all correct and maybe a little faster.
* This suggests an alternate determinism method of just delaying results
until a short time before the end of each frame. For all I know this
might mostly work - I haven't tried it - but if any significant work
hinges on the completion of render to texture etc., the frame will be
missed.
videoBuffer -> s_video_buffer
size -> s_video_buffer_write_ptr
g_pVideoData -> g_video_buffer_read_ptr (impl moved to Fifo.cpp)
This eradicates the wonderful use of 'size' as a global name, and makes
it clear that s_video_buffer_write_ptr and g_video_buffer_read_ptr are
the two ends of the FIFO buffer s_video_buffer.
Oh, and remove a useless namespace {}.
This state will be used to calculate sizes for skipping over commands on
a separate thread. An alternative to having these state variables would
be to have the preprocessor stash "state as we go" somewhere, but I
think that would be much uglier.
GetVertexSize now takes an extra argument to determine which state to
use, as does FifoCommandRunnable, which calls it. While I'm modifying
FifoCommandRunnable, I also change it to take a buffer and size as
parameters rather than using g_pVideoData, which will also be necessary
later. I also get rid of an unused overload.
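Illustratively (the names and fields below are guesses for the sketch, not the exact code), the duplicated state and the new signatures look roughly like this:

    #include <cstdint>

    using u8 = std::uint8_t;
    using u32 = std::uint32_t;

    struct CPState
    {
      u32 vertex_size[8] = {};  // per-VAT size, updated as CP commands are parsed
    };

    static CPState s_main_cp_state;        // used by the real opcode decoder
    static CPState s_preprocess_cp_state;  // used by the preprocessing pass

    // The extra argument selects which copy of the state to read.
    u32 GetVertexSize(u32 vat, bool preprocess)
    {
      const CPState& state = preprocess ? s_preprocess_cp_state : s_main_cp_state;
      return state.vertex_size[vat & 7];
    }

    // The runnability check now works on an explicit buffer and size instead
    // of g_pVideoData, so it can be run against the preprocessing stream too.
    bool FifoCommandRunnable(const u8* data, u32 available, bool preprocess);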
VertexLoader::VertexLoader was setting loop_counter, a *static*
variable, to 0. This was nonsensical, but harmless until I started to
run it on a separate thread, where it had a chance of interfering with a
running vertex translator.
Switch to just using a register for the loop counter.
- Lazily create the native vertex format (which involves GL calls) from
RunVertices rather than RefreshLoader itself, freeing the latter to be
run from the CPU thread (hopefully).
- In order to avoid useless allocations while doing so, store the native
format inside the VertexLoader rather than using a cache entry.
- Wrap the s_vertex_loader_map in a lock, for similar reasons.
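A minimal sketch of those three points together (the class and map shapes are guesses, not the actual VertexLoaderManager code):

    #include <cstdint>
    #include <map>
    #include <memory>
    #include <mutex>

    // Stand-in for the backend vertex format object; constructing the real one
    // involves GL calls, which is why it must not happen on the CPU thread.
    struct NativeVertexFormat {};

    struct VertexLoader
    {
      std::unique_ptr<NativeVertexFormat> native_format;  // stored in the loader, not a cache entry

      void RunVertices()
      {
        // Lazily create the format on first use, i.e. on the thread that
        // actually runs the vertex translation and is allowed to touch GL.
        if (!native_format)
          native_format = std::make_unique<NativeVertexFormat>();  // real code: GL calls here
        // ...translate and submit vertices using native_format...
      }
    };

    static std::map<std::uint64_t, std::unique_ptr<VertexLoader>> s_vertex_loader_map;
    static std::mutex s_vertex_loader_map_lock;

    // RefreshLoader only touches the map (no GL), so with the lock it can be
    // called from the CPU thread as well.
    static VertexLoader* RefreshLoader(std::uint64_t uid)
    {
      std::lock_guard<std::mutex> guard(s_vertex_loader_map_lock);
      std::unique_ptr<VertexLoader>& entry = s_vertex_loader_map[uid];
      if (!entry)
        entry = std::make_unique<VertexLoader>();
      return entry.get();
    }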