Change TMemCheck::Action to return whether to break rather than calling
PPCDebugInterface::BreakNow, as this simplifies the implementation; then
remove said method, as that was its only caller. One "interface" method
down, many to go...
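A minimal sketch of the resulting shape, assuming TMemCheck keeps
break-on-read/write style flags; the field names and the simplified
parameter list below are illustrative, not the real signature:

    // Illustrative only: fields and parameters are assumptions, not
    // TMemCheck's actual layout.
    struct TMemCheck
    {
      bool break_on_read = false;
      bool break_on_write = false;
      bool do_break = true;

      // Reports whether the CPU should break; the caller acts on the
      // result instead of Action() calling PPCDebugInterface::BreakNow.
      bool Action(bool is_write) const
      {
        if ((is_write && break_on_write) || (!is_write && break_on_read))
          return do_break;
        return false;
      }
    };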
Without fastmem, the JIT code still does an inline check for RAM
addresses. With watchpoints we have to disable that too. (Hardware
watchpoints would avoid all the slowdown, but would be complicated to implement
and limited in number - I doubt most people debugging games care much if
they run slower.)
With this change and watchpoints enabled, Melee runs at no more than 40%
speed, despite running at full speed without them. Oh well. Better to
work slowly than to not bloody work.
Incidentally, I'm getting an unrelated crash in
PowerPC::HostIsRAMAddress when shutting down a game. This code sucks.
- Move JitState::memcheck to JitOptions because it's an option.
- Add JitOptions::fastmem; switch JIT code to checking that rather than
bFastmem directly.
- Add JitBase::UpdateMemoryOptions(), which sets both of these JIT options
(replacing the duplicate lines in Jit64 and JitIL that set memcheck
from bMMU); a sketch of the combined logic follows this list.
- (!) The ARM JITs both had some lines that checked js.memcheck
despite it being uninitialized in their cases. I've added
UpdateMemoryOptions to both. There is a chance this could make
something slower compared to the old behavior if the uninitialized
value happened to be nonzero... hdkr should check this.
- UpdateMemoryOptions forces jo.fastmem and jo.memcheck off and on,
respectively, if there are any watchpoints set.
- Also call that function from ClearCache.
- Have MemChecks call ClearCache when the {first,last} watchpoint is
{added,removed}.
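For illustration, here is the combined effect of the items above on the
two options; the helper below is a sketch under assumed names, not the
actual UpdateMemoryOptions code:

    // Sketch only: parameter and member names are assumptions.
    struct JitOptions
    {
      bool fastmem = false;
      bool memcheck = false;
    };

    void UpdateMemoryOptions(JitOptions& jo, bool bFastmem, bool bMMU,
                             bool any_watchpoints)
    {
      // fastmem (and the inline RAM-address check) would bypass
      // watchpoints entirely, so it is forced off while any exist;
      // memcheck is forced on for the same reason.
      jo.fastmem = bFastmem && !any_watchpoints;
      jo.memcheck = bMMU || any_watchpoints;
    }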
Enabling jo.memcheck (bah, confusing names) is currently pointless
because hitting a watchpoint does not interrupt the basic block. That
will change in the next commit.
On OS X, this broke Cmd-V to paste in the text boxes. Apparently wx
thinks it's a good idea for mnemonics (which are Alt-* on Windows) to be
Cmd-* on OS X, even when that disables standard shortcuts.
Lioncash suggested just getting rid of the accelerators on non-menu
controls, so I'm doing that rather than disabling them only on OS X.
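For context, a hedged illustration of where those accelerators come from
(the button and labels below are made up): an '&' in a wx control label
creates a mnemonic, which is Alt-* on Windows and, per the above, Cmd-*
on OS X.

    #include <wx/button.h>

    void MakeButtons(wxWindow* parent)
    {
      // '&' creates a mnemonic, which can shadow standard shortcuts
      // like Cmd-V on OS X.
      auto* with_mnemonic = new wxButton(parent, wxID_ANY, "&Refresh");

      // Dropping the '&' on non-menu controls avoids that entirely.
      auto* without_mnemonic = new wxButton(parent, wxID_ANY, "Refresh");

      (void)with_mnemonic;
      (void)without_mnemonic;
    }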
1) Apparently wxString::Format is type safe, and passing a u32 to it
with the format "%lu" crashes with a meaningless assertion failure.
Sure, it's the wrong type, but the error sure doesn't help...
2) "A MenuItem ID of Zero does not work under Mac". Thanks for the
helpful assert message, no thanks for making your construct have random
platform-specific differences for no reason (it's not like menu item IDs
directly correspond to a part of Cocoa's menu API like they do on
Win32).
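Both gotchas in one hedged illustration (the surrounding function and
menu are made up; only the Format behavior and the nonzero-ID rule come
from the above):

    #include <cstdint>
    #include <wx/menu.h>
    #include <wx/string.h>

    using u32 = std::uint32_t;  // as in Dolphin's CommonTypes

    void Illustrate(wxMenu* menu)
    {
      // (1) wxString::Format checks argument types against the format
      //     string: a u32 passed for "%lu" trips an assert, "%u" matches.
      u32 count = 42;
      wxString bad = wxString::Format("%lu", count);   // asserts
      wxString good = wxString::Format("%u", count);   // fine
      (void)bad;
      (void)good;

      // (2) A menu item ID of zero is rejected on Mac, so let wx pick
      //     one (or use any nonzero ID) rather than hard-coding 0.
      menu->Append(wxID_ANY, "Do the thing");
    }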
All the multiplying and dividing by 100 in controller configs is
messy... An attempted solution to the problem was to not multiply
default_value by 100 in ControllerEmu::ControlGroup::LoadConfig,
but that broke other things instead, so I went with this.
Boot_BS2Emu was trying to read from the inserted disc even when
nothing was inserted, and this happened to not crash (but not
work either) before VolumeHandler was removed. This commit adds
a check that restores the old behavior, so there is no longer a
crash, but the game ID still doesn't get set for WADs. I don't
know if/how it should be set, so this felt like the safest option.
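A minimal sketch of the added guard, with the disc query and game ID
helpers written as assumptions rather than Dolphin's exact API:

    #include <string>

    // Assumed helpers, for illustration only.
    bool IsDiscInserted();
    std::string ReadGameIDFromDisc();
    void SetGameID(const std::string& id);

    void SetGameIDFromDiscIfPresent()
    {
      // Previously the read happened unconditionally and just happened
      // not to crash; now a WAD boot (no disc) skips it, leaving the
      // game ID unset for the time being.
      if (!IsDiscInserted())
        return;

      SetGameID(ReadGameIDFromDisc());
    }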
This was causing a race condition where the "absurdly large aux buffer"
panic alert would be triggered in the last bit of fifo processing on the
CPU thread in deterministic mode (i.e. netplay).

SyncGPU is supposed to move the auxiliary queue data to the beginning of
the containing buffer so we don't have to deal with wraparound; if
GpuRunningState is false, however, it just returns, because it's set to
false by another thread - thus it doesn't know whether RunGpuLoop is
still executing (in which case it can't just reset the pointers, because
it may still be using the buffer) or not (in which case the condition
variable it normally waits for to avoid the previous problem will never
be signaled). However, SyncGPU's caller PushFifoAuxBuffer wasn't aware
of this, so if the buffer was filling at just the right time, it'd stay
full and that function would complain that it was about to overflow it.
Similar problem with ReadDataFromFifoOnCPU afaik.

Fix this by returning early from those as well; other callers of SyncGPU
should be safe. A *slightly* cleaner alternative would be giving the CPU
thread a way to tell when RunGpuLoop has actually exited, but whatever,
this works.
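A minimal sketch of the PushFifoAuxBuffer part of the fix; the buffer
size, globals, and the SyncGPU stub below are illustrative stand-ins for
the real fifo code:

    #include <cstddef>
    #include <cstring>

    constexpr std::size_t AUX_SIZE = 2048;
    static unsigned char s_aux_data[AUX_SIZE];
    static unsigned char* s_aux_write_ptr = s_aux_data;
    static volatile bool GpuRunningState = true;

    // Stand-in: the real SyncGPU compacts the aux buffer, but returns
    // immediately (without compacting) once GpuRunningState is false.
    static void SyncGPU() {}

    void PushFifoAuxBuffer(const void* ptr, std::size_t size)
    {
      if (size > static_cast<std::size_t>(s_aux_data + AUX_SIZE - s_aux_write_ptr))
      {
        SyncGPU();
        // The fix: if the GPU loop is shutting down, SyncGPU could not
        // have made room, so bail out instead of firing the "absurdly
        // large aux buffer" panic alert.
        if (!GpuRunningState)
          return;
      }
      std::memcpy(s_aux_write_ptr, ptr, size);
      s_aux_write_ptr += size;
    }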