Previously this was using the default deleter (which just calls delete
on the pointer), which is incorrect, since the ENetHost instance is
allocated through ENet's C API, so we need to use its functions to
deallocate the host instead.
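As a sketch of the fix (the deleter and alias names are assumed here;
enet_host_create/enet_host_destroy are ENet's real C API), the smart
pointer just needs a custom deleter that calls back into ENet:

    #include <memory>

    #include <enet/enet.h>

    // Release the host through ENet's own API instead of operator delete.
    struct ENetHostDeleter
    {
      void operator()(ENetHost* host) const { enet_host_destroy(host); }
    };
    using ENetHostPtr = std::unique_ptr<ENetHost, ENetHostDeleter>;

    // Usage: ENetHostPtr host(enet_host_create(nullptr, 1, 2, 0, 0));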
This isn't used anywhere and isn't really a generic utility, so we can get
rid of it.
This also lets us remove MathUtil.cpp, since this was the only thing
within that file.
Fixes us forgetting to add its include directories, which could result in using the system's header while linking against a dylib from MacPorts, failing to link because the two use different function names.
Added AchievementSettings in Config with RA_INTEGRATION_ENABLED, RA_USERNAME, and RA_API_TOKEN. Includes code to load and store them from the Achievements.ini file in the config folder.
This fixes a crash when recording fifologs, as the mutex is acquired when BPWritten calls AfterFrameEvent::Trigger, but then acquired again when FifoRecorder::EndFrame calls m_end_of_frame_event.reset(). std::mutex does not allow calling lock() if the thread already owns the mutex, while std::recursive_mutex does allow this.
This is a regression from #11522 (which introduced the HookableEvent system).
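A minimal illustration of the difference (function names are
placeholders standing in for the BPWritten/FifoRecorder call chain):

    #include <mutex>

    std::recursive_mutex s_event_mutex;

    void EndFrame();

    void Trigger()
    {
      std::lock_guard lock(s_event_mutex);  // taken while the event fires
      EndFrame();
    }

    void EndFrame()
    {
      // Taken again on the same thread: fine for std::recursive_mutex,
      // undefined behaviour (a deadlock in practice) for plain std::mutex.
      std::lock_guard lock(s_event_mutex);
    }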
This second stack leads to JNI problems on Android, because ART fetches
the address and size of the original stack using pthread functions
(see GetThreadStack in art/runtime/thread.cc), and (presumably) treats
stack addresses outside of the original stack as invalid. (What I don't
understand is why some JNI operations on the CPU thread work fine
despite this but others don't.)
Instead of creating a second stack, let's borrow the approach ART uses:
Use pthread functions to find out the stack's address and size, then
install guard pages at an appropriate location. This lets us get rid
of a workaround we had in the MsgAlert function.
Because we're no longer choosing the stack size ourselves, I've made some
tweaks to where we put the guard pages. Previously we had a stack of
2 MiB and a safe zone of 512 KiB. We now accept stacks as small as 512 KiB
(used on macOS) and use a safe zone of 256 KiB. I feel like this should
be fine, but haven't done much testing beyond "it seems to work".
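Roughly, the approach looks like this (a sketch assuming Linux/Bionic's
pthread_getattr_np and page-aligned sizes; macOS uses
pthread_get_stackaddr_np/pthread_get_stacksize_np instead):

    #include <cstddef>

    #include <pthread.h>
    #include <sys/mman.h>

    // Find the current thread's stack via pthread, then protect pages near
    // its low end so an overflow faults before the stack actually runs out.
    void InstallStackGuard(size_t safe_zone_size, size_t guard_size)
    {
      pthread_attr_t attr;
      void* stack_base = nullptr;  // lowest address of the stack
      size_t stack_size = 0;

      pthread_getattr_np(pthread_self(), &attr);
      pthread_attr_getstack(&attr, &stack_base, &stack_size);
      pthread_attr_destroy(&attr);

      // Keep `safe_zone_size` bytes usable below the guard pages so the
      // fault handler still has some stack left to run on.
      char* guard_start = static_cast<char*>(stack_base) + safe_zone_size;
      mprotect(guard_start, guard_size, PROT_NONE);
    }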
By the way, on Windows it was already the case that we didn't create
a second stack... But there was a bug in the implementation!
The code for protecting the stack has to run on the CPU thread, since
it's the CPU thread's stack we want to protect, but it was actually
running on EmuThread. This commit fixes that, since now this bug
matters on other operating systems too.
This very much isn't a build configuration that we're going to ship,
but I want to be able to tell people that they can build it on their
own if they really want to see how terribly it performs :)
Just like before, you'll need to edit two lines in app/build.gradle to
define ENABLE_GENERIC=ON and actually enable armeabi-v7a if you want an
armeabi-v7a build. This commit just fixes some compilation errors that
crop up if you do so.
This fixes a problem I was having where using frame advance with the
debugger open would frequently cause panic alerts about invalid addresses
due to the CPU thread changing MSR.DR while the host thread was trying
to access memory.
To aid in tracking down all the places where we weren't properly locking
the CPU, I've created a new type (in Core.h) that you have to pass as a
reference or pointer to functions that require running as the CPU thread.
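A simplified, self-contained sketch of the pattern (the type name and
the accessor are illustrative; the real guard pauses the emulated CPU
rather than just taking a mutex):

    #include <cstdint>
    #include <mutex>

    class CPUThreadGuard final
    {
    public:
      explicit CPUThreadGuard(std::mutex& cpu_mutex) : m_lock(cpu_mutex) {}
      CPUThreadGuard(const CPUThreadGuard&) = delete;
      CPUThreadGuard& operator=(const CPUThreadGuard&) = delete;

    private:
      std::unique_lock<std::mutex> m_lock;
    };

    // Anything that touches emulated memory must be handed the guard, so
    // "we forgot to lock the CPU" becomes a compile error, not a crash.
    uint32_t HostRead_U32(const CPUThreadGuard& guard, uint32_t address);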
While the NV extension is totally fine, the KHR extension should be able to support more hardware.
For NVIDIA, the hardware either supports both or neither; it just needs a driver from the last two years.
For AMD, drivers from around December 2022 seem to bring support for the KHR extension.
For Intel, the KHR extension has also been supported for some years.
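The selection logic itself is generic; a sketch, with the actual
extension pair left as parameters:

    #include <cstring>
    #include <vector>

    #include <vulkan/vulkan.h>

    // Prefer the KHR variant whenever the driver exposes it, and only fall
    // back to the NV variant when it doesn't.
    const char* ChooseExtension(const std::vector<VkExtensionProperties>& available,
                                const char* khr_name, const char* nv_name)
    {
      const auto has = [&](const char* name) {
        for (const auto& ext : available)
          if (std::strcmp(ext.extensionName, name) == 0)
            return true;
        return false;
      };
      if (has(khr_name))
        return khr_name;
      if (has(nv_name))
        return nv_name;
      return nullptr;  // feature unavailable on this device
    }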
- Cancel doesn't shut down anymore, allowing it to be used multiple
  times throughout the life of the WorkQueue
- Remove Clear, so we only have Cancel semantics
- Add IsCancelling so work items can abort early if cancelling
- Replace m_cancelled and m_thread.joinable() guards with m_shutdown
- Rename Flush to WaitForCompletion (as it's ambiguous whether a
  function called Flush should be blocking or not)
- Add documentation
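Sketched as an interface (the method names follow the list above; the
class name, Push and Shutdown are assumed for illustration):

    template <typename T>
    class WorkQueueThread
    {
    public:
      void Push(T item);          // enqueue a work item for the worker thread
      void Cancel();              // drop pending items; the thread keeps
                                  // running, so this can be called repeatedly
      bool IsCancelling() const;  // lets a long-running item bail out early
      void WaitForCompletion();   // block until the queue drains (was Flush)
      void Shutdown();            // actually stop the worker thread
    };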
A lot of the remaining complexity in Renderer is the massive Swap function,
which tries to handle a bunch of FrameBegin/FrameEnd events.
Rather than creating a new place for all of that, this event system will
distribute the handling to wherever it belongs.
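A greatly simplified sketch of what such an event provides (the real
HookableEvent in Common/ is more elaborate, with named events and RAII
handles for unregistering):

    #include <functional>
    #include <mutex>
    #include <vector>

    template <typename... Args>
    class HookableEvent
    {
    public:
      void Register(std::function<void(Args...)> callback)
      {
        std::lock_guard lock(m_mutex);
        m_listeners.push_back(std::move(callback));
      }

      // Fired by the video backend at the right point in the frame instead
      // of Renderer::Swap doing all of the work itself.
      void Trigger(Args... args)
      {
        std::lock_guard lock(m_mutex);
        for (auto& listener : m_listeners)
          listener(args...);
      }

    private:
      std::mutex m_mutex;
      std::vector<std::function<void(Args...)>> m_listeners;
    };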
Macros whose expansion includes the standard `defined` macro have undefined
behavior. This is pretty trivial to fix: we can just do the test and then
define the name itself if it's true, rather than making the macro itself
the set of definition checks.
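For illustration (the macro names here are made up):

    // Before: expanding the macro introduces `defined`, which is undefined
    // behaviour per the standard.
    //   #define HAVE_FASTMEM (defined(ARCH_A) || defined(ARCH_B))
    //   #if HAVE_FASTMEM

    // After: do the test once, and define the plain name only if it passes.
    #if defined(ARCH_A) || defined(ARCH_B)
    #define HAVE_FASTMEM
    #endif

    #ifdef HAVE_FASTMEM
    // ...
    #endif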
Now that we've flipped the C++20 switch, let's start making use of
the nice new <bit> header.
I'm planning on handling this move away from BitUtils.h incrementally
in a series of PRs. There may be a few functions remaining in
BitUtils.h by the end that C++20 doesn't have any equivalents for.
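A few examples of the kind of replacements this enables (illustrative
only; the actual mapping from BitUtils.h helpers to <bit> functions is
decided case by case):

    #include <bit>
    #include <cstdint>

    constexpr uint32_t x = 0b1010'0000;
    constexpr int zeros  = std::countl_zero(x);     // replaces a CLZ helper
    constexpr int ones   = std::popcount(x);        // replaces a bit-count loop
    constexpr uint32_t r = std::rotl(x, 3);         // replaces a rotate utility
    constexpr bool pow2  = std::has_single_bit(x);  // replaces an IsPow2 check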
The "vector shift by immediate" category encodes the shift amount for
right shifts as `size - amount`, whereas left shifts use `amount`.
We're not actually using SHRN/SHRN2 anywhere, which is why this has gone
undetected.
For quite some time now, we've had a setting on x86-64 that makes Dolphin
handle NaNs in a more accurate but slower way. There's only one game that
cares about this, Dragon Ball: Revenge of King Piccolo, and what that game
cares about more specifically is that the default NaN (or "generated NaN"
as I believe it's called in PowerPC documentation) is the same as on
PowerPC. On ARM, the default NaN is the same as on PowerPC, so for the
longest time we didn't need to do anything special to get Dragon Ball:
Revenge of King Piccolo working. However, in 93e636a I changed how we
handle FMA instructions in a way that resulted in the sign of NaNs
becoming inverted for nmadd/nmsub instructions, breaking the game.
To fix this, let's implement the AccurateNaNs setting, like on x86-64.
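As a minimal sketch of the nmadd case (interpreter-style C++, not the
JIT code): the negation must not be applied when the result is a NaN,
since PowerPC never flips the sign of a NaN here.

    #include <cmath>

    double nmadd(double a, double c, double b)
    {
      const double result = std::fma(a, c, b);
      // Negating a NaN result is exactly what inverted the NaN's sign, so
      // only negate non-NaN results.
      return std::isnan(result) ? result : -result;
    }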
1. In some cases, ps_merge01 can be implemented using one instruction.
2. When we need two instructions for ps_merge01, it's best to start with
a MOV to avoid false dependencies on the destination register.
3. ps_merge10 can be implemented using a single EXT instruction.
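Illustrated with NEON intrinsics rather than the JIT's emitter calls
(layout assumed: each guest register is a float64x2_t with ps0 in lane 0
and ps1 in lane 1):

    #include <arm_neon.h>

    // ps_merge10: d = { a.ps1, b.ps0 } -- a single EXT taking the high half
    // of a followed by the low half of b.
    float64x2_t ps_merge10(float64x2_t a, float64x2_t b)
    {
      return vextq_f64(a, b, 1);
    }

    // ps_merge01: d = { a.ps0, b.ps1 } -- a single INS when the destination
    // can reuse b's register; otherwise a full-width MOV of b comes first so
    // the partial write doesn't inherit a false dependency on the
    // destination's old value.
    float64x2_t ps_merge01(float64x2_t a, float64x2_t b)
    {
      return vcopyq_laneq_f64(b, 0, a, 0);
    }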
This new function is like MOVP2R, except it masks out the lower 12 bits,
returning them instead of writing them to the register. These lower
12 bits can then be used as an offset for LDR/STR. This lets us turn
ADRP+ADD+LDR sequences with a zero offset into ADRP+LDR sequences with
a non-zero offset, saving one instruction.
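The underlying arithmetic, as a standalone sketch (the new function's
actual name and signature aren't shown here):

    #include <cstdint>

    // Before:  adrp x8, sym ; add x8, x8, :lo12:sym ; ldr w0, [x8]
    // After:   adrp x8, sym ;                         ldr w0, [x8, :lo12:sym]
    struct PageSplit
    {
      uintptr_t page;    // what the ADRP materializes into the register
      uintptr_t offset;  // low 12 bits, folded into the LDR/STR immediate
    };

    PageSplit SplitForPageAddressing(const void* ptr)
    {
      const auto addr = reinterpret_cast<uintptr_t>(ptr);
      return {addr & ~uintptr_t{0xFFF}, addr & uintptr_t{0xFFF}};
    }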
ARM64 can perform various types of sign and zero extension on a
register value before using it. The Arm64Emitter already had support for
this, but it was kinda hidden away.
This commit exposes the functionality by making the ExtendSpecifier enum
available everywhere and adding a new ArithOption constructor.
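For reference, what an extended-register operand computes, written out
in plain C++ (a sketch of `add x0, x1, w2, sxtw #2`):

    #include <cstdint>

    uint64_t AddExtendedSXTW(uint64_t x1, uint32_t w2, unsigned shift)
    {
      // SXTW: sign-extend the 32-bit register value to 64 bits...
      const int64_t extended = static_cast<int32_t>(w2);
      // ...then shift left by 0-4 before the add.
      return x1 + (static_cast<uint64_t>(extended) << shift);
    }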