Ninja puts way more effort into compiling targets in parallel, and
ignores dependencies until link time.
So we need to jump through hoops to force Ninja to compile
pch.cpp before any targets which depend on the PCH.
I have no idea why cmake supports PUBLIC on target_sources,
but it does. It causes all targets that depend on this target
to try to include the files in their sources.
Except it doesn't take paths into account, so it breaks. Maybe
it would work if you used an absolute source path? But I'm not sure
there is a sane use case.
These messages often hid other, more important ones. I have left AttemptMaxTimesWithExponentialDelay and GetSysDirectory/SetSysDirectory as info, since those are called infrequently and can be useful to the end user.
Fixes https://bugs.dolphin-emu.org/issues/12827.
A description of what was going wrong:
JitArm64::Init first calls CodeBlock::AllocCodeSpace, after which
CodeBlock and Arm64Emitter consider us to have 96 MiB of code space
available. JitArm64::Init then calls AddChildCodeSpace, which is
supposed to take 64 MiB of that space and give it to m_far_code.
CodeBlock's view of how much space there is gets updated from 96 MiB
to 32 MiB, but due to the missing call, Arm64Emitter keeps thinking
that it has 96 MiB of space available.
The last thing JitArm64::Init does is to call ResetFreeMemoryRanges.
This function asks Arm64Emitter how much code space is available and
stores a range of that size in m_free_ranges_near, meaning that
m_free_ranges_near ends up being backed by both nearcode and farcode!
This is a ticking time bomb; as soon as we grab memory from
m_free_ranges_near which is backed by farcode, we're in trouble.
The crash I ran into in my testing was caused by fastmem code being
allocated in farcode (our backpatch handler only handles accesses made
from nearcode), but you may as well get errors caused by code intended
for nearcode overwriting code intended for farcode or vice versa.
So why did NBA Live 2005 crash when most games had no problems,
and why was the bug bisected to the commit that increased the size
of far code from 16 MiB to 64 MiB? Well, as long as we're only
using the first 32 MiB of the big 96 MiB range, everything works.
What happens with NBA Live 2005 (I have not investigated exactly
through what mechanism this happens) is that at some point the range
in m_free_ranges_near gets split into two ranges, one which is
backed by nearcode and one which is backed by farcode. Dolphin
prefers to select the biggest range available (we don't want to
pick a tiny 1 KiB range that may not be able to fit the whole block
we're about to emit, after all), and after increasing the size of
farcode to 64 MiB, farcode is bigger than nearcode.
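To illustrate the "pick the biggest range" policy mentioned above, here is a minimal stand-alone sketch (not Dolphin's actual allocator; names and types are invented for this example):

```cpp
#include <cstddef>
#include <cstdint>
#include <map>

// Free ranges keyed by start address, value = size in bytes.
using FreeRanges = std::map<std::uintptr_t, std::size_t>;

// Returns the start of the largest free range, or 0 if there is none.
// With the bug described above, the largest range can be one that is backed
// by farcode, so code meant for nearcode ends up being emitted there.
std::uintptr_t PickLargestRange(const FreeRanges& ranges)
{
  std::uintptr_t best = 0;
  std::size_t best_size = 0;
  for (const auto& [start, size] : ranges)
  {
    if (size > best_size)
    {
      best = start;
      best_size = size;
    }
  }
  return best;
}
```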
Fixes a crash that could occur if the static constructor function for
the MainSettings.cpp TU happened to run before the variables in
Common/Version.cpp are initialised. (This is known as the static
initialisation order fiasco.)
By using wrapper functions, those variables are now guaranteed to be
constructed on first use.
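A minimal sketch of the construct-on-first-use idiom being used here (function and variable names are illustrative, not necessarily the actual ones in Common/Version.cpp):

```cpp
#include <string>

const std::string& GetScmRevStr()
{
  // A function-local static is initialized the first time control passes
  // through it, so any caller - even another TU's static constructor - sees
  // a fully constructed object instead of a not-yet-initialized global.
  static const std::string scm_rev_str = "some revision string";
  return scm_rev_str;
}
```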
Using unsigned char* or signed char* results in a deprecation warning, which is treated as an error. It needs to be cast to regular char* for it to work.
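For illustration, the kind of cast involved (a hypothetical helper, not the actual call site):

```cpp
#include <cstddef>
#include <string>

// APIs that hand back unsigned char* data have to be viewed as plain char*
// before constructing a std::string from them.
std::string ToString(const unsigned char* data, std::size_t size)
{
  return std::string(reinterpret_cast<const char*>(data), size);
}
```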
This format string is by definition dynamic and can't be checked at compile time. There are other similar strings in the log handler and in asserts, but they use vformat and thus don't need fmt::runtime. We might be able to do a similar thing where the untranslated string is compile-time checked, but FmtFormatT is used in so few places that I don't want to handle that in this PR.
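A small stand-alone example of the distinction (requires fmt 8 or newer for fmt::runtime):

```cpp
#include <string>

#include <fmt/format.h>

std::string Example(const std::string& translated_fmt)
{
  // A literal format string is checked at compile time.
  std::string a = fmt::format("compile-time checked: {}", 42);
  // A string only known at runtime (e.g. a translated one) must be wrapped
  // in fmt::runtime to opt out of the compile-time check.
  std::string b = fmt::format(fmt::runtime(translated_fmt), 42);
  return a + b;
}
```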
HRWrap now allows HRESULT to be formatted, giving useful information beyond "it failed" or a hex code that isn't obvious to most users. This commit does not add any uses of it, though.
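A hypothetical look-alike of the idea, not Dolphin's actual HRWrap, showing how a small HRESULT wrapper can be given a fmt formatter that prints the system message alongside the hex code:

```cpp
#include <Windows.h>
#include <fmt/format.h>

struct HRWrapSketch
{
  HRESULT hr;
};

template <>
struct fmt::formatter<HRWrapSketch>
{
  constexpr auto parse(fmt::format_parse_context& ctx) { return ctx.begin(); }

  template <typename FormatContext>
  auto format(const HRWrapSketch& w, FormatContext& ctx) const
  {
    // Ask Windows for a human-readable description of the error code.
    char description[256]{};
    FormatMessageA(FORMAT_MESSAGE_FROM_SYSTEM | FORMAT_MESSAGE_IGNORE_INSERTS, nullptr,
                   static_cast<DWORD>(w.hr), 0, description, sizeof(description), nullptr);
    return fmt::format_to(ctx.out(), "{} ({:#010x})", description,
                          static_cast<unsigned int>(w.hr));
  }
};
```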
Now, enums are properly displayed, and BitFieldArray is also displayed nicely. Signed values also work correctly, and 1-bit fields are not treated as bools unless the bitfield is explicitly marked as a bool.
PNG_FORMAT_RGB and PNG_COLOR_TYPE_RGB both evaluate to 2, but PNG_FORMAT_RGBA evaluates to 3 while PNG_COLOR_TYPE_RGBA evaluates to 6; the bit indicating a palette is 1 while the bit indicating alpha is 4.
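Spelled out with libpng's own constants (the PNG_FORMAT_* macros come from libpng's simplified API; the values match the ones quoted above):

```cpp
#include <png.h>

static_assert(PNG_FORMAT_RGB == 2 && PNG_COLOR_TYPE_RGB == 2,
              "the RGB constants happen to coincide");
static_assert(PNG_FORMAT_RGBA == 3, "COLOR flag (2) | ALPHA flag (1)");
static_assert(PNG_COLOR_TYPE_RGBA == 6, "COLOR mask (2) | ALPHA mask (4)");
```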
This does the following things:
- Defaults to the automatic (runtime-determined) number of threads for pre-compiling shaders
- Adds a distinct automatic thread count computation for pre-compilation (which has fewer other things going on
and should scale better beyond 4 cores; see the sketch after this list)
- Removes the unused logical_core_count field from the CPU detection
- Changes the semantics of num_cores from maximum addressable number of cores to actually available CPU cores
(which is also how it was actually used)
- Updates the computation of the HTT flag now that AMD no longer lies about it for its Zen processors
- Background shader compilation is *not* enabled by default
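A rough stand-alone sketch of what such a pre-compilation thread count heuristic could look like (the function name and the exact formula are made up for illustration, not the values Dolphin uses):

```cpp
#include <algorithm>
#include <thread>

unsigned GetShaderPrecompilationThreadCount()
{
  // hardware_concurrency() may return 0 if the value is unknown.
  const unsigned cores = std::max(1u, std::thread::hardware_concurrency());
  // Pre-compilation has little else competing for the CPU, so use most cores
  // and leave one free for the rest of the emulator/UI.
  return std::max(1u, cores - 1);
}
```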
This moves the only direct call to zlib’s crc32() into its own
translation unit, but that operation is cold enough that this won’t
matter in the slightest. crc32_z() would be more appropriate, but
Android has an older zlib version…
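Roughly what such a wrapper translation unit looks like (the function name here is a placeholder):

```cpp
#include <cstddef>
#include <cstdint>

#include <zlib.h>

std::uint32_t ComputeCRC32(const void* data, std::size_t size)
{
  // crc32() takes a uInt length; crc32_z() takes size_t, but it requires a
  // newer zlib than some platforms ship.
  return static_cast<std::uint32_t>(
      crc32(0L, static_cast<const Bytef*>(data), static_cast<uInt>(size)));
}
```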
This replaces the MAX_LOGLEVEL define with a constexpr variable
in order to fix self-comparison warnings in the logging macros
when compiling with Clang. (Without this change, the log level check
in the logging macros is expanded into something like this:
`if (LINFO <= LINFO)`, which triggers a tautological compare warning.)
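Roughly, the change amounts to something like this (enumerator names follow the logging code; the exact values and the debug/release split are illustrative):

```cpp
enum LOG_LEVEL
{
  LNOTICE = 1,
  LERROR,
  LWARNING,
  LINFO,
  LDEBUG,
};

#if defined(_DEBUG) || defined(DEBUGFAST)
constexpr LOG_LEVEL MAX_LOGLEVEL = LDEBUG;
#else
constexpr LOG_LEVEL MAX_LOGLEVEL = LINFO;
#endif

// A check like `level <= MAX_LOGLEVEL` no longer macro-expands into the
// literal self-comparison `LINFO <= LINFO`, so Clang's tautological-compare
// warning goes away while the comparison still folds at compile time.
```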
PR #10066 added functionality to call std::abort when a panic alert occurs; however, that PR only implemented it for MsgAlert and not MsgAlertFmtImpl, meaning that the functionality was not used with PanicAlertFmt (only PanicAlert, which is not used frequently).
When RenderDoc is attached, wglShareLists fails for some reason (see baldurk/renderdoc#2361). wglCreateContextAttribsARB has a parameter for the share context, so there's no reason to use a separate wglShareLists call.
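A rough sketch of the resulting context creation (the constant values and typedef are repeated from wglext.h so the snippet stands alone; attribute values and error handling are illustrative):

```cpp
#include <Windows.h>

// wglCreateContextAttribsARB is a WGL extension; its pointer must be loaded
// at runtime with wglGetProcAddress.
#define WGL_CONTEXT_MAJOR_VERSION_ARB 0x2091
#define WGL_CONTEXT_MINOR_VERSION_ARB 0x2092
typedef HGLRC(WINAPI* PFNWGLCREATECONTEXTATTRIBSARBPROC)(HDC, HGLRC, const int*);

HGLRC CreateSharedContext(PFNWGLCREATECONTEXTATTRIBSARBPROC create_context_attribs,
                          HDC dc, HGLRC share_context)
{
  const int attribs[] = {
      WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
      WGL_CONTEXT_MINOR_VERSION_ARB, 3,
      0,
  };
  // The second parameter is the share context, so no separate
  // wglShareLists() call (which fails under RenderDoc) is needed.
  return create_context_attribs(dc, share_context, attribs);
}
```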
Co-authored-by: baldurk <baldurk@baldurk.org>
Prompted by https://dolphin.ci/#/builders/24/builds/985
A 1-character typo in a recent PR caused FifoCI builds to break
horribly and spew millions of panic alerts until buildbot crashed.
This PR adds a new config option -- defaulting to off -- that allows
Dolphin to abort early on when a panic alert occurs instead of
continuing forever.
Manually encoding and decoding logical immediates is error-prone.
Using ORRI2R and friends lets us avoid doing the work manually,
but in exchange, there is a runtime performance penalty. It's
probably rather small, but still, it would be nice if we could
let the compiler do the work at compile-time. And that's exactly
what this commit does, so now I have no excuse for trying to
manually write logical immediates anymore.
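As a rough illustration of letting the compiler do the work, here is a simplified constexpr validity check for AArch64 logical immediates. This is not the encoder this commit adds; it only answers whether a value is encodable at all and skips the N:immr:imms packing:

```cpp
#include <cstdint>

// True when the set bits of v form exactly one contiguous run.
constexpr bool IsContiguousOnes(std::uint64_t v)
{
  return v != 0 && (((v | (v - 1)) + 1) & v) == 0;
}

// Repeats an element_size-bit pattern across all 64 bits.
constexpr std::uint64_t Replicate(std::uint64_t pattern, unsigned element_size)
{
  std::uint64_t result = 0;
  for (unsigned i = 0; i < 64; i += element_size)
    result |= pattern << i;
  return result;
}

constexpr bool IsLogicalImmEncodable(std::uint64_t value)
{
  const unsigned element_sizes[] = {2, 4, 8, 16, 32, 64};
  for (unsigned e : element_sizes)
  {
    const std::uint64_t mask = (e == 64) ? ~std::uint64_t{0} : ((std::uint64_t{1} << e) - 1);
    const std::uint64_t pattern = value & mask;
    if (pattern == 0 || pattern == mask)
      continue;  // all-zeros / all-ones elements are not encodable
    if (Replicate(pattern, e) != value)
      continue;  // value is not this pattern repeated across 64 bits
    // The element must be a (possibly rotated, i.e. wrapped) run of ones.
    if (IsContiguousOnes(pattern) || IsContiguousOnes(~pattern & mask))
      return true;
  }
  return false;
}

// The compiler does the work; a typo becomes a build error instead of bad codegen.
static_assert(IsLogicalImmEncodable(0x00FF00FF00FF00FF), "repeated byte runs encode");
static_assert(!IsLogicalImmEncodable(0x0123456789ABCDEF), "arbitrary values do not");
```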
Public domain does not have an internationally agreed-upon definition.
As such, it's generally preferred to use an extremely liberal license,
which can explicitly list the rights granted by the copyright holder.
The CC0 license is the usual choice here.
This "relicensing" is done without hunting down copyright holders, since
it is presumed that their release of this work into the public domain
authorizes us to redistribute this code under any other license of our
choosing.
This code was part of Dolphin's relicensing from v2 to v2+ a while back;
we just never updated these copyright headers. I double-checked that
segher gave us permission to relicense this code to v2+ on 2015-05-16.
SPDX standardizes how source code conveys its copyright and licensing
information. See https://spdx.github.io/spdx-spec/1-rationale/ . SPDX
tags are adopted in many large projects, including things like the Linux
kernel.
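For reference, a header in this style looks like the following (the year and license identifier are illustrative; each file keeps whatever license it already had):

```cpp
// Copyright 2021 Dolphin Emulator Project
// SPDX-License-Identifier: GPL-2.0-or-later
```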
Without this, the code added in ac28b89 misbehaves and considers
AArch64 netplay clients to not have hardware FMA support, telling
all clients to disable FMA support, which causes a desync between
x64 and AArch64 due to JitArm64 not being able to disable FMA support.
Not doing this can cause desyncs when TASing. (I don't know
how common such desyncs would be, though. For games that
don't change rounding modes, they shouldn't be a problem.)
When I added the software FMA path in 2c38d64 and made us use
it when determinism is enabled, I was assuming that either the
performance impact of software FMA wouldn't be too large or CPUs
that were too old to have FMA instructions were too slow to run
Dolphin well anyway. This was wrong. To give an example, the
netplay performance went from 60 FPS to 30 FPS in one case.
This change makes netplay clients negotiate whether FMA should
be used. If all clients use an x64 CPU that supports FMA, or
AArch64, then FMA is enabled, and otherwise FMA is disabled.
In other words, we sacrifice accuracy if needed to avoid massive
slowdown, but not otherwise. When not using netplay, whether to
enable FMA is simply based on whether the host CPU supports it.
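A minimal sketch of that negotiation rule (type and function names are invented for illustration; the real netplay code is structured differently):

```cpp
#include <algorithm>
#include <vector>

struct ClientCpu
{
  bool is_arm64;
  bool has_fma;
};

bool ShouldUseFMA(const std::vector<ClientCpu>& clients)
{
  // FMA stays enabled only if every client is either AArch64 or an x64 CPU
  // with FMA support; otherwise all clients fall back to the non-FMA path.
  return std::all_of(clients.begin(), clients.end(),
                     [](const ClientCpu& c) { return c.is_arm64 || c.has_fma; });
}
```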
The only remaining case where the software FMA path gets used
under normal circumstances is when an input recording is created
on a CPU with FMA support and then played back on a CPU without.
This is not an especially common scenario (though it can happen),
and TASers are generally less picky about performance and more
picky about accuracy than other users anyway.
With this change, FMA desyncs are avoided between AArch64 and
modern x64 CPUs (unlike before 2c38d64), but we do get FMA
desyncs between AArch64 and old x64 CPUs (like before 2c38d64).
This desync can be avoided by adding a non-FMA path to JitArm64 as
an option, which I will save for another pull request so that
we can get the performance regression fixed as quickly as possible.
https://bugs.dolphin-emu.org/issues/12542
Added an RAII wrapper around the JITPageWriteEnableExecuteDisable() and
JITPageWriteDisableExecuteEnable() calls to make it harder to forget to
pair the calls in all code branches, as suggested by leoetlino.
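A hedged sketch of the RAII idea. The two JITPage* functions are the ones named above; they are stubbed out here only so the example is self-contained, and the wrapper class name is invented for this sketch:

```cpp
void JITPageWriteEnableExecuteDisable() { /* make JIT pages writable */ }
void JITPageWriteDisableExecuteEnable() { /* make JIT pages executable again */ }

class ScopedJITPageWrite
{
public:
  ScopedJITPageWrite() { JITPageWriteEnableExecuteDisable(); }
  ~ScopedJITPageWrite() { JITPageWriteDisableExecuteEnable(); }
  ScopedJITPageWrite(const ScopedJITPageWrite&) = delete;
  ScopedJITPageWrite& operator=(const ScopedJITPageWrite&) = delete;
};

void EmitSomething()
{
  ScopedJITPageWrite guard;
  // ... write the generated code ...
}  // pages flip back to execute-only on every exit path, even early returns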
In macOS 11.2, mprotect can no longer change the access protection settings of
pages that were previously marked as executable to anything but PROT_NONE. This
commit works around this new restriction by bypassing the mprotect based write
protection and instead relying on the write protection provided by MAP_JIT.
Analytics:
- Incorporated a fix to allow the full set of analytics that was recommended by
spotlightishere
BuildMacOSUniversalBinary:
- The x86_64 slice for a universal binary is now built for macOS 10.12
- The universal binary build script can now be configured through command line
options instead of modifying the script itself.
- os.system calls were replaced with equivalent subprocess calls
- Formatting was reworked to be more PEP 8 compliant
- The script was refactored to make it more modular
- The com.apple.security.cs.disable-library-validation entitlement was removed
Memory Management:
- Changed the JITPageWrite*Execute*() functions to incorporate support for
nesting
Other:
- Fixed several small lint errors
- Fixed doc and formatting mistakes
- Several small refactors to make things clearer
This commit adds support for compiling Dolphin for ARM on macOS so that it can
run natively on the M1 processors without going through Rosetta 2 emulation,
providing a 30-50% performance speedup and fewer hitches from Rosetta 2.
It consists of several key changes:
- Adding support for W^X allocation (MAP_JIT) for the ARM JIT
- Adding the machine context and config info to identify the M1 processor
- Additions to the build system and docs to support building universal binaries
- Adding code signing entitlements to access the MAP_JIT functionality
- Updating the MoltenVK libvulkan.dylib to a newer version with M1 support
Any file which includes scmrev.h must be rebuilt when scmrev.h
is regenerated. By not including scmrev.h from any file other
than Version.cpp, incremental builds become a little faster.
The STL has everything we need nowadays.
I have tried to not alter any behavior or semantics with this
change wherever possible. In particular, WriteLow and WriteHigh
in CommandProcessor retain the ability to accidentally undo
another thread's write to the upper half or lower half
respectively. If that should be fixed, it should be done in a
separate commit for clarity. One thing did change: The places
where we were using += on a volatile variable (not an atomic
operation) are now using fetch_add (actually an atomic operation).
Tested with single core and dual core on x86-64 and AArch64.
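To spell out that last point, a tiny stand-alone example (the variable is a stand-in, not the actual CommandProcessor field):

```cpp
#include <atomic>
#include <cstdint>

// `+=` on a volatile is a separate load and store, so two threads can lose an
// update; fetch_add is a single atomic read-modify-write.
std::atomic<std::uint32_t> counter{0};

void Increment()
{
  counter.fetch_add(1, std::memory_order_relaxed);
}
```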
The control expression editor allows line breaks, but the serialization was
losing anything after the first line break (\r\n).
Instead of opting to encode them and decode them on serialization
(which I tried, but it was not safe, as it would lose \n written in the string by users),
I opted to replace them with a space.
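A minimal sketch of the chosen approach (the function name is a placeholder):

```cpp
#include <algorithm>
#include <string>

// Flatten CR/LF to spaces before the expression is serialized.
std::string FlattenLineBreaks(std::string expression)
{
  std::replace(expression.begin(), expression.end(), '\r', ' ');
  std::replace(expression.begin(), expression.end(), '\n', ' ');
  return expression;
}
```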
If we can prove that FCVT will provide a correct conversion,
we can use FCVT. This makes the common case a bit faster
and the less likely cases (unfortunately including zero,
which FCVT actually can convert correctly) a bit slower.
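A hedged sketch of the kind of "is FCVT exact here?" test this describes, written as a plain host-side function rather than the JIT's emitted bit checks (simplified; everything that fails the test, including zero, takes the general-purpose path):

```cpp
#include <cstdint>

bool FitsSingleViaFCVT(std::uint64_t double_bits)
{
  const std::uint64_t mantissa = double_bits & 0xFFFFFFFFFFFFFULL;
  const int exponent = static_cast<int>((double_bits >> 52) & 0x7FF) - 1023;
  // The low 29 mantissa bits must be zero and the exponent must fit a normal
  // single. Zero, denormals, infinities and NaNs fail this test and fall back
  // to the slower conversion path.
  return (mantissa & ((1ULL << 29) - 1)) == 0 && exponent >= -126 && exponent <= 127;
}
```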
Fixes https://bugs.dolphin-emu.org/issues/12388. Might also fix
other games that have problems with float/paired instructions
in JitArm64, but I haven't tested any.