Note: It's not 100% perfect, as some of the GPU capabilities leak into
the pixel shader UID.
Currently our UIDs don't get exported, so there is no issue. But someone
might want to fix this in the future.
As much as possible, the asserts have been moved out of the GetUID
function, but a few remain in places where they depend on variables
that aren't stored in the shader UID.
Bug Fix: It was theoretically possible for a shader with depth writes
disabled to map to the same UID as a shader with late depth
writes.
No known test cases trigger this.
Bug Fix: The normal stage UIDs were randomly overwriting indirect
stage texture map UID fields. It was possible for multiple shaders
with different indirect texture targets to map to the same UID.
Once again, it doesn't look like this bug was ever triggered.
This frees up 21 bits and allows us to shorten the UID struct by an entire
32 bits.
It's not strictly needed (as it's encoded into the length), but I added
a bit for per-pixel lighting to make my life easier in the following
commits.
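For illustration, here is a minimal sketch of what such a packed UID
layout can look like. The field names are hypothetical, not Dolphin's
actual pixel_shader_uid_data:

    #include <cstdint>
    using u32 = uint32_t;  // Dolphin's common typedef

    struct uid_data
    {
      // Bitfields keep the struct dense; freeing 21 bits is what lets
      // the whole struct drop a full 32-bit word.
      u32 genMode_numtexgens : 4;   // also implied by the UID length
      u32 per_pixel_lighting : 1;   // the convenience bit added here
      u32 ztest : 2;
      // ... remaining fields packed into as few words as possible
    };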
The only code which touches xfmem is code which writes directly into
uid_data.
All the rest now read their parameters out of uid_data.
I also simplified the lighting code so it always generates separate
codepaths for the alpha and color channels instead of trying to combine
them on the off-chance that the same equation works for all 4 channels.
As modern (post-2008) GPUs generally don't calculate all 4 channels
in a single vector, this optimisation is pointless. The shader compiler
will undo it during the GLSL/HLSL to IR step.
Bug Fix: The above optimisation was also broken, applying the color
light equation to the alpha light channel instead of the alpha light
equation. But it doesn't look like anything ever triggered this bug.
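A rough sketch of the simplified structure; the helper is a stub
standing in for the real shader generator, which works differently:

    #include <string>

    // Hypothetical per-channel generator: emits GLSL/HLSL for one
    // channel's color (RGB) or alpha lighting equation.
    static std::string GenerateLightingCode(int chan, bool alpha);

    static void WriteLighting(std::string& out)
    {
      // Emit the color and alpha equations separately for each GX
      // color channel; modern GPUs scalarise this anyway, so nothing
      // is lost by not sharing one vec4 equation.
      for (int chan = 0; chan < 2; chan++)  // GX has 2 color channels
      {
        out += GenerateLightingCode(chan, /*alpha=*/false);
        out += GenerateLightingCode(chan, /*alpha=*/true);
      }
    }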
OS X uses a case insensitive filesystem by default: when I try to build,
a system header does #include <assert.h>, which picks up
Source/Core/Common/Assert.h. This only happens because CMakeLists adds
'${PROJECT_BINARY_DIR}/Source/Core/Common' as an include directory: in
an out-of-tree build, that directory contains no other source files, but
in an in-tree build PROJECT_BINARY_DIR is just the source root.
This is only used for scmrev.h. Change the include directory to
'${PROJECT_BINARY_DIR}/Source/Core' and the include to
"Common/scmrev.h", which is more consistent with normal headers anyway.
Small cleanup by using std::shared_ptr and getting rid of
ciface.Devices(), which just returned m_devices (defeating the point
of making m_devices protected).
Incidentally, this should make the code safer when we have
different threads accessing devices in the future (for hotplug?).
A lot of code uses Device references directly, so there is no easy
way to remove FindDevice() and make those unique_ptrs.
Using a minimum width is a good compromise between setting all buttons
to the same width and letting them all decide their own width: the
small buttons are kept tidy and regular while the biggest buttons can
still fit their contents.
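A sketch of the idea in wxWidgets (the 100px minimum is an arbitrary
example, not the value actually used):

    #include <wx/button.h>

    wxButton* MakeDialogButton(wxWindow* parent, const wxString& label)
    {
      wxButton* button = new wxButton(parent, wxID_ANY, label);
      // -1 keeps the natural height; the width can still grow beyond
      // the minimum when the label needs more room.
      button->SetMinSize(wxSize(100, -1));
      return button;
    }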
Previously, the devices vector would be passed to all backends. They
would then manually push_back to it to add new devices. This was fine
but caused issues when trying to add synchronisation.
Instead, backends now call AddDevice() to fill m_devices, which is no
longer accessible from the outside.
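A minimal sketch of the resulting pattern; member names approximate
the real ControllerInterface, and the lock is the synchronisation the
old push_back approach made awkward:

    #include <memory>
    #include <mutex>
    #include <vector>

    namespace ciface::Core { class Device; }  // Dolphin's device type

    class DeviceContainer
    {
    public:
      // Backends hand devices over instead of pushing into a shared
      // vector, so one lock can guard all mutation.
      void AddDevice(std::shared_ptr<ciface::Core::Device> device)
      {
        std::lock_guard<std::mutex> lk(m_devices_mutex);
        m_devices.emplace_back(std::move(device));
      }

    protected:
      std::mutex m_devices_mutex;
      std::vector<std::shared_ptr<ciface::Core::Device>> m_devices;
    };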
This changes Bluetooth device discovery on Linux to use the LIAC
(Limited Inquiry Access Code), since third-party Wiimotes (such as
Rock Candy Wiimotes) are not discovered without it.
Also added an accessor function to the IONix class to help check
whether a discovered Wiimote has already been found.
[leoetlino: code review suggested changes, remove unused variable,
commit message formatting fixes, and build fix]
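A hedged sketch of LIAC-based discovery with BlueZ (error handling
omitted; the real scanner does more than this):

    #include <cstdint>
    #include <cstdlib>
    #include <bluetooth/bluetooth.h>
    #include <bluetooth/hci.h>
    #include <bluetooth/hci_lib.h>

    int InquiryWithLIAC(int device_id)
    {
      // LIAC = 0x9E8B00 as a little-endian 3-byte LAP. The default
      // GIAC (0x9E8B33) misses Wiimotes that only answer limited
      // inquiry.
      const uint8_t liac[3] = {0x00, 0x8B, 0x9E};
      const int max_rsp = 255;
      inquiry_info* infos = static_cast<inquiry_info*>(
          malloc(max_rsp * sizeof(inquiry_info)));
      const int count = hci_inquiry(device_id, /*len=*/2, max_rsp, liac,
                                    &infos, IREQ_CACHE_FLUSH);
      free(infos);
      return count;  // number of responses, or < 0 on error
    }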
This commit makes real Wiimotes really disconnect when they are
disconnected by the emulated software, which is more similar to how
it works with a real Wii and allows Wiimotes to be disconnected after
timeout for power saving.
This is currently only enabled on Linux, because of limitations on
the other platforms.
Fully opt-in, reports to analytics.dolphin-emu.org over SSL. Collects system
information and settings at Dolphin start time and game start time.
UI not implemented yet, so users are required to opt in through config editing.
Fixes a major performance regression in Skies of Arcadia during
battle transitions.
I had plans for a more advanced version of this code after 5.0,
but here is a minimal implementation for now.
Although this is a Mesa bug, this is a simple enough workaround for
context creation failing with EGL_CONTEXT_OPENGL_FORWARD_COMPATIBLE_BIT_KHR
set. Otherwise, Dolphin will fail to create a 3.3+ core context with
current Mesa versions.
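A sketch of the workaround (attribute lists abbreviated; the real code
also has to handle GLES and version fallbacks):

    #include <EGL/egl.h>
    #include <EGL/eglext.h>

    EGLContext CreateCoreContext(EGLDisplay dpy, EGLConfig cfg)
    {
      const EGLint attribs_fc[] = {
          EGL_CONTEXT_MAJOR_VERSION_KHR, 3,
          EGL_CONTEXT_MINOR_VERSION_KHR, 3,
          EGL_CONTEXT_OPENGL_PROFILE_MASK_KHR,
          EGL_CONTEXT_OPENGL_CORE_PROFILE_BIT_KHR,
          EGL_CONTEXT_FLAGS_KHR,
          EGL_CONTEXT_OPENGL_FORWARD_COMPATIBLE_BIT_KHR,
          EGL_NONE};
      EGLContext ctx = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, attribs_fc);
      if (ctx != EGL_NO_CONTEXT)
        return ctx;

      // Affected Mesa versions reject the forward-compatible bit, so
      // retry without the flag instead of failing entirely.
      const EGLint attribs[] = {
          EGL_CONTEXT_MAJOR_VERSION_KHR, 3,
          EGL_CONTEXT_MINOR_VERSION_KHR, 3,
          EGL_CONTEXT_OPENGL_PROFILE_MASK_KHR,
          EGL_CONTEXT_OPENGL_CORE_PROFILE_BIT_KHR,
          EGL_NONE};
      return eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, attribs);
    }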
Cleanup code style.
Move ActionReplay code->INI saving into ActionReplay namespace.
Thread-safety Cleanup: ActionReplay is accessed from the Host, Emu
and CPU Threads, so the internal storage needs to be protected by a
lock to prevent vectors/strings being deleted/moved while in use by
the CPU Thread (see the sketch below).
UI Consistency: Make ARCodes behave like Gecko Codes - only apply
changes when Apply is pressed. Save changes to INI from CheatsWindow.
ISOProperties/CheatsWindow now synchronize with each other.
ISOProperties loads codes using ActionReplay::LoadCodes, which actually
applies the codes to the global state. If a game is running, that game
receives all the codes (and their ACTIVE status) from the second game
being shown in ISOProperties, which is not desirable.
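A minimal sketch of the locking pattern described above; the names
loosely follow ActionReplay's API, but the bodies are simplified:

    #include <mutex>
    #include <vector>

    namespace ActionReplay
    {
    static std::mutex s_lock;
    static std::vector<ARCode> s_active_codes;  // read by the CPU Thread

    void ApplyCodes(const std::vector<ARCode>& codes)
    {
      // Replace the container under the lock; never mutate it while
      // the CPU Thread may be iterating over it.
      std::lock_guard<std::mutex> guard(s_lock);
      s_active_codes = codes;
    }

    void RunAllActive()
    {
      std::lock_guard<std::mutex> guard(s_lock);
      for (const ARCode& code : s_active_codes)
        RunCodeLocked(code);  // hypothetical helper
    }
    }  // namespace ActionReplay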
Donkey Kong Country Returns is writing new data to some files in /tmp
when loading each level. But the savestate code was opening the files
a second time and reading some old and stale data out.
As of #3798, Dolphin now correctly restores that stale data to /tmp,
which broke DKCR (and probably countless other games).
This PR closes all file handles before saving and loading savestates,
which flushes the data out and prevents this issue. (Old savestates
are corrupted and will still cause crashes if loaded.)
On master, when polling the 1st in-game controller, Dolphin would poll
all the 1st local controllers. With the 1st commit, each client waits
its turn, which would dramatically increase the lag.
Now with this commit, it polls all local controllers at once, so it
should have even less latency than master in some setups, such as one
player with 3 controllers and a 2nd player with just one controller.
This fixes issues with setups like:
Player 1 uses port 1 and player 2 uses port 3, or
player 1 uses port 2 and player 2 uses port 3, so nobody uses port 1
Also swaps the byte order from RGBA->BGRA to match GL/D3D12, and what
the read handler is expecting.
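For reference, the channel swap is just a per-texel byte swizzle; a
sketch (helper name hypothetical, assumes little-endian packing with R
in the low byte):

    #include <cstdint>

    // Swap the R and B bytes of a packed RGBA8 texel to get BGRA8.
    static inline uint32_t RGBA8ToBGRA8(uint32_t texel)
    {
      return (texel & 0xFF00FF00u) | ((texel & 0x00FF0000u) >> 16) |
             ((texel & 0x000000FFu) << 16);
    }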
Depth reads will now return the minimum depth of all samples, instead of
the average of all samples.
Using glMapBufferRange to read back the contents of the SSBO is extremely
slow on NVIDIA drivers. This is more noticeable at higher internal
resolutions. Using glGetBufferSubData instead does not seem to exhibit
this slowdown.
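The replacement boils down to something like this (buffer name and
size are placeholders; a GL loader header is assumed):

    #include <cstddef>

    void ReadbackSSBO(GLuint ssbo, void* dst, size_t size)
    {
      glBindBuffer(GL_SHADER_STORAGE_BUFFER, ssbo);
      // Avoids the glMapBufferRange(..., GL_MAP_READ_BIT) path that is
      // very slow on NVIDIA drivers at high internal resolutions.
      glGetBufferSubData(GL_SHADER_STORAGE_BUFFER, 0,
                         static_cast<GLsizeiptr>(size), dst);
      glBindBuffer(GL_SHADER_STORAGE_BUFFER, 0);
    }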
Make sure Reset() can't be run concurrently with AddGCAdapter() or
ResetRumble() (which is called on other threads), which can cause
crashes (issue #9462).
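A sketch of the synchronisation (bodies elided; one mutex serialises
the entry points that used to race):

    #include <mutex>

    static std::mutex s_init_mutex;

    static void AddGCAdapter()
    {
      std::lock_guard<std::mutex> lk(s_init_mutex);
      // ... open the device and start the worker threads ...
    }

    static void Reset()
    {
      std::lock_guard<std::mutex> lk(s_init_mutex);
      // ... tear down the handle; can no longer race AddGCAdapter() ...
    }

    void ResetRumble()
    {
      std::lock_guard<std::mutex> lk(s_init_mutex);
      // ... now safe to call from other threads ...
    }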
They now share the same emitter, so they are in the same 128MB range.
This allows us to use B() to jump to the dispatcher. However, this
means we have to regenerate them on every cache clear.
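For context (assuming the AArch64 JIT): a B instruction encodes a
signed 26-bit word offset, i.e. a reach of +/-128MB, which is why
everything has to live in one emitter's range. A sketch of the check
this relies on:

    #include <cstdint>

    // True if a direct AArch64 B/BL can reach 'to' from 'from'.
    static bool IsInBranchRange(const void* from, const void* to)
    {
      const std::int64_t distance = reinterpret_cast<std::intptr_t>(to) -
                                    reinterpret_cast<std::intptr_t>(from);
      return distance >= -0x8000000LL && distance <= 0x7FFFFFCLL;
    }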
EndPlayInput runs on the CPU thread so it can't directly call
UpdateWantDeterminism. PlayController also tries to ChangeDisc
from the CPU Thread which is also invalid. It now just pauses
execution and posts a request to the Host to fix it instead.
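The shape of the fix, as a sketch (Core::QueueHostJob and CPU::Break
exist in Dolphin, though the exact call site differs):

    // On the CPU Thread: stop execution first, then hand the work
    // that is illegal on this thread over to the Host thread.
    CPU::Break();
    Core::QueueHostJob([] { Core::UpdateWantDeterminism(); });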
The Core itself also did dodgy things like PauseAndLock-ing from the
CPU Thread and calling SetState from the EmuThread; these have been
removed.
Fix Frame Advance and FifoPlayer pause/unpause/stop.
CPU::EnableStepping is not atomic but is called from multiple threads
which races and leaves the system in a random state; also instruction
stepping was unstable, m_StepEvent had an almost random value because
of the dual purpose it served which could cause races where CPU::Run
would SingleStep when it was supposed to be sleeping.
FifoPlayer never FinishStateMove()d which was causing it to deadlock.
Rather than partially reimplementing CPU::Run, just use CPUCoreBase
and then call CPU::Run(). More DRY and less likely to have weird bugs
specific to the player (i.e. the previous freezing on pause/stop).
Refactor PowerPC::state into CPU since it manages the state of the
CPU Thread which is controlled by CPU, not PowerPC. This simplifies
the architecture somewhat and eliminates races that can be caused by
calling PowerPC state functions directly instead of using CPU's
(because they bypassed the EnableStepping lock).