This more accurately represents what's going on, and also ends at 0 instead of 1, making some indexing operations easier. This also changes it so that position_matrix_index_cache actually starts from index 0 instead of index 1.
(Specifically, the copy for VertexLoaderManager::position_cache. The position matrix index happens elsewhere, and the float path still has special logic to copy to scratch3.)
This increases accuracy, fixing the white rendering in Major Minor's Majestic March. However, the hardware backends can only have one viewport and scissor rectangle at a time, while sometimes multiple are needed to accurately emulate what is happening. If possible, this will need to be fixed later.
I think this is a relic of D3D9. D3D11 and D3D12 seem to work fine without it. Plus, ViewportCorrectionMatrix just didn't work correctly (at least with the viewports being generated by the new scissor code).
These aren't particularly useful, and make the code a bit more confusing. If for some reason someone wants to test what happens when these functions are disabled, it's easier to just edit the code that implements them. They aren't exposed in the UI, so one would need to restart Dolphin to do it anyways.
Previously we were using this workaround when using framebuffer fetch
to emulate dual source blending, but it seems like we also need to use
it when using framebuffer fetch to emulate logic ops, otherwise some
Adreno devices get a crash when compiling OpenGL ES ubershaders.
Using the workaround in specialized shaders doesn't seem to be
necessary, but I've made the same change there for consistency.
This gets us closer to fixing https://bugs.dolphin-emu.org/issues/12791
but doesn't actually fix it.
On devices which have hardware support for dual source blending
but not logic ops, this lets us skip performing the framebuffer
fetch in situations where the game isn't actually using logic ops.
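Roughly, the decision looks like this (a minimal sketch with made-up capability/state names, not Dolphin's actual fields):

```cpp
// Illustrative sketch only; names do not match Dolphin's code. Framebuffer
// fetch is needed either to emulate blending (when dual source blending is
// unavailable) or to emulate logic ops (when they are unsupported in hardware
// and the game is actually using them).
struct HostCaps
{
  bool dual_source_blend;
  bool hardware_logic_op;
};
struct PixelState
{
  bool uses_blending;
  bool uses_logic_op;
};

bool NeedsFramebufferFetch(const HostCaps& caps, const PixelState& state)
{
  const bool fetch_for_blend = state.uses_blending && !caps.dual_source_blend;
  const bool fetch_for_logic_op = state.uses_logic_op && !caps.hardware_logic_op;
  return fetch_for_blend || fetch_for_logic_op;
}
```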
I.e. flush pokes before running an EFB peek, if the cache tile isn't present. If the cache tile is present, then EFB pokes should have been written to the cache tile and thus don't need to be flushed.
directly_mapped_vars was added in #69 (4129b30494), but for some reason FIFO_BP_LO/HI were split out from it in #885 (65af90669b). As far as I can tell, this code (and the code that existed at the time) is identical, so there's no reason to have it handled separately.
Large amounts of logging can have an impact on performance, so moving the ones that have been determined to not matter to the warn level gives a way to hide those messages without hiding actual errors (and also gives a fast visual way of distinguishing between ignored and non-ignored ones due to the different colors).
It doesn't make sense for alpha to add the bias ONLY when dividing by 2 while color doesn't restrict the bias to the divide-by-2 case; hardware testing indicates that alpha should have the bias.
This fixes the menus in Mario Kart Wii (https://bugs.dolphin-emu.org/issues/11909) but reintroduces the white rectangle in Fortune Street.
This reverts commit 5aaa5141ed (and several other matching changes elsewhere).
Turning off primitive restart increases performance a lot on
Adreno for some reason. We're talking numbers like 50%-100% faster
in situations which are bottlenecked by rendering.
At least in GLSL, after calling EmitVertex() the value of all 'out' variables (including gl_Layer and ps) becomes undefined. On OpenGL it seems like they were unchanged, but on Vulkan they became 0, resulting in bad rendering.
Fixes https://bugs.dolphin-emu.org/issues/12001
At least in MSVC (which is not restricted from targeting C++20), these can be resolved to either std::format_to or fmt::format_to (though I'm not sure why the std one is available). We want the latter.
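As an illustration of the fix (using a made-up formatter, not one of Dolphin's), the unqualified call is simply replaced with an explicitly qualified one:

```cpp
#include <fmt/format.h>

struct Vec2
{
  float x, y;
};

template <>
struct fmt::formatter<Vec2>
{
  constexpr auto parse(fmt::format_parse_context& ctx) { return ctx.begin(); }

  template <typename FormatContext>
  auto format(const Vec2& v, FormatContext& ctx) const
  {
    // An unqualified format_to(ctx.out(), ...) is what MSVC could resolve to
    // either std::format_to or fmt::format_to; qualifying it pins down the
    // intended function.
    return fmt::format_to(ctx.out(), "({}, {})", v.x, v.y);
  }
};
```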
This syntax is allowed by GLSL, but HLSL doesn't allow it. This meant that games using R8 comparisons in equal mode would produce shaders that failed to compile. Super Mario Galaxy's water levels were affected by this.
To do this, I had to decouple framebuffer fetch from shader blending. We need to be able to access framebuffer fetch input when using shader logic ops.
Fixes Bomberman Jetters in single core mode.
When single core mode pauses the CPU to execute the GPU
FIFO it greedily executes the whole thing. Before this commit,
Finish and Token interrupts would happen instantly, not even
taking into account how long the current FIFO window has
taken to execute. The interrupts would be effectively backdated
to the start of this execution window.
This commit does two things: It pipes the current FIFO window
execution time through to the interrupt scheduling and it enforces
a minimum delay of 500 cycles before an interrupt will be fired.
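As a sketch of the new scheduling rule (hypothetical names and signature, not Dolphin's actual CoreTiming code):

```cpp
#include <algorithm>
#include <cstdint>

// The delay is measured from the point the FIFO window has actually reached,
// not from the start of the window, and the interrupt never fires sooner than
// 500 cycles from that point.
constexpr std::int64_t MINIMUM_INTERRUPT_DELAY_CYCLES = 500;

std::int64_t InterruptDelayFromWindowStart(std::int64_t cycles_executed_in_window,
                                           std::int64_t requested_delay)
{
  return cycles_executed_in_window +
         std::max(requested_delay, MINIMUM_INTERRUPT_DELAY_CYCLES);
}
```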
This does the following things:
- Defaults to the runtime automatic number of threads for pre-compiling shaders
- Adds a distinct automatic thread count computation for pre-compilation (which has fewer other things going on
and should scale better beyond 4 cores); a rough sketch follows below
- Removes the unused logical_core_count field from the CPU detection
- Changes the semantics of num_cores from the maximum addressable number of cores to the number of actually
available CPU cores (which is also how it was actually used)
- Updates the computation of the HTT flag now that AMD no longer lies about it for its Zen processors
- Background shader compilation is *not* enabled by default
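A rough sketch of the pre-compilation thread count idea (the helper name and the exact reservation policy are illustrative, not the committed code):

```cpp
#include <algorithm>
#include <thread>

// During pre-compilation little else is running, so most cores can be used;
// a couple are kept free for the emulator's own CPU and GPU threads.
static unsigned int AutoPrecompileThreadCount()
{
  const unsigned int cores = std::max(1u, std::thread::hardware_concurrency());
  return cores > 2 ? cores - 2 : 1;
}
```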
Specifically, when using Manual Texture Sampling, if texture sizes don't match the size the game specifies, things previously broke. That can happen with custom textures, and also with scaled EFB copies at non-native IRs. It breaks most obviously by not scaling the texture coordinates (so only part of the texture shows up), but the hardware wrapping functionality also assumes texture sizes are a power of 2 (or else it will behave weirdly in a way that matches how hardware behaves weirdly). The fix is to provide alternative texture wrapping logic when custom texture sizes are possible.
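A simplified sketch of that alternative wrapping (integer texel coordinates, wrap mode only; not the actual generated shader code):

```cpp
// Illustrative only. When the real texture's size doesn't match the size the
// game configured, wrap with a true modulo (the hardware-style
// "coord & (size - 1)" is only correct for power-of-two sizes) and then
// rescale into the real texture's texel space so the whole texture is used.
int WrapAndScaleTexelCoord(int coord, int game_size, int real_size)
{
  const int wrapped = ((coord % game_size) + game_size) % game_size;
  return wrapped * real_size / game_size;
}
```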
Note that both GLSL and HLSL provide a fwidth (fragment width) function defined as `fwidth(p) = abs(dFdx(p)) + abs(dFdy(p))`. However, it's easy enough to implement this ourselves (and it makes the code a bit more obvious).
The benefit to exposing this over the raw BP state is that adjustments Dolphin makes, such as LOD biases from arbitrary mipmap detection, will work properly.
This function was deprecated in ffmpeg in January[1], while its
replacement got introduced in 2015[2], so now might be the time to start
using it in Dolphin. :)
[1] f7db77bd87
[2] a9a6010637
Now works with games that deliberately avoid invalidating TMEM because
they know textures are too large to fit:
* Sonic Riders
* Metal Arms: Glitch in the System
* Godzilla: Destroy All Monsters Melee
* NHL Slapshot
* Tak and the Power of Juju
* Night at the Museum: Battle of the Smithsonian
* 428: Fūsa Sareta Shibuya de
Currently the logic for addressing the individual TexUnits is splattered all
across Dolphin's codebase; this commit attempts to consolidate it all into a
single place and formalise it using our new TexUnitAddress struct.
Previously, when playing back a movie, you could not see the total frame count of a movie, only the total number of input polls.
This change simply shows the total frame count on movie playback.
Note that this change also results in the framecount and framecount total ALWAYS being displayed if show_movie_window is true, regardless of whether or not m_ShowFrameCount is true. I believe this is fine, as TASers are much more likely to reference the framecount than the input poll count.
The previous code from #7950 only clamps correctly when the EFB copy's
left and top coordinates are (0, 0).
Now we should handle all situations.
Spyro: A Hero's Tail is an example of a game that does an oversized
EFB copy with a non-zero origin.
This adjusts the NaN replacement logic introduced in #9928 to work around the HLSL compiler optimizing away calls to isnan, which caused that functionality to not work with ubershaders on D3D11 and D3D12 (it did work with specialized shaders, despite a warning being logged for both; that warning is also now gone). Note that the `D3DCOMPILE_IEEE_STRICTNESS` flag did not solve this issue, despite the warning suggesting that it might.
Suggested by @kayru and @jamiehayes.
Fixes https://bugs.dolphin-emu.org/issues/12620
The changed code did not match the corresponding code in VertexShaderGen. Some parts of the sky have 2 color channels in each vertex, while others only have 1, despite only color channel 0 being used and XFMEM_SETNUMCHAN being set to 1 for both of them. The old code (from #4601) caused channel 0 to be set to channel 1 if the vertex contained both color channels but the number of channels was set to 1, which is wrong.
Outputting the XFB at the end of scanout adds about a frame of latency, and
since most games don't change VI registers during scanout, we can get away
with outputting the XFB at the start of scanout instead. WWE Crush Hour is
the (only currently known) exception, which has flickering problems when
doing it this way.
This adds a path to perform the output at the end of scanout, and gates
it behind an option which defaults to using the latency-reducing
pre-scanout path.
This was added because YAGCD's info on MAXANISO (near TX_SETMODE0 in Section 5.11.1) claims it's the case, but Extrems says it does work. I haven't tested anything myself, and Dolphin still does not actually implement anisotropic filtering based on this field.
Manually encoding and decoding logical immediates is error-prone.
Using ORRI2R and friends lets us avoid doing the work manually,
but in exchange, there is a runtime performance penalty. It's
probably rather small, but still, it would be nice if we could
let the compiler do the work at compile-time. And that's exactly
what this commit does, so now I have no excuse for trying to
manually write logical immediates anymore.
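To illustrate the compile-time idea, here is a simplified constexpr validity check (not Dolphin's Arm64Emitter code, which additionally produces the actual N:immr:imms encoding):

```cpp
#include <cstdint>

// An AArch64 logical immediate is a 2/4/8/16/32/64-bit element consisting of
// a rotated run of consecutive ones, replicated across all 64 bits; all-zeros
// and all-ones are not encodable. A constexpr check turns a bad constant into
// a build error instead of a runtime assert.
constexpr bool IsRunOfOnes(std::uint64_t x)
{
  // A non-empty, non-wrapping run of ones is cleared entirely (and nothing
  // else is set) when its lowest set bit is added to it.
  return x != 0 && ((x + (x & (~x + 1))) & x) == 0;
}

constexpr bool IsEncodableLogicalImm(std::uint64_t value)
{
  if (value == 0 || value == ~std::uint64_t{0})
    return false;

  for (unsigned int size = 2; size <= 64; size *= 2)
  {
    const std::uint64_t mask =
        size == 64 ? ~std::uint64_t{0} : (std::uint64_t{1} << size) - 1;
    const std::uint64_t element = value & mask;

    // The element must be replicated across the whole 64-bit value...
    bool repeats = true;
    for (unsigned int i = size; i < 64; i += size)
    {
      if (((value >> i) & mask) != element)
        repeats = false;
    }
    if (!repeats)
      continue;

    // ...and must be a rotated run of ones: either the ones are contiguous,
    // or the zeros are (the run wraps around the element).
    if (IsRunOfOnes(element) || IsRunOfOnes(~element & mask))
      return true;
  }
  return false;
}

static_assert(IsEncodableLogicalImm(0x0000FFFF0000FFFF), "encodable pattern");
static_assert(!IsEncodableLogicalImm(0x123456789ABCDEF0), "not encodable");
```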
SPDX standardizes how source code conveys its copyright and licensing
information. See https://spdx.github.io/spdx-spec/1-rationale/ . SPDX
tags are adopted in many large projects, including things like the Linux
kernel.
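For reference, the header that SPDX prescribes is just a short machine-readable comment at the top of each source file; the year below is only an example:

```cpp
// Copyright 2021 Dolphin Emulator Project
// SPDX-License-Identifier: GPL-2.0-or-later
```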
This fixes bounding box shaders failing to compile under Vulkan, due to
differences between GLSL and HLSL in the return value of vector
comparisons and what types these functions accept. I included all() for
the sake of completeness.
At higher resolutions, our bounding box dimensions end up being
slightly larger than original hardware in some cases. This is not
necessarily wrong, it's just an artifact of rendering at a higher
resolution, due to bringing out detail that wouldn't have appeared on
original hardware. It causes a texel to fall partially on what would
have been a single pixel at native resolution, resulting in the
coordinates getting bumped up to the next valid value. In many cases,
these slightly larger bounding boxes are perfectly fine, as games don't
hard-code expected dimensions. It is problematic in Paper Mario TTYD
though, for a somewhat complicated reason.
Paper Mario TTYD frequently uses EFB copies to pre-render a bunch of
animation frames for a character sprite (especially in Chapter 2), so
that it can then render 100 or more of them without bringing the
GameCube to its knees. Based on my observation, the game seems to set
aside a region of memory to store these EFB copies. This region is
obviously fairly small, as the GameCube only has 24MB of RAM. There are
2 rooms in Chapter 2 where you fight a horde of as many as 100 Jabbies,
which are also rendered using EFB copies, so in this room the game ends
up making 130(!) EFB copies just for Puni and Jabbi sprites. This seems
to nearly fill the region of memory it set aside for them.
Unfortunately, our slightly larger bounding boxes at higher resolutions
results in overflowing this memory, causing very strange behavior. Some
EFB copies partially overlap game state, resulting in reading it as a
garbage RGB5A3 texture that constantly changes. Others apparently
somehow trigger a corner case in our persistent buffer mapping, causing
them to partially overwrite earlier EFB copies.
What this change does is only include the screen coordinates that align
with the equivalent native resolution pixel centers, which generally
results in the bounding boxes being more in line with original
hardware. It isn't perfect, but it's enough to fix Paper Mario TTYD's
Jabbi rooms by avoiding the buffer overflow. Notably, it is more
accurate at odd resolutions than at even resolutions. Native resolution
is completely unaffected by this change, as should be the case. This
change may also have a small positive impact on shader performance at
higher resolutions, as there will be fewer atomic operations performed.
Running the min/max operation on the upside down, quad-rounded pixel
coordinates before inverting them to the standard upper-left origin
produces wrong results. Therefore, we need to do the inversion before
rounding to pixel quads.
Fragment coordinates always have a 0.5 offset from a whole integer, as
that's where the pixel center is on modern GPUs. Therefore, we want to
always round the fragment coordinates down for bounding box
calculations. This also renders the pixel center offset useless, as 0.5
vs ~0.5833333 makes no difference when rounding down.
The SDK seems to write "default" bounding box values before every draw
(1023 0 1023 0 are the only values encountered so far, which happen to
be the extents allowed by the BP registers) to reset the registers for
comparison in the pixel engine, and presumably to detect whether GX has
updated the registers with real values. Handling these writes and
returning them on read when bounding box emulation is disabled or
unsupported, even without computing real values from rendering, seems
to prevent games from corrupting memory or crashing.
This obviously does not fix any effects that rely on bounding box
emulation, but having the game not clobber its own code/data or just
outright crash is a definite improvement.