It seems that the newer version of fmt gets tripped up by bitfields
within structs (bitfield members can't bind to fmt's forwarding
references). However, we can just specify the intended type where
necessary to get around this.
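A minimal sketch of the workaround, using a hypothetical register struct (the cast is the important part):

```cpp
#include <cstdint>
#include <string>

#include <fmt/format.h>

using u32 = std::uint32_t;

// Hypothetical register layout for illustration.
struct CopyReg
{
  u32 clamp_top : 1;
  u32 target_pixel_format : 4;
};

std::string Describe(const CopyReg& reg)
{
  // Bitfield members can't bind to fmt's forwarding references, so cast
  // them to the intended underlying type before formatting.
  return fmt::format("clamp_top={} format={}", static_cast<u32>(reg.clamp_top),
                     static_cast<u32>(reg.target_pixel_format));
}
```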
Now that we have an actual interface to manage things, we can stop
duplicating the calls to the pixel shader manager and remove the
need to remember to actually do so when disabling or enabling the
bounding box.
Rather than expose the bounding box members directly, we can instead
provide an interface for code to use. This makes it nicer to transition
away from global data, as the interface function names are already in
place.
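A sketch of the kind of interface this describes; the function names here are illustrative, not necessarily the ones used in the actual change:

```cpp
#include <cstddef>
#include <cstdint>

using u16 = std::uint16_t;

// Illustrative interface; the real function names may differ. Callers no
// longer reach into the members directly, and enabling/disabling funnels
// through one place that can also notify the pixel shader manager.
namespace BoundingBox
{
void Enable();
void Disable();
bool IsEnabled();

u16 GetCoordinate(std::size_t index);
void SetCoordinate(std::size_t index, u16 value);
}  // namespace BoundingBox
```

With call sites going through these functions, later swapping the backing storage from globals to per-instance state only touches the implementation, not the callers.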
Migrates most of VideoCommon over to using fmt, with the exception being
the shader generator code. The shader generators are quite large and
have more corner cases to deal with in terms of conversion (shaders have
braces in them, so we need to make sure to escape them).
Because of the large amount of code that would need to be converted, the
conversion of VideoCommon will be in two parts:
- This change (which converts over the general case string formatting),
- A follow up change that will specifically deal with converting over
the shader generators.
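For context on the brace issue mentioned above: fmt treats `{}` as a replacement field, so literal braces in generated shader source must be doubled. A small illustrative snippet of what the follow-up change has to do throughout:

```cpp
#include <string>

#include <fmt/format.h>

std::string GenerateStub(int num_color_outputs)
{
  // Literal braces in the output are escaped as {{ and }}, while {} remains
  // a replacement field. Shader source is full of braces, which is what
  // makes converting the shader generators the more delicate half.
  return fmt::format("void main() {{\n"
                     "  // {} color output(s)\n"
                     "}}\n",
                     num_color_outputs);
}
```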
Since the copy X and Y coordinates/sizes are 10-bit, the game can configure a
copy region up to 1024x1024. Hardware tests have found that the number of bytes
written does not depend on the configured stride; instead, it is based on the
size registers, writing beyond the length of a single row. The data written
for the pixels which lie outside the EFB bounds does not wrap around; instead,
it returns different colors based on the pixel format of the EFB.
This suggests it's not based on coordinates, but instead on memory addresses.
The effect of a within-bounds size but an out-of-bounds offset
(e.g. offset 320,0, size 640,480) is the same.
As it would be difficult to emulate the exact behavior of out-of-bounds reads,
instead of writing the junk data, we don't write anything to RAM at all for
over-sized copies, and clamp to the EFB borders for over-offset copies.
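A minimal sketch of that policy, assuming illustrative names and the standard 640x528 EFB dimensions:

```cpp
#include <algorithm>

struct CopyRegion
{
  int x, y, width, height;
};

// Returns false when nothing should be written to RAM at all.
bool ClampCopyRegion(CopyRegion& region)
{
  constexpr int EFB_WIDTH = 640;
  constexpr int EFB_HEIGHT = 528;

  // Over-sized copies would produce junk data we can't cheaply emulate,
  // so the copy is skipped entirely.
  if (region.width > EFB_WIDTH || region.height > EFB_HEIGHT)
    return false;

  // Over-offset copies are clamped to the EFB borders instead.
  region.x = std::min(region.x, EFB_WIDTH - region.width);
  region.y = std::min(region.y, EFB_HEIGHT - region.height);
  return true;
}
```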
Also makes y_scale a dynamic parameter for EFB copies, as it doesn't
make sense to keep it as part of the uid; otherwise we're generating
redundant shaders.
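The idea, with illustrative field names: keep y_scale out of the shader uid and feed it in as a per-copy constant instead:

```cpp
#include <cstdint>

using u32 = std::uint32_t;

// Keys the compiled shader: one variant per combination of these fields.
struct EFBCopyUid
{
  u32 dst_format;
  bool is_depth_copy;
  bool is_intensity;
};

// Uploaded per copy instead; a varying y_scale no longer multiplies the
// number of compiled shader variants.
struct EFBCopyConstants
{
  float y_scale;
};
```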
Tested on Linux with Intel Skylake integrated graphics and
blend_func_extended force-disabled, as it's the only platform I have
that doesn't crash with ubershaders and supports fb_fetch.
This is a remake of https://github.com/dolphin-emu/dolphin/pull/3749
Full credit goes to phire.
Old message:
"If none of the texture registers have changed and TMEM hasn't been invalidated or changed in other ways, we can blindly reuse the old texture cache entries without rehashing.
Not only does this fix the bloom effect in Spyro: A Hero's Tail (The game abused texture cache) but it will also provide speedups for other games which use the same texture over multiple draw calls, especially when safe texture cache is in use."
Changed the pr per phire's instructions to only return the current texture(s) if none of the texture registers were changed. If any texture register was changed, fall back to the default hashing and rebuilding textures from memory.
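A sketch of the fast path being described, with illustrative flag names:

```cpp
#include <array>
#include <cstdint>

using u32 = std::uint32_t;

struct TCacheEntry;

// Set whenever a texture register is written or TMEM is invalidated.
extern bool s_texture_state_dirty;
extern std::array<TCacheEntry*, 8> s_bound_textures;

TCacheEntry* LoadTextureSlow(u32 stage);  // hash + rebuild from memory

TCacheEntry* LoadTexture(u32 stage)
{
  // Nothing relevant changed since the last draw: blindly reuse the
  // cache entry without rehashing guest memory.
  if (!s_texture_state_dirty && s_bound_textures[stage] != nullptr)
    return s_bound_textures[stage];

  // Otherwise fall back to the default hashing path.
  return s_bound_textures[stage] = LoadTextureSlow(stage);
}
```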
It was only implemented in OpenGL, though the option was visible in both
backends, leading to memory leaks if you enabled it in DirectX.
And it wasn't particularly useful as a debug feature, as it only showed
where in the EFB the copies were taken from, not what format they were,
what the copies were used for, or what content was in the EFB at that
point in time.
Also, it stretched the copy regions relative to the window, so the
on-screen regions don't even line up with the window unless the game
used the full EFB (some PAL games) and your game image is stretched to
the full window.
Texture updates have been moved into TextureCache, while
TMEM updates were moved into bpmem. Code for handling
efb2ram updates was added to TextureCache.
There was a bug for preloaded RGBA8 textures: it only copied
half the texture. The TODO was wrong too.
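A sketch of the likely shape of that fix, assuming the TMEM layout where an RGBA8 texture occupies two banks (AR bytes in one, GB bytes in the other); the names and parameters here are illustrative:

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>

using u8 = std::uint8_t;

// RGBA8 textures occupy two TMEM banks: the AR bytes in one, the GB bytes
// in the other. Copying only the first half leaves half the texture stale,
// which is the bug described above.
void PreloadRGBA8(u8* dst_even_bank, u8* dst_odd_bank, const u8* src,
                  std::size_t half_size)
{
  std::memcpy(dst_even_bank, src, half_size);             // AR half
  std::memcpy(dst_odd_bank, src + half_size, half_size);  // GB half (previously missed)
}
```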
Baten Kaitos allocates its XFBs from a tagged heap
structure. With the old calculation, too many lines
were being written, so the tag of the allocation
after the XFB was being corrupted. Fixes the crash
mentioned in this comment:
https://code.google.com/p/dolphin-emu/issues/detail?id=7734#c6
Though just returning the last written value sounds better, this crashes Paper Mario.
In my opinion, gfx issues are fine on older GPUs, but crashes should not happen.
We will now rely on Memory::CopyFromEmu to do bounds checking.
Some games actually load palettes from 0x00000000, despite the
fact that no valid palette data should ever be there.
Fixes Issue 7792.
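A minimal sketch of what that load path might look like; the function and variable names are illustrative, with only Memory::CopyFromEmu taken from the message above:

```cpp
#include <cstddef>
#include <cstdint>

using u8 = std::uint8_t;
using u32 = std::uint32_t;

namespace Memory
{
// Bounds-checked copy out of emulated memory (declared here for the sketch).
void CopyFromEmu(void* data, u32 address, std::size_t size);
}

extern u8 s_tex_mem[];  // emulated TMEM

void LoadPalette(u32 guest_address, u32 tmem_offset, std::size_t size)
{
  // No special-casing of bogus addresses like 0x00000000 here:
  // CopyFromEmu performs the bounds checking itself.
  Memory::CopyFromEmu(s_tex_mem + tmem_offset, guest_address, size);
}
```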
It's a relatively big commit (less big with -w), but it's hard to test
any of this separately...
The basic problem is that in netplay or movies, the state of the CPU
must be deterministic, including when the game receives notification
that the GPU has processed FIFO data. Dual core mode notifies the game
whenever the GPU thread actually gets around to doing the work, so it
isn't deterministic. Single core mode is deterministic because it notifies the game
'instantly' (after processing the data synchronously), but it's too slow
for many systems and games.
My old dc-netplay branch worked as follows: everything worked as normal
except the state of the CP registers was a lie, and the CPU thread only
delivered results when idle detection triggered (waiting for the GPU if
they weren't ready at that point). Usually, a game is idle iff all the
work for the frame has been done, except for a small amount of work
depending on the GPU result, so neither the CPU nor the GPU waiting on
the other affected performance much. However, it's possible that the
game could be waiting for some earlier interrupt, and any of several
games which, for whatever reason, never went into a detectable idle
(even when I tried to improve the detection) would never receive results
at all. (The current method should have better compatibility, but it
also has slightly higher overhead and breaks some other things, so I
want to reimplement this, hopefully with less impact on the code, in the
future.)
With this commit, the basic idea is that the CPU thread acts as if the
work has been done instantly, like single core mode, but actually hands
it off asynchronously to the GPU thread (after backing up some data that
the game might change in memory before it's actually done). Since the
work isn't done, any feedback from the GPU to the CPU, such as real
XFB/EFB copies (virtual are OK), EFB pokes, performance queries, etc. is
broken; but most games work with these options disabled, and there is no
need to try to detect what the CPU thread is doing.
Technically: when the flag g_use_deterministic_gpu_thread (currently
stuck on) is on, the CPU thread calls RunGpu like in single core mode.
This function synchronously copies the data from the FIFO to the
internal video buffer and updates the CP registers, interrupts, etc.
However, instead of the regular ReadDataFromFifo followed by running the
opcode decoder, it runs ReadDataFromFifoOnCPU ->
OpcodeDecoder_Preprocess, which relatively quickly scans through the
FIFO data, detects SetFinish calls etc., which are immediately fired,
and saves certain associated data from memory (e.g. display lists) in
AuxBuffers (a parallel stream to the main FIFO, which is a bit slow at
the moment), before handing the data off to the GPU thread to actually
render. That makes up the bulk of this commit.
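In outline, the split described above might be sketched like this (simplified; the declarations stand in for the real code):

```cpp
#include <cstdint>

using u32 = std::uint32_t;

// Assumed declarations standing in for the real ones:
extern bool g_use_deterministic_gpu_thread;
void ReadDataFromFifo(u32 read_ptr);       // normal path: fill the video buffer
void ReadDataFromFifoOnCPU(u32 read_ptr);  // CPU-thread copy into the buffer
void OpcodeDecoder_Run();                  // actually render
void OpcodeDecoder_Preprocess();           // scan only: fire SetFinish, fill AuxBuffers

void ProcessFifoData(u32 read_ptr)
{
  if (g_use_deterministic_gpu_thread)
  {
    // CPU thread: update CP state now, stash volatile data (display lists)
    // in AuxBuffers, and hand the buffer to the GPU thread to render later.
    ReadDataFromFifoOnCPU(read_ptr);
    OpcodeDecoder_Preprocess();
  }
  else
  {
    // Single core: do the work synchronously right here.
    ReadDataFromFifo(read_ptr);
    OpcodeDecoder_Run();
  }
}
```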
In various circumstances, including the aforementioned EFB pokes and
performance queries as well as swap requests (i.e. the end of a frame -
we don't want the CPU potentially pumping out frames too quickly and the
GPU falling behind*), SyncGPU is called to wait for actual completion.
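The synchronization point itself could be as simple as a condition-variable wait; this is purely illustrative, not the actual implementation:

```cpp
#include <condition_variable>
#include <mutex>

// Illustrative state; in reality this tracks the video buffer pointers.
static std::mutex s_sync_mutex;
static std::condition_variable s_sync_cv;
static bool s_gpu_caught_up = false;  // set by the GPU thread when it drains the buffer

// Called for EFB pokes, performance queries, swap requests, etc.
void SyncGPU()
{
  std::unique_lock<std::mutex> lock(s_sync_mutex);
  s_sync_cv.wait(lock, [] { return s_gpu_caught_up; });
}
```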
The overhead mainly comes from OpcodeDecoder_Preprocess (which is,
again, synchronous), as well as the actual copying.
Currently, display lists and such are escrowed from main memory even
though they usually won't change over the course of a frame, and
textures are not even though they might, resulting in a small chance of
graphical glitches. When the texture locking (i.e. fault on write) code
lands, I can make this all correct and maybe a little faster.
* This suggests an alternate determinism method of just delaying results
until a short time before the end of each frame. For all I know this
might mostly work - I haven't tried it - but if any significant work
hinges on the completion of render to texture etc., the frame will be
missed.