Using 8-bit integer math here led to precision loss for depth copies,
which broke various effects in games, e.g. lens flare in MK:DD.
It's unlikely the console implements this as a floating-point multiply
(fixed-point perhaps), but since we have the float round trip in our
EFB2RAM shaders anyway, it's not going to make things any worse. If we
do rewrite our shaders to use integer math completely, then it might be
worth switching this conversion back to integers.
However, the range of the values (format) should be known, or we should
expand all values out to 24 bits first.
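
As a rough sketch of the conversion (names and the exact constant are illustrative, not the actual shader code):

    #include <cmath>
    #include <cstdint>

    // Expand the 24-bit depth value, apply the copy coefficient as a float
    // multiply, and convert back. Doing the multiply on 8-bit components
    // instead is what caused the precision loss.
    static uint32_t ScaleDepth24(uint32_t depth24, float coefficient)
    {
      const float normalized = static_cast<float>(depth24) / 16777215.0f;  // 2^24 - 1
      const float scaled = normalized * coefficient;
      return static_cast<uint32_t>(std::lround(scaled * 16777215.0f)) & 0xFFFFFF;
    }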
Also makes y_scale a dynamic parameter for EFB copies: it doesn't make
sense to keep it as part of the uid, since that would generate redundant
shaders.
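
Roughly, the copy shader now reads y_scale from per-copy constants instead of having it baked into the uid; something along these lines (struct/function names are illustrative, not Dolphin's actual ones):

    #include <cstddef>

    // Per-copy parameters uploaded when the copy is executed; one compiled
    // shader now covers every y_scale instead of one variant per value.
    struct EFBCopyParams
    {
      float y_scale;
      float gamma;
    };

    void UploadCopyConstants(const void* data, size_t size);  // stand-in for the backend's constant upload

    void QueueEFBCopy(float y_scale, float gamma)
    {
      const EFBCopyParams params{y_scale, gamma};
      UploadCopyConstants(&params, sizeof(params));
    }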
Some code in Dolphin read the value of g_ActiveConfig.iEFBScale directly,
which resulted in Auto being treated as a scale of 0x rather than the
actual automatically selected scale.
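
The fix boils down to resolving the Auto case before the value is used; a sketch (helper and enumerator names are illustrative):

    // Reading g_ActiveConfig.iEFBScale directly returns 0 for Auto. Any code
    // that needs the real scale has to resolve the Auto case first.
    int GetEffectiveEFBScale()
    {
      if (g_ActiveConfig.iEFBScale == EFB_SCALE_AUTO)        // "Auto" sentinel, stored as 0
        return g_renderer->GetAutomaticallySelectedScale();  // assumed accessor for the chosen scale
      return g_ActiveConfig.iEFBScale;
    }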
Improve bookkeeping around formats, hopefully making the code less confusing.
- Rename TlutFormat -> TLUTFormat to follow conventions.
- Use enum classes to prevent using a texture format where an EFB copy format
  is expected, or vice versa (see the sketch after this list).
- Use common EFBCopyFormat names regardless of depth and YUV configurations.
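
A minimal sketch of the separation (enumerators trimmed, values omitted):

    enum class TextureFormat { I4, I8, IA4, IA8, RGB565, RGB5A3, RGBA8 /* ... */ };
    enum class EFBCopyFormat { R4, RA8, RGB565, RGB5A3, RGBA8 /* ... */ };
    enum class TLUTFormat    { IA8, RGB565, RGB5A3 };

    void LoadTexture(TextureFormat format);

    // LoadTexture(EFBCopyFormat::R4);  // no longer compiles: no implicit conversion
    // LoadTexture(TextureFormat::I4);  // fine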
Since ResolveSubresource cannot be used with depth textures (and throws an error with the debug layer enabled), use a shader which selects the minimum depth value from all samples.
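
Not the exact shader, but the shape of it (HLSL shown as a string purely for illustration):

    // Read every MSAA sample of the depth texture and keep the minimum.
    constexpr const char DEPTH_RESOLVE_PS[] = R"(
    Texture2DMS<float> depth_tex : register(t0);

    float main(float4 pos : SV_Position) : SV_Depth
    {
      int2 coords = int2(pos.xy);
      uint width, height, samples;
      depth_tex.GetDimensions(width, height, samples);

      float depth = depth_tex.Load(coords, 0);
      for (uint i = 1; i < samples; i++)
        depth = min(depth, depth_tex.Load(coords, i));
      return depth;
    }
    )";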
Changes the sampler used by the XFBEncoder to use a linear filter, rather than point, to match GL behavior.
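
In D3D11 terms this is a one-field change in the sampler description, roughly:

    D3D11_SAMPLER_DESC desc = CD3D11_SAMPLER_DESC(CD3D11_DEFAULT());
    desc.Filter = D3D11_FILTER_MIN_MAG_MIP_LINEAR;  // previously D3D11_FILTER_MIN_MAG_MIP_POINT
    desc.AddressU = D3D11_TEXTURE_ADDRESS_CLAMP;
    desc.AddressV = D3D11_TEXTURE_ADDRESS_CLAMP;

    ID3D11SamplerState* sampler = nullptr;
    device->CreateSamplerState(&desc, &sampler);  // 'device' being the backend's ID3D11Device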
Instead of blindly using the expected width, clamp it to the stride of the
buffer which dx11 returns. This prevents us from reading invalid memory
at the end of textures.
This doesn't solve the underlying issue of what to do when a game tries to
copy from outside the EFB. On real hardware it returns random noise (biased
to all ones).
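
The clamp itself is essentially the following (variable names are illustrative; RowPitch is the stride reported when the staging texture is mapped):

    D3D11_MAPPED_SUBRESOURCE map;
    context->Map(staging_texture, 0, D3D11_MAP_READ, 0, &map);

    const size_t wanted_row_bytes = expected_width * bytes_per_pixel;
    const size_t row_bytes = std::min<size_t>(wanted_row_bytes, map.RowPitch);  // never read past the stride

    for (unsigned y = 0; y < num_rows; ++y)
      memcpy(dst + y * dst_pitch, static_cast<const uint8_t*>(map.pData) + y * map.RowPitch, row_bytes);

    context->Unmap(staging_texture, 0);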
Added a few duplicated depth copy texture formats to the enum
in TextureDecoder.h. These texture formats were already implemented
in TextureCacheBase and the ogl/dx11 texture cache implementations.
Rendering EFB textures currently crashes the D3D backend when MSAA is enabled, because the depth texture wasn't correctly resolved. One example is starting Pokemon Snap with D3D and MSAA enabled.
A texture was still bound when OMSetRenderTargets was called.
The state manager's resource cache must be flushed to unbind it.
This fixes The Last Story cut scene rendering.
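
With the D3D state manager, the fix looks roughly like this (call names assumed, not exact):

    // Unbind the texture from the shader stage and flush the cached state so
    // the context actually sees the unbind before the render target switch.
    D3D::stateman->SetTexture(slot, nullptr);
    D3D::stateman->Apply();
    D3D::context->OMSetRenderTargets(1, &render_target_view, depth_stencil_view);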
With strings, we don't need to care about passing in a length, since the string stores it internally. So these functions no longer need a length parameter at all.
This also kills off some sprintf_s calls.
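
As an illustration of the direction (a hypothetical function, not one of the ones touched here):

    #include <string>

    // Old shape:  void MakeGameIDString(char* out, size_t length);  // filled via sprintf_s
    // New shape:  the returned std::string carries its own length.
    std::string MakeGameIDString(const std::string& game_id, int revision)
    {
      return game_id + " (Revision " + std::to_string(revision) + ")";
    }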