Also makes y_scale a dynamic parameter for EFB copies: it doesn't make
sense to keep it as part of the uid, since that only generates redundant
shaders.
This will generate one shader per copy format. For now, it is the same
shader with the colmat hard-coded, so it should already improve GPU
performance a bit, but a rewrite of the shader generator is suggested.
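A minimal sketch of the idea, using hypothetical names (EFBCopyShaderUid,
EFBCopyDynamicParams) rather than the actual VideoCommon types: the copy
format stays in the uid so one shader is generated per format, while
y_scale is uploaded per copy through a constant buffer instead of being
baked into the uid.

    // Hypothetical sketch: the uid carries only state that changes the
    // generated shader source, so one shader is produced per copy format.
    enum class EFBCopyFormat { R8, RA8, RGB565, RGBA8 /* illustrative */ };

    struct EFBCopyShaderUid          // hypothetical name, used as the shader cache key
    {
      EFBCopyFormat copy_format;     // selects which shader variant is generated
      bool depth_copy;
      // NOTE: no y_scale here; keeping it in the uid multiplies the shader count.
    };

    struct EFBCopyDynamicParams      // constant buffer updated for every copy
    {
      float y_scale;                 // dynamic: changing it does not need a new shader
      float gamma;
    };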
Half of the patch was done by linkmauve1:
VideoCommon: Reorganise the shader writes.
Improve bookkeeping around formats. Hopefully make code less confusing.
- Rename TlutFormat -> TLUTFormat to follow conventions.
- Use enum classes to prevent using a Texture format where an EFB Copy format
is expected, or vice versa (see the sketch after this list).
- Use common EFBCopyFormat names regardless of depth and YUV configurations.
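A minimal sketch of the enum-class separation; the member lists are
illustrative, not the full set used by VideoCommon:

    // Scoped enums: a TextureFormat cannot be passed where an EFBCopyFormat
    // is expected, so mix-ups fail to compile.
    enum class TextureFormat { I4, I8, IA4, IA8, RGB565, RGB5A3, RGBA8, C4, C8, CMPR };
    enum class EFBCopyFormat { R4, R8, RA4, RA8, RGB565, RGB5A3, RGBA8, A8, G8 };
    enum class TLUTFormat { IA8, RGB565, RGB5A3 };

    void RequestEFBCopy(EFBCopyFormat format) { /* ... */ }  // hypothetical signature

    void Example()
    {
      RequestEFBCopy(EFBCopyFormat::R8);      // OK
      // RequestEFBCopy(TextureFormat::I8);   // does not compile: no implicit conversion
    }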
This should get Donkey Kong Country Returns characters to be as broken as they should be. They will be fixed in a later PR.
Expected results:
efbtex: characters are always flickering or invisible, no matter what scaling or IR setting.
efb2ram: characters are always working properly at 1x IR, no matter what scaling or IR setting.
Three or four times now, pointers being left in an inconsistent state has
tripped up other developers working on the video backend renderers.
Global (ugh) resources are now put into a unique_ptr and will always have a
well-defined state: null or not null.
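A minimal sketch of the pattern, assuming a hypothetical Renderer class
(the real backends provide their own renderer types):

    #include <memory>

    class Renderer
    {
    public:
      virtual ~Renderer() = default;
    };

    // The global (ugh) resource lives in a unique_ptr: it is always either
    // null or a fully constructed object, never a dangling raw pointer.
    static std::unique_ptr<Renderer> g_renderer;

    void InitBackend()     { g_renderer = std::make_unique<Renderer>(); }
    void ShutdownBackend() { g_renderer.reset(); }  // back to a well-defined null state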
Since ResolveSubresource cannot be used with depth textures (and produces an error with the debug layer enabled), use a shader that selects the minimum depth value from all samples.
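A rough sketch of such a resolve pass, written as an HLSL pixel shader
embedded in a C++ string; names and sample-count handling are illustrative,
not the actual Dolphin shader, and it assumes the standard depth range where
the minimum value is closest to the viewer:

    // SAMPLES is assumed to be defined at shader-compile time.
    static const char MIN_DEPTH_RESOLVE_PS[] = R"(
    Texture2DMS<float> depth_tex : register(t0);

    float main(float4 pos : SV_Position) : SV_Depth
    {
        int2 coords = int2(pos.xy);
        float depth = depth_tex.Load(coords, 0);
        for (int i = 1; i < SAMPLES; i++)
            depth = min(depth, depth_tex.Load(coords, i));
        return depth;
    }
    )";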
Changes the sampler used by XFBEncoder to use a linear filter, rather than point, to match GL behavior.
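A minimal sketch of that sampler in D3D11 terms; only the filter is the point
of the change, the other field values here are illustrative:

    #include <d3d11.h>

    ID3D11SamplerState* CreateXFBLinearSampler(ID3D11Device* device)
    {
      D3D11_SAMPLER_DESC desc = {};
      desc.Filter = D3D11_FILTER_MIN_MAG_MIP_LINEAR;  // was ..._MIP_POINT before this change
      desc.AddressU = D3D11_TEXTURE_ADDRESS_CLAMP;
      desc.AddressV = D3D11_TEXTURE_ADDRESS_CLAMP;
      desc.AddressW = D3D11_TEXTURE_ADDRESS_CLAMP;
      desc.ComparisonFunc = D3D11_COMPARISON_NEVER;
      desc.MaxLOD = D3D11_FLOAT32_MAX;

      ID3D11SamplerState* sampler = nullptr;
      device->CreateSamplerState(&desc, &sampler);
      return sampler;
    }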
Instead of having special-case code for efb2tex that ignores hashes,
the only difference between efb2tex and efb2ram now is that efb2tex
writes zeros to the memory instead of the actual texture data.
Keep in mind, though, that all efb2tex copies will have a hash of zero
as their hash.
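A minimal sketch of the unified path, with hypothetical names and stub
helpers standing in for the real VideoCommon functions:

    #include <cstddef>
    #include <cstdint>
    #include <cstring>

    // Stand-ins for the real helpers, purely for illustration.
    static void EncodeEFBToRAM(uint8_t* dst, size_t size) { std::memset(dst, 0x42, size); }
    static uint64_t ComputeHash(const uint8_t* data, size_t size)
    {
      uint64_t h = 0;
      for (size_t i = 0; i < size; i++)
        h = h * 31 + data[i];
      return h;
    }

    // Unified copy path: efb2tex and efb2ram share the same bookkeeping and
    // hashing; only the bytes written to emulated RAM differ.
    uint64_t CopyEFB(uint8_t* dst, size_t size, bool copy_to_ram)
    {
      if (copy_to_ram)
        EncodeEFBToRAM(dst, size);   // efb2ram: write the actual texture data
      else
        std::memset(dst, 0, size);   // efb2tex: zero-fill instead of texture data

      // Every efb2tex copy therefore ends up with the same (zero-data) hash.
      return ComputeHash(dst, size);
    }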
Added a few duplicated depth copy texture formats to the enum
in TextureDecoder.h. These texture formats were already implemented
in TextureCacheBase and the OGL/DX11 texture cache implementations.
A number of games make an EFB copy in I4/I8 format, then use it as a
texture in C4/C8 format. Detect when this happens, and decode the copy on
the GPU using the specified palette.
This has a few advantages: it allows using EFB2Tex for a few more games,
it preserves the resolution of scaled EFB copies, and it's probably a
bit faster.
D3D only at the moment, but porting to OpenGL should be straightforward.
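A rough sketch of the detection step, with hypothetical names; the actual
texture cache logic is considerably more involved:

    #include <cstdint>

    enum class TextureFormat { I4, I8, C4, C8 /* illustrative */ };

    struct CacheEntry
    {
      TextureFormat format;   // format the EFB copy was made in
      bool is_efb_copy;
    };

    // An I4/I8 EFB copy that is later looked up as a C4/C8 (paletted) texture
    // can be converted on the GPU instead of being re-decoded from RAM.
    static bool NeedsPaletteConversion(const CacheEntry& entry, TextureFormat requested)
    {
      return entry.is_efb_copy &&
             ((entry.format == TextureFormat::I4 && requested == TextureFormat::C4) ||
              (entry.format == TextureFormat::I8 && requested == TextureFormat::C8));
    }

    // Stand-in for a GPU pass that samples the copy, uses each intensity value
    // as an index into the palette (TLUT), and writes the result to a new texture.
    void ConvertTextureWithPalette(CacheEntry& entry, const uint16_t* tlut);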
Make sure we don't have a texture bound as both a ShaderResourceView and
a RenderTargetView; this causes rendering glitches.
This isn't really the right place to do this... but I'm not sure
how the code should be structured.
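A minimal sketch of what that guard looks like in D3D11; the slot count and
function name are illustrative:

    #include <d3d11.h>

    // Before binding a texture as a render target, make sure it is no longer
    // bound as a shader resource; having it bound as both at once is invalid
    // and shows up as rendering glitches (the debug layer warns about it).
    void BindAsRenderTarget(ID3D11DeviceContext* context, ID3D11RenderTargetView* rtv,
                            ID3D11DepthStencilView* dsv)
    {
      // Clear the pixel-shader SRV slots that might still reference this texture.
      ID3D11ShaderResourceView* null_srvs[8] = {};
      context->PSSetShaderResources(0, 8, null_srvs);

      context->OMSetRenderTargets(1, &rtv, dsv);
    }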