Instead of having special case code for efb2tex that ignores hashes,
the only difference between efb2tex and efb2ram now is that efb2tex
writes zeros to the memory instead of actual texture data.
Keep in mind, though, that all efb2tex copies will have a hash of
zero.
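As a rough sketch of the shared path (hypothetical helper and
variable names; only Memory::GetPointer and GetHash64 are real Dolphin
functions), both copy types now run the same hashing logic and differ
only in what gets written to emulated RAM:

    u8* dst = Memory::GetPointer(dst_addr);  // destination in emulated RAM
    if (copy_to_texture)                     // efb2tex path
      memset(dst, 0, num_bytes);             // zero-fill instead of encoding
    else                                     // efb2ram path
      EncodeEFBCopy(dst, src_rect, format);  // stand-in for the real encoder
    entry->hash = GetHash64(dst, num_bytes, samples);  // same path for both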
Added a few duplicated depth copy texture formats to the enum
in TextureDecoder.h. These texture formats were already implemented
in TextureCacheBase and the ogl/dx11 texture cache implementations.
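For context, the depth copy formats in TextureDecoder.h follow this
pattern (these four entries already exist there; the entries added by
this change use the same _GX_TF_ZTF | _GX_TF_CTF flag scheme):

    // Depth copy formats carry both the copy-texture (_GX_TF_CTF) and
    // Z-texture (_GX_TF_ZTF) flag bits on top of the 4-bit base format.
    GX_CTF_Z4   = 0x0 | _GX_TF_ZTF | _GX_TF_CTF,
    GX_CTF_Z8M  = 0x9 | _GX_TF_ZTF | _GX_TF_CTF,
    GX_CTF_Z8L  = 0xA | _GX_TF_ZTF | _GX_TF_CTF,
    GX_CTF_Z16L = 0xC | _GX_TF_ZTF | _GX_TF_CTF,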
A number of games make an EFB copy in I4/I8 format, then use it as a
texture in C4/C8 format. Detect when this happens, and decode the copy on
the GPU using the specified palette.
This has a few advantages: it allows using EFB2Tex for a few more
games, it preserves the resolution of scaled EFB copies, and it's
probably a bit faster.
D3D only at the moment, but porting to OpenGL should be straightforward.
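A hedged sketch of the detection (hypothetical entry and helper
names; only the GX_TF_* constants are real): when a lookup hits an
EFB-copy cache entry stored as I4/I8 but the game now reads it as
C4/C8, run a GPU palette-conversion pass instead of throwing the
entry away:

    const bool read_as_paletted = texformat == GX_TF_C4 || texformat == GX_TF_C8;
    if (entry->IsEfbCopy() && read_as_paletted &&
        (entry->format == GX_TF_I4 || entry->format == GX_TF_I8))
    {
      // Decode on the GPU, keeping the scaled EFB-copy resolution.
      TCacheEntryBase* decoded = CreateTexture(entry->config);  // hypothetical
      decoded->ConvertFromBase(entry, tlut_addr, tlutfmt);      // palette pass
      return decoded;
    }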
Make sure we don't have a texture bound as both a ShaderResourceView and
a RenderTargetView; this causes rendering glitches.
This isn't really the right place to do this... but I'm not sure
how the code should be structured.
This reverts an optimization which isn't worth it, imo. Every texture
upload has to allocate VRAM and a staging buffer, so there is no need
to do both in the same call.
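For illustration, here is the shape of the two-step upload in D3D11
terms (hypothetical variable names; the commit itself shows no code):
the staging allocation and the VRAM-side texture are separate
resources either way, so fusing the two allocations into one call
buys nothing.

    D3D11_TEXTURE2D_DESC desc = {};
    desc.Width = width;
    desc.Height = height;
    desc.MipLevels = 1;
    desc.ArraySize = 1;
    desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
    desc.SampleDesc.Count = 1;
    desc.Usage = D3D11_USAGE_STAGING;             // CPU-visible staging alloc
    desc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
    ID3D11Texture2D* staging = nullptr;
    device->CreateTexture2D(&desc, nullptr, &staging);

    D3D11_MAPPED_SUBRESOURCE map = {};
    context->Map(staging, 0, D3D11_MAP_WRITE, 0, &map);
    for (UINT row = 0; row < height; ++row)       // respect RowPitch
      memcpy(static_cast<u8*>(map.pData) + row * map.RowPitch,
             src + row * src_pitch, row_bytes);
    context->Unmap(staging, 0);
    context->CopyResource(vram_texture, staging); // into the VRAM-side texture
    staging->Release();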
Texture was being bound as a render target while still being set as a shader resource.
D3D automatically unbinds the SRV in this case and generates a validation error.
The fix is to manually unbind the SRV, render into the texture, and
then re-bind the SRV to its old slots.
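In D3D11 terms the fix looks roughly like this (illustrative sketch,
not the exact code):

    ID3D11ShaderResourceView* old_srv = nullptr;
    context->PSGetShaderResources(slot, 1, &old_srv);   // save (AddRef'd)
    ID3D11ShaderResourceView* const null_srv = nullptr;
    context->PSSetShaderResources(slot, 1, &null_srv);  // manual unbind
    context->OMSetRenderTargets(1, &rtv, nullptr);
    // ... render into the texture ...
    context->PSSetShaderResources(slot, 1, &old_srv);   // re-bind old slot
    if (old_srv)
      old_srv->Release();  // balance the AddRef from PSGetShaderResources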