Looks like a copy-paste gone wrong. The compute shaders for the other
formats use a group size of 8 * 8, whereas the CMPR compute shader
is supposed to use a flattened 64 * 1, as I understand it.
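To illustrate the difference, a minimal sketch (illustrative constants, not the actual generator code) of the two group-size declarations, written as the GLSL strings a generator might emit:

```cpp
#include <string_view>

// Illustrative only; the real compute shaders are assembled elsewhere.
// Most texture formats decode with an 8x8 thread group:
constexpr std::string_view kDefaultGroupSize =
    "layout(local_size_x = 8, local_size_y = 8) in;\n";

// CMPR is meant to use the flattened 64x1 layout instead:
constexpr std::string_view kCmprGroupSize =
    "layout(local_size_x = 64, local_size_y = 1) in;\n";
```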
The only remaining casts for these types that I know of are in TextureInfo (where format_name is set to the int version of the format, and since that affects filenames and probably would break resource packs, I'm not changing it) and in TextureDecoder_Common's TexDecoder_DrawOverlay, which will be handled separately.
SPDX standardizes how source code conveys its copyright and licensing
information. See https://spdx.github.io/spdx-spec/1-rationale/. SPDX
tags are adopted in many large projects, including things like the Linux
kernel.
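For example, a file under GPL-2.0-or-later (Dolphin's license, as far as I know) only needs a single tag at the top instead of a multi-line license header:

```cpp
// SPDX-License-Identifier: GPL-2.0-or-later
```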
Additional changes:
- For TevStageCombiner's ColorCombiner and AlphaCombiner, op/comparison and scale/compare_mode have been split, as the bits have different meanings and enums when bias is set to compare. (Shift has also been renamed to scale.)
- In TexMode0, min_filter has been split into min_mip and min_filter (see the sketch after this list).
- In TexImage1, image_type is now cache_manually_managed.
- The unused bit in GenMode is now exposed.
- LPSize's lineaspect is now named adjust_for_aspect_ratio.
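A minimal sketch of the TexMode0 split mentioned above, using plain C++ bitfields; the widths and ordering here are illustrative rather than the hardware layout, and the real definitions use Dolphin's BitField template:

```cpp
#include <cstdint>

// Before: one 3-bit field mixed the minification filter with the mip mode.
struct TexMode0_Old
{
  std::uint32_t wrap_s : 2;
  std::uint32_t wrap_t : 2;
  std::uint32_t mag_filter : 1;
  std::uint32_t min_filter : 3;  // filter and mip behaviour packed together
};

// After: the same bits are exposed as two separately named fields.
struct TexMode0_New
{
  std::uint32_t wrap_s : 2;
  std::uint32_t wrap_t : 2;
  std::uint32_t mag_filter : 1;
  std::uint32_t min_mip : 2;     // mip behaviour (none / point / linear)
  std::uint32_t min_filter : 1;  // point vs. linear minification
};
```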
Now that we've converted all of the shader generators over to using fmt,
we can drop the old Write() member function and perform a rename
operation on the WriteFmt() to turn it into the new Write() function.
The only changes within this are the removal of the <cstdarg> header, which the previous printf-based Write() required, and the renaming itself. No functional changes are made at all.
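As a rough sketch, assuming fmt 8 or newer, the surviving Write() looks something like this (ShaderCode here is a stand-in, not the real class):

```cpp
#include <iterator>
#include <string>
#include <utility>
#include <fmt/format.h>

class ShaderCode
{
public:
  // The fmt-based overload that replaces the old printf-style Write().
  template <typename... Args>
  void Write(fmt::format_string<Args...> format, Args&&... args)
  {
    fmt::format_to(std::back_inserter(m_buffer), format, std::forward<Args>(args)...);
  }

  const std::string& GetBuffer() const { return m_buffer; }

private:
  std::string m_buffer;
};
```

Call sites then use fmt placeholders, e.g. out.Write("int4 c = int4({}, {}, {}, {});\n", r, g, b, a);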
Migrates the shader generator off the use of a global array, eliminating
the use of some global state. This also allows us to move the shader
generation over to using fmt in a subsequent change.
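A rough sketch of the shape of this change (names illustrative): the output buffer becomes an object owned by the caller instead of a file-scope array every generator writes into.

```cpp
#include <iterator>
#include <string>
#include <fmt/format.h>

// Before (roughly): static char s_text[16384]; shared by all generators.
// After: each generator builds and returns its own output, so there is no
// shared mutable state between invocations.
std::string GenerateTrivialShader()
{
  std::string code;
  fmt::format_to(std::back_inserter(code), "void main() {{\n");
  fmt::format_to(std::back_inserter(code), "  // ...\n");
  fmt::format_to(std::back_inserter(code), "}}\n");
  return code;
}
```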
Using 8-bit integer math here led to precision loss for depth copies,
which broke various effects in games, e.g. lens flare in MK:DD.
It's unlikely the console implements this as a floating-point multiply
(fixed-point perhaps), but since we have the float round trip in our
EFB2RAM shaders anyway, it's not going to make things any worse. If we
do rewrite our shaders to use integer math completely, then it might be
worth switching this conversion back to integers.
However, the range of the values (which depends on the format) should be known, or we should expand all values out to 24 bits first.
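As a contrived host-side illustration (not the actual conversion code) of why full-width math matters here: scaling a 24-bit depth value as a whole preserves the carries between its 8-bit channels, while scaling the already-split channels independently does not.

```cpp
#include <cstdint>

// Scale the full 24-bit value, then mask; precision is kept until the end.
std::uint32_t ScaleDepthFullWidth(std::uint32_t depth24, float scale)
{
  return static_cast<std::uint32_t>(static_cast<float>(depth24) * scale) & 0xFFFFFF;
}

// Scale each already-split 8-bit channel on its own; carries between channels
// are lost, which is the kind of precision loss described above.
std::uint32_t ScaleDepthPerChannel(std::uint32_t depth24, float scale)
{
  const auto r = static_cast<std::uint32_t>(static_cast<float>((depth24 >> 16) & 0xFF) * scale);
  const auto g = static_cast<std::uint32_t>(static_cast<float>((depth24 >> 8) & 0xFF) * scale);
  const auto b = static_cast<std::uint32_t>(static_cast<float>(depth24 & 0xFF) * scale);
  return (r << 16) | (g << 8) | b;
}
```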
Also makes y_scale a dynamic parameter for EFB copies, as it doesn't make sense to keep it as part of the uid; otherwise we're generating redundant shaders.
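A sketch of the distinction, with illustrative field names: anything in the uid keys the shader cache, so y_scale moves into per-copy constants instead.

```cpp
// Part of the shader cache key; every distinct value means another shader.
struct EFBCopyUid
{
  unsigned dst_format : 4;
  bool is_depth_copy = false;
  // y_scale is intentionally no longer part of the key.
};

// Uploaded as shader constants for each individual copy.
struct EFBCopyParams
{
  float y_scale = 1.0f;
};
```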
HLSL does not define roundEven(), only round(). This means that the
output may differ slightly for OpenGL vs Direct3D. However, it ensures
consistency across OpenGL drivers, as round() in GLSL can go either way.
This was breaking Metroid Prime 2: Echoes’s scanner in some rooms, such
as the one in https://fifoci.dolphin-emu.org/dff/mp2-scanner/
This was found on Ivy Bridge with Mesa: the alpha value read back from the EFB was off by 4 for multiple objects. The cause was a conversion error, since int4() is equivalent to floor() and the value wasn't always above the intended integer.
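Host-side equivalents of the two conversions, for illustration only (the real fix lives in the generated shader code): a plain integer cast truncates like int4(), while round-to-nearest-even matches what roundEven() would produce.

```cpp
#include <cmath>

// Truncating conversion, equivalent to int4()/floor() for non-negative input.
int ConvertTruncate(float normalized)
{
  return static_cast<int>(normalized * 255.0f);
}

// Round-to-nearest-even: std::nearbyint() uses the current rounding mode,
// which defaults to round-to-nearest-even.
int ConvertRoundEven(float normalized)
{
  return static_cast<int>(std::nearbyint(normalized * 255.0f));
}
```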
Improve bookkeeping around formats. Hopefully make code less confusing.
- Rename TlutFormat -> TLUTFormat to follow conventions.
- Use enum classes to prevent using a Texture format where an EFB Copy format is expected or vice-versa (sketched below).
- Use common EFBCopyFormat names regardless of depth and YUV configurations.
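A minimal sketch of the separation described above (enumerator lists trimmed and illustrative):

```cpp
// Texture formats as they exist in memory/TMEM.
enum class TextureFormat
{
  I4, I8, IA4, IA8, RGB565, RGB5A3, RGBA8, C4, C8, C14X2, CMPR,
};

// Formats an EFB copy can be written out as; deliberately a distinct type.
enum class EFBCopyFormat
{
  R4, R8, RA4, RA8, RGB565, RGB5A3, RGBA8,
};

// Passing a TextureFormat where an EFBCopyFormat is expected (or vice-versa)
// is now a compile-time error instead of a silent mix-up.
```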
Currently, we use the alpha channel from the EFB even if the current
format does not include an alpha channel. Now, the alpha channel is set
to 1 if the format does not have an alpha channel, and values are truncated to 5/6 bits per channel. This matches the EFB-to-texture behavior.
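An illustrative host-side sketch of that behavior for an RGB565 EFB (the real change is in the shader generators): the missing alpha channel reads back as 1, and the color channels keep only 5/6/5 bits.

```cpp
struct EfbColor
{
  float r, g, b, a;
};

// Quantize a [0,1] value to the given bit depth, as the format would store it.
static float Truncate(float value, int bits)
{
  const float steps = static_cast<float>((1 << bits) - 1);
  return static_cast<float>(static_cast<int>(value * steps)) / steps;
}

EfbColor ReadEfbPixelRGB565(float r, float g, float b)
{
  // RGB565 has no alpha channel, so alpha is forced to 1.
  return {Truncate(r, 5), Truncate(g, 6), Truncate(b, 5), 1.0f};
}
```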