Please test it!
GS supports 3 formats for the output:
32 bits: normal case
=> no change
24 bits: like 32 bits but without the alpha channel
=> mask the alpha channel (i.e. don't write it anymore)
=> always use 1.0f as the blending coefficient
16 bits: RGB5A1, emulated by a 32 bits openGL texture. It would be more correct to use
a real 16 bits GL texture, but that would cost several (slow) target conversions.
Anyway, as a current solution
=> apply a mask of 0xF8 on the color when SW blending is used (improves the Castlevania
shadow; see the sketch after this list). Unfortunately the normal blending mode still
uses the full range of colors!
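A minimal sketch of the per-format handling, assuming an RGBA8 render target; the names
(WriteToTarget, FbFormat) are illustrative only, not the real GSdx code:

    #include <cstdint>

    struct Pixel { uint8_t r, g, b, a; };

    enum class FbFormat { C32, C24, C16 };

    // Computes the value actually stored in the render target for one pixel.
    Pixel WriteToTarget(Pixel src, Pixel dst, FbFormat fmt, bool sw_blending)
    {
        switch (fmt)
        {
            case FbFormat::C32:
                // 32 bits: normal case, no change.
                return src;

            case FbFormat::C24:
                // 24 bits: the alpha channel is masked, i.e. never written,
                // and blending always uses 1.0f as the coefficient.
                src.a = dst.a;
                return src;

            case FbFormat::C16:
                // 16 bits (RGB5A1) emulated by a 32 bits texture: when SW
                // blending runs, drop the low bits of each color channel so
                // the result matches what a real 16 bits target would store.
                if (sw_blending)
                {
                    src.r &= 0xF8;
                    src.g &= 0xF8;
                    src.b &= 0xF8;
                }
                return src;
        }
        return src;
    }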
This commit also corrects a couple of blending factors. On the PS2, 128 represents 1.0, but once
the value is stored in an 8 bits channel the GPU normalizes it to 128/255 rather than 1.0f. So
the blending factor must be 255/128 instead of 2.
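As a small worked example of the factor (the helper name is illustrative, assuming an 8 bits
alpha channel sampled by the GPU):

    #include <cstdint>

    float Ps2AlphaToBlendCoeff(uint8_t a)
    {
        float normalized = a / 255.0f;         // what the GPU sees after sampling
        return normalized * (255.0f / 128.0f); // 128 maps back to exactly 1.0f;
                                               // a factor of 2 would give ~1.004
    }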
Note: disable CRC hack and enable accurate_colclip to see Castlevania shadow ^^
(issue #380).
Note2: the SW renderer is darker on Castlevania. I don't know why; maybe it's linked to the poorly emulated 16 bits format.
When the RT is used as an input texture, we need to rescale it.
The previous behavior was to always use linear filtering (smoother).
Unfortunately it broke some games that expect an exact value, like Star Ocean 3.
This commit disables the linear filtering in normal filtering mode (filter = 0
or filter = 2).
This way, the shadow of Star Ocean 3 appears correctly in upscaling (not
100% perfect but we can't do better).
Note: SO3 only requires nearest sampling of the alpha channel, but
I don't know the behavior of other games.
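A minimal sketch of how the filtering could be selected when the RT is rescaled, not the actual
GSdx code; it assumes the setting values named above (0 = no filtering, 2 = normal filtering)
and treats any other value as forced bilinear:

    #include <GL/gl.h>

    void SetRescaleFiltering(GLuint texture, int filter)
    {
        // filter = 0 and filter = 2 sample the rescaled RT with nearest so a
        // game that expects exact values (e.g. Star Ocean 3) still gets them;
        // only a forced bilinear setting keeps the smooth (linear) rescale.
        const GLint mode = (filter == 0 || filter == 2) ? GL_NEAREST : GL_LINEAR;

        glBindTexture(GL_TEXTURE_2D, texture);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, mode);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, mode);
    }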
The purpose of the code is to support the alpha channel
of the RT being used as an index into a palette texture.
I'm afraid this code will likely break pure palette textures. It is only used
if paltex is enabled.
It fixes the missing shadow in Star Ocean 3 (issue #374) in native resolution
with filter = 0 (no filtering) or filter = 2 (normal filtering).
Rendering explanation:
The game emulates a stencil buffer with the alpha channel.
The alpha channel of the RT can contain a palette texture index (format 4HH).
The idea is to have a gradient of values in the palette (16/32/48/...).
This way you can implement a +16/-16 step, and even wrap the alpha value, every time
you hit the pixel.
Bilinear filtering breaks the rendering because it interpolates between counts,
so you don't get the exact count.
Upscaling breaks the rendering because the RT is reused as an input texture. It means
that we need to scale it down, which again creates some interpolation.
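To make the counting trick concrete, here is a hypothetical CPU-side illustration; the real
mechanism lives in the GS rendering path, this only reproduces the arithmetic:

    #include <array>
    #include <cstdint>

    int main()
    {
        // Palette holding a gradient: entry i stores (i + 16) mod 256, so one
        // "look up and write back" pass advances the stored count by 16.
        std::array<uint8_t, 256> palette;
        for (int i = 0; i < 256; i++)
            palette[i] = static_cast<uint8_t>((i + 16) & 0xFF);

        uint8_t alpha = 0; // per-pixel counter kept in the RT alpha channel

        // Every time the pixel is hit, the alpha value indexes the palette and
        // the result is written back: +16 per hit, wrapping at 256.
        for (int hit = 0; hit < 20; hit++)
            alpha = palette[alpha];

        // Bilinear filtering or upscaling would interpolate alpha between
        // passes, so the exact count (a multiple of 16) would be lost.
        return alpha == static_cast<uint8_t>((20 * 16) & 0xFF) ? 0 : 1;
    }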
Still not yet enabled by default.
Potentially it can be optimized with a dot product, but special care
needs to be taken to ensure float accuracy.
Bonus: it could work on old GPUs (aka DX9).
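As one example of the dot-product idea (my own illustration, not necessarily the optimization
meant here): on a GPU without integer ops, a packed 16 bits value can be rebuilt as a weighted
sum of the normalized channels, provided each term is rounded back to an integer step so float
precision doesn't shift the result:

    #include <cmath>
    #include <cstdint>

    // Rebuild an RGB5A1 value from normalized channels (0.0f..1.0f); the
    // weights 1/32/1024/32768 are the bit positions of each field, so the sum
    // is equivalent to a dot product of two 4-component vectors.
    uint32_t PackRgb5a1(float r, float g, float b, float a)
    {
        const float weights[4]  = { 1.0f, 32.0f, 1024.0f, 32768.0f };
        const float channels[4] = {
            std::round(r * 31.0f), std::round(g * 31.0f),
            std::round(b * 31.0f), a >= 0.5f ? 1.0f : 0.0f
        };

        float sum = 0.0f;
        for (int i = 0; i < 4; i++)
            sum += channels[i] * weights[i];
        return static_cast<uint32_t>(sum);
    }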
* Dump the context before the increase of s_n
=> aligned with the global call number
* Don't print "colclip not supported" when it is optimized away
* Detach the input texture when it is useless
=> avoids showing a wrong texture in the debugger
This way it will allow implementing all blending operations in the FS.
Of course it will be slow, but it would be nice for debugging and for quickly checking
game rendering errors.
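For reference, a CPU-side sketch of the PS2 blend equation, Cv = (A - B) * C / 128 + D, that a
fragment-shader implementation would evaluate per pixel (identifiers are illustrative):

    #include <algorithm>
    #include <cstdint>

    // a, b, d are color channel values (source, destination or 0); c is the
    // blending coefficient (source alpha, destination alpha or a fixed value),
    // where 128 stands for 1.0.
    uint8_t BlendChannel(int a, int b, int c, int d)
    {
        int cv = ((a - b) * c >> 7) + d;                      // >> 7 == / 128
        return static_cast<uint8_t>(std::clamp(cv, 0, 255));  // standard clamping
    }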
Currently we try to infer the conversion shader based on the output format.
It only works if the input data is RGBA8, which might not be true in the future.
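A hypothetical sketch of that inference (identifiers are illustrative only): the shader is
picked from the output format alone, which silently assumes RGBA8 input data:

    #include <stdexcept>

    enum class OutFormat { RGBA8, R16UI, R8 };

    enum class ConvertShader { Copy, Rgba8ToR16ui, Rgba8ToR8 };

    ConvertShader PickConversionShader(OutFormat out)
    {
        switch (out)
        {
            case OutFormat::RGBA8: return ConvertShader::Copy;         // input already RGBA8
            case OutFormat::R16UI: return ConvertShader::Rgba8ToR16ui; // repack RGBA8 to 16 bits
            case OutFormat::R8:    return ConvertShader::Rgba8ToR8;    // keep a single channel
        }
        throw std::logic_error("unhandled output format");
    }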