GSDX: Improvements to the config interface.
- GSDX: Add new logos to dialog
- GSDX: Remove all the extra null renderers
- GSDX: Changes to renderer combobox
- Sort all the renderers in ascending order. (the fact that D3D11 was
above D3D9 really annoyed me >_<)
- Properly display whether D3D10 or D3D11 is used in the combobox.
- Use highest available version of DX by default.
- GSDX: gray out upscaling hacks at native resolution
- GSDX-PSX: Modifications to the dialog
- Add new logos
- Remove SDL renderer from combobox since it was removed long ago.
Use integral coordinates to avoid any rescaling.
Bilinear interpolation isn't supported; I don't think filtering a depth
texture is allowed anyway.
Texture coordinates can be dummy/float/int integral/int normalized.
Old behavior:
* The VS was in charge of selecting the texture coordinate
* The int integral format wasn't supported
New behavior:
* Always compute all formats
* The FS is in charge of selecting the correct format
Impact:
* The VS will be slightly slower but it reduces shader permutations to
nearly 0 (good for the CPU)
* FS speed isn't impacted as 2 separate code paths were already required
to support both formats
* The rasterizer will be 33% slower but is unlikely to be the limiting
factor on the GPU
* In the future we could directly use the integral format in the FS.
V2: remove useless PSin_t
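A minimal sketch of the idea, with illustrative names (not the real GSdx interface): the VS always emits both a float and an integral flavor of the coordinate, and the FS simply reads the one it needs.

    #version 330 core
    // Hypothetical vertex shader: compute every texture coordinate flavor
    // so no VS permutation is needed; the FS picks the right one.
    layout(location = 0) in vec2  i_st;  // normalized float coordinate (assumed input)
    layout(location = 1) in uvec2 i_uv;  // raw integral coordinate (assumed input)

    out vec2       t_float; // float / normalized path (interpolated)
    flat out uvec2 t_int;   // integral path (integer varyings can't be interpolated)

    void main()
    {
        t_float = i_st;
        t_int   = i_uv;
        gl_Position = vec4(0.0); // placeholder, the real VS also transforms the vertex
    }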
In the DATE42 algo, the first pass must find the primitive that writes the
bad alpha value. If the depth test fails, the alpha value won't be written,
therefore you mustn't keep the primitive id.
In theory, to ensure 100% correctness, depth would need to be fully executed
(currently depth write is disabled). However that requires copying the depth
buffer, which is likely bad for perf.
Issue reported on DBZInfWorld
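A hedged sketch of the first pass (illustrative names, not the actual GSdx shader): the lowest primitive id is recorded per pixel with an image atomic, and forcing the early depth test keeps fragments that fail depth from recording their id.

    #version 420 core
    // Hypothetical first pass of the DATE42 algo.
    layout(early_fragment_tests) in;        // depth test kills the fragment before the store
    layout(r32i, binding = 2) coherent uniform iimage2D img_prim_min;
    layout(binding = 0) uniform sampler2D rt;   // destination alpha lives here
    flat in int ps_prim_id;

    void main()
    {
        // Only primitives that would touch the alpha channel matter; the
        // condition below is illustrative, the real test depends on DATM.
        float a = texelFetch(rt, ivec2(gl_FragCoord.xy), 0).a;
        if (a > 0.5)
            imageAtomicMin(img_prim_min, ivec2(gl_FragCoord.xy), ps_prim_id);
    }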
* add a lengthy comment to explain the format
* Likely reduces the number of shader permutations
* Avoid slow AEM (on GPU)
Expect regressions because TC needs some fixes
v2: fix palette mode
Add 2 new shaders:
* ps_main12: cast a 16 bit depth to a RGB5A1 color
* ps_main16: cast a RGB5A1 color to a 16 bit depth
The shaders might be used in a future commit as it seems Silent Hill uses this
kind of format.
Fix tab/indentation too
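A rough sketch of the two casts (the scaling factors and names below are illustrative, not the exact ones used by GSdx):

    #version 330 core
    // Hypothetical conversion shader between a 16 bit depth and a RGB5A1 color.
    uniform sampler2D TextureSampler;
    in vec2 v_texcoord;
    out vec4 SV_Target0;

    // 16 bit depth -> RGB5A1 color (the ps_main12 idea)
    vec4 depth16_to_rgb5a1(float depth)
    {
        uint d = uint(depth * 65535.0f + 0.5f);
        uvec4 c = uvec4(d & 0x1Fu, (d >> 5) & 0x1Fu, (d >> 10) & 0x1Fu, d >> 15);
        return vec4(c) / vec4(31.0f, 31.0f, 31.0f, 1.0f);
    }

    // RGB5A1 color -> 16 bit depth (the ps_main16 idea, would write gl_FragDepth)
    float rgb5a1_to_depth16(vec4 color)
    {
        uvec4 c = uvec4(color * vec4(31.0f, 31.0f, 31.0f, 1.0f) + 0.5f);
        return float(c.r | (c.g << 5) | (c.b << 10) | (c.a << 15)) / 65535.0f;
    }

    void main()
    {
        SV_Target0 = depth16_to_rgb5a1(texture(TextureSampler, v_texcoord).r);
    }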
The purpose is to avoid an issue on the MS-Intel driver without
a dedicated hack at compilation time.
The code doesn't use it, so I suspect other implementations discard the
statement.
I: pcsx2: spelling-error-in-binary usr/lib/i386-linux-gnu/pcsx2/libGSdx-1.0.0.so allows to allows one to
I: pcsx2: spelling-error-in-binary usr/lib/i386-linux-gnu/pcsx2/libGSdx-1.0.0.so Allow to Allow one to
.
Apparently lintian checks grammar too (at least the most common mistakes).
The idea is to use a floating point texture to accumulate the data and
then do a final post-processing pass to apply the modulo
v2:
* use a bounding box
* fix a vertex corruption issue
* use negative numbers in the shader, which allows using half floats (+12
fps @ 4x)
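A sketch of the final pass under these assumptions (illustrative names; the accumulation texture is assumed to hold unclamped values on a 0-255 scale):

    #version 330 core
    // Hypothetical post-processing pass: apply the 8 bit wrap-around of the
    // GS after colors were accumulated without clamping.
    uniform sampler2D AccumulationTex;
    in vec2 v_texcoord;
    out vec4 o_color;

    void main()
    {
        vec4 acc = texture(AccumulationTex, v_texcoord);
        // mod() wraps negative values too, which is why negative numbers can
        // be used during accumulation.
        vec3 wrapped = mod(acc.rgb * 255.0f, 256.0f);
        o_color = vec4(wrapped / 255.0f, acc.a);
    }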
Basically the code does the alpha multiplication in the shader, therefore
the blend unit only does a pure addition. This way the multiplication is
accurate and accurate_blending doesn't require a costly barrier.
This code also avoids variable duplication to keep the code more separated.
Hopefully blending can be done in a separate function
It is preliminary work to support fast color clipping with HDR
v2: fix assertion compilation failure
v3: fix a regression in non-accurate mode
v3: Cs * As/Af is not an accumulation
Those cases don't need the Cd addition and were already optimized anyway
Fix a regression on GoW2
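A minimal sketch of the idea (illustrative names, not the real GSdx blending code): the FS outputs Cs already multiplied by As, so the HW blend can be configured as a plain ONE/ONE addition with Cd.

    #version 330 core
    // Hypothetical fragment output for the Cs * As + Cd case.
    in vec4 v_color;   // Cs, with the PS2 alpha in .a
    out vec4 o_color;

    void main()
    {
        // 128 is 1.0 on the PS2, hence the 255/128 rescale of the alpha.
        float As = v_color.a * 255.0f / 128.0f;
        // The blend unit only adds Cd to this value (glBlendFunc(GL_ONE, GL_ONE)).
        o_color = vec4(v_color.rgb * As, v_color.a);
    }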
Sourceforge was dead for more than a week, therefore add the license
information. I could not find the original TGM source (dead link) so I'm not
even sure if this still applies or if the glsl was totally rewritten. None
of the glsl files have a copyright header so it's hard to tell.
The accurate options do a better job. Technically it can still
be useful for old GPUs/drivers that don't support the GL4.5 extension.
On Windows, you can still rely on Dx.
On Linux, the free drivers support it (except Intel).
The GS uses integer values and does integer operations too.
This commit truncates the sampled texture, the interpolated fragment color
and the product of the two.
It impacts the perf negatively by about 3-4% (GPU) but it fixes the rendering of
Suikoden and potentially some other games too.
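A small sketch of the truncation, assuming 8 bit channels normalized to [0;1] (the helper name is illustrative):

    #version 330 core
    // Hypothetical modulate path with GS-style integer truncation.
    uniform sampler2D TextureSampler;
    in vec4 v_color;
    in vec2 v_texcoord;
    out vec4 o_color;

    // Drop the fractional part of the 0-255 value introduced by filtering
    // and interpolation.
    vec4 trunc_8bit(vec4 c)
    {
        return trunc(c * 255.0f) / 255.0f;
    }

    void main()
    {
        vec4 t = trunc_8bit(texture(TextureSampler, v_texcoord));
        vec4 c = trunc_8bit(v_color);
        // Truncate the product as well; the GS modulate is (Ct * Cf) >> 7.
        o_color = trunc_8bit(t * c * 255.0f / 128.0f);
    }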
Code was completely bitrotten
Code was a partial test (and yet 500 lines already)
Shaders are more and more complex and multithreading support greatly
reduces the cost of shader switches
Code is not yet enabled because it requires extensive testing
The idea is to replace each point with a 1 pixel sprite with the help of
a geometry shader. At 4x, a point will be replaced by a 4x4 sprite.
It might save a couple of fps
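A hedged sketch of the expansion (illustrative names; the real size computation depends on GSdx internals):

    #version 330 core
    // Hypothetical geometry shader: each point becomes a small quad emitted
    // as a triangle strip, sized to one native pixel times the scaling factor.
    layout(points) in;
    layout(triangle_strip, max_vertices = 4) out;

    in vec4 v_color[];
    out vec4 g_color;

    uniform vec2 u_pixel_size; // one upscaled pixel in clip space (assumption)

    void main()
    {
        vec4 p = gl_in[0].gl_Position;
        for (int i = 0; i < 4; i++) {
            vec2 offset = vec2(i & 1, i >> 1) * u_pixel_size;
            gl_Position = vec4(p.xy + offset, p.zw);
            g_color = v_color[0];
            EmitVertex();
        }
        EndPrimitive();
    }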
Add a define to test the perf if we keep only the blue channel. It breaks
the code in Prince Of Persia which uses the Red/Green channels... Maybe the
speed hack :( Or find a way to replace all the ifs with a lookup table
Note: it is only supported on OpenGL currently
It might help to fix the colors a bit in a couple of games
accurate_fbmask = 1
The code uses GL4.5 extensions. So far it seems the effect is only used a couple
of times and often on non-overlapping primitives. The speed impact will likely remain small
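A rough sketch of the fbmask idea under these assumptions (names are illustrative; reading the bound RT is made well defined by the GL_ARB_texture_barrier API when primitives don't overlap):

    #version 450 core
    // Hypothetical fbmask emulation: bits masked by FBMSK keep the old
    // framebuffer value, so the FS merges the old and new colors bit per bit.
    layout(binding = 0) uniform sampler2D RtSampler; // the render target itself
    uniform uvec4 u_fbmask;   // per-channel 8 bit mask, 1 = keep the old bit (assumption)
    in vec4 v_color;
    out vec4 o_color;

    void main()
    {
        uvec4 old_c = uvec4(texelFetch(RtSampler, ivec2(gl_FragCoord.xy), 0) * 255.0f + 0.5f);
        uvec4 new_c = uvec4(v_color * 255.0f + 0.5f);
        uvec4 merged = (new_c & ~u_fbmask) | (old_c & u_fbmask);
        o_color = vec4(merged) / 255.0f;
    }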
The GS doesn't support texture shuffle/swizzle so it is emulated in a
complex way.
The idea is to read/write the 32 bit color format as a 16 bit format.
This way, RG (the 16 lsb) or BA (the 16 msb) can be read or written with a
square texture that targets pixels 1-8 or pixels 8-16.
However the shuffle is limited. For example you can copy the green channel
to either the alpha channel or another green channel.
Note: partial masking of channels is not yet implemented
V2: improve logging
V3: better support of green channel in shader
V4: improve detection of destination (issue due to rounding)
Please test it!
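A very rough sketch of the emulation, with illustrative names and uniforms (the real implementation selects the halves from the draw parameters):

    #version 330 core
    // Hypothetical shuffle pass: the RT is read back as a texture and the
    // requested 16 bit half (RG = 16 lsb, BA = 16 msb) is copied into the
    // half targeted by the draw.
    uniform sampler2D RtSampler;
    in vec2 v_texcoord;
    out vec4 o_color;
    uniform bool u_read_ba;    // source half: false -> RG, true -> BA (assumption)
    uniform bool u_write_ba;   // destination half (assumption)

    void main()
    {
        vec4 c = texture(RtSampler, v_texcoord);
        vec2 src = u_read_ba ? c.ba : c.rg;
        // The untouched half keeps its previous value (the real code relies
        // on color masking for that).
        o_color = u_write_ba ? vec4(c.rg, src) : vec4(src, c.ba);
    }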
The GS supports 3 formats for the output:
32 bits: normal case
=> no change
24 bits: like 32 bits but without the alpha channel
=> mask the alpha channel (i.e. don't write it anymore)
=> always use 1.0f as the blending coefficient
16 bits: RGB5A1, emulated by a 32 bit OpenGL texture. It would be more correct to use
a real 16 bit GL texture, but unfortunately that would cost several (slow) target conversions.
Anyway, as a current solution:
=> apply a mask of 0xF8 on the color when SW blending is used (improves the Castlevania shadow)
Unfortunately the normal blending mode still uses the full range of colors!
This commit also corrects a couple of blending factors. 128/255 is equivalent to 1.0f on the PS2, whereas the GPU uses 1.0f. So the blending factor must be 255/128 instead of 2.
Note: disable the CRC hack and enable accurate_colclip to see the Castlevania shadow ^^
(issue #380).
Note2: the SW renderer is darker in Castlevania. I don't know why, maybe it is linked to the poorly emulated 16 bit format
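A minimal sketch of the two fixes in SW blending (illustrative names; the real shader is driven by the blend equation selectors):

    #version 330 core
    // Hypothetical SW blending output: rescale As so that the PS2 value 128
    // maps to 1.0, and round the color down to 5 bit precision on an
    // emulated RGB5A1 target.
    in vec4 v_color;          // Cs, with the PS2 alpha in .a
    uniform bool u_rgb5a1;    // emulated 16 bit target (assumption)
    out vec4 o_color;

    void main()
    {
        float As = v_color.a * 255.0f / 128.0f;   // factor 255/128, not 2
        vec3 c = v_color.rgb * As;
        if (u_rgb5a1)
            // Keep only the 5 most significant bits of each channel (0xF8 mask).
            c = vec3(uvec3(c * 255.0f) & uvec3(0xF8u)) / 255.0f;
        o_color = vec4(c, v_color.a);
    }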
The purpose of the code is to support the alpha channel
of the RT being used as an index into a palette texture.
I'm afraid this code will likely break pure palette textures. It is only used
if paltex is enabled.
It fixes the missing shadow in Star Ocean 3 (issue #374) at native resolution
with filter = 0 (no filtering) or = 2 (normal filtering)
Rendering explanation:
The game emulates a stencil buffer with the alpha channel
The alpha channel of the RT can contain a palette texture index (format 4HH)
The idea is to have a gradient of value in the palette (16/32/48/...).
This way you can implement a +16/-16 and even wrap the alpha value every time
you hit the pixel.
Bilinear filtering breaks the rendering because it interpolates between counts,
so you don't get the exact count
Upscaling breaks the rendering because the RT is reused as an input texture. It means
that we need to scale it down, which again creates some interpolation.
Still not yet enabled by default
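A hedged sketch of the lookup (illustrative names; the real path goes through GSdx's 4HH handling and palette upload):

    #version 330 core
    // Hypothetical read of the previous RT as a paletted texture: the alpha
    // channel holds the counter and is used as the palette index.
    uniform sampler2D RtSampler;       // previous render target
    uniform sampler2D PaletteSampler;  // palette, one row of entries
    in vec2 v_texcoord;
    out vec4 o_color;

    void main()
    {
        // For a 4HH format only the high nibble of alpha is meaningful;
        // that detail is omitted here.
        float index = texture(RtSampler, v_texcoord).a;
        o_color = texture(PaletteSampler, vec2(index, 0.0f));
    }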
Potentially it can be optimized with a dot product, but special care
needs to be taken to ensure float accuracy.
Bonus: it could work on old GPUs (aka DX9)
This way it will allow implementing all blending operations in the FS.
Of course it will be slow, but it would be nice for debugging and to quickly check
a game's rendering errors.
Much faster for small batches that write the alpha value. The code can
be enabled with the accurate_date option.
Here is a summary of all DATE possibilities:
1/ no overlap of primitives
=> texture barrier (pro: no stencil setup and a single draw)
2/ alpha written
=> small batch => texture barrier (primitive by primitive). Done in N-primitive draw calls.
(based on GL_ARB_texture_barrier)
=> bigger batch => compute the first good primitive, slow but only 2 draw calls.
(based on GL_ARB_shader_image_load_store)
=> otherwise there is UserHacks_AlphaStencil, but it is a hack!
3/ alpha not written
=> full stencil setup (2 draw calls)
If there is no overlap, it is allowed to read directly from the render target.
On the SotC testcase with 6x scaling: 30fps -> 40fps
Note: it requires the GL_ARB_texture_barrier extension, so be sure to have a recent driver
Note2: it requires a lot of testing too
Open question: in the case of complex DATE (alpha written),
would it be faster to split the draw call into multiple calls with no
primitive overlap?
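For the "no overlap" case above, a minimal sketch of doing DATE directly in the FS (illustrative names; GL_ARB_texture_barrier makes the read of the bound RT well defined):

    #version 330 core
    // Hypothetical destination alpha test without stencil: read the RT,
    // check the alpha MSB and discard the fragment if it doesn't match.
    uniform sampler2D RtSampler;   // the render target itself
    uniform int u_datm;            // required value of the destination alpha MSB (assumption)
    in vec4 v_color;
    out vec4 o_color;

    void main()
    {
        float a = texelFetch(RtSampler, ivec2(gl_FragCoord.xy), 0).a;
        if ((int(a * 255.0f + 0.5f) >> 7) != u_datm)
            discard;
        o_color = v_color;
    }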
UserHacks_UnscaleSprite = 1 will unscale flat sprites
UserHacks_UnscaleSprite = 2 will unscale all sprites (doesn't work well so far)
The idea of the hack is to redo the interpolation of the texture coordinates
based on the non-upscaled pixel position.
It avoids various glitches but sprites aren't upscaled anymore (so no
more anti-aliasing; potentially a coefficient could be added).
* separate VS/GS and FS
* separate the subroutine part of the FS
It is already complex enough without the subroutine stuff. Besides, I'm not sure
we will keep subroutines in the future.
* Only a single VAO
=> the format is set once
=> only a single bind at startup
=> GSVertexBufferStateOGL is nearly useless
=> barely faster but better than nothing :)
Reformatted fx files that were causing issues on certain text editors. They should now display correctly in those editors.
git-svn-id: http://pcsx2.googlecode.com/svn/trunk@5897 96395faa-99c1-11dd-bbfe-3dabce05a288
Also removed the fallback recovery ps, and replaced the compile-fail catch with a simple console print, which I think is safer and faster.
git-svn-id: http://pcsx2.googlecode.com/svn/trunk@5894 96395faa-99c1-11dd-bbfe-3dabce05a288
* properly detect the GL NV depth extension
* Always show the hack on the GUI. Add a new hack option for DATE (GL4.2) only
* Save the scan mode on Linux too (F7)
* hopefully fix some crashes on some drivers... (ensure 256 bit alignment, and if not use a standard memcpy)
git-svn-id: http://pcsx2.googlecode.com/svn/trunk@5888 96395faa-99c1-11dd-bbfe-3dabce05a288
Best settings if your driver supports GL_NV_depth_buffer_float => GL_NV_Depth = 1 & logz = 0
Otherwise => GL_NV_Depth = 0 & logz = 1
Explanation of the bug:
The DX z position ranges from 0.0f to 1.0f (FS ranges from 0.0f to 1.0f)
The GL z position ranges from -1.0f to 1.0f (FS ranges from 0.0f to 1.0f)
Why it sucks:
Small GS depth values will be "mapped" to -1.0f; in other words all small values collapse to -1.0f! A terrible loss
of accuracy.
The GL_NV_depth_buffer_float extension allows setting the near plane to -1.0f.
So
"GL z position ranges from -1.0f to 1.0f (FS ranges from 0.0f to 1.0f)"
will become
"GL z position ranges from -1.0f to 1.0f (FS ranges from -1.0f to 1.0f)"
and therefore
"z position [0.0f;1.0f] will map to FS [0.0f;1.0f]" as on DX
Yes, we just got back all the precision lost previously :)
However you need hardware (Intel?) and driver support (free driver?/GLES?) :(
git-svn-id: http://pcsx2.googlecode.com/svn/trunk@5860 96395faa-99c1-11dd-bbfe-3dabce05a288
* restore the old FXAA (Asmodean's will be integrated when I get time)
* port the recently added new scanline algo
git-svn-id: http://pcsx2.googlecode.com/svn/trunk@5818 96395faa-99c1-11dd-bbfe-3dabce05a288