Re-add the point sampler to OpenGL. Fixes graphical issues when
"Allow 8-bit textures" is enabled on AMD GPUs.
Issue: https://forums.pcsx2.net/Thread-GSDX-Hardware-mode-Bug-Report-Allow-8-bit-Texture
Adjust the code to be easier to read, and execute the GL code only on AMD, as suggested by Gregory.
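For illustration, a minimal sketch of the idea, assuming a GL 3.3+ loader is already set up; the names are hypothetical, not the actual GSdx code:

    #include <cstring>
    // Assumes a GL loader (GLEW/GLAD) provides the 3.3+ entry points and a live context.

    static bool is_amd_gpu()
    {
        const char* vendor = reinterpret_cast<const char*>(glGetString(GL_VENDOR));
        return vendor && (std::strstr(vendor, "AMD") || std::strstr(vendor, "ATI"));
    }

    static GLuint create_point_sampler()
    {
        // 8-bit textures go through a palette lookup, so the index must never
        // be interpolated: force strict nearest (point) sampling.
        GLuint sampler = 0;
        glGenSamplers(1, &sampler);
        glSamplerParameteri(sampler, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glSamplerParameteri(sampler, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        return sampler;
    }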
Remove the useless ATI_SUCKS define in the tfx shader. It wasn't used anywhere outside of the shader.
* Windows behavior must be checked
* remove glsl_source.h
v2: fix missing include
Big thanks to Turtleli
v3:
fix indentation in gsdx-res.xml
add dependency in cmake
remove old res/glsl/fxaa.fx symlink
add tfx.cl for OpenCL support on Linux
v4, v5
fix cmake indentation
Bilinear applies to all renderers
* Common code is done in GSVertexTrace (a rough sketch of the mode selection follows below)
* Extend it with a forced-except-sprite mode (a trade-off between linear filtering and upscaling glitches)
* The Linux GUI option was moved to the top, next to the renderer selection
Trilinear is moved to the OGL hacks
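A minimal sketch of the common filtering decision, using hypothetical names (the real enum and values in GSdx may differ):

    enum class BiFiltering { Nearest, Forced, PS2, Forced_But_Sprite };

    static bool use_linear(BiFiltering opt, bool ps2_requests_linear, bool is_sprite)
    {
        switch (opt) {
            case BiFiltering::Nearest:           return false;
            case BiFiltering::Forced:            return true;
            // Forced except sprites: keeps upscaled 2D/UI elements sharp while
            // still smoothing 3D textures.
            case BiFiltering::Forced_But_Sprite: return !is_sprite || ps2_requests_linear;
            case BiFiltering::PS2:
            default:                             return ps2_requests_linear;
        }
    }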
Close #1837
Thanks to Flatout for the review and feedback.
They will take care of updating the Windows GUI :)
Mesa AMD was updated :)
All drivers[1] that support GL_ARB_shader_image_load_store also got GL_ARB_clear_texture
[1] The Intel driver misses other extensions required to run GSdx
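For illustration, a hedged sketch of how a texture can be cleared with the extension (assuming a GL 4.4+ context or GL_ARB_clear_texture, and a loaded entry point):

    static void clear_texture_rgba8(GLuint tex, const GLubyte clear_color[4])
    {
        // Clears level 0 of the texture directly, without binding it to a framebuffer.
        glClearTexImage(tex, 0, GL_RGBA, GL_UNSIGNED_BYTE, clear_color);
    }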
OMG, Zone of the Enders got a speed boost from 11 fps to 45 fps
Seriously, the goal is to allow benchmarking GSdx without too much overhead from the main renderer draw call.
Note: unlike the null renderer, texture/vertex uploading, 2D draws and texture conversions are still done.
* Add the full PMODE register to replace slbg/mmod
* Add the full EXTBUF register (will allow emulating the write feedback)
* Add a third source (which will actually be the destination of the
  write feedback); a rough sketch of the new inputs follows
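A rough, hypothetical sketch of the new merge inputs (placeholder types, not the real GSdx declarations):

    #include <cstdint>

    struct GIFRegPMODE  { uint64_t u64; }; // placeholder for the full register
    struct GIFRegEXTBUF { uint64_t u64; }; // placeholder for the full register
    class  GSTexture;                      // opaque texture handle

    struct MergeInput {
        GSTexture*   src[3];  // [0]=circuit 1, [1]=circuit 2, [2]=write-feedback destination
        GIFRegPMODE  pmode;   // full register, replaces the old slbg/mmod flags
        GIFRegEXTBUF extbuf;  // describes the feedback write buffer
    };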
Mipmap option 3. Actually, maybe a separate trilinear option would be better (see the sketch below).
m_mipmap == 2 => use manual PS2 trilinear/mipmapping
Otherwise:
m_filter == 3 => always use full automatic trilinear interpolation
m_filter == 4 => use automatic trilinear interpolation only when the game uses mipmapping
m_filter == 5 => like 4, but force bilinear interpolation inside each layer
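A hedged sketch of how the option values could map to a trilinear state (hypothetical names, for illustration only):

    struct TriFiltering {
        bool trilinear;               // blend between mipmap layers
        bool only_when_ps2_mipmaps;   // restrict to draws where the game uses mipmaps
        bool bilinear_inside_layer;   // force bilinear within each layer
    };

    static TriFiltering select_trilinear(int m_mipmap, int m_filter)
    {
        if (m_mipmap == 2)
            return { true, true, false };           // manual PS2 trilinear/mipmapping
        switch (m_filter) {
            case 3:  return { true, false, false }; // always full automatic trilinear
            case 4:  return { true, true,  false }; // only when the game mipmaps
            case 5:  return { true, true,  true  }; // like 4, but bilinear inside each layer
            default: return { false, false, false };
        }
    }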
The game sets the framebuffer as an input texture, so I did the same for
OpenGL. The code is protected with a CRC check. It works because the game only
wants to sample pixels that were already written.
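A minimal sketch of the idea, assuming a GL 4.5 (or ARB_texture_barrier) context; function and variable names are illustrative:

    #include <cstdint>

    static void bind_framebuffer_as_input(uint32_t game_crc, uint32_t target_crc,
                                          GLuint rt_texture, GLuint unit)
    {
        if (game_crc != target_crc)
            return; // the hack is only enabled for the matching game

        glActiveTexture(GL_TEXTURE0 + unit);
        glBindTexture(GL_TEXTURE_2D, rt_texture);
        glTextureBarrier(); // make previous writes to the render target visible
    }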
For the record, I tested GTA too; it doesn't work as expected because
the game resizes the framebuffer to a smaller one, so there is no guarantee
that a pixel will be read before it is overwritten.
Note: it requires accurate blending set to at least Basic
Note: I need the CRCs of all Jak games that suffer from this issue. Thank you :)
* Support the Mesa Nouveau IR (free driver for Nvidia GPUs)
=> Print the intermediate representation + the final shader
=> Dump GPR usage
* Move dumped shaders to /tmp/GSdx_Shader/<sub_dir> (see the sketch below)
=> Avoids dumping 3 thousand files into $PWD ^^
* Use a function instead of a macro
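A minimal sketch of the dump helper under the stated assumptions (Linux, POSIX mkdir); the real helper name and file layout may differ:

    #include <cstdio>
    #include <string>
    #include <sys/stat.h>

    static void dump_shader(const std::string& sub_dir, int n, const std::string& source)
    {
        const std::string root = "/tmp/GSdx_Shader";
        mkdir(root.c_str(), 0777);                    // EEXIST is fine
        mkdir((root + "/" + sub_dir).c_str(), 0777);

        const std::string path = root + "/" + sub_dir + "/shader_" + std::to_string(n) + ".glsl";
        if (FILE* f = std::fopen(path.c_str(), "w")) {
            std::fwrite(source.data(), 1, source.size(), f);
            std::fclose(f);
        }
    }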
Games often use DATE to allow only a single pixel pass. If this
use case is detected, the stencil buffer will be cleared after the first pixels
that pass both the depth and stencil tests.
It seems to reduce the load on the GPU.
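A hedged sketch of the stencil setup for this fast path (a live GL context is assumed; the function name is illustrative):

    static void setup_date_one_pixel_pass()
    {
        glEnable(GL_STENCIL_TEST);
        // Only fragments where the mask is still 1 may pass...
        glStencilFunc(GL_EQUAL, 1, 1);
        // ...and the first fragment that passes both depth & stencil zeroes the
        // mask, so any later fragment at that pixel is discarded cheaply.
        glStencilOp(GL_KEEP, GL_KEEP, GL_ZERO);
    }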
Note: with the help of texture barriers, maybe we could implement the algo
in a single pass.
It will use integral coordinates to avoid any rescaling.
Bilinear interpolation isn't supported; I don't think it is allowed to
filter a depth texture anyway.
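A small illustrative sketch of the matching sampler setup: the depth texture is pinned to nearest filtering since the shader fetches it at integral texel coordinates.

    static void set_depth_texture_point_sampling(GLuint depth_tex)
    {
        glBindTexture(GL_TEXTURE_2D, depth_tex);
        // No filtering: depth values are fetched 1:1, never interpolated.
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    }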
Texture coordinates can be dummy, float, int integral, or int normalized.
Old behavior:
* The VS was in charge of selecting the texture coordinate format
* The int integral format wasn't supported
New behavior:
* Always compute all formats
* The FS will be in charge of selecting the right format (see the sketch at
  the end of this note)
Impact:
* The VS will be slightly slower but it reduces shader permutations from
  few to 0 (won't be bad for the CPU)
* FS speed isn't impacted as 2 separate code paths were already required
  to support both formats
* The rasterizer will be 33% slower but is unlikely to be the limiting factor
  of the GPU
* In the future we could directly use the integral format in the FS.
V2: remove useless PSin_t
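To illustrate the new split, a hedged GLSL sketch embedded as C++ strings; the interface and macro names are hypothetical, not the actual tfx shader:

    // Hypothetical sketch: the VS always emits both coordinate flavours,
    // and the FS picks one per shader key.
    static const char* vs_outputs_sketch = R"(
    out vec2 t_float;      // normalized float coords, used for filtered sampling
    flat out ivec2 t_int;  // integral coords, usable directly with texelFetch
    )";

    static const char* fs_select_sketch = R"(
    #if PS_TEX_IS_INT
        vec4 c = texelFetch(TextureSampler, t_int, 0);   // integral path
    #else
        vec4 c = texture(TextureSampler, t_float);       // float path
    #endif
    )";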