The hack relies on undefined hardware behavior, so it can
potentially cause rendering corruption.
This new hack drops the cache flushing when only the alpha channel is masked.
Alpha is a direct copy of the fragment. Normally the masked bits will be constant
everywhere (RT, FS output, texture cache), so it will likely work.
Just in case, the code is only enabled with the new shiny hack.
Initially GSBlendStateOGL was an alias of the m_blendMapD3D9 array.
The object was replaced by an index into the array, which saves 2 KB of duplicated memory
and removes a fair amount of useless code.
v2: push/pop the blending state in the DATE code
v3: remove m_state, which is now useless
The purpose is to correctly emulate the destination alpha blend factor.
An alpha value of 128 means 1.0 on the GS but only ~0.5 on the GPU.
I think few draw calls use destination alpha, so the performance impact should remain small.
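For illustration, here is a minimal GLSL sketch of the rescale, assuming SW blending
where the bound render target can be read back; the sampler, input and output names
are made up for the example, not the real GSdx ones:

    #version 450
    // Sketch only: rescale the destination alpha so the GS convention (128 == 1.0)
    // holds when it is used as a blend factor.
    layout(binding = 3) uniform sampler2D RtSampler;  // currently bound render target
    in vec4 src_color;                                // fragment color computed earlier
    out vec4 SV_Target0;

    void main()
    {
        vec4  dst = texelFetch(RtSampler, ivec2(gl_FragCoord.xy), 0);
        float Ad  = dst.a * 255.0 / 128.0;            // 128 -> 1.0 instead of ~0.5
        // example blend equation using destination alpha: Cs * Ad + Cd * (1 - Ad)
        SV_Target0 = src_color * Ad + dst * (1.0 - Ad);
    }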
The idea is to use a floating point texture to accumulate the data and
then do a final post-processing pass to apply the modulo (see the sketch below).
v2:
* use bounding box to
* fix a vertex corruption issue
* use negative numbers in the shader, which allows the use of half floats (+12 fps @ 4x)
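A rough illustration of that final pass, assuming the accumulation texture stores
plain 0..1 colors; names are invented for the example:

    #version 330
    // Sketch only: wrap the accumulated color modulo 256, like the GS does when
    // color clamping is disabled.
    uniform sampler2D AccumSampler;   // floating point texture with the accumulated colors
    in vec2 uv;
    out vec4 SV_Target0;

    void main()
    {
        vec4 acc = texture(AccumSampler, uv);
        vec3 wrapped = mod(acc.rgb * 255.0, 256.0) / 255.0;  // back to 0..255, wrap, renormalize
        SV_Target0 = vec4(wrapped, acc.a);
    }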
The updated medium level will run for all sprites. It helps the SotC blooming effect and it remains
fast enough to be enabled by default (at least for 3D games).
The new high level will run for all sprites + color clipping.
The idea is that sprites are often used for post-processing effects (except in 2D games, of course).
Most of the time post-processing supports SW blending with a small speed penalty. SW
blending is more accurate, so it is better to use it.
The accurate options do a better job. Technically it can still
be useful for an old GPU/driver that doesn't support the GL4.5 extension.
On Windows, you can still rely on Dx.
On Linux, the free drivers support it (except Intel).
It might help fix the colors a bit in a couple of games.
accurate_fbmask = 1
The code uses GL4.5 extensions. So far it seems the effect is only used a couple
of times, and often with non-overlapping primitives, so the speed impact will likely remain small.
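For reference, a sketch of what the FB mask emulation can look like in the fragment
shader once the render target can be read directly (e.g. thanks to
GL_ARB_texture_barrier); the uniform and sampler names are illustrative, not the real
GSdx ones:

    #version 450
    // Sketch only: masked bits are kept from the destination, the others come from
    // the fragment.
    layout(binding = 3) uniform sampler2D RtSampler;  // bound render target
    uniform uvec4 FbMask;                             // per-channel write mask (FRAME.FBMSK)
    in vec4 src_color;
    out vec4 SV_Target0;

    void main()
    {
        uvec4 src = uvec4(src_color * 255.0 + 0.5);
        uvec4 dst = uvec4(texelFetch(RtSampler, ivec2(gl_FragCoord.xy), 0) * 255.0 + 0.5);
        uvec4 mixed = (src & ~FbMask) | (dst & FbMask);
        SV_Target0 = vec4(mixed) / 255.0;
    }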
No barrier => draw all primitives
Barrier but without overlap => draw all primitives
Barrier with overlap => draw primitive by primitive
It will ease the implementation of accurate blending, and why not DATE too.
If there is no overlap, it is allowed to read directly from the render target.
On the SotC test case with 6x scaling: 30 fps -> 40 fps
Note: it requires the GL_ARB_texture_barrier extension, so be sure to have a recent driver
Note2: it requires a lot of testing too
Open question: in the case of complex DATE (written alpha),
will it be faster to split the draw call into multiple calls with no
primitive overlap?
UserHacks_UnscaleSprite = 1 will unscale flat sprites
UserHacks_UnscaleSprite = 2 will unscale all sprites (doesn't work well so far)
The idea of the hack is to redo the interpolation of the texture coordinates
based on the non-upscaled pixel position.
It avoids various glitches, but sprites aren't upscaled anymore (so no
more anti-aliasing; a coefficient could potentially be added).
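A purely illustrative GLSL sketch of the re-interpolation, assuming the sprite corners
(p0/p1), their texture coordinates (t0/t1) and the upscaling factor are available to the
fragment shader; the real implementation may differ:

    // Sketch only: recompute the texture coordinates of a flat sprite from the
    // native (non-upscaled) pixel position instead of the upscaled one.
    uniform vec2  p0, p1;    // sprite corners in native GS pixels
    uniform vec2  t0, t1;    // matching texture coordinates
    uniform float scale;     // upscaling factor

    vec2 unscaled_st()
    {
        vec2 p = floor(gl_FragCoord.xy / scale) + 0.5;  // snap back to the native grid
        vec2 r = (p - p0) / (p1 - p0);                  // relative position inside the sprite
        return mix(t0, t1, r);                          // redo the interpolation
    }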
* add a non-working hack: UserHacks_DateGL4; the goal was to replace UserHacks_AlphaStencil
  + Detection of good/bad samples is based on the primitive ID variable. However, I'm not sure
    the behavior is always the same between draw calls... Anyway, let's keep a copy of the current
    work (see the sketch below)
* Dump integer textures into a text CSV
* add the GL4.2 ARB_shader_image_load_store extension (needed by UserHacks_DateGL4)
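A very rough sketch of the idea behind the hack (image name, binding and format are
made up; this is not the actual GSdx shader): each fragment records its primitive ID
with an atomic min, so a later pass can tell which primitive was the first to touch a
pixel and discard the others.

    #version 420
    #extension GL_ARB_shader_image_load_store : require
    // Sketch only: keep, per pixel, the smallest primitive ID that wrote it.
    layout(r32i, binding = 2) coherent uniform iimage2D PrimMinImage;

    void main()
    {
        imageAtomicMin(PrimMinImage, ivec2(gl_FragCoord.xy), gl_PrimitiveID);
    }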
git-svn-id: http://pcsx2.googlecode.com/svn/trunk@5707 96395faa-99c1-11dd-bbfe-3dabce05a288
* use the GLES header file, disable the OpenGL-only code (mainly GLX, ARB_sso, geometry shader)
* properly define the function pointers; GLES uses basic linking whereas GL must get the symbols dynamically
* cmake: properly search for and set libglesv2.so
* don't use dual source blending => the HW renderer works (only the unimportant FBA is missing)
git-svn-id: http://pcsx2.googlecode.com/svn/trunk@5701 96395faa-99c1-11dd-bbfe-3dabce05a288
Now the branch is ready to be merged :)
Dear Windows users, if you can, please test that it:
1/ still compiles
2/ still runs on DX
3/ can run with OpenGL
git-svn-id: http://pcsx2.googlecode.com/svn/branches/gsdx-ogl-wnd@5663 96395faa-99c1-11dd-bbfe-3dabce05a288
* Emulate the geometry shader on the CPU.
* add some options to override OpenGL extension detection
* redo the shader interface (again) so it compiles on the free driver
The SW renderer is now working on the free driver.
To test it on your Linux box, use this cmake option: -DEGL_API=TRUE
Note: OpenGL 3.0 is needed; I tested with mesa 9.2 git
git-svn-id: http://pcsx2.googlecode.com/svn/branches/gsdx-ogl-wnd@5646 96395faa-99c1-11dd-bbfe-3dabce05a288
* port the KrossX patch from r5556 to OpenGL
* add a basic GUI entry; an additional description would be welcome
* also add the pointsampler hack, but don't activate it yet
git-svn-id: http://pcsx2.googlecode.com/svn/trunk@5570 96395faa-99c1-11dd-bbfe-3dabce05a288
* Use the new map interface / separate texture coordinates inside the shaders
* support new texture formats
Note: it is quite unstable, with various crashes and GL errors, but at least it compiles now :p
git-svn-id: http://pcsx2.googlecode.com/svn/branches/gsdx-ogl@5094 96395faa-99c1-11dd-bbfe-3dabce05a288
* fix the vertex shader for the HW renderer :) The fragment part remains...
* add some dumping infrastructure (DUMP_START and DUMP_LENGTH)
git-svn-id: http://pcsx2.googlecode.com/svn/branches/gsdx-ogl@5030 96395faa-99c1-11dd-bbfe-3dabce05a288
The current goal is to implement the SW renderer with pure OpenGL instead of SDL.
I plan to use OpenGL 4.2 capabilities (the latest, actually) => this needs libglew1.7 and a DX11-capable GPU/drivers.
git-svn-id: http://pcsx2.googlecode.com/svn/branches/gsdx-ogl@4970 96395faa-99c1-11dd-bbfe-3dabce05a288
GSdx: Removed OpenGL "support". Nobody showed any interest in getting this working.
GSdx: Removed PS1 GPU support. pcsx2 does not use this and it is unmaintained, likely broken, and frequently confuses intellisense.
GSDumpGUI: Use the correct export for the library name, was using the PS1 version.
If any of the above code is needed in the future, we have this wonderful technology called version control.
git-svn-id: http://pcsx2.googlecode.com/svn/trunk@2754 96395faa-99c1-11dd-bbfe-3dabce05a288
* Removed GSTextureFX classes
* Built shaders right into GSState classes, using GSStateDX as an interface, so that all shader caches get auto-destroyed along with GSState.
In addition to being a bit of a code cleanup, it should be a bit more efficient too since all of the extra dereferences to GSState from GSTextureFX have been removed. :)
git-svn-id: http://pcsx2.googlecode.com/svn/branches/GSopen2@1849 96395faa-99c1-11dd-bbfe-3dabce05a288
* Software mode seems to work fine. Suspend and resume emulation work nicely, without flaws.
* Hardware DX9 mode suspends but displays only black after resuming.
* Hardware DX10 status is unknown.
git-svn-id: http://pcsx2.googlecode.com/svn/branches/GSopen2@1842 96395faa-99c1-11dd-bbfe-3dabce05a288
- trying the dx10.1-only "gather" shader instruction for palettized lookups ("8-bit texture" mode), saves 4 instructions which isn't much but still... (not tested, don't have ati)
- may fix the intel gma "no output" bug (don't have gma either :P)
- and the usual small code optimizations
git-svn-id: http://pcsx2.googlecode.com/svn/trunk@1549 96395faa-99c1-11dd-bbfe-3dabce05a288