A regression was introduced in #1954:
GSdx caused the emulator to crash when the renderer was restarted.
It may also have affected older NVIDIA/AMD GPUs
with older OpenGL support.
The swap interval function must be called on the same thread that
rendering takes place on. This fixes an issue where the turbo speed and
frame limiter hotkeys fail to disable vsync when the OpenGL renderer is
used.
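For illustration, a minimal sketch of deferring the swap interval change to the rendering thread; the member and method names here are hypothetical, not the actual GSdx ones:

```cpp
// Sketch: the GUI/EE thread only records the requested vsync value; the
// rendering thread applies it right before presenting, since the swap
// interval call must happen on the thread that owns the GL context.
#include <atomic>

class GSWndGL_Sketch
{
	std::atomic<int> m_vsync_request{-2}; // -2 = no pending change (hypothetical sentinel)

public:
	void SetVSync(int vsync) // may be called from any thread (e.g. hotkey handler)
	{
		m_vsync_request.store(vsync, std::memory_order_release);
	}

	void Present() // runs on the rendering thread
	{
		int req = m_vsync_request.exchange(-2, std::memory_order_acquire);
		if (req != -2)
			SetSwapInterval(req); // wraps wglSwapIntervalEXT / glXSwapIntervalEXT

		Flip(); // actual buffer swap
	}

private:
	void SetSwapInterval(int interval);
	void Flip();
};
```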
Use range-based for loops where possible (and correctly).
Use auto where possible (minimizes code changes if it's ever decided to change back to a std container).
Use a more efficient erase pattern where possible (see the sketch below).
Minor code tweaks.
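For reference, a sketch of the erase pattern in question: erasing during iteration by reusing the iterator that erase() returns (the container and predicate are just illustrative):

```cpp
#include <list>

// Erase matching elements in a single pass. erase() returns the iterator
// following the removed element, so nothing is skipped and no invalidated
// iterator is ever dereferenced.
void remove_negative(std::list<int>& values)
{
	for (auto it = values.begin(); it != values.end(); )
	{
		if (*it < 0)            // illustrative condition
			it = values.erase(it);
		else
			++it;
	}
}
```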
PCSX2 sends a negative value (-1) to GSdx when adaptive mode is
specified for vsync. This mode is exclusive to OpenGL at the moment
and is unimplemented on the D3D11 renderer. Also, the swap chain's
present function only accepts sync interval values from 0 to 4, so
passing a negative value to it is undefined behavior.
So let's fall back to the standard synchronization method on D3D11
when PCSX2 requests adaptive mode.
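A minimal sketch of the fallback, assuming the renderer receives the requested mode as a plain int (the function and variable names are illustrative, not the actual GSdx code):

```cpp
#include <dxgi.h>

// IDXGISwapChain::Present() only accepts SyncInterval values 0..4, and
// adaptive vsync (-1) is OpenGL-only, so clamp a negative request to
// standard vsync on D3D11.
void present_sketch(IDXGISwapChain* swapchain, int vsync)
{
	UINT sync_interval = (vsync < 0) ? 1 : static_cast<UINT>(vsync);

	swapchain->Present(sync_interval, 0);
}
```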
Fix CRC hacks on the PAL version.
The PAL version will no longer experience very high brightness/contrast
issues on stages in HW mode, which were caused by an incorrect CRC hack.
Moved a CRC hack back to OpenGL mode only for the PAL version
because texture shuffling does not work properly on PAL games.
Forgot to replace the `IDC_TEXT` macros with `IDC_VALUE` macros; because
of this, the text holding the option names was being updated with the
current value of the option instead of the text designated for holding
the values.
DBY isn't an offset into the frame memory but rather an offset for the
read output circuit inside the frame memory, hence the top offset should
also be accounted for in the total height of the frame memory. Fixes a
software mode regression in Beyond Good and Evil.
Also handle cases where GetFrameRect() is called without any parameter
to avoid an access violation from an illegal value on the DISP register.
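A rough sketch of the height reasoning, using hypothetical names rather than the real DISPFB/DISPLAY register handling:

```cpp
// Sketch: DBY moves the point where the read output circuit starts inside
// the frame memory, so the frame memory must cover the display height plus
// that offset, not the display height alone.
struct OutputDesc { int dby; int display_h; }; // hypothetical

int required_frame_height(const OutputDesc& o)
{
	return o.display_h + o.dby; // top offset included in the total height
}
```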
Output 1 strip of 2 triangles instead of 2 strips of 1 triangle.
This could potentially reduce the geometry shader overhead, and it
might avoid a middle line in sprites on some bad AMD GPU/driver/OS
combinations.
GSdx-ogl: Console messages v2
Follow up to
commit/ec63b04719fd9c05a6aeeacb55dc1c54f5ef145b
Add the Intel broken driver wiki link message in the console (OpenGL).
Print the Intel/AMD buggy driver message once in the console (OpenGL).
Print texture barrier and viewport array info once in the console (OpenGL).
Enables character outlines to partially work on the Full CRC level.
DX9 has a small issue where a small black line at the bottom shakes when outlines are enabled.
You can use either the Aggressive CRC level or an x,y offset to work around the issue.
Removed an unnecessary CRC hack that caused shadows on stationary objects (trees) to move on Direct3D in a weird
motion-blur-like way when the player moved slightly.
* Cast return value of IsEof() to bool. (Avoids the int -> bool
performance warning.)
* Cast field and index to the required parameter type of AppendRawData.
The sprite geometry shader was still being used even if the sprites were
converted on the CPU.
Convert all sprites using the GPU - the fix isn't ideal, but it'll
likely have to do unless someone feels like porting over more of the
OpenGL changes to the D3D11 renderer.
Closes #1921.
Class member variables are initialised in order of declaration in the
class definition. Move native_buffer to the top of the class definition
to avoid initialising m_width and m_height to random values.
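For context, a minimal illustration of the declaration-order rule (the names loosely follow the description above; the values are arbitrary):

```cpp
// Members are initialised in declaration order, regardless of the order in
// the constructor's initialiser list. Declaring native_buffer first makes it
// safe to read when initialising m_width and m_height.
class Buffer_Sketch
{
	bool native_buffer = true; // must come before the members that read it

	int m_width;
	int m_height;

public:
	Buffer_Sketch()
		: m_width(native_buffer ? 640 : 1280)
		, m_height(native_buffer ? 512 : 1024)
	{
	}
};
```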
Move the custom resolution scaling code to a separate subroutine and
allow future RT buffer resize calls when the buffer size isn't enough.
(For example, when a game's CRTC/framebuffer size changes. The older
code didn't consider such cases.)
Added a more robust buffer size calculation mechanism for custom
resolutions. This improves performance at higher resolutions for games
which don't need a big buffer; there's a great boost in performance
in GS-limited scenarios.
I don't feel there's even a need for the large framebuffer option right
now. In the future I plan to make the large-framebuffer-enabled version
the default, as the overhead is only there in situations where it's
necessary. Until then, the original code is kept just to be on the safe
side in case any issue pops up.
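A very rough sketch of the resize-on-demand idea, with hypothetical names (the real logic lives in the renderer's custom resolution path):

```cpp
#include <algorithm>

// Grow the RT buffer only when the required size exceeds what is currently
// allocated, instead of always reserving a worst-case buffer up front.
struct Size { int w; int h; };

class Device_Sketch
{
	Size m_buffer{0, 0};

	void reallocate(const Size& s) { m_buffer = s; /* recreate GPU surfaces here */ }

public:
	// Called whenever the CRTC/framebuffer size (scaled by the custom
	// resolution factor) changes.
	void EnsureBufferSize(int needed_w, int needed_h)
	{
		if (needed_w > m_buffer.w || needed_h > m_buffer.h)
			reallocate({ std::max(needed_w, m_buffer.w), std::max(needed_h, m_buffer.h) });
	}
};
```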
An EOF only occurs after attempting to read past the end of the file.
Account for this correctly, which fixes a potential infinite loop when
reading back an xz compressed GS dump.
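A minimal illustration of that EOF semantic with a plain C stream, per the description above:

```cpp
#include <cstdio>

// The EOF indicator is only set after a read attempt fails, so read first
// and check afterwards; testing eof() before reading can loop forever.
void read_all(FILE* fp)
{
	char buf[4096];

	while (true)
	{
		size_t n = fread(buf, 1, sizeof(buf), fp);

		if (n > 0)
		{
			// process n bytes...
		}

		if (n < sizeof(buf)) // short read: EOF (or an error) was hit
			break;
	}
}
```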
Add the bitfield structure of the undocumented SYNCV register. It might
potentially be useful for properly determining the output circuit height
for some weird games which still get it wrong, but I still haven't
figured out how it could be used. Maybe some sort of black magic
formula with the vertical synchronization values?
The differential phase value seems to closely resemble the display
height value of the video modes (480 for NTSC, 576 for PAL), but after
some investigation into the differential phase, I have no clue how
they might even be related. Hopefully the mystery will be unveiled in
the near future.
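For illustration, a bitfield layout in the style GSdx uses for GS registers; the field names and widths below are an assumption based on the usual SYNCV description (front/back porch, differential phase, sync), not a verified layout:

```cpp
#include <cstdint>

// Assumed layout, for illustration only -- names and bit widths unverified.
union GSRegSYNCV_Sketch
{
	uint64_t u64;

	struct
	{
		uint64_t VFP  : 10; // vertical front porch (assumed)
		uint64_t VFPE : 10; // vertical front porch end (assumed)
		uint64_t VBP  : 12; // vertical back porch (assumed)
		uint64_t VBPE : 10; // vertical back porch end (assumed)
		uint64_t VDP  : 11; // vertical differential phase (assumed)
		uint64_t VS   : 11; // vertical synchronization timing (assumed)
	};
};
```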
Reduce disk space. Easy to share.
It would be nice to port the code to Windows.
The liblzma code was taken from https://git.tukaani.org/xz.git
Note: only short dumps are supported so far. A big dump will freeze the
interface during compression, or will eat all the RAM.
Note 2: a multithreaded encoder would badly impact the compression ratio.
Thanks to Turtleli for all the review comments.
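For reference, the general shape of a single-threaded liblzma encode loop; this is a generic sketch of the xz easy-encoder API, not the actual GSdx dump code:

```cpp
#include <lzma.h>
#include <cstdint>
#include <vector>

// Compress `in` into `out` with the single-threaded easy encoder.
// A short dump fits in memory; a long one would need chunked input instead.
bool xz_compress(const std::vector<uint8_t>& in, std::vector<uint8_t>& out)
{
	lzma_stream strm = LZMA_STREAM_INIT;

	if (lzma_easy_encoder(&strm, 6 /* preset */, LZMA_CHECK_CRC64) != LZMA_OK)
		return false;

	strm.next_in = in.data();
	strm.avail_in = in.size();

	uint8_t buf[65536];
	lzma_ret ret = LZMA_OK;

	while (ret != LZMA_STREAM_END)
	{
		strm.next_out = buf;
		strm.avail_out = sizeof(buf);

		ret = lzma_code(&strm, LZMA_FINISH);

		if (ret != LZMA_OK && ret != LZMA_STREAM_END)
		{
			lzma_end(&strm);
			return false;
		}

		out.insert(out.end(), buf, buf + (sizeof(buf) - strm.avail_out));
	}

	lzma_end(&strm);
	return true;
}
```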
* Replace gtk_table with gtk_grid
=> it still misses some padding
* Use the 3.22 monitor API to query the screen size (see the sketch below)
=> needs to be tested
* Directly add scrolled windows into a container without bothering with
the viewport.
The code compiles fine but wasn't tested.
v2: disable the code until I (or someone) get a chance to test and fix it.
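A sketch of the GTK/GDK 3.22 monitor query mentioned above (generic GDK calls, not the exact dialog code, and without the fallback paths a real build would need):

```cpp
#include <gtk/gtk.h>

// Query the primary monitor geometry through the GdkMonitor API that
// replaced the deprecated screen-size functions in GDK 3.22.
static void get_screen_size(int* width, int* height)
{
	GdkDisplay* display = gdk_display_get_default();
	GdkMonitor* monitor = gdk_display_get_primary_monitor(display);

	if (!monitor)
		monitor = gdk_display_get_monitor(display, 0);

	GdkRectangle geometry;
	gdk_monitor_get_geometry(monitor, &geometry);

	*width  = geometry.width;
	*height = geometry.height;
}
```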