Ratchet & Clank (the third one) uses an address of 0 for invalid mipmap levels.
It would be very awkward to put a middle texture level at the start of
memory, so let's use this information to correct the LOD; a sketch of the idea is below.
It makes the game more robust against LOD rounding.
The hack only fixes the HW renderer, not the SW renderer. However, I'm not sure
the issue comes from GSdx.
The hack will disable the alpha test that used to generate empty draw calls.
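
A minimal sketch of the LOD correction idea, not GSdx's actual code; mip_tbp[] (the per-level base pointers taken from MIPTBP1/MIPTBP2) and lod_max are assumed names.

    #include <cstdint>

    // Clamp the maximum LOD at the first mipmap level whose base pointer is 0,
    // since no game really stores a mipmap level at the very start of GS memory.
    static int CorrectMaxLOD(const uint32_t* mip_tbp, int lod_max)
    {
        for (int level = 1; level <= lod_max; level++)
        {
            if (mip_tbp[level - 1] == 0)
                return level - 1; // stop before the invalid level
        }
        return lod_max;
    }
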
At higher resolutions it takes too much time to save a screenshot at the
maximum compression level. So let's allow the user to set the
compression level.
This re-uses the png_compression_level setting. The default compression
level is 1 for speed, but if the user wishes to increase the compression
level (without using an external tool) and doesn't mind the screenshot
taking more time to save, they can increase the compression level up to a
maximum of 9 (which can take quite a while).
Fixes #1527.
* Improve frame buffer height management at custom resolutions. The width seems to be fine at the same size as the scaled image output.
* Prevent offset issues on Persona 3 based on the data from the merge circuit.
Note: fixes custom resolution upscaling in ICO's 50Hz/60Hz modes when the large frame buffer is enabled. Previously, 60Hz mode only displayed half of the screen and 50Hz mode only worked thanks to the scissor hack.
Enable it to ensure correct rendering (FMVs)
Disable it to reduce the GPU/memory requirements
The option will likely be removed once the perf impact is reduced.
It would require some dynamic texture width conversion shaders.
So as a quick solution, let's add a new CRC hack.
For issue #1362 (provided the CRC is correct).
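
A hypothetical sketch of such a hack, following the usual GSC_* pattern in GSdx; the game name, the tested register values, and the include are placeholders/assumptions, not the actual fix for #1362.

    #include "GSState.h" // assumed to provide GSFrameInfo and the PSM_* constants

    // Skip the draws that would otherwise need a dynamic width conversion shader.
    bool GSC_ExampleGame(const GSFrameInfo& fi, int& skip)
    {
        if (skip == 0)
        {
            if (fi.TME && fi.FPSM == PSM_PSMCT16 && fi.TPSM == PSM_PSMCT32)
            {
                skip = 1; // skip this draw call
            }
        }
        return true;
    }
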
So let's increase the height. It will increase the memory requirements of some games.
v2: try to do it automatically
(not sure it will be useful, as most games will require it)
v3: go back to a hardcoded 1280 size. The automatic approach generates too many issues.
Try to avoid random black frames.
v2: don't force the preload hack on the frame.
It creates a ghost image over FMVs.
v3: support offset within a frame
The long story:
The game blits the FMV far away from the RT, which is actually the input of the read-only texture...
Currently GSdx suffers from two bugs:
1/ the RT is too small
2/ the texture isn't properly updated with the rendered values. The texture is invalidated,
but the pixels are read back from GS memory whereas the correct
values are located on the GPU.
This commit replaces the standard draw with a manual blit, which avoids
both the size issue and the bad upscaling issue.
v2:
* Use various copies to be more compatible with the DX API
* Move all parts of the hack into the BlitFMV function
v3: add a log message
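
A rough sketch of the manual-blit idea with simplified names; the real function lives in the HW renderer and uses the renderer's own copy helpers, so treat every name below as an assumption.

    // sTex: render target holding the FMV frame, r: source rectangle
    // (possibly offset within the frame), dTex: texture the next draw samples.
    void BlitFMV(GSDevice* dev, GSTexture* sTex, const GSVector4i& r, GSTexture* dTex)
    {
        // Copy on the GPU instead of issuing the original draw, so the data
        // never round-trips through GS memory and no upscaling is applied.
        dev->CopyRect(sTex, dTex, r);

        // v3: log the blit so the hack shows up in traces.
        printf("BlitFMV %dx%d\n", r.width(), r.height());
    }
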
There are likely a few games (RE4) which constantly change the FBW register value, causing the framebuffer width to be updated at every interval. Adding a safe limit (512), similar to the frame buffer height, prevents such constant changes of the framebuffer width when FBW changes once again to an even lower value.
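
A minimal sketch of the limit, with illustrative names; the real code applies it wherever the target width is recomputed.

    #include <algorithm>

    // Only let the framebuffer width shrink down to a safe floor (512),
    // mirroring what is already done for the framebuffer height, so games
    // that toggle FBW (e.g. RE4) don't force a reallocation on every change.
    int ClampFramebufferWidth(int fbw_pixels)
    {
        return std::max(fbw_pixels, 512);
    }
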
Valid values for png_compression_level are from 0 (no compression) to 9
(max compression). The default is 1.
v2: Use zlib Z_BEST_SPEED (1) and Z_BEST_COMPRESSION (9) defines.
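
A small sketch of how the setting maps onto libpng/zlib; the function name and the assumption that the value was already read from the config are mine.

    #include <png.h>
    #include <zlib.h>
    #include <algorithm>

    // Clamp the configured value to the valid range and hand it to libpng.
    // Z_BEST_SPEED == 1 (the default here), Z_BEST_COMPRESSION == 9.
    void SetScreenshotCompression(png_structp png_ptr, int png_compression_level)
    {
        int level = std::min(std::max(png_compression_level, 0), Z_BEST_COMPRESSION);
        png_set_compression_level(png_ptr, level);
    }
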
The following patch uses the height of the display rectangle rather than estimating the frame buffer height when the game uses a non-referenceable height (or width).
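
A hypothetical sketch of the fallback; the parameter names and the display-rectangle accessor stand in for whatever checks the real code performs.

    // Prefer the display rectangle height over a guess when the reported
    // framebuffer height/width can't be used directly.
    int GetFramebufferHeight(bool referenceable, int estimated_height, const GSVector4i& display_rect)
    {
        return referenceable ? estimated_height : display_rect.height();
    }
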
Don't look up a depth buffer if the depth test always passes without writes (sketch below).
Boosts performance in Tekken 5 when depth emulation is enabled in OpenGL
(Tekken 5 sets the same address for both the RT and the depth buffer, but depth is disabled).
v2:
Keep the ds if DATE is enabled (some implementations use a stencil buffer)
Be more aggressive in avoiding a useless depth lookup
A couple of useless members were removed too.
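
A sketch of the condition using the GS register types from GS.h; the real check sits in the HW renderer's draw setup and may differ in detail.

    // "Always pass" with depth writes masked means the depth buffer can't
    // affect the output, so skip the texture cache lookup, unless DATE is
    // on (some implementations back it with a stencil attachment).
    bool NeedDepthLookup(const GIFRegTEST& TEST, const GIFRegZBUF& ZBUF)
    {
        bool always_pass_no_write = (TEST.ZTST == ZTST_ALWAYS) && ZBUF.ZMSK;
        return TEST.DATE || !always_pass_no_write;
    }
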
Also fix wnd initialization
Coverity:
CID 146955 (#1 of 1): Uninitialized pointer read (UNINIT)
18. uninit_use: Using uninitialized value wnd[i].
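
A sketch of the kind of fix the report calls for; the array size and the surrounding loop are illustrative.

    // Initialize every slot so the selection loop never reads an
    // indeterminate pointer (CID 146955).
    GSWnd* wnd[2] = { nullptr, nullptr };

    for (size_t i = 0; i < sizeof(wnd) / sizeof(wnd[0]); i++)
    {
        if (wnd[i] == nullptr)
            continue; // nothing was created for this slot
        // ... try this window implementation ...
    }
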
This is the internal resolution which GSdx uses and recording at this resolution
is optimal, i.e. without any dumb scaling, with all relevant pixels and without
redundant pixels.
The resulting clip still doesn't have the correct aspect ratio set, but that's
just a property which can be set on the clip afterwards, which is where the DAR
becomes useful. Since the video is usually anamorphic, use the DAR to set the
playback aspect ratio when muxing with the audio later.
gsdx changes:
Remove native resolution checkbox from GUI and rework associated code
Small changes to Windows and Linux GUI
Support 8x native resolution
Fix the use case where the custom resolution width is less than the native width
This fixes the following issues when a custom resolution is selected:
- When the width is smaller than the native resolution width, the
texture cache targets are removed on every Vsync signal, causing a
black screen issue.
- The texture cache code needs a 1 returned for the custom resolution
upscale multiplier or there'll be some really funny graphical issues.
It also removes unnecessary GetConfig (which I think unconditionally
does a file read on Windows) calls if the width was increased - the
upscale multiplier is already stored, and the custom resolution width
and height calls are now unnecessary.
Also fix some whitespace issues.
The upscale_multiplier function values have been changed to allocate native resolution, and custom resolution has been moved to 9 (see the sketch below).
Remove the old native checkbox value and include Native in the combo box.
Internal GSdx functions have also been updated to match this change to the upscale_multiplier variable.
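
A sketch of the new mapping described above (the function name is illustrative): 1 is native, 2-8 are the scaling factors, and 9 selects the custom resolution, which the texture cache must see as a 1x multiplier.

    // 9 selects a custom width/height, so report 1x to the texture cache to
    // avoid the scaling issues described above; 1..8 map directly.
    int GetUpscaleMultiplierForTextureCache(int upscale_multiplier)
    {
        return (upscale_multiplier == 9) ? 1 : upscale_multiplier;
    }
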
* Greatly reduce the number of CLUT reads (factor of 10x)
* Avoid getting a wrong TEXA texture from the cache.
* Fix a "jump depends on uninitialized variable" Valgrind warning.
Fixes #748.
I tried my best to avoid any DX breakage, but please test it too.
CID 146833 (#2-1 of 2): Division or modulo by zero (DIVIDE_BY_ZERO)
divide_by_zero: In expression this->m_width / this->m_upscale_multiplier, division by expression this->m_upscale_multiplier which may be zero has undefined behavior.
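
A sketch of the defensive fix; m_width and m_upscale_multiplier are the members named in the report, passed here as parameters.

    #include <algorithm>

    // Clamp the divisor so a 0 multiplier (e.g. a stale config value) can't
    // trigger the undefined division flagged by Coverity.
    int NativeWidth(int m_width, int m_upscale_multiplier)
    {
        return m_width / std::max(1, m_upscale_multiplier);
    }
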
I used to disable the RT, but that doesn't work well with 50 Cent. It seems
texture writes weren't propagated correctly to the depth buffer. Besides,
the game writes an alpha value whereas the depth is only 24 bits...