Previously, the calculation of the size of data to be loaded was based
on the rendering target buffer size and the scaling multiplier, which
was totally wrong. This led to different resolutions having different
load sizes, even though the size of the real GS memory is fixed
regardless of the scaling factor.
Hence, use the default rendering target buffer size for the load size,
independent of the scaling values. I've also removed the buffer height
saturation code, which seemed unreliable.
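A minimal sketch of the idea; every name here is a hypothetical stand-in, not the actual GSdx code:

```cpp
// Sketch only: all names are hypothetical.
struct Target
{
    int   m_unscaled_w; // default render target buffer width  (GS units)
    int   m_unscaled_h; // default render target buffer height (GS units)
    float m_scale_x;    // upscaling multipliers, e.g. 2.0f for 2x native
    float m_scale_y;
};

static int LoadSize(const Target& t)
{
    // Before: the load size grew with the upscaling factor, even though
    // real GS memory is fixed regardless of the rendering resolution.
    //   int w = (int)(t.m_unscaled_w * t.m_scale_x);
    //   int h = (int)(t.m_unscaled_h * t.m_scale_y);

    // After: use the default (unscaled) buffer size, so the amount of GS
    // memory read is identical at every internal resolution.
    return t.m_unscaled_w * t.m_unscaled_h * 4; // assuming 32-bit pixels
}
```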
Note: the accurate version of the code can be enabled using the macro
provided in config.h (it is more intensive on resources); the current
code follows the approach of maintaining a decent performance level
along with solid accuracy.
Fixes glitchy water in Rogue Galaxy in Direct3D when the hack is
enabled.
Fixes Test Drive car reflection in Direct3D when the hack is enabled.
Other games are affected as well.
Add a HW hack that performs framebuffer conversion on the CPU instead of the GPU.
It can fix broken textures in some games, at the cost of slower performance.
List of games: Harry Potter games, FIFA Street games.
Games like Call of Duty and Kung Fu Panda might also be affected, as well as
others, especially on Direct3D.
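For illustration, a rough sketch of what the CPU side of such a conversion looks like for the 8-bit case; the names are simplified stand-ins, and the real GSdx path also deals with GS memory swizzling, which is skipped here:

```cpp
#include <cstdint>
#include <vector>

// Sketch only: an 8-bit indexed framebuffer being expanded through a
// 256-entry palette (CLUT) on the CPU.
std::vector<uint32_t> ConvertFb8(const uint8_t* indices, const uint32_t* clut,
                                 int width, int height)
{
    std::vector<uint32_t> out(static_cast<size_t>(width) * height);

    // Every texel is resolved on the CPU: exact, but slower than sampling
    // the palette in a GPU shader, hence the performance cost of the hack.
    for (size_t i = 0; i < out.size(); i++)
        out[i] = clut[indices[i]];

    return out;
}
```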
Add a HW hack GUI option on Windows/Linux for the 4-bit and 8-bit framebuffer
conversion hack, named "Frame Buffer Conversion".
Use range-based loops where possible (correctly).
Use auto where possible (to minimize code changes if it's ever decided to switch back to a std container).
Use a more efficient erase pattern (where possible).
Minor code tweaks.
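For illustration, a small sketch of the patterns mentioned above (the container and element types are hypothetical):

```cpp
#include <list>

struct Source { bool dead = false; }; // hypothetical element type

void MarkAll(std::list<Source*>& sources)
{
    // Range-based loop + auto: no iterator type spelled out, so switching
    // the underlying std container later touches almost no code.
    for (auto* s : sources)
        s->dead = true;
}

void Sweep(std::list<Source*>& sources)
{
    // Efficient erase pattern: erase() returns the next valid iterator,
    // avoiding iterator invalidation and a second pass over the container.
    for (auto it = sources.begin(); it != sources.end();)
    {
        if ((*it)->dead)
            it = sources.erase(it);
        else
            ++it;
    }
}
```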
Adds GS Memory Wrapping hack to Windows. Enabling the hack will fix cut-off cutscenes in Wallace & Gromit: The Curse of the Were-Rabbit and Thrillville.
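For context, GS local memory is 4MB, and addresses past the end wrap around to the start. A minimal sketch of wrapped addressing, assuming a flat byte view of that memory:

```cpp
#include <cstdint>

// Sketch only: wrapped access into the 4MB GS local memory.
constexpr uint32_t GS_MEM_SIZE = 4 * 1024 * 1024;

inline uint8_t ReadWrapped(const uint8_t* vm, uint32_t addr)
{
    // Mask is cheap because the size is a power of two; out-of-range
    // addresses wrap back to the beginning instead of reading garbage.
    return vm[addr & (GS_MEM_SIZE - 1)];
}
```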
If a user switches renderers, they also have to remember to change the CRC
hack level for the best experience with the selected renderer.
This commit adds a new automatic CRC level that selects the recommended
CRC level for the current renderer, so the user doesn't have to make the
change manually.
coauthor: turtleli
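A sketch of the idea; the enum values and the per-renderer recommendations below are illustrative, not the exact GSdx tables:

```cpp
// Sketch only: hypothetical enums and mapping.
enum class CRCLevel { None, Partial, Full, Aggressive, Automatic };
enum class Renderer { D3D11_HW, OpenGL_HW, Software };

static CRCLevel ResolveCRCLevel(CRCLevel configured, Renderer renderer)
{
    if (configured != CRCLevel::Automatic)
        return configured; // the user forced a level, respect it

    // "Automatic" picks the recommended level per renderer, so the user
    // no longer has to edit the setting after switching renderers.
    switch (renderer)
    {
        case Renderer::OpenGL_HW: return CRCLevel::Partial; // GL emulates more effects natively
        default:                  return CRCLevel::Full;    // D3D relies on more hacks
    }
}
```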
If OpenGL software is the saved ini renderer and F9 is pressed to toggle
to the hardware renderer, depth emulation will be disabled. This fixes
that issue.
Previously, GSdx's automatic output circuit selection wasn't good: it simply defaulted to the second output circuit, even when the first output circuit was also enabled. The new auto-selection algorithm returns the merged rectangle dimensions when both output circuits are enabled; if the condition for merging is not satisfied, it returns the bigger output circuit.
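A sketch of the selection logic described above; the types and the merge-condition parameter are hypothetical stand-ins for the GSdx internals:

```cpp
// Sketch only: EN0/EN1 mirror the PMODE enable bits.
struct Rect
{
    int left, top, right, bottom;
    int width()  const { return right - left; }
    int height() const { return bottom - top; }
};

Rect PickOutputRect(bool en0, bool en1, const Rect& r0, const Rect& r1,
                    bool merge_ok, const Rect& merged)
{
    if (en0 && en1)
    {
        if (merge_ok)
            return merged; // both circuits active and mergeable: use the union

        // Merge condition failed: take the bigger circuit instead of
        // blindly defaulting to the second one.
        return (r1.width() * r1.height() > r0.width() * r0.height()) ? r1 : r0;
    }

    return en1 ? r1 : r0; // only one circuit enabled
}
```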
This reverts commit d6383e6c21.
It caused a regression in Everybody's Golf 4/Hot Shots Golf 4, breaking the rendering when depth emulation is disabled or when using a Direct3D hardware renderer.
The code is now a mirror of ::add, so one insert == one erase.
This way it won't crash on future updates, and it will support future GS
memory wrapping improvements.
ZoE2:
RemoveAt overhead plummets to 0.5%. It was 17%!
However, insertion is a bit slower, due to the begin() call after the push_front.
v2: use std:: for lists and arrays
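A sketch of the pattern, assuming a hypothetical cache keyed by an index map; it shows both the extra begin() after push_front and why RemoveAt becomes cheap:

```cpp
#include <list>
#include <unordered_map>

// Sketch only: types and keys are illustrative.
struct Cache
{
    std::list<int> m_lru;                                      // payload is illustrative
    std::unordered_map<int, std::list<int>::iterator> m_index; // key -> list node

    void Add(int key, int value)
    {
        m_lru.push_front(value);
        m_index[key] = m_lru.begin(); // the extra begin() after push_front
    }

    void RemoveAt(int key) // mirrors Add: 1 insert == 1 erase
    {
        auto it = m_index.find(key);
        if (it != m_index.end())
        {
            m_lru.erase(it->second); // O(1) erase, no linear search
            m_index.erase(it);
        }
    }
};
```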
This reverts commit f77c1900fa.
Conflicts:
plugins/GSdx/GSTextureCache.cpp
Another fix was done later for the Jak cutscene (or FMV) case. One game got a regression (I don't remember which).
Previously, we only calculated the width of a single output circuit, which led to missing a single pixel from the other output circuit, which in turn caused offset issues in Persona games. I have customized GetDisplayRect() to also calculate the dimensions of the merged rectangle when both output circuits are enabled through the PMODE register, so this hack is no longer needed. :)
TL;DR: the above commit of mine accurately handles the offset issues by calculating the union of the rects, removing this stupid hack. (Not insulting any other developers; this stupid hack was mine. :)
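A sketch of the merged-rectangle calculation, with a plain struct standing in for GSVector4i (this is the `merged` input assumed in the selection sketch earlier):

```cpp
#include <algorithm>

// Sketch only: plain struct standing in for GSVector4i.
struct Rect { int left, top, right, bottom; };

// Union of both output circuits: keeps the single-pixel row/column that a
// lone circuit's rectangle would miss (the source of the Persona offsets).
Rect MergedDisplayRect(const Rect& r0, const Rect& r1)
{
    return Rect{ std::min(r0.left,  r1.left),  std::min(r0.top,    r1.top),
                 std::max(r0.right, r1.right), std::max(r0.bottom, r1.bottom) };
}
```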
Passes the merged output circuit as the base size for the texture cache scaling code. Helps fix scaling issues where games use both output circuits for rendering.
Future note: alter the behavior of the IsEnabled() check, which always prefers the second output circuit for some weird reason. I plan on changing it to a better auto output circuit selection mechanism, but that could be done some time in the future.
Previously, the dedicated custom resolution scaling equation was ignored for the second SetScale() call; generalizing the equations also fixes the DMC scaling issue at custom resolutions. Also remove unnecessary checks for null scale factors. A null (zero) scale factor can only occur at custom resolutions, and only in cases where the output circuit isn't ready yet, so the ideal way is to handle all the required output circuit conditions in m_renderer->CanUpscale() itself.
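A sketch of the generalized equation with the readiness check folded into CanUpscale(); all names here are hypothetical:

```cpp
// Sketch only: illustrative types and names.
struct Size { int w, h; };

bool CanUpscale(const Size& display) // output circuit ready?
{
    return display.w > 0 && display.h > 0;
}

void SetScale(float& sx, float& sy, const Size& display, const Size& custom)
{
    if (!CanUpscale(display))
        return; // circuit not ready yet: keep the old scale, never divide by 0

    // The dedicated custom-resolution equation, applied consistently on
    // every SetScale() call instead of only the first one.
    sx = (float)custom.w / display.w;
    sy = (float)custom.h / display.h;
}
```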
Just clear the buffer. The generic solution would be a copy from buffer A
to buffer B, but it requires:
1/ a big buffer A (otherwise it would overflow)
2/ line width rescaling (+ the upscaling mess support)
It must work fine without it now.
From the Google Code comments, it would be nice to test these games:
* Ar Tonelico 2 (line-in-sprite regression?)
* Breath of Fire: Dragon Quarter (overlaid user interface in the game)
v2: update the DX code to use the correct format
It creates a regression in games that use a small temporary target to
upload textures of various sizes. The initial code was done to handle direct
frame writes (background, FMV), hence a big target.