Issue 1: The depth buffer is wrongly invalidated; only the first page is detected.
Issue 2: The first page seems to be only partially written. Could be a GSdx transfer bug.
Either way, invalidation only supports page granularity.
So here is a quick workaround that clears the depth buffer in case of a very small partial write.
It might be worth checking for regressions on Nocturne/Digital Saga.
GSOffset is already based on a lookup of PSM/BP/BW. Coverage only adds
the size parameters (so only 256 possibilities).
It replaces the hash lookup with an essentially free array access.
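A minimal sketch of the idea, with hypothetical names (the real GSOffset/coverage types are more involved): since the size parameters can only take 256 values, a flat array indexed by that value replaces the hash-map lookup.
***********************************************************************
#include <array>
#include <cstdint>
#include <memory>

// Hypothetical stand-in; the real GSdx coverage data is richer.
struct PageCoverage { /* pages covered by a given texture size */ };

struct GSOffsetSketch
{
    // Only 256 possible size encodings -> a flat array replaces the hash map.
    std::array<std::unique_ptr<PageCoverage>, 256> m_coverage;

    const PageCoverage& GetCoverage(uint8_t size_index)
    {
        auto& slot = m_coverage[size_index];
        if (!slot)
            slot = std::make_unique<PageCoverage>(); // computed once, then free
        return *slot;
    }
};
***********************************************************************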
The hypothesis is that a game will use a depth format (aka Z32/Z24/Z16/Z16S)
when sampling a depth texture as color. Technically one could use
a standard color format, but the block/pixel order won't be the same.
(Otherwise I'm screwed.)
=> Hypothesis invalid on GoW. They just do scrambled rendering...
Lookup info:
* The first list searched is the depth pool, since we are looking for a
depth texture.
* The second is the render target pool (in case a depth buffer was already
converted to a render target).
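A hedged sketch of that lookup order (the container names and the key are assumptions, not the real GSTextureCache members):
***********************************************************************
#include <cstdint>
#include <list>

struct Target { uint32_t bp; }; // hypothetical cached surface, keyed by base pointer

Target* LookupDepthSource(uint32_t bp,
                          const std::list<Target*>& depth_pool,
                          const std::list<Target*>& rt_pool)
{
    // 1) Depth pool first: we are searching for a depth texture.
    for (Target* t : depth_pool)
        if (t->bp == bp)
            return t;

    // 2) Then the render target pool, in case the depth buffer was
    //    already converted to a render target.
    for (Target* t : rt_pool)
        if (t->bp == bp)
            return t;

    return nullptr;
}
***********************************************************************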
To avoid any CPU overhead, the source will be a pointer to the real texture.
* Conversion (if a float texture) will be done on the fly by the shader (GPU).
* Relative rescaling won't be supported. The texture must be fetched with
integral coordinates.
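For illustration only, a sketch of the math such a conversion performs, written here as C++ (the actual GPU shader code and channel layout may differ):
***********************************************************************
#include <cmath>
#include <cstdint>

// Sketch of the on-the-fly conversion for a Z24 sample: rescale the
// normalized float depth to an integer Z and unpack it into 8-bit
// channels. No filtering is possible on such data, which is why the
// texture must be fetched at integral coordinates.
void DepthToColor(float depth, uint8_t rgba[4])
{
    uint32_t z = (uint32_t)std::lround((double)depth * 0xFFFFFF); // 24-bit Z
    rgba[0] = (z >>  0) & 0xFF;
    rgba[1] = (z >>  8) & 0xFF;
    rgba[2] = (z >> 16) & 0xFF;
    rgba[3] = 0; // Z24 has no alpha
}
***********************************************************************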
Cache page coverage of texture into a hash map
Test done on Champion of Norrath (paltex + DisablePartialInvalidation).
Profiler:
* Self time of GSTextureCache::SourceMap::Add: 5.39% => 0.23%
* Self time of GSTextureCache::LookupSource: 15.27% => 10.82%
It is hard to measure on CoN as it depends on memory transfers. Seems to be
5-10 fps faster.
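A minimal sketch of the caching scheme, assuming a hypothetical packed layout key and helper (the real code caches richer data):
***********************************************************************
#include <cstdint>
#include <unordered_map>
#include <vector>

// Hypothetical: the pages a texture covers, keyed by its layout
// (BP/BW/PSM/size packed into one 64-bit key), so SourceMap::Add does
// not recompute the coverage for every lookup.
using PageList = std::vector<uint32_t>;

static PageList ComputePages(uint64_t /*layout_key*/)
{
    return {}; // placeholder: walk the block layout and collect page numbers
}

std::unordered_map<uint64_t, PageList> s_coverage_cache;

const PageList& GetPageCoverage(uint64_t layout_key)
{
    auto it = s_coverage_cache.find(layout_key);
    if (it == s_coverage_cache.end())
        it = s_coverage_cache.emplace(layout_key, ComputePages(layout_key)).first;
    return it->second;
}
***********************************************************************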
Someone ought to add the Windows option too (and DisablePartialInvalidation
as well).
It might break a couple of games, but most of them run better with depth
enabled.
* Silent Hill 2 doesn't need the CRC hack.
* GSRenderer: no need to explicitly set the bottom value for r.
* Texture Cache: removed a check that could never take the true branch.
Try to avoid random black screen frames.
v2: don't force the preload hack on the frame;
it creates a ghost image over FMVs.
v3: support an offset within a frame.
It often happens that the game tries to upload the FMV directly, which
typically gave a black screen.
This commit fixes Rule of Rose and will hopefully fix various black-screen
FMVs.
Performance impact must be tested, and I'm afraid of strange texture cache behavior.
V2: also check the size of the transfer
V3: add support for 16-bit formats
V4: avoid division by zero
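A heavily hedged sketch of the detection described above (the descriptor fields, the size threshold, and the page math are assumptions modeled on the GS memory layout, not the commit's actual code):
***********************************************************************
#include <cstdint>

// Hypothetical descriptors, for illustration only.
struct Transfer { uint32_t bp; uint32_t bw; int w, h; };
struct Frame    { uint32_t bp; };

// Heuristic: treat an EE transfer landing at (or inside) the displayed
// frame as an FMV upload and compute its vertical offset in the frame.
bool IsFmvUpload(const Transfer& t, const Frame& f, int page_height, int* y_offset)
{
    if (t.bp < f.bp)
        return false;
    if (t.bw == 0) // V4: avoid division by zero
        return false;
    if (t.w < 64 || t.h < 64) // V2: reject small transfers (threshold assumed)
        return false;

    // V3: offset within the frame. A page is 32 blocks; a buffer row is
    // `bw` pages wide. 16-bit formats use a taller page, hence the
    // page_height parameter.
    uint32_t page = (t.bp - f.bp) / 32;
    *y_offset = (int)(page / t.bw) * page_height;
    return true;
}
***********************************************************************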
It actually removes the previous hack that read back the full target.
Unfortunately, Snowblind-engine games use a big target, so the read was very
big too (1280x448), which is a performance killer, whereas the game only
requires 24x12 texels.
Gives a 2x speed boost on Champion of Norrath!
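A sketch of the difference with a made-up helper: clamp the readback to the rectangle of texels the draw actually samples instead of reading the whole render target.
***********************************************************************
#include <algorithm>

struct Rect { int left, top, right, bottom; };

// Hypothetical helper: intersect the texels the draw actually samples
// with the target bounds, so we read back e.g. 24x12 texels instead of
// the whole 1280x448 render target.
Rect ClampReadback(const Rect& needed, int target_w, int target_h)
{
    return Rect{
        std::max(needed.left, 0),
        std::max(needed.top, 0),
        std::min(needed.right, target_w),
        std::min(needed.bottom, target_h),
    };
}
***********************************************************************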
The game uses a very special texture with lots of repetition.
It is much faster to send the full texture rather than trying to partially
invalidate it.
On my GS dump:
FPS: 29 => 68!
Avoid a crash on Onimusha 3 (PAL 60Hz).
In theory it would be better to find the root cause of the overflow, i.e.
somewhere in the code below: the dirty rectangle is too big.
***********************************************************************
if (rowsize > 0 && offset % rowsize == 0)
{
    int y = GSLocalMemory::m_psm[psm].pgs.y * offset / rowsize;

    if (r.bottom > y)
    {
        GL_CACHE("TC: Dirty After Target(%s) %d (0x%x)", to_string(type),
            t->m_texture ? t->m_texture->GetID() : 0,
            t->m_TEX0.TBP0);

        // TODO: do not add this rect above too
        t->m_dirty.push_back(GSDirtyRect(GSVector4i(r.left, r.top - y, r.right, r.bottom - y), psm));
        t->m_TEX0.TBW = bw;
        continue;
    }
}
***********************************************************************
So, as a temporary solution (that will likely stay for a couple of years),
the buffers were increased.
The height of the dirty rectangle must be the GS size of the RT. Of course,
an RT doesn't have any height, so we compute the safest maximal value.
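For illustration, a heavily hedged sketch of one way to bound that height (the constants come from the GS memory layout; the heuristic actually used in the code may differ):
***********************************************************************
#include <algorithm>
#include <cstdint>

// GS local memory is 4MB; a page is 8KB, i.e. 32 blocks of 256 bytes.
constexpr uint32_t MAX_PAGES       = 512;
constexpr uint32_t BLOCKS_PER_PAGE = 32;

// Hypothetical: worst-case height in pixels for a target at block
// pointer tbp0, with a buffer row spanning `tbw` pages and a page
// being pgs_y pixels tall.
int MaxSafeHeight(uint32_t tbp0, uint32_t tbw, int pgs_y)
{
    uint32_t first_page = std::min(tbp0 / BLOCKS_PER_PAGE, MAX_PAGES);
    uint32_t rows = (MAX_PAGES - first_page) / std::max(tbw, 1u);
    return (int)rows * pgs_y;
}
***********************************************************************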
Fix issue #987
Candidate for 1.4 release
1. Add GS_Renderer enum
Replace all instances of int/uint32 renderer identifiers with a strongly
typed enum and appropriate casts.
Only instances in GS[*].cpp/h classes were touched. GPU[*].cpp/h classes do
not need to follow the same convention.
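A minimal sketch of such a strongly typed enum (the value names are assumptions, not the exact ones from the commit):
***********************************************************************
#include <cstdint>

// Hypothetical value names; the point is the strong typing, which makes
// every conversion from the raw ini integer an explicit cast.
enum class GS_Renderer : int32_t
{
    DX9_HW,
    DX9_SW,
    DX11_HW,
    DX11_SW,
    OGL_HW,
    OGL_SW,
    Null,
};

// Loading the legacy integer config value now requires an explicit cast:
GS_Renderer renderer = static_cast<GS_Renderer>(0 /* ini value */);
***********************************************************************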
2. Add default renderer according to OS
The default renderer is OS dependent (Win -> Dx9HW, others -> OGLHW).
Consequently, one should always check against the appropriate default value
on config load.
The old behaviour was only problematic - if at all - when the respective
element in gsdx.ini was missing, and probably didn't create issues even
then. The current implementation is still more stable and does not depend
on the implementation of GS.cpp -> GetConfig().
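A sketch of the OS-dependent default, assuming the enum sketched above (the helper name is hypothetical):
***********************************************************************
// Hypothetical helper: resolve the platform default once, then compare
// loaded config values against it instead of a hard-coded constant.
static GS_Renderer GetDefaultRenderer()
{
#ifdef _WIN32
    return GS_Renderer::DX9_HW; // Win -> Dx9HW
#else
    return GS_Renderer::OGL_HW; // others -> OGLHW
#endif
}
***********************************************************************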
The goal is to check the impact on games that have wrong RT content.
It helps Smash Court Tennis Pro Tournament 2 a bit, but the game suffers
from another texture cache bug (the RT BW is 10 whereas the texture BW is 8).
Note: Armored Core: Last Raven must be tested (it is the only game so far
that relies on the option, and I didn't want to add a new one).
A typical wrong draw:
1/ draw in 32 bits
2/ draw in 24 bits
3/ use alpha as a texture (must reuse the GPU data)
4/ write alpha from the EE
5/ use alpha as a texture (must upload the new data)
This commit fixes step 5.
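A hedged, simplified sketch of the idea behind the fix (the real texture cache tracks dirty rectangles per target with much more state): an EE write overlapping a cached target must mark it dirty so that the next alpha lookup re-uploads from local memory instead of reusing stale GPU data.
***********************************************************************
#include <vector>

struct Rect { int left, top, right, bottom; };

struct CachedTarget
{
    std::vector<Rect> m_dirty; // regions written by the EE since last sync

    // Step 4: the EE writes alpha -> remember the region as dirty.
    void InvalidateLocalMem(const Rect& r) { m_dirty.push_back(r); }

    // Step 5: sampling alpha again -> dirty regions force an upload of
    // the new data instead of reusing the GPU copy (the step 3 fast path).
    bool NeedsUpload() const { return !m_dirty.empty(); }
};
***********************************************************************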
Fix #917 (Conflict - Desert Storm).
A couple of useless members were removed too.
Also fix wnd initialization.
Coverity:
CID 146955 (#1 of 1): Uninitialized pointer read (UNINIT)
18. uninit_use: Using uninitialized value wnd[i].
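The fix pattern for such a report is to null-initialize the array before use; a sketch (the array size is assumed for illustration):
***********************************************************************
class GSWnd; // forward declaration; the real class lives in GSdx

// Sketch: null-initialize every slot up front so the later read of
// wnd[i] can never see an uninitialized pointer.
GSWnd* wnd[2] = {nullptr, nullptr};
***********************************************************************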
gsdx changes:
* Remove the native resolution checkbox from the GUI and rework the
associated code
* Small changes to the Windows and Linux GUIs
* Support 8x native resolution
* Fix the use case of a custom resolution width smaller than the native
width