SSAA relies on MSAA being active to work. We previously only supported 4x SSAA, when in fact SSAA can be enabled at any MSAA level.
I even managed to run 64x MSAA + SSAA on my Quadro, which made for some pretty sleek-looking games. They were very cinematic, though.
This also properly fixes up SSAA and MSAA support in GLES, which were previously broken when stereo rendering was enabled.
GLES now properly supports MSAA, including stereo rendering with MSAA enabled (given the proper extensions).
Baten Kaitos allocates its XFBs from a tagged heap
structure. With the old calculation, too many lines
were being written, so the tag of the allocation
after the XFB was being corrupted. Fixes the crash
mentioned in this comment:
https://code.google.com/p/dolphin-emu/issues/detail?id=7734#c6
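The shape of the fix, sketched (the names here are made up, not the actual Dolphin code):

#include <algorithm>

// Hypothetical illustration: never write more lines than the XFB
// allocation holds, so the heap tag the game stores right after it
// survives.
static unsigned int ClampXfbLines(unsigned int lines_from_calculation,
                                  unsigned int lines_allocated)
{
  return std::min(lines_from_calculation, lines_allocated);
}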
Samsung updated the video drivers on the SGS6, which introduced a bug when disabling vsync.
Both the driver versions are r5p0, but the md5sums of the blob differ.
To work around the issue, make sure to never disable vsync by calling eglSwapInterval.
We can't actually determine the driver version on Android yet,
so until a driver lands that exposes the driver version in the GL_VERSION string,
we will need to keep this workaround enabled at all times, which is a bit annoying.
Current Mali drivers return the video driver version in one of the EGL strings you can query.
The issue is that Android eats all of those strings, so we can't get at it.
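Concretely, the workaround amounts to something like this (a sketch; the flag name is ours):

#include <EGL/egl.h>

// Route all swap-interval changes through one helper that refuses to
// ever disable vsync on the broken driver.
static void SafeSetSwapInterval(EGLDisplay display, EGLint interval,
                                bool has_broken_vsync_toggle)
{
  if (has_broken_vsync_toggle && interval == 0)
    return;  // never call eglSwapInterval(display, 0) on this driver
  eglSwapInterval(display, interval);
}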
Their new driver that supports GLES 3.1 + AEP has issues.
At the very least, they don't fully implement all of the geometry shader features, which causes shader linker errors when we attempt to use them.
I don't have a device, so I can't fully test; until I do, I'm going to blanket-disable the whole thing.
The Star Wars games really push the hardware to its limits, which can cause the shaders that are produced to be 18 KB or more.
Double our maximum shader size to compensate.
Fixes issue #8860
The last heuristic wasn't quite smart enough and had a few
false positives in Mario Kart: Double Dash and Metroid Prime 2.
Now we only activate if the game is rendering a 16:9
projection to a 4:3 viewport.
Someone suggested on IRC that we should make a database of memory
locations in GameCube games which contain the 'Widescreen' setting
so we can automatically detect if the game is in 4:3 or 16:9 mode.
But that's hardly optimal when the game actually tells the GPU
what aspect ratio to render in. Ten minutes and six lines of code later,
this is the result. Not only does it detect the correct aspect ratio,
it does so on the fly.
I'm a little surprised nobody thought of doing this before.
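The check boils down to something like this (a sketch under the usual perspective-projection math; the tolerances and parameter names are ours):

#include <cmath>

// For a typical perspective projection, yscale / xscale equals the
// aspect ratio the game is rendering for. If the projection says
// ~16:9 while the viewport is 4:3, the game is rendering anamorphic
// widescreen and we can activate.
static bool LooksLikeAnamorphicWidescreen(float proj_x_scale,
                                          float proj_y_scale,
                                          float viewport_aspect)
{
  constexpr float k16by9 = 16.0f / 9.0f;
  constexpr float k4by3 = 4.0f / 3.0f;
  constexpr float kTolerance = 0.05f;
  const float projection_aspect = std::abs(proj_y_scale / proj_x_scale);
  return std::abs(projection_aspect - k16by9) < kTolerance &&
         std::abs(viewport_aspect - k4by3) < kTolerance;
}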
When calculating the size of the undisplayed margin in the case where
fbWidth != fbStride for RealXFB for displaying in the output window,
we do not scale by IR, since RealXFB is implicitly 1x.
This bug has been reported to IMGTec at https://pvrsupport.imgtec.com/ticket/472
The basic idea of the bug is that doing a bitwise AND of a constant vector with a constant scalar causes PowerVR's shader
compiler to fail out with a very non-descriptive message.
We work around the issue by turning the scalar mask into a vector.
In particular, this fixes the 6666 colour format.
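For illustration, the workaround changes the generated GLSL roughly like so (the mask value is invented; this is not the exact generated code):

// Generated GLSL before (trips PowerVR's shader compiler):
//     ivec4 masked = color & 0xF8;
// Generated GLSL after (the scalar mask becomes a vector):
//     ivec4 masked = color & ivec4(0xF8, 0xF8, 0xF8, 0xF8);
static const char* const s_mask_snippet =
    "ivec4 masked = color & ivec4(0xF8, 0xF8, 0xF8, 0xF8);\n";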
We were loading from the wrong location, which was causing /terrible/ colour changes.
This also fixes a bug in all the colour formats (except 888) where the unaligned path was loading into the wrong register.
- Fixes remaining lighting issues (Mario Tennis, etc.)
- Applies the same fixes to the Software Renderer
- Corrects a zero-length light direction vector to resolve to the normal direction (essentially becomes LIGHTDIF_NONE, which was what I was after)
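In sketch form (the vector helpers are ours, not Dolphin's):

#include <cmath>

struct Vec3 { float x, y, z; };

static float LengthSquared(const Vec3& v)
{
  return v.x * v.x + v.y * v.y + v.z * v.z;
}

// A zero-length light direction resolves to the surface normal, so the
// diffuse term degenerates to the LIGHTDIF_NONE behaviour.
static Vec3 SafeLightDir(const Vec3& to_light, const Vec3& normal)
{
  const float len_sq = LengthSquared(to_light);
  if (len_sq == 0.0f)
    return normal;
  const float inv_len = 1.0f / std::sqrt(len_sq);
  return {to_light.x * inv_len, to_light.y * inv_len, to_light.z * inv_len};
}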
The new implementation has 3 options:
SyncGpuMaxDistance
SyncGpuMinDistance
SyncGpuOverclock
MaxDistance controls how many CPU cycles the CPU is allowed to run ahead
of the GPU. Values that are too low slow things down extremely; values that
are too high are as good as unsynchronized, and half of the games will crash.
MinDistance (negative) sets how many cycles the GPU is allowed to run ahead
of the CPU. As we are used to emulating an infinitely fast GPU, this may be
set to any large (negative) number.
The last parameter hacks in a faster (>1.0) or slower (<1.0) GPU. As we don't
emulate GPU timing very well (e.g. we skip the timing of the pixel stage completely),
an overclock factor of ~0.5 is often much more accurate than 1.0.
This fixes issue 6563:
https://code.google.com/p/dolphin-emu/issues/detail?id=6563
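The core of the CPU-side wait, sketched (our own simplification, not the actual fifo code):

#include <condition_variable>
#include <mutex>

static std::mutex s_mutex;
static std::condition_variable s_cv;
static long long s_gpu_cycles = 0;  // advanced by the GPU thread

// CPU thread: block whenever we get more than MaxDistance cycles ahead
// of the GPU thread.
static void CpuWaitForGpu(long long cpu_cycles, long long max_distance)
{
  std::unique_lock<std::mutex> lock(s_mutex);
  s_cv.wait(lock,
            [&] { return cpu_cycles - s_gpu_cycles <= max_distance; });
}

// GPU thread: report progress and wake the CPU if it is waiting.
static void GpuReportProgress(long long cycles_done)
{
  {
    std::lock_guard<std::mutex> lock(s_mutex);
    s_gpu_cycles += cycles_done;
  }
  s_cv.notify_one();
}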
This PR adds a second map to the texture cache, which uses the hash as its key. Cache entries from this new map are used only if the address matches or if the texture was fully hashed. This restriction avoids false-positive cache hits, and it creates a possible situation where the safe texture cache accuracy could be faster than the fast one.
"Small textures" means up to 1 KB for fast texture cache accuracy, 4 KB for medium, and all textures for safe accuracy.
Since this adds a small overhead to all texture cache handling, some regression testing would be nice. Games which use a lot of textures at the same time should be affected the most.
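Rough outline of the new lookup (types heavily simplified):

#include <cstdint>
#include <unordered_map>

struct TCacheEntry
{
  uint32_t address;
  bool fully_hashed;  // safe accuracy, or a small texture hashed whole
};

static std::unordered_multimap<uint64_t, TCacheEntry*> s_hash_map;

// Only trust a pure hash match if the address also matches or the
// texture was fully hashed; this avoids false-positive cache hits.
static TCacheEntry* LookupByHash(uint64_t hash, uint32_t address)
{
  auto range = s_hash_map.equal_range(hash);
  for (auto it = range.first; it != range.second; ++it)
  {
    TCacheEntry* entry = it->second;
    if (entry->address == address || entry->fully_hashed)
      return entry;
  }
  return nullptr;
}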
I tried to change messages that contained instructions for users,
while avoiding messages that are so technical that most users
wouldn't understand them even if they were in the right language.
Address static memory relative to a base register, analogous to what we're
doing with PPCSTATE in the CPU JIT. This allows executable memory for
the vertex loader JIT to be allocated anywhere, not just within 2 GiB of
static data.
Fixes issue 8180.
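The idea, sketched (the instruction sequences in the comments are illustrative; the real change uses Dolphin's emitter):

#include <cstdint>

struct LoaderState { uint32_t count; float scale; };
static LoaderState g_loader_state;

// Before: RIP-relative addressing, which only encodes if the static
// data is within +/-2 GiB of the generated code:
//     mov eax, [rip + disp32_to_g_loader_state]
// After: materialize the base once in the prologue, then address
// everything relative to it, just like PPCSTATE in the CPU JIT:
//     mov r14, imm64        ; r14 = &g_loader_state
//     mov eax, [r14 + 0]    ; offsetof(LoaderState, count)
// The 64-bit immediate works no matter where the executable memory
// was allocated.
static const uintptr_t kLoaderStateBase =
    reinterpret_cast<uintptr_t>(&g_loader_state);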
Yet another story of games loading weird shit into registers.
For some reason, Burnout 2 would (in rare situations) load invalid
addresses into cp_state.array_bases. What would the real hardware
do in this situation? Who knows; Burnout 2 doesn't actually enable
the vertex array with the invalid address, so nothing kinky happens.
But Dolphin tries to optimise things and starts using the address
as soon as it is loaded. This causes GetPointer (which is
now much more vocal) to throw an error.
The Fix: We don't call GetPointer until we are sure the vertex array
has been enabled.
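Sketch of the fix (names loosely follow the CP state; the helpers are simplified stand-ins):

#include <cstdint>

constexpr int kNumVertexArrays = 12;

struct CPState
{
  uint32_t array_bases[kNumVertexArrays];
};

static CPState cp_state;
static bool s_array_enabled[kNumVertexArrays];
static const uint8_t* s_cached_ptr[kNumVertexArrays];

// Stand-in for Memory::GetPointer; the real one is now much more
// vocal about invalid addresses.
static const uint8_t* GetPointer(uint32_t address)
{
  static uint8_t fake_ram[0x1000];
  return &fake_ram[address % sizeof(fake_ram)];
}

// Resolve the array base pointers lazily: only once a vertex array is
// actually enabled do we translate its (possibly garbage) address.
static void RefreshArrayPointers()
{
  for (int i = 0; i < kNumVertexArrays; i++)
  {
    if (s_array_enabled[i])
      s_cached_ptr[i] = GetPointer(cp_state.array_bases[i]);
  }
}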
- FileSearch is now just one function, and it converts the original glob
into a regex on all platforms rather than relying on native Windows
pattern matching there and a complete hack elsewhere (see the sketch
after this list). It now supports recursion out of the box rather than
manually expanding into a full list of directories at multiple call sites.
- This adds a GCC >= 4.9 dependency, due to older versions having an
outright broken <regex>. MSVC is fine with it.
- ScanDirectoryTree returns the parent entry rather than filling parts
of it in via reference. The count is now stored in the entry like it
was for subdirectories.
- .glsl file search is now done with DoFileSearch.
- IOCTLV_READ_DIR now uses ScanDirectoryTree directly and sorts the
results after replacements for better determinism.
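The glob-to-regex conversion looks roughly like this (simplified; the real code must also escape the remaining regex metacharacters):

#include <regex>
#include <string>

static std::regex GlobToRegex(const std::string& glob)
{
  std::string re;
  for (const char c : glob)
  {
    switch (c)
    {
    case '*': re += ".*";  break;  // any run of characters
    case '?': re += '.';   break;  // any single character
    case '.': re += "\\."; break;  // literal dot
    default:  re += c;     break;
    }
  }
  return std::regex(re);
}
// Example: GlobToRegex("*.glsl") matches "stereo.glsl".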
Though just returning the last written value sounds better, this crashes Paper Mario.
In my opinion, gfx issues are fine on older GPUs, but crashes should not happen.
There is no nice way to correctly "detect" the "used" memory, so we just say
we're fine with using 50% of the physical memory for custom textures.
This will fix out-of-memory crashes, but we still might run into swapping issues.
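A sketch of the budget calculation (POSIX-only; Windows would use GlobalMemoryStatusEx instead):

#include <cstddef>
#include <unistd.h>

// Budget custom textures at 50% of physical RAM.
static size_t CustomTextureBudget()
{
  const long pages = sysconf(_SC_PHYS_PAGES);
  const long page_size = sysconf(_SC_PAGE_SIZE);
  if (pages <= 0 || page_size <= 0)
    return 0;
  return static_cast<size_t>(pages) * static_cast<size_t>(page_size) / 2;
}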
This was causing a race condition where the "absurdly large aux buffer"
panic alert would be triggered in the last bit of fifo processing on the
CPU thread in deterministic mode (i.e. netplay). SyncGPU is supposed to
move the auxiliary queue data to the beginning of the containing buffer
so we don't have to deal with wraparound; if GpuRunningState is false,
however, it just returns, because it's set to false by another thread -
thus it doesn't know whether RunGpuLoop is still executing (in which
case it can't just reset the pointers, because it may still be using the
buffer) or not (in which case the condition variable it normally waits
for to avoid the previous problem will never be signaled). However,
SyncGPU's caller PushFifoAuxBuffer wasn't aware of this, so if the
buffer was filling at just the right time, it'd stay full and that
function would complain that it was about to overflow it. Similar
problem with ReadDataFromFifoOnCPU afaik. Fix this by returning early
from those as well; other callers of SyncGPU should be safe. A
*slightly* cleaner alternative would be giving the CPU thread a way to
tell when RunGpuLoop has actually exited, but whatever, this works.
This drops the "feature" of loading level 0 from the custom texture
and all other levels from the native one if the size matches.
But in my opinion, when a custom texture only provides one level,
no other levels should be used at all.
A number of games make an EFB copy in I4/I8 format, then use it as a
texture in C4/C8 format. Detect when this happens, and decode the copy on
the GPU using the specified palette.
This has a few advantages: it allows using EFB2Tex for a few more games,
it preserves the resolution of scaled EFB copies, and it's probably a
bit faster.
D3D only at the moment, but porting to OpenGL should be straightforward.
The obvious question here is, why does it matter if we round or truncate?
The key is that the GC/Wii does fixed-point interpolation, whereas PC GPUs do
floating-point interpolation. Discarding fractional bits makes the conversion
from floating-point to fixed point give more consistent results.
I'm not confident this is really the right fix, or that my explanation is
completely correct; ideally, we don't want to depend on floating-point
interpolation at all.
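A toy example of where the two differ (numbers invented for illustration):

#include <cstdio>

int main()
{
  // The hardware interpolates in fixed point and never sees fractional
  // bits. A float interpolator that lands at 127.9997 rounds up to 128
  // but truncates down to 127 -- a one-LSB difference that games
  // relying on exact colour values can notice.
  const float interpolated = 127.9997f;
  const int rounded = static_cast<int>(interpolated + 0.5f);
  const int truncated = static_cast<int>(interpolated);
  std::printf("rounded=%d truncated=%d\n", rounded, truncated);  // 128 127
  return 0;
}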
This is the same trick which is used for Metroid's fonts/texts, but applied to all textures. If two different textures at the same address are loaded during the same frame, create a second entry instead of overwriting the existing one. If the entry were overwritten in this case, there wouldn't be any caching, which results in a big performance drop.
The restriction to textures loaded during the same frame prevents creating lots of entries when textures are used in the regular way. This restriction is new; overwriting textures instead of creating new ones is faster if the old ones are unlikely to be used again.
Since this would break EFB copies, don't do it for EFB copies.
Castlevania 3 goes from 80 fps to 115 fps for me.
There might be games that need a higher texture cache accuracy with this, but those games should also see a performance boost from this PR.
Some games which use paletted textures that are not EFB copies might be faster now, and may also no longer require a higher texture cache accuracy. (Similar situation as PR https://github.com/dolphin-emu/dolphin/pull/1916.)
In nearly all direct loadstore cases we can use unscaled loadstores.
We still have a fallback in case we hit a situation where we /can't/ do an unscaled loadstore.
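For reference, the AArch64 encoding constraints that drive this (standard ISA facts, not Dolphin-specific code):

#include <cstdint>

// LDUR/STUR (unscaled) take any signed 9-bit byte offset, while the
// scaled LDR/STR immediate forms need an unsigned offset that is a
// multiple of the access size. So unscaled covers nearly every direct
// access, and we only fall back when the offset is out of range.
static bool FitsUnscaledOffset(int64_t offset)
{
  return offset >= -256 && offset <= 255;
}

static bool FitsScaledOffset(int64_t offset, int access_size_bytes)
{
  return offset >= 0 && offset % access_size_bytes == 0 &&
         offset / access_size_bytes <= 4095;
}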
When enabled, the silent option will avoid popping up dialog boxes for
overwrite confirmation or codec selection. The codec selection defaults to
uncompressed RGB.
This is required for FifoCI on Windows, which needs to drive Dolphin from the
command line exclusively.
We want to move the vertex by 1/12 pixel, but the old code
missed the perspective division. By multiplying by pos.w,
the position is moved correctly after the perspective division.
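The algebra, as a sketch with a minimal vertex type of our own:

struct ClipPos { float x, y, z, w; };

// A constant screen-space nudge must survive the perspective divide:
//     (pos.x + offset * pos.w) / pos.w == pos.x / pos.w + offset
// so we pre-multiply the 1/12-pixel offset by w before adding it.
static ClipPos ApplySubpixelOffset(ClipPos pos, float offset_x,
                                   float offset_y)
{
  pos.x += offset_x * pos.w;
  pos.y += offset_y * pos.w;
  return pos;
}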
On D3D, we read from the depth buffer using the format
DXGI_FORMAT_R24_UNORM_X8_TYPELESS (essentially, the "r" component contains
the depth, and the other components contain nothing).
Well, that's not strictly true, but trying to memcpy between two buffers
using different row lengths and different strides is at minimum extremely
unintuitive.
This changes the behavior when both textures are available. The old code
tried to load the modified texID; the new code tries the unmodified texID first.
- Calculate the ZSlope on every flush, but only set the pixel shader constant on Reset Buffer when zfreeze is active
- Fixed another pixel shader bug in D3D that was giving me grief
Results are still not correct, but things are getting closer.
* Don't cull CULLALL primitives so early so they can be used as reference
planes.
* Convert CalculateZSlope to screenspace coordinates.
* Convert the pixel shader to screenspace coordinates (instead of worldspace
xy coordinates, which was totally wrong).
* Divide depth by 2^24 instead of clamping to 0.0-1.0 as was done
before.
Progress:
* Rogue Squadron 2/3 appear correct in game (videos in the RS2 save file
selection are missing)
* Shadows draw 100% correctly in NHL 2003.
* Mario golf menu renders correctly.
* NFS: HP2, shadows sometimes render on top of the car or below the road.
* Mario Tennis, courts and shadows render correctly, but at the wrong depth.
* Blood Omen 2 doesn't work.
Based on the feedback from pull request #1767, I have incorporated most of
degasus's suggestions here now.
I think we have a real winner here, as moving the code into a
VertexManagerBase function has allowed OGL to utilize zfreeze now
:)
Correct use of the vertex pointer has also corrected most of the issues
found in pull request #1767 that JMC47 reported, which means
Mario Tennis now works for me with no polygon spikes on the characters
anymore! Shadows are still an issue, and probably are in the other games
with shadow problems. Rebel Strike also seems better, but random skybox
glitches can show up.
Initial port of the original zfreeze branch (3.5-1729) by neobrain into the
most recent build of Dolphin.
Makes Rogue Squadron 2 very playable at full speed thanks to recent core
speedups made to Dolphin. Works on DirectX Video plugin only for now.
Enjoy! and Merry Xmas!!
On locales that don't use a period as the decimal separator, this would break us.
For vector values in a configuration we use a comma as the separator, which caused the configuration to balloon to massive sizes due to never saving the values
correctly. Loading would then break, since it would load a million configuration options.
Fixes issue #7569.
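The usual fix, sketched in standard C++ (not necessarily the exact change):

#include <locale>
#include <sstream>
#include <string>

// Serialise with the classic "C" locale so the decimal separator is
// always '.', never the ',' that already separates vector elements in
// the configuration.
static std::string FloatToConfigString(float value)
{
  std::ostringstream oss;
  oss.imbue(std::locale::classic());
  oss << value;
  return oss.str();
}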
Allows the UI to easily check the current exclusive mode state.
This simplifies a few checks and prevents the user from ever getting stuck in fullscreen.
Don't change the texID depending on the tlut_hash for paletted textures that are EFB copies and don't have an entry in the cache for texID ^ tlut_hash. This makes those textures less broken when using EFB to texture.
This doesn't really fix those textures, but it's a step forward. The minimap in Twilight Princess, for example, is in grayscale with this and is more or less usable.
We'd lose data on invalidating them, so just keep them until a new copy is done.
A wrongly scaled copy is better than no copy if the game doesn't create a new one.
This reverts an optimization which isn't worth it, imo. Every texture upload has to allocate VRAM and a staging buffer, so there is no need to do both in the same call.
Previously it was packed into spare slots in clippos.xy and normal.w,
but that was ugly and, more importantly, was causing bugs.
This was discovered during the debugging of a zfreeze branch, which
expected clippos.xy to be xy position coordinates in clipspace (as
the name suggested).
Turns out the stereoscopy shader had also run into this trap, modifying
clippos.x (introducing errors with per-pixel lighting).
This commit has been moved outside of the zfreeze PR for fast merging.
They were used to reduce the amount of flushes, but as we don't
flush anymore on vertex loader changes (only on native
vertex format changes right now), this optimization is now unneeded.
This will allow us to hard code the frac factors within the
vertex loaders.
Fixes a typo where the official IMGTec drivers were claimed to be the OSS driver.
Removes Mali GPU family detection, just like I removed the Adreno family detection.
We don't support Mali Utgard anyway.
If we ever need family detection, we can properly add it; right now it isn't needed.
Adreno 300 and 400 have the same video driver performance issues because they are very similar architectures that share basically everything.
There isn't any need to detect the driver family with Qualcomm anyway. If we ever need family-specific bugs, we can implement real support for that.
The performance issue on the Adreno 400 series was due to us only detecting the Adreno 300 series; on Adreno 400 the driver bugs wouldn't be applied, which caused it to use glBufferSubData, causing the huge performance hit.