I tried to change messages that contained instructions for users,
while avoiding messages that are so technical that most users
wouldn't understand them even if they were in the right language.
We used to use only the texture parameter for all util draw calls,
but AMD seems to have a bug where it uses the sampler parameter
of stage 0 if no sampler is bound to the stage in use.
So as a workaround (and as slightly nicer code), we now use sampler
objects everywhere.
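A minimal sketch of the idea, assuming a GL-style binding model (the helper names here are made up for illustration, not Dolphin's actual code):

    // Create one sampler object with the state the util draws expect.
    GLuint MakeUtilSampler()
    {
      GLuint sampler;
      glGenSamplers(1, &sampler);
      glSamplerParameteri(sampler, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
      glSamplerParameteri(sampler, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
      glSamplerParameteri(sampler, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
      glSamplerParameteri(sampler, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
      return sampler;
    }

    // Bind it alongside the texture so no stage is ever left without a
    // sampler for the driver to fall back from.
    void BindForUtilDraw(GLuint unit, GLuint texture, GLuint sampler)
    {
      glActiveTexture(GL_TEXTURE0 + unit);
      glBindTexture(GL_TEXTURE_2D, texture);
      glBindSampler(unit, sampler);
    }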
- FileSearch is now just one function, and it converts the original glob
into a regex on all platforms rather than relying on native Windows
pattern matching there and a complete hack elsewhere (see the sketch
after this list). It now supports recursion out of the box rather than
manually expanding into a full list of directories in multiple call
sites.
- This adds a GCC >= 4.9 dependency due to older versions having
outright broken <regex>. MSVC is fine with it.
- ScanDirectoryTree returns the parent entry rather than filling parts
of it in via reference. The count is now stored in the entry like it
was for subdirectories.
- .glsl file search is now done with DoFileSearch.
- IOCTLV_READ_DIR now uses ScanDirectoryTree directly and sorts the
results after replacements for better determinism.
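For reference, a minimal sketch of the kind of glob-to-regex conversion described in the first point (illustrative only; the real DoFileSearch handles more cases): '*' becomes '.*', '?' becomes '.', and everything else is escaped before handing the pattern to <regex>.

    #include <regex>
    #include <string>

    std::regex GlobToRegex(const std::string& glob)
    {
      std::string re;
      for (char c : glob)
      {
        if (c == '*')
          re += ".*";
        else if (c == '?')
          re += '.';
        else if (std::string("\\^$.|+()[]{}").find(c) != std::string::npos)
          re += std::string("\\") + c;  // escape regex metacharacters
        else
          re += c;
      }
      return std::regex(re, std::regex::icase);
    }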
We declare that we require ARB_shader_image_load_store in the shader; this isn't an extension on GLES because it is part of the GLSL ES 3.1 spec.
So if we are running as GLES, we just don't put it in the shaders.
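A minimal sketch of what that looks like when building the shader header (assumed helper, not the actual shader generator code):

    #include <string>

    // Only desktop GL needs the explicit directive; GLSL ES 3.1 has image
    // load/store as core functionality.
    std::string GetImageLoadStoreHeader(bool is_gles)
    {
      if (is_gles)
        return "";
      return "#extension GL_ARB_shader_image_load_store : enable\n";
    }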
This drops the "feature" to load level 0 from the custom texture
and all other levels from the native one if the size matches.
But in my opinion, when a custom texture only provides one level,
no other levels should be used at all.
The GLES3 spec is worthless here: it only returns a boolean result for occlusion queries. This is fine for simple mobile games, but we need more than a
boolean result.
Thankfully Nvidia exposes GL_NV_occlusion_queries as an OpenGL ES extension, which lets us get the full count of samples rendered.
The only device this change affects is the Nexus 9, since it is an Nvidia K1 crippled to only support OpenGL ES.
No other OpenGL ES device that I know of supports this extension.
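A rough sketch of the difference (standard query targets shown; the exact token the NV extension exposes isn't reproduced here):

    GLuint query;
    glGenQueries(1, &query);

    // GLES 3.0 core: only a yes/no answer is available.
    glBeginQuery(GL_ANY_SAMPLES_PASSED, query);
    DrawOccludedGeometry();  // hypothetical helper
    glEndQuery(GL_ANY_SAMPLES_PASSED);

    GLuint any_passed;
    glGetQueryObjectuiv(query, GL_QUERY_RESULT, &any_passed);  // 0 or 1 only

    // With a samples-counting target (desktop GL_SAMPLES_PASSED, or the
    // equivalent the NV extension provides on ES), the same readback returns
    // how many samples actually passed, which is what we need here.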
A number of games make an EFB copy in I4/I8 format, then use it as a
texture in C4/C8 format. Detect when this happens, and decode the copy on
the GPU using the specified palette.
This has a few advantages: it allows using EFB2Tex for a few more games,
it preserves the resolution of scaled EFB copies, and it's probably a
bit faster.
D3D only at the moment, but porting to OpenGL should be straightforward.
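A minimal sketch of the GPU-side lookup, written as an HLSL pixel shader embedded in a C++ string (illustrative only; the real shader has to handle multiple formats and scaled copies):

    // Assumed resources: the I4/I8 EFB copy bound as a texture and the
    // palette uploaded as a 256x1 texture.
    const char PALETTE_DECODE_PS[] = R"(
    Texture2D IndexTex : register(t0);  // index stored in the red channel
    Texture2D Palette  : register(t1);  // palette entries as texels

    float4 main(float4 pos : SV_Position) : SV_Target
    {
      float index = IndexTex.Load(int3(pos.xy, 0)).r;  // 0.0 .. 1.0
      return Palette.Load(int3(int(index * 255.0 + 0.5), 0, 0));
    }
    )";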
- Calculate ZSlope every flush, but only set the PixelShader constant on Reset Buffer when zfreeze is active
- Fixed another Pixel Shader bug in D3D that was giving me grief
Results are still not correct, but things are getting closer.
* Don't cull CULLALL primitives so early so they can be used as reference
planes.
* Convert CalculateZSlope to screenspace coordinates (a sketch of the
slope calculation follows this list).
* Convert Pixelshader to screenspace coordinates (instead of worldspace
xy coordinates, which is totally wrong)
* Divide depth by 2^24 instead of clamping to 0.0-1.0 as was done
before.
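A minimal sketch of one way to compute the slope from three screen-space vertices (a plane fit via a cross product; illustrative, not the exact code):

    struct Vec3 { float x, y, z; };

    // Returns dz/dx and dz/dy of the plane through v0, v1, v2 in screen space.
    void ComputeZSlope(const Vec3& v0, const Vec3& v1, const Vec3& v2,
                       float* dzdx, float* dzdy)
    {
      const Vec3 e1 = {v1.x - v0.x, v1.y - v0.y, v1.z - v0.z};
      const Vec3 e2 = {v2.x - v0.x, v2.y - v0.y, v2.z - v0.z};
      // Triangle plane normal: n = e1 x e2.
      const float nx = e1.y * e2.z - e1.z * e2.y;
      const float ny = e1.z * e2.x - e1.x * e2.z;
      const float nz = e1.x * e2.y - e1.y * e2.x;
      // Plane nx*x + ny*y + nz*z = c  =>  dz/dx = -nx/nz, dz/dy = -ny/nz.
      *dzdx = -nx / nz;
      *dzdy = -ny / nz;
    }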
Progress:
* Rogue Squadron 2/3 appear correct in game (videos in rs2 save file
selection are missing)
* Shadows draw 100% correctly in NHL 2003.
* Mario golf menu renders correctly.
* NFS: HP2, shadows sometimes render on top of the car or below the road.
* Mario Tennis, courts and shadows render correctly, but at the wrong depth
* Blood Omen 2, doesn't work.
Based on the feedback from pull request #1767, I have now incorporated
most of degasus's suggestions here.
I think we have a real winner here, as moving the code into a function
in VertexManagerBase has allowed OGL to use zfreeze now :)
Correct use of the vertex pointer has also fixed most of the issues
JMC47 reported in pull request #1767. For me this also means Mario
Tennis now works with no polygon spikes on the characters anymore!
Shadows are still an issue, probably in the other games with shadow
problems too. Rebel Strike also seems better, but random skybox
glitches can show up.
Initial port of original zfreeze branch (3.5-1729) by neobrain into
most recent build of Dolphin.
Makes Rogue Squadron 2 very playable at full speed thanks to recent core
speedups made to Dolphin. Works on DirectX Video plugin only for now.
Enjoy! and Merry Xmas!!
The maths appears to give crazy impossible answers without this fix, but the cause is all the ints being "promoted" to unsigned because of the single unsigned division at the end.
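A tiny illustration of that failure mode (not the actual code from this change):

    int numerator = -10;   // e.g. a difference that can legitimately go negative
    unsigned int denom = 4;

    // The usual arithmetic conversions turn the int into unsigned here, so
    // -10 wraps to a huge value before the division happens.
    unsigned int wrong = numerator / denom;            // ~1073741821, not -2
    int right = numerator / static_cast<int>(denom);   // -2 as intended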
This reverts an optimization which isn't worth it imo. Every texture upload has to allocate VRAM and a staging buffer, so there is no need to do both in the same call.
This fixes running Dolphin on the Nexus 9.
Android's EGL stack has internal arrays that it uses for tracking OpenGL function usage, probably related to the OpenGL profiling
garbage that used to be in ADT.
Android has three of these arrays, each statically allocated.
One array is for all GLES 1.x functions.
One array is for all GLES 2.0/3.0/3.1 functions and a couple of extensions they deem worthy of being in this array.
The last array is for all function pointers grabbed via eglGetProcAddress that aren't in the other two arrays.
The last array is the issue that we are having problems with. This array is 256 members in length.
So if you pull more than 256 function pointers that Google doesn't track in their internal arrays, eglGetProcAddress will return NULL and yell at you
in logcat.
The Nvidia Shield Tablet gets around this by replacing part of the EGL stack with their own implementation that doesn't have this garbage.
The Nexus 9 on the other hand doesn't get away with this. So we pull >100 more function pointers than the array can handle, and some of those we need
to use.
The workaround for this is to grab the OpenGL 1.1 functions last. Because we won't actually be using those functions, we can get away with their
pointers failing to resolve.
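A minimal sketch of the loading-order idea (the table names and helper are made up; the real loader is more involved):

    // Resolve the pointers we actually need first so they land inside
    // Android's 256-entry eglGetProcAddress table, then try the legacy
    // GL 1.1 names last and tolerate failures for them.
    bool LoadFunctionPointers()
    {
      for (const char* name : REQUIRED_MODERN_FUNCTIONS)  // assumed list
        if (!ResolveAndStore(name))                       // eglGetProcAddress wrapper
          return false;                                   // these must succeed

      for (const char* name : LEGACY_GL11_FUNCTIONS)      // assumed list
        ResolveAndStore(name);  // may come back NULL once the table is full;
                                // fine, we never call these anyway
      return true;
    }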
Fixes a typo where the official IMGTec drivers were said to be the OSS driver support.
Removes Mali GPU family detection, just like I removed the Adreno family detection.
We don't support Mali Utgard anyway.
If we ever need family detection we can add it properly; right now it isn't needed.
Adreno 300 and 400 have the same video driver performance issues because they are very similar architectures and the driver behaves basically the
same on both.
There isn't any need to detect the driver family with Qualcomm anyway. If we ever need family-specific bugs then we can implement real support
for that.
The performance issue on the Adreno 400 series was due to us only detecting the Adreno 300 series; on Adreno 400 the bug workarounds wouldn't be
applied, which made it use glBufferSubData, causing the huge performance hit.
If the host device supports GLES 3.1 and AEP, we can have stereo rendering.
We just need to make sure to grab the correct function pointer that GL_EXT_geometry_shader provides, and enable AEP in the shaders.
We can't just check whether AEP is in the extension list to determine support, because Qualcomm has failed once more.
The Nexus 6 reports support for AEP but doesn't support OpenGL ES 3.1, which is an impossible combination.
From reports on their forum it seems that attempting to use any AEP feature results in nothing happening; it looks like a stub implementation.
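A minimal sketch of the resulting check (illustrative; not the exact driver-detection code):

    #include <string>

    // AEP alone can't be trusted (see the Nexus 6 above); require ES 3.1 too.
    bool SupportsAEPStereo(int gles_major, int gles_minor,
                           const std::string& extensions)
    {
      const bool has_es31 =
          gles_major > 3 || (gles_major == 3 && gles_minor >= 1);
      const bool has_aep =
          extensions.find("GL_ANDROID_extension_pack_es31a") != std::string::npos;
      return has_es31 && has_aep;
    }

The shaders then enable it with "#extension GL_ANDROID_extension_pack_es31a : enable".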
Instead of abusing whatever VAO happened to be bound previously, which
might have had arrays enabled.
Only used in one instance currently, which fixes a crash with older
NVIDIA drivers.
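A minimal sketch of the pattern (assumed names; the point is just that attributeless draws get their own known-empty VAO):

    GLuint s_attributeless_vao = 0;  // created once, no arrays ever enabled on it

    void InitAttributelessVAO()
    {
      glGenVertexArrays(1, &s_attributeless_vao);
    }

    void DrawAttributeless(GLsizei vertex_count)
    {
      // Bind the empty VAO so stale enabled arrays from whatever VAO was
      // bound before can't confuse the driver.
      glBindVertexArray(s_attributeless_vao);
      glDrawArrays(GL_TRIANGLES, 0, vertex_count);
    }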
This is the same extension that we all know and love, but under a different name and with some slightly different requirements.
In regular OpenGL fashion, you can't just move a desktop OpenGL extension to OpenGL ES without ratifying a new extension, which is why this falls
under an EXT extension, which in turn causes suffixes to be attached to the function names.
This is the first step on our way towards conquering all mobile GPUs that don't support desktop OpenGL; hopefully we can also add support for
buffer_storage to OpenGL ES so we can make full use of this extension.
This wasn't too much of a concern since we normally don't care about this feature set, but it is nice when testing on new devices that don't
support the higher feature sets but want to run under the software renderer.
The Mesa softpipe and PowerVR 5xx drivers don't support higher GL versions, but they shouldn't exit out just because they couldn't get a GL3 function
pointer that isn't even going to be used at that point.
This is pretty much a step backwards in our code. We used to use attributes in our PP shader system a long time ago, but we changed it to attributeless
rendering for code simplicity and cleanliness. This reimplements the attribute code path as an optional path to take in case your system doesn't work
with attributeless rendering. Currently the only shipping driver that we know for sure supports attributeless rendering is the Nexus 5's v95 driver
included in the Android 5.0 image.
I hadn't planned on implementing a workaround to get post processing working in these cases, but since we now force-enable the PP shader system at all
times, it moved up the priority list. We can't have a supported platform black-screening at all times, can we?
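For reference, a rough sketch of the two vertex shader paths for the fullscreen pass, as C++ string literals (simplified; not the actual PP shader source):

    // Attributeless path: the position is derived purely from gl_VertexID,
    // drawing a single fullscreen triangle with no enabled arrays.
    const char* ATTRIBUTELESS_VS = R"(
      void main() {
        vec2 pos = vec2(float((gl_VertexID << 1) & 2), float(gl_VertexID & 2));
        gl_Position = vec4(pos * 2.0 - 1.0, 0.0, 1.0);
      })";

    // Attribute path: the same geometry fed from a tiny vertex buffer, for
    // drivers that misbehave when no vertex attributes are enabled.
    const char* ATTRIBUTE_VS = R"(
      in vec2 rawpos;
      void main() {
        gl_Position = vec4(rawpos, 0.0, 1.0);
      })";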
Due to changes in how we render to the final framebuffer, we no longer encounter this bug.
With the change to post processing being enabled at all times and no longer using glBlitFramebuffer, Qualcomm no longer has the chance to rotate our
framebuffer underneath us.
This is good hygiene, and also happens to be required to build Dolphin
using Clang modules.
(Under this setup, each header file becomes a module, and each #include
is automatically translated to a module import. Recursive includes
still leak through (by default), but modules are compiled independently,
and can't depend on defines or types having previously been set up. The
main reason to retrofit it onto Dolphin is compilation performance - no
more textual includes whatsoever, rather than putting a few blessed
common headers into a PCH. Unfortunately, I found multiple Clang bugs
while trying to build Dolphin this way, so it's not ready yet, but I can
start with this prerequisite.)
This notably includes GL_ARB_get_program_binary, which was previously
thought unsupported on OS X. Well, actually, the OS X implementation is
trivial and reports 0 binary formats (as of 10.10; this is hardcoded in
GLEngine, by the way), but at least it'll work if that's fixed someday.
The D3D / OGL backends only ever used RGBA textures, and the Software
backend uses its own custom code for sampling. The ARGB path seems to
just be dead code.
Since ARGB and RGBA formats are similar, I don't think this will make
the code more difficult to read or unable to be used as
reference. Somebody who wants to use this code to output ARGB can simply
modify the MakeRGBA function to put the shift at the other end.
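A small illustration of what "put the shift at the other end" means (the body of MakeRGBA shown here is an assumption, not the actual implementation):

    #include <cstdint>

    // Assumed RGBA packing: red in the low byte, alpha in the high byte.
    static inline uint32_t MakeRGBA(int r, int g, int b, int a)
    {
      return (uint32_t(a) << 24) | (uint32_t(b) << 16) | (uint32_t(g) << 8) | uint32_t(r);
    }

    // An ARGB variant just moves the shifts along by one component:
    static inline uint32_t MakeARGB(int r, int g, int b, int a)
    {
      return (uint32_t(b) << 24) | (uint32_t(g) << 16) | (uint32_t(r) << 8) | uint32_t(a);
    }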
This causes glDrawArrays to fail in core profile, and thus on OS X, see:
http://renderingpipeline.com/2012/03/attribute-less-rendering/
There must be something bound, even though it is not used.
Fixes #7599. I'm not sure this is actually the best way to fix it,
since AFAICT it makes a non-obvious assumption that *something* will be
bound before the first attributeless rendering in
TextureConverter::DecodeToTexture, but it's what degasus suggested and
it seems to work.
We were generating a texture without ever setting the data to a known value.
This happened with the old code as well; it's just that PP shaders are receiving some love now, and people are using them and noticing some of their issues.
The only possible functionality change is that s_efbAccessRequested and
s_swapRequested are no longer reset at init and shutdown of the OGL
backend (only; this is the only interaction any files other than
MainBase.cpp have with them). I am fairly certain this was entirely
vestigial.
Possible performance implications: efbAccessReady now uses an Event
rather than spinning, which might be slightly slower, but considering
the slow loop the flags are being checked in from the GPU thread, I
doubt it's noticeable.
Also, this uses sequentially consistent rather than release/acquire
memory order, which might be slightly slower, especially on ARM...
something to improve in Event/Flag, really.
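For illustration, the difference in question (a generic sketch, not the actual Event/Flag code):

    #include <atomic>

    std::atomic<bool> s_flag{false};

    // The default store/load are sequentially consistent:
    //   s_flag.store(true);   s_flag.load();
    // The cheaper ordering that would suffice for a simple signal flag:
    void Signal() { s_flag.store(true, std::memory_order_release); }
    bool Check()  { return s_flag.load(std::memory_order_acquire); }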
This is effectively unused, as the window handles that we pass to the
GLInterface are window handles for the frame which isn't ever a real
toplevel window. Host_UpdateTitle is what actually sets the proper title
on the render window.
Now that MainNoGUI is properly architected and GLX doesn't sometimes
need to craft its own windows which we have to thread back into
MainNoGUI, we don't need to thread the window handle that GLX
creates at all.
This removes the reference passed back here, and g_pWindowHandle will
always be the same as the window returned by Host_GetRenderHandle().
A future cleanup could remove g_pWindowHandle entirely.
Seems Mesa has a quirk where
#define THING(x) (#x)
is treated the same as
#define THING(x) (##x)
I didn't realize I had messed it up, since it just worked because I only tested on Mesa.
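For reference, what the two operators do in a conforming preprocessor (a Mesa-independent illustration):

    #define STRINGIFY(x) (#x)    // '#'  turns the argument into a string literal
    #define PASTE(a, b)  (a##b)  // '##' glues two tokens together

    // STRINGIFY(foo)  expands to ("foo")
    // PASTE(foo, bar) expands to (foobar)
    // '##' applied to a lone argument, as in (##x), isn't valid usage, which is
    // why it only appeared to work on the forgiving preprocessor.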
This class loads all the common PP shader configuration options and passes those options through to an inherited class that OpenGL or D3D will have.
This makes it so all the common code for PP shaders lives in VideoCommon instead of being duplicated across each backend.
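A minimal sketch of the shape this takes (class and member names here are assumptions, not the real ones):

    #include <map>
    #include <string>

    // Lives in VideoCommon: parses and stores the user-facing options.
    class PostProcessingConfigBase
    {
    public:
      virtual ~PostProcessingConfigBase() = default;
      void LoadOptions(const std::string& shader_ini);  // common parsing
    protected:
      std::map<std::string, std::string> m_options;     // consumed by backends
    };

    // Each backend derives from the common class and only implements the
    // API-specific parts (shader compilation, uniform upload, drawing).
    class OGLPostProcessor : public PostProcessingConfigBase { /* GL bits */ };
    class D3DPostProcessor : public PostProcessingConfigBase { /* D3D bits */ };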