The loop was allocating one level too many, as well as using incorrect
sizes for each level. This is probably not an issue in practice, as
mipmapped render targets aren't used, but the logic should be correct
anyway.
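For reference, a rough sketch of the intended sizing (names here are
illustrative, not the actual Renderer code):

// Allocate exactly `levels` mip levels, halving each dimension per level
// and clamping at 1 so the sizes stay valid.
for (u32 level = 0; level < levels; ++level)
{
  const u32 level_width = std::max(width >> level, 1u);
  const u32 level_height = std::max(height >> level, 1u);
  // ... allocate a level_width x level_height surface for this level ...
}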
This was a regression from the remove-everything-static-from-renderer
PR. As the comment indicates, it would be nice to move all of this logic
out of the Renderer constructor, but this is a much larger change.
The FrameBufferManager::CreateTexture method (in the OpenGL backend) introduced by commit 69cedf41 incorrectly compares the texture variable (which holds an object name returned by glGenTextures) against GL_TEXTURE_2D_MULTISAMPLE_ARRAY and GL_TEXTURE_2D_MULTISAMPLE.
It should instead use the texture_type variable for this comparison (as is done in the first branch of the if).
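In other words, the check should look something like this (a sketch of the
intent, not the exact code):

// Branch on the texture target, not on the object name from glGenTextures.
if (texture_type == GL_TEXTURE_2D_MULTISAMPLE_ARRAY ||
    texture_type == GL_TEXTURE_2D_MULTISAMPLE)
{
  // allocate with the *Multisample path
}
else
{
  // regular glTexImage* path
}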
This commit should have zero performance effect if SSBOs are supported.
If they aren't (e.g. on all Macs), this commit alters FramebufferManager
to attach a new stencil buffer and VertexManager to draw to it when
bounding box is active. `BBoxRead` gets the pixel data from the buffer
and dumbly loops through it to find the bounding box.
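Roughly, that loop amounts to the following (a sketch with illustrative
names, not the exact code):

// Scan the read-back stencil data and grow the box around every pixel
// the GPU marked while bounding box was active.
int min_x = width, max_x = -1, min_y = height, max_y = -1;
for (int y = 0; y < height; ++y)
{
  for (int x = 0; x < width; ++x)
  {
    if (stencil_data[y * width + x] != 0)
    {
      min_x = std::min(min_x, x);
      max_x = std::max(max_x, x);
      min_y = std::min(min_y, y);
      max_y = std::max(max_y, y);
    }
  }
}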
This patch can run Paper Mario: The Thousand-Year Door at almost full
speed (50–60 FPS) without Dual-Core enabled for all common bounding
box-using actions I tested (going through pipes, Plane Mode, Paper
Mode, Prof. Frankly's gate, combat, walking around the overworld, etc.)
on my computer (macOS 10.12.3, 2.8 GHz Intel Core i7, 16 GB 1600 MHz
DDR3, and Intel Iris 1536 MB).
A few more demanding scenes (e.g. the self-building bridge on the way
to Petalburg) slow to ~15% of their speed without this patch (though
they don't run quite at full speed even on master). The slowdown is
caused almost solely by `glReadPixels` in `OGL::BoundingBox::Get`.
Other implementation ideas:
- Use a stencil buffer that's separate from the depth buffer. This would
require ARB_texture_stencil8 / OpenGL 4.4, which isn't available on
macOS.
- Use `glGetTexImage` instead of `glReadPixels`. This is ~5 FPS slower
on my computer, presumably because it has to transfer the entire
combined depth-stencil buffer instead of only the stencil data.
Getting only stencil data from `glGetTexImage` requires
ARB_texture_stencil8 / OpenGL 4.4, which (again) is not available on
macOS.
- Don't use a PBO, and use `glReadPixels` synchronously. This has no
visible performance effect on my computer, and is theoretically
slower.
This stops the virtual method call from within the Renderer constructor.
The initialization here for GL had to be moved to VideoBackend, as the
Renderer constructor will not have been executed before the value is
required.
This moves all the byte swapping utilities into a header named Swap.h.
A dedicated header is preferable here given the size of the code itself.
Throughout the codebase, CommonFuncs.h was generally only included for
these functions anyway. Keeping them in their own header avoids dumping
the lesser-used utilities into scope, and provides a localized place for
future byte-swapping-related utilities (should they be needed). It also
makes it easier to identify which files depend on the byte swapping
utilities in particular.
Since this is a completely new header, moving the code uncovered a few
indirect includes, as well as making some other inclusions unnecessary.
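As a rough illustration of the kind of utility that lives in such a header
(the name below is illustrative, not necessarily what Swap.h exposes):

#include <cstdint>
#include <cstdlib>

// Byte-order swap built on compiler intrinsics.
inline uint32_t SwapBytes32(uint32_t value)
{
#ifdef _MSC_VER
  return _byteswap_ulong(value);
#else
  return __builtin_bswap32(value);
#endif
}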
Before #4581, a call to `SetBlendMode` could invoke
`glBlendEquationSeparate` and `glBlendFuncSeparate` even when it was
setting `glDisable(GL_BLEND)`. I couldn't figure out how to map the old
behavior onto the new BlendingState code, so I changed it to always
call the two blend functions.
Fixes https://bugs.dolphin-emu.org/issues/10120 : "Sonic Adventure 2
Battle: graphics crash when loading first Dark level".
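The new code therefore does something along these lines (a sketch with
illustrative variable names):

// Toggle blending itself, but program the equation/factors unconditionally
// so the GL state always matches the BlendingState.
if (state.blendenable)
  glEnable(GL_BLEND);
else
  glDisable(GL_BLEND);
glBlendEquationSeparate(equation, equation_alpha);
glBlendFuncSeparate(src_factor, dst_factor, src_factor_alpha, dst_factor_alpha);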
Keeps associated data together. It also eliminates the possibility of out
parameters not being initialized properly. For example, consider the
following:
-- some FramebufferManager implementation --
void FBMgrImpl::GetTargetSize(u32* width, u32* height)  // overrides a virtual
{
  // Do nothing
}

-- somewhere else where the function is used --
u32 width, height;
framebuffer_manager_instance->GetTargetSize(&width, &height);
if (texture_width != width)  // <-- Uninitialized variable usage
{
  ...
}
This makes any initialization issues much more obvious to spot, because
something has to be returned, as opposed to allowing an implementation
to simply not do anything.
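A sketch of the return-by-value shape this moves toward (the struct name is
illustrative):

struct TargetSize
{
  u32 width = 0;
  u32 height = 0;
};

TargetSize FBMgrImpl::GetTargetSize()
{
  // Returning a value forces every implementation to produce something,
  // so the caller can never observe uninitialized data.
  return {m_target_width, m_target_height};
}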
Making changes to ConfigManager.h has always been a pain, because
it means rebuilding half of Dolphin, since a lot of files depend on
and include this header.
However, it turns out some includes are unnecessary. This commit
removes ConfigManager includes from files which don't contain
SConfig or GPUDeterminismMode or GPU_DETERMINISM (which means the
ConfigManager include is not used).
(I've also had to get rid of some indirect includes.)
This fixes the screenshot stutter, since writing the screenshot takes
more than a frame. We now don't stall on the PNG writing at all until
emulation stops or a new screenshot is requested.
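Conceptually the change looks like this (a sketch; WriteImageToPNG and the
surrounding names are placeholders, not the actual code):

#include <cstdint>
#include <future>
#include <string>
#include <vector>

// Placeholder for the actual PNG encoder (e.g. libpng-based).
void WriteImageToPNG(const std::string& path, const std::vector<uint8_t>& image,
                     int width, int height);

static std::future<void> s_screenshot_task;

void SaveScreenshotAsync(std::vector<uint8_t> image, int width, int height,
                         std::string path)
{
  // Encode on a worker thread so rendering never waits on file I/O; the
  // future is only waited on at shutdown or when the next screenshot is taken.
  s_screenshot_task =
      std::async(std::launch::async,
                 [image = std::move(image), width, height, path = std::move(path)] {
                   WriteImageToPNG(path, image, width, height);
                 });
}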
This is needed because for some reason the WSI for NV Vulkan drivers
doesn't return VK_ERROR_OUT_OF_DATE_KHR, so there is no other way to know
that a resize has occurred apart from polling, which is a poor solution for
X11 (since it is blocking).
It's not available in OpenGL ES, and it's officially not supported on OpenGL 3.0/3.1.
Fall back to the old depth range code if there is no way to disable depth clipping.
It's more important to have correct clipping than to have accurate depth values.
Inaccurate depth values can be fixed by slow depth.
In TimeSplitters: Future Perfect, PerfQuery is used to detect
the visibility of lights and draw coronas. 25 points are drawn
for each light. However, the returned count was incorrectly
being divided by four, leading to dim coronas.
Using 4x antialiasing was a workaround because of a bug where
antialiasing multiplied the PerfQuery results. This commit
fixes that bug too (but only for OpenGL).
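A sketch of the corrected normalisation (an assumption about the shape of
the fix, with illustrative names):

// With N-sample antialiasing every covered pixel passes N samples, so the
// raw query result is divided by the sample count (1 when AA is off)
// instead of by a hard-coded 4.
const u64 pixels_passed = samples_passed / std::max(1u, num_msaa_samples);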
Note: It's not 100% perfect, as some of the GPU capabilities leak into the
pixel shader UID.
Currently our UIDs don't get exported, so there is no issue. But someone
might want to fix this in the future.
As much as possible, the asserts have been moved out of the GetUID
function. But there are some places where asserts depend on variables
that aren't stored in the shader UID.
Using glMapBufferRange to read back the contents of the SSBO is extremely
slow on NVIDIA drivers. This is more noticeable at higher internal
resolutions. Using glGetBufferSubData instead does not seem to exhibit
this slowdown.
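The readback now looks roughly like this (the buffer name is illustrative):

// Fetch the bounding-box values straight out of the SSBO; unlike
// glMapBufferRange, this path doesn't stall badly on NVIDIA drivers.
GLint values[4];
glBindBuffer(GL_SHADER_STORAGE_BUFFER, s_bbox_buffer);
glGetBufferSubData(GL_SHADER_STORAGE_BUFFER, 0, sizeof(values), values);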
The D3D backend was always forcing anisotropic filtering when that option
is enabled, regardless of how the game chose to configure the texture
filtering registers; this causes the same issues as "Force Filtering"
without anisotropy, such as adjacent game UI elements no longer lining up
correctly. Historically, OpenGL's anisotropy support has always worked
"better" than D3D's because it did not seem to have this problem;
unfortunately, OpenGL's anisotropy specification only gives GL_LINEAR-based
filtering modes defined behavior, with only the mipmap setting required to
be considered. Some OpenGL implementations were implicitly disabling
anisotropy when the min/mag filters were set to GL_NEAREST, but this
behavior is not required by the spec and cannot be relied on.
By default, unique_ptr will call delete, not delete[], on the given type
if an array qualifier isn't present. It's important to explicitly specify
when an array is being handled.
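A quick illustration (`size` here is just a placeholder):

// With the array qualifier the deleter is delete[]; without it, plain
// delete would be used, which is undefined behaviour for an array new.
std::unique_ptr<u8[]> buffer(new u8[size]);   // ok: destroys with delete[]
// std::unique_ptr<u8> broken(new u8[size]);  // wrong: would call delete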
This fixes the crashes occurring at startup with a non-empty shader cache.
Because LinearDiskCache reads/writes to the storage of ShaderUid, ShaderUid must be trivially copyable.
Additionally, this adds a static assert to LinearDiskCache to ensure this doesn't happen again in the future.
The initialization of ShaderUid data has been moved to the code generation functions, so the above condition holds true.
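The added check is along these lines (a sketch of the idea, not the exact
declaration):

#include <type_traits>

template <typename K, typename V>
class LinearDiskCache
{
  static_assert(std::is_trivially_copyable<K>::value,
                "K must be trivially copyable, since it is read/written as raw bytes");
  // ...
};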
This reverts commit 81414b4fa2, reversing
changes made to b926061f64.
Conflicts:
Source/Core/DolphinWX/Frame.cpp
Source/Core/VideoCommon/VideoConfig.cpp
Source/Core/VideoCommon/VideoConfig.h
Three or four times now, pointers being left in an inconsistent state
in the video backend renderers has tripped up other developers.
Global (ugh) resources are now put into a unique_ptr and will always have a
well-defined state: null or not null.
This was relying on GLExtensions adding fake extensions to the supported
list for ES. That no longer happens, so this needed to be changed.
The spec says that vendors can set the max texture size to be as low as
65KB, while we want 1MB. Check the maximum supported size and drop down
to it if it is less than 1MB.
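A sketch of that check (assuming the limit in question is
GL_MAX_TEXTURE_BUFFER_SIZE):

// Ask the driver for its limit and clamp our preferred 1MB size down to it.
GLint max_buffer_size = 0;
glGetIntegerv(GL_MAX_TEXTURE_BUFFER_SIZE, &max_buffer_size);
const GLint requested_size = 1024 * 1024;
const GLint buffer_size = std::min(requested_size, max_buffer_size);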
They were only ever called at the same time, so there was no need to
separate them.
This also removes the only dereference of the NativeVertexFormat in
VideoCommon, so backends may just return nullptr.
It was only implemented in OpenGL, though the option was visible in both
backends, leading to memory leaks if you enabled it in DirectX.
And it wasn't particularly useful as a debug feature, as it only showed
where in the EFB the copies were taken from, not what format they were,
what the copies were used for, or what content was in the EFB at that
point in time.
Also, it stretched the copy regions relative to the window, so the
on-screen regions didn't even line up with the window unless the game
used the full EFB (some PAL games) and your game image was stretched to
the full window.
- Removed Quality Levels from D3D AA options
- Dropdown text now shows whether you're applying MSAA or SSAA
- Added a description for SSAA
- Moved SSAA checkbox
- Cleaned up AA in backends slightly. Supported modes is now a list of ints.
Instead of having special-case code for efb2tex that ignores hashes,
the only difference between efb2tex and efb2ram now is that efb2tex
writes zeros to the memory instead of the actual texture data.
Though keep in mind, all efb2tex copies will have a hash of zero.
Added a few duplicated depth copy texture formats to the enum
in TextureDecoder.h. These texture formats were already implemented
in TextureCacheBase and the ogl/dx11 texture cache implementations.
SSAA relies on MSAA being active to work. We only supported 4x SSAA, while in fact you can enable SSAA at any MSAA level.
I even managed to run 64x MSAA + SSAA on my Quadro, which made some pretty sleek-looking games. They were very cinematic, though.
With this, SSAA and MSAA support in GLES is properly fixed up as well. Before, they were broken when stereo rendering was enabled.
Now GLES can properly support MSAA, and also stereo rendering with MSAA enabled (with the proper extensions).
OpenGL ES 3.2 adds this feature to core.
It was also available in GLES 3.1 as GL_{EXT,OES}_texture_buffer.
The non-Nvidia vendors that have implemented this are:
- Qualcomm's Adreno 4xx
- IMGTec's PowerVR Rogue
Samsung updated the video drivers on the SGS6 which introduced a bug when disabling vsync.
Both the driver versions are r5p0, but the md5sums of the blob differ.
To work around the issue, make sure we never disable vsync via
eglSwapInterval.
We can't actually determine the driver version on Android yet.
So until a driver lands that exposes its version in the GL_VERSION string,
we will need to keep this workaround enabled at all times, which is a bit
annoying.
Current Mali drivers return the video driver version in one of the EGL strings you can query.
The issue with that is that Android eats all of those strings, so we can't query it.
OpenGL ES 3.2 adds a few things we care about supporting in core. In particular:
- GL_{ARB,EXT,OES}_draw_elements_base_vertex
- KHR_Debug
- Sample Shading
- GL_{ARB,EXT,OES,NV}_copy_image
- Geometry shaders
- Geometry shader instancing (If they support GL_{EXT,OES}_geometry_point_size)
Nvidia was the first to release an OpenGL ES 3.2 driver, which I used to test this on.
This also enables GS Instancing on GLES 3.1 hardware if it supports all of the required extensions.
Their new driver that supports GLES 3.1 + AEP has issues with it.
At the very least, they don't fully implement all of the geometry shader features, which causes shader linker issues when we attempt to use them.
I don't have a device, so I can't fully test; until I do, I'm going to blanket-disable the whole thing.
When calculating the size of the undisplayed margin in the case where
fbWidth != fbStride for RealXFB for displaying in the output window,
we do not scale by IR - RealXFB is implicitly 1x.
The default EGL_RENDERABLE_TYPE is GLES1, so vendors have the ability to choose between returning only the bits requested, or all of the bits
supported in addition to the one requested.
PowerVR chose the route of only returning the bits requested; everyone else returns all of the bits supported.
Instead of letting the vendor have control of this, let's incrementally go through each renderable type and make sure it supports everything we want.
This will cover all devices for now, and for the future.
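A sketch of that incremental probing (simplified; `display` is the
initialised EGLDisplay, and the real check would also verify the other
attributes we need):

// Try each API bit we can use, most capable first, and keep the first
// renderable type the driver can actually produce a config for.
const EGLint candidates[] = {EGL_OPENGL_BIT, EGL_OPENGL_ES3_BIT_KHR, EGL_OPENGL_ES2_BIT};
for (EGLint renderable_type : candidates)
{
  const EGLint attribs[] = {EGL_RENDERABLE_TYPE, renderable_type, EGL_NONE};
  EGLConfig config;
  EGLint num_configs = 0;
  if (eglChooseConfig(display, attribs, &config, 1, &num_configs) && num_configs > 0)
    break;  // this renderable type is usable
}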
I tried to change messages that contained instructions for users,
while avoiding messages that are so technical that most users
wouldn't understand them even if they were in the right language.