Fixes a bug where "Use Fullscreen" would initialize into exclusive fullscreen regardless of the borderless fullscreen setting.
Also relieves the need for the video renderer to check the borderless fullscreen setting each time.
It was only used for Windows XP and lower.
This also bumps the _WIN32_WINNT define in the stdafx precompiled headers to set the minimum supported version to Windows Vista.
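For reference, a minimal sketch of what the bumped define looks like (the surrounding header contents are assumed); 0x0600 is the documented _WIN32_WINNT value for Windows Vista:

    // stdafx-style precompiled header excerpt (sketch only)
    #ifndef _WIN32_WINNT
    #define _WIN32_WINNT 0x0600  // minimum supported version: Windows Vista
    #endif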
The hack was needed because the Nvidia 3D Vision heuristics are documented to only support surfaces that are the same size as the backbuffer. This would be the case if you enabled the hack and selected the "Auto (Window Size)" internal resolution.
However, on recent drivers the same effect is achieved by selecting the "Auto (Multiple)" internal resolution. Therefore the hack is no longer required.
Also have the renderer remember its own fullscreen state. This prevents a case where exclusive fullscreen is exited via the configuration and a focus loss at the same time; in that case the renderer would fail to detect that the fullscreen state had changed.
This ensures the transition from/to exclusive mode happens while the RenderFrame is fullscreen.
This prevents fullscreen loops and relieves us of having to restore the window size after we exit fullscreen.
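A minimal sketch of the state-tracking idea, with invented names (the real Renderer interface differs):

    class Renderer
    {
    public:
      void SetFullscreen(bool enable)
      {
        if (enable == m_fullscreen)
          return;                     // nothing changed, avoid redundant mode switches
        ToggleExclusiveMode(enable);  // backend-specific transition
        m_fullscreen = enable;        // remember the state we are actually in
      }

    private:
      void ToggleExclusiveMode(bool enable) { (void)enable; /* D3D/OGL specific */ }
      bool m_fullscreen = false;
    };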
In the cases where we support the binding layout qualifier, use it for more than just UBO binding locations.
This changes it so it is supported for samplers as well.
This is enabled when a device supports GL_ARB_shading_language_420pack or GLES 3.10.
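A sketch of what the generated declaration gains from this (the helper name is invented); with the binding qualifier, no glUniform1i calls are needed afterwards to assign texture units:

    #include <string>

    std::string DeclareSampler(int unit, bool has_binding_layout)
    {
      std::string decl;
      if (has_binding_layout)  // 420pack or GLSL ES 3.10
        decl += "layout(binding = " + std::to_string(unit) + ") ";
      decl += "uniform sampler2D samp" + std::to_string(unit) + ";\n";
      return decl;
    }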
Old code dumped the efb, which was no longer relevant since the
backend gained xfb support.
New code dumps the colour texture which is about to be rendered to
the screen, so it correctly reflects the bypassXFB option.
A previous PR changed a whole lot of min/maxes to std::min/std::max
but made a mistake here and used a templated min which cast its
arguments to unsigned instead of casting the return value.
This resulted in glitchy artifacts in bright areas (See issue 7439)
I rewrote the code to use a proper clamping function so it's cleaner
to read.
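A sketch of the fix's intent: clamp in signed arithmetic instead of a templated min that silently converts its arguments to unsigned.

    #include <algorithm>

    static inline int ClampToU8(int value)
    {
      return std::min(255, std::max(0, value));  // keeps the math signed
    }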
s_encodingPrograms is defined as an array with a length of 64
NUM_ENCODING_PROGRAMS is also defined as 64.
However 64 is out of bounds, so we want to compare with "equal to or
greater than" here.
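A sketch of the corrected comparison, using the names from the text:

    static const int NUM_ENCODING_PROGRAMS = 64;

    bool IsValidEncodingProgram(int index)
    {
      // The array has 64 entries (indices 0..63), so the guard must reject
      // index 64 as well: ">= NUM_ENCODING_PROGRAMS", not just ">".
      return index >= 0 && index < NUM_ENCODING_PROGRAMS;
    }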
With strings, we don't need to care about passing in a length, since it is stored internally. So we no longer need a length parameter for these functions either.
This also kills off some sprintf_s calls.
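A hypothetical before/after (names invented) for the kind of signature change this describes:

    #include <string>

    // Before: the caller had to pass a buffer and its length.
    //   void GetFileName(char* out, size_t out_len);
    // After: the string carries its own length, and no sprintf_s into a
    // fixed-size buffer is needed anymore.
    std::string GetFileName()
    {
      return "placeholder.bin";  // placeholder value
    }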
- Isolate it into its own namespace
- Shorten function names, the namespace self-documents.
- Just use standard I/O; we can write directly to the stream for logging.
gcc doesn't optimize these loops with -O2, so we use memset now.
A flag to skip the clear function was added, as the cache is already cleared most of the time.
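A sketch of the idea with invented names: clear with one memset and skip the work entirely when the cache is already clean.

    #include <cstring>

    struct Cache
    {
      int entries[1024] = {};
      bool dirty = false;

      void Clear()
      {
        if (!dirty)
          return;  // the cache is already cleared most of the time
        std::memset(entries, 0, sizeof(entries));
        dirty = false;
      }
    };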
These were only compiled in on Windows and x86_32.
They provided "optimized" copies and compares based on blocksizes for the AMD Athlon and Duron CPU families.
The code was taken from something that AMD provides under an as-is license.
Just get rid of this crap.
For whatever reason, the hardware doesn't do a full divide by 255, but
instead uses an approximation with shifting, similar to the way it is done
in TEV.
For quite a while this has been causing integer division to generate a warning that is treated as an error, blocking shader compilation. This means probably no one has been running D3D in debug builds...
I tried disabling the warning with a #pragma, but it doesn't seem to apply when this flag is used.
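For reference, one well-known shift-based replacement for x / 255 on products of two 8-bit values looks like the sketch below. It is not claimed to be the exact approximation the hardware (or TEV) performs, just an illustration of the general technique of trading a divide for shifts.

    static inline unsigned Div255Approx(unsigned x)  // x in [0, 255 * 255]
    {
      return (x + 128 + ((x + 128) >> 8)) >> 8;  // shifts and adds, no divide
    }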
We need to pull in function pointers for OpenGL 3.0 in order to use glVertexAttribIPointer.
This isn't too big of an issue, and this code will be gone in the future when we change over to libepoxy.
Just need to push code upstream to libepoxy to support Android with GLES and GL first.
This matches how ARM handles their naming in their drivers for different models.
Really it's that way because both Mali-T6xx and Mali-T7xx fall under Midgard,
while everything else (except the Mali-55) falls under Utgard.
We need to explicitly round when converting colors from float to uint
because multiplying a normalized float by 255 might not result in a whole
number. (The exact result here may vary depending on your
drivers/hardware.)
Ideally, we shouldn't be using floating point here, but fixing that is a
much more complicated patch.
Fixes gxtest TEV tests using Intel HD 4000.
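A sketch of the arithmetic only; the actual change lives in the generated shader code. Truncation can turn something like 0.999f * 255 into 254 depending on the driver's float precision, so round before converting.

    static inline unsigned ColorToU8(float normalized)  // expects [0.0, 1.0]
    {
      return static_cast<unsigned>(normalized * 255.0f + 0.5f);
    }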
They are similar enough that their drivers will share bugs, so make them fall under the same Mali-Txxx umbrella of bug issues.
If there is ever a need in the future for having separate bugs depending on family, we can support that then.
This is the only way we can determine the video driver version with Mali.
Really, it's a good thing that they only push driver updates once every two years; it makes it easy to determine what driver anybody is running.
CreateInputLayout requires a shader as an input, but it only cares about
the signature; we don't need to recompute it for different shaders with
the same inputs.
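A hedged sketch of the caching this allows; everything except the D3D11 API calls is invented, and error handling is omitted:

    #include <d3d11.h>

    ID3D11InputLayout* GetOrCreateLayout(ID3D11Device* device,
                                         const D3D11_INPUT_ELEMENT_DESC* elems, UINT num_elems,
                                         const void* vs_bytecode, SIZE_T vs_size,
                                         ID3D11InputLayout*& cached)
    {
      // CreateInputLayout only inspects the input signature of the bytecode,
      // so any vertex shader with matching inputs works as the reference.
      if (!cached)
        device->CreateInputLayout(elems, num_elems, vs_bytecode, vs_size, &cached);
      return cached;
    }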
GLSL ES 3.10 adds implicit support for the binding layout qualifier that we use.
Changes our GLSL version enums to bit values so we can check for both ES versions easily.
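A hedged sketch of the bit-value scheme (enumerator names and values are invented), so "any GLES version" becomes a single mask test:

    enum GlslVersion
    {
      GLSL_130    = 1 << 0,
      GLSL_140    = 1 << 1,
      GLSL_150    = 1 << 2,
      GLSL_ES_300 = 1 << 3,
      GLSL_ES_310 = 1 << 4,
    };

    inline bool IsGlslEs(int version)
    {
      return (version & (GLSL_ES_300 | GLSL_ES_310)) != 0;
    }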
Trying to use GetDepthMatrixProgram outside of
TCacheEntry::FromRenderTarget is a bad idea, so don't. Instead, use a
shader which only copies the input.
Fixes lens flare depth test in Twilight Princess. See
http://code.google.com/p/dolphin-emu/issues/detail?id=5999 .
The variable is already dereferenced both before and after this
check, which means that if it were ever zero it would
have crashed Dolphin already.
Our defines never made it clear what meant 64-bit and what meant x86_64.
This makes a clear cut between bitness and architecture.
This commit also has the side effect of bringing up aarch64 compiling support.
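A hypothetical illustration (macro names invented) of keeping bitness and architecture as separate defines, so AArch64 counts as 64-bit without pretending to be x86_64:

    #if defined(__x86_64__) || defined(_M_X64)
      #define ARCH_X86_64 1
      #define ARCH_64BIT  1
    #elif defined(__aarch64__)
      #define ARCH_AARCH64 1
      #define ARCH_64BIT   1
    #elif defined(__i386__) || defined(_M_IX86)
      #define ARCH_X86_32 1
      #define ARCH_32BIT  1
    #endif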
Fifoplayer depends on the old behaviour of videosoftware (and the other
hardware backends in non virtual/real xfb modes) where the framebuffer
gets rendered directly to the screen.
Really fifoplayer should call BeginFrame/EndFrame when it finishes
rendering a frame, but adding this hack back in is simpler.
- remove unused variables
- reduce the scope where it makes sense
- correct limits (did you know that strncat()'s size parameter does not
  count the \0 that is always appended? see the sketch after this list)
- set some free()'d pointers to NULL
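A quick reminder of the strncat() contract referenced in the list above: the size argument limits the characters copied from src and does not count the terminating '\0', which strncat() always appends.

    #include <cstring>

    void AppendExample()
    {
      char buf[8] = "abc";  // 3 chars used, 5 bytes left including the '\0'
      // At most 4 characters may be appended: 3 used + 4 copied + 1 for the
      // '\0' strncat adds = 8 bytes total, exactly the buffer size.
      std::strncat(buf, "defgh", sizeof(buf) - std::strlen(buf) - 1);
    }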
If there is an issue with a reported extension, disable it instead of failing out entirely.
Fixes an issue with buffer_storage that I had overlooked as well.
This changes from using logical and to bitwise and, which causes the compile time to drop from an absurd amount of time to around five seconds on my
crappy laptop.
This structure's fields should match the layout of the MMIO registers byte for byte:
it is addressed using the MMIO reg address when doing a CP MMIO read. This was
unfortunately not the case, causing CP reads to be mostly broken with the
software renderer.
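A hedged sketch of how such a layout can be pinned down; the register names and offsets below are placeholders, not the real CP layout. The point is that field offsets must line up with the MMIO register addresses used for reads, which static_assert can enforce.

    #include <cstddef>
    #include <cstdint>

    struct CPRegisters  // placeholder fields, illustrative only
    {
      uint16_t status;   // 0x00
      uint16_t control;  // 0x02
      uint16_t clear;    // 0x04
      uint16_t token;    // 0x06
    };

    static_assert(offsetof(CPRegisters, control) == 0x02, "CP MMIO layout mismatch");
    static_assert(offsetof(CPRegisters, token) == 0x06, "CP MMIO layout mismatch");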
This option was known to break every second game and only give a small speed boost.
It also seems to be broken because of streaming into pinned memory and buffer storage buffers.
v2: also remove dlc_desc
Older Qualcomm drivers rotated the framebuffer 90 degrees and this fix didn't work.
Now for some obscene reason it rotates a full 180 degrees.
This can at least be worked around by flipping around the image on our end.
On Windows, NVIDIA doesn't give us their driver version, so we can't work around any issues.
As buffer_storage is broken on some drivers, we wanted to disable it for them.
So we can't.
Luckily only "some" released driver versions are affected, as this extension has only been available for a few months. Let's hope that nobody has to use one of these driver versions, or else they will get a black screen ...
OS X has its own driver, so performance issues aren't shared with the NVIDIA driver (unlike the closed-source Linux and Windows NVIDIA drivers). So now it'll also use the MapAndSync backend like all other OS X drivers.
fixes issue 6596
I've also cleaned up the if/else block selecting the best backend a bit.
The only two devices that do this are the Mesa software rasterizer and Intel Ironlake (with a few hacks).
Basically since it doesn't support OpenGL 3.0, it can't grab the version the new way.
So failing that, it sets to GL 2.1, and continues.
Further along, on Ironlake at least, it tries grabbing the extensions the new GL 3.0 way and fails.
So have a fallback that grabs the extensions string the old way, in probably the most elegant way possible.
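A hedged sketch of that fallback; GL headers and a loader are assumed to be set up elsewhere, and the function name is invented. It prefers the GL 3.0 indexed query and falls back to the classic space-separated GL_EXTENSIONS string on GL 2.1 contexts.

    #include <string>
    #include <vector>

    std::vector<std::string> GetExtensionList()
    {
      std::vector<std::string> extensions;
      GLint count = 0;
      glGetIntegerv(GL_NUM_EXTENSIONS, &count);  // errors out on pre-GL3 contexts
      if (glGetError() == GL_NO_ERROR && count > 0)
      {
        for (GLint i = 0; i < count; ++i)
          extensions.push_back(reinterpret_cast<const char*>(glGetStringi(GL_EXTENSIONS, i)));
        return extensions;
      }
      // Old way: one big string, split on spaces.
      const char* all = reinterpret_cast<const char*>(glGetString(GL_EXTENSIONS));
      std::string s = all ? all : "";
      for (size_t pos = 0; pos < s.size();)
      {
        size_t next = s.find(' ', pos);
        if (next == std::string::npos)
          next = s.size();
        if (next > pos)
          extensions.push_back(s.substr(pos, next - pos));
        pos = next + 1;
      }
      return extensions;
    }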
The old way was to use big switch/case statements based on a type of buffer.
The new one is to use inheritance.
This change prevents us from changing the buffer type while running, but I doubt we'll ever do so.
Performance should also be a bit better. Also a nice cleanup.
Added some comments about the different kinds of buffers.
This is a bit slower on map_and_* because of flushing and _very_ much slower on buffer(sub)?data because of a new memcpy.
But this design allows us to decode directly into a GPU buffer, e.g. the vertexloader will benefit :)
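A hedged sketch of the refactor's shape; the class and method names are invented. An abstract stream buffer with one subclass per upload strategy replaces the old switch/case on a buffer-type enum, and the concrete type is picked once at initialization instead of being re-checked on every upload.

    #include <cstddef>
    #include <utility>

    class StreamBuffer
    {
    public:
      virtual ~StreamBuffer() = default;
      virtual std::pair<char*, size_t> Map(size_t size) = 0;  // write region + its offset
      virtual void Unmap(size_t used_size) = 0;               // commit what was written
    };

    class MapAndSyncBuffer final : public StreamBuffer
    {
    public:
      std::pair<char*, size_t> Map(size_t size) override
      {
        (void)size;  // glMapBufferRange(...) would go here
        return {nullptr, 0};
      }
      void Unmap(size_t used_size) override
      {
        (void)used_size;  // flushing / fencing would go here
      }
    };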
gl.h and glext.h provide most of the function pointer typedefs and defines for extensions and core features.
The only one it doesn't provide is GL 1.1 function typedefs, but this is to be expected.
If anything needs defines or typedefs in their header in the future, that's as easy as before.
This one was introduced to reduce the glBindTexture and glActiveTexture calls. But it was quite a bit of logic and only an improvement for uploading/creating a texture, which is done rarely.
This adds xfb support to the videosoftware backend, which increases its
accuracy and, more importantly, enables the usage of many homebrew apps
which write directly to the xfb on the videosoftware backend.
Conflicts:
Source/Core/VideoBackends/Software/SWRenderer.cpp
Source/Core/VideoBackends/Software/SWmain.cpp
This branch is the final step of fully supporting both OpenGL and OpenGL ES in the same binary.
This of course only applies to EGL and won't work for GLX/AGL/WGL since they don't really support GL ES.
The changes here actually aren't too terrible, basically change every #ifdef USE_GLES to a runtime check.
This adds a DetectMode() function to the EGL context backend.
EGL will iterate through each of the configs and check for GL, GLES3_KHR, and GLES2 bits.
After that it'll change the mode from _DETECT to whichever one is best supported.
After that point we'll just create a context with the mode that was detected.
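A hedged sketch of the DetectMode() idea using standard EGL queries; the MODE_* names are placeholders, and EGL_OPENGL_ES3_BIT_KHR comes from EGL_KHR_create_context.

    #include <vector>
    #include <EGL/egl.h>
    #ifndef EGL_OPENGL_ES3_BIT_KHR
    #define EGL_OPENGL_ES3_BIT_KHR 0x0040
    #endif

    enum Mode { MODE_DETECT, MODE_OPENGL, MODE_OPENGLES3, MODE_OPENGLES2 };

    Mode DetectMode(EGLDisplay dpy)
    {
      EGLint num_configs = 0;
      eglGetConfigs(dpy, nullptr, 0, &num_configs);
      std::vector<EGLConfig> configs(num_configs);
      eglGetConfigs(dpy, configs.data(), num_configs, &num_configs);

      bool has_gl = false, has_gles3 = false, has_gles2 = false;
      for (EGLConfig cfg : configs)
      {
        EGLint types = 0;
        eglGetConfigAttrib(dpy, cfg, EGL_RENDERABLE_TYPE, &types);
        has_gl    |= (types & EGL_OPENGL_BIT) != 0;
        has_gles3 |= (types & EGL_OPENGL_ES3_BIT_KHR) != 0;
        has_gles2 |= (types & EGL_OPENGL_ES2_BIT) != 0;
      }
      if (has_gl)    return MODE_OPENGL;
      if (has_gles3) return MODE_OPENGLES3;
      if (has_gles2) return MODE_OPENGLES2;
      return MODE_DETECT;  // nothing usable found
    }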
We used to render them out of order as long as everything else matches, but rendering order does matter, so we have to flush on primitive switch. This commit implements that flush.
Also, as we flush on primitive switch, we don't have to create three different index buffers. All indices are now stored in one buffer.
This will slow down games which switch primitive types often (e.g. ZTP), but it should be more accurate.
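A tiny sketch of the rule, with invented names: flush buffered work whenever the primitive type changes, so draws stay in submission order.

    void Flush() { /* placeholder: draw everything buffered so far */ }

    void AddIndices(int primitive_type /* , ... */)
    {
      static int s_current_primitive = -1;
      if (primitive_type != s_current_primitive)
      {
        Flush();
        s_current_primitive = primitive_type;
      }
      // ...append indices to the single shared index buffer...
    }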
add the GL include (back) to Base.props
use a similar technique to GLX.cpp (by Sonic) in WGL.cpp to get
wglSwapIntervalEXT without the WGLEW check
Conflicts:
Source/Core/VideoBackends/OGL/OGL.vcxproj
Source/Core/VideoBackends/OGL/OGL.vcxproj.filters
Source/VSProps/Base.props
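A hedged sketch of the wglGetProcAddress approach described above, without going through WGLEW; the typedef is declared locally so no wglext.h is needed.

    #include <windows.h>

    typedef BOOL(WINAPI* PFNWGLSWAPINTERVALEXTPROC)(int interval);
    static PFNWGLSWAPINTERVALEXTPROC s_wglSwapIntervalEXT = nullptr;

    void LoadSwapInterval()
    {
      // Requires a current GL context; returns nullptr if the extension is absent.
      s_wglSwapIntervalEXT = reinterpret_cast<PFNWGLSWAPINTERVALEXTPROC>(
          wglGetProcAddress("wglSwapIntervalEXT"));
      if (s_wglSwapIntervalEXT)
        s_wglSwapIntervalEXT(1);  // e.g. enable vsync
    }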