If the host device supports GLES 3.1 and AEP, we can have stereo rendering.
We just need to make sure to grab the correct function pointer that GL_EXT_geometry_shader provides, and to enable AEP in the shaders.
We can't just check whether AEP is in the extension list, because Qualcomm has failed once more.
The Nexus 6 reports support for AEP but not for OpenGL ES 3.1, which is an impossible combination.
From reports on their forum, attempting to use any AEP features results in nothing happening; it seems to be a stub implementation.
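For reference, a rough sketch of what the detection and function pointer setup could look like; HasExtension() is a local helper written here for illustration, and the other names are placeholders rather than the actual ones:

    #include <EGL/egl.h>
    #include <GLES3/gl31.h>
    #include <GLES2/gl2ext.h>
    #include <cstring>

    // Query the extension list the ES 3.x way.
    static bool HasExtension(const char* name)
    {
      GLint count = 0;
      glGetIntegerv(GL_NUM_EXTENSIONS, &count);
      for (GLint i = 0; i < count; ++i)
      {
        const char* ext =
            reinterpret_cast<const char*>(glGetStringi(GL_EXTENSIONS, i));
        if (ext && std::strcmp(ext, name) == 0)
          return true;
      }
      return false;
    }

    // The Nexus 6 advertises GL_ANDROID_extension_pack_es31a while only
    // exposing OpenGL ES 3.0, so don't trust the extension list alone;
    // check the context version as well.
    static bool SupportsAEPStereo()
    {
      GLint major = 0, minor = 0;
      glGetIntegerv(GL_MAJOR_VERSION, &major);
      glGetIntegerv(GL_MINOR_VERSION, &minor);
      const bool has_es31 = major > 3 || (major == 3 && minor >= 1);
      return has_es31 && HasExtension("GL_ANDROID_extension_pack_es31a");
    }

    // AEP doesn't define its own entry points; the geometry shader
    // function keeps the EXT suffix from GL_EXT_geometry_shader.
    static PFNGLFRAMEBUFFERTEXTUREEXTPROC s_glFramebufferTextureEXT = nullptr;

    static void LoadGeometryShaderEntryPoints()
    {
      s_glFramebufferTextureEXT =
          reinterpret_cast<PFNGLFRAMEBUFFERTEXTUREEXTPROC>(
              eglGetProcAddress("glFramebufferTextureEXT"));
    }

On the shader side, AEP features are enabled with "#extension GL_ANDROID_extension_pack_es31a : enable".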
Previously we had decided to busy loop because Windows' scheduler is terrible and would move us between CPU cores whenever we yielded, and because context switching was a hot spot.
Busy looping in those situations instead gives us greater CPU performance on the video thread.
This can be attributed to multiple things: the CPU not downclocking while busy looping, context switches happening less often, a yield taking more time than a busy loop iteration, etc.
One thing we had considered when moving over to a busy loop was the problem dual core systems would now face, with Dolphin eating all of their CPU resources.
Effectively, we starve a dual core system of any time to do anything else, since the CPU thread is always pinned at 100% and the GPU thread is also at 100%, just spinning around.
We noted the potential for a performance regression, but dismissed it since most computers are now quad core or higher.
This change has particular performance advantages on the dual core Nvidia Denver due to its nonstandard architecture.
If both CPU cores are maxed out, the CPU can't effectively take any idle time to recompile host code blocks to its native VLIW instruction set.
It can still do so, but it does so less frequently, which hurts performance in Dolphin because most code then runs through the in-order instruction decoder instead of as native VLIW code.
In one example, yielding moves performance from 35-40 FPS to 50-55 FPS, so the effect is far more noticeable on Denver than on any other system.
Of course, once a triple or quad core Denver system comes out, this will no longer be an issue on that architecture, since a free core will be available to do all of this work.
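A minimal sketch of the resulting heuristic, using std::thread::hardware_concurrency() as a stand-in for however the core count is actually detected; this illustrates the idea, not the actual Dolphin code:

    #include <thread>

    // Keep spinning when there are plenty of cores, but yield on dual
    // core systems so the rest of the system (and Denver's dynamic code
    // optimizer) gets some idle time.
    inline void SpinOrYield()
    {
      static const unsigned cores = std::thread::hardware_concurrency();
      if (cores > 2)
        return;  // caller continues its busy loop
      std::this_thread::yield();
    }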
Instead of abusing whatever VAO was previously bound, which might have had arrays enabled.
Currently only used in one place, which fixes a crash with older NVIDIA drivers.
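A sketch of the idea, assuming the fix is a dedicated VAO that never has any arrays enabled; the names here are illustrative:

    #include <GLES3/gl3.h>

    static GLuint s_attributeless_vao = 0;

    static void CreateAttributelessVAO()
    {
      glGenVertexArrays(1, &s_attributeless_vao);
    }

    static void DrawAttributeless()
    {
      // Bind the known-empty VAO instead of relying on whatever VAO the
      // previous draw left bound, which might still have arrays enabled.
      glBindVertexArray(s_attributeless_vao);

      // The vertex shader is assumed to derive positions from
      // gl_VertexID, so no vertex attributes are needed for this draw.
      glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
    }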
The reason this didn't break is that bitwise instructions like VPAND,
VANDPS, and VANDPD do the exact same thing. The only difference is the
data type they are intended for.
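This is easy to verify with intrinsics: the integer, single precision, and double precision forms below map to (V)PAND, (V)ANDPS, and (V)ANDPD respectively, and the assertions confirm the results are bit-identical:

    #include <immintrin.h>
    #include <cassert>
    #include <cstring>

    int main()
    {
      const __m128i a = _mm_set_epi32(0x12345678, 0x0F0F0F0F, 0x55555555, -1);
      const __m128i b = _mm_set_epi32(0x0000FFFF, 0x33333333, 0x7FFFFFFF, 0);

      const __m128i and_int = _mm_and_si128(a, b);
      const __m128  and_ps  = _mm_and_ps(_mm_castsi128_ps(a), _mm_castsi128_ps(b));
      const __m128d and_pd  = _mm_and_pd(_mm_castsi128_pd(a), _mm_castsi128_pd(b));

      // All three instructions compute the same 128-bit AND; the type in
      // the mnemonic only describes the data they were intended for.
      assert(std::memcmp(&and_int, &and_ps, sizeof(and_int)) == 0);
      assert(std::memcmp(&and_int, &and_pd, sizeof(and_int)) == 0);
      return 0;
    }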
Updated the PTE.R bit on writes and instruction fetches.
Added code to read the PTE from MEM2 if the PTE is stored there.
Refactored the two hash functions to reduce code duplication.
Updated the save state version.
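A rough sketch of the R bit update, assuming hypothetical physical memory accessors that can reach both MEM1 and MEM2; the real MMU code is more involved:

    #include <cstdint>

    // Hypothetical accessors; the real ones in the codebase differ.
    uint32_t ReadPhys32(uint32_t phys_addr);
    void WritePhys32(uint32_t phys_addr, uint32_t value);

    // A PowerPC PTE is two 32-bit words; the referenced (R) bit lives in
    // the second word.
    constexpr uint32_t PTE2_R = 0x100;

    // Called from both the data write path and the instruction fetch
    // path after a successful page table lookup.
    void UpdatePTE_R(uint32_t pte_addr)
    {
      uint32_t pte2 = ReadPhys32(pte_addr + 4);
      pte2 |= PTE2_R;
      WritePhys32(pte_addr + 4, pte2);
    }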