From wxWidgets master 81570ae070b35c9d52de47b1f14897f3ff1a66c7.
include/wx/defs.h -- __w64 warning disable patch by comex brought forward.
include/wx/msw/window.h -- added GetContentScaleFactor(), which was not implemented on Windows but is needed for wxBitmap scaling on Mac OS X, so it has to exist everywhere to avoid #ifdef-ing the calling code (see the sketch after this list).
src/gtk/window.cpp -- Modified DoSetClientSize() to call wxWindowGTK::DoSetSize() directly instead of going through the public wxWindowBase::SetSize(), which prevents derived classes (like wxAuiToolbar) from intercepting the call and breaking it. This matches Windows, which does NOT need to call DoSetSize internally. The end result is that this fixes the toolbars of Dolphin's debug tools on Linux.
src/osx/window_osx.cpp -- Same fix as for GTK since it has the same issue.
src/msw/radiobox.cpp -- Hacked to fix display in HiDPI mode (it was clipping off the end of the text).
Updated CMakeLists for Linux and Mac OS X. Small code changes to Dolphin to fix debug error boxes, deprecation warnings, and retain previous UI behavior on Windows.
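As a rough illustration of the wx/msw/window.h change (the helper name
and body below are my own sketch, not the actual patch), all a Windows
implementation really has to report is the screen DPI relative to the
96 DPI baseline:

    #include <windows.h>

    // Hypothetical sketch only: 1.0 on a standard display, 2.0 at 192 DPI.
    // wxBitmap scaling code written for OS X can then run unchanged.
    static double GetWindowsContentScaleFactor()
    {
        HDC hdc = ::GetDC(nullptr);
        const double dpi = ::GetDeviceCaps(hdc, LOGPIXELSX);
        ::ReleaseDC(nullptr, hdc);
        return dpi / 96.0;
    }
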
Note: It's not 100% perfect, as some of the GPU capabilities leak into
the pixel shader UID.
Currently our UIDs don't get exported, so there is no issue. But someone
might want to fix this in the future.
As much as possible, the asserts have been moved out of the GetUID
function. But there are some places where asserts depend on variables
that aren't stored in the shader UID.
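To make the split concrete, here is a hedged sketch (every name below is
an invented stand-in, not Dolphin's real code): checks that only depend
on fields captured in the UID can run in the generator, while checks
that need register state which is not captured have to stay where that
state is visible:

    #include <cassert>

    // Invented stand-ins for the real register and UID structures.
    struct RegisterState
    {
        unsigned num_tev_stages;
        bool uncaptured_flag;  // not stored in the UID
    };

    struct ShaderUidData
    {
        unsigned num_tev_stages;
    };

    // An assert that needs state the UID doesn't store has nowhere else
    // to live, so it stays next to the registers.
    static ShaderUidData GetUid(const RegisterState& regs)
    {
        assert(!regs.uncaptured_flag);
        ShaderUidData uid = {};
        uid.num_tev_stages = regs.num_tev_stages;
        return uid;
    }

    // Asserts that only read UID fields can move here, next to the code
    // generation that consumes them.
    static void GenerateShader(const ShaderUidData& uid)
    {
        assert(uid.num_tev_stages <= 16);
        // ... emit shader source from uid fields only ...
    }
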
Bug Fix: It was theoretically possible for a shader with depth writes
disabled to map to the same UID as a shader with late depth
writes.
No known test cases trigger this.
Bug Fix: The normal stage UIDs were randomly overwriting indirect
stage texture map UID fields. It was possible for multiple
shaders with different indirect texture targets to map to
the same UID.
Once again, it doesn't look like this bug was ever triggered (a toy
illustration of the overlap follows below).
This frees up 21 bits and allows us to shorten the UID struct by an entire
32 bits.
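A toy example of the kind of overlap involved (field layout and names
are invented, not Dolphin's actual UID struct): when two logically
distinct fields are packed over the same bits, setting one corrupts the
other, and two different shader configurations can end up with
byte-identical UIDs:

    #include <cstdint>

    struct UidSketch
    {
        uint32_t bits[4] = {};

        // Indirect stage N records its texture map in these bits...
        void SetIndirectTexmap(int stage, uint32_t texmap)
        {
            bits[0] |= texmap << (stage * 3);
        }

        // ...but the normal TEV stage field was packed at the same
        // offsets, so writing it scribbles over the indirect data.
        void SetStageTexmap(int stage, uint32_t texmap)
        {
            bits[0] |= texmap << (stage * 3);  // collision
        }
    };
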
It's not strictly needed (as it's encoded into the length) but I added a
bit for per-pixel lighting to make my life easier in the following
commits.
The only code which touches xfmem is code which writes directly into
uid_data.
All the rest now read their parameters out of uid_data.
I also simplified the lighting code so it always generates separate
codepaths for the alpha and color channels (sketched below) instead of
trying to combine them on the off-chance that the same equation works
for all 4 channels.
As modern (post-2008) GPUs generally don't calculate all 4 channels
in a single vector, this optimisation is pointless. The shader compiler
will undo it during the GLSL/HLSL to IR step.
Bug Fix: The above optimisation was also broken, applying the color
         light equation to the alpha light channel instead of the alpha
         light equation. But it doesn't look like anything triggered
         this bug.
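Roughly, the simplified per-channel structure looks like this (function
names and the UID fields are placeholders, not the real generator API):

    #include <string>

    struct LightingUidData
    {
        // Placeholder: per-channel attenuation/diffuse settings, etc.
        unsigned attnfunc[2][2];
    };

    // Placeholder: emit the light equation for one color or alpha channel.
    static void GenerateLightingChannel(std::string& out,
                                        const LightingUidData& uid, int chan,
                                        bool is_alpha)
    {
        (void)uid;
        out += "// lighting for channel " + std::to_string(chan) +
               (is_alpha ? " (alpha)\n" : " (color)\n");
    }

    static void GenerateLighting(std::string& out, const LightingUidData& uid)
    {
        for (int chan = 0; chan < 2; ++chan)
        {
            // Color and alpha always get separate codepaths, even when the
            // same equation would work for both.
            GenerateLightingChannel(out, uid, chan, /*is_alpha=*/false);
            GenerateLightingChannel(out, uid, chan, /*is_alpha=*/true);
        }
    }
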
OS X uses a case-insensitive filesystem by default: when I try to build,
a system header does #include <assert.h>, which picks up
Source/Core/Common/Assert.h. This only happens because CMakeLists adds
'${PROJECT_BINARY_DIR}/Source/Core/Common' as an include directory: in
an out-of-tree build, that directory contains no other source files, but
in an in-tree build PROJECT_BINARY_DIR is just the source root.
This is only used for scmrev.h. Change the include directory to
'${PROJECT_BINARY_DIR}/Source/Core' and the include to
"Common/scmrev.h", which is more consistent with normal headers anyway.
Small cleanup by using std::shared_ptr and getting rid of
ciface.Devices(), which just returned m_devices (defeating the
point of making m_devices protected).
Incidentally, this should make the code safer when we have
different threads accessing devices in the future (for hotplug?).
A lot of code uses Device references directly, so there is
no easy way to remove FindDevice() and make those unique_ptrs.
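A rough sketch of the resulting ownership (names simplified; the real
Device and lookup types carry much more):

    #include <memory>
    #include <string>
    #include <vector>

    struct Device
    {
        std::string name;
    };

    class DeviceContainer
    {
    public:
        // Handing out a shared_ptr means a device found here stays alive
        // even if another thread (hotplug) removes it from m_devices later.
        std::shared_ptr<Device> FindDevice(const std::string& name) const
        {
            for (const auto& d : m_devices)
                if (d->name == name)
                    return d;
            return nullptr;
        }

    protected:
        // Owned here; no Devices() accessor exposing the container itself.
        std::vector<std::shared_ptr<Device>> m_devices;
    };
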
Using a minimum width is a good compromise between
setting all buttons to the same width
and letting them all decide their own width.
This way the small buttons are kept tidy and regular
while the biggest buttons still get enough room to fit their contents.
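A hedged sketch of the idea (the helper name and parameter are
illustrative, not the actual change):

    #include <algorithm>
    #include <wx/button.h>

    // Give every button the same minimum width, but let any button whose
    // label needs more room than that keep its natural (best) width.
    static void SetButtonMinWidth(wxButton* button, int min_width)
    {
        const wxSize best = button->GetBestSize();
        button->SetMinSize(wxSize(std::max(min_width, best.GetWidth()),
                                  best.GetHeight()));
    }
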