Specifically, when using Manual Texture Sampling, things previously broke if texture sizes didn't match the size the game specifies. That can happen with custom textures, and also with scaled EFB copies at non-native IRs. The most obvious breakage is that the texture coordinates aren't scaled, so only part of the texture shows up; but the hardware wrapping functionality also assumes texture sizes are a power of 2 (otherwise it misbehaves in the same odd way real hardware does). The fix is to provide alternative texture wrapping logic when custom texture sizes are possible.
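As a rough sketch of what such alternative wrapping logic can look like (names are illustrative, not Dolphin's actual code), the idea is to scale the game's texel coordinate into the actually-loaded texture and then apply clamp/repeat/mirror in software, since the hardware-style wrap only works for power-of-2 sizes:

    #include <algorithm>

    enum class WrapMode { Clamp, Repeat, Mirror };

    // Wrap a texel coordinate against the real texture size instead of
    // relying on power-of-2 hardware wrapping.
    int WrapCoord(int coord, int size, WrapMode mode)
    {
      switch (mode)
      {
      case WrapMode::Clamp:
        return std::clamp(coord, 0, size - 1);
      case WrapMode::Repeat:
        return ((coord % size) + size) % size;  // true modulo for any size
      case WrapMode::Mirror:
      {
        const int period = 2 * size;
        const int c = ((coord % period) + period) % period;
        return (c < size) ? c : (period - 1 - c);
      }
      }
      return coord;
    }

    // Scale a coordinate from the size the game specified to the size of the
    // custom texture / scaled EFB copy that was actually loaded, then wrap.
    int SampleCoord(int native_coord, int native_size, int actual_size, WrapMode mode)
    {
      const int scaled = native_coord * actual_size / native_size;
      return WrapCoord(scaled, actual_size, mode);
    }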
SPDX standardizes how source code conveys its copyright and licensing
information. See https://spdx.github.io/spdx-spec/1-rationale/. SPDX tags
have been adopted by many large projects, including the Linux kernel.
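For example, a tagged file header looks something like this (the copyright line and the GPL-2.0-or-later identifier are shown purely as an illustration):

    // Copyright 2021 Dolphin Emulator Project
    // SPDX-License-Identifier: GPL-2.0-or-later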
Currently, we do not display every second frame in 25fps/30fps games
that run to vsync. This improves performance, as there's less rendering
for the GPU to perform, but when combined with vsync it can cause frame
pacing issues.
This commit adds an option to force every frame generated by the console
to be displayed to the host, which may improve pacing for these games.
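A minimal sketch of the presentation decision, with a hypothetical flag name (this is not Dolphin's actual code):

    // Decide whether to present a frame that is identical to the previously
    // presented one. Normally a duplicate frame (a 25/30fps game repeating
    // the same image at 50/60Hz) is skipped to save GPU work; forcing it to
    // be presented keeps the host's present cadence aligned with the console.
    bool ShouldPresentFrame(bool frame_is_duplicate, bool force_present_every_frame)
    {
      return !frame_is_duplicate || force_present_every_frame;
    }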
This header doesn't actually use MathUtil.h itself, so the include can
be removed. Many other source files relied on VideoCommon.h to pull in
MathUtil.h indirectly, so their includes have been adjusted as well.
While we're at it, we can also migrate valid inclusions of VideoCommon.h
into cpp files where feasible, to avoid propagating it via other
headers.
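Illustrative example of the pattern (file names are made up):

    // Before: Foo.h included "VideoCommon/VideoCommon.h" and relied on it to
    // pull in MathUtil.h indirectly.
    //
    // After: Foo.h no longer includes VideoCommon.h; Foo.cpp includes exactly
    // what it needs, directly.
    #include "Common/MathUtil.h"
    #include "VideoCommon/VideoCommon.h"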
Also makes y_scale a dynamic parameter for EFB copies, as it doesn't
make sense to keep it as part of the uid; doing so only generates
redundant shaders.
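A hedged sketch of the split (the struct names here are hypothetical): fields in the uid select which shader gets generated, while per-copy values like y_scale are supplied at run time instead:

    // Compile-time state: becomes part of the shader uid / cache key.
    struct EFBCopyUid
    {
      bool depth_copy;
      bool yuv_format;
      // y_scale intentionally not here - it no longer forces a new shader.
    };

    // Run-time state: uploaded per copy (e.g. via a uniform/constant buffer).
    struct EFBCopyParams
    {
      float y_scale;
    };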
Skip ubershader mode works the same way as hybrid ubershaders in that
the shaders are compiled asynchronously. However, instead of using the
ubershader to draw the object, drawing it is skipped entirely until the
specialized shader becomes available.
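Roughly, the per-object decision looks like this (enum and function names are illustrative, not the actual implementation):

    enum class CompilationMode { HybridUbershaders, SkipDrawing };

    // Returns true if the object should be drawn this frame.
    bool ShouldDraw(bool specialized_shader_ready, CompilationMode mode)
    {
      if (specialized_shader_ready)
        return true;  // the async compile finished - use the specialized shader
      // Not ready yet: hybrid mode falls back to the ubershader and still
      // draws; skip mode drops the object until compilation completes.
      return mode == CompilationMode::HybridUbershaders;
    }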
This mode will likely result in broken effects where a game creates an
EFB copy and does not redraw it every frame. It is therefore not a
recommended option; however, it may result in better performance on
low-end systems.
This enables shaders to be compiled while the game is starting, instead
of blocking startup. If a shader is needed before it is compiled,
emulation will block.
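A minimal sketch of the idea using std::async (not the actual interface used):

    #include <future>
    #include <iostream>
    #include <string>

    struct CompiledShader { std::string blob; };

    CompiledShader CompileShader(const std::string& source)
    {
      return {"<compiled " + source + ">"};  // stand-in for real compilation
    }

    int main()
    {
      // Queue the compile on a worker thread while the game is starting.
      std::future<CompiledShader> pending =
          std::async(std::launch::async, CompileShader, std::string("vertex"));

      // ... startup continues without waiting ...

      // First use: get() blocks only if the compile hasn't finished yet.
      std::cout << pending.get().blob << '\n';
    }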
The option is named DisableCopyToVRAM under the Hacks section in
GFX.ini. It is intentionally not exposed to the GUI, as users should not
need to use it under normal circumstances. The main use is debugging
issues in the EFB-to-RAM shaders.
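For reference, enabling it would look something like this in GFX.ini (the True/False value syntax is assumed here):

    [Hacks]
    DisableCopyToVRAM = True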
Otherwise we might get UB if the value we cast is larger than the max
value of the underlying type that the compiler picked for the enum.
I haven't done an extensive check through Dolphin to find cases of
this; I'm just fixing the cases I already know of.
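A self-contained illustration of the issue and the fix (the enums are made up, not Dolphin code):

    #include <cstdint>

    // Without a fixed underlying type, converting a value outside the range
    // of the enumeration's values into the enum is undefined behaviour.
    enum SmallEnum
    {
      Low = 0,
      High = 3,
    };

    // With an explicit underlying type, the cast is well-defined for any
    // value representable in that type.
    enum SafeEnum : std::uint32_t
    {
      SafeLow = 0,
      SafeHigh = 3,
    };

    SafeEnum FromRaw(std::uint32_t raw)
    {
      return static_cast<SafeEnum>(raw);  // fine even if raw > SafeHigh
    }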
This was mainly included for debugging, but could end up being confusing
for users, as well as polluting the GL program cache with a mix of uber
and specialized shaders if the option was changed.
This stops the virtual method call from within the Renderer constructor.
The GL initialization here had to be moved to VideoBackend, as the
Renderer constructor will not have been executed before the value is
required.
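For context, this is the standard C++ pitfall being avoided (the classes below are generic, not Dolphin's):

    #include <iostream>

    class Base
    {
    public:
      // During this constructor the derived part doesn't exist yet, so the
      // virtual call resolves to Base::Value(), not the override.
      Base() { std::cout << Value() << '\n'; }  // prints 0
      virtual ~Base() = default;
      virtual int Value() const { return 0; }
    };

    class Derived : public Base
    {
    public:
      int Value() const override { return 42; }
    };

    int main()
    {
      Derived d;  // the override is never reached from Base's constructor
    }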
This is mainly for potential Android fifoci usage, and thus is not
exposed anywhere in the UI. To enable, set DumpFramesAsImages under
Settings in GFX.ini.
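Concretely, that would look something like this in GFX.ini (the True/False value syntax is an assumption):

    [Settings]
    DumpFramesAsImages = True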
It's not available in OpenGL ES, and officially it's not supported on
OpenGL 3.0/3.1. Fall back to the old depth range code if there is no way
to disable depth clipping. It's more important to have correct clipping
than to have accurate depth values; inaccurate depth values can be fixed
by slow depth.
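A sketch of the fallback decision; this assumes the feature in question is GL depth clamping (glEnable(GL_DEPTH_CLAMP), core in GL 3.2+ but absent from OpenGL ES), and the flag name is illustrative:

    // Requires a GL loader (e.g. glad) that exposes the GL 3.2 enum.
    void ConfigureDepthPath(bool supports_depth_clamp)
    {
      if (supports_depth_clamp)
      {
        // Disable near/far clipping; depth values get clamped instead, so
        // the accurate depth range can be used.
        glEnable(GL_DEPTH_CLAMP);
      }
      else
      {
        // No way to disable depth clipping: keep the old depth range code,
        // trading depth accuracy for correct clipping.
      }
    }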
* Focus "Hash Code" / "IP address" text box by default in "Connect"
* Focus game list in "Host" tab
* RETURN keypress now hosts/joins depending on the selected tab
* Remember last hosted game
* Remove PanicAlertT:
  * Simply log the message to the netplay window
  * Remove them when they are useless
* Show some netplay messages in the OSD:
  * Chat messages
  * Pad buffer changes
  * Desync alerts
* Stop the game consistently when another player disconnects / crashes
* Prettify chat textbox
* Log netplay ping to OSD
Join scenario:
* Copy netplay code
* Open netplay
* Paste code
* Press enter
Host scenario:
* Open netplay
* Go to host tab
* Press enter
Fast depth is now more accurate than slow depth and should always be used.
The option will be kept in a different form as it is still used as a hack to fix some games.
Also, the slow depth code path will still be relied upon by cards that don't support GL_ARB_clip_control.
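For reference, the capability split described above boils down to something like this (only glClipControl, from GL_ARB_clip_control / core GL 4.5, is a real call; the rest is illustrative):

    // Requires a GL loader that exposes GL_ARB_clip_control.
    void SelectDepthPath(bool supports_clip_control)
    {
      if (supports_clip_control)
      {
        // Switch clip-space depth to [0, 1] so the fast depth path can map
        // the console's depth values without the usual [-1, 1] remapping loss.
        glClipControl(GL_LOWER_LEFT, GL_ZERO_TO_ONE);
      }
      else
      {
        // Cards without the extension keep using the slow depth code path.
      }
    }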