This makes it possible to use enums as the config type.
Default values are now clearer, and casts are no longer needed when
calling Config::Get/Set.
To support this, the common code was updated to handle enums by using
their underlying type when loading/saving settings.
A copy constructor is also provided for conversions from
`ConfigInfo<Enum>` to `ConfigInfo<underlying_type<Enum>>`
so that enum settings can still easily work with code that doesn't care
about the actual enum values (like Graphics{Choice,Radio} in DolphinQt2
which only treat the setting as an integer).
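A minimal sketch of what that conversion looks like (simplified; the real
ConfigInfo stores a config location rather than a plain string key, and the
names below are illustrative):

    #include <string>
    #include <type_traits>
    #include <utility>

    template <typename T>
    struct ConfigInfo
    {
      std::string key;  // stand-in for the real config location
      T default_value;

      ConfigInfo(std::string k, T def) : key(std::move(k)), default_value(def) {}

      // Copy constructor from ConfigInfo<Enum>, so code that only deals in
      // integers (e.g. Graphics{Choice,Radio}) keeps working unchanged.
      template <typename Enum, std::enable_if_t<std::is_enum<Enum>::value>* = nullptr>
      ConfigInfo(const ConfigInfo<Enum>& other)
          : key(other.key), default_value(static_cast<T>(other.default_value))
      {
      }
    };

    // Hypothetical usage:
    enum class AspectMode : int { Auto, Stretch };
    const ConfigInfo<AspectMode> GFX_ASPECT_MODE{"AspectMode", AspectMode::Auto};
    const ConfigInfo<int> GFX_ASPECT_MODE_INT{GFX_ASPECT_MODE};  // for int-only UI code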
Also makes y_scale a dynamic parameter for EFB copies, as it doesn't
make sense to keep it as part of the UID; otherwise we would be
generating redundant shaders.
Skip ubershader mode works the same as hybrid ubershaders in that the
shaders are compiled asynchronously. However, instead of using the
ubershader to draw the object, the draw is skipped entirely until the
specialized shader becomes available.
This mode will likely result in broken effects where a game creates an
EFB copy and does not redraw it every frame. It is therefore not a
recommended option; however, it may result in better performance on
low-end systems.
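Conceptually, the per-draw decision differs from hybrid ubershaders only in
the fallback; a rough sketch (names are illustrative, not Dolphin's actual
ones):

    enum class AsyncShaderMode { HybridUberShaders, SkipDrawing };

    // Returns whether the object should be drawn this frame.
    bool ShouldDraw(bool specialized_shader_ready, AsyncShaderMode mode)
    {
      if (specialized_shader_ready)
        return true;                                // draw with the specialized shader
      if (mode == AsyncShaderMode::HybridUberShaders)
        return true;                                // fall back to the ubershader
      return false;                                 // skip mode: drop the draw for now
    }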
- In D3D, shaders could be compiled on the main thread, blocking
startup.
- Reduced the latency between a pipeline being requested and used in all
backends in hybrid ubershader mode, when no shader stages were present.
- Fixed a case where async compilation could cause the same UID to be
appended multiple times to the UID cache.
- Fixed an incorrect number of threads being used when the option to
  immediately compile shaders was enabled.
This enables shaders to be compiled while the game is starting, instead
of blocking startup. If a shader is needed before it is compiled,
emulation will block.
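The behaviour is roughly the standard compile-in-the-background pattern; a
minimal illustration, where CompilePipeline() is a hypothetical stand-in for
the real shader compilation:

    #include <future>

    struct Pipeline {};
    Pipeline CompilePipeline() { return {}; }  // hypothetical stand-in

    int main()
    {
      // Start compiling while the game is still booting...
      std::future<Pipeline> pending = std::async(std::launch::async, CompilePipeline);

      // ...and only block here if a draw needs the pipeline before it is ready.
      Pipeline pipeline = pending.get();
    }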
The option is named DisableCopyToVRAM under the Hacks section in
GFX.ini. It is intentionally not exposed to the GUI, as users should not
need to use it under normal circumstances. The main use is debugging
issues in the EFB-to-RAM shaders.
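For example, in GFX.ini (the True/False value format below follows the usual
Dolphin INI convention):

    [Hacks]
    DisableCopyToVRAM = True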
There was a race condition between the video thread and the host thread
if corrections needed to be made by VerifyValidity(): the config would
briefly contain invalid values. Instead, pause emulation first (which
flushes the video thread), update and correct the config, then resume
emulation, after which the video thread will detect that the config has
changed and act accordingly.
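The reordered flow is roughly the following; the helper names are
hypothetical stand-ins, not the actual Dolphin functions:

    // Hypothetical stand-ins for the real host/emulation interfaces.
    void PauseEmulation() {}     // would flush the video thread
    void SetConfigValue(int) {}  // would write the new value
    void VerifyValidity() {}     // would correct invalid combinations
    void ResumeEmulation() {}    // video thread then re-reads the config

    // The fix: do all of the above while emulation is paused, so the video
    // thread never observes a partially-updated config.
    void ApplyNewStereoSetting(int new_value)
    {
      PauseEmulation();
      SetConfigValue(new_value);
      VerifyValidity();
      ResumeEmulation();
    }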
This was mainly included for debugging, but could end up being confusing
for users, as well as polluting the GL program cache with a mix of uber
and specialized shaders if the option was changed.
This code hadn't been touched since 2010. Nowadays, the panic alert
setting is loaded by ConfigManager and applied in UICommon.
VideoConfig has no business messing with it.
This stops the virtual method call from within the Renderer constructor.
The initialization here for GL had to be moved to VideoBackend, as the
Renderer constructor will not have been executed before the value is
required.
This is mainly for potential Android fifoci usage, and thus is not
exposed anywhere in the UI. To enable, set DumpFramesAsImages under
Settings in GFX.ini.
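For example, in GFX.ini (True/False being the usual Dolphin INI boolean
format):

    [Settings]
    DumpFramesAsImages = True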
* Focus "Hash Code" / "IP address" text box by default in "Connect"
* Focus game list in "Host" tab
* RETURN keypress now hosts/joins depending on the selected tab
* Remember last hosted game
* Remove PanicAlertT:
  * Simply log the message to the netplay window
  * Remove them when they are useless
* Show some netplay messages in the OSD:
  * Chat messages
  * Pad buffer changes
  * Desync alerts
* Stop the game consistently when another player disconnects / crashes
* Prettify chat textbox
* Log netplay ping to OSD
Join scenario:
* Copy netplay code
* Open netplay
* Paste code
* Press enter
Host scenario:
* Open netplay
* Go to host tab
* Press enter
Fast depth is now more accurate than slow depth and should always be used.
The option will be kept in a different form as it is still used as a hack to fix some games.
Also, the slow depth code path will still be relied upon by cards that don't support GL_ARB_clip_control.
We don't throttle by frames; we throttle by coretiming speed.
So looking up VI to calculate the speed was just very wrong.
The new INI option is a float, with 1.0f meaning full speed.
In the GUI, percentage values are used.
This reverts commit 81414b4fa2, reversing
changes made to b926061f64.
Conflicts:
	Source/Core/DolphinWX/Frame.cpp
	Source/Core/VideoCommon/VideoConfig.cpp
	Source/Core/VideoCommon/VideoConfig.h
It was only implemented in OpenGL, though the option was visible in both
backends, leading to memory leaks if you enabled it in DirectX.
And it wasn't particularly useful as a debug feature as it only showed
where in the EFB the copies were taken from, not what format it was, or
what the copy was used for, or what content was in the EFB at that point
in time.
Also, it stretched the copy regions relative to the window, so the
on-screen regions didn't even line up unless the game used the full EFB
(as some PAL games do) and the game image was stretched to fill the
window.
Added 3 depth/convergence presets. They are adjustable via the (existing) hotkeys - changes to depth and convergence are applied to the current preset.
Added 3 hotkeys for activating presets, and a hotkey for toggling between the first and second presets.
Added OSD messages for convergence/depth changes.
Presets are saved into per-game configs.
SSAA relies on MSAA being active to work. We only supported 4x SSAA, while in fact SSAA can be enabled at any MSAA level.
I even managed to run 64xMSAA + SSAA on my Quadro, which made some pretty sleek-looking games. They were very cinematic, though.
With this, it also properly fixes up SSAA and MSAA support in GLES, which were previously broken when stereo rendering was enabled.
Now GLES can properly support MSAA, and also stereo rendering with MSAA enabled (with the proper extensions).