In particular this fixes the 6666 colour format
We were loading from the wrong location and it was causing /terrible/ colour changes.
This also fixes a bug in all the colour formats (except 888) where the unaligned path was loading into the wrong register.
- Fixes remaining lighting issues (Mario Tennis, etc.)
- Apply same fixes to Software Renderer
- Corrected zero-length light direction vectors to resolve to the normal direction (essentially becomes LIGHTDIF_NONE, which was what I was after)
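A minimal C++ sketch of that last fix, using a hypothetical Vec3 type and function name (not the actual lighting code):

  #include <cmath>

  struct Vec3 { float x, y, z; };

  static Vec3 ResolveLightDir(const Vec3& to_light, const Vec3& normal)
  {
    float len2 = to_light.x * to_light.x + to_light.y * to_light.y + to_light.z * to_light.z;
    // A zero-length direction can't be normalized; fall back to the vertex normal,
    // which makes the diffuse term behave like LIGHTDIF_NONE.
    if (len2 == 0.0f)
      return normal;
    float inv_len = 1.0f / std::sqrt(len2);
    return { to_light.x * inv_len, to_light.y * inv_len, to_light.z * inv_len };
  }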
The new implementation has 3 options:
SyncGpuMaxDistance
SyncGpuMinDistance
SyncGpuOverclock
The MaxDistance controls how many CPU cycles the CPU is allowed to run ahead
of the GPU. Values that are too low will slow things down drastically; values that
are too high are effectively unsynchronized, and half of the games will crash.
The (negative) MinDistance sets how many cycles the GPU is allowed to run ahead
of the CPU. As we have always emulated an infinitely fast GPU, this may be
set to any large (negative) number.
The last parameter is to hack in a faster (>1.0) or slower (<1.0) GPU. As we don't
emulate GPU timing very well (e.g. we skip the timing of the pixel stage completely),
an overclock factor of ~0.5 is often much more accurate than 1.0.
This fixes issue 6563:
https://code.google.com/p/dolphin-emu/issues/detail?id=6563
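A hypothetical Dolphin.ini excerpt, just to show the shape of the three settings (the section name and the values below are illustrative guesses, not the defaults):

  [Core]
  SyncGpuMaxDistance = 200000
  SyncGpuMinDistance = -200000
  SyncGpuOverclock = 0.5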
This PR adds a second map to the texture cache which uses the hash as the key. Cache entries from this new map are used only if the address matches or if the texture was fully hashed. This restriction avoids false-positive cache hits. It also creates a possible situation where the safe texture cache accuracy could be faster than the fast one.
Small textures here means up to 1 KB for fast texture cache accuracy, 4 KB for medium, and all textures for safe accuracy.
Since this adds a small overhead to all texture cache handling, some regression testing would be nice. Games which use a lot of textures at the same time should be affected the most.
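A rough sketch of the lookup restriction, using hypothetical types and names (not the actual TextureCache code):

  #include <cstdint>
  #include <unordered_map>

  struct TCacheEntry
  {
    uint32_t addr;
    bool fully_hashed;
    // ...
  };

  // Second map, keyed by the texture hash.
  static std::unordered_multimap<uint64_t, TCacheEntry*> s_hash_index;

  static TCacheEntry* LookupByHash(uint64_t hash, uint32_t addr)
  {
    auto range = s_hash_index.equal_range(hash);
    for (auto it = range.first; it != range.second; ++it)
    {
      TCacheEntry* entry = it->second;
      // Only reuse the entry if the address matches or the texture was fully hashed;
      // this avoids false-positive cache hits.
      if (entry->addr == addr || entry->fully_hashed)
        return entry;
    }
    return nullptr;
  }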
I tried to change messages that contained instructions for users,
while avoiding messages that are so technical that most users
wouldn't understand them even if they were in the right language.
Address static memory relative to a base register, analogous to what we're
doing with PPCSTATE in the CPU JIT. This allows executable memory for
the vertex loader JIT to be allocated anywhere, not just within 2 GiB of
static data.
Fixes issue 8180.
Yet another story of games loading weird shit into registers.
For some reason, Burnout 2 would (in rare situations) load invalid
addresses into cp_state.array_bases. What would the real hardware
do in this situation? Who knows, Burnout 2 doesn't actually enable
the vertex array with the invalid address so nothing kinky happens.
But Dolphin tries to optimise things and starts using the address
as soon as it is loaded into memory. This causes GetPointer (which is
now much more vocal) to throw an error.
The Fix: We don't call GetPointer until we are sure the vertex array
has been enabled.
- FileSearch is now just one function, and it converts the original glob
into a regex on all platforms rather than relying on native Windows
pattern matching there and a complete hack elsewhere. It now
supports recursion out of the box rather than manually expanding
into a full list of directories in multiple call sites. (A rough sketch of the glob-to-regex idea follows after this list.)
- This adds a GCC >= 4.9 dependency due to older versions having
outright broken <regex>. MSVC is fine with it.
- ScanDirectoryTree returns the parent entry rather than filling parts
of it in via reference. The count is now stored in the entry like it
was for subdirectories.
- .glsl file search is now done with DoFileSearch.
- IOCTLV_READ_DIR now uses ScanDirectoryTree directly and sorts the
results after replacements for better determinism.
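The glob-to-regex conversion from the first bullet, sketched under the assumption that only '*' and '?' are treated as wildcards (hypothetical helper, not the actual FileSearch code):

  #include <regex>
  #include <string>

  static std::regex GlobToRegex(const std::string& glob)
  {
    std::string re;
    for (char c : glob)
    {
      if (c == '*')
        re += ".*";
      else if (c == '?')
        re += '.';
      else
      {
        // Escape any other regex-special character so it matches literally.
        if (std::string("\\^$.|+()[]{}").find(c) != std::string::npos)
          re += '\\';
        re += c;
      }
    }
    return std::regex(re, std::regex::icase);
  }

For example, "*.glsl" would become the pattern ".*\.glsl".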
Though just returning the last written value sounds better, that crashes Paper Mario.
In my opinion, gfx issues are fine on older GPUs, but crashes should not happen.
There is no nice way to correctly "detect" the "used" memory, so we just say
we're fine to use 50% of the physical memory for custom textures.
This will fix out-of-memory crashes, but we still might run into swapping issues.
This was causing a race condition where the "absurdly large aux buffer"
panic alert would be triggered in the last bit of fifo processing on the
CPU thread in deterministic mode (i.e. netplay). SyncGPU is supposed to
move the auxiliary queue data to the beginning of the containing buffer
so we don't have to deal with wraparound; if GpuRunningState is false,
however, it just returns, because it's set to false by another thread -
thus it doesn't know whether RunGpuLoop is still executing (in which
case it can't just reset the pointers, because it may still be using the
buffer) or not (in which case the condition variable it normally waits
for to avoid the previous problem will never be signaled). However,
SyncGPU's caller PushFifoAuxBuffer wasn't aware of this, so if the
buffer was filling at just the right time, it'd stay full and that
function would complain that it was about to overflow it. Similar
problem with ReadDataFromFifoOnCPU afaik. Fix this by returning early
from those as well; other callers of SyncGPU should be safe. A
*slightly* cleaner alternative would be giving the CPU thread a way to
tell when RunGpuLoop has actually exited, but whatever, this works.
This drops the "feature" to load level 0 from the custom texture
and all other levels from the native one if the size matches.
But in my opinion, when a custom texture only provides one level,
no other levels should be used at all.
A number of games make an EFB copy in I4/I8 format, then use it as a
texture in C4/C8 format. Detect when this happens, and decode the copy on
the GPU using the specified palette.
This has a few advantages: it allows using EFB2Tex for a few more games,
it preserves the resolution of scaled EFB copies, and it's probably a
bit faster.
D3D only at the moment, but porting to OpenGL should be straightforward.
The obvious question here is, why does it matter if we round or truncate?
The key is that GC/Wii does fixed-point interpolation, whereas PC GPUs do
floating-point interpolation. Discarding fractional bits makes the conversion
from floating-point to fixed point give more consistent results.
I'm not confident this is really the right fix, or that my explanation is
completely correct; ideally, we don't want to depend on floating-point
interpolation at all.
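An illustration of the truncate-versus-round distinction, using hypothetical helpers (not Dolphin's actual conversion code):

  #include <cmath>
  #include <cstdint>

  // Discarding the fractional bits (truncation) is what the commit describes.
  static int32_t ToFixedTruncate(float x, int frac_bits)
  {
    return static_cast<int32_t>(std::trunc(x * static_cast<float>(1 << frac_bits)));
  }

  static int32_t ToFixedRound(float x, int frac_bits)
  {
    return static_cast<int32_t>(std::lround(x * static_cast<float>(1 << frac_bits)));
  }

  // With 8 fractional bits, 1.9999f scales to ~511.97: truncation gives 511 while
  // rounding gives 512 - a one-step difference that can flip results near edges.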
This is the same trick which is used for Metroid's fonts/texts, but for all textures. If 2 different textures at the same address are loaded during the same frame, create a 2nd entry instead of overwriting the existing one. If the entry was overwritten in this case, there wouldn't be any caching, which results in a big performance drop.
The restriction to textures loaded during the same frame prevents creating lots of entries when textures are used in the regular way. This restriction is new. Overwriting textures instead of creating new ones is faster if the old ones are unlikely to be used again.
Since this would break efb copies, don't do it for efb copies.
Castlevania 3 goes from 80 fps to 115 fps for me.
There might be games that need a higher texture cache accuracy with this, but those games should also see a performance boost from this PR.
Some games which use paletted textures that are not efb copies might be faster now, and might also not require a higher texture cache accuracy anymore. (Similar situation as PR https://github.com/dolphin-emu/dolphin/pull/1916)
In nearly all direct loadstore cases we can use unscaled loadstores.
Still have a fallback in case we hit a situation where we /can't/ do an unscaled loadstore.
When enabled, the silent option will avoid popping up dialog boxes for
overwrite confirmation or codec selection. The codec selection defaults to
uncompressed RGB.
This is required for FifoCI on Windows which needs to drive Dolphin from the
command line exclusively.
We want to move the vertex by 1/12 pixel, but the old code
missed the perspective division. By multiplying with pos.w,
the position is moved correctly after the perspective division.
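A minimal sketch of the idea with a hypothetical Vec4 type (not the actual shader code):

  struct Vec4 { float x, y, z, w; };

  // offset_x/offset_y are the desired post-divide offsets (e.g. 1/12 pixel in NDC units).
  static void ApplyClipSpaceOffset(Vec4& pos, float offset_x, float offset_y)
  {
    // Multiplying by w means the offset survives the perspective divide unchanged:
    // (pos.x + offset_x * pos.w) / pos.w == pos.x / pos.w + offset_x
    pos.x += offset_x * pos.w;
    pos.y += offset_y * pos.w;
  }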
On D3D, we read from the depth buffer using the format
DXGI_FORMAT_R24_UNORM_X8_TYPELESS (essentially, the "r" component contains
the depth, and the other components contain nothing).
Well, that's not strictly true, but trying to memcpy between two buffers
using different row lengths and different strides is at minimum extremely
unintuitive.
This changes the behavior if both textures are available. The old code
tried to load the modified texID; the new code tries the unmodified texID first.
- Calculate ZSlope on every flush, but only set the pixel shader constant on Reset Buffer when zfreeze is active
- Fixed another Pixel Shader bug in D3D that was giving me grief
Results are still not correct, but things are getting closer.
* Don't cull CULLALL primitives so early so they can be used as reference
planes.
* Convert CalculateZSlope to screenspace coordinates.
* Convert Pixelshader to screenspace coordinates (instead of worldspace
xy coordinates, which is totally wrong)
* Divide depth by 2^24 instead of clamping to 0.0-1.0 as was done
before (see the sketch after this list).
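A minimal illustration of that depth scaling (hypothetical helper, not the actual code):

  #include <cstdint>

  // GC/Wii depth is a 24-bit integer, so dividing by 2^24 maps the full range into
  // 0.0-1.0 instead of clamping values that fall outside it.
  static float NormalizeDepth24(uint32_t z24)
  {
    return static_cast<float>(z24) / 16777216.0f;  // 2^24
  }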
Progress:
* Rogue Squadron 2/3 appear correct in game (videos in rs2 save file
selection are missing)
* Shadows draw 100% correctly in NHL 2003.
* Mario golf menu renders correctly.
* NFS: HP2, shadows sometimes render on top of car or below the road.
* Mario Tennis, courts and shadows render correctly, but at wrong depth
* Blood Omen 2, doesn't work.
Based on the feedback from pull request #1767 I have put in most of
degasus's suggestions in here now.
I think we have a real winner here as moving the code to
VertexManagerBase for a function has allowed OGL to utilize zfreeze now
:)
Correct use of the vertex pointer has also corrected most of the issues
found in pull request #1767 that JMC47 reported. This also means
Mario Tennis now works for me with no polygon spikes on the characters
anymore! Shadows are still an issue, and probably in the other games
with shadow problems. Rebel Strike also seems better, but random skybox
glitches can show up.
Initial port of original zfreeze branch (3.5-1729) by neobrain into
most recent build of Dolphin.
Makes Rogue Squadron 2 very playable at full speed thanks to recent core
speedups made to Dolphin. Works on DirectX Video plugin only for now.
Enjoy! and Merry Xmas!!
On locales that don't use a period as the decimal separator, this would break us.
For vector values in the configuration we use a comma as the separator, which caused the configuration to balloon to massive sizes because the values were never saved
correctly. Loading would then break since it would load a million configuration options.
Fixes issue #7569.
Allows the UI to easily check the current exclusive mode state.
This simplifies a few checks and prevents the user from ever getting stuck in fullscreen.
Don't change the texID depending on the tlut_hash for paletted textures that are efb copies and don't have an entry in the cache for texID ^ tlut_hash. This makes those textures less broken when using efb to texture.
This is not really fixing those textures, but it's a step forward. The minimap in Twilight Princess, for example, is in grayscale with this and is more or less usable.
We'd lose data by invalidating them, so just keep them until a new copy is done.
A wrongly scaled copy is better than no copy if the game doesn't create a new one.
This reverts an optimization which isn't worth it imo. Every texture upload has to allocate VRAM and a staging buffer, so there is no need to do both in the same call.
Previously it was packed into spare slots in clippos.xy and normal.w,
but it's ugly and more importantly it's causing bugs.
This was discovered during the debugging of a zfreeze branch, which
expected clippos.xy to be xy position coordinates in clipspace (as
the name suggested).
Turns out the stereoscopy shader had also run into this trap, modifying
clippos.x (introducing errors with per-pixel lighting).
This commit has been moved outside of the zfreeze PR for fast merging.
They were used to reduce the number of flushes, but as we don't
flush on vertex loader changes anymore (only on native
vertex format changes right now), this optimization is now unneeded.
This will allow us to hard code the frac factors within the
vertex loaders.
Fixes a typo where the official IMGTec drivers were said to be the OSS driver support.
Removes Mali GPU family detection just like I removed the Adreno family detection.
We don't support Mali Utgard anyway.
If we need family detection we can properly add it, right now it isn't needed.
Adreno 300 and 400 have the same video driver performance issues because they are very similar architectures which behave basically the same in
every way.
There isn't any need to detect the family of the driver with Qualcomm anyway. If we ever need family-specific bugs then we can implement real support
for that.
The performance issue on the Adreno 400 series was due to us only detecting the Adreno 300 series; with Adreno 400 the bug flags weren't applied, which caused
it to use glBufferSubData, causing the huge performance hit.
Previously we had decided to busy loop because Windows' scheduler is terrible and would move us around CPU cores when we yielded,
and because context switching was a hot spot.
Busy looping in these situations gives us greater CPU performance on the video thread.
This can be attributed to multiple things: the CPU not downclocking while busy looping, context switches happening less often, yielding taking more time
than a busy loop, etc.
One thing we had considered when moving over to a busy loop is the issues that dual core systems would now face due to Dolphin eating all of their CPU
resources. Effectively we are starving a dual core system of any time to do anything else due to the CPU thread always being pinned at 100% and then
the GPU thread also always at 100% just spinning around. We noted the potential for a performance regression, but dismissed it as most computers are
now becoming quad core or higher.
This change in particular has performance advantages on the dual core Nvidia Denver due to its architecture being nonstandard. If both CPU cores are
maxed out, the CPU can't effectively take any idle time to recompile host code blocks to its native VLIW architecture.
It can still do so, but it does less frequently which results in performance issues in Dolphin due to most code just running through the in-order
instruction decoder instead of the native VLIW architecture.
In one particular example, yielding moves the performance from 35-40FPS to 50-55FPS. So it is far more noticeable on Denver than any other system.
Of course once a triple or quad core Denver system comes out this will no longer be an issue on this architecture since it'll have a free core to do
all of this work.
Seems to be pretty high in the profile in some geometry-heavy games like The
Last Story, and the compiler-generated assembly is terrifyingly bad, so
SSE-ize it.
Just use regular boolean negation in our pixel shader's depth test everywhere except on Qualcomm.
This works around a bug in the Intel Windows driver where comparing a boolean value against true or false fails but boolean negation works fine.
Quite silly.
Should fix issues #7830 and #7899.
This shader constant was previously used for depth remapping in D3D and for pixel center correction. Now it only serves one purpose and the new name makes it clear.
This particular issue was fixed in the v66 (07-08-2014) development drivers from Qualcomm.
To make sure we cover all drivers that may or may not have the issue fixed, make sure to mandate v95 minimum to work around the issue.
The next commit is the actual work around for post processing for this.
Due to changes in how we render to the final framebuffer we no longer encounter this bug.
With the change to post processing being enabled at all times and no longer using glBlitFramebuffer, Qualcomm no longer has the chance to rotate our
framebuffer underneath of us.
This particular bug from our friends over at Qualcomm manifests itself due to our alpha testing code having a conditional if statement in it.
This is a fairly recent breakage this time around, it was introduced in the v95 driver which comes with Android 5.0 on the Nexus 5.
So to break this issue down; In our alpha testing code we have two comparisons that happen and if they are true we will continue rendering, but if
they aren't true we do an early discard and return. This is summed up with a fairly simple if statement.
if (!(condition_1 <logic op> condition_2)) { /* discard and return */ }
This particular issue isn't actually due to the conditions within the if statement, but to the negation of the result. That negation is what
causes Qualcomm to fall flat on its face.
I've got two simple test cases that demonstrate this.
Non-working: http://hastebin.com/evugohixov.avrasm
Working: http://hastebin.com/afimesuwen.avrasm
As one can see, the disassembled output between the two shaders is different even though in reality it should have the same visual result.
I'm currently writing up a simple test program for Qualcomm to enjoy, since they will be asking for one when I tell them about the bug.
It will be tracked in our video driver failure spreadsheet along with the others.
We will now rely on Memory::CopyFromEmu to do bounds checking.
Some games actually load palettes from 0x00000000, despite the
fact no valid palette data should ever be there.
Fixes Issue 7792.
The timing information is set on s_scaled_frame->pts, giving precise
timing information to the encoder. Frames arriving too early (less than
one tick after the previous frame) are dropped. The setting of the packet's
timestamps and flags is done after the call to avcodec_encode_video2()
as this function resets these fields according to its documentation.
This is good hygiene, and also happens to be required to build Dolphin
using Clang modules.
(Under this setup, each header file becomes a module, and each #include
is automatically translated to a module import. Recursive includes
still leak through (by default), but modules are compiled independently,
and can't depend on defines or types having previously been set up. The
main reason to retrofit it onto Dolphin is compilation performance - no
more textual includes whatsoever, rather than putting a few blessed
common headers into a PCH. Unfortunately, I found multiple Clang bugs
while trying to build Dolphin this way, so it's not ready yet, but I can
start with this prerequisite.)
ShaderConstantProfile and ShaderUid now have an empty implementation
of Write() that uses variadic templates instead of varargs. MSVC is now
able to inline and optimize away this when necessary.
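A minimal sketch of the pattern (hypothetical class, not the actual ShaderUid code):

  struct NullShaderWriter
  {
    // Empty on purpose: with variadic templates the compiler sees a normal inline
    // function and can optimize the whole call away, which it cannot reliably do
    // with C-style varargs.
    template <typename... Args>
    void Write(const char* /*fmt*/, Args&&... /*args*/) {}
  };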
It only ever did anything on 32-bit OS X.
Anyway, it wasn't even on the right functions, and these days
ABI_PushRegistersAndAdjustStack should handle maintaining the ABI
correctly.
It now affects the GPU determinism mode as well as some miscellaneous
things that were calling IsNetPlayRunning. Probably incomplete.
Notably, this can change while paused, if the user starts recording a
movie. The movie code appears to have been missing locking between
setting g_playMode and doing other things, which probably had a small
chance of causing crashes or even desynced movies; fix that with
PauseAndLock.
The next commit will add a hidden config variable to override GPU
determinism mode.
It's a relatively big commit (less big with -w), but it's hard to test
any of this separately...
The basic problem is that in netplay or movies, the state of the CPU
must be deterministic, including when the game receives notification
that the GPU has processed FIFO data. Dual core mode notifies the game
whenever the GPU thread actually gets around to doing the work, so it
isn't deterministic. Single core mode is because it notifies the game
'instantly' (after processing the data synchronously), but it's too slow
for many systems and games.
My old dc-netplay branch worked as follows: everything worked as normal
except the state of the CP registers was a lie, and the CPU thread only
delivered results when idle detection triggered (waiting for the GPU if
they weren't ready at that point). Usually, a game is idle iff all the
work for the frame has been done, except for a small amount of work
depending on the GPU result, so neither the CPU nor the GPU waiting on
the other affected performance much. However, it's possible that the
game could be waiting for some earlier interrupt, and any of several
games which, for whatever reason, never went into a detectable idle
(even when I tried to improve the detection) would never receive results
at all. (The current method should have better compatibility, but it
also has slightly higher overhead and breaks some other things, so I
want to reimplement this, hopefully with less impact on the code, in the
future.)
With this commit, the basic idea is that the CPU thread acts as if the
work has been done instantly, like single core mode, but actually hands
it off asynchronously to the GPU thread (after backing up some data that
the game might change in memory before it's actually done). Since the
work isn't done, any feedback from the GPU to the CPU, such as real
XFB/EFB copies (virtual are OK), EFB pokes, performance queries, etc. is
broken; but most games work with these options disabled, and there is no
need to try to detect what the CPU thread is doing.
Technically: when the flag g_use_deterministic_gpu_thread (currently
stuck on) is on, the CPU thread calls RunGpu like in single core mode.
This function synchronously copies the data from the FIFO to the
internal video buffer and updates the CP registers, interrupts, etc.
However, instead of the regular ReadDataFromFifo followed by running the
opcode decoder, it runs ReadDataFromFifoOnCPU ->
OpcodeDecoder_Preprocess, which relatively quickly scans through the
FIFO data, detects SetFinish calls etc., which are immediately fired,
and saves certain associated data from memory (e.g. display lists) in
AuxBuffers (a parallel stream to the main FIFO, which is a bit slow at
the moment), before handing the data off to the GPU thread to actually
render. That makes up the bulk of this commit.
In various circumstances, including the aforementioned EFB pokes and
performance queries as well as swap requests (i.e. the end of a frame -
we don't want the CPU potentially pumping out frames too quickly and the
GPU falling behind*), SyncGPU is called to wait for actual completion.
The overhead mainly comes from OpcodeDecoder_Preprocess (which is,
again, synchronous), as well as the actual copying.
Currently, display lists and such are escrowed from main memory even
though they usually won't change over the course of a frame, and
textures are not even though they might, resulting in a small chance of
graphical glitches. When the texture locking (i.e. fault on write) code
lands, I can make this all correct and maybe a little faster.
* This suggests an alternate determinism method of just delaying results
until a short time before the end of each frame. For all I know this
might mostly work - I haven't tried it - but if any significant work
hinges on the completion of render to texture etc., the frame will be
missed.
videoBuffer -> s_video_buffer
size -> s_video_buffer_write_ptr
g_pVideoData -> g_video_buffer_read_ptr (impl moved to Fifo.cpp)
This eradicates the wonderful use of 'size' as a global name, and makes
it clear that s_video_buffer_write_ptr and g_video_buffer_read_ptr are
the two ends of the FIFO buffer s_video_buffer.
Oh, and remove a useless namespace {}.
This state will be used to calculate sizes for skipping over commands on
a separate thread. An alternative to having these state variables would
be to have the preprocessor stash "state as we go" somewhere, but I
think that would be much uglier.
GetVertexSize now takes an extra argument to determine which state to
use, as does FifoCommandRunnable, which calls it. While I'm modifying
FifoCommandRunnable, I also change it to take a buffer and size as
parameters rather than using g_pVideoData, which will also be necessary
later. I also get rid of an unused overload.
VertexLoader::VertexLoader was setting loop_counter, a *static*
variable, to 0. This was nonsensical, but harmless until I started to
run it on a separate thread, where it had a chance of interfering with a
running vertex translator.
Switch to just using a register for the loop counter.
- Lazily create the native vertex format (which involves GL calls) from
RunVertices rather than RefreshLoader itself, freeing the latter to be
run from the CPU thread (hopefully).
- In order to avoid useless allocations while doing so, store the native
format inside the VertexLoader rather than using a cache entry.
- Wrap the s_vertex_loader_map in a lock, for similar reasons.
To avoid FPRs being pushed unnecessarily, I checked the uses: DSPEmitter
doesn't use FPRs, and VertexLoader doesn't use anything but RAX, so I
specified the register list accordingly. The regular JIT, however, does
use FPRs, and as far as I can tell, it was incorrect not to save them in
the outer routine. Since the dispatcher loop is only exited when
pausing or stopping, this should have no noticeable performance impact.
- Factor common work into a helper function.
- Replace confusingly named "noProlog" with "rsp_alignment". Now that
x86 is not supported, we can just specify it explicitly as 8 for
clarity.
- Add the option to include more frame size, which I'll need later.
- Revert a change by magumagu in March which replaced MOVAPD with MOVUPD
on account of 32-bit Windows, since it's no longer supported. True,
apparently recent processors don't execute the former any faster if the
pointer is, in fact, aligned, but there's no point using MOVUPD for
something that's guaranteed to be aligned...
(I discovered that GenFrsqrte and GenFres were incorrectly passing false
to noProlog - they were, in fact, functions without prologs, the
original meaning of the parameter - which caused the previous change to
break. This is now fixed.)
For a long time, we've had ugly and inconsistent function names here as
helpers, names like "decodebytesRGB5A3rgba" which are absolutely
incomprehensible. Fix this by introducing a new consistent
naming scheme, where the above function now becomes "DecodeBytes_RGB5A3".
Instead of having three separate functions and checking the tlutfmt in a
variety of places, just do it once in a helper method. This is only used
in the slow path, either in our Generic decoder or in our Software
renderer, so it doesn't matter that this is slower.
x64 will continue using the separate functions for speed.
The D3D / OGL backends only ever used RGBA textures, and the Software
backend uses its own custom code for sampling. The ARGB path seems to
just be dead code.
Since ARGB and RGBA formats are similar, I don't think this will make
the code more difficult to read or unable to be used as
reference. Somebody who wants to use this code to output ARGB can simply
modify the MakeRGBA function to put the shift at the other end.
This pulls all the duplicate code from TextureDecoder_Generic /
TextureDecoder_x64 out and puts it in a common file. Our custom font
used for debugging the texture cache is also pulled out and put in a
common "sfont.inc" file. At some point we should also combine this font
with the other six binary fonts we ship.
GetPC_TexFormat was never used. It was added in commit d02426a, with the
only user being commented out code. The commented out code was later
removed in 9893122, but the implementation stayed.
We were decoding to BGRA32 textures in our RGBA32 texture decoder. Since
this is the same for the BGRA32 decoder implementation, this is most
likely a copy/paste typo, rather than the texture actually being
bit-swapped. Fix this.
I'm not sure of any games that use the C14X2 texture format, so I'm not
sure this fixes any games, but it does make the code cleaner for when we
clean it up in the future, and merge some of these similar loops.
Separated out from my gpu-determinism branch by request. It's not a big
commit; I just like to write long commit messages.
The main reason to kill it is hopefully a slight performance improvement
from avoiding the double switch (especially in single core mode);
however, this also improves cycle calculation, as described below.
- FifoCommandRunnable is removed; in its stead, Decode returns the
number of cycles (which only matters for "sync" GPU mode), or 0 if there
was not enough data, and is also responsible for unknown opcode alerts.
Decode and DecodeSemiNop are almost identical, so the latter is replaced
with a skipped_frame parameter to Decode. Doesn't mean we can't improve
skipped_frame mode to do less work; if, at such a point, branching on it
has too much overhead (it certainly won't now), it can always be changed
to a template parameter.
- FifoCommandRunnable used a fixed, large cycle count for display lists,
regardless of the contents. Presumably the actual hardware's processing
time is mostly the processing time of whatever commands are in the list,
and with this change InterpretDisplayList can just return the list's
cycle count to be added to the total. (Since the calculation for this
is part of Decode, it didn't seem easy to split this change up.)
To facilitate this, Decode also gains an explicit 'end' parameter in
lieu of FifoCommandRunnable's call to GetVideoBufferEndPtr, which can
point to there or to the end of a display list (or elsewhere in
gpu-determinism, but that's another story). Also, as a small
optimization, InterpretDisplayList now calls OpcodeDecoder_Run rather
than having its own Decode loop, to allow Decode to be inlined (haven't
checked whether this actually happens though).
skipped_frame mode still does not traverse display lists and uses the
old fake value of 45 cycles. degasus has suggested that this hack is
not essential for performance and can be removed, but I want to separate
any potential performance impact of that from this commit.
This is required to make packing consistent between compilers: with u32, MSVC
would not allocate a bitfield that spans two u32s (it would leave a "hole").
The only possible functionality change is that s_efbAccessRequested and
s_swapRequested are no longer reset at init and shutdown of the OGL
backend (only; this is the only interaction any files other than
MainBase.cpp have with them). I am fairly certain this was entirely
vestigial.
Possible performance implications: efbAccessReady now uses an Event
rather than spinning, which might be slightly slower, but considering
the slow loop the flags are being checked in from the GPU thread, I
doubt it's noticeable.
Also, this uses sequentially consistent rather than release/acquire
memory order, which might be slightly slower, especially on ARM...
something to improve in Event/Flag, really.
This shouldn't affect functionality. I'm not sure if the breakpoint
distinction is actually necessary (my commit messages from the old
dc-netplay last year claim that breakpoints are broken anyway, but I
don't remember why), but I don't actually need to change this part of
the code (yet), so I'll stick with the trimmings change for now.
This is effectively unused, as the window handles that we pass to the
GLInterface are window handles for the frame which isn't ever a real
toplevel window. Host_UpdateTitle is what actually sets the proper title
on the render window.
Now that MainNoGUI is properly architected and GLX doesn't need to
craft its own windows sometimes, which we then had to thread back
into MainNoGUI, we don't need to thread the window handle that GLX
creates at all.
This removes the reference to pass back here, and g_pWindowHandle will
always be the same as the window returned by Host_GetRenderHandle().
A future cleanup could remove g_pWindowHandle entirely.
The framebuffer is no longer rotated the wrong way around in Qualcomm's latest development drivers.
They did something right, only took them over a year.
This catches most instances of configuration failures that can happen in a post processing shader.
Gives a user a helpful error message that lets them know what they have failed to set up correctly.
This class loads all the common PP shader configuration options and passes those options through to an inherited class that OpenGL or D3D will have.
Makes it so all the common code for PP shaders is in VideoCommon instead of duplicating the code across each backend.
Fixes a bug where "Use Fullscreen" would initialize into exclusive fullscreen regardless of the borderless fullscreen setting.
Also relieves the need for the video renderer to check the borderless fullscreen setting each time.
The hack was needed because the Nvidia 3D Vision heuristics are documented to only support surfaces that are the same size as the backbuffer. This would be the case if you enabled the hack and selected the "Auto (Window Size)" internal resolution.
However, on recent drivers the same effect is achieved by selecting the "Auto (Multiple)" internal resolution. Therefore the hack is no longer required.
Also have the renderer remember its own fullscreen state. This is done to prevent a case where we exit exclusive fullscreen through the configuration and a focus shift at the same time. In this case the renderer would fail to detect that the fullscreen state was changed.
In the cases where we support the binding layout keyword, use it for more than binding UBO location.
This changes it so it is supported for samplers as well.
This is enabled if a device supports GL_ARB_shading_language_420pack, or if it supports GLES 3.10.
ffmpeg 2.0 changed requirements for the FFV1 encoder and made them more strict,
requiring more fields of the input frame to be initialized. Explicitly setting
pixfmt, width and height solves the EINVAL issues with FFV1 encoding.
Original fix from http://ffmpeg.org/pipermail/libav-user/2013-October/005759.html
We used to have a 1:1 mapping of GX vertex formats and the native (OGL + D3D) ones, but there are by far more GX ones.
This new cache maps them directly so that we don't flush on GX vertex format changes as long as the native one doesn't change.
The idea is stolen from galop1n.
- Isolate it into its own namespace
- Shorten function names, the namespace self-documents.
- Just use the std I/O, we can just write directly to the stream for
logging.
Some headers were using #ifndef to guard against being included multiple times, but most were using pragma once. So for consistency I changed them all to use pragma once.
When I dropped ARM from being a generic target, this caused the vertex loader to try using the JIT path.
Instead of !_M_GENERIC, check for _M_X86, since it is only for the x86 target.
These were only compiled in on Windows and x86_32.
They provided "optimized" copies and compares based on blocksizes for the AMD Athlon and Duron CPU families.
The code was taken from something that AMD provides with an as-is license.
Just get rid of this crap.
For whatever reason, the hardware doesn't do a full divide by 255, but
instead uses an approximation with shifting, similar to the way it is done
in TEV.
Noticed this while messing with EFB to RAM.
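A rough illustration of the difference, using hypothetical 8-bit inputs (the exact shift and bias the hardware uses may differ from this sketch):

  #include <cstdint>

  // Exact divide by 255.
  static uint8_t BlendExact(uint8_t a, uint8_t b)
  {
    return static_cast<uint8_t>((a * b) / 255);
  }

  // A cheaper shift-based approximation (divide by 256 with a bias); the constants
  // here are illustrative, not the hardware's.
  static uint8_t BlendApprox(uint8_t a, uint8_t b)
  {
    return static_cast<uint8_t>((a * b + 128) >> 8);
  }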
We had an implicit conversion from integer to float; GLSL ES doesn't allow implicit conversions.
Changes it to an explicit conversion to float.
This matches how ARM handles their naming in their drivers for different models.
Really it's that way because both Mali-T6xx and Mali-T7xx fall under Midgard,
while everything else (except Mali-55) falls under Utgard.
This was a mistake of mine when translating floating point values to integer values.
Also, the max() part of that line was just completely redundant because an absolute value is always greater than or equal to zero.
Fixes issue 7178.
They are similar enough that they will share bugs with their drivers, so make them fall under the same Mali-Txxx umbrella of bug issues.
If there is ever a need in the future for having separate bugs depending on family, we can support that then.
Fixes issue 6990.
This uses a bit of templating to remove the duplicated code that is the CodeBlocks in each emitter header.
No actual functionality change in this.
GLSL ES 3.10 adds implicit support for the binding layout qualifier that we use.
Changes our GLSL version enums to bit values so we can check for both ES versions easily.
This is undefined behavior in C++, and a clang warning suggests it is
actually producing bad code as a result:
../Source/Core/VideoCommon/BPFunctions.cpp:164:45: warning: comparison of constant 4294967295 with expression of type 'PEControl::PixelFormat' is always false [-Wtautological-constant-out-of-range-compare]
if (new_format == old_format || old_format == (unsigned int)-1)
Fixes all warnings on Android build except for what is in externals.
Removes a function from TextureDecoder_Generic since it is unused and generates a warning.
Metroid: Other M was the only game which required this field, but the
issue in that game can be fixed properly by enabling format change
emulation. Hence, there's no point in having this around anymore.
Fixes issue 6644.
Our defines were never clear about whether they meant 64-bit or x86_64.
This makes a clear cut between bitness and architecture.
This commit also has the side effect of bringing up aarch64 compiling support.
- remove unused variables
- reduce the scope where it makes sense
- correct limits (did you know that strncat()'s size parameter does not
include the \0 that is always added?)
- set some free()'d pointers to NULL
inline halfxb
So we know which is the first pixel by masking.
inline xl
inline xb a bit
inline yl
inline uv1.x shift
remove likely wrong guessed ternary operator
add pixel layout comment
inline xel
optimize the shifts a bit
inline xb
optimize shifts in a second step
extract xb
rename all variables
calculate cache line by position.x
Revert 5115b459f40d53044cd7a858f52e6e876e1211b4 "optimize the shifts a bit"
It seems I was wrong, the other way is the more natural.
use x_virtual_position instead of uv1.x for x_offset_in_block
This looks more natural and the offset should be masked anyway.
substitute factor with cache_lines
move 32bit logic in a conditional block
This option was known to break every second game and only gave a small boost.
It also seems to be broken because of streaming into pinned memory and buffer storage buffers.
v2: also remove dlc_desc
Older Qualcomm drivers rotated the framebuffer 90 degrees and this fix didn't work.
Now for some obscene reason it rotates a full 180 degrees.
This can at least be worked around by flipping around the image on our end.
On Windows, NVIDIA doesn't give us their driver version, so we can't work around any issues.
As buffer_storage is broken on some drivers, we wanted to disable it for them.
So we can't.
Luckily only "some" released driver versions are affected, as this extension has only been available for a few months. Let's hope that nobody has to use one of these driver versions, else they will get a black screen ...
OS X has its own driver, so performance issues aren't shared with the NVIDIA driver (unlike the closed-source Linux and Windows NVIDIA drivers). So now it will also use the MapAndSync backend like all other OS X drivers.
fixes issue 6596
I've also cleaned up the if/else block selecting the best backend a bit.
The new chapter title in Paper Mario TTYD had a small graphical bug due to the new code because it read one extra pixel; this fixes it.
I hope this gets everything. I thought I had checked most bugs, and yet here I am, commit-spamming...
Fixes some remaining bbox-related bugs in Mickey's Magical Mirror and a slight graphical glitch in Paper Mario: TTYD when flipping with Vivian as your companion (I've been scratching my head for days to find this one).
Instead of being vertex-based, it is now primitive (point, line or dissected triangle) based, with proper clipping.
Also, screen position is now calculated based on viewport values, instead of "guesstimating".
This fixes many graphical glitches in Paper Mario: TTYD and Super Paper Mario.
Also, the new code allows Mickey's Magical Mirror and Disney's Hide & Sneak to work (mostly) bug-free. I changed their inis to use bbox.
These changes have a slight cost in performance when bbox is being used (rare), mostly due to the new clipping algorithm.
Please check for any regressions or crashes.
This branch is the final step of fully supporting both OpenGL and OpenGL ES in the same binary.
This of course only applies to EGL and won't work for GLX/AGL/WGL since they don't really support GL ES.
The changes here actually aren't too terrible, basically change every #ifdef USE_GLES to a runtime check.
This adds a DetectMode() function to the EGL context backend.
EGL will iterate through each of the configs and check for GL, GLES3_KHR, and GLES2 bits
After that it'll change the mode from _DETECT to whichever one is the best supported.
After that point we'll just create a context with the mode that was detected
As we do lots of writes to *Iptr, the compiler isn't allowed to cache any shared variable (neither index nor Iptr itself).
This commit inlines Iptr + index into the index generator functions, so the compiler knows that they are const.
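A minimal sketch of the pattern (hypothetical function, not the actual index generator):

  #include <cstdint>

  // Taking the write pointer and base index by value and returning the advanced
  // pointer lets the compiler keep both in registers instead of re-reading shared
  // globals after every store. (Winding order is ignored for brevity.)
  static uint16_t* AddStrip(uint16_t* iptr, uint32_t index, uint32_t num_verts)
  {
    for (uint32_t i = 2; i < num_verts; ++i)
    {
      *iptr++ = static_cast<uint16_t>(index + i - 2);
      *iptr++ = static_cast<uint16_t>(index + i - 1);
      *iptr++ = static_cast<uint16_t>(index + i);
    }
    return iptr;
  }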
We used to render them out of order as long as everything else matched, but rendering order does matter, so we have to flush on primitive switches. This commit implements this flush.
Also, as we flush on primitive switches, we don't have to create three different index buffers. All indices are now stored in one buffer.
This will slow down games which switch primitive types often (e.g. ztp), but it should be more accurate.
This "u32 components" is a list of flags which attributes of the vertex loader are present.
We are used to append this variable to lots of vertex generation functions, but some of them don't need it at all.
The usual way to handle this kind of request is to raise a flag which the GPU thread polls.
The GPU thread itself either generates the result or just writes zeros if disabled.
After this, it raises another flag which says that this work is done.
So even if disabled, we still pay the CPU-GPU round-trip time. This commit just returns 0 on the CPU thread
instead of playing ping pong...
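A minimal sketch of the shortcut, with hypothetical names (the real code differs):

  #include <cstdint>

  extern bool g_bbox_active;             // whether bounding-box tracking is enabled
  extern uint16_t QueryGpuThread(int);   // raises the request flag and waits for the result

  static uint16_t ReadBoundingBox(int index)
  {
    // When bbox is disabled, answer 0 immediately on the CPU thread instead of
    // doing the flag-raise / poll / flag-raise round trip with the GPU thread.
    if (!g_bbox_active)
      return 0;
    return QueryGpuThread(index);
  }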
Some information on this bug since this isn't quite true.
Seemingly with the v53 driver, Qualcomm has actually fixed this bug. So we can dynamically access UBO array members.
The issue that is cropping up is actually converting our attribute 'fposmtx' to an integer.
int posmtx = int(fposmtx);
This line causes some seemingly garbage values to enter the posmtx variable.
Not sure exactly why it is failing; probably they're just not actually converting the float to an integer and instead handling the float directly as an integer.
So the bug is going to stay active on Qualcomm devices until we convert this vertex attribute from a float to an integer.
Let's talk a bit about this bug. 12th oldest bug not fixed in Dolphin, it was a
lot of fun to debug and it kept me busy for a while :)
Shoutout to Nintendo for framework.map, without which this could have taken a
lot longer.
Basic debugging using apitrace shows that the heat effect is rendered in an
interesting way:
* An EFB copy texture is created, using the hardware scaler to divide the
texture resolution by two and that way create the blur effect.
* This texture is then warped using indirect texturing: a deformation map is
used to "move" the texture coordinates used to sample the framebuffer copy.
Pixel shader: http://pastie.org/private/25oe1pqn6s0h5yieks1jfw
Interestingly, when looking at apitrace, the deformation texture was only 4x4
pixels... weird. It also does not have any feature that you would expect from a
deformation map. Seeing how the heat effect glitches, this deformation texture
being wrong looks like a good candidate for the problem. Let's see how it's
loaded!
By NOPing random calls to GXSetTevIndirect, we find a call that when removed
breaks the effect completely. The parameters used for this call come from the
results of methods of JPAExTexShapeArc objects. 3 different objects go through
this code path, by breaking each one we can notice that the one "controlling"
the heat effect is the one at 0x81575b98.
Following the path of this object a bit more, we can see that it has a method
called "getIndTexId". When this is called, the returned texture ID is used to
index a map and get a JPATextureArc object stored at 0x81577bec.
Nice feature of JPATextureArc: they have a getName method. For this object, it
returns "AK_kagerouInd01". We can probably use that to see how this texture
should look like, by loading it "manually" from the Wind Waker DVD.
Unfortunately I don't know how to do that. Fortunately @Abahbob got me the
texture I wanted in less than 10min after I asked him on Twitter.
AK_kagerouInd01 is a 32x32 texture that really looks like a deformation map:
http://i.imgur.com/0TfZEVj.png . Fun fact: "kagerou" means "heat haze" in JP.
So apparently we're not using the right texture object when rendering! The
GXTexObj that maps to the JPATextureArc is at offset 0x81577bf0 and points to
data at 0x80ed0460, but we're loading texture data from 0x0039d860 instead.
I started to suspect the BP write that loads the texture parameters "did not
work" somehow. Logged that and yes: nothing gets loaded to texture stage 1! ...
but it turns out this is normal, the deformation map is loaded to texture stage
5 (hardcoded in the DOL). Wait, why is the TextureCache trying to load from
texture stage 1 then?!
Because someone sucked at hex.
Fixes issue 2338.
At the moment, custom textures with:
- invalid mipmap size
- invalid aspect ratio
- non-fractional scaling factors
are allowed. But they can't be loaded properly by the backend, so generate a warning if someone tries to load them.
fixes issue 6898
The OpenGL default is GL_REPEAT, which is even more unlikely than GL_CLAMP_TO_EDGE.
As I can't test the behavior of the real hardware, I changed it to how it worked before,
but I guess just clamping the texture makes more sense.
This flag wasn't cleared at all, so we set our constants dirty every time...
This could fix some performance regressions because of revision 6798a4763e
This removes the redundant code and also implements this feature for OSX and Wayland.
But so it's dropped for non-wx builds...
imo DolphinWX still isn't the best place for this, but now it's in the same file as all other hotkeys. Maybe they'll be moved to InputCommon sometimes at once ...
Now OGL doesn't rely on WX for PNG saving.
FlipImageData supports (pixel data len > 3) now.
TextureToPng is now in ImageWrite.cpp/h
Video Common depends on zlib and png.
D3D no longer depends on zlib and png.
Revert "Actually, filename really does need to be a parameter because of some random debug thing."
Revert "fix non-HAVE_WX case"
Revert "Handle screenshot saving in RenderBase. Removes dependency on D3DX11 for screenshots (texture dumping is still broken)."
This reverts commits 00fe5057f1, 74b5fb3ab4, cd46138d29 and 5f72542e06 because taking screenshots in D3D still crashed for me so there was no point in the code changes (which I found ugly anyway).
This reverts commit 6cece6b486.
In fact, there was a _huge_ speedup on lots of games (mostly on nvidia+ogl), but there are some crashes on D3D.
I have to fix this crash and then I'll commit something like this again :-)
Conflicts:
Source/Core/VideoCommon/Src/TextureCacheBase.cpp
We often need the same native texture objects for new textures. This commit
tries to avoid destroying and creating these textures by pooling them.
This should be a big performance gain for some efb2ram games, as they may
partially overwrite a cached texture (which would be deleted) and afterwards
try to read it.
Creating/destroying sounds like an easy task, but it isn't. E.g. the NVIDIA OGL
driver synchronizes its threads to avoid use-after-free issues.
D3D doesn't allow bigger viewports than rendertargets. But flipper does, so the viewport will be clipped and the transformation matrix will be changed.
This was done in the D3D backend itself. This is now moved into VideoCommon. This doesn't reduce code, but this way VideoCommon doesn't depend on the backends.
This isn't needed for OGL or D3D11, as they support sample shading directly. So we
can use the common MSAA util shaders instead of writing custom ones.
* Currently there is no DEBUGFAST configuration. Defining DEBUGFAST as a preprocessor definition in Base.props (or a global header) enables it for now, pending a better method. This was done to make managing the build harder to screw up. However it may not even be an issue anymore with the new .props usage.
* D3DX11SaveTextureToFile usage is dropped and not replaced.
* If you have $(DXSDK_DIR) in your global property sheets (Microsoft.Cpp.$(PlatformName).user), you need to remove it. The build will error out with a message if it's configured incorrectly.
* If you are on Windows 8 or above, you no longer need the June 2010 DirectX SDK installed to build dolphin. If you are in this situation, it is still required if you want your built binaries to be able to use XAudio2 and XInput on previous Windows versions.
* GLEW updated to 1.10.0
* compiler switches added: /volatile:iso, /d2Zi+
* LTCG available via msbuild property: DolphinRelease
* SDL updated to 2.0.0
* All Externals (excl. OpenAL and SDL) are built from source.
* Now uses STL version of std::{mutex,condition_variable,thread}
* Now uses Build as root directory for *all* intermediate files
* Binary directory is populated as post-build msbuild action
* .gitignore is simplified
* UnitTests project is no longer compiled
D3D9 only supports 8 texcoords. But we need a new one for ppl, so we just store it in the first 4 texcoords in the free 4th component.
This isn't needed for both d3d11 and ogl3, so just remove it.
This isn't needed in VertexShaderManager as it still works the old dirty flag way.
But it's very important for PixelShaderManager, as some float4s aren't initialized to 0.0f.
The old way was to use a dirty flag per setter. Now we just update the const buffer per setter directly.
The old optimization isn't needed any more as the setters don't call the backend any more.
The follow parts are rewritten:
Alpha
ZTextureType
zbias
FogParam
FogColor
Color
TexDim
IndMatrix
MaterialColor
FogRangeAdjust
Lights
The upload in the backend isn't done, it's just pushed by the mostly removed SetMulti*SConstant4fv.
Also, no optimizations were done on the VideoCommon side, but I can start now :-)
Sorry for the hacky way, but I think this is a nice (working) snapshot for a much bigger change.
It changes the string in the Android backend select to just OpenGL ES.
Adds a check in the Android code to check for Tegra 4 and to enable the option to select the OpenGL ES backend.
Adds a DriverDetails bug under BUG_ISTEGRA as a blanket case of Tegra 4 support.
The change that affects the most lines here is removing all float suffixes in the pixel/vertex/util shaders, since OpenGL ES 2 doesn't support float suffixes.
Disables the shaders for reinterpreting the EFB format since Tegra 4 doesn't support integers.
Changes GLFunctions.cpp to grab the correct Tegra extension functions.
Readds the GLSL 1.2 'hacks' as GLSLES2 'hacks' since they are required for GLSL ES 2
Adds a GLSLES2 to the GLSL_VERSION enum.
Disable the SamplerCache on Tegra since Tegra doesn't support samplers...
Enable glBufferSubData on Tegra since it is the only mobile GPU to correctly work with it.
Disable glDrawRangeElements on Tegra since it doesn't support it, This uses glDrawElements instead.
- Call ABI_AlignStack even on x86-64.
- Have ABI_AlignStack respect the difference in current alignment
between the root JIT function, which has a prolog, and
ProtectFunction thunks, which do not. This was causing many games
to crash on start on OS X. Since this might otherwise mean changing
the stack pointer before every call...
- Have one prolog/epilog function rather than two (one of which
definitely did not do what it was thought to do), and make it
actually work like a normal one, so that the stack frame shows up
properly in the debugger. There should be no performance impact.
attn is sometimes very big (eg 1e27), so attn*attn doesn't fit into a float.
So the funny part here is: 0.0 * (1e27*1e27) = 0.0 * Inf = NaN
As the shader compiler is allowed to change the order of multiplications,
this issue isn't fixed completely.
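A tiny standalone illustration of the failure mode (not the shader code itself):

  #include <cstdio>

  int main()
  {
    volatile float attn = 1e27f;
    float bad  = 0.0f * (attn * attn);  // attn * attn overflows to +Inf, and 0 * Inf is NaN
    float good = (0.0f * attn) * attn;  // grouping the zero factor in first keeps it 0.0
    std::printf("%f %f\n", bad, good);  // prints: nan 0.000000
  }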
Use strings internally, use a multimap and std::function for callbacks (instead
of a flat vector + loop over the vector to find the right callback type), fix
coding style issues. Simplify MainAndroid code a bit.
It's possible to configure the vertex color as the lighting source without enabling the vertex color at all.
The old implementation will use zero, but that seems to be wrong (proven by THPS3); more likely is to disable
the lighting and just return the global color.
This fixes THPS3 on OpenGL, but it isn't verified on hardware.
- rewrite loops to not use divisions and multiplications
- remove warnings as the current implementation seems to be correct (ignore additional vertices)
Always using per-pixel depth is a huge GPU performance drop: 20%-50%.
And always disabling it causes some rendering issues.
So there is an option again,
but this time it's called "Fast Depth Calculation".
Fixes issue 6282.
The Last Story seems to render a fan with two vertices. It is nonsense as it
shouldn't draw anything, but the code underflows at (u32)numVerts - 3.
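A minimal sketch of the guard (hypothetical names, not the actual software renderer code):

  #include <cstdint>

  static void RunTriangleFan(uint32_t num_verts)
  {
    // With num_verts == 2, the unsigned expression num_verts - 3 wraps around to a
    // huge value, so bail out before the loop instead.
    if (num_verts < 3)
      return;
    for (uint32_t i = 0; i <= num_verts - 3; ++i)
    {
      // ... emit triangle (0, i + 1, i + 2) ...
    }
  }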
Block braces on new lines.
Also killed off trailing whitespace and dangling elses.
Spaced some things out to make them more readable (only in places where it looked like a bit of a clusterfuck).
See Render.cpp, PixelShaderGen.cpp, and PixelShaderManager.cpp for most of the changes.
See VertexShaderManager.cpp for a logging typo fix.
See SWRenderer.cpp for a small typo fix for a message that gets swprintf'd in DrawDebugText.
See SWVertexLoader.cpp for a typo fix of an assert message.
Should slightly improve the readability of some of those files.
This commit mainly elaborates on some messages a little more. Also fixes some typos that slipped through the last commit.
A large change in text can be seen in EXI_DeviceMemoryCard.cpp. I added more info as to why a write to a memory card may fail. (This actually was a reason I was unable to write to a memcard recently).
Elaborations can be seen in WGL.cpp
I did change some comments in some files that I was correcting logging messages in, however this is only if I spot a typo or if an abbreviation is lower-cased. Even in that case, the amount of changes done to comments is very minimal.
Sorry for a direct commit to the main branch, but I need fast feedback, and I don't want to leave problematic code in the main branch for a long time.
If this approach does not work for the drivers with problems, I will turn dual source blending into an option in the D3D9 backend.
I appreciate the help of the people that tested my last commit, and thanks to neobrain for pointing out this solution.
Convert all quads and triangles into triangle_strips and use primitive restart to split them.
Speeds up triangle_strip, but slows down all other primitive formats.
Only implemented in OGL.
This implementation does not work on Windows XP (sorry, no support for dual source blending there).
This should improve speed on older hardware or on newer hardware using super sampling.
Disable the partial fix for 4x supersampling, as I'm interested in knowing the original issue with the implementation to fix it correctly.
Remove the deprecation label from the plugin while I'm working on it.
* Fast-EE:
Forced the exception check only for ARAM DMA transfers. Removed the Eternal Darkness boot hack and replaced it with an exception check.
Reverted rd76ca5783743 as it was made obsolete by r1d550f4496e4.
Removed the tracking of the FIFO Writes as it was made obsolete by r1d550f4496e4.
Forced the external exception check to occur sooner by changing the downcount.
# By skidau (30) and Pierre Bourdon (1)
* FIFO-BP: (31 commits)
Set g_bSignalTokenInterrupt on the main thread. Fixes the random hang in Harry Potter: Prisoner of Azkaban.
Used a scheduled event to generate the ARAM DMA interrupt if the DMA is greater than a certain size. Fixes NFS:HP2 GC.
Bumped up the disc transfer speed enough to prevent audio stuttering in Gauntlet: Dark Legacy.
Enabled Synchronise GPU on "SPEED CHALLENGE - Jacques Villeneuve's Racing Vision". Required to go in-game.
Added direct GameCube controller commands to the Serial Interface emulation. Fixes the controls in MaxPlay Classic Games Volume 1 and the Action Replay disc.
Increased the FIFO buffer size to 2MB from 1MB. Fixes Killer 7's Angel boss.
Used an immediate GenerateDSPInterrupt when transferring data from ARAM to MRAM and a scheduled DSP interrupt when transferring data from MRAM to ARAM.
Fixes the audio cutting in and out in the Resident Evil GC games using DSP HLE. Triggered the ARAM interrupt via the scheduler instead of directly in the function.
Implemented proper timing for the sample counter in the AudioInterface, removing the previous hack. Cleaned up some of the audio streaming code.
Skipped the EE check if there is a CP interrupt pending.
Disabled "Speed up disc transfer" from the ZTP GC game ini.
Removed the disc seek times for GC games and removed the disc speed option on Wii games. Checked for external exceptions only in mtmsr.
Delayed the interrupts in the EXI Channel.
Merge aram-dma-fixes (r76a13604ef49b522281af75675f044d59a74e871)
Added a patch that bypasses the FIFO reset code in Wallace and Gromit: Project Zoo, allowing it to go in-game.
Made vertex loading take constant time.
Increased the cycle time of the vertex command. Fixes "Speed Challenge: Jacques Villeneuve's Racing Vision".
Moved the setting of the Finish interrupt signal back to the main thread as it was causing Wii games like Resident Evil 4 (Wii) to hang.
Profile stores, fp stores, and ps stores only to the fifo write addresses list. This should make the JIT a little faster, as it will not be checking for external exceptions unnecessarily.
...
Conflicts:
Source/Core/VideoCommon/Src/PixelEngine.cpp
Adds support for PE performance metrics.
Used in Super Mario Sunshine's "Scrubbing Sirena Beach" level to determine when enough goop has been cleaned up to finish the level.
Also used in TimeSplitters: Future Perfect to determine the appearance of flares around light sources (e.g. sun).
OpenGL and D3D11 only. D3D9 support unlikely to be added unless anyone bothers to do the work.
Initial work and D3D11 support by me. Kudos go to Billiard for adding the OpenGL support and reviving development of this branch that way :D
Slightly (~7%) decreases performance when performance metrics are used (and only then).
Fixes issue 1498.
Fixes issue 5368.
# By Jordan Woyak (46) and others
# Via Jordan Woyak (2) and others
* master: (70 commits)
Fixes two memory leaks, one of which is pretty bad for OSX. Yell at pauldachz if this doesn't work. Or... say thanks.
Added a BluetoothEnumerateInstalledServices call so that the wiimote remembers the pairing.
Make ARMJit core default CPU core on ARM architecture
Fix a StringUtil regression from the arm-noglsl merge
Small improvement to cmpli/cmpi in ARMJit.
Merge latest ArmEmitter changes from ppsspp while we're at it.
Ah. I blame vim on this typo entirely.
Add disabled code for authenticating wiimotes on Windows.
Add the missing FPR cache
Buildfix.
Yell at the user if they change window size while dumping frames, and some other avi dumping stuff.
Not sure if this is the right way to handle this, but it makes the save states perfectly stable. That's all that really matters, right?
Abort loading states from incompatible graphics backends.
ARM Support without GLSL
Improve VideoSoftware save states. They are fairly stable, but not perfect. OpcodeDecoder::DoState() needs to be fixed.
Begin implementing save states to video software. Kind of works, sometimes.
Make error message for loading save state with wrong dsp engine shorter.
Abort load state if it uses a different dsp engine, instead of crashing.
Update the gameini of F-zero. Efb to Ram is no longer the default choice.
fix last commit by neobrain
...
Conflicts:
Source/Core/VideoCommon/Src/Fifo.cpp
This reverts commit 4653adecf1.
Also, DX9 isn't able to handle more than 11 varying registers.
More frustrating is the lighting issue caused by this commit. I don't know why it happens...
# By Jordan Woyak (9) and others
* master:
Fixed a buffer overflow in the OpenAL buffer.
TextureCache: Fix D3D backends crashing when a game uses multiple 1x1-sized LODs.
WII_IPC_HLE_Device_FileIO: don't rebuild the filename on every operation.
Some cleanup of CWII_IPC_HLE_Device_FileIO: The real file was never kept open for longer than a single operation so there was no point in dealing with it in DoState. Saving the real path in the savestate was also probably a bad idea. Savestates should be a bit more portable now.
Removing destination on rename when source isn't present doesn't make sense. IOCTL_RENAME_FILE still might not be totally correct.
Change some CNANDContentLoader logic to what was probably intended. Kills some warn logs when opening Dolphin.
Let's not CreateDir an empty string every time CreateFullPath is used, logging an error every time.
Fix a memleak. Probably/maybe improve USBGecko performance.
Remove the core count from the cpu info OSD message. It was often wrong and not particularly important.
Use omp_get_num_procs to set the number of OpenMP threads rather than our core count detection.
Bulk send TCP data to the client with the emulated USB Gecko.
Added the ability to reverse the direction of the force feedback by allowing negative range values.
Changes/cleanup to TextureCache::Load and other mipmap related code.
The significant change is what is now line 520 of TextureCacheBase.cpp:
((std::max(mipWidth, bsw) * std::max(mipHeight, bsh) * bsdepth) >> 1)
to
TexDecoder_GetTextureSizeInBytes(expanded_mip_width, expanded_mip_height, texformat);
Fixes issue 5328.
Fixes issue 5461.
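The change above swaps a hand-rolled size formula for a helper that works from block-expanded dimensions. A rough sketch of what such a computation looks like; the function name and parameters are hypothetical, not Dolphin's actual TexDecoder code:

    #include <cstdint>

    // Size of a mip level: dimensions rounded up to the format's block size,
    // times the bits per texel.
    uint32_t TextureSizeInBytes(uint32_t width, uint32_t height,
                                uint32_t block_w, uint32_t block_h, uint32_t bits_per_texel)
    {
        const uint32_t expanded_w = (width + block_w - 1) / block_w * block_w;
        const uint32_t expanded_h = (height + block_h - 1) / block_h * block_h;
        return expanded_w * expanded_h * bits_per_texel / 8;
    }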
# By Jordan Woyak (24) and others
# Via Jordan Woyak (3) and others
* master: (66 commits)
Reduce some DI command delays. Fix DKCR hanging with DSP HLE. My other games continue to work.
Video_Software: Fix ZComploc option breaking stuff.
Video_Software: Fix the ZFreeze option doing nothing.
Video_Software: Toggleable zfreeze and early_z support for testing.
Fix header guard and definitions not being set to 1
Add the option to turn on only the EGL interface to use desktop OpenGL with it.
Change the ugly "no banner" banner to the sexy "X" from the website.
Fix a crash in the FifoPlayer dialog.
Use different reply delays for various DI commands. Fixes issue 5983.
Revert "[bugfix] DX9::TextureCache: Use max_lod instead of min_lod where necessary."
Fix some potential issues when blending on EFB formats without alpha. Clean up state transition tables.
Disable play and record buttons if an iso was selected, but is later deselected.
Disable start/play recording buttons when no iso is selected.
Only delay DI and fs IPC replies. Fixes issue 5982.
Fix compilation with SDL2. (based on a patch from matthewharveys) Fixes issue 5971.
"Fix" using SDL from externals.
Clean up SDL includes a bit. Maybe fix an SDL2 problem.
Number "unknown" axes in OSX rather than call them all "unk".
Revert "Only delay DI command replies." Fix "Wii Party" again.
Hopefully make wiimote speaker less crappy.
...
Conflicts:
Source/Core/VideoCommon/Src/PixelShaderGen.cpp
Source/Plugins/Plugin_VideoSoftware/Src/SWRenderer.cpp
immediate-removal is a newly created branch separated from master, but with the revert of immediate-removal reverted,
so we get fewer conflicts when merging.
Depth calculations are always done in the pixel shader now.
Due to the unpredictability of our zcomploc hacks, this commit probably changes the behavior of some games which use zcomploc.
That's what would've been done in the next TCB::Load() call, anyway.
Fixes issue 5742.
Additionally, change efb copies to specify 1 as the number of mipmaps because that makes more sense than anything else.
Change the naming in all the backends' vertex managers to make it easier to continue with the merge and some future improvements.
Please test this, as I'm interested in knowing the performance on Linux and Windows with the different hardware platforms.