this new design creates a single texture for all chars once.
while rendering a string, a list of polygons (screen position + texture coordinates)
for this string is generated on the fly and drawn at once by glDrawArrays.
atm, there is no support for colors, so everything will display white.
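roughly the idea, as a minimal sketch (the CharUV helper, CHAR_WIDTH/CHAR_HEIGHT and the 16x16 atlas layout are illustrative assumptions, not the actual RasterFont code):

    #include <vector>
    #include <GL/gl.h>

    struct Vertex { float x, y;    // position on screen
                    float u, v; }; // texture coordinates into the char atlas
    struct CharUV { float u0, v0, u1, v1; };

    static const float CHAR_WIDTH = 10.0f, CHAR_HEIGHT = 15.0f;

    // hypothetical helper: assumes the atlas stores glyphs in a 16x16 grid
    static CharUV LookupCharUV(unsigned char c)
    {
        float u = (c % 16) / 16.0f, v = (c / 16) / 16.0f;
        return { u, v, u + 1.0f / 16.0f, v + 1.0f / 16.0f };
    }

    void PrintString(const char* text, float x, float y, GLuint char_atlas)
    {
        std::vector<Vertex> verts;
        for (const char* c = text; *c; ++c)
        {
            CharUV uv = LookupCharUV((unsigned char)*c);
            // one quad per char; the whole string becomes one vertex list
            verts.push_back({ x,              y,               uv.u0, uv.v0 });
            verts.push_back({ x + CHAR_WIDTH, y,               uv.u1, uv.v0 });
            verts.push_back({ x + CHAR_WIDTH, y + CHAR_HEIGHT, uv.u1, uv.v1 });
            verts.push_back({ x,              y + CHAR_HEIGHT, uv.u0, uv.v1 });
            x += CHAR_WIDTH;
        }
        if (verts.empty())
            return;

        glBindTexture(GL_TEXTURE_2D, char_atlas);
        glEnableClientState(GL_VERTEX_ARRAY);
        glEnableClientState(GL_TEXTURE_COORD_ARRAY);
        glVertexPointer(2, GL_FLOAT, sizeof(Vertex), &verts[0].x);
        glTexCoordPointer(2, GL_FLOAT, sizeof(Vertex), &verts[0].u);
        glDrawArrays(GL_QUADS, 0, (GLsizei)verts.size()); // single draw call for the string
        glDisableClientState(GL_TEXTURE_COORD_ARRAY);
        glDisableClientState(GL_VERTEX_ARRAY);
    }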
Signed-off-by: Ryan Houdek <Sonicadvance1@gmail.com>
the last two draw calls are still missing:
VertexManager::Draw (see Graphic_Update branch)
RasterFont::printString (perhaps reimplement with a texture)
Signed-off-by: Ryan Houdek <Sonicadvance1@gmail.com>
should be done with a VAO, but atm this is not possible :-(
this also partially reverts the fix in fb92c338af (activating texture0 globally).
Signed-off-by: Ryan Houdek <Sonicadvance1@gmail.com>
the number and size of the buffers have been changed to "new hardware" friendly values, and will fall back to the right values if the hardware does not support them.
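the fallback works roughly like this (a sketch with made-up numbers and a hypothetical TryCreateBuffers wrapper, not the actual backend code):

    #include <algorithm>
    #include <cstddef>

    struct BufferConfig { int count; size_t size; };

    // stand-in for the real buffer allocation (CreateVertexBuffer / glBufferData);
    // imagine it returning false if the driver refuses the request
    bool TryCreateBuffers(const BufferConfig&) { return true; }

    BufferConfig PickBufferConfig()
    {
        BufferConfig cfg{ 8, 4 * 1024 * 1024 };       // preferred "new hardware" values
        const BufferConfig minimum{ 2, 512 * 1024 };  // known-good values for older cards

        while (!TryCreateBuffers(cfg))
        {
            if (cfg.count <= minimum.count && cfg.size <= minimum.size)
                return minimum;                       // can't go lower: use the safe values
            cfg.count = std::max(cfg.count / 2, minimum.count);
            cfg.size  = std::max(cfg.size / 2,  minimum.size);
        }
        return cfg;
    }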
my next commit will go to a branch with my ogl work.
So, to compensate, let's bring back some speed to the emulation.
This changes a little the way vertices are sent to the gpu.
This first implementation changes dx9 a lot and dx11 a little to increase the parallelism between the cpu and the gpu.
ogl is my next step, but it is a little trickier so I have to take a little more time.
the original concept is Marcos' idea, with my little touch to make it even faster.
what to look for: SPEEEEEDDD :).
please test it a lot and let me know if you see any problems.
in dx9 the code is prepared to fall back to the previous implementation if your card does not support the number of buffers needed.
So if you don't see any speed gains, you know where the problem is :).
for those with more experience and comprehension of the code, please try changing the number and size of the buffers to tune this for your specific machine.
The current values are the sweet spot for my machine.
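in sketch form, the buffer cycling looks something like this (GpuBuffer and VertexBufferRing are stand-ins, not the real VertexManager code):

    #include <cstddef>
    #include <vector>

    struct GpuBuffer { /* stand-in for the D3D/GL dynamic vertex buffer object */ };

    // Instead of re-locking the same buffer every time (which stalls the CPU
    // until the GPU is done reading it), keep several buffers and cycle through
    // them: the CPU fills buffer N+1 while the GPU still draws from buffer N.
    class VertexBufferRing
    {
    public:
        // num_buffers can be reduced (down to 1, the old behavior) on cards that
        // cannot handle many dynamic buffers.
        explicit VertexBufferRing(size_t num_buffers)
            : m_buffers(num_buffers), m_current(0) {}

        GpuBuffer& GetNextBuffer()
        {
            m_current = (m_current + 1) % m_buffers.size();
            return m_buffers[m_current];  // previous buffer stays untouched for the GPU
        }

    private:
        std::vector<GpuBuffer> m_buffers;
        size_t m_current;
    };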
All must thank Marcos; I hate him for giving good ideas when I'm swamped with work.
Most of the InvalidateICache calls are for a 32-byte block: this is the
number of bytes invalidated by PowerPC dcb*/icb* instructions. Profiling
shows that a lot of CPU time is spent checking if there are any JIT blocks
covered by these 32 bytes (using std::map::lower_bound).
This patch adds a bitset containing the state of every 32-byte block in
RAM (JIT cached/not JIT cached). Using that, a 32-byte InvalidateICache
can check in the bitset if any JIT block might be invalidated. A bitset
check is a lot faster than an std::map::lower_bound operation, improving
performance of JitCache::InvalidateICache by more than 100%.
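Simplified, the fast path looks roughly like this (sizes and names here are illustrative, not the exact Dolphin code):

    #include <bitset>
    #include <cstdint>
    #include <map>

    using u32 = uint32_t;

    static constexpr size_t RAM_SIZE   = 32 * 1024 * 1024;  // example RAM size
    static constexpr size_t BLOCK_SIZE = 32;                 // dcb*/icb* granularity

    // one bit per 32-byte block of RAM: set when a JIT block covers it
    static std::bitset<RAM_SIZE / BLOCK_SIZE> block_has_jit_code;
    static std::map<u32, int> jit_blocks;  // existing start-address -> JIT block map

    void InvalidateICache(u32 address, u32 length)
    {
        // Fast path for the common case: a 32-byte invalidation over a block
        // with no cached code needs no std::map::lower_bound at all.
        // (assumes address is already masked into the RAM range)
        if (length == BLOCK_SIZE && !block_has_jit_code[address / BLOCK_SIZE])
            return;

        // Slow path: there may be JIT blocks to invalidate, walk the map as before.
        auto it = jit_blocks.lower_bound(address);
        // ... invalidate overlapping blocks ...
        (void)it;
    }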
Some practical numbers:
* Xenoblade Chronicles (PAL)
56.04FPS -> 59.28FPS (+5.78%)
* The Last Story (PAL)
30.9FPS -> 32.83FPS (+6.25%)
* Super Mario Galaxy (PAL)
59.76FPS -> 62.46FPS (+4.52%)
This function still takes more time than it should - more optimization in
this area might be possible (specializing for 32-byte blocks to avoid
useless memcpy, for example).
Very useful to compare performance between two builds, check the impact of
a configuration option, etc. The FPS log is stored in User/Logs/fps.txt and is
reset each time you launch a game. Only enabled if you check the "Log FPS
to file" option in your graphics settings.
Could be improved a bit: it currently only logs every 1s (so you can't really
see small variations), maybe output more info to fps.txt like
average/stddev (but Excel/LibreOffice/Google Docs can compute that easily
too).
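A rough sketch of what the logging boils down to (class and member names are made up; only the file location comes from the commit message):

    #include <chrono>
    #include <fstream>
    #include <string>

    class FPSLogger
    {
    public:
        // opened in truncate mode so the log is reset each time a game is launched
        explicit FPSLogger(const std::string& path) : m_file(path, std::ios::trunc) {}

        void Update(double fps)
        {
            const auto now = std::chrono::steady_clock::now();
            if (now - m_last_write >= std::chrono::seconds(1))  // one sample per second
            {
                m_file << fps << '\n';
                m_last_write = now;
            }
        }

    private:
        std::ofstream m_file;
        std::chrono::steady_clock::time_point m_last_write = std::chrono::steady_clock::now();
    };

    // usage: FPSLogger logger("User/Logs/fps.txt"); then call logger.Update(current_fps) every frame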
These merges, while in theory improving emulation accuracy, cause issues
in other parts of the emulator based on invalid assumptions. memcard-delay
fixed some of these issues in the EXI memcard code, but several other
problems still exist and I don't have the time to debug that right now.
The problem here was that the logic that detects SDL in the main CMakeLists.txt
is not the same as the logic in DolphinWX/CMakeLists.txt that sets the libraries. When
using SDL from Externals, it failed at link time because -lSDL was never set.
This fixes the problem by using the same condition logic to set the libs
as used when detecting SDL in the first place.