- simplified ring-buffer mechanism
- added proper locking for all variables accessed by two different threads
- fixed oob writes that occasionally crashed SDL's "Alsa Hotplug thread"
- made the buffer sufficiently large to prebuffer enough samples to survive
the occasional SDL_Delay(1) in the frontend.
- fixed the volume set by the SPU being ignored.
- improved speed and robustness by not calling malloc over and over in the
SDL callback, and by copying directly to the SDL buffer if the volume is at
maximum (no need to use the mixer to lower the volume in that case).
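a minimal sketch of that callback strategy, assuming an AUDIO_S16SYS stream;
ringbuf_read(), g_lock and g_volume are illustrative names, not the actual
desmume code:

    #include <SDL.h>
    #include <string.h>

    static SDL_mutex *g_lock;       /* protects the ring buffer and g_volume */
    static int g_volume = SDL_MIX_MAXVOLUME;
    static Uint8 g_scratch[16384];  /* preallocated: no malloc in the callback */

    /* hypothetical ring buffer reader: fills dst with len bytes, zero-padding
       if not enough samples are buffered */
    extern void ringbuf_read(Uint8 *dst, int len);

    static void audio_callback(void *userdata, Uint8 *stream, int len)
    {
        (void)userdata;
        SDL_LockMutex(g_lock);
        if (g_volume == SDL_MIX_MAXVOLUME) {
            /* full volume: copy straight into SDL's buffer, no mixer pass */
            ringbuf_read(stream, len);
        } else {
            /* lowered volume: read into the scratch buffer (assumed large
               enough for len), silence the output, then let SDL's mixer
               apply the volume */
            ringbuf_read(g_scratch, len);
            memset(stream, 0, (size_t)len);
            SDL_MixAudioFormat(stream, g_scratch, AUDIO_S16SYS,
                               (Uint32)len, g_volume);
        }
        SDL_UnlockMutex(g_lock);
    }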
the new command line parameter --scale allows scaling the window by a
floating-point factor. SDL2 stretches it in hardware to the desired size,
which makes the scaled window run at almost the same speed as 1x scale.
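a minimal sketch of the mechanism, assuming the default stacked 256x384 NDS
layout; error handling omitted:

    #include <SDL.h>

    static SDL_Renderer *create_scaled_window(float scale)
    {
        SDL_Window *win = SDL_CreateWindow("DeSmuME",
            SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED,
            (int)(256 * scale), (int)(384 * scale), 0);
        SDL_Renderer *ren = SDL_CreateRenderer(win, -1,
                                               SDL_RENDERER_ACCELERATED);
        /* keep drawing at the native 256x384; the GPU stretches the
           result to the window size in hardware */
        SDL_RenderSetLogicalSize(ren, 256, 384);
        return ren;
    }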
1) the float format displayed like 50.123456789123456 wouldn't fit
into the window title bar, and
2) most likely wouldn't fit into the char buffer of length 20 either,
half of which was already used by the desmume string.
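a bounded-precision format avoids both problems; a sketch, where the format
string and the displayed value are illustrative:

    #include <stdio.h>

    static void format_title(char *buf, size_t n, float value)
    {
        /* "%.2f" bounds the number to a few characters, so the result
           fits both the title bar and a 20-byte buffer */
        snprintf(buf, n, "Desmume %.2f", value);
    }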
additionally:
- don't allocate memory for surfaces and textures over and over
- use one texture for each NDS screen - this makes it easy to add
support for a horizontal screen layout.
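a sketch of that per-screen texture setup; names and pixel format are
illustrative, and the textures are created once at startup rather than
reallocated per frame:

    #include <SDL.h>

    static SDL_Texture *top, *bottom;  /* one per NDS screen, created once */

    static void init_textures(SDL_Renderer *ren)
    {
        top    = SDL_CreateTexture(ren, SDL_PIXELFORMAT_RGBX8888,
                                   SDL_TEXTUREACCESS_STREAMING, 256, 192);
        bottom = SDL_CreateTexture(ren, SDL_PIXELFORMAT_RGBX8888,
                                   SDL_TEXTUREACCESS_STREAMING, 256, 192);
    }

    static void draw_screens(SDL_Renderer *ren, int horizontal)
    {
        /* the second screen goes either to the right or below */
        SDL_Rect rtop = { 0, 0, 256, 192 };
        SDL_Rect rbot = horizontal ? (SDL_Rect){ 256, 0, 256, 192 }
                                   : (SDL_Rect){ 0, 192, 256, 192 };
        SDL_RenderCopy(ren, top, NULL, &rtop);
        SDL_RenderCopy(ren, bottom, NULL, &rbot);
        SDL_RenderPresent(ren);
    }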
the command line option existed once, but was turned off when a
new generic command-line parser class was introduced. the entire
array of custom command-line options in main.cpp is currently
unused.
there were two logical issues which caused reproducible misbehaviour.
for example, when starting up pokemon soulsilver, one can click away
the intro, but it's not possible to click on the "load savegame"
icon.
the issues were:
1) failure to record whether the down event had already been
passed to the emulator before abandoning it and turning it into
a click event (on a fast click, both events would happen during
the same SDL_PollEvent loop), and
2) mouse coordinates were discarded unless the mouse down event
was registered. that means if the down and up events happened
on the exact same coordinate, the .x and .y of the mouse weren't
updated at all.
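a minimal sketch of the corrected logic; emu_touch_down/emu_touch_up and
the state variables are illustrative names, not the actual frontend code:

    #include <SDL.h>

    /* hypothetical emulator hooks standing in for desmume's touch API */
    extern void emu_touch_down(int x, int y);
    extern void emu_touch_up(void);

    static int down_passed;   /* fix 1: was the down event delivered? */
    static int mouse_x, mouse_y;

    static void handle_mouse(const SDL_Event *ev)
    {
        switch (ev->type) {
        case SDL_MOUSEBUTTONDOWN:
            /* fix 2: update coordinates unconditionally, even if the up
               event later lands on the exact same spot */
            mouse_x = ev->button.x;
            mouse_y = ev->button.y;
            emu_touch_down(mouse_x, mouse_y);
            down_passed = 1;
            break;
        case SDL_MOUSEBUTTONUP:
            mouse_x = ev->button.x;
            mouse_y = ev->button.y;
            if (!down_passed)  /* fast click: both events arrived in the
                                  same SDL_PollEvent loop */
                emu_touch_down(mouse_x, mouse_y);
            emu_touch_up();
            down_passed = 0;
            break;
        }
    }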
this is probably helpful for frontends other than cli that have to repaint
and react to events in the user interface, so they can set a timeout like
100 ms, or simply poll whether the stub is active using a timeout of 0.
the emulator thread was consuming 100% cpu even when the debugger was
active and execution paused.
a second pipe was added to the gdb stub, which allows communication in
the direction stub -> emulator/frontend, and also allows the frontend
to block indefinitely until the debugger returns control, for example
by typing "c" (continue) in gdb.
the other frontends use an inefficient method of running usleep(1000)
or similar in a loop, which causes high cpu usage too, albeit not
a full 100% but more like 10-20%.
in order not to fill up the pipe with data for frontends that don't use
this mechanism, the functionality needs to be explicitly enabled
(see the functions added to gdbstub.h).
the functions added could in theory also be used to communicate
other data to the frontend, and ideally even replace all the locking
between the two sides.
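a sketch of how a frontend might block on such a pipe; the fd variable is
illustrative, not the actual gdbstub.h interface:

    #include <poll.h>
    #include <unistd.h>

    /* hypothetical: read end of the stub -> emulator/frontend pipe */
    extern int gdbstub_pipe_read_fd;

    /* wait up to timeout_ms for the stub to hand control back; 0 just
       polls, -1 blocks until e.g. "c" (continue) is typed in gdb */
    static int wait_for_stub(int timeout_ms)
    {
        struct pollfd pfd = { gdbstub_pipe_read_fd, POLLIN, 0 };
        int r = poll(&pfd, 1, timeout_ms);
        if (r > 0) {
            char byte;
            read(pfd.fd, &byte, 1);  /* drain the notification byte */
        }
        return r;  /* >0: stub signaled, 0: timeout, <0: error */
    }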
- It's a regression from commit 4578728. I suspect that this particular buffer is to be read as 32-bit, since all of the other Linux frontends explicitly used 16-bit except for this one.
- Added some additional comments so that I'm not tempted to change the native line tracking paradigm ever again.
- Do some refactoring to make GPUEngineBase::_targetDisplay handle more buffer associations itself instead of relying on GPUEngineBase's copies of the associations.
- For purposes of maintaining a record and making reversions easier, the code has NOT been fully optimized or cleaned up. This will happen over a period of time as the code settles down through testing.
- All "native" buffers are no longer assumed to be in any color space and are now assumed to always be 15-bit. The native buffers are now referenced using uint16_t pointers and are now suffixed with "16" in order to reflect this change.
- Of note, all clients that reference masterNativeBuffer or nativeBuffer via NDSDisplayInfo must now assume that these native buffers will always be in the 15-bit color space. (A sketch of reading such a buffer follows after this list.)
- Any 18-bit and 24-bit rendering now happens in the custom buffers.
- Also fixes a bug in PixelOperation_SSE2::_unknownEffectMask32() that would cause 3D layers to appear black if the user was running 15-bit color mode. (Regression from commit 0db9872.)
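A sketch of what this implies for such clients, assuming the NDS RGB555 layout with red in the low bits; the function names are illustrative:

    #include <cstdint>
    #include <cstddef>

    // Expand one 15-bit RGB555 pixel (stored in a 16-bit word) to RGB888.
    static inline uint32_t ColorRGB555To888(uint16_t px)
    {
        uint32_t r =  px        & 0x1F;
        uint32_t g = (px >> 5)  & 0x1F;
        uint32_t b = (px >> 10) & 0x1F;
        // Replicate the top bits so that 0x1F expands to 0xFF exactly.
        r = (r << 3) | (r >> 2);
        g = (g << 3) | (g >> 2);
        b = (b << 3) | (b >> 2);
        return r | (g << 8) | (b << 16);
    }

    // Convert one native line read through a uint16_t pointer, as the
    // "16"-suffixed buffer names now imply.
    static void ConvertNativeLine16(const uint16_t *src, uint32_t *dst,
                                    size_t width)
    {
        for (size_t i = 0; i < width; i++)
            dst[i] = ColorRGB555To888(src[i]);
    }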