bsnes is a Super Nintendo (SNES) emulator focused on performance, features, and ease of use.
Tim Allen 616372e96b Update to v088r03 release.
byuu says:

static vector<uint8_t> file::read(const string &filename);

replaces:

static bool file::read(const string &filename, uint8_t *&data, unsigned &size);

This allows automatic deletion of the underlying data.
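
A minimal sketch of the ownership change, using std:: types in place
of nall's own vector and string (the real implementation differs, but
the idea is the same):

    #include <cstdint>
    #include <fstream>
    #include <iterator>
    #include <string>
    #include <vector>

    //returns the file contents; the vector frees its buffer itself
    static std::vector<uint8_t> read(const std::string &filename) {
      std::ifstream fp(filename, std::ios::binary);
      return std::vector<uint8_t>(
        std::istreambuf_iterator<char>(fp),
        std::istreambuf_iterator<char>()
      );
    }

    //old style: the caller owned the buffer and had to delete[] it
    //  uint8_t *data; unsigned size;
    //  if(file::read(filename, data, size)) { /*...*/ delete[] data; }
    //new style: RAII, no manual delete
    //  auto data = read(filename);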

Added vectorstream, which is obviously a vector<uint8_t> wrapper for
a data stream. The plan is for all data accesses inside my emulation
cores to take stream objects, especially MSU1. This lets you feed the
core anything: memorystream, filestream, zipstream, gzipstream,
httpstream, etc. There will still be exceptions for link and serial;
those need actual library files on disk. But those aren't official
hardware devices anyway.
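
A sketch of the stream idea (this interface is hypothetical; the real
API in the tree may differ):

    #include <cstdint>
    #include <utility>
    #include <vector>

    //abstract base: cores read through this, so any backing store works
    struct stream {
      virtual ~stream() {}
      virtual unsigned size() const = 0;
      virtual bool end() const = 0;
      virtual uint8_t read() = 0;
    };

    //vectorstream: wraps an in-memory vector<uint8_t>
    struct vectorstream : stream {
      std::vector<uint8_t> data;
      unsigned offset = 0;
      vectorstream(std::vector<uint8_t> buffer) : data(std::move(buffer)) {}
      unsigned size() const override { return data.size(); }
      bool end() const override { return offset >= data.size(); }
      uint8_t read() override { return end() ? 0 : data[offset++]; }
    };

A memorystream, filestream, zipstream, etc. would each implement the
same base, so the cores never care where the bytes come from.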

So to help with speed a bit, I'm rethinking the video rendering path.

Previous system:
- core outputs system-native samples (SNES = 19-bit LRGB, NES = 9-bit
  emphasis+palette, DMG = 2-bit grayscale, etc.)
- interfaceSystem transforms samples to 30-bit via lookup table inside
  the emulation core
- interfaceSystem masks off overscan areas, if enabled
- interfaceUI runs filter to produce new target buffer, if enabled
- interfaceUI transforms 30-bit video to native display depth (24-bit or
  30-bit), and applies color adjustments (gamma, etc.) at the same time

New system:
- all cores now generate an internal palette, and call
  Interface::videoColor(uint32_t source, uint16_t red, uint16_t green,
  uint16_t blue) to get the native display color, post-adjusted (gamma,
  etc. already applied)
- all cores output to a uint32_t* buffer now (they output
  video.palette[color] instead of just color; see the sketch after this
  list)
- interfaceUI runs filter to produce new target buffer, if enabled
- interfaceUI memcpy()'s buffer to the video card
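
A sketch of a core's scanout loop under the new system (fetchPixel and
the buffer names here are illustrative, not the actual core code):

    //video.palette[] is filled once by calling Interface::videoColor()
    //for every native color; after that, rendering is a table lookup
    for(unsigned y = 0; y < height; y++) {
      uint32_t *line = output + y * width;
      for(unsigned x = 0; x < width; x++) {
        unsigned color = fetchPixel(x, y);  //system-native color index
        line[x] = video.palette[color];     //already display-native
      }
    }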

videoColor() is pretty neat. source is the raw pixel (in the old
format: 19-bit SNES, 9-bit NES, etc), and you can create a color from
that if you really want to. Or return that value to get a buffer just
like v088 and below. red, green, blue are 16 bits per channel, because
why the hell not, right? Just lop off all the bits you don't want. If
you have more bits on your display than that, fuck you :P
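
For example, a frontend driving a 24-bit display might implement it
like this (a sketch; where exactly the gamma/brightness/contrast
curves get applied is up to the UI):

    //pack the high 8 bits of each 16-bit channel into 24-bit RGB.
    //source is ignored here, but a frontend could return it instead
    //to reproduce the old v088-style native-format buffer.
    uint32_t videoColor(uint32_t source, uint16_t red, uint16_t green, uint16_t blue) {
      return (uint32_t)(red   >> 8) << 16
           | (uint32_t)(green >> 8) <<  8
           | (uint32_t)(blue  >> 8) <<  0;
    }

A 30-bit frontend would instead shift each channel right by 6 and pack
into three 10-bit fields.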

The last step is extremely difficult to avoid. Video cards can and do
have pitches that differ from the width of the texture.  Trying to make
the core account for this would be really awful. And even if we did
that, the emulation routine would need to write directly to a video card
RAM buffer.  Some APIs require you to lock the video buffer while
writing, so this would leave the video buffer locked for a long time.
Probably not catastrophic, but still awful.  And lastly, if the
emulation core tried writing directly to the display texture, software
filters would no longer be possible (unless you -really- jump through
hoops and divert to a memory buffer when a filter is enabled, but ...
fuck.)
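
So the copy stays in interfaceUI, one scanline at a time; roughly (a
sketch, with lock()/unlock() standing in for whatever the video API
actually provides):

    //pitch is in bytes and can exceed width * sizeof(uint32_t),
    //so a single whole-frame memcpy() would shear the image
    uint32_t *vram = nullptr; unsigned pitch = 0;
    if(video.lock(vram, pitch)) {
      for(unsigned y = 0; y < height; y++) {
        memcpy((uint8_t*)vram + y * pitch, buffer + y * width,
               width * sizeof(uint32_t));
      }
      video.unlock();
    }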

Anyway, the point of all that work was to eliminate an extra video copy,
and the need for a really painful 30-bit to 24-bit conversion (three
shifts, three masks, three array indexes). So this basically reverts us,
performance-wise, to where we were pre-30-bit support.
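
For reference, the per-pixel step that goes away looked roughly like
this (a sketch; adjust[] is a hypothetical color-adjustment lookup
table):

    //unpack three 10-bit channels, adjust each, repack as 24-bit:
    //three shifts, three masks, three array indexes, every pixel
    uint32_t pixel = input[x];
    uint8_t r = adjust[pixel >> 20 & 0x3ff];
    uint8_t g = adjust[pixel >> 10 & 0x3ff];
    uint8_t b = adjust[pixel >>  0 & 0x3ff];
    output[x] = (uint32_t)r << 16 | g << 8 | b;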

[...]

The downside to this is that we're going to need a filter for each
output depth. Since the array type is uint32_t*, and I don't intend to
support higher or lower depths, we really only need 24-bit and 30-bit
versions of each filter. Kinda shitty, but oh well.
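
One way to keep that manageable is to template each filter on the bit
depth, so a single algorithm compiles into both variants (a sketch,
not necessarily how snesfilter will do it):

    //Bits = 8 for 24-bit output, 10 for 30-bit output
    template<unsigned Bits>
    void scanline(const uint32_t *in, uint32_t *out, unsigned width) {
      const uint32_t mask = (1u << Bits) - 1;
      for(unsigned x = 0; x < width; x++) {
        uint32_t r = in[x] >> Bits * 2 & mask;
        uint32_t g = in[x] >> Bits * 1 & mask;
        uint32_t b = in[x] >> Bits * 0 & mask;
        //(per-pixel filter math on r, g, b goes here)
        out[x] = r << Bits * 2 | g << Bits | b;
      }
    }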