bsnes/nall/stream/stream.hpp


#pragma once

namespace nall {

struct stream {
  stream() = default;
  virtual ~stream() = default;

  //streams are polymorphic; copying is disabled
  stream(const stream&) = delete;
  auto operator=(const stream&) -> stream& = delete;

  //capability queries, implemented by each concrete stream
  virtual auto seekable() const -> bool = 0;
  virtual auto readable() const -> bool = 0;
  virtual auto writable() const -> bool = 0;
  virtual auto randomaccess() const -> bool = 0;

  //data() returns nullptr unless the stream is backed by a memory buffer
  virtual auto data() const -> uint8_t* { return nullptr; }
  virtual auto size() const -> uint = 0;
  virtual auto offset() const -> uint = 0;
  virtual auto seek(uint offset) const -> void = 0;

  //sequential access at the current cursor position
  virtual auto read() const -> uint8_t = 0;
  virtual auto write(uint8_t data) const -> void = 0;

  //random access at an absolute offset; inert defaults for streams without it
  virtual auto read(uint) const -> uint8_t { return 0; }
  virtual auto write(uint, uint8_t) const -> void {}

  //true when the stream is non-empty
  explicit operator bool() const {
    return size();
  }

  //true once the cursor has reached the end of the stream
  auto end() const -> bool {
    return offset() >= size();
  }

  //read a length-byte little-endian value (least-significant byte first)
  auto readl(uint length = 1) const -> uintmax {
    uintmax data = 0, shift = 0;
    //cast before shifting: a bare read() promotes only to int, losing high bytes
    while(length--) { data |= (uintmax)read() << shift; shift += 8; }
    return data;
  }

  //read a length-byte big-endian value (most-significant byte first)
  auto readm(uint length = 1) const -> uintmax {
    uintmax data = 0;
    while(length--) data = (data << 8) | read();
    return data;
  }
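
  //example: for bytes {0x34, 0x12}, readl(2) == 0x1234 and readm(2) == 0x3412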

  auto read(uint8_t* data, uint length) const -> void {
    while(length--) *data++ = read();
  }

  //read the entire stream back as a text string
  auto text() const -> string {
    string buffer;
    buffer.resize(size());
    seek(0);
    read((uint8_t*)buffer.get(), size());
    return buffer;
  }

  //write a value as length little-endian bytes
  auto writel(uintmax data, uint length = 1) const -> void {
    while(length--) {
      write(data);  //write() truncates to the low byte
      data >>= 8;
    }
  }

  //write a value as length big-endian bytes
  auto writem(uintmax data, uint length = 1) const -> void {
    uintmax shift = 8 * length;
    while(length--) {
      shift -= 8;
      write(data >> shift);
    }
  }
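
  //example: writem(0x1234, 2) emits 0x12 then 0x34; writel(0x1234, 2) the reverse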

  auto write(const uint8_t* data, uint length) const -> void {
    while(length--) write(*data++);
  }

  //proxy returned by operator[]: reads convert to uint8_t, assignments write through
  struct byte {
    byte(const stream& s, uint offset) : s(s), offset(offset) {}
    operator uint8_t() const { return s.read(offset); }
    auto operator=(uint8_t data) -> byte& { s.write(offset, data); return *this; }

  private:
    const stream& s;
    const uint offset;
  };

  auto operator[](uint offset) const -> byte {
    return byte(*this, offset);
  }
};
}
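
//Usage sketch (illustrative only; not part of the original header): a minimal
//fixed-size memory stream satisfying the pure-virtual interface above, plus the
//endian helpers and byte proxy in action. The name `examplestream` and its
//mutable-cursor layout are hypothetical; nall's real implementations are the
//memorystream, filestream, and vectorstream families. Guarded with #if 0 so
//the header itself is unaffected.
#if 0
#include <nall/stream/stream.hpp>

struct examplestream : nall::stream {
  examplestream(uint8_t* data, uint size) : _data(data), _size(size) {}

  auto seekable() const -> bool { return true; }
  auto readable() const -> bool { return true; }
  auto writable() const -> bool { return true; }
  auto randomaccess() const -> bool { return true; }

  auto data() const -> uint8_t* { return _data; }
  auto size() const -> uint { return _size; }
  auto offset() const -> uint { return _offset; }
  auto seek(uint offset) const -> void { _offset = offset; }

  //the interface is const throughout, so the cursor must be mutable
  auto read() const -> uint8_t { return _data[_offset++]; }
  auto write(uint8_t data) const -> void { _data[_offset++] = data; }

  auto read(uint offset) const -> uint8_t { return _data[offset]; }
  auto write(uint offset, uint8_t data) const -> void { _data[offset] = data; }

private:
  uint8_t* _data = nullptr;
  uint _size = 0;
  mutable uint _offset = 0;
};

auto demonstrate() -> void {
  uint8_t buffer[4] = {0x78, 0x56, 0x34, 0x12};
  examplestream s(buffer, sizeof buffer);
  uintmax le = s.readl(4);  //0x12345678: least-significant byte first
  s.seek(0);
  uintmax be = s.readm(4);  //0x78563412: most-significant byte first
  s[0] = 0xff;              //random access through the byte proxy
}
#endif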