#ifndef NALL_STREAM_STREAM_HPP
#define NALL_STREAM_STREAM_HPP

namespace nall {

struct stream {
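  //abstract byte-stream interface; concrete streams (filestream, memorystream,
  //vectorstream, zipstream, gzipstream, httpstream, ...) report their capabilities below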
  virtual bool seekable() const = 0;
  virtual bool readable() const = 0;
  virtual bool writable() const = 0;
  virtual bool randomaccess() const = 0;

  virtual uint8_t* data() const { return nullptr; }
  virtual unsigned size() const = 0;
  virtual unsigned offset() const = 0;
  virtual void seek(unsigned offset) const = 0;
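
  //sequential access at the current offset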
  virtual uint8_t read() const = 0;
  virtual void write(uint8_t data) const = 0;
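
  //random access; default stubs for streams that do not support it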
  virtual uint8_t read(unsigned) const { return 0; }
  virtual void write(unsigned, uint8_t) const {}

  operator bool() const {
    return size();
  }

  bool empty() const {
    return size() == 0;
  }

  bool end() const {
    return offset() >= size();
  }
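
  //read length bytes, little-endian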
  uintmax_t readl(unsigned length = 1) const {
    uintmax_t data = 0, shift = 0;
    while(length--) { data |= (uintmax_t)read() << shift; shift += 8; }
    return data;
  }
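
  //read length bytes, big-endian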
  uintmax_t readm(unsigned length = 1) const {
    uintmax_t data = 0;
    while(length--) data = (data << 8) | read();
    return data;
  }

  void read(uint8_t *data, unsigned length) const {
    while(length--) *data++ = read();
  }
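
  //write length bytes, little-endian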
  void writel(uintmax_t data, unsigned length = 1) const {
    while(length--) {
      write(data);
      data >>= 8;
    }
  }
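
  //write length bytes, big-endian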
  void writem(uintmax_t data, unsigned length = 1) const {
    uintmax_t shift = 8 * length;
    while(length--) {
      shift -= 8;
      write(data >> shift);
    }
  }

  void write(const uint8_t *data, unsigned length) const {
    while(length--) write(*data++);
  }
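
  //proxy returned by operator[]; forwards to the random-access read/write overloads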
  struct byte {
    operator uint8_t() const { return s.read(offset); }
    byte& operator=(uint8_t data) { s.write(offset, data); return *this; }
    byte(const stream &s, unsigned offset) : s(s), offset(offset) {}

  private:
    const stream &s;
    const unsigned offset;
  };

  byte operator[](unsigned offset) const {
    return byte(*this, offset);
  }

  stream() {}
  virtual ~stream() {}
  stream(const stream&) = delete;
  stream& operator=(const stream&) = delete;
};
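
//usage sketch (hypothetical helper, not part of nall): any concrete stream can be
//parsed through this interface; for example, a 4-byte little-endian length prefix
//followed by a payload:
//
//  bool readBlock(const stream &s, uint8_t *buffer, unsigned capacity) {
//    unsigned length = s.readl(4);                     //little-endian length prefix
//    if(length > capacity) return false;               //caller's buffer too small
//    if(s.offset() + length > s.size()) return false;  //stream truncated
//    s.read(buffer, length);                           //bulk read at the current offset
//    return true;
//  }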
}
#endif