#ifdef SYSTEM_CPP

Video video;

void Video::generate_palette(Emulator::Interface::PaletteMode mode) {
  //Update to v088r03 release.
  //byuu says:
  //static vector<uint8_t> file::read(const string &filename); replaces
  //static bool file::read(const string &filename, uint8_t *&data,
  //unsigned &size); this allows automatic deletion of the underlying
  //data.
  //Added vectorstream, which is obviously a vector<uint8_t> wrapper for
  //a data stream. The plan is for all data accesses inside my emulation
  //cores to take stream objects, especially MSU1. This lets you feed the
  //core anything: memorystream, filestream, zipstream, gzipstream,
  //httpstream, etc. There will still be exceptions for link and serial;
  //those need actual library files on disk. But those aren't official
  //hardware devices anyway.
  //
  //So to help with speed a bit, I'm rethinking the video rendering path.
  //
  //Previous system:
  //- core outputs system-native samples (SNES = 19-bit LRGB, NES = 9-bit
  //  emphasis+palette, DMG = 2-bit grayscale, etc.)
  //- interfaceSystem transforms samples to 30-bit via a lookup table
  //  inside the emulation core
  //- interfaceSystem masks off overscan areas, if enabled
  //- interfaceUI runs a filter to produce a new target buffer, if enabled
  //- interfaceUI transforms 30-bit video to the native display depth
  //  (24-bit or 30-bit), and applies color adjustments (gamma, etc.) at
  //  the same time
  //
  //New system:
  //- all cores now generate an internal palette, and call
  //  Interface::videoColor(uint32_t source, uint16_t red, uint16_t green,
  //  uint16_t blue) to get the native display color, post-adjusted
  //  (gamma, etc. applied already)
  //- all cores output to a uint32_t* buffer now (output
  //  video.palette[color] instead of just color)
  //- interfaceUI runs a filter to produce a new target buffer, if enabled
  //- interfaceUI memcpy()'s the buffer to the video card
  //
  //videoColor() is pretty neat. source is the raw pixel (as per the old
  //format: 19-bit SNES, 9-bit NES, etc.), and you can create a color from
  //that if you really want to. Or return that value to get a buffer just
  //like v088 and below. red, green, blue are 16 bits per channel, because
  //why the hell not, right? Just lop off all the bits you don't want. If
  //you have more bits on your display than that, fuck you :P
  //
  //The last step is extremely difficult to avoid. Video cards can and do
  //have pitches that differ from the width of the texture. Trying to make
  //the core account for this would be really awful. And even if we did
  //that, the emulation routine would need to write directly to a video
  //card RAM buffer. Some APIs require you to lock the video buffer while
  //writing, so this would leave the video buffer locked for a long time.
  //Probably not catastrophic, but still awful. And lastly, if the
  //emulation core tried writing directly to the display texture, software
  //filters would no longer be possible (unless you -really- jump through
  //hoops and divert to a memory buffer when a filter is enabled, but ...
  //fuck.)
  //
  //Anyway, the point of all that work was to eliminate an extra video
  //copy, and the need for a really painful 30-bit to 24-bit conversion
  //(three shifts, three masks, three array indexes.) So this basically
  //reverts us, performance-wise, to where we were pre-30-bit support.
  //[...]
  //The downside to this is that we're going to need a filter for each
  //output depth. Since the array type is uint32_t*, and I don't intend to
  //support higher or lower depths, we really only need 24-bit and 30-bit
  //versions of each filter. Kinda shitty, but oh well.
  for(unsigned color = 0; color < (1 << 19); color++) {
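    //for reference, the 30-bit to 24-bit step that the note above says was
    //eliminated cost roughly this much per pixel (a sketch, not shipped code):
    //  uint32_t s = source[i];                       //30-bit 10:10:10 pixel
    //  uint8_t r = table[(s >> 20) & 1023];          //shift, mask, index...
    //  uint8_t g = table[(s >> 10) & 1023];          //...times three channels
    //  uint8_t b = table[(s >>  0) & 1023];
    //  target[i] = (r << 16) | (g << 8) | (b << 0);  //repack as 24-bit RGB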

    if(mode == Emulator::Interface::PaletteMode::Literal) {
      palette[color] = color;  //pass the raw 19-bit pixel straight through
      continue;
    }

    unsigned l = (color >> 15) & 15;  //4-bit luma
    unsigned b = (color >> 10) & 31;  //5-bit blue
    unsigned g = (color >>  5) & 31;  //5-bit green
    unsigned r = (color >>  0) & 31;  //5-bit red

    if(mode == Emulator::Interface::PaletteMode::Channel) {
      l = image::normalize(l, 4, 16);
      r = image::normalize(r, 5, 16);
      g = image::normalize(g, 5, 16);
      b = image::normalize(b, 5, 16);
      palette[color] = interface->videoColor(color, l, r, g, b);
      continue;
    }
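
    //image::normalize() widens a value from one bit depth to another by
    //replicating the source bits; a sketch consistent with nall's approach
    //(an approximation, not the verbatim library code):
    //  uint64_t normalize(uint64_t color, unsigned sourceDepth, unsigned targetDepth) {
    //    while(sourceDepth < targetDepth) {
    //      color = (color << sourceDepth) | color;
    //      sourceDepth += sourceDepth;
    //    }
    //    if(targetDepth < sourceDepth) color >>= sourceDepth - targetDepth;
    //    return color;
    //  }
    //e.g. normalize(31, 5, 8) == 0xff and normalize(31, 5, 16) == 0xffff, so a
    //full-scale 5-bit channel stays full scale at the wider depth.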

    if(mode == Emulator::Interface::PaletteMode::Emulation) {
      r = gamma_ramp[r];
      g = gamma_ramp[g];
      b = gamma_ramp[b];
    } else {
      r = image::normalize(r, 5, 8);
      g = image::normalize(g, 5, 8);
      b = image::normalize(b, 5, 8);
    }

    double L = (1.0 + l) / 16.0;
    if(l == 0) L *= 0.5;
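    //worked example: l = 15 gives L = 16/16 = 1.0 (full brightness); l = 7
    //gives L = 8/16 = 0.5; l = 0 gives L = (1/16) * 0.5 = 0.03125, pushing
    //the darkest luma step closer to true black.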

    unsigned R = L * image::normalize(r, 8, 16);
    unsigned G = L * image::normalize(g, 8, 16);
    unsigned B = L * image::normalize(b, 8, 16);

    palette[color] = interface->videoColor(color, 0, R, G, B);
  }
}
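
//For illustration: the frontend's videoColor() might reduce the 16-bit
//channels to a 24-bit display format along these lines. A minimal sketch of
//the five-argument form called above (the second argument receives the
//normalized luma channel, or zero); the packing shown is an assumption, not
//the actual GUI implementation:
//  uint32_t Interface::videoColor(uint32_t source, uint16_t l, uint16_t r, uint16_t g, uint16_t b) {
//    //any display-side color adjustment would happen here, before truncation;
//    //then lop off the low eight bits of each 16-bit channel
//    return (uint32_t)(r >> 8) << 16 | (uint32_t)(g >> 8) << 8 | (uint32_t)(b >> 8) << 0;
//  }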

Video::Video() {
  palette = new uint32_t[1 << 19]();  //one entry per 19-bit LRGB pixel: 524288 * 4 bytes = 2MiB
}

Video::~Video() {
  delete[] palette;
}

//internal

const uint8_t Video::gamma_ramp[32] = {
  0x00, 0x01, 0x03, 0x06, 0x0a, 0x0f, 0x15, 0x1c,
  0x24, 0x2d, 0x37, 0x42, 0x4e, 0x5b, 0x69, 0x78,
  0x88, 0x90, 0x98, 0xa0, 0xa8, 0xb0, 0xb8, 0xc0,
  0xc8, 0xd0, 0xd8, 0xe0, 0xe8, 0xf0, 0xf8, 0xff,
};
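
//observation, not how the table is defined upstream: every gamma_ramp[] entry
//matches a simple closed form, quadratic below the midpoint and linear above
//it, so low intensities are darkened while highs pass nearly straight through:
//  uint8_t ramp(unsigned n) {  //hypothetical generator; reproduces the table
//    return n < 16 ? n * (n + 1) / 2                //triangular numbers: 0x00, 0x01, 0x03 ... 0x78
//                  : n == 31 ? 0xff : 8 * (n + 1);  //steps of 8: 0x88, 0x90 ... 0xf8, capped at 0xff
//  }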

//crosshair bitmap for the light-gun cursors: 0 = transparent, 1 = black
//border, 2 = colored center (see draw_cursor below)
const uint8_t Video::cursor[15 * 15] = {
  0,0,0,0,0,0,1,1,1,0,0,0,0,0,0,
  0,0,0,0,1,1,2,2,2,1,1,0,0,0,0,
  0,0,0,1,2,2,1,2,1,2,2,1,0,0,0,
  0,0,1,2,1,1,0,1,0,1,1,2,1,0,0,
  0,1,2,1,0,0,0,1,0,0,0,1,2,1,0,
  0,1,2,1,0,0,1,2,1,0,0,1,2,1,0,
  1,2,1,0,0,1,1,2,1,1,0,0,1,2,1,
  1,2,2,1,1,2,2,2,2,2,1,1,2,2,1,
  1,2,1,0,0,1,1,2,1,1,0,0,1,2,1,
  0,1,2,1,0,0,1,2,1,0,0,1,2,1,0,
  0,1,2,1,0,0,0,1,0,0,0,1,2,1,0,
  0,0,1,2,1,1,0,1,0,1,1,2,1,0,0,
  0,0,0,1,2,2,1,2,1,2,2,1,0,0,0,
  0,0,0,0,1,1,2,2,2,1,1,0,0,0,0,
  0,0,0,0,0,0,1,1,1,0,0,0,0,0,0,
};

void Video::draw_cursor(uint16_t color, int x, int y) {
  uint32_t* data = (uint32_t*)ppu.output;
  if(ppu.interlace() && ppu.field()) data += 512;  //odd interlace field: lines sit 512 words into each 1024-word row

  for(int cy = 0; cy < 15; cy++) {
    int vy = y + cy - 7;
    if(vy <= 0 || vy >= 240) continue;  //do not draw offscreen

    bool hires = (line_width[vy] == 512);
    for(int cx = 0; cx < 15; cx++) {
      int vx = x + cx - 7;
      if(vx < 0 || vx >= 256) continue;  //do not draw offscreen
      uint8_t pixel = cursor[cy * 15 + cx];
      if(pixel == 0) continue;
      uint32_t pixelcolor = (15 << 15) | ((pixel == 1) ? 0 : color);  //full luma; black border or colored center

      //Update to v094r08 release.
      //byuu says:
      //Lots of changes this time around. FreeBSD stability and
      //compilation is still a work in progress.
      //FreeBSD 10 + Clang 3.3 = 108fps
      //FreeBSD 10 + GCC 4.7 = 130fps
      //Errata 1: I've been fighting that god-damned endian.h header for
      //the past nine WIPs now. The above WIP isn't building now because
      //FreeBSD isn't including headers before using certain types, and
      //you end up with a trillion error messages. So just delete all the
      //endian.h includes from nall/intrinsics.hpp to build.
      //Errata 2: I was trying to match g++ and g++47, so I used
      //$(findstring g++,$(compiler)), which ends up also matching
      //clang++. Oops. Easy fix: put Clang first, then else if g++ next.
      //Not ideal, but oh well. All it's doing for now is declaring
      //-fwrapv twice, so you don't have to fix it just yet. Probably just
      //going to alias g++="g++47" and do exact matching instead.
      //Errata 3: both OpenGL::term and VideoGLX::term are causing a core
      //dump on BSD. No idea why. The resources are initialized and valid,
      //but releasing them crashes the application.
      //Changelog:
      //- nall/Makefile is more flexible with overriding $(compiler), so
      //  you can build with GCC or Clang on BSD (defaults to GCC now)
      //- PLATFORM_X was renamed to PLATFORM_XORG, and it's also declared
      //  with PLATFORM_LINUX or PLATFORM_BSD
      //- PLATFORM_XORG probably isn't the best name ... still thinking
      //  about what best to call LINUX|BSD|SOLARIS or ^(WINDOWS|MACOSX)
      //- fixed a few legitimate Clang warning messages in nall
      //- Compiler::VisualCPP is ugly as hell, renamed to Compiler::CL
      //- nall/platform includes nall/intrinsics first. Trying to move
      //  away from testing for _WIN32, etc. directly in all files. Work
      //  in progress.
      //- nall turns off Clang warnings that I won't "fix", because they
      //  aren't broken. It's much less noisy to compile with warnings on
      //  now.
      //- phoenix gains the ability to set background and foreground
      //  colors on various text container widgets (GTK only for now)
      //- rewrote a lot of the MSU1 code to try and simplify it. Really
      //  hope I didn't break anything ... I don't have any MSU1 test ROMs
      //  handy
      //- SNES coprocessor audio is now mixed as sclamp<16>(system_sample
      //  + coprocessor_sample) instead of sclamp<16>((sys + cop) / 2)
      //  - allows for a greater chance of aliasing (still low, SNES audio
      //    is quiet), but doesn't cut the base system volume in half
      //    anymore
      //- fixed Super Scope and Justifier cursor colors
      //- use input.xlib instead of input.x ... allows the Xlib input
      //  driver to be visible on Linux and BSD once again
      //- make install and make uninstall must be run as root again; no
      //  longer using install but cp instead, for BSD compatibility
      //- killed $(DESTDIR) ... use make prefix=$DESTDIR$prefix instead
      //- you can now set text/background colors for the loki console via
      //  (e.g.):
      //  - settings.terminal.background-color 0x000000
      //  - settings.terminal.foreground-color 0xffffff
      if(hires == false) {
        data[vy * 1024 + vx] = pixelcolor;
      } else {
        data[vy * 1024 + vx * 2 + 0] = pixelcolor;  //512-wide line: double the cursor pixel across both dots
        data[vy * 1024 + vx * 2 + 1] = pixelcolor;
      }
    }
  }
}
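
//aside: the sclamp<16>() coprocessor-audio mixing mentioned in the v094r08
//note above saturates the sum instead of halving it; a minimal sketch of such
//a clamp (nall's actual template may differ in details):
//  template<unsigned bits> signed sclamp(signed x) {
//    enum : signed { b = 1 << (bits - 1), m = b - 1 };
//    return x > m ? m : x < -b ? -b : x;
//  }
//e.g. sclamp<16>(20000 + 15000) == 32767, where the old mix would have
//returned (20000 + 15000) / 2 == 17500 at half the base volume.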

void Video::update() {
  //Update to v093r02 release.
  //byuu says:
  //Changelog:
  //- nall: fixed major memory leak in string class
  //- ruby: video shaders support #define-based settings now
  //- phoenix/GTK+: support > 256x256 icons for window / task bar /
  //  alt-tab
  //- sfc: remove random/ and config/, merge into system/
  //- ethos: delete higan.png (48x48), replace with higan512.png (512x512)
  //  as the new higan.png
  //- ethos: default gamma to 100% (no color adjustment)
  //- ethos: use "Video Shaders/Display Emulation/" instead of "Video
  //  Shaders/Emulation/"
  //- use g++ instead of g++-4.7 (g++ -v must be >= 4.7)
  //- use -std=c++11 instead of -std=gnu++11
  //- applied a few patches from Debian upstream to make their packaging
  //  job easier
  //So because colors are normalized in GLSL, I won't be able to offer
  //video shaders absolute color literals. We will have to perform basic
  //color conversion inside the core.
  //As such, the current plan is to create some sort of Emulator::Settings
  //interface. With that, I'll connect an option for color correction,
  //which will be on by default. For FC/SFC, that will mean gamma
  //correction (darker / stronger colors), and for GB/GBC/GBA, it will
  //mean simulating the weird brightness levels of the displays. I am
  //undecided on whether to use pea-soup green for the GB or not. By not
  //doing so, it'll be easier for the display emulation shader to do it.
  switch(configuration.controller_port2) {
  case Input::Device::SuperScope:
    //Update to v079r06 release.
    //byuu says:
    //It does add some more code to the CPU::step() function, so
    //performance probably went down by about 1%, actually. Removing the
    //input.tick() call didn't compensate as much as I'd hoped.
    //Hooked up Super Scope and Justifier support. The good news is that
    //the Justifier alignment doesn't get fucked up anymore when you go
    //off-screen. Never could fix that in the old version.
    //The bad news is that it takes a major speed hit for the time being.
    //I need to figure out how to run the CPU and input threads out of
    //order. Every time I try, the input gets thrown off by most of a
    //scanline. Right now, I'm forced to sync constantly to get the
    //latching position really accurate. But worst case, I can cut the
    //syncs down by skipping large chunks around the cursor position,
    //+/-40 clock cycles. So it's only temporarily slow.
    //Lastly, killed the old Input class, merged the Controllers class
    //into it. I actually like Controllers as a name better, but it
    //doesn't jive with video/audio/input, so oh well.
    if(dynamic_cast<SuperScope*>(input.port2)) {
      SuperScope &device = (SuperScope&)*input.port2;
      draw_cursor(0x7c00, device.x, device.y);
    }
    break;
  case Input::Device::Justifier:
  case Input::Device::Justifiers:
    if(dynamic_cast<Justifier*>(input.port2)) {
      Justifier &device = (Justifier&)*input.port2;
      //Update to v084r05 release.
      //(note: before the post announcing this release, there had been
      //a discussion of a performance optimisation that made the Super
      //Scope emulation a lot faster, but caused problems for the
      //Justifier peripheral)
      //byuu says:
      //Spent a good two hours trying things to no avail.
      //I was trying to allow the CPU to run ahead, and sync on accesses
      //to $4016/4017/4201/4213, but that doesn't work because the
      //controllers have access to strobe IObit at will.
      //The codebase is really starting to get difficult to work with.
      //I am guessing because the days of massive development are long
      //over, and the code is starting to age.
      //Jonas' fix works 98% of the time, but there's still a few missed
      //shots here and there. So that's not going to work either.
      //So ... I give up. I've disabled the speed hack, so that it works
      //100% of the time.
      //Did the same for the Super Scope: it may not have the same
      //problem, but I like consistency and don't feel like taking the
      //chance.
      //This doesn't affect the mouse, since the mouse does not latch the
      //counters to indicate its X/Y position.
      //The speed hit is 92->82fps (accuracy profile), but only for Super
      //Scope and Justifier games.
      //But ... at least it works now. Slow and working is better than
      //fast and broken.
      //I appreciate the help in researching the issue, Jonas and krom.
      //Also pulled in phoenix/Makefile, which simplifies ui/Makefile.
      //The Linux port defaults to GTK+ now. I can't get QGtkStyle to
      //look good on Debian.
      draw_cursor(0x001f, device.player1.x, device.player1.y);
      if(device.chained == false) break;
      draw_cursor(0x02e0, device.player2.x, device.player2.y);
    }
    break;
  }

  uint32_t* data = (uint32_t*)ppu.output;
  if(ppu.interlace() && ppu.field()) data += 512;

  if(hires) {
    //normalize line widths: expand any 256-wide lines so the whole frame is 512 wide
    for(unsigned y = 0; y < 240; y++) {
      if(line_width[y] == 512) continue;
      //Update to v083 release.
      //byuu says:
      //This release adds preliminary Nintendo / Famicom emulation. It's
      //only a week or two old, so a lot of work still needs to be done
      //before it can compete with the most popular NES emulators.
      //It's important to clarify: bsnes is primarily an SNES emulator.
      //That will always be its forte and my core focus. I have added
      //Game Boy support previously for Super Game Boy emulation, and
      //I've added NES support mostly for something fun to work on to
      //break up the monotony of working on one system for seven years
      //now. Obviously, I'd like the emulation to be accurate and highly
      //compatible, but I simply cannot afford to invest the same amount
      //of time and money into any other systems.
      //Still, either way, the NES and GB emulation serve as fun
      //side-diversions, and allow for a unified emulator interface with
      //all of bsnes' unique features applied to all systems. My personal
      //favorite feature is mightymo's extended built-in cheat code
      //database, which now also includes NES and Game Boy codes. And it
      //even works in Super Game Boy mode now, too!
      //I'm also not worried about speed at all: so long as NES/GB are
      //faster than SNES/compatibility, it's fine by me. Note that due to
      //the NES audio running at 1.78MHz, and Game Boy audio at 4MHz
      //stereo, a more sophisticated audio resampler was needed: Ryphecha
      //(the Mednafen author) has graciously written a first-rate
      //resampler: it is a band-limited Kaiser-windowed polyphase sinc
      //resampler. It is combined with two highpass filters to remove DC
      //bias. The filter itself is SSE-optimized, but even still,
      //approximately 50% of CPU usage for NES/GB emulation goes to the
      //audio filtering alone. However, you now have the best sound
      //possible for NES and Game Boy emulation as a result.
      //The GUI has also been heavily restructured to accommodate
      //multiple emulators from the same interface. As such, it's quite
      //likely a few bugs are still lurking here and there. Please report
      //them and I'll iron them out for the next release.
      //Changelog:
      //- license is now GPLv3
      //- restructured the GUI as a multi-system emulator
      //- added NES emulation [byuu, Ryphecha]
      //- added NES ICs: MMC1, MMC2, MMC3, MMC4, MMC5, VRC4, VRC6+audio,
      //  VRC7, Sunsoft-5B+audio, Bandai-LZ93D50
      //- added NES boards: AxROM, BNROM, CNROM, ExROM, FxROM, GxROM,
      //  NROM, PxROM, SxROM, TxROM, UxROM
      //- Game Boy emulation improvements [Jonas Quinn]
      //- SNES core outputs full 19-bit color (4-bit luma included) for
      //  more accurate color reproduction (~5% speed hit)
      //- audio resampler is now a band-limited polyphase resampler
      //  [Ryphecha]
      //- cheat database includes NES+GB codes as well [mightymo,
      //  tukuyomi]
      //- lots of other changes
      uint32_t *buffer = data + y * 1024;
      for(signed x = 255; x >= 0; x--) {
        //walk backwards: in-place doubling must not overwrite pixels not yet read
        buffer[(x * 2) + 0] = buffer[(x * 2) + 1] = buffer[x];
      }
    }
  }

  //Update to v088r15 release.
  //byuu says:
  //Changelog:
  //- default placement of the presentation window is optimized for
  //  1024x768 displays or larger (sorry if yours is smaller; move the
  //  window yourself)
  //- Direct3D waits until a previous Vblank ends before waiting for the
  //  next Vblank to begin (fixes video timing analysis, and ---really---
  //  fast computers)
  //- Window::setVisible(false) clears modality, but also fixed in the
  //  Browser code as well (fixes loading images on Windows hanging)
  //- Browser won't consume full CPU resources (but timing analysis will;
  //  I don't want stalls to affect the results)
  //- closing the settings window while analyzing stops the analysis
  //- you can load the SGB BIOS without a game (why the hell you would
  //  want to ...)
  //- escape closes the Browser window (it won't close other dialogs; it
  //  has to be hooked up per-window)
  //- just for fun, joypad hat up/down moves in the Browser file list,
  //  and any joypad button loads the selected game [not very useful,
  //  lacks repeat, and there aren't GUI load-file open buttons]
  //- Super Scope and Justifier crosshairs render correctly (probably
  //  doesn't belong in the core, but it's not something I suspect people
  //  want to do themselves ...)
  //- you can load GB, SGB, GB, SGB ... without problems (not happy with
  //  how I did this, but I don't want to add an
  //  Interface::setInterface() function yet)
  //- PAL timing works as I want now (if you want 50fps on a 60Hz
  //  monitor, you must not use sync video) [needed to update the DSP
  //  frequency when toggling video/audio sync]
  //- not going to save input port selection for now (lots of work), but
  //  it will properly keep your port setting across cartridge loads at
  //  least [it just goes back to the controller on emulator restart]
  //- SFC overscan on and off both work as expected now (off centers the
  //  image, on shows the entire image)
  //- laevateinn compiles properly now
  //- ethos goes to ~/.config/bsnes now that target-ui is dead [honestly,
  //  I recommend deleting the old folder and starting over]
  //- Emulator::Interface callbacks converted to a virtual binding
  //  structure that the GUI inherits from (simplifies binding callbacks)
  //  - this breaks Super Game Boy for a bit; I need to rethink
  //    system-specific bindings without direct inheritance
  //Timing analysis works spectacularly well on Windows, too. You won't
  //get your 100% perfect rate (unless maybe you leave the analysis
  //running overnight?), but it'll get really freaking close this way.

  //overscan: when disabled, shift the image down (by scrolling the video buffer up) to center it onscreen
  //(memory before ppu.output is filled with black scanlines)
  interface->videoRefresh(
    video.palette,
    ppu.output - (ppu.overscan() ? 0 : 7 * 1024),  //step back seven black scanlines to center the 224-line image in the 240-line frame
    4 * (1024 >> ppu.interlace()),                 //pitch in bytes: rows are 1024 words apart; interlaced fields interleave at half that stride
    256 << hires,
    240 << ppu.interlace()
  );

  hires = false;  //reset for the next frame; scanline() re-latches it
}
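
//For context: on the other side of videoRefresh(), the GUI applies the
//palette while copying into the driver's texture. A sketch under assumed
//names (Program::videoRefresh and a ruby-style lock()/unlock() driver), not
//the actual frontend code:
//  void Program::videoRefresh(const uint32_t* palette, const uint32_t* data, unsigned pitch, unsigned width, unsigned height) {
//    uint32_t* output;
//    unsigned outputPitch;
//    if(video.lock(output, outputPitch, width, height)) {  //video-card buffers have their own pitch
//      pitch >>= 2, outputPitch >>= 2;                     //byte pitches to uint32_t strides
//      for(unsigned y = 0; y < height; y++) {
//        const uint32_t* sp = data + y * pitch;
//        uint32_t* dp = output + y * outputPitch;
//        for(unsigned x = 0; x < width; x++) *dp++ = palette[*sp++];  //raw pixel -> native color
//      }
//      video.unlock();
//      video.refresh();
//    }
//  }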

void Video::scanline() {
  unsigned y = cpu.vcounter();
  if(y >= 240) return;

  hires |= ppu.hires();  //any hires scanline marks the whole frame as hires
  unsigned width = (ppu.hires() == false ? 256 : 512);
  line_width[y] = width;
}

void Video::init() {
  hires = false;
  for(auto& n : line_width) n = 256;
}

#endif