bsnes/higan/sfc/dsp/echo.cpp
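//S-DSP echo unit: a ring buffer in APU RAM is fed through an 8-tap FIR filter
//and mixed back into the main output with separate echo volume and feedback.
//the numbered echo22()-echo30() steps follow the 32-clock-per-sample sequence
//of blargg's spc_dsp, from which this core was ported.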

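//one FIR tap: the (uint3) cast wraps the read index around the eight-entry
//history ring; FIR coefficients are spaced 0x10 apart in the register file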
auto DSP::calculateFIR(bool channel, int index) -> int {
  int sample = state.echoHistory[channel][(uint3)(state.echoHistoryOffset + index + 1)];
  return (sample * (int8)REG(FIR + index * 0x10)) >> 6;
}
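
//mix the main and echo samples for one channel; channel * 0x10 selects the
//right-channel volume registers (MVOLR/EVOLR) from the left-channel bases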
auto DSP::echoOutput(bool channel) -> int {
  int output = (int16)((state._mainOut[channel] * (int8)REG(MVOLL + channel * 0x10)) >> 7)
             + (int16)((state._echoIn [channel] * (int8)REG(EVOLL + channel * 0x10)) >> 7);
  return sclamp<16>(output);
}
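
//fetch a 16-bit sample from the echo buffer into the FIR history ring;
//history samples are stored halved (s >> 1)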
auto DSP::echoRead(bool channel) -> void {
  uint addr = state._echoPointer + channel * 2;
  uint8 lo = smp.apuram[(uint16)(addr + 0)];
  uint8 hi = smp.apuram[(uint16)(addr + 1)];
  int s = (int16)((hi << 8) + lo);
  state.echoHistory[channel][state.echoHistoryOffset] = s >> 1;
}
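
//commit the pending echo sample to APU RAM unless echo writes are disabled
//(FLG bit 5, latched into _echoDisabled); the pending value clears either way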
auto DSP::echoWrite(bool channel) -> void {
  if(!(state._echoDisabled & 0x20)) {
    uint addr = state._echoPointer + channel * 2;
    int s = state._echoOut[channel];
    smp.apuram[(uint16)(addr + 0)] = s;
    smp.apuram[(uint16)(addr + 1)] = s >> 8;
  }
  state._echoOut[channel] = 0;
}
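
//clock 22: advance the history ring, latch the echo buffer address from
//ESA/offset, read the left echo sample, and compute FIR tap 0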
auto DSP::echo22() -> void {
  //history
  state.echoHistoryOffset++;

  state._echoPointer = (uint16)((state._esa << 8) + state.echoOffset);
  echoRead(0);

  //FIR
  int l = calculateFIR(0, 0);
  int r = calculateFIR(1, 0);

  state._echoIn[0] = l;
  state._echoIn[1] = r;
}
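
//clock 23: FIR taps 1-2, then read the right echo sample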
auto DSP::echo23() -> void {
  int l = calculateFIR(0, 1) + calculateFIR(0, 2);
  int r = calculateFIR(1, 1) + calculateFIR(1, 2);

  state._echoIn[0] += l;
  state._echoIn[1] += r;

  echoRead(1);
}
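
//clock 24: FIR taps 3-5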
auto DSP::echo24() -> void {
  int l = calculateFIR(0, 3) + calculateFIR(0, 4) + calculateFIR(0, 5);
  int r = calculateFIR(1, 3) + calculateFIR(1, 4) + calculateFIR(1, 5);

  state._echoIn[0] += l;
  state._echoIn[1] += r;
}
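
//clock 25: FIR taps 6-7; tap 6 is summed with 16-bit wraparound before tap 7
//is added, then the result is clamped and its low bit cleared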
auto DSP::echo25() -> void {
  int l = state._echoIn[0] + calculateFIR(0, 6);
  int r = state._echoIn[1] + calculateFIR(1, 6);

  l = (int16)l;
  r = (int16)r;
  l += (int16)calculateFIR(0, 7);
  r += (int16)calculateFIR(1, 7);

  state._echoIn[0] = sclamp<16>(l) & ~1;
  state._echoIn[1] = sclamp<16>(r) & ~1;
}
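
//clock 26: compute the left main output, and apply echo feedback (EFB) to
//both channels' pending echo writes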
auto DSP::echo26() -> void {
  //left output volumes
  //(save sample for next clock so we can output both together)
  state._mainOut[0] = echoOutput(0);

  //echo feedback
  int l = state._echoOut[0] + (int16)((state._echoIn[0] * (int8)REG(EFB)) >> 7);
  int r = state._echoOut[1] + (int16)((state._echoIn[1] * (int8)REG(EFB)) >> 7);

  state._echoOut[0] = sclamp<16>(l) & ~1;
  state._echoOut[1] = sclamp<16>(r) & ~1;
}
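
//clock 27: compute the right main output and emit the finished stereo sample,
//honoring the global mute flag (FLG bit 6)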
auto DSP::echo27() -> void {
  //output
  int outl = state._mainOut[0];
  int outr = echoOutput(1);

  state._mainOut[0] = 0;
  state._mainOut[1] = 0;

  //todo: global muting isn't this simple
  //(turns DAC on and off or something, causing small ~37-sample pulse when first muted)
  if(REG(FLG) & 0x40) {
    outl = 0;
    outr = 0;
  }

  //output sample to DAC
  stream->sample(outl / 32768.0, outr / 32768.0);
}
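
//clock 28: latch FLG; echoWrite() tests the echo-disable bit from this copy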
auto DSP::echo28() -> void {
  state._echoDisabled = REG(FLG);
}
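
//clock 29: latch ESA and step the echo offset; the buffer length (EDL, in
//2KiB units, hence << 11) is only reloaded when the offset is zero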
auto DSP::echo29() -> void {
  state._esa = REG(ESA);

  if(!state.echoOffset) state.echoLength = (REG(EDL) & 0x0f) << 11;

  state.echoOffset += 4;
  if(state.echoOffset >= state.echoLength) state.echoOffset = 0;

  //write left echo
  echoWrite(0);

  state._echoDisabled = REG(FLG);
}
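
//clock 30: the final echo step of the sample cycle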
auto DSP::echo30() -> void {
  //write right echo
  echoWrite(1);
}