bsnes/higan/sfc/cpu/cpu.hpp

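//S-CPU (Ricoh 5A22): a 65C816 core (Processor::R65816) plus the on-die DMA/HDMA
//controller, hardware multiplier/divider, H/V timers and joypad auto-polling.
//It runs as a cooperative Thread; PPUcounter tracks the H/V raster position
//used for cycle timing.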
struct CPU : Processor::R65816, Thread, PPUcounter {
  auto interruptPending() const -> bool override;
  auto pio() const -> uint8;
  auto joylatch() const -> bool;

  CPU();
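
  //Cooperative threading: step() advances this thread's clock by the given number
  //of master-clock cycles; the synchronize*() helpers switch to the SMP, PPU,
  //coprocessor and peripheral threads until they have caught up with the CPU.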
  alwaysinline auto step(uint clocks) -> void;
  alwaysinline auto synchronizeSMP() -> void;
  auto synchronizePPU() -> void;
  auto synchronizeCoprocessors() -> void;
  auto synchronizePeripherals() -> void;

  auto portRead(uint2 port) const -> uint8;
  auto portWrite(uint2 port, uint8 data) -> void;

  static auto Enter() -> void;
  auto main() -> void;
  auto power() -> void;
  auto reset() -> void;
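
  //General-purpose DMA and HDMA across the eight channels described by Channel
  //below. dmaTransferValid()/dmaAddressValid() guard transfers the hardware
  //forbids (for example B-bus to B-bus, or WRAM to WRAM through the $2180 port);
  //hdmaUpdate()/hdmaRun() walk the per-scanline HDMA tables.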
  //dma.cpp
  auto dmaAddClocks(uint clocks) -> void;
  auto dmaTransferValid(uint8 bbus, uint24 abus) -> bool;
  auto dmaAddressValid(uint24 abus) -> bool;
  auto dmaRead(uint24 abus) -> uint8;
  auto dmaWrite(bool valid, uint addr = 0, uint8 data = 0) -> void;
  auto dmaTransfer(bool direction, uint8 bbus, uint24 abus) -> void;
  auto dmaAddressB(uint n, uint channel) -> uint8;
  auto dmaAddress(uint n) -> uint24;
  auto hdmaAddress(uint n) -> uint24;
  auto hdmaIndirectAddress(uint n) -> uint24;
  auto dmaEnabledChannels() -> uint;
  auto hdmaActive(uint n) -> bool;
  auto hdmaActiveAfter(uint s) -> bool;
  auto hdmaEnabledChannels() -> uint;
  auto hdmaActiveChannels() -> uint;
  auto dmaRun() -> void;
  auto hdmaUpdate(uint n) -> void;
  auto hdmaRun() -> void;
  auto hdmaInitReset() -> void;
  auto hdmaInit() -> void;

  //memory.cpp
  auto io() -> void override;
  auto read(uint24 addr) -> uint8 override;
  auto write(uint24 addr, uint8 data) -> void override;
  alwaysinline auto speed(uint24 addr) const -> uint;
  auto disassemblerRead(uint24 addr) -> uint8 override;
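
  //Register dispatch: apuPort* handle the APU I/O ports ($2140-$217f);
  //cpuPort* the remaining internal CPU registers ($2180-$2183, $4016/$4017,
  //$4200-$421f); dmaPort* the DMA channel registers ($43x0-$43xf). The data
  //argument to the read handlers carries the current open-bus value.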
  //mmio.cpp
  auto apuPortRead(uint24 addr, uint8 data) -> uint8;
  auto cpuPortRead(uint24 addr, uint8 data) -> uint8;
  auto dmaPortRead(uint24 addr, uint8 data) -> uint8;
  auto apuPortWrite(uint24 addr, uint8 data) -> void;
  auto cpuPortWrite(uint24 addr, uint8 data) -> void;
  auto dmaPortWrite(uint24 addr, uint8 data) -> void;
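
  //addClocks() advances the CPU by a number of master-clock cycles and drives
  //the ALU/DMA edges and IRQ polling; dmaCounter() returns the cycles remaining
  //to the next 8-cycle DMA alignment boundary; lastCycle() runs before the final
  //cycle of each instruction to latch pending interrupts.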
  //timing.cpp
  auto dmaCounter() const -> uint;
  auto addClocks(uint clocks) -> void;
  auto scanline() -> void;
  alwaysinline auto aluEdge() -> void;
  alwaysinline auto dmaEdge() -> void;
  alwaysinline auto lastCycle() -> void;
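
  //NMI and H/V IRQ handling: nmitimenUpdate() applies writes to $4200,
  //rdnmi() and timeup() implement the acknowledge-on-read flags in $4210 and
  //$4211, and nmiTest()/irqTest() decide whether the respective lines transition.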
  //irq.cpp
  alwaysinline auto pollInterrupts() -> void;
  auto nmitimenUpdate(uint8 data) -> void;
  auto rdnmi() -> bool;
  auto timeup() -> bool;
  alwaysinline auto nmiTest() -> bool;
  alwaysinline auto irqTest() -> bool;

  //joypad.cpp
  auto stepAutoJoypadPoll() -> void;

  //serialization.cpp
  auto serialize(serializer&) -> void;

  uint8 wram[128 * 1024];
  vector<Thread*> coprocessors;
  vector<Thread*> peripherals;

//privileged is a macro defined elsewhere in the source; it typically expands to
//public when the debugger is compiled in, and to private otherwise.
privileged:
  uint cpu_version = 2;  //allowed: 1, 2
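
  //Internal state behind the $42xx/$43xx register file plus the NMI/IRQ/DMA
  //timing latches; captured as a whole by serialize() for save states.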
  struct Status {
    bool interrupt_pending;

    uint clock_count;
    uint line_clocks;

    //timing
    bool irq_lock;

    uint dram_refresh_position;
    bool dram_refreshed;

    uint hdma_init_position;
    bool hdma_init_triggered;

    uint hdma_position;
    bool hdma_triggered;

    bool nmi_valid;
    bool nmi_line;
    bool nmi_transition;
    bool nmi_pending;
    bool nmi_hold;

    bool irq_valid;
    bool irq_line;
    bool irq_transition;
    bool irq_pending;
    bool irq_hold;
    bool power_pending;
    bool reset_pending;

    //DMA
    bool dma_active;
    uint dma_counter;
    uint dma_clocks;
    bool dma_pending;
    bool hdma_pending;
    bool hdma_mode;  //0 = init, 1 = run
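
    //Auto joypad polling: during early vblank the hardware clocks both controller
    //ports itself, shifting 16 bits per pad into joy1-joy4 ($4218-$421f), with
    //$4212.d0 reading 1 while the poll is in progress. auto_joypad_counter tracks
    //which bit is being read; auto_joypad_clock is the 256-clock strobe divider.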
    //auto joypad polling
    bool auto_joypad_active;
    bool auto_joypad_latch;
    uint auto_joypad_counter;
    uint auto_joypad_clock;

    //MMIO
    //$2140-$217f
    uint8 port[4];

    //$2181-$2183
    uint17 wram_addr;

    //$4016-$4017
    bool joypad_strobe_latch;
    uint32 joypad1_bits;
    uint32 joypad2_bits;

    //$4200
    bool nmi_enabled;
    bool hirq_enabled, virq_enabled;
    bool auto_joypad_poll;

    //$4201
    uint8 pio;

    //$4202-$4203
    uint8 wrmpya;
    uint8 wrmpyb;

    //$4204-$4206
    uint16 wrdiva;
    uint8 wrdivb;

    //$4207-$420a
    uint9 hirq_pos;
    uint9 virq_pos;

    //$420d
    uint rom_speed;

    //$4214-$4217
    uint16 rddiv;
    uint16 rdmpy;

    //$4218-$421f
    uint16 joy1;
    uint16 joy2;
    uint16 joy3;
    uint16 joy4;
  } status;
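
  //Hardware multiply ($4202/$4203) and divide ($4204-$4206) take several cycles
  //on real hardware; mpyctr/divctr count the remaining steps and shift holds the
  //operand shifted each step while aluEdge() clocks the result into rddiv/rdmpy.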
  struct ALU {
    uint mpyctr;
    uint divctr;
    uint shift;
  } alu;

  struct Channel {
    //$420b
    bool dma_enabled;

    //$420c
    bool hdma_enabled;

    //$43x0
    bool direction;
    bool indirect;
    bool unused;
    bool reverse_transfer;
    bool fixed_transfer;
    uint3 transfer_mode;

    //$43x1
    uint8 dest_addr;

    //$43x2-$43x3
    uint16 source_addr;

    //$43x4
    uint8 source_bank;
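
    //$43x5/$43x6 are shared: for general DMA they hold the 16-bit byte counter
    //(transfer_size), while for indirect HDMA the same register pair holds the
    //indirect table address (indirect_addr), hence the union below.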
    //$43x5-$43x6
    union {
      uint16_t transfer_size;
      uint16_t indirect_addr;
    };

    //$43x7
    uint8 indirect_bank;

    //$43x8-$43x9
    uint16 hdma_addr;

    //$43xa
    uint8 line_counter;

    //$43xb/$43xf
    uint8 unknown;

    //internal state
    bool hdma_completed;
    bool hdma_do_transfer;
  } channel[8];
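
  //Pipe appears to buffer a single pending write (see dmaWrite()) so that DMA
  //writes can be applied one cycle later, approximating DMA->CPU bus handoff.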
  struct Pipe {
    bool valid;
    uint addr;
    uint8 data;
  } pipe;

  struct Debugger {
    hook<auto (uint24) -> void> op_exec;
    hook<auto (uint24, uint8) -> void> op_read;
    hook<auto (uint24, uint8) -> void> op_write;
    hook<auto () -> void> op_nmi;
    hook<auto () -> void> op_irq;
  } debugger;
};

extern CPU cpu;
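
//Illustrative sketch only (not part of the original header): roughly how the
//scheduler drives this class. Enter() is the thread entry point and loops over
//main(); step() advances the clock and lets the other threads catch up. The loop
//structure and the relative-clock bookkeeping shown here are assumptions, not
//the actual implementation.
#if 0
auto CPU::Enter() -> void {
  while(true) cpu.main();  //run one instruction (or pending interrupt) per pass
}

auto CPU::step(uint clocks) -> void {
  smp.clock -= clocks;  //assumed: other chips keep a clock relative to the CPU
  ppu.clock -= clocks;
  synchronizeSMP();     //co-switch to the SMP thread if it has fallen behind
}
#endif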