bsnes/higan/processor/arm7tdmi/instructions-thumb.cpp
auto ARM7TDMI::thumbALU(uint4 mode, uint4 target, uint4 source) -> void {
  switch(mode) {
  case  0: r(target) = BIT(r(target) & r(source)); break;  //AND
  case  1: r(target) = BIT(r(target) ^ r(source)); break;  //EOR
  case  2: r(target) = BIT(LSL(r(target), r(source))); break;  //LSL
  case  3: r(target) = BIT(LSR(r(target), r(source))); break;  //LSR
  case  4: r(target) = BIT(ASR(r(target), r(source))); break;  //ASR
  case  5: r(target) = ADD(r(target), r(source), cpsr().c); break;  //ADC
  case  6: r(target) = SUB(r(target), r(source), cpsr().c); break;  //SBC
  case  7: r(target) = BIT(ROR(r(target), r(source))); break;  //ROR
  case  8: BIT(r(target) & r(source)); break;  //TST (flags only)
  case  9: r(target) = SUB(0, r(source), 1); break;  //NEG
  case 10: SUB(r(target), r(source), 1); break;  //CMP (flags only)
  case 11: ADD(r(target), r(source), 0); break;  //CMN (flags only)
  case 12: r(target) = BIT(r(target) | r(source)); break;  //ORR
  case 13: r(target) = MUL(0, r(source), r(target)); break;  //MUL
  case 14: r(target) = BIT(r(target) & ~r(source)); break;  //BIC
  case 15: r(target) = BIT(~r(source)); break;  //MVN
  }
}
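For context, the BIT/ADD/SUB helpers that thumbALU calls to set the CPSR flags might look roughly like the following. This is a hedged sketch only: the Flags struct and the helper bodies here are illustrative assumptions, not higan's actual implementation (which lives in the core's algorithms file and handles more cases).

```cpp
#include <cstdint>

//hypothetical stand-in for the core's CPSR flag state
struct Flags { bool n, z, c, v; };
inline Flags& cpsr() { static Flags flags{}; return flags; }

//logical result: update N and Z only; C and V are left alone
inline uint32_t BIT(uint32_t result) {
  cpsr().n = result >> 31;
  cpsr().z = result == 0;
  return result;
}

//addition with explicit carry-in: update N, Z, C, V
inline uint32_t ADD(uint32_t a, uint32_t b, bool carry) {
  uint64_t result = (uint64_t)a + b + carry;
  cpsr().c = result >> 32 & 1;
  cpsr().v = (~(a ^ b) & (a ^ (uint32_t)result)) >> 31;
  return BIT((uint32_t)result);
}

//subtraction as addition of the complement: SBC forwards the carry flag,
//while CMP and NEG pass carry = 1 (meaning "no borrow")
inline uint32_t SUB(uint32_t a, uint32_t b, bool carry) {
  return ADD(a, ~b, carry);
}
```

This mirrors why NEG above is written as `SUB(0, r(source), 1)`: two's-complement negation is zero minus the source with the borrow clear.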
Update to v103r28 release. byuu says:

Changelog:
- processor/arm7tdmi: implemented 10 of 19 ARM instructions
- processor/arm7tdmi: implemented 1 of 22 THUMB instructions

Today's WIP was 6 hours of work, and yesterday's was 5 hours. Half of today was just trying to come up with the design to use a lambda-based dispatcher to map both instructions and disassembly, similar to the 68K core. The problem is that the ARM core has 28 unique bits, which is just far too many bits to have a full lookup table like the 16-bit 68K core.

The thing I wanted more than anything else was to perform the opcode bitfield decoding once, and have it decoded for both instructions and the disassembler. It took three hours to come up with a design that worked for the ARM half ... relying on #defines being able to pull in other #defines that were declared and changed later, after the first one. But I'm happy with it. The decoding is in the table building, as it is with the 68K core. The decoding does happen at run-time on each instruction invocation, but it has to be done.

As to the THUMB core, I can create a 64K-entry lambda table to cover all possible encodings, and ... even though it's a cache killer, I've decided to go for it, given the outstanding performance it obtained in the M68K core, as well as considering that THUMB mode is far more common in GBA games.

As to both cores ... I'm a little torn between two extremes: on the one hand, I can condense the number of ARM/THUMB instructions further to eliminate more redundant code. On the other, I can split them apart to reduce the number of conditional tests needed to execute each instruction. It's really the disassembler that makes me not want to split them up further ... as I have to split the disassembler functions up equally to the instruction functions. But it may be worth it if it's a speed improvement.
2017-08-07 12:20:35 +00:00
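The 64K-entry THUMB lambda table described above can be sketched as follows. ThumbCore, its register file, and the format-2 decode shown here are simplified assumptions for illustration, not higan's actual table builder: the point is that every possible 16-bit opcode indexes directly into the table, so bitfield decoding happens once at table-build time.

```cpp
#include <array>
#include <cstdint>
#include <functional>

//hypothetical simplified core: only the low registers and one THUMB format
struct ThumbCore {
  std::array<uint32_t, 8> r{};
  std::array<std::function<void(ThumbCore&, uint16_t)>, 65536> table;

  ThumbCore() {
    for(uint32_t opcode = 0; opcode < 65536; opcode++) {
      //THUMB format 2, register form: bits 15-10 == 000110
      if((opcode & 0xfc00) == 0x1800) {
        table[opcode] = [](ThumbCore& self, uint16_t op) {
          uint32_t d = op >> 0 & 7;     //destination register
          uint32_t n = op >> 3 & 7;     //first source register
          uint32_t m = op >> 6 & 7;     //second source register
          uint32_t mode = op >> 9 & 1;  //0 = ADD, 1 = SUB
          self.r[d] = mode == 0 ? self.r[n] + self.r[m]
                                : self.r[n] - self.r[m];
        };
      } else {
        table[opcode] = [](ThumbCore&, uint16_t) {};  //other formats elided
      }
    }
  }

  void execute(uint16_t opcode) { table[opcode](*this, opcode); }
};
```

The trade-off is exactly as described: 64K std::function entries are heavy on the cache, but dispatch is a single indexed call with no per-instruction bitmask tests.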
//
auto ARM7TDMI::thumbInstructionAdjustRegister(uint3 d, uint3 n, uint3 m, uint1 mode) -> void {
  switch(mode) {
  case 0: r(d) = ADD(r(n), r(m), 0); break;  //ADD
  case 1: r(d) = SUB(r(n), r(m), 1); break;  //SUB
  }
}