mirror of https://github.com/bsnes-emu/bsnes.git
14 Commits
Author | SHA1 | Message | Date |
---|---|---|---|
Tim Allen | 8bdf8f2a55 |
Update to v101r01 release.
byuu says: Changelog: - added eight more 68K instructions - split ADD(direction) into two separate ADD functions I now have 54 out of 88 instructions implemented (thus, 34 remaining.) The map is missing 25,182 entries out of 65,536. Down from 32,680 for v101.00 Aside: this version number feels really silly. r10 and r11 surely will as well ... |
|
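[Editor's note: for context, the v101r01 map-coverage figures above work out as follows (my arithmetic, not from the commit message):

```latex
65{,}536 - 25{,}182 = 40{,}354 \approx 61.6\% \text{ bound (v101r01)}, \qquad
65{,}536 - 32{,}680 = 32{,}856 \approx 50.1\% \text{ bound (v101)}
```

the latter matching the "just barely above 50% of the instruction encoding space" figure in the v100r15 notes below.]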
Tim Allen | c50723ef61 |
Update to v100r15 release.
byuu wrote: Aforementioned scheduler changes added. Longer explanation of why here: http://hastebin.com/raw/toxedenece

Again, we really need to test this as thoroughly as possible for regressions :/ This is a really major change that affects absolutely everything: all emulation cores, all coprocessors, etc.

Also added ADDX and SUB to the 68K core, which brings us just barely above 50% of the instruction encoding space completed.

[Editor's note: The "aforementioned scheduler changes" were described in a previous forum post:

Unfortunately, 64 bits just wasn't enough precision (we were getting misalignments ~230 times a second on 21/24MHz clocks), so I had to move to 128-bit counters. This of course doesn't exist on 32-bit architectures (and probably not on all 64-bit ones either), so for now ... higan's only going to compile on 64-bit machines until we figure something out. Maybe we offer a "lower precision" fallback for machines that lack uint128_t or something. Using the Booth algorithm would be way too slow.

Anyway, the precision is now 2^-96, which is roughly 10^-29. That puts us far beyond the yoctosecond. Suck it, MAME :P I'm jokingly referring to it as the byuusecond. The other 32 bits of precision allow a 1Hz clock to run up to one full second before all clocks need to be normalized to prevent overflow.

I fixed a serious wobbling issue where I was using clock > other.clock for synchronization instead of clock >= other.clock; and also another aliasing issue when two threads share a common frequency, but don't run in lock-step. The latter I don't even fully understand, but I did observe it in testing.

nall/serialization.hpp has been extended to support 128-bit integers, but without explicitly naming them (yay generic code), so nall will still compile on 32-bit platforms for all other applications.

Speed is basically a wash now. FC's a bit slower, SFC's a bit faster.

The "longer explanation" in the linked hastebin is:

Okay, so the idea is that we can have an arbitrary number of oscillators. Take the SNES:

- CPU/PPU clock = 21477272.727272hz
- SMP/DSP clock = 24576000hz
- Cartridge DSP1 clock = 8000000hz
- Cartridge MSU1 clock = 44100hz
- Controller Port 1 modem controller clock = 57600hz
- Controller Port 2 barcode battler clock = 115200hz
- Expansion Port exercise bike clock = 192000hz

Is this a pathological case? Of course it is, but it's possible. The first four do exist in the wild already: see the Rockman X2 MSU1 patch. Manifest files with higan let you specify any frequency you want for any component.

The old trick higan used was to hold an int64 counter for each thread:thread synchronization, and adjust it like so:

- if thread A steps X clocks; then clock += X * threadB.frequency
- if clock >= 0; switch to threadB
- if thread B steps X clocks; then clock -= X * threadA.frequency
- if clock < 0; switch to threadA

But there are also system configurations where one processor has to synchronize with more than one other processor. Take the Genesis:

- the 68K has to sync with the Z80 and PSG and YM2612 and VDP
- the Z80 has to sync with the 68K and PSG and YM2612
- the PSG has to sync with the 68K and Z80 and YM2612

Now I could do this by having an int64 clock value for every association. But these clock values would have to be outside the individual Thread class objects, and we would have to update every relationship's clock value. So the 68K would have to update the Z80, PSG, YM2612 and VDP clocks. That's four expensive 64-bit multiply-adds per clock step event instead of one.
As such, we have to account for both possibilities. The only way to do this is with a single time base. We do this like so:

- setup: scalar = timeBase / frequency
- step: clock += scalar * clocks

Once per second, we look at every thread, find the smallest clock value. Then subtract that value from all threads. This prevents the clock counters from overflowing.

Unfortunately, these oscillator values are psychotic, unpredictable, and often times repeating fractions. Even with a timeBase of 1,000,000,000,000,000,000 (one attosecond); we get rounding errors every ~16,300 synchronizations. Specifically, this happens with a CPU running at 21477273hz (rounded) and SMP running at 24576000hz. That may be good enough for most emulators, but ... you know how I am. Plus, even at the attosecond level, we're really pushing against the limits of 64-bit integers. Given the reciprocal inverse, a frequency of 1Hz (which does exist in higan!) would have a scalar that consumes 1/18th of the entire range of a uint64 on every single step. Yes, I could raise the frequency, and then step by that amount, I know. But I don't want to have weird gotchas like that in the scheduler core.

Until I increase the accuracy to about 100 times greater than a yoctosecond, the rounding errors are too great. And since the only choice above 64-bit values is 128-bit values; we might as well use all the extra headroom. 2^-96 as a timebase gives me the ability to have both a 1Hz and 4GHz clock; and run them both for a full second; before an overflow event would occur.

Another hastebin includes demonstration code:

```cpp
#include <libco/libco.h>
#include <nall/nall.hpp>
using namespace nall;

//
cothread_t mainThread = nullptr;

const uint iterations = 100'000'000;
const uint cpuFreq = 21477272.727272 + 0.5;
const uint smpFreq = 24576000.000000 + 0.5;
const uint cpuStep = 4;
const uint smpStep = 5;

//
struct ThreadA {
  cothread_t handle = nullptr;
  uint64 frequency = 0;
  int64 clock = 0;

  auto create(auto (*entrypoint)() -> void, uint frequency) {
    this->handle = co_create(65536, entrypoint);
    this->frequency = frequency;
    this->clock = 0;
  }
};

struct CPUA : ThreadA {
  static auto Enter() -> void;
  auto main() -> void;
  CPUA() { create(&CPUA::Enter, cpuFreq); }
} cpuA;

struct SMPA : ThreadA {
  static auto Enter() -> void;
  auto main() -> void;
  SMPA() { create(&SMPA::Enter, smpFreq); }
} smpA;

uint8 queueA[iterations];
uint offsetA;
cothread_t resumeA = cpuA.handle;

auto EnterA() -> void {
  offsetA = 0;
  co_switch(resumeA);
}

auto QueueA(uint value) -> void {
  queueA[offsetA++] = value;
  if(offsetA >= iterations) {
    resumeA = co_active();
    co_switch(mainThread);
  }
}

auto CPUA::Enter() -> void { while(true) cpuA.main(); }

auto CPUA::main() -> void {
  QueueA(1);
  smpA.clock -= cpuStep * smpA.frequency;
  if(smpA.clock < 0) co_switch(smpA.handle);
}

auto SMPA::Enter() -> void { while(true) smpA.main(); }

auto SMPA::main() -> void {
  QueueA(2);
  smpA.clock += smpStep * cpuA.frequency;
  if(smpA.clock >= 0) co_switch(cpuA.handle);
}

//
struct ThreadB {
  cothread_t handle = nullptr;
  uint128_t scalar = 0;
  uint128_t clock = 0;

  auto print128(uint128_t value) {
    string s;
    while(value) {
      s.append((char)('0' + value % 10));
      value /= 10;
    }
    s.reverse();
    print(s, "\n");
  }

  //femtosecond (10^15) = 16306
  //attosecond (10^18) = 688838
  //zeptosecond (10^21) = 13712691
  //yoctosecond (10^24) = 13712691 (hitting a dead-end on a rounding error causing a wobble)
  //byuusecond? ( 2^96) = (perfect? 79,228 times more precise than a yoctosecond)
  auto create(auto (*entrypoint)() -> void, uint128_t frequency) {
    this->handle = co_create(65536, entrypoint);

    uint128_t unitOfTime = 1;
    //for(uint n : range(29)) unitOfTime *= 10;
    unitOfTime <<= 96;  //2^96 time units ...

    this->scalar = unitOfTime / frequency;
    print128(this->scalar);

    this->clock = 0;
  }

  auto step(uint128_t clocks) -> void {
    clock += clocks * scalar;
  }

  auto synchronize(ThreadB& thread) -> void {
    if(clock >= thread.clock) co_switch(thread.handle);
  }
};

struct CPUB : ThreadB {
  static auto Enter() -> void;
  auto main() -> void;
  CPUB() { create(&CPUB::Enter, cpuFreq); }
} cpuB;

struct SMPB : ThreadB {
  static auto Enter() -> void;
  auto main() -> void;
  SMPB() { create(&SMPB::Enter, smpFreq); clock = 1; }
} smpB;

auto correct() -> void {
  auto minimum = min(cpuB.clock, smpB.clock);
  cpuB.clock -= minimum;
  smpB.clock -= minimum;
}

uint8 queueB[iterations];
uint offsetB;
cothread_t resumeB = cpuB.handle;

auto EnterB() -> void {
  correct();
  offsetB = 0;
  co_switch(resumeB);
}

auto QueueB(uint value) -> void {
  queueB[offsetB++] = value;
  if(offsetB >= iterations) {
    resumeB = co_active();
    co_switch(mainThread);
  }
}

auto CPUB::Enter() -> void { while(true) cpuB.main(); }

auto CPUB::main() -> void {
  QueueB(1);
  step(cpuStep);
  synchronize(smpB);
}

auto SMPB::Enter() -> void { while(true) smpB.main(); }

auto SMPB::main() -> void {
  QueueB(2);
  step(smpStep);
  synchronize(cpuB);
}

//
#include <nall/main.hpp>
auto nall::main(string_vector) -> void {
  mainThread = co_active();

  uint masterCounter = 0;
  while(true) {
    print(masterCounter++, " ...\n");

    auto A = clock();
    EnterA();
    auto B = clock();
    print((double)(B - A) / CLOCKS_PER_SEC, "s\n");

    auto C = clock();
    EnterB();
    auto D = clock();
    print((double)(D - C) / CLOCKS_PER_SEC, "s\n");

    for(uint n : range(iterations)) {
      if(queueA[n] != queueB[n]) return print("fail at ", n, "\n");
    }
  }
}
```

...and that's everything.] |
|
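[Editor's note: a quick back-of-the-envelope check of the 2^-96 timebase figures quoted above (my arithmetic, not from the original posts). Each thread's scalar is 2^96 divided by its frequency, so after t emulated seconds every counter holds the same value regardless of frequency:

```latex
\mathrm{scalar}(f) = \frac{2^{96}}{f}, \qquad
\mathrm{clock}(t) = \mathrm{scalar}(f) \cdot f \cdot t = 2^{96}\, t
```

After one second every counter therefore sits near 2^96, a factor of 2^32 below the uint128 ceiling; that is the "other 32 bits" of headroom before normalization is required. By contrast, an attosecond (10^18) timebase gives a 1Hz clock a scalar of 10^18 ≈ 2^64 / 18.4, i.e. roughly 1/18th of the whole uint64 range per step, matching the figure above.]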
Tim Allen | 306cac2b54 |
Update to v100r13 release.
byuu says: Changelog: M68K improvements, new instructions added. |
|
Tim Allen | f230d144b5 |
Update to v100r12 release.
byuu says: All of the above fixes, plus I added all 24 variations on the shift opcodes, plus SUBQ, plus fixes to the BCC instruction. I can now run 851,767 instructions into Sonic the Hedgehog before hitting an unimplemented instruction (SUB). The 68K core is probably only ~35% complete, and yet it's already within 4KiB of being the largest CPU core, code size wise, in all of higan. Fuck this chip. |
|
Tim Allen | 7ccfbe0206 |
Update to v100r11 release.
byuu says: I split the Register class and read/write handlers into DataRegister and AddressRegister, given that they have different behaviors on byte/word accesses (data tends to preserve the upper bits; address tends to sign-extend things.) I expanded EA to EffectiveAddress. No sense in abbreviating things to death.

I've now implemented 26 instructions. But the new ones are just all the stupid from/to ccr/sr instructions. Ryphecha confirmed that you can't set the undefined bits, so I don't think the BitField concept is appropriate for the CCR/SR. Instead, I'm just storing direct flags and have (read,write)(CCR,SR) instead. This isn't like the 65816 where you have subroutines that push and pop the flag register. It's much more common to access individual flags. Doesn't match the consistency angle of the other CPU cores, but ... I think this is the right thing to do for the 68K specifically. |
|
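[Editor's note: a minimal C++17-style sketch of the byte/word write behavior described in the v100r11 note above (and the v100r10 note that follows). The class and method names are illustrative, not higan's actual code: data registers keep their untouched upper bits, address registers sign-extend into the full 32 bits.

```cpp
#include <cstdint>

enum : unsigned { Byte = 1, Word = 2, Long = 4 };

struct DataRegister {
  uint32_t value = 0;
  template<unsigned Size> auto write(uint32_t data) -> void {
    //byte/word writes replace only the low bits; the rest is preserved
    if constexpr(Size == Byte) value = (value & 0xffffff00) | (data & 0xff);
    if constexpr(Size == Word) value = (value & 0xffff0000) | (data & 0xffff);
    if constexpr(Size == Long) value = data;
  }
};

struct AddressRegister {
  uint32_t value = 0;
  template<unsigned Size> auto write(uint32_t data) -> void {
    //word writes are sign-extended to the full 32 bits
    //(byte-sized writes to address registers do not occur on the 68000)
    if constexpr(Size == Word) value = (uint32_t)(int32_t)(int16_t)data;
    if constexpr(Size == Long) value = data;
  }
};
```
]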
Tim Allen | 4b897ba791 |
Update to v100r10 release.
byuu says: Redesigned the handling of reading/writing registers to be about eight times faster than the old system. More work may be needed ... it seems data registers tend to preserve their upper bits upon assignment; whereas address registers tend to sign-extend values into them. It may make sense to have DataRegister and AddressRegister classes with separate read/write handlers. I'd have to hold two Register objects inside the EffectiveAddress (EA) class if we do that.

Implemented 19 opcodes now (out of somewhere between 60 and 90.) That gets the first ~530,000 instructions in Sonic the Hedgehog running (though probably wrong. But we can run a lot thanks to large initialization loops.) If I force the core to loop back to the reset vector on an invalid opcode, I'm getting about 1500fps with a dumb 320x240 blit 60 times a second and just the 68K running alone (no Z80, PSG, VDP, YM2612.) I don't know if that's good or not. I guess we'll find out.

I had to stop tonight because the final opcode I execute is an RTS (return from subroutine) that's branching back to address 0, which is invalid ... meaning something went terribly wrong and the system crashed. |
|
Tim Allen | be3f6ac0d5 |
Update to v100r09 release.
byuu says: Another six hours in ... I have all of the opcodes, memory access functions, disassembler mnemonics and table building converted over to the new template<uint Size> format. Certainly, it would be quite easy for this nightmare chip to throw me another curveball, but so far I can handle:

- MOVE (EA to, EA from) case
  - read(from) has to update the register index for +/-(aN) mode
- MOVEM (EA from) case
  - when using +/-(aN), RA can't actually be updated until the transfer is completed
- LEA (EA from) case
  - doesn't actually perform the final read; just returns the address to be read from
- ANDI (EA from-and-to) case
  - the same EA has to be read from and written to
  - for -(aN), the read has to come from aN-2, but can't update aN yet, so that the write also goes to aN-2
- no opcode can ever fetch the extension words more than once
- manually control the order of extension word fetching for proper opcode decoding

To do all of that without a whole lot of duplicated code (or really bloating out every single instruction with red tape), I had to bring back the "bool valid / uint32 address" variables inside the EA struct =( If weird exceptions creep in, like timing constraints only on certain opcodes, I can use template flags to the EA read/write functions to handle that. |
|
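[Editor's note: a minimal sketch, not higan's actual code, of the "bool valid / uint32 address" caching the v100r09 note above says it had to bring back, using the ANDI -(aN) case as the example: the read comes from aN-2 without committing the predecrement, the write reuses the same cached address, and only then is the register updated. All names and the mode numbering are illustrative.

```cpp
#include <cstdint>

struct EffectiveAddress {
  unsigned mode = 0;     //e.g. 4 = address register indirect with predecrement, -(aN)
  unsigned reg  = 0;     //which aN
  bool valid = false;    //has the address been computed yet?
  uint32_t address = 0;  //cached address, reused by every access of this EA
};

struct M68K {
  uint32_t a[8] = {};  //address registers

  //bus stubs for the sketch
  auto readBus(uint32_t, unsigned) -> uint32_t { return 0; }
  auto writeBus(uint32_t, unsigned, uint32_t) -> void {}

  //compute the address once and cache it; later accesses reuse it
  template<unsigned Size> auto fetch(EffectiveAddress& ea) -> uint32_t {
    if(!ea.valid) {
      ea.valid = true;
      if(ea.mode == 4) ea.address = a[ea.reg] - Size;  //-(aN): aN-2 for word size
      //... other addressing modes ...
    }
    return ea.address;
  }

  //read-modify-write ("from-and-to") case: the read must not commit the predecrement yet ...
  template<unsigned Size> auto read(EffectiveAddress& ea) -> uint32_t {
    return readBus(fetch<Size>(ea), Size);
  }

  //... so the write reuses the same cached aN-2 address, and the register
  //update is committed only after the final access
  template<unsigned Size> auto write(EffectiveAddress& ea, uint32_t data) -> void {
    writeBus(fetch<Size>(ea), Size, data);
    if(ea.mode == 4) a[ea.reg] = ea.address;
  }
};
```
]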
Tim Allen | 92fe5b0813 |
Update to v100r08 release.
byuu says: Six and a half hours this time ... one new opcode, and all old opcodes now in a deprecated format. Hooray, progress!

For building the table, I've decided to move from:

for(uint opcode : range(65536)) { if(match(...)) bind(opNAME, ...); }

to instead having separate for loops for each supported opcode. This lets me specialize parts I want with templates. And to this aim, I'm moving to replace all of the (read,write)(size, ...) functions with (read,write)<Size>(...) functions. This will amount to the ~70ish instructions being triplicated to ~210ish instructions; but I think this is really important. When I was getting into flag calculations, a ton of conditionals were needed to mask sizes to byte/word/long. There were also lots of conditionals in all the memory access handlers. The template code is ugly, but we eliminate a huge amount of branch conditions this way. |
|
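[Editor's note: a minimal sketch of the per-opcode table-building style the v100r08 note above describes. The names, the lambda-based table, and the bit layout are illustrative only, not higan's actual implementation or the real AND encoding.

```cpp
#include <cstdint>
#include <functional>

enum : unsigned { Byte, Word, Long };

struct M68K {
  std::function<void ()> instructionTable[65536];

  //one Size-specialized implementation per size: flag masking and memory
  //accesses need no run-time byte/word/long branches
  template<unsigned Size> auto instructionAND(uint16_t opcode) -> void {
    (void)opcode;  //body omitted in this sketch
  }

  //old style (quoted above): one pass over all 65536 encodings.
  //new style: a dedicated loop per instruction, baking the size in as a
  //template argument
  auto bindAND() -> void {
    for(unsigned size : {Byte, Word, Long})
    for(unsigned reg = 0; reg < 8; reg++)
    for(unsigned mode = 0; mode < 8; mode++) {
      uint16_t opcode = 0xc000 | reg << 9 | size << 6 | mode << 3;
      if(size == Byte) instructionTable[opcode] = [this, opcode] { instructionAND<Byte>(opcode); };
      if(size == Word) instructionTable[opcode] = [this, opcode] { instructionAND<Word>(opcode); };
      if(size == Long) instructionTable[opcode] = [this, opcode] { instructionAND<Long>(opcode); };
    }
  }
};
```
]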
Tim Allen | 059347e575 |
Update to v100r07 release.
byuu says: Four and a half hours of work and ... zero new opcodes implemented. This was the best job I could do refining the effective address computations. Should have all twelve 68000 modes implemented now. Still have a billion questions about when and how I'm supposed to perform certain edge case operations, though. |
|
Tim Allen | 0d6a09f9f8 |
Update to v100r06 release.
byuu says: Up to ten 68K instructions out of somewhere between 61 and 88, depending upon which PDF you look at. Of course, some of them aren't 100% completed yet, either. Lots of craziness with MOVEM, and BCC has a BSR variant that needs stack push/pop functions.

This WIP actually took over eight hours to make, going through every possible permutation on how to design the core itself. The updated design now builds both the instruction decoder+dispatcher and the disassembler decoder into the same main loop during M68K's constructor. The special cases are also really psychotic on this processor, and I'm afraid of missing something via the fallthrough cases. So instead, I'm ordering the instructions alphabetically, and including exclusion cases to ignore binding invalid cases. If I end up remapping an existing register, then it'll throw a run-time assertion at program startup.

I wanted very much to get rid of struct EA (EffectiveAddress), but it's too difficult to keep track of the internal effective address without it. So I split out the size to a separate parameter, since every opcode only has one size parameter, and otherwise it was getting duplicated in opcodes that take two EAs, and was also awkward with the flag testing. It's a bit more typing, but I feel it's more clean this way.

Overall, I'm really worried this is going to be too slow. I don't want to turn the EA stuff into templates, because that will massively bloat out compilation times and object sizes, and will also need a special DSL preprocessor since C++ doesn't have a static for loop. I can definitely optimize a lot of EA's address/read/write functions away once the core is completed, but it's never going to hold a candle to a templatized 68K core.

---- Forgot to include the SA-1 regression fix. I always remember immediately after I upload and archive the WIP. Will try to get that in next time, I guess. |
|
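[Editor's note: a minimal sketch of the "exclusion cases plus run-time assertion" approach the v100r06 note above describes for catching overlapping opcode bindings the moment the table is built; the encoding, names, and the particular exclusion shown are illustrative only.

```cpp
#include <cassert>
#include <cstdint>
#include <functional>

struct M68K {
  std::function<void ()> instructionTable[65536];

  //refuse to remap an entry that is already bound; with instructions bound
  //alphabetically and invalid encodings excluded explicitly, any overlap
  //trips this assertion at program startup
  auto bind(uint16_t opcode, std::function<void ()> handler) -> void {
    assert(!instructionTable[opcode] && "opcode encoding bound twice");
    instructionTable[opcode] = handler;
  }

  auto bindExample() -> void {
    for(unsigned mode = 0; mode < 8; mode++) {
      if(mode == 1) continue;  //example exclusion case: skip an invalid addressing mode
      bind(0x4e80 | mode << 3, [] { /* instruction body */ });
    }
  }
};
```
]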
Tim Allen | b72f35a13e |
Update to v100r05 release.
byuu says: Alright, I'm definitely going to need to find some people willing to tolerate my questions on this chip, so I'm going to go ahead and announce I'm working on this I guess. This core is way too big for a surprise like the NES and WS cores were. It'll probably even span multiple v10x releases before it's even ready. |
|
Tim Allen | 1c0ef793fe |
Update to v100r04 release.
byuu says: I now have enough of three instructions implemented to get through the first four instructions in Sonic the Hedgehog. But they're far from complete. The very first instruction uses EA addressing, which is similar to x86's ModRM in terms of how disgustingly complex it is. And it also accesses Z80 control registers, which obviously isn't going to do anything yet. The slow speed was me being stupid again. It's not 7.6MHz per frame, it's 7.67MHz per second. So yeah, speed is so far acceptable again. But we'll see how things go as I keep emulating more. The 68K decode is not pretty at all. |
|
Tim Allen | 76a8ecd32a |
Update to v100r03 release.
byuu says: Changelog: - moved Thread, Scheduler, Cheat functionality into emulator/ for all cores - start of actual Mega Drive emulation (two 68K instructions) I'm going to be rather terse on MD emulation, as it's too early for any meaningful dialogue here. |
|
Tim Allen | 3dd1aa9c1b |
Update to v100r02 release.
byuu says: Sigh ... I'm really not a good person. I'm inherently selfish. My responsibility and obligation right now is to work on loki, and then on the Tengai Makyou Zero translation, and then on improving the Famicom emulation. And yet ... it's not what I really want to do. That shouldn't matter; I should work on my responsibilities first. Instead, I'm going to be a greedy, self-centered asshole, and work on what I really want to instead.

I'm really sorry, guys. I'm sure this will make a few people happy, and probably upset even more people. I'm also making zero guarantees that this ever gets finished. As always, I wish I could keep these things secret, so if I fail / give up, I could just drop it with no shame. But I would have to cut everyone out of the WIP process completely to make it happen. So, here goes ...

This WIP adds the initial skeleton for Sega Mega Drive / Genesis emulation. God help us.

(minor note: apparently the new extension for Mega Drive games is .md, neat. That's what I chose for the folders too. I thought it was .smd, so that'll be fixed in icarus for the next WIP.)

(aside: this is why I wanted to get v100 out. I didn't want this code in a skeleton state in v100's source. Nor did I want really broken emulation, which the first release is sure to be, tarring said release.)

... So, basically, I've been ruminating on the legacy I want to leave behind with higan. 3D systems are just plain out. I'm never going to support them. They're too complex for my abilities, and they would run too slowly with my design style. I'm not willing to compromise my design ideals. And I would never want to play a 3D game system at native 240p/480i resolution ... but 1080p+ upscaling is not accurate, so that's a conflict I want to avoid entirely. It's also never going to emulate computer systems (X68K, PC-98, FM-Towns, etc) because holy shit that would completely destroy me. It's also never going to emulate arcade machines.

So I think of higan as a collection of 2D emulators for consoles and handhelds. I've gone over every major 2D gaming system there is, looking for ones with games I actually care about and enjoy. And I basically have five of those systems supported already. Looking at the remaining list, I see only three systems left that I have any interest in whatsoever: PC-Engine, Master System, Mega Drive. Again, I'm not in any way committing to emulating any of these, but ... if I had all of those in higan, I think I'd be content to really, truly, finally stop writing more emulators for the rest of my life.

And so I decided to tackle the most difficult system first. If I'm successful, the Z80 core should cover a lot of the work on the SMS. And the HuC6280 should land somewhere between the NES and SNES in terms of difficulty ... closer to the NES.

The systems that just don't appeal to me at all, which I will never touch, include, but are not limited to:

* Atari 2600/5200/7800
* Lynx
* Jaguar
* Vectrex
* Colecovision
* Commodore 64
* Neo-Geo
* Neo-Geo Pocket / Color
* Virtual Boy
* Super A'can
* 32X
* CD-i
* etc, etc, etc.

And really, even if something were mildly interesting in there ... we have to stop. I can't scale infinitely. I'm already way past my limit, but I'm doing this anyway. Too many cores bloats everything and kills quality on everything. I don't want higan to become MESS v2.

I don't know what I'll do about the Famicom Disk System, PC-Engine CD, and Mega CD. I don't think I'll be able to achieve 60fps emulating the Mega CD, even if I tried to.
I don't know what's going to happen here with even the Mega Drive. Maybe I'll get driven crazy with the documentation and quit. Maybe it'll end up being too complicated and I'll quit. Maybe the emulation will end up way too slow and I'll give up. Maybe it'll take me seven years to get any games playable at all. Maybe Steve Snake, AamirM and Mike Pavone will pool money to hire a hitman to come after me. Who knows. But this is what I want to do, so ... here goes nothing. |