bsnes/nall/primitives.hpp

#pragma once
#include <nall/atoi.hpp>
#include <nall/serializer.hpp>
#include <nall/traits.hpp>
namespace nall {
Update to v106r77 release. byuu says:

So this turned out to be a rather unproductive ten-hour rabbit hole, but ... I reworked nall/primitives.hpp a lot. And because the changes are massive, testing of this WIP for regressions is critically important. I really can't stress that enough; we're almost certainly going to have some hidden regressions here ...

We now have a nall/primitives/ subfolder that splits the classes into manageable components. The bit-field support is now shared between both Natural and Integer. All of the assignment operator overloads are now templated and take references instead of values. Things like the GSU::Register class are non-copyable on account of the function<> object inside them, and previously only operator= would work with classes like that.

The big change is nall/primitives/operators.hpp, which is a really elaborate system to compute the minimum number of bits needed for any operation, and to return a Natural<T> or Integer<T> when one or both of the arguments are such a type. Unfortunately, it doesn't really work yet ... Kirby's Dream Land 3 breaks if we include operators.hpp.

Zelda 3 runs fine with this, but I had to make a huge number of core changes, including introducing a new ternary(bool, lhs, rhs) function in nall/algorithm to get past Natural<X> and Natural<Y> not being equivalent (is_integral types get a special exemption to ternary ?: type equivalence, yet it's impossible to simulate that with our own classes, which is bullshit.) The horrifying part is that ternary() will evaluate both lhs and rhs, unlike ?:. I converted some of the functions to test ? uint(x) : uint(y), and others to ternary(test, x, y) ... I don't have a strong preference either way yet. But the part where things may have gotten broken is in the changes to where ternary() was placed. In some cases, like the GBA PPU renderer, the order of evaluation was rather unclear, so I may have made a mistake somewhere. So again, please please test this if you can. Or even better, look over the diff.

Longer-term, I'd really like to enable nall/primitives/operators.hpp, but right now I'm not sure why Kirby's Dream Land 3 is breaking. Help would be appreciated, but ... it's gonna be really complex and difficult to debug, so I'm probably gonna be on my own here ... sigh.
2019-01-13 06:25:14 +00:00
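A minimal sketch of what the ternary() helper described above might look like; the exact nall/algorithm definition may differ, but the key property holds regardless: as an ordinary function, it evaluates both lhs and rhs before selecting one, unlike the short-circuiting ?: operator.

//hypothetical sketch of the ternary() helper; the real definition lives in
//nall/algorithm and its exact signature may differ. because lhs and rhs are
//ordinary function arguments, both are evaluated at the call site.
template<typename L, typename R>
auto ternary(bool test, const L& lhs, const R& rhs) -> L {
  if(test) return lhs;
  return L(rhs);  //convert rhs to the lhs type, sidestepping ?: type equivalence
}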
struct Boolean;
template<uint Precision = 64> struct Natural;
template<uint Precision = 64> struct Integer;
template<uint Precision = 64> struct Real;
Update to v106r76 release. byuu says:

I added some useful new functions to nall/primitives:

  auto Natural<T>::integer() const -> Integer<T>;
  auto Integer<T>::natural() const -> Natural<T>;

These let you cast between signed and unsigned representation without having to care about the value of T (eg if you take a Natural<T> as a template parameter.) So for instance when you're given an unsigned type but it's supposed to be a sign-extended type (example: signed multiplication), eg Natural<T> → Integer<T>, you can just say:

  x = y.integer() * z.integer();

The TLCS900H core gained some more pesky instructions such as DAA, BS1F, BS1B.

I stole an optimization from RACE for calculating the overflow flag on addition. Assuming z = x + y + c:

  Before: ~(x ^ y) & (x ^ z) & signBit;
  After:   (x ^ z) & (y ^ z) & signBit;

Subtraction stays the same. Assuming z = x - y - c:

  Same: (x ^ y) & (x ^ z) & signBit;

However, taking a speed penalty, I've implemented the carry computation in a way that doesn't require an extra bit.

  Adding before:      uint9 z = x + y + c; c = z & 0x100;
  Subtracting before: uint9 z = x - y - c; c = z & 0x100;
  Adding after:       uint8 z = x + y + c; c = z < x || z == x && c;
  Subtracting after:  uint8 z = x - y - c; c = z > x || z == x && c;

I haven't been able to code golf the new carry computation to be any shorter, unless I include an extra bit, eg for adding:

  c = z < x + c;

But that defeats the entire point of the change. I want the computation to work even when T is uintmax_t. If anyone can come up with a faster method, please let me know.

Anyway ... I also had to split off INC and DEC because they compute flags differently (word and long modes don't set flags at all; byte mode doesn't set carry at all.) I also added division by zero support, although I don't know if it's actually hardware accurate. It's what other emulators do, though.
2019-01-11 01:51:18 +00:00
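To make the flag computations above concrete, here is an illustrative sketch for plain uint8_t operands (a hypothetical helper, not code from this tree): the carry comes out of the comparison trick without needing a ninth bit, and the overflow uses the RACE-style test.

#include <cstdint>  //for uint8_t

struct AddFlags { uint8_t result; bool carry; bool overflow; };

//illustrative sketch only: 8-bit addition with carry-in, computing the flags
//as described in the notes above.
inline auto add8(uint8_t x, uint8_t y, bool c) -> AddFlags {
  uint8_t z = x + y + c;                     //wraps modulo 256
  bool carry = z < x || (z == x && c);       //carry out, no ninth bit required
  bool overflow = (x ^ z) & (y ^ z) & 0x80;  //RACE-style signed overflow test
  return {z, carry, overflow};
}
//subtraction mirrors this: z = x - y - c; carry = z > x || (z == x && c);
//and overflow = (x ^ y) & (x ^ z) & 0x80;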
}
#include <nall/primitives/bit-field.hpp>
#include <nall/primitives/bit-range.hpp>
#include <nall/primitives/boolean.hpp>
#include <nall/primitives/natural.hpp>
#include <nall/primitives/integer.hpp>
#include <nall/primitives/real.hpp>
#include <nall/primitives/types.hpp>
Update to v106r79 release. byuu says:

This WIP is just work on nall/primitives ...

Basically, I'm coming to the conclusion that it's just not practical to try and make Natural/Integer implicitly castable to primitive signed and unsigned integers. C++ just has too many edge cases there. I also want to get away from the problem of C++ deciding that all math operations return 32-bit values, unless one of the parameters is 64-bit, in which case you get a 64-bit value. You know, so things like array[-1] won't end up accessing the 4 billionth element of the array. It's nice to be fancy and minimally size operations (eg 32-bit + 32-bit = 33-bit), but it's just too unintuitive. I think all Natural<X> + Natural<Y> expressions should result in a Natural<64> (eg natural) type.

nall/primitives/operators.hpp has been removed, and new Natural<>Natural / Integer<>Integer casts exist. My feeling is that signed and unsigned types should not be implicitly convertible where data loss can occur. In the future, I think an integer8*natural8 is fine to return an integer64, and the bitwise operators are probably all fine between the two types. I could probably add (Integer,Natural)+Boolean conversions as well.

To simplify expressions, there are new user-defined literals for _b (boolean), _n (natural), _i (integer), _r (real), _n# (eg _n8), _i# (eg _i8), _r# (eg _r32), and _s (nall::string).

In the long-term, my intention is to make the conversion and cast constructors explicit for primitive types, but obviously that'll shatter most of higan, so for now that won't be the case. Something I can do in the future is allow implicit conversion and casting to (u)int64_t. That may be a nice balance.
2019-01-15 04:33:20 +00:00
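For illustration, one of the sized literals could be declared along these lines; this is a sketch, and the actual definitions in nall/primitives/literals.hpp may differ.

//hypothetical sketch of a sized user-defined literal; see
//nall/primitives/literals.hpp for the real definitions.
inline auto operator""_n8(unsigned long long value) -> Natural<8> {
  return Natural<8>(value);
}
//usage: auto mask = 0x7f_n8;  //mask is a Natural<8> holding 0x7f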
#include <nall/primitives/literals.hpp>
namespace nall {
Update to v106r83 release. byuu says:

Changelog:

  - reverted nall/inline-if.hpp usage for now, since the nall/primitives.hpp math operators still cast to (u)int64_t
  - improved nall/primitives.hpp more; integer8 x = -128; print(-x) will now print 128 (unary operator+ and operator- cast to (u)int64_t)
  - renamed processor/lr35902 to processor/sm83, after the Sharp SM83 CPU core [gekkio discovered the name]
  - a few bugfixes to the TLCS900H CPU core
  - completed the disassembler for the TLCS900H core

As a result of reverting most of the inline-if stuff, I guess the testing priority has been reduced. Which is probably a good thing, considering I seem to have a smaller pool of testers these days.

Indeed, the TLCS900H core has ended up at 131KiB compared to the M68000 core at 128KiB. So it's now the largest CPU core in all of higan. It's even more ridiculous because the M68000 core would ordinarily be quite a bit smaller, had I not gone overboard with the extreme templating to reduce instruction decoding overhead (you kind of have to do this for RISC CPUs, and the inverted design of the TLCS900H kind of makes it infeasible to do the same there.)

This CPU core is bound to have dozens of extremely difficult CPU bugs, and there's no easy way for me to test them. I would greatly appreciate any help in looking over the core for bugs. A fresh pair of eyes to spot a mistake could save me up to several days of tedious debugging work. The core still isn't ready to actually be tested: I have to hook up cartridge loading, a memory bus, interrupts, timers, and the micro DMA controller before it's likely that anything happens at all.
2019-01-19 01:34:17 +00:00
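The unary operator- change above can be seen with a snippet like the following, adapted from the note (illustrative only; print is the nall output helper mentioned in the note):

//illustration of the unary operator- behaviour described above (sketch, not
//code from this file).
auto negateExample() -> void {
  integer8 x = -128;
  print(-x);  //now prints 128: the negation is performed as a 64-bit signed value
}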
template<uint Bits> auto Natural<Bits>::integer() const -> Integer<Bits> { return Integer<Bits>(*this); }
template<uint Bits> auto Integer<Bits>::natural() const -> Natural<Bits> { return Natural<Bits>(*this); }
}
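A usage example for the two conversion helpers above, following the v106r76 notes (an illustrative sketch; the Natural<16> width and function name are arbitrary):

//example: treat two unsigned registers as sign-extended operands for a signed
//multiplication, as described in the v106r76 notes.
auto signedMultiply(Natural<16> y, Natural<16> z) -> int64_t {
  return y.integer() * z.integer();  //both operands are sign-extended before the multiply
}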