//bsnes/nall/string.hpp


#pragma once
#include <stdarg.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <algorithm>
#include <initializer_list>
#include <memory>
#include <nall/platform.hpp>
#include <nall/array-view.hpp>
#include <nall/atoi.hpp>
#include <nall/function.hpp>
#include <nall/intrinsics.hpp>
#include <nall/memory.hpp>
#include <nall/primitives.hpp>
#include <nall/shared-pointer.hpp>
#include <nall/stdint.hpp>
#include <nall/unique-pointer.hpp>
#include <nall/utility.hpp>
#include <nall/varint.hpp>
#include <nall/vector.hpp>
#include <nall/view.hpp>
namespace nall {
struct string;
struct string_format;
struct string_view {
using type = string_view;
//view.hpp
inline string_view();
inline string_view(const string_view& source);
inline string_view(string_view&& source);
inline string_view(const char* data);
inline string_view(const char* data, uint size);
inline string_view(const string& source);
template<typename... P> inline string_view(P&&... p);
inline ~string_view();
inline auto operator=(const string_view& source) -> type&;
inline auto operator=(string_view&& source) -> type&;
inline operator const char*() const;
inline auto data() const -> const char*;
inline auto size() const -> uint;
inline auto begin() const { return &_data[0]; }
inline auto end() const { return &_data[size()]; }
protected:
string* _string;
const char* _data;
mutable int _size;
};
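//usage sketch (illustrative comment, not part of the original header): string_view
//borrows existing character data rather than copying it, so it is cheap to pass by value;
//the byte length may be computed lazily (note the mutable _size member above).
//  string owned = "hello";
//  string_view a = owned;          //refers to owned's buffer
//  string_view b = "literal";      //refers to the literal directly
//  uint n = a.size();              //length in bytes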
//adaptive (SSO + COW) is by far the best choice; the others exist solely to:
//1) demonstrate the performance benefit of combining SSO + COW
//2) rule out allocator bugs by trying different allocators when needed
#define NALL_STRING_ALLOCATOR_ADAPTIVE
//#define NALL_STRING_ALLOCATOR_COPY_ON_WRITE
//#define NALL_STRING_ALLOCATOR_SMALL_STRING_OPTIMIZATION
//#define NALL_STRING_ALLOCATOR_VECTOR
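//behavioral sketch of the adaptive allocator (inferred from the union in nall::string
//below, not authoritative): strings shorter than SSO (24 bytes here) live inline in _text
//with no heap allocation; longer strings share one heap buffer plus a reference count
//(_refs), and the buffer is only duplicated when a shared string is about to be mutated.
//  string a = "a string comfortably longer than the 24-byte SSO buffer";
//  string b = a;          //O(1): b shares a's heap buffer and bumps the reference count
//  b.append("!");         //the shared buffer is copied here, just before mutation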
//cast.hpp
template<typename T> struct stringify;
//format.hpp
template<typename... P> inline auto print(P&&...) -> void;
template<typename... P> inline auto print(FILE*, P&&...) -> void;
template<typename T> inline auto pad(const T& value, long precision = 0, char padchar = ' ') -> string;
inline auto hex(uintmax value, long precision = 0, char padchar = '0') -> string;
inline auto octal(uintmax value, long precision = 0, char padchar = '0') -> string;
inline auto binary(uintmax value, long precision = 0, char padchar = '0') -> string;
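//usage sketch (illustrative; the precise padding rules live in nall/string/format.hpp):
//  hex(255, 4);       //"00ff" -- precision is the minimum digit count
//  binary(5, 8);      //"00000101"
//  pad(42, 5);        //width-5 string built from 42, padded with spaces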
//match.hpp
inline auto tokenize(const char* s, const char* p) -> bool;
inline auto tokenize(vector<string>& list, const char* s, const char* p) -> bool;
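//usage sketch (assumes the '*' wildcard capture semantics implemented in nall/string/match.hpp):
//  vector<string> parts;
//  if(tokenize(parts, "image.bmp", "*.bmp")) {
//    //parts[0] == "image" -- each '*' capture is appended to the list
//  }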
//utf8.hpp
inline auto characters(string_view self, int offset = 0, int length = -1) -> uint;
//utility.hpp
inline auto slice(string_view self, int offset = 0, int length = -1) -> string;
template<typename T> inline auto fromInteger(char* result, T value) -> char*;
template<typename T> inline auto fromNatural(char* result, T value) -> char*;
template<typename T> inline auto fromReal(char* str, T value) -> uint;
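//usage sketch (illustrative): slice() takes a byte offset and length, where a negative
//length means "to the end of the string".
//  string s = "abcdef";
//  string t = slice(s, 2, 3);     //"cde"
//  string u = slice(s, 4);        //"ef"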
struct string {
using type = string;
protected:
#if defined(NALL_STRING_ALLOCATOR_ADAPTIVE)
enum : uint { SSO = 24 };
union {
struct { //copy-on-write
char* _data;
uint* _refs;
};
struct { //small-string-optimization
char _text[SSO];
};
};
inline auto _allocate() -> void;
inline auto _copy() -> void;
inline auto _resize() -> void;
#endif
#if defined(NALL_STRING_ALLOCATOR_COPY_ON_WRITE)
char* _data;
mutable uint* _refs;
inline auto _allocate() -> char*;
inline auto _copy() -> char*;
#endif
#if defined(NALL_STRING_ALLOCATOR_SMALL_STRING_OPTIMIZATION)
enum : uint { SSO = 24 };
union {
char* _data;
char _text[SSO];
};
#endif
#if defined(NALL_STRING_ALLOCATOR_VECTOR)
char* _data;
#endif
uint _capacity;
uint _size;
public:
inline string();
inline string(string& source) : string() { operator=(source); }
inline string(const string& source) : string() { operator=(source); }
inline string(string&& source) : string() { operator=(move(source)); }
template<typename T = char> inline auto get() -> T*;
template<typename T = char> inline auto data() const -> const T*;
template<typename T = char> auto size() const -> uint { return _size / sizeof(T); }
template<typename T = char> auto capacity() const -> uint { return _capacity / sizeof(T); }
inline auto reset() -> type&;
inline auto reserve(uint) -> type&;
inline auto resize(uint) -> type&;
inline auto operator=(const string&) -> type&;
inline auto operator=(string&&) -> type&;
template<typename T, typename... P> string(T&& s, P&&... p) : string() {
append(forward<T>(s), forward<P>(p)...);
}
~string() { reset(); }
explicit operator bool() const { return _size; }
operator const char*() const { return (const char*)data(); }
operator array_span<char>() { return {(char*)get(), size()}; }
operator array_view<char>() const { return {(const char*)data(), size()}; }
operator array_span<uint8_t>() { return {(uint8_t*)get(), size()}; }
operator array_view<uint8_t>() const { return {(const uint8_t*)data(), size()}; }
auto operator==(const string& source) const -> bool {
return size() == source.size() && memory::compare(data(), source.data(), size()) == 0;
}
auto operator!=(const string& source) const -> bool {
return size() != source.size() || memory::compare(data(), source.data(), size()) != 0;
}
auto operator==(const char* source) const -> bool { return strcmp(data(), source) == 0; }
auto operator!=(const char* source) const -> bool { return strcmp(data(), source) != 0; }
auto operator==(string_view source) const -> bool { return compare(source) == 0; }
auto operator!=(string_view source) const -> bool { return compare(source) != 0; }
auto operator< (string_view source) const -> bool { return compare(source) < 0; }
auto operator<=(string_view source) const -> bool { return compare(source) <= 0; }
auto operator> (string_view source) const -> bool { return compare(source) > 0; }
auto operator>=(string_view source) const -> bool { return compare(source) >= 0; }
auto begin() -> char* { return &get()[0]; }
auto end() -> char* { return &get()[size()]; }
auto begin() const -> const char* { return &data()[0]; }
auto end() const -> const char* { return &data()[size()]; }
//atoi.hpp
inline auto boolean() const -> bool;
inline auto integer() const -> intmax;
inline auto natural() const -> uintmax;
inline auto hex() const -> uintmax;
inline auto real() const -> double;
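//usage sketch (hedged; the exact parsing rules live in nall/atoi.hpp):
//  string("true").boolean();      //true
//  string("-42").integer();       //-42
//  string("1ff").hex();           //511
//  string("1.25").real();         //1.25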
//core.hpp
inline auto operator[](uint) const -> const char&;
inline auto operator()(uint, char = 0) const -> char;
template<typename... P> inline auto assign(P&&...) -> type&;
template<typename T, typename... P> inline auto prepend(const T&, P&&...) -> type&;
template<typename... P> inline auto prepend(const nall::string_format&, P&&...) -> type&;
template<typename T> inline auto _prepend(const stringify<T>&) -> type&;
template<typename T, typename... P> inline auto append(const T&, P&&...) -> type&;
template<typename... P> inline auto append(const nall::string_format&, P&&...) -> type&;
template<typename T> inline auto _append(const stringify<T>&) -> type&;
inline auto length() const -> uint;
//find.hpp
inline auto contains(string_view characters) const -> maybe<uint>;
template<bool, bool> inline auto _find(int, string_view) const -> maybe<uint>;
inline auto find(string_view source) const -> maybe<uint>;
inline auto ifind(string_view source) const -> maybe<uint>;
inline auto qfind(string_view source) const -> maybe<uint>;
inline auto iqfind(string_view source) const -> maybe<uint>;
inline auto findFrom(int offset, string_view source) const -> maybe<uint>;
inline auto ifindFrom(int offset, string_view source) const -> maybe<uint>;
inline auto findNext(int offset, string_view source) const -> maybe<uint>;
inline auto ifindNext(int offset, string_view source) const -> maybe<uint>;
inline auto findPrevious(int offset, string_view source) const -> maybe<uint>;
inline auto ifindPrevious(int offset, string_view source) const -> maybe<uint>;
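//usage sketch (illustrative): find() returns maybe<uint>, so "not found" is distinguishable
//from position zero; the i and q prefixes select case-insensitive and quote-aware variants.
//  string s = "foo bar";
//  if(auto p = s.find("bar")) { /* *p == 4 */ }
//  if(!s.find("baz")) { /* not present */ }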
//format.hpp
inline auto format(const nall::string_format& params) -> type&;
//compare.hpp
template<bool> inline static auto _compare(const char*, uint, const char*, uint) -> int;
inline static auto compare(string_view, string_view) -> int;
inline static auto icompare(string_view, string_view) -> int;
inline auto compare(string_view source) const -> int;
inline auto icompare(string_view source) const -> int;
inline auto equals(string_view source) const -> bool;
inline auto iequals(string_view source) const -> bool;
inline auto beginsWith(string_view source) const -> bool;
inline auto ibeginsWith(string_view source) const -> bool;
inline auto endsWith(string_view source) const -> bool;
inline auto iendsWith(string_view source) const -> bool;
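//usage sketch (illustrative):
//  string s = "Filename.SFC";
//  s.iendsWith(".sfc");           //true (case-insensitive)
//  s.beginsWith("File");          //true
//  string::compare("a", "b");     //negative, like strcmp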
//convert.hpp
inline auto downcase() -> type&;
inline auto upcase() -> type&;
inline auto qdowncase() -> type&;
inline auto qupcase() -> type&;
inline auto transform(string_view from, string_view to) -> type&;
//match.hpp
inline auto match(string_view source) const -> bool;
inline auto imatch(string_view source) const -> bool;
//replace.hpp
template<bool, bool> inline auto _replace(string_view, string_view, long) -> type&;
inline auto replace(string_view from, string_view to, long limit = LONG_MAX) -> type&;
inline auto ireplace(string_view from, string_view to, long limit = LONG_MAX) -> type&;
inline auto qreplace(string_view from, string_view to, long limit = LONG_MAX) -> type&;
inline auto iqreplace(string_view from, string_view to, long limit = LONG_MAX) -> type&;
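//usage sketch (hedged: the i prefix is case-insensitive; q appears to protect quoted text):
//  string s = "a,b,c,d";
//  s.replace(",", ";", 2);        //"a;b;c,d" -- limit caps the number of replacements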
//split.hpp
inline auto split(string_view key, long limit = LONG_MAX) const -> vector<string>;
inline auto isplit(string_view key, long limit = LONG_MAX) const -> vector<string>;
inline auto qsplit(string_view key, long limit = LONG_MAX) const -> vector<string>;
inline auto iqsplit(string_view key, long limit = LONG_MAX) const -> vector<string>;
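//usage sketch (hedged: limit appears to cap the number of splits performed):
//  auto parts = string("key=value=extra").split("=", 1);
//  //parts[0] == "key", parts[1] == "value=extra"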
//trim.hpp
inline auto trim(string_view lhs, string_view rhs, long limit = LONG_MAX) -> type&;
inline auto trimLeft(string_view lhs, long limit = LONG_MAX) -> type&;
inline auto trimRight(string_view rhs, long limit = LONG_MAX) -> type&;
inline auto itrim(string_view lhs, string_view rhs, long limit = LONG_MAX) -> type&;
inline auto itrimLeft(string_view lhs, long limit = LONG_MAX) -> type&;
inline auto itrimRight(string_view rhs, long limit = LONG_MAX) -> type&;
inline auto strip() -> type&;
inline auto stripLeft() -> type&;
inline auto stripRight() -> type&;
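//usage sketch (illustrative): trim removes matching affixes, strip removes whitespace.
//  string a = "<tag>";  a.trim("<", ">");     //"tag"
//  string b = "  x  ";  b.strip();            //"x"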
//utf8.hpp
inline auto characters(int offset = 0, int length = -1) const -> uint;
//utility.hpp
inline static auto read(string_view filename) -> string;
inline static auto repeat(string_view pattern, uint times) -> string;
inline auto fill(char fill = ' ') -> type&;
inline auto hash() const -> uint;
inline auto remove(uint offset, uint length) -> type&;
inline auto reverse() -> type&;
inline auto size(int length, char fill = ' ') -> type&;
inline auto slice(int offset = 0, int length = -1) -> string;
};
template<> struct vector<string> : vector_base<string> {
using type = vector<string>;
using vector_base<string>::vector_base;
vector(const vector& source) { vector_base::operator=(source); }
vector(vector& source) { vector_base::operator=(source); }
vector(vector&& source) { vector_base::operator=(move(source)); }
template<typename... P> vector(P&&... p) { append(forward<P>(p)...); }
inline auto operator=(const vector& source) -> type& { return vector_base::operator=(source), *this; }
inline auto operator=(vector& source) -> type& { return vector_base::operator=(source), *this; }
inline auto operator=(vector&& source) -> type& { return vector_base::operator=(move(source)), *this; }
//vector.hpp
template<typename... P> inline auto append(const string&, P&&...) -> type&;
inline auto append() -> type&;
inline auto isort() -> type&;
inline auto find(string_view source) const -> maybe<uint>;
inline auto ifind(string_view source) const -> maybe<uint>;
inline auto match(string_view pattern) const -> vector<string>;
inline auto merge(string_view separator) const -> string;
inline auto strip() -> type&;
//split.hpp
template<bool, bool> inline auto _split(string_view, string_view, long) -> type&;
};
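//usage sketch (illustrative) for the vector<string> specialization above:
//  vector<string> list;
//  list.append("usr", "local", "bin");
//  string path = list.merge("/");             //"usr/local/bin"
//  auto index = list.find("local");           //maybe<uint> holding 1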
struct string_format : vector<string> {
using type = string_format;
template<typename... P> string_format(P&&... p) { reserve(sizeof...(p)); append(forward<P>(p)...); }
template<typename T, typename... P> inline auto append(const T&, P&&... p) -> type&;
inline auto append() -> type&;
};
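//usage sketch (hedged; the exact placeholder syntax is defined in nall/string/format.hpp,
//where {0}, {1}, ... appear to index the string_format arguments):
//  string s = "{0} of {1}";
//  s.format(string_format(1, 10));            //likely "1 of 10"
//  print("loaded ", s, "\n");                 //print() concatenates its arguments to stdout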
inline auto operator"" _s(const char* value, std::size_t) -> string { return {value}; }
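//usage sketch (illustrative): the _s literal builds a nall::string from a char literal,
//which is convenient when a string method is needed immediately.
//  auto upper = "hello"_s.upcase();           //"HELLO"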
}
#include <nall/string/view.hpp>
#include <nall/string/pascal.hpp>
#include <nall/string/atoi.hpp>
#include <nall/string/cast.hpp>
#include <nall/string/compare.hpp>
#include <nall/string/convert.hpp>
#include <nall/string/core.hpp>
#include <nall/string/find.hpp>
#include <nall/string/format.hpp>
#include <nall/string/match.hpp>
#include <nall/string/replace.hpp>
#include <nall/string/split.hpp>
#include <nall/string/trim.hpp>
#include <nall/string/utf8.hpp>
#include <nall/string/utility.hpp>
#include <nall/string/vector.hpp>
#include <nall/string/eval/node.hpp>
#include <nall/string/eval/literal.hpp>
#include <nall/string/eval/parser.hpp>
#include <nall/string/eval/evaluator.hpp>
#include <nall/string/markup/node.hpp>
#include <nall/string/markup/find.hpp>
#include <nall/string/markup/bml.hpp>
#include <nall/string/markup/xml.hpp>
#include <nall/string/transform/cml.hpp>
#include <nall/string/transform/dml.hpp>