Commit Graph

7085 Commits

Author SHA1 Message Date
chss95cs@gmail.com 0fd4a2533b Prevent clang-format from moving d3d12_nvapi above the required d3d12 headers 2022-09-11 14:35:33 -07:00
chss95cs@gmail.com 20638c2e61 Use Sleep(0) instead of SwitchToThread; it should waste less power and help the OS with scheduling.
Made PM4 buffer handling a virtual member of CommandProcessor and placed the implementation/declaration into reusable macro files. This is probably the biggest boost here.
Optimized SET_CONSTANT/LOAD_CONSTANT PM4 ops based on the register range they start writing at; this was also a nice boost

Expose x64 extension flags to code outside of the x64 backend, so we can detect and use things like AVX512, XOP, AVX2, etc. in normal code
Add freelists for HIR structures to try to reduce the number of last-level cache misses during optimization (currently disabled... fixme later)

Analyzed PGO feedback and, based on it, reordered branches, uninlined functions, and moved code out into separate functions in the PM4 handlers; this gave like a 2% boost at best.

Added support for the db16cyc opcode, which is often used in Xbox 360 spinlocks. Before, it was just translated to a nop; now on x64 we translate it to _mm_pause, but that may change in the future to reduce wasted CPU time
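For reference, a minimal sketch of what the new translation amounts to at runtime; the real backend emits this inline in its x64 sequence for the opcode, and the helper name here is illustrative:

```cpp
#include <immintrin.h>

// Hypothetical host-side equivalent of the translated db16cyc: PAUSE hints
// that we're in a spin-wait, de-prioritizing the spinning thread in favor of
// its SMT sibling, where the old nop translation burned full-rate issue slots.
inline void EmulatedDb16cyc() {
  _mm_pause();
}
```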

Texture util: all our divisors were powers of 2, so instead we look up a shift. This made texture scaling slightly faster, more so on Intel processors, which seem to be worse at integer division. GetGuestTextureLayout is now a little faster, although it is still one of the heaviest functions in the emulator when scaling is on.
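A minimal sketch of the shift-lookup idea, with a hypothetical layout struct standing in for the real tables:

```cpp
#include <cstdint>

// Since every divisor in the layout math is a power of two, store
// log2(divisor) once and turn each integer division into a shift.
struct BlockDims {
  uint32_t block_width_log2;  // was: block_width, always a power of 2
};

inline uint32_t BlocksWide(uint32_t width_texels, const BlockDims& dims) {
  // was: width_texels / dims.block_width (div/idiv on x64)
  return width_texels >> dims.block_width_log2;
}
```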

xe_unlikely_mutex was not a good choice for the guest clock lock. (Running theory) on Intel processors another thread may take a significant time to update the clock, maybe because of the uint64 division? Really not sure, but switched it to xe_mutex. This fixed audio stutter that I had introduced to 1 or 2 games, and fixed performance on that N64 Rare game with the monkeys.
Took another crack at DMA implementation, another failure.
Instead of passing it as a parameter, keep the ringbuffer reader as the first member of CommandProcessor so it can be accessed through the this pointer
Added macro for noalias
Applied noalias to Memory::LookupHeap. This reduced the size of the executable by 7 KB.
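A sketch of what such a macro typically looks like on MSVC; the macro name mirrors the codebase style but is an assumption here:

```cpp
// __declspec(noalias) promises that the function touches global state only
// through its pointer parameters, letting callers keep more values cached in
// registers across the call.
#if defined(_MSC_VER)
#define XE_NOALIAS __declspec(noalias)
#else
#define XE_NOALIAS
#endif

// Usage, modeled on the Memory::LookupHeap case above:
// XE_NOALIAS BaseHeap* LookupHeap(uint32_t address);
```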
Reworked the kernel shim template; this shaved about 100 KB off the exe and eliminated the indirect calls from the shim to the actual implementation. We still unconditionally generate string representations of kernel calls though :(, unless the call is kHighFrequency

Add nvapi extensions support, currently unused. Will use CPUVISIBLE memory at some point
Inserted prefetches in a few places based on feedback from VTune.
Add native implementation of SHA int8 if all elements are the same

Vectorized comparisons for SetViewport, SetScissorRect
Vectorized ranged comparisons for WriteRegister
Add XE_MSVC_ASSUME
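A plausible definition, using MSVC's __assume intrinsic:

```cpp
// __assume(x) tells MSVC's optimizer to treat x as true at this point, so
// guards and dead branches implied by it can be dropped. Unlike assert, the
// expression is never evaluated at runtime.
#if defined(_MSC_VER)
#define XE_MSVC_ASSUME(x) __assume(x)
#else
#define XE_MSVC_ASSUME(x) ((void)0)
#endif
```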
Move FormatInfo::name out of the structure; instead look up the name in a separate table. Debug-related data and critical runtime data are best kept apart
Templated UpdateSystemConstantValues based on ROV/RTV and primitive_polygonal
Add ArchFloatMask functions, these are for storing the results of floating point comparisons without doing costly float->int pipeline transfers (vucomiss/setb)
Use floatmasks in UpdateSystemConstantValues for checking if dirty, only transfer to int at end of function.
Instead of dirty |= (x != y) in UpdateSystemConstantValues, we now do dirty_u32 |= (x ^ y). If any pair differs, dirty_u32 will be nonzero; if they're all equal it will be zero. This is more friendly to register renaming, and the lack of dependencies on EFLAGS lets the compiler reorder better
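A minimal sketch of the xor-accumulation pattern, with illustrative field names:

```cpp
#include <cstdint>

// Each x ^ y is an independent ALU op feeding plain ORs, so the compiler can
// reorder freely instead of serializing compares through EFLAGS/setcc.
struct SystemConstants {
  uint32_t flags;
  uint32_t vertex_base;
};

inline bool UpdateConstants(SystemConstants& cur, const SystemConstants& next) {
  uint32_t dirty_u32 = 0;
  dirty_u32 |= cur.flags ^ next.flags;
  dirty_u32 |= cur.vertex_base ^ next.vertex_base;
  cur = next;
  return dirty_u32 != 0;  // nonzero iff any field changed
}
```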
Add PrefetchSamplerParameters to D3D12TextureCache
Use PrefetchSamplerParameters in UpdateBindings to eliminate cache misses that VTune detected

Add PrefetchTextureBinding to D3D12TextureCache
Prefetch texture bindings to get rid of more misses VTune detected (more accesses out of order with random strides)
Rewrote the DMAC; it's still terrible though, and I have disabled it for now.
Replace a tiny memcmp of 6 uint64s in render_target_cache with an inline loop; MSVC fails to turn the call into an inline loop and instead thunks to its memcmp, which is optimized for larger sizes
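A sketch of the kind of inline loop that replaces the call:

```cpp
#include <cstdint>

// An OR-reduction over the six qwords inlines to a handful of loads and
// xors, where the 48-byte memcmp became a library call.
inline bool Keys6Equal(const uint64_t a[6], const uint64_t b[6]) {
  uint64_t diff = 0;
  for (int i = 0; i < 6; ++i) {
    diff |= a[i] ^ b[i];
  }
  return diff == 0;
}
```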

PrefetchTextureBinding in AreActiveTextureSRVKeysUpToDate
Replace memcmp calls for the pipeline description with handwritten compares
Directly write some registers that don't have special handling in PM4 functions
Changed EstimateMaxY to try to eliminate the mispredictions VTune was reporting; MSVC ended up turning the changed code into a series of blends

In ExecutePacketType3_EVENT_WRITE_EXT, instead of writing extents to an array on the stack and then doing xe_copy_and_swap_16 of the data to its destination, pre-swap each constant and store those directly. MSVC manages to unroll that into wider stores
Stop logging XE_SWAP every time we receive one, and stop logging the start and end of each viz query

Prefetch watch nodes in FireWatches based on feedback from VTune
Removed dead code from texture_info.cc
NOINLINE on GpuSwap, PGO builds did it so we should too.
2022-09-11 14:14:48 -07:00
chrisps 9a6dd4cd6f
Merge branch 'xenia-canary:canary_experimental' into canary_experimental 2022-09-05 09:08:46 -04:00
chss95cs@gmail.com 0c576877c8 Add constant folding for LVR when 16-byte aligned; clean up the prior commit by removing dead test code for the LVR/LVL/STVL/STVR opcodes and the legacy HIR sequence
Delay using _mm_pause in KeAcquireSpinLockAtRaisedIrql_entry; a huge amount of time is spent spinning in Halo 3
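A sketch of the delay idea with illustrative constants; the commit's actual threshold and lock code are not shown here:

```cpp
#include <atomic>
#include <immintrin.h>

// Spin bare for a while before starting to issue PAUSE. On CPUs where PAUSE
// stalls for many cycles, a lock that is usually released within a few
// iterations gets re-acquired sooner if the first spins skip the hint.
inline void SpinAcquire(std::atomic<uint32_t>& lock) {
  int spins = 0;
  for (;;) {
    if (!lock.exchange(1, std::memory_order_acquire)) {
      return;  // acquired
    }
    while (lock.load(std::memory_order_relaxed)) {
      if (++spins > 64) {
        _mm_pause();  // only pay the PAUSE latency once clearly contended
      }
    }
  }
}
```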
2022-09-04 22:42:51 -05:00
chss95cs@gmail.com d372d8d5e3 Nasty commit with a bunch of test code left in; will clean up and PR
Remove the logger_ != nullptr check from ShouldLog; it will nearly always be true except during initialization and gets checked later anyway. This shrinks the size of the generated code for some callers
Select a specialized vastcpy for the current CPU; for now there are only paths for MOVDIR64B and generic AVX1
Add XE_UNLIKELY/LIKELY if macros; they map better to the C++ unlikely/likely attributes, which we will need to use soon
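A sketch of how such macros can be defined under C++20; the exact definitions in the tree may differ:

```cpp
// Wrapping the whole `if` lets the macros expand to the standard
// [[likely]]/[[unlikely]] statement attributes, which annotate the taken
// branch rather than the condition expression.
#if defined(__cplusplus) && __cplusplus >= 202002L
#define XE_LIKELY_IF(...) if (__VA_ARGS__) [[likely]]
#define XE_UNLIKELY_IF(...) if (__VA_ARGS__) [[unlikely]]
#else
#define XE_LIKELY_IF(...) if (__VA_ARGS__)
#define XE_UNLIKELY_IF(...) if (__VA_ARGS__)
#endif

// Usage: XE_UNLIKELY_IF(status != 0) { HandleRareError(status); }
```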
Finished reimplementing STVL/STVR/LVL/LVR as their own opcodes. We now generate far less code for these instructions. This also means optimization passes can be written to simplify/remove/replace these instructions in some cases. Found that a good deal of the x86 we were emitting for these instructions was dead code or redundant.
The reduction in generated HIR/x86 should help a lot with compilation times and make function precompilation more feasible as a default

Don't static_assert in the default prefetch impl; in C++20 the assertion can fire even without an instantiation
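For context, the commit simply removed the assert; the standard idiom for keeping a deferred one is a dependent-false condition, sketched here against a hypothetical Prefetch template:

```cpp
// The condition depends on T, so it can only fail once the primary template
// is actually instantiated, not merely parsed.
template <typename T>
inline constexpr bool dependent_false_v = false;

template <typename T>
void Prefetch(const T&) {
  static_assert(dependent_false_v<T>,
                "no prefetch specialization for this type");
}
```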
Reorder some if/else chains to prod MSVC into ordering the branches optimally. It somewhat worked...
Added some notes about which opcodes should be removed/refactored
Dispatch in WriteRegister via vector compares for the bounds. Still not very optimal; we ought to be checking whether any register in a range may be special
A lot of work on trying to optimize WriteRegister; moved the wraparound path into a noinline function based on profiling info
Hoist the IsUcodeAnalyzed check out of AnalyzeShader; instead check it before each call. The profiler recorded many hits in the function's stack frame setup but none in its actual body, so the check is usually true but the stack frame setup ran unconditionally
Pre-check whether we're about to write a single register from a ring
Replace more jump tables from draw_util/texture_info with popcnt-based sparse indexing/bit tables/shuffle lookups
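A sketch of the popcnt-based sparse indexing technique named above:

```cpp
#include <bit>
#include <cstdint>

// A 32-bit presence mask plus a densely packed value array stand in for a
// sparse 32-entry table. The rank of the set bit below `key` yields the
// dense index in a single popcnt.
inline uint32_t SparseLookup(uint32_t present_mask, const uint8_t* dense,
                             uint32_t key, uint32_t fallback) {
  uint32_t bit = 1u << key;
  if (!(present_mask & bit)) {
    return fallback;
  }
  uint32_t rank = std::popcount(present_mask & (bit - 1));
  return dense[rank];
}
```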
Place the GPU register file on its own VAD/virtual allocation; it is no longer a member of GraphicsSystem
2022-09-04 22:42:51 -05:00
illusion0001 f62ac9868a Make portable default for new install 2022-09-04 22:42:40 -05:00
chrisps 5476d5e422
Merge branch 'xenia-canary:canary_experimental' into canary_experimental 2022-09-04 14:45:03 -04:00
chss95cs@gmail.com 2e5c4937fd Add constant folding for LVR when 16-byte aligned; clean up the prior commit by removing dead test code for the LVR/LVL/STVL/STVR opcodes and the legacy HIR sequence
Delay using _mm_pause in KeAcquireSpinLockAtRaisedIrql_entry; a huge amount of time is spent spinning in Halo 3
2022-09-04 11:44:29 -07:00
chss95cs@gmail.com c6010bd4b1 Nasty commit with a bunch of test code left in; will clean up and PR
Remove the logger_ != nullptr check from ShouldLog; it will nearly always be true except during initialization and gets checked later anyway. This shrinks the size of the generated code for some callers
Select a specialized vastcpy for the current CPU; for now there are only paths for MOVDIR64B and generic AVX1
Add XE_UNLIKELY/LIKELY if macros; they map better to the C++ unlikely/likely attributes, which we will need to use soon
Finished reimplementing STVL/STVR/LVL/LVR as their own opcodes. We now generate far less code for these instructions. This also means optimization passes can be written to simplify/remove/replace these instructions in some cases. Found that a good deal of the x86 we were emitting for these instructions was dead code or redundant.
The reduction in generated HIR/x86 should help a lot with compilation times and make function precompilation more feasible as a default

Don't static_assert in the default prefetch impl; in C++20 the assertion can fire even without an instantiation
Reorder some if/else chains to prod MSVC into ordering the branches optimally. It somewhat worked...
Added some notes about which opcodes should be removed/refactored
Dispatch in WriteRegister via vector compares for the bounds. Still not very optimal; we ought to be checking whether any register in a range may be special
A lot of work on trying to optimize WriteRegister; moved the wraparound path into a noinline function based on profiling info
Hoist the IsUcodeAnalyzed check out of AnalyzeShader; instead check it before each call. The profiler recorded many hits in the function's stack frame setup but none in its actual body, so the check is usually true but the stack frame setup ran unconditionally
Pre-check whether we're about to write a single register from a ring
Replace more jump tables from draw_util/texture_info with popcnt-based sparse indexing/bit tables/shuffle lookups
Place the GPU register file on its own VAD/virtual allocation; it is no longer a member of GraphicsSystem
2022-09-04 11:04:41 -07:00
Radosław Gliński c1d3e35eb9
Merge pull request #66 from chrisps/canary_experimental
Huge boost to readback_memexport/resolve performance by fixing old bug; miscellaneous optimizations
2022-08-29 00:55:24 +02:00
chss95cs@gmail.com 78c9a48bc2 Also use vastcpy for the shared memory page stuff 2022-08-28 14:52:12 -07:00
chss95cs@gmail.com f31869092c Fixed a bug with readback_resolve and readback_memexport that was responsible for a large portion of their overhead. readback_memexport and resolve are now usable for games, depending on your hardware. In my case, games that were slideshows now run at like 20-30 FPS, and my hardware isn't the best for Xenia.
Add a split_map class for mapping keys to values in a way that optimizes for frequent searches and infrequent insertions/removals
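A minimal sketch of the split_map idea; the real class's interface may differ:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Keys and values live in separate parallel arrays, so the hot search walks
// a densely packed key array (fewer cache lines touched) and only reaches
// into the value array on a hit.
template <typename K, typename V>
class SplitMap {
 public:
  const V* Find(const K& key) const {
    auto it = std::lower_bound(keys_.begin(), keys_.end(), key);
    if (it == keys_.end() || *it != key) return nullptr;
    return &values_[static_cast<size_t>(it - keys_.begin())];
  }
  void Insert(const K& key, const V& value) {  // infrequent, O(n) is fine
    auto it = std::lower_bound(keys_.begin(), keys_.end(), key);
    size_t i = static_cast<size_t>(it - keys_.begin());
    keys_.insert(it, key);
    values_.insert(values_.begin() + i, value);
  }

 private:
  std::vector<K> keys_;    // sorted
  std::vector<V> values_;  // values_[i] pairs with keys_[i]
};
```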
Remove the jump table implementation of GetColorRenderTargetFormatComponentCount; it was appearing relatively high in profiles. Instead, pack the component counts into a single 32-bit word that is indexed by shifting
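A sketch of the shift-indexed packing; the table constant below is illustrative, not the real format data:

```cpp
#include <cstdint>

// With at most 16 formats and counts of 1..4, storing (count - 1) in two
// bits per format fits the whole table in one 32-bit immediate, so the
// lookup is a shift, a mask, and an add.
inline uint32_t ComponentCount(uint32_t format) {
  constexpr uint32_t kPackedCounts =
      0b11'11'01'00'11'11'10'01'11'11'11'01'11'10'01'11u;
  return ((kPackedCounts >> (format * 2)) & 0b11u) + 1;
}
```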
Add cvar to align all basic blocks to a boundary
Add MMIO-aware load paths
Liberally apply XE_RESTRICT in ringbuffer-related code
Removed the IS_TRUE and IS_FALSE opcodes; they were pointless duplicates of COMPARE_EQ/COMPARE_NE, and I want to simplify our set of opcodes for future backends
More work on LVSR/LVSL/STVR/STVL opcodes
Optimized x64 translated code emission; now we only compute instrkey once
Add code for pre-computing integer division magic numbers
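A sketch of the classic precomputation (the Granlund-Montgomery round-up scheme), restricted here so every intermediate fits in 64 bits; the real code presumably handles the full range:

```cpp
#include <cassert>
#include <cstdint>

// Precompute m = floor(2^(32+l) / d) + 1 with l = ceil(log2(d)); then
// n / d == (n * m) >> (32 + l) with one multiply and shift, no div. Kept to
// d <= 2^31 and n < 2^31 so the product never overflows uint64_t.
struct UDivMagic {
  uint64_t multiplier;
  uint32_t shift;
};

inline uint32_t CeilLog2(uint32_t x) {
  uint32_t l = 0;
  while ((1ull << l) < x) ++l;
  return l;
}

inline UDivMagic ComputeUDivMagic(uint32_t d) {
  assert(d > 1 && d <= (1u << 31));
  uint32_t l = CeilLog2(d);
  uint64_t m = ((1ull << (32 + l)) / d) + 1;
  return {m, 32 + l};
}

inline uint32_t DivByMagic(uint32_t n, UDivMagic magic) {
  assert(n < (1u << 31));
  return static_cast<uint32_t>((n * magic.multiplier) >> magic.shift);
}
```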
Optimized GetHostViewportInfo a little
Move the args for GetHostViewportInfo into a class, cache the result, and compare for future queries. This moved GetHostViewportInfo far lower on the profile
Add (currently not functional, and very racy) asynchronous memcpy code. Will improve it and actually use it in future commits.
Add non-temporal memcpy function for huge page-aligned allocations. Used for copying to shared memory/readback
Hoist the are_accumulated_render_targets_valid_ check out of the already-bound check loop in render_target_cache.
Add stosb/movsb code for small constant memcpys/memsets that aren't worth the overhead of memcpy/memset
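A sketch of the movsb variant using the MSVC intrinsic:

```cpp
#include <cstddef>
#include <intrin.h>

// For a handful of bytes, the call, dispatch, and alignment logic inside the
// general memcpy cost more than a rep movsb of the whole thing (MSVC/x64).
template <size_t N>
inline void SmallConstCopy(void* dst, const void* src) {
  __movsb(static_cast<unsigned char*>(dst),
          static_cast<const unsigned char*>(src), N);
}
```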
2022-08-28 14:24:25 -07:00
Radosław Gliński 335a390d43
Merge pull request #64 from beeanyew/cpu-updates-raiden-fighters
Some minor CPU updates
2022-08-28 20:52:42 +02:00
beeanyew 3569e97e0e [CPU] Add rldicx implementation
NOTE: May or may not be correct, but works for 535507D4.
2022-08-28 20:02:39 +02:00
beeanyew 75ed343e72 [CPU] Add stub OE handling implementation for addex and negx 2022-08-28 20:01:26 +02:00
illusion0001 04c9c02270 Guest crash message more useful 2022-08-24 09:42:56 -05:00
Radosław Gliński 9006b309af
Merge pull request #62 from chrisps/canary_experimental
Minor correctness/constant folding fixes, guest code optimizations for pre-Ryzen AMD processors
2022-08-23 00:01:24 +02:00
chss95cs@gmail.com 1ffd7ecae8 Remove vpcmov print 2022-08-21 12:40:56 -07:00
chss95cs@gmail.com b5ef3453c7 Disable most XOP code by default; the manual must be wrong for the shifts, or we must be assembling them incorrectly. Will return to it later and fix
Comparisons and select done by XOP are fine though
2022-08-21 12:32:33 -07:00
chss95cs@gmail.com b26c6ee1b8 Fix some more constant folding
fabsx does NOT set FPSCR
Turns out that our vector unsigned compare instructions are a bit weird?
2022-08-21 10:27:54 -07:00
chss95cs@gmail.com 0ebc109d4d Add initial XOP codepaths; still need to finish the rest of the compares, and then do shifts, rotates, and PERMUTE
Add vector simplification pass; so far it only recognizes when VECTOR_DENORMFLUSH is useless and optimizes it away
Tag restgplr/savegplr/restvmx/savevmx/restfpr/savefpr with useful information; I intend to inline them (they tend to be the most heavily called guest functions)
2022-08-21 08:55:42 -07:00
Gliniak da00ede181 [XAM/Settings] Check if provided size doesn't exceed maximal setting size 2022-08-21 17:46:00 +02:00
Radosław Gliński 0b013fdc6b
Merge pull request #61 from chrisps/canary_experimental
performance improvements, kernel fixes, cpu accuracy improvements
2022-08-21 09:31:09 +02:00
chss95cs@gmail.com d85bfc1894 Don't constant-evaluate MAX with V128!
Fix signed-zero behavior for vmaxfp emulation; it was causing a block in Sonic to move perpetually, very slowly
2022-08-20 14:22:05 -07:00
Gliniak 010b59e81c [Emulator] Install Content: Create header for installed packages
This fixes support for certain DLCs
2022-08-20 20:44:30 +02:00
Gliniak 469d062a50 [Emulator] Updated "Install Content" function to match PR status 2022-08-20 20:44:30 +02:00
Gliniak f19cb704aa [Emulator] Added error checking while creating directories 2022-08-20 20:44:30 +02:00
chss95cs@gmail.com 457296850e Add OPCODE_NEGATED_MUL_ADD/OPCODE_NEGATED_MUL_SUB
Proper handling of NaNs for VMX max/min on x64 (minps/maxps has special behavior depending on the operand order that VMX does not have for vminfp/vmaxfp)
Add extremely unintrusive guest code profiler utilizing the KUSER_SHARED SystemTime. This profiler is disabled on platforms other than Windows, and on Windows it is disabled by default by a cvar
Repurpose the GUEST_SCRATCH64 stack offset to instead store guest function profile times; define GUEST_SCRATCH as 0 instead, since that's already meant to be a scratch area
Fix xenia silently closing on config errors/other fatal errors by setting has_console_attached_'s default to false
Add an alternative code path for the guest clock that uses the KUSER_SHARED SystemTime instead of QueryPerformanceCounter. This is way faster and I have tested it and found it to be working, but I have disabled it because I do not know how well it works on Wine or on processors other than mine
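A sketch of the read this relies on; KUSER_SHARED_DATA's fixed mapping at 0x7FFE0000 and the KSYSTEM_TIME layout are documented Windows internals, but the helper name is illustrative:

```cpp
#include <cstdint>

// SystemTime is a KSYSTEM_TIME at offset 0x14 of the page every process has
// mapped read-only at 0x7FFE0000. Reading it is a few plain loads in 100 ns
// units versus a full QueryPerformanceCounter call; since the guest clock is
// rounded to milliseconds anyway, the lower precision is harmless.
inline uint64_t ReadKUserSharedSystemTime() {
  auto low = reinterpret_cast<volatile uint32_t*>(0x7FFE0014);
  auto high1 = reinterpret_cast<volatile int32_t*>(0x7FFE0018);
  auto high2 = reinterpret_cast<volatile int32_t*>(0x7FFE001C);
  for (;;) {
    int32_t hi = *high1;
    uint32_t lo = *low;
    // Retry if the kernel updated the time while we read the two halves.
    if (*high2 == hi) {
      return (static_cast<uint64_t>(static_cast<uint32_t>(hi)) << 32) | lo;
    }
  }
}
```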
Significantly reduce log spam by setting XELOGAPU and XELOGGPU to be LogLevel::Debug
Changed some LOGI to LOGD in places to reduce log spam
Mark VdSwap as kHighFrequency, it was spamming up logs
Make logging calls less intrusive for the caller by forcing the test of log level inline and moving the format/AppendLogLine stuff to an outlined cold function
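A sketch of that shape with illustrative names (not xenia's actual logging macros):

```cpp
#include <cstdarg>
#include <cstdio>

// The level test is a trivially inlinable compare, while all formatting work
// lives in a never-inlined cold function, so a disabled log call costs the
// caller only a compare and a not-taken branch.
#if defined(_MSC_VER)
#define XE_COLDFN __declspec(noinline)
#else
#define XE_COLDFN __attribute__((noinline, cold))
#endif

enum class LogLevel : int { Error = 0, Warning, Info, Debug };

inline LogLevel g_log_level = LogLevel::Info;

XE_COLDFN inline void AppendLogLineOutlined(const char* fmt, ...) {
  va_list args;
  va_start(args, fmt);
  std::vfprintf(stderr, fmt, args);
  va_end(args);
}

inline bool ShouldLog(LogLevel level) { return level <= g_log_level; }

#define XELOGD_SKETCH(...)                \
  do {                                    \
    if (ShouldLog(LogLevel::Debug)) {     \
      AppendLogLineOutlined(__VA_ARGS__); \
    }                                     \
  } while (0)
```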
Add swcache namespace for software cache operations like prefetches, streaming stores and streaming loads.
Add XE_MSVC_REORDER_BARRIER for preventing msvc from propagating a value too close to its store or from its load
Add xe_unlikely_mutex for locks we know have very little contention
add XE_HOST_CACHE_LINE_SIZE and XE_RESTRICT to platform.h
Microoptimization: Changed most uses of size_t to ring_size_t in RingBuffer, this reduces the size of the inlined ringbuffer operations slightly by eliminating rex prefixes, depending on register allocation
Add BeginPrefetchedRead to ringbuffer, which prefetches the second range if there is one according to the provided PrefetchTag
Added the inline_loadclock cvar, which will directly use the value of the guest clock from clock.cc in jitted guest code. Off by default
Change uses of GUEST_SCRATCH64 to GUEST_SCRATCH
Add fast vectorized xenos_half_to_float/xenos_float_to_half (currently resides in x64_seq_vector, move to gpu code maybe at some point)
Add fast x64 codegen for PackFloat16_4/UnpackFloat16_4. Same code can be used for Float16_2 in future commit. This should speed up some games that use these functions heavily
Remove cvar for toggling old float16 behavior
Add VRSAVE register, support mfspr/mtspr vrsave
Add cvar for toggling off codegen for trap instructions and set it to true by default.
Add specialized methods to CommandProcessor: WriteRegistersFromMem, WriteRegisterRangeFromRing, and WriteOneRegisterFromRing. These reduce the overall cost of WriteRegister
Use a fixed-size vmem vector for upload ranges; realloc/memset on resize in the inner loop of RequestRanges was showing up on the profiler (the search in RequestRanges itself needs work)
Rename fixed_vmem_vector to better fit xenia's naming convention
Only log unknown register writes in WriteRegister if DEBUG :/. We're stuck on MSVC with C++17, so we have no way of influencing the branch ordering for that function without profile-guided optimization
Remove the binding stride assert in shader_translator.cc; Triangle told me it's leftover OGL stuff
Mark xe::FatalError as noreturn
If a controller is not connected, delay by 1.1 seconds before checking if it has been reconnected. Asking XInput about a controller slot that is unused is extremely slow, and XInputGetState/SetState were taking up
an enormous amount of time in profiles. This may have caused a bit of input lag
Protect accesses to input_system with a lock
Add proper handling for user_index>= 4 in XamInputGetState/SetState, properly return zeroed state in GetState
Add missing argument to NtQueryVirtualMemory_entry
Fixed RtlCompareMemoryUlong_entry, it actually does not care if the source is misaligned, and for length it aligns down
Fixed RtlUpperChar and RtlLowerChar, added a table that has their correct return values precomputed
2022-08-20 11:40:19 -07:00
Margen67 f551e59015
CI: Remove game patches
People are too stupid to understand how portable mode works, and these can be outdated anyway.
2022-08-19 03:39:29 -07:00
Gliniak e06978e5be [Premake] Cleanup & Fixed references in cpu-tests 2022-08-17 09:43:55 +02:00
Gliniak 0df92130e6 [Memory] Changed amount of kernel reserved pages.
This fixes flickering in games with resolution scaling enabled
2022-08-15 17:51:29 +02:00
chss95cs@gmail.com 7cc364dcb8 Squash reallocs in command buffers by using a large preallocated buffer; directly use virtual memory with it so the OS allocates on demand
Mark raw clock functions as noinline; the way MSVC was inlining them and ordering the branches meant that rdtsc would often be speculatively executed
Add alternative clock impl for Windows: instead of using QueryPerformanceCounter we grab SystemTime from KUSER_SHARED. It does not have the same precision as QueryPerformanceCounter (we only have 100-nanosecond precision), but we round to milliseconds, so it never made sense to use the performance counter in the first place
Stubbed out the "guest clock mutex"... (the entirety of clock.cc needs a rewrite)
Added some helpers for minf/maxf without the NaN handling behavior
2022-08-14 13:42:08 -07:00
chss95cs@gmail.com c9b2d10e17 Alternative mutex impl on Windows works, but I really can't tell if it helps much. Use a larger size in deferred_command_list to cut down on resizes in big scenes on m:dur 2022-08-14 10:26:50 -07:00
chss95cs@gmail.com a037bdb2e8 Point ffmpeg submodule to the branch with the nonrecursive split_radix_permutation 2022-08-14 09:20:04 -07:00
chss95cs@gmail.com e5d01af6a6 Trying to get the new disruptorplus module path to be used 2022-08-14 09:16:40 -07:00
chss95cs@gmail.com 08f7a28920 Alternative mutex 2022-08-14 08:59:11 -07:00
Radosław Gliński 6bc3191b97
Merge pull request #60 from chrisps/canary_experimental
Superbig boost in performance?
2022-08-13 23:14:31 +02:00
chss95cs@gmail.com 495b1f8bc8 once again return to spinloop 2022-08-13 14:05:35 -07:00
chss95cs@gmail.com c9e4119428 Add branch of ffmpeg with non-recursive split_radix_permutation
Add branch of disruptorplus with working blocking_wait_strategy
Switch back to blocking wait for timer queue
2022-08-13 13:43:45 -07:00
chss95cs@gmail.com 020d64a1a1 Revert to using the old bad spinwait; disruptorplus' blocking_wait code does not compile 2022-08-13 13:20:35 -07:00
chss95cs@gmail.com cb85fe401c Huge set of performance improvements; combined with an architecture-specific build, clang-cl users have reported absurd gains over master for some games, in the range of 50-90%.
For normal MSVC builds I would put it at around 30-50%.
Added per-XexModule caching of information per instruction; can be used to remember what code needs compiling at startup
Record what guest addresses wrote MMIO and backpropagate that to future runs, eliminating dependence on exception trapping. This makes many games, like Halo 3, actually tolerable to run under a debugger
Fixed a number of errors where temporaries were being passed by reference/pointer
Can now be compiled with clang-cl 14.0.1, requires -Werror off though and some other solution/project changes.
Added macros wrapping compiler extensions like noinline, forceinline, __expect, and cold.
Removed the "global lock" in guest code completely. It does not properly emulate the behavior of mfmsrd/mtmsr and it seriously cripples amd cpus. Removing this yielded around a 3x speedup in Halo Reach for me.
Disabled the microprofiler for now. The microprofiler has a huge performance cost associated with it. Developers can re-enable it in the base/profiling header if they really need it
Disable the trace writer in release builds. Despite just returning after checking whether the file was open, the trace functions were consuming about 0.60% of total CPU time
Add IsValidReg; GetRegisterInfo is a huge (about 45 KB) branching function, and using it to check whether a register was valid consumed a significant chunk of time
Optimized RingBuffer::ReadAndSwap and RingBuffer::read_count. This gave us the largest overall boost in performance. The memcpies were unnecessary and one of them was always a no-op
Added simplification rules for multiplicative patterns like (x+x), (x<<1)+x
For the most frequently called Win32 functions I added code to call their underlying NT implementations, which lets us skip a lot of MS code we don't care about / that isn't relevant to our use cases
^This can be toggled off in the platform_win header
Handle CALL_INDIRECT_TRUE with a constant function pointer; this was occurring in Halo 3
Look up the host format swizzle in a denser array
By default, don't check whether a GPU register is unknown; instead just check if it's out of range. Controlled by a cvar
^Looking up whether it was known or not took approximately 0.3% of CPU time
Changed some things in /cpu to make the project UNITYBUILD friendly
The timer thread was spinning way too much and consuming a ton of CPU; changed it to use a blocking wait instead
Tagged some conditions as XE_UNLIKELY/LIKELY based on profiler feedback (will only affect Clang builds)
Shifted around some code in CommandProcessor::WriteRegister based on how frequently it was executed
Added support for docdecaduple-precision floating point so that we can represent our performance gains numerically
Tons of other stuff I'm probably forgetting
2022-08-13 12:59:00 -07:00
Radosław Gliński 2f59487bf3
Merge pull request #59 from Uraniumm/canary_experimental
Add nullptr check in CheckScalarConstCmp
2022-08-08 19:47:35 +02:00
Uraniumm a16acbaf59
Add nullptr check to mitigate crashes
WIP for Reach untracked tags build fixes
2022-08-08 02:02:25 -04:00
Radosław Gliński 3ac99e0d7d
Merge pull request #58 from chrisps/canary_experimental
[CPU] VKPKX Implementation, miscellaneous fixes
2022-08-08 07:54:26 +02:00
chss95cs@gmail.com 324a8eb818 A bunch of fixes for division logic:
"Turns out there's a lot of quirks with the div instructions we haven't been covering.
If the denominator is 0, we jump to the end and mov eax/rax to dst, which is correct because PPC raises no exceptions for divide by 0, unlike x86.
Except we don't initialize eax before that jump, so whatever garbage the previous sequence left in eax/rax becomes the result of the instruction.
And then in our constant folding, we don't do the same zero check in Value::Div, so if we constant-folded the denominator to 0 we will host-crash.
The PPC manual says the result of a division by 0 is undefined, but in reality it seems it is always 0;
there are a few posts I saw from googling about it, and tests on my RGH gave me 0. But then another issue came up:
we don't check for signed overflow in our division, so we raise an exception if guest code ever does (1 << signbit_pos) / -1.
Signed overflow in division also produces 0 on PPC.
The last thing is that if src2 is constant, we skip the 0 check for the division without checking that it's nonzero.
All weird, likely very rare edge cases, except for maybe the signed overflow division.
chrispy — Today at 9:51 AM
Oh yeah, and because the int members of ConstantValue are all signed ints, we were actually always doing signed division in constant folding"
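The quoted notes above pin down the intended semantics; a minimal sketch of the guarded constant fold they imply, with an illustrative helper name rather than the emulator's actual Value::Div:

```cpp
#include <cstdint>
#include <limits>

// Mirror the observed PPC behavior (divide-by-zero and INT_MIN / -1 both
// yield 0) instead of letting the host execute the division and fault.
inline int64_t FoldSignedDiv(int64_t num, int64_t den) {
  if (den == 0) {
    return 0;  // ppc raises no exception; tested hardware returns 0
  }
  if (num == std::numeric_limits<int64_t>::min() && den == -1) {
    return 0;  // signed-overflow division also produces 0 on ppc
  }
  return num / den;
}

inline uint64_t FoldUnsignedDiv(uint64_t num, uint64_t den) {
  return den ? num / den : 0;  // keep unsigned folds unsigned
}
```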

Fixed an earlier mistake of mine with the precision of fresx
Made some optimizations disableable

Implemented vkpkx
Fixed possible bugs with vsr/vsl constant folding
Disabled the nice imul code for now; there was a bug with the int64 version and I don't have time to check
Started on multiplication/addition/subtraction/division identities
Removed the optimized VSL implementation; it's going to have to be rewritten anyway
Added ppc_ctx_t to the xboxkrnl shim for direct context access
Started working on KeSaveFloatingPointState; reverse-engineered most of it
Exposed some more state/functionality to the kernel for implementing lower-level routines like the save/restore ones
Add cvar to re-enable incorrect MXCSR behavior if a user doesn't care and wants better CPU performance
Stubbed out more impossible sequences; replaced mul_hi_i32 with a 64-bit multiply
2022-08-07 10:41:26 -07:00
Gliniak f45e9e5e9a [Kernel] Improved handling of internal display resolution 2022-08-02 12:09:25 +02:00
Gliniak 0e1353aa71 Implemented Opcode: mcrf 2022-08-01 14:54:05 +02:00
Radosław Gliński 332f69f36b
Merge pull request #57 from chrisps/canary_experimental
Add separate VMX/fpu mxcsr
2022-07-31 18:43:30 +02:00
chss95cs@gmail.com 968f656d96 Add separate VMX/FPU MXCSR
Add support for constant operands for most FPU instructions
Remove constant folding for most FPU code
half float
2022-07-31 08:56:36 -07:00
Radosław Gliński 3185b0ac9c
Merge pull request #55 from Etokapa/patch-1
Adjustments for Building Canary
2022-07-31 09:24:55 +02:00