Use shuffle_ps instead of broadcastss; broadcastss is slower on many Intel and AMD processors and encodes to the same number of bytes as shuffle_ps
Detect and optimize away PERMUTE with a zero src2 and src3 in constant_propagation_pass instead of in the x64 sequence
For constant PERMUTE, do the Xor/And prior to LoadConstantXmm instead of in the generated code
Simplified code for PERMUTE
Added simplification rule that detects (lzcnt(x) >> log2(bitsizeof_x)) == (x == 0)
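The rule holds because lzcnt returns the operand width for an input of zero and a smaller value otherwise; a quick standalone check (plain C++20, not Xenia code):

```cpp
#include <bit>
#include <cassert>
#include <cstdint>

int main() {
  for (uint32_t x : {0u, 1u, 0x80000000u, 1234u}) {
    uint32_t lz = std::countl_zero(x);        // lzcnt(x): 32 only when x == 0
    assert((lz >> 5) == (x == 0 ? 1u : 0u));  // >> log2(32), i.e. >> 5
  }
  return 0;
}
```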
Added set_srcN(value, idx) which can be used to set the nth source of an instruction, which makes more sense than having three different functions that only differ by the field they touch
Added Value::VisitValueOperands for iterating all Value operands an instruction has.
Add BackpropTruncations code to simplification_pass
Changed the (void**) dereferences of raw_context that are done to grab thread_state to instead reference PPCContext and its thread_state field. Moved the thread_state field to the tail of PPCContext.
Moved membase to the tail of PPCContext, since now it is reloaded very infrequently.
Rearranged PPCContext so that the condition registers come first (most accesses to them can't get SSA'd), and moved lr and ctr to after the GP regs since they are not accessed as much as the main GP regs. This way the most frequently accessed registers are reachable via a rel8 displacement instead of rel32 (ideally we would have only certain CRs at the start, but xenia does pointer arithmetic on CR0's offset to get CRn)
Use alignas(64) to ensure PPCContext's padding
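A rough sketch of the layout described in the notes above (field names, types, and sizes are illustrative, not the real PPCContext definition): hot fields first so they fall within an 8-bit displacement from the context register, rarely reloaded fields at the tail.

```cpp
#include <cstdint>

struct alignas(64) PPCContextSketch {
  uint8_t cr[8 * 4];         // condition register flags first: hottest, rarely SSA'd
  uint64_t r[32];            // general-purpose registers
  uint64_t lr, ctr;          // accessed less often than the GPRs
  // ... FPRs, VMX registers, FPSCR and other colder state ...
  void* thread_state;        // moved to the tail per the notes above
  uint8_t* virtual_membase;  // likewise at the tail, reloaded only in the thunks
};
```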
Map PPCContext specially so that the low 32 bits of the context register are 0xE0000000, for the 4k page offset check. Also allocate the page before it, so that backends can store information of their own that is not relevant to PPCContext on that page and reference it in the generated asm via an 8-bit or 32-bit signed displacement. Currently this page is not being utilized, but I plan on stashing some data critical to the x86 backend there
Changed many wrong AVX instructions; they worked, but they were not intended for the data they operated on, meaning they transferred domains and caused a 1-2 cycle stall each time
Added SimdDomain checking/deduction to X64Emitter.
Used SimdDomain code to fix a lot of float/int domain stalls
Use the low 32 bits of the context register instead of constant 0xE0000000 in ComputeAddress
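A simplified illustration of what this buys (the real emitted check also handles the 4k page offset; the function here is illustrative only):

```cpp
#include <cstdint>

// With PPCContext mapped so that the low 32 bits of the context register equal
// 0xE0000000, the range test can reuse that register instead of materializing
// the 0xE0000000 constant in the instruction stream.
bool IsInE0Range(uint32_t guest_address, uint64_t context_register) {
  return guest_address >= static_cast<uint32_t>(context_register);
}
```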
Special path for SELECT_V128 with result of comparison that will use a blend instruction instead of and/or
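Expressed with plain intrinsics rather than emitter code (a sketch of the idea, not the exact sequence): when the selector of SELECT_V128 comes from a vector comparison, each lane is all ones or all zeros, so a single blend can stand in for the and/andnot/or chain.

```cpp
#include <immintrin.h>

__m128 SelectByCompareMask(__m128 mask, __m128 if_true, __m128 if_false) {
  // The generic path would be (if_true & mask) | (if_false & ~mask):
  //   _mm_or_ps(_mm_and_ps(mask, if_true), _mm_andnot_ps(mask, if_false));
  // A comparison mask has the sign bit set in true lanes, which is all
  // blendvps looks at, so one instruction suffices.
  return _mm_blendv_ps(if_false, if_true, mask);
}
```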
Many HIR optimizations added in the simplification pass
A bunch of other stuff; running out of time to write this message
Pack local_slot and constant in hir::Value
Instead of loading membase at the start of every function, just load it in HostToGuestThunk
vzeroupper in GuestToHostThunk before calling the host function, and in HostToGuestThunk after calling the function, to prevent AVX dirty-state slowdowns. In the future, check if the CPU implements AVX as 128x2 and skip it if so (https://john-h-k.github.io/VexTransitionPenalties.html)
Remove useless save/restore of the ctx pointer; nothing modifies it, and it prevents CPUs from doing cross-function memory renaming (https://www.agner.org/forum/viewtopic.php?t=41). Could not remove the space on the stack because of alignment issues; instead turned it into GUEST_SCRATCH64, a temporary that sequences may use
Reorder OpcodeInfo so that name is at offset 0; remove name and add a GetOpcodeName function (name is only used for debug code, and we are separating frequently accessed data from rarely accessed data)
Add VECTOR_DENORMFLUSH opcode for handling output to DOT_PRODUCT and other opcodes that implicitly force denormal inputs/outputs to zero; will eventually be used for implementing NJM
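For reference, the per-lane behavior the opcode models, shown on a single float (standalone code, not the SSE sequence the backend emits):

```cpp
#include <cstdint>
#include <cstring>

// Denormal inputs/outputs are flushed to zero while the sign is preserved,
// matching what DOT_PRODUCT and friends do implicitly on the hardware.
float DenormFlushToZero(float v) {
  uint32_t bits;
  std::memcpy(&bits, &v, sizeof(bits));
  if ((bits & 0x7F800000u) == 0) {  // exponent all zero: value is 0 or denormal
    bits &= 0x80000000u;            // keep only the sign bit
  }
  float out;
  std::memcpy(&out, &bits, sizeof(out));
  return out;
}
```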
Rewrite sequences for LOAD_VECTOR_SHL/SHR. The mask with 0xf in it was pointless, as all InstrEmit_ functions that create the load shift instructions already do that in HIR. The tables are now only used for nonzero constant inputs, which are probably pretty rare. Instead of doing a shift and lookup, a base value from the constant table is used for both, and the input is added or subtracted
Reuse the result of LoadVectorShl/Shr in InstrEmit_stvlx_ and InstrEmit_stvrx_. We were previously calculating it twice, which was contributing to the final sequences' fatness. Use OPCODE_SELECT instead of the or/andnot/and sequence that was being used for merging
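What the variable-input path of the LOAD_VECTOR_SHL/SHR rewrite computes, written out in scalar form (the emitter does this with a SIMD add/subtract against a base vector from the constant table; this shows only the arithmetic):

```cpp
#include <array>
#include <cstdint>

// lvsl's shuffle bytes are sh, sh+1, ..., sh+15; lvsr's are 16-sh, ..., 31-sh.
// Both fall out of one base vector plus an add or subtract of the splatted
// shift amount, so no table lookup is needed for non-constant inputs.
std::array<uint8_t, 16> LoadVectorShl(uint8_t sh) {
  std::array<uint8_t, 16> r{};
  for (int i = 0; i < 16; ++i) r[i] = static_cast<uint8_t>(i + sh);
  return r;
}
std::array<uint8_t, 16> LoadVectorShr(uint8_t sh) {
  std::array<uint8_t, 16> r{};
  for (int i = 0; i < 16; ++i) r[i] = static_cast<uint8_t>(16 + i - sh);
  return r;
}
```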
Add the proper unconditional denormal input flushing behavior to vfmadd; add it also to vfmsub (assuming it has the same behavior)
Remove constant propagation for DOT_PRODUCT_3/4
DOT_PRODUCT_3/4 now returns a vector with all four elements set to the result. (What we were doing before, truncating to float32 and then splatting, didn't make any sense)
Add much more correct versions of DOT_PRODUCT_3/4, matching the Xbox 360's to 1 bit. Still needs work to be a perfect emulation.
Add constant folding for OPCODE_SELECT, OPCODE_INSERT, OPCODE_PERMUTE, OPCODE_SWIZZLE
Remove constant folding for DOT_PRODUCT
Removed the multibyte nop code I committed earlier; it doesn't help us much because nops are only used for debug stuff, and it's ugly and wouldn't survive in a PR to main
Check for AVX512VBMI, and use vpermb to shuffle if supported
Put all descriptors used by translated shaders in up to 4 descriptor sets, 4 being the minimum required (and the most common on Android) value of the `maxBoundDescriptorSets` device limit
Fixes loading of the 1x1 linear 8_8_8_8 texture containing just a single #FFFFFFFF texel in 4D5307E6, which is used for screen fade and the lobby map loading bar background
Added recognition of impossible comparisons via NZM so they can be optimized away
Recognize (x + -y) and transform to (x - y) for constants
Recognize (~x) + 1 and transform to -x
Check and transform comparisons if they're semantically equal to others
Detect comparisons of single-bit values with their only possible non-zero value and transform to true/false tests
Transform ==0 to IS_FALSE, !=0 to IS_TRUE
Truncate to int8 if the operand of IS_TRUE/IS_FALSE has an NZM of 1
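A quick standalone check of the (x + -y) and (~x) + 1 identities from the rules above (plain two's-complement C++, not emulator code):

```cpp
#include <cassert>
#include <cstdint>

int main() {
  for (int32_t x : {0, 1, -5, 123456}) {
    for (int32_t y : {0, 7, -9, 42}) {
      assert(x + (-y) == x - y);  // (x + -y)  ->  (x - y)
      assert(~x + 1 == -x);       // (~x) + 1  ->  -x
    }
  }
  return 0;
}
```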
Reduced code generated for SubDidCarry slightly
Add special case for InstrEmit_srawix if mask == 1
Cut down the code generated for trap instructions; instead of naively OR'ing compare results, do a switch and select the best condition
Rerun the simplification pass until no changes are made, as some optimizations enable others
Enable rel32 call optimization by default
Make it have no effect on the texture resource as a resource may be used with samplers with different overrides. Also make sure magnification vs. minification is not undefined with it on Direct3D 12.
According to the integral promotion rules (https://eel.is/c++draft/conv.prom#5.sentence-1), bit-fields can be promoted to `int` if it is wide enough to store their value, and otherwise to `unsigned int`. Hopefully fixes Clang building (the `width_div_8` case).
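A tiny illustration of the rule (the struct is made up for the example; only the field name is borrowed from the note above): a bit-field whose values all fit in `int` promotes to `int`, not `unsigned int`.

```cpp
#include <type_traits>

struct PackedExample {
  unsigned width_div_8 : 3;  // every value 0..7 fits in int, so it promotes to int
};

static_assert(std::is_same_v<decltype(+PackedExample{}.width_div_8), int>,
              "bit-fields that fit in int promote to int");
```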
Add a special case to TYPE_INT64's EmitAnd for a UINT_MAX mask: emit a 32-bit to 32-bit mov if detected, to take advantage of implicit zero extension and register renaming
Add helper function for skipping assignment defs in instr.
Add helper function for checking whether an opcode is a binary value-type opcode
Add several new optimizations to the simplification pass, plus weak NZM calculation code (better full evaluation of Z/NZ will be done later).
List of optimizations (a small sketch of the NZM reasoning follows the list):
If a value is ANDed with a bitmask that it was already masked against, reuse the old value (this cuts out most FPSCR update garbage, although it does cause a local variable to be allocated for the masked FPSCR, and it still repeatedly stores the masked value to the context)
If masking a value that was OR'ed against another, check whether our mask only considers bits from one value or the other; if so, change the operand to the OR input that actually matters
If the only usage of a rotate left's output is an AND against a mask that discards the bits that were rotated in, change the opcode to SHIFT_LEFT
If masking against all ones, become an assign.
If XOR or OR against 0, become an assign (additional FPSCR codegen cleanup)
If XOR against all ones, become a NOT
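A minimal sketch of the weak NZM (mask of possibly nonzero bits) reasoning behind several of these rules, kept separate from the HIR plumbing (helper names are illustrative, not the pass's actual functions):

```cpp
#include <cstdint>

// The NZM of a value is a mask of the bits that could possibly be nonzero.
// AND intersects the operands' NZMs, OR unions them, and a left shift moves
// the possibly-set bits up while clearing the vacated low bits.
uint64_t NzmAnd(uint64_t a, uint64_t b) { return a & b; }
uint64_t NzmOr(uint64_t a, uint64_t b) { return a | b; }
uint64_t NzmShl(uint64_t a, unsigned count) { return a << count; }

// An equality test against a constant is impossible when the constant has bits
// set that the other operand can never produce; such comparisons fold to false.
bool EqualityIsImpossible(uint64_t value_nzm, uint64_t constant) {
  return (constant & ~value_nzm) != 0;
}

// "Already masked" reuse: re-ANDing with a mask that covers the whole NZM is a
// no-op, so the instruction can become an assign of the existing value.
bool AndIsRedundant(uint64_t value_nzm, uint64_t mask) {
  return (value_nzm & ~mask) == 0;
}
```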
Adding a direct CPUID check to x64_emitter for lzcnt; the version of xbyak we are using skips checking for lzcnt on all non-Intel CPUs, meaning we were generating the much slower bitscan path for AMD CPUs.
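A minimal sketch of such a direct probe, using the MSVC intrinsic (`__get_cpuid` would be the GCC/Clang equivalent; how it is wired into x64_emitter is not shown): lzcnt is reported by CPUID leaf 0x80000001, ECX bit 5 (the ABM/LZCNT bit), regardless of vendor.

```cpp
#include <intrin.h>

bool CpuHasLzcnt() {
  int regs[4] = {0};             // EAX, EBX, ECX, EDX
  __cpuid(regs, 0x80000001);
  return (regs[2] & (1 << 5)) != 0;  // ECX bit 5: LZCNT (ABM on AMD)
}
```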
Float24-as-float32 depth bias is now in increments of 8, because conversion of the depth to float24 directly in the pixel shaders may otherwise qualitatively destroy the bias if it's too small.
Resolve guest function calls in the emitted asm as rel32 calls. Disabled by default, enabled via resolve_rel32_guest_calls
Detect whether the CPU has fast jrcxz and fast loop/loope/loopne
Much more thorough LoadConstantXmm
New cvar elide_e0_check that allows the backend to assume accesses via the SP or TLS register will not cross into the 0xE0 range
Add x64 codegen for Vector shift uint8
If the CPU has fast jrcxz, use it for some traptrue/breaktrue instructions
Use phat nops
Add cvar use_fast_dot_product, which uses a four-instruction sequence for both dot product instructions that ought to be equivalent. Disabled by default.
On Vulkan, when snorm16 is unsupported, these formats may be emulated as float16, which natively can represent a wide range of numbers including -32 to 32 with blending. However, R16G16_SNORM and R16G16B16A16_SNORM are two separate formats, which may have different support on the device.
Ordering the descriptor sets by the change frequency on Vulkan, in increasing order (the opposite of D3D12 root signatures). The EDRAM binding never changes there (always one storage buffer), while the destination buffer binding may become changeable in the future (to split dispatches if exceeding `maxStorageBufferRange`, for example).
Replace a movzx after setae in both ComputeMemoryAddressOffset and ComputeMemoryAddress with a xor_ of eax prior to the cmp. This reduces the length in bytes of both sequences by 1, and should be a moderate ICache usage reduction thanks to the frequency of these sequences.
While the alpha of the texture data is not used at all (replaced with blue using the view swizzle), still make the shader code state the intention more explicitly if the format is decompressed for use as signed. Unsigned 1.0 is 0xFF, while signed 1.0 is 0x7F.
The original multiplication was likely added early during the development of generic resolution scaling. Before generic resolution scaling, invocations were done for unscaled guest blocks, now they're done for scaled blocks, so with 3x1 scaling, an invocation for 8 blocks writes 8 host blocks, not 24.
Certain games, such as Forza Motorsport 3, submit XMA data with the stereo flag set but with a null second channel. This now falls back to mono conversion when the second channel is null, preventing a crash.
- Changed name of config option to apply_title_update to better reflect what that option does
- Mount TU package to UPDATE: partition
- Simplified UserModule::title_id()
- Split module loading into two parts to allow applying TUs and custom patches
6fcf9d21fe made per-vertex diameter vs. constant radius consistent, and with that commit the shader works with direct pixel to NDC conversion; however, the NDC conversion factor was outdated in that commit (it still included the 0.5 factor for diameter to radius conversion, resulting in all points being 50% narrower along each axis than needed). Now the diameter to radius conversion factor is used there properly, and the multiplication of the per-vertex diameter by 0.5 has also been removed from the shader since the constant already includes it (the constant diameter is passed via the system constants instead of the radius as well).
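The relationship in question, spelled out in isolation (names are illustrative; this is just the arithmetic, not the shader or constant-upload code):

```cpp
// NDC half-extent of a point = diameter_px * 0.5 (diameter -> radius)
//                                          * 2.0 / viewport_px (pixels -> NDC)
//                            = diameter_px / viewport_px
float PointRadiusNdcX(float point_diameter_px, float viewport_width_px) {
  const float ndc_scale_x = 0.5f * 2.0f / viewport_width_px;  // 0.5 folded in once
  return point_diameter_px * ndc_scale_x;  // the shader no longer multiplies by 0.5 again
}
```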
- WinSystemClock is a FILETIME clock without scaling; can convert to system_time
- XSystemClock is a FILETIME clock with scaling applied; can only convert to WinSystemClock